Abstract

Machine learning (ML) techniques have played a revolutionary role in improving cybersecurity by providing advanced tools that not only detect but also prevent and mitigate cyber threats. This research paper aims to explore the intersection of ML and cybersecurity, with a special emphasis on various methods and how each of the methods can be used to improve cybersecurity. The paper reviews various ML algorithms including Graph Neural Networks, Adversarial Learning, Federated Learning, Explainable AI and Reinforcement Learning. Each algorithm plays a crucial role in helping to improve the detection and mitigation of cyber attacks. Graph Neural Networks help to model complex relationships in cybersecurity data. It not only helps to make future predictions, but also helps in anomaly detection and network traffic analysis. Adversarial Learning helps to train the ML models to tackle the challenge of generating deceptive input data that can mislead any model, thus improving the efficiency of such models. Federated Learning is explored as a way to train ML models across distributed networks while keeping data private and improving model accuracy. Explainable AI methods mainly provide transparency and interpretability in ML-driven cybersecurity decisions, which is essential for understanding and trust in automated security systems. Reinforcement Learning is centred around a trial-and-error-based approach, where the model can learn new tasks based on a punishment-reward system. These advanced algorithms collectively enhance the efficacy, accuracy, and transparency of cybersecurity measures, providing robust defences against evolving cyber threats.

1. Introduction

1.1 Cyberattacks

A cyberattack is a malicious attempt to gain unauthorised access to a computer, computing system, or computer network with the intent to cause damage (Pratt, 2022). Wolf (2022) states that cyberattacks usually occur for one of three reasons: revenge against a perceived wrongdoer, financial gain, or state sponsorship, where government organisations hire professionals to infiltrate a neighbouring country’s databases. Cyberattacks are a widespread issue that plagues the modern world, causing billion-dollar losses for companies and organisations and leaking huge amounts of sensitive information across the globe. The profitability of cybercrime has spawned a 1.5-trillion-dollar industry with an entire ecosystem of criminal organisations operating like legitimate businesses, intensifying the urgency for cybersecurity to act as a buffer against their advance (ArcticWolf, 2024). As the age of artificial intelligence (AI) dawns, more advanced frameworks and networks must be created to ensure cybersecurity and cyber safety for the population.

Cyberattacks generally occur when hackers find a weakness they can exploit, for example by employing a phishing attack. A phishing attack involves emailing large numbers of people a socially engineered message which, when clicked on, installs malicious code on the user’s computer. This code grants the hacker access to the computer, allowing them to steal bank passwords or important files. Such incidents are unfortunately common worldwide, and large amounts of information are stolen every year. Hoffman (2010) states that in 2008, the US military faced a large-scale cyberattack by foreign agents who successfully breached several American computer systems, stealing classified and unclassified information. During the attack, a foreign agent plugged a small infected flash drive into a laptop used by the US military at a base in the Middle East, installing code created to steal information from networked US computers. The attack was described as the most significant breach of US military computers by William J. Lynn III, Deputy Secretary of Defense (Hoffman, 2010).

With cybercrimes occurring more frequently and expanding into a profitable underworld industry, solutions must be found to combat the rising threats. Among the innovations emerging from AI, ML could become the key technology for preventing such cyberattacks and ensuring security for businesses and communities. The advancing field of ML could provide security systems that avert malicious activity online, sustaining a future free of cyberattacks.

1.2 Cybersecurity

Bharadiya (2023) states that in the last 50 years, the Information and Communication Technology (ICT) industry has advanced by leaps and bounds and has become an integral part of modern society. Hence, it has become necessary to protect ICT systems from cyberattacks by malicious people (Bharadiya, 2023). This role of protection falls to cybersecurity. Ahsan et al. (2022) define cybersecurity as the technologies and techniques that help safeguard systems, programs, networks, etc. from being corrupted, accessed or deleted by malicious people or unauthorised organisations. Cybersecurity covers a wide range of industries, from mobile to corporate computing, and can be separated into various areas. Some common areas are network security, application security, information security and operations security (Ahsan et al., 2022). However, all such areas broadly involve detecting, mitigating and tracking any cyberattacks on the system.

Bharadiya (2023) describes cybersecurity as the understanding of cyberattacks and the development of defensive strategies to protect a network. Some traditional defence strategies used in cybersecurity include firewalls, antivirus software and intrusion detection systems in network and computer security systems (Bharadiya, 2023). However, the ever-evolving landscape of cyberattacks requires researchers to keep innovating and developing better cybersecurity systems. One such innovative solution is the use of ML in cybersecurity.

1.3 Machine Learning

ML is a progressive field of computational methods designed to emulate human intelligence by learning from the surrounding environment (Naqa and Murphy, 2015). The techniques based on ML have been successfully applied across various sectors, including pattern recognition, computer vision, spacecraft engineering, finance, entertainment, computational science, as well as biomedical and medical applications (Naqa and Murphy, 2015).

ML algorithms create mathematical models that can make predictions or decisions based on sample data, known as training data. Perlman (n.d.) states that some key features of ML include the ability to automatically learn and improve from experience, the use of algorithms to build predictive models, and the capacity to process large amounts of data to uncover patterns and insights (Perlman, n.d.).

One of the critical applications of ML is the detection of cyberattacks. ML algorithms can be trained on historical data of known cyber threats to find patterns and anomalies that may indicate new or emerging attacks. Perlman (n.d.) asserts that ML can significantly improve cybersecurity by making it more straightforward, proactive, cost-efficient, and effective, but this is contingent on having comprehensive and accurate data, as inadequate data results in ineffective outcomes. For example, ML models can analyse network traffic and user behaviour to find suspicious activity that could be a cyberattack in progress. By learning and adapting continuously, these ML-powered security systems can stay ahead of the evolving threats (Perlman, n.d.).
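The anomaly-detection idea described above can be illustrated with a deliberately minimal sketch (not taken from the cited sources): learn a statistical baseline from historical traffic, then flag new samples that deviate sharply from it. The feature names and values below are hypothetical.

```python
import numpy as np

def fit_baseline(history):
    """Learn per-feature mean and standard deviation from historical
    (assumed-benign) traffic samples."""
    history = np.asarray(history, dtype=float)
    return history.mean(axis=0), history.std(axis=0) + 1e-9

def anomaly_score(sample, mean, std):
    """Largest absolute z-score across features: how far the sample
    deviates from the learned baseline."""
    z = (np.asarray(sample, dtype=float) - mean) / std
    return float(np.max(np.abs(z)))

# Hypothetical historical traffic: [packets/sec, avg packet size, distinct ports]
history = [[100, 500, 3], [110, 480, 4], [95, 520, 3], [105, 510, 4]]
mean, std = fit_baseline(history)

normal  = anomaly_score([102, 505, 3], mean, std)   # looks like the baseline
suspect = anomaly_score([900, 60, 40], mean, std)   # port-scan-like burst
assert suspect > normal
```

Real ML-based detectors use far richer models, but the principle is the same: a model of "normal" is learned from data, and large deviations are surfaced for investigation.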

In summary, ML is a powerful AI technique that enables computers to learn and improve automatically, making it invaluable for tasks like cybersecurity where the threat landscape is constantly shifting. The ability of ML to process large datasets, identify patterns, and adapt to new information makes it a crucial tool for organisations looking to protect their systems and data.

2. MACHINE LEARNING ALGORITHMS

In today’s fast-changing world of cybersecurity, using advanced ML algorithms is essential to protect digital systems. This section looks at the important roles of different ML techniques that help make cybersecurity stronger. By studying these algorithms, the section shows how they improve the security and strength of digital systems.

2.1 Graph Neural Networks

Graph Neural Networks (GNNs) are a class of ML models that analyse data presented in the form of a graph (Sanchez-Lengeling et al, 2021). A graph is a data structure containing nodes and edges: the nodes, which are the vertices of the graph, represent input data points, and the edges are the lines between nodes which represent the connections between data points (see Figure 1). Using deep learning techniques to analyse nodes and edges, GNNs interpret graph-structured data and then make problem-solving predictions from it. However, GNNs not only generate future predictions, but also find anomalies in the data, which helps to detect suspicious online activity and other vital outliers.

Diagram showing the GNN structure

Figure 1: GNN Structure (Sanchez-Lengeling et al, 2021)

GNNs work by considering both the features of individual nodes and the structure of their connections (CyberPoint Blog, 2023). Initially, each node has its own features. Nodes then exchange and aggregate information with their neighbours through message passing. This information is used to update the node features. After several iterations, the GNN produces an output, which can be a prediction for individual nodes or the entire graph, depending on the task.
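As a rough illustration of the message-passing procedure just described (a simplified sketch, not a production GNN architecture), each node below averages its neighbours' features and applies a shared learned transformation; the graph, features and weights are invented for the example:

```python
import numpy as np

def message_pass(node_feats, adjacency, weight):
    """One round of mean-aggregation message passing: each node averages
    its neighbours' features, then a shared linear map plus a ReLU
    non-linearity updates its own representation."""
    deg = adjacency.sum(axis=1, keepdims=True)
    neighbour_mean = (adjacency @ node_feats) / np.maximum(deg, 1)
    return np.maximum(0.0, (node_feats + neighbour_mean) @ weight)  # ReLU

# Tiny graph: 3 nodes, edges 0-1 and 1-2
adjacency = np.array([[0, 1, 0],
                      [1, 0, 1],
                      [0, 1, 0]], dtype=float)
node_feats = np.array([[1.0, 0.0],
                       [0.0, 1.0],
                       [1.0, 1.0]])
rng = np.random.default_rng(0)
weight = rng.normal(size=(2, 2))   # stands in for learned parameters

h = node_feats
for _ in range(2):                 # two message-passing iterations
    h = message_pass(h, adjacency, weight)
print(h.shape)                     # one updated feature vector per node
```

After several such iterations, the per-node vectors can feed a classifier for node-level predictions (e.g. "is this host behaving anomalously?") or be pooled for a graph-level prediction.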

2.1.1 The Applications of GNNs

Some of the applications of GNNs can be listed as follows (CyberPoint Blog, 2023):

  1. Network Detection: GNNs play a crucial role in detecting anomalies or intrusions in network traffic data. GNNs can analyse the similarities and differences between traffic flows, identifying suspicious behaviour, such as data being sent to an external server, which may represent a data breach. 
  2. Vulnerability Detection: Cybersecurity also involves analysing software for possible vulnerabilities that hackers can exploit. GNNs can analyse software graphs, identifying complex vulnerabilities that are generally missed by traditional linear analysis methods and thereby improving the security of the software. 
  3. User behaviour analysis: GNNs can also analyse user behaviour within a specific system to identify dangers in the network. By learning the usual communication patterns of a particular user, GNNs can identify irregularities which may indicate that the user's sensitive information has been compromised by an outsider. 

These applications draw on graph theory, which helps contextualise systems and networks as graphs so that the model can capture the relationships inherent in the graph structure. Because so many systems can be represented this way, graph-based approaches can be a game-changer, ensuring cybersecurity and combatting cybercrime by detecting anomalies and attacks before they cause damage. Therefore, GNNs have several uses in cybersecurity. 

The advantages of graph theory exploited by GNNs, and their widespread application in anomaly detection, indicate the significance of this ML algorithm in cybersecurity. Their ability to detect network intrusions and complex vulnerabilities that traditional linear methods cannot demonstrates the robustness of GNNs in fighting cybercrime and maintaining cybersecurity. Moreover, GNNs are only one of many ML algorithms, which supports the broader argument that ML is essential and highly capable in the fight against cybercrime.

2.2 Adversarial Learning

Thomas et al. (2019) assert that adversarial machine-learning research tackles the challenge of deceptive input data that can mislead a machine-learning model. For example, features of legitimate software can be incorporated into a malicious executable to trick the classifier into recognising it as safe. As the term “adversary” suggests, it refers to an opponent or enemy (Thomas et al., 2019, p.185). Inputs that cause a ML model to misclassify them are known as adversarial samples (Thomas et al., 2019). Adversarial learning is a subset of ML in which models are trained to be resilient against adversarial attacks: inputs deliberately designed to deceive ML models into making incorrect predictions or classifications.
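A minimal sketch of how such an adversarial sample can be crafted, assuming a toy linear "malware classifier" with made-up weights (not any method from the cited sources): each input feature is nudged in the direction that increases the model's loss, a fast-gradient-sign-style perturbation, flipping the classification while changing the input only slightly.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps):
    """Fast-gradient-sign-style perturbation: move each feature in the
    direction that increases the classifier's cross-entropy loss,
    bounded in magnitude by eps."""
    p = sigmoid(np.dot(w, x) + b)
    grad = (p - y_true) * w          # d(loss)/dx for a logistic model
    return x + eps * np.sign(grad)

# Hypothetical linear malware classifier: weights chosen for illustration
w = np.array([2.0, -1.0, 0.5])
b = -0.5
x = np.array([1.0, 0.2, 0.4])        # a sample classified as malicious
assert sigmoid(np.dot(w, x) + b) > 0.5

x_adv = fgsm_perturb(x, w, b, y_true=1.0, eps=0.6)
# The slightly perturbed sample now scores below the 0.5 threshold
print(sigmoid(np.dot(w, x_adv) + b) < 0.5)  # → True
```

Adversarial training, in turn, mixes such perturbed samples (with their correct labels) back into the training set so the model learns to resist them.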

2.2.1 The Applications of Adversarial Learning

Adversarial Learning has many applications in cybersecurity:

  1. Malware Detection: Kolosnjaji et al. (2018) state that traditional malware detection techniques often rely on signature-based methods, which can be bypassed by slight modifications in malware code. Adversarial learning enhances malware detection systems by training them on adversarial examples—malicious codes that have been intentionally altered to avoid detection. This process not only helps the system recognise known malware, but also variants that are designed to evade detection (Kolosnjaji et al., 2018).
  2. Biometric Security: Biggio (2018) states that biometric systems, such as facial recognition and fingerprint scanning, are susceptible to adversarial attacks where inputs are slightly altered to fool the system. Adversarial learning can enhance these systems by training them on adversarial examples that mimic such attacks. This makes biometric systems more resilient to spoofing attempts, ensuring more secure authentication processes (Biggio, 2018).

Due to these various applications in Cybersecurity, Adversarial Learning is used in various well-known apps around the world:

  1. Google’s Safe Browsing: This involves training ML models with data that includes simulated attacks, making the models more adept at recognising and mitigating malicious activities. Gerbet et al. (n.d.) state that by exposing the models to adversarial examples, Google’s Safe Browsing can anticipate and defend against sophisticated malware, phishing attempts, and other forms of online deception. This proactive approach enhances the robustness and accuracy of threat detection, ensuring that users are better protected from evolving cyber threats, and ultimately leading to a safer online experience for millions of users worldwide (Gerbet et al., n.d.).
  2. Intrusion Detection Systems like Snort and Cisco NGIPS: Grosse et al. (2017) demonstrated that when adversarial examples were incorporated into the training process, the IDS models showed a marked improvement in detecting evasion attacks. The enhanced models were able to identify and mitigate sophisticated attack strategies that traditional detection methods failed to catch. The detection rate of evasion attacks improved by over 25% compared to baseline models (Grosse et al., 2017).

In summary, adversarial learning in cybersecurity enhances defences against deceptive inputs, improving malware detection and biometric security. It fortifies systems like Google’s Safe Browsing against sophisticated threats and boosts intrusion detection by identifying evasion tactics more effectively.

2.3 Federated Learning

According to Li et al. (2020), federated learning is an approach that allows ML models to be trained across many devices without the raw data ever leaving those devices. Each device trains the model locally and sends only the resulting model updates to a central server, which aggregates them into a shared global model. This approach helps maintain the privacy and security of users' personal information. Three key concepts underpin federated learning: data privacy, security, and scalability (Li et al., 2020).
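The train-locally-then-aggregate loop described above can be sketched as follows. This is a minimal illustration of federated averaging with synthetic data and a linear model, not any vendor's implementation; only model weights, never the clients' data, reach the server.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's training pass (linear regression via gradient descent).
    Only the updated weights leave the device — never X or y."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Server step: weighted average of the clients' models (FedAvg)."""
    sizes = np.asarray(client_sizes, dtype=float)
    return np.average(client_weights, axis=0, weights=sizes)

rng = np.random.default_rng(1)
true_w = np.array([3.0, -2.0])
clients = []
for _ in range(3):                       # three clients with private data
    X = rng.normal(size=(20, 2))
    y = X @ true_w + 0.01 * rng.normal(size=20)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(20):                      # communication rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = federated_average(updates, [len(y) for _, y in clients])

print(np.allclose(global_w, true_w, atol=0.1))  # → True
```

Despite never pooling the three private datasets, the averaged model converges to the same weights a centrally trained model would find.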

2.3.1 The Applications of Federated Learning

  1. Intrusion Detection Systems (IDS): Federated learning trains IDS models using only local data, removing the need to share users' sensitive information. This keeps the risk of an information breach very low, since raw data never leaves the local system (Kairouz et al., 2021). 
  2. Fraud Detection: The essential application of federated learning for fraud detection is in financial institutions. Each institution protects its transaction information while contributing updates to the model, and the system can be integrated across different financial institutions to enhance the detection of fraudulent activities across the globe (Yang et al., 2019). 
  3. Spam and Phishing Detection: Combatting malicious emails is considered a priority for every company. Such emails threaten the system and carry spam and malware files that could breach users' information held in the institution's local systems. A federated spam filter can also be integrated with the systems of other institutions globally. Detecting these emails helps to avoid data breaches that expose the personal data of employees and the financial statements of the company (Makkar et al., 2021). 

In addition, the following examples discuss real-life applications of the federated system:

  1. Apple’s Siri and QuickType: According to Yang et al. (2019), Apple uses federated learning to enhance and improve the effectiveness of QuickType and Siri. The data used by these features does not leave the user’s device.
  2. Google Gboard: Based on a study by Kairouz et al. (2021), Google uses federated learning to improve the typing suggestions offered by its Gboard keyboard, while the user's typing history remains on the user's device. 
  3. Healthcare Industry: Also, Kairouz et al. (2021) confirm that hospitals and medical centres hold huge databases of patient information, which must be protected because it can contain personal data, medical histories, and other sensitive information. Federated learning allows such institutions to train shared models without pooling these sensitive records. 

In conclusion, federated learning is an essential tool integrated with ML to avoid data breaches and protect users' personal data. It is built on three key factors: data privacy, security, and scalability. Real-life applications of federated learning include Apple's QuickType and Google's Gboard.

2.4 Explainable AI

According to Gillis (n.d.), Explainable AI (XAI) is an artificial intelligence (AI) model that has been programmed to clarify its purpose, rationale, and decision-making process in terms that the average person can comprehend. XAI assists human users in understanding the thinking behind AI and ML systems, thus increasing their trust (Gillis, n.d.).

IBM (n.d.) describes explainability as the concept that a ML model and its output can be explained in a way that makes sense to a human being at an acceptable level. Certain classes of algorithms, including more traditional ML algorithms, tend to be more readily explainable while being potentially less performant. Others, such as deep learning systems, while being more performant, remain much harder to explain. Improving our ability to explain AI systems remains an area of active research (IBM, n.d.).

Explainable AI allows organisations to understand and adjust AI decisions, enhancing user confidence and improving model performance. It involves techniques focused on prediction accuracy, traceability, and decision understanding, helping stakeholders grasp AI behaviours and ensure accurate, fair, and high-quality model outputs (IBM, n.d.).
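One simple, model-agnostic technique consistent with this goal is permutation feature importance, sketched below on a hypothetical detector (the model and data are invented for illustration): shuffling a feature the model relies on degrades its accuracy, revealing which inputs drive its decisions.

```python
import numpy as np

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Model-agnostic explanation: shuffle one feature at a time and
    measure how much prediction accuracy drops. A bigger drop means the
    model relied more on that feature."""
    rng = np.random.default_rng(seed)
    base = np.mean(predict(X) == y)
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])            # break feature j's relationship
            drops.append(base - np.mean(predict(Xp) == y))
        importances.append(float(np.mean(drops)))
    return importances

# Hypothetical detector that only looks at feature 0
predict = lambda X: (X[:, 0] > 0).astype(int)
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(int)

imp = permutation_importance(predict, X, y)
print(imp[0] > imp[1])  # → True: feature 0 drives the decisions
```

An analyst can use such scores to check that, say, a fraud model is relying on transaction behaviour rather than a protected or spurious attribute.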

2.4.1 The Applications of Explainable AI

  1. Cars: Dheenadayalan and Kulkarni (n.d.) state that autonomous driving is the future of the automotive industry and has been a developing theme. Self-driving or driverless cars are fascinating – provided they don’t make any mistakes. In this high-stakes AI application, one mistake could result in the loss of one or more lives. Explainability is essential for understanding the system’s potential and constraints before deployment. It is critical to understand the flaws of driving assistance as it applies to customers, in order to evaluate, clarify, and prioritise the necessary fixes. Voice assistants and assisted parking are largely appealing features that rely on the model’s relatively low-risk judgements (Dheenadayalan and Kulkarni, n.d.).
  2. Healthcare: Tonekaboni et al. (2019) state that Explainable AI models are used to predict patient outcomes in intensive care units (ICUs). The models provide explanations for their predictions, such as highlighting which clinical measurements (like heart rate or blood pressure) are most influential (Tonekaboni et al., 2019).

In summary, XAI clarifies AI decisions, enhancing user trust and comprehension. Techniques focused on prediction accuracy, traceability, and decision understanding help organisations improve model performance and transparency. Real-life examples include autonomous vehicles, where XAI helps ensure safety, and healthcare, where it aids in predicting patient outcomes.

2.5 Reinforcement Learning

Another subset of ML is reinforcement learning (RL). Gottsegen (n.d.) defines reinforcement learning as a ML model where, through a trial-and-error approach, the model learns new tasks by being punished for incorrect actions and rewarded for correct ones. As depicted in Figure 2, the training model, or agent, is placed in an environment where it reacts to the situation. Through this reward system and repeated training in new environments, the agent learns which actions to take in which situations (Cengiz and Gök, 2023).

Diagram showing the training process of the RL model

Figure 2: Training the RL model (Cengiz and Gök, 2023, p.3)

A reinforcement learning model is based on some key components (Kabanda et al., 2023):

(i) Policy: A policy is a mapping from states of the environment to the actions to be taken in those states. An RL agent chooses its action for the current state based on this policy.

(ii) Reward Signal: The reward signal defines the objective for the agent. It gives a numerical value, known as the reward, to the RL agent for every action it takes, and the policy is adjusted in response to it.

(iii) Value Function: While the reward signal shows what is beneficial in the short term, the value function shows which actions are better over the long term. The value of a state is the total reward an agent can expect to accumulate starting from that state. It encourages the agent to take the actions which result in the highest total reward rather than those which give the highest immediate reward. These values are updated every time an agent takes an action.

RL algorithms differ from supervised ML algorithms: RL models try to maximise the total reward over a sequence of observations, while supervised algorithms try to match predicted outputs to true values by minimising a loss function or maximising a likelihood function (Sewak et al., 2022).
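To make the policy, reward signal and value function concrete, the sketch below applies tabular Q-learning (one standard RL algorithm, used here purely as an illustration) to a toy four-state environment with a small per-step punishment and a goal reward; the environment and hyperparameters are invented for the example.

```python
import numpy as np

# Tiny corridor environment: states 0..3, reach state 3 for reward +1.
# Actions: 0 = left, 1 = right. Each step costs -0.01 (the "punishment").
N_STATES, GOAL = 4, 3

def step(state, action):
    nxt = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == GOAL else -0.01
    return nxt, reward, nxt == GOAL

q = np.zeros((N_STATES, 2))            # value estimates per (state, action)
rng = np.random.default_rng(0)
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for _ in range(200):                   # trial-and-error episodes
    s, done = 0, False
    while not done:
        # Policy: mostly greedy on current values, sometimes explore
        a = rng.integers(2) if rng.random() < epsilon else int(np.argmax(q[s]))
        nxt, r, done = step(s, a)
        # Reward signal updates the value estimate (and thus the policy)
        q[s, a] += alpha * (r + gamma * np.max(q[nxt]) * (not done) - q[s, a])
        s = nxt

policy = np.argmax(q, axis=1)
print(policy[:3])  # learned policy: move right toward the goal → [1 1 1]
```

The punishment makes wandering costly, so the learned value function steers the agent along the shortest path to the reward, exactly the trial-and-error dynamic described above.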

2.5.1 The Applications of Reinforcement Learning

RL algorithms are gaining popularity in various industries, from gaming to industrial processes and cyber-physical systems. In recent years, reinforcement learning models have been increasingly used in cybersecurity too (Sewak et al., 2022). RL is used in cybersecurity for tasks such as penetration testing to identify system vulnerabilities, autonomous intrusion detection systems, and cyber-physical systems (Cengiz and Gök, 2023; SailPoint, 2023).

Some real-life applications of reinforcement learning in cybersecurity are:

  1. PayPal: A global leader in applying ML, PayPal has used its vast transaction data from 350 million consumers in 200 markets to train a reinforcement learning-based model to combat payment fraud and identify bad buying habits (Fetisov and Talochka, 2022).
  2. Visa: This company uses data from its global processing network to analyse and predict customer behaviour. An RL algorithm is used to create these predictive behavioural models, which help to detect fraud (Fetisov and Talochka, 2022).

Hence, reinforcement learning and other ML models like graph neural networks, adversarial and federated learning and explainable AI have many applications in cybersecurity, and are playing an increasingly important role.

3. THE FUTURE OF MACHINE LEARNING IN CYBERSECURITY

Based on a study conducted by Mansoori and Salem (2023), the deep integration of AI and ML in the cybersecurity field enhances the ability to detect malware threats, analyse their behaviour, and respond to threats in time. In addition, AI and ML are improving the resilience of cybersecurity frameworks, for example by improving the software architecture process and extending data protection capabilities to enhance response behaviour. Furthermore, a study by Nasser (2021) confirms that the integration of both systems will add value to the security framework by accurately addressing the dynamic nature of cyber threats.

Some of the challenges of ML are as follows (Barbierato and Gatti, 2024):

• Data Quality and Model Interpretability: The challenge here is to provide high-quality data to train the models while keeping the models interpretable and fast to respond.
• Ethical Concerns: There are several ethical concerns surrounding ML models, such as data privacy, consent, and algorithmic fairness.
• Adversarial Attacks: Both AI and ML systems can be manipulated by malicious actors. This undermines the effectiveness of the software and the accuracy of its output.
• Integration with Existing Systems: The current cybersecurity framework needs to be adjusted to the new integration of AI and ML, to avoid creating unwanted vulnerabilities in the current system.
• Scalability: Due to the huge amount of data that must be analysed on the server, there are several concerns regarding accurate handling of the data. These concerns negatively impact the cybersecurity detection process.

These challenges need to be overcome in the future. According to Wang (2022), the future of ML in cybersecurity involves creating adaptable defence systems that can predict and identify new threats in real time, reducing human error and enhancing 24/7 protection. These systems might rely on AI to analyse large data sets and detect anomalies much faster than traditional methods. The increasing shortage of skilled cybersecurity professionals underscores the importance of AI and ML in filling this gap, enabling more efficient threat management and predictive modelling (Wang, 2022).

ML might play a critical role in enhancing cybersecurity by enabling more sophisticated threat detection, automating responses to cyberattacks, and reducing reliance on human intervention in the future. AI-driven systems might analyse vast amounts of data to identify patterns and anomalies, making cybersecurity measures more proactive and efficient (Roytman, 2024). As technology advances, these systems will become essential for managing the growing complexity and scale of cyber threats.

4. CONCLUSION

Bharadiya (2023) asserts that the cyber world is growing fast and is playing an important role in daily life. It has become the centre of information in the modern world. This information needs to be protected from cyberattacks through cybersecurity. As the attack strategies used to invade a network and steal or corrupt data rapidly diversify, traditional cybersecurity technologies like firewalls are becoming obsolete. Hence, ML algorithms are being widely used instead to tackle cybersecurity issues due to their ability to adapt (Bharadiya, 2023).

ML is a progressive field of computational methods designed to emulate human intelligence by learning from the surrounding environment (Naqa and Murphy, 2015). ML is at the core of online safety and cybersecurity for the future. Its robust algorithms efficiently and consistently detect threats to which older security systems are blind. The primary ML algorithms discussed here are Graph Neural Networks (GNNs), reinforcement learning, adversarial learning, federated learning and Explainable AI. All of these key ML subsets play crucial roles in securing multiple applications; for example, federated learning algorithms are used in fraud detection, spam and phishing detection, and intrusion detection systems. The wide functionality and security of these algorithms make ML greatly useful in business, as it prevents personal data from being stolen by ransomware gangs, saving companies and organisations billions of dollars every year (Wolf, 2022).

Despite the wide benefits of ML in cybersecurity, several constraints limit its wide use, such as inadequate training data, where a shortage in the quality and quantity of the training dataset creates inaccuracies in the ML algorithm and makes it difficult to generalise to accurate predictions. Also, the intricate nature of ML requires a well-educated workforce who can handle the complexities of mathematics, science, and technology; the lack of these skills causes a shortage of people who can operate and successfully integrate ML models into operating systems, leading some companies not to use ML algorithms (IABAC, 2024). Hence, for the continued efficacy of ML in cybersecurity and the privacy of data, ML algorithms need to be continuously updated, trained with new data and integrated with new and cutting-edge technologies like blockchain (Bharadiya, 2023).

This research paper contributes to the scientific community by providing an overview of the integration of ML in cybersecurity and by providing recommendations on how ML algorithms can keep up with ever-increasing cyberattacks.

Bibliography

Ahsan, M., Nygard, K.E., Gomes, R., Chowdhury, M.M., Rifat, N. and Connolly, J.F. (2022). Cybersecurity Threats and Their Mitigation Approaches Using Machine Learning—A Review. Journal of Cybersecurity and Privacy, [online] 2(3), pp.527–555. doi: https://doi.org/10.3390/jcp2030027.

ArcticWolf (2024). A Brief History of Cybercrime. [online] Arctic Wolf. Available at: https://arcticwolf.com/resources/blog/decade-of-cybercrime/ [Accessed 18 Jun. 2024].

Barbierato, E. and Gatti, A. (2024). The challenges of machine learning: A critical review. Electronics, 13(2), p.416.

Bharadiya, J. (2023). Machine Learning in Cybersecurity: Techniques and Challenges. European Journal of Technology, [online] 7(2), pp.1–14. doi: https://doi.org/10.47672/ejt.1486.

Cengiz, E. and Gök, M. (2023). Reinforcement learning applications in cyber security: A review. Sakarya University Journal of Science, 27(2), pp.481–503.

CyberPoint Blog (2023). Graphing the Future: Harnessing the Power of Graph Neural Networks for Cybersecurity. [online] www.cyberpointllc.com. Available at: https://www.cyberpointllc.com/blog-posts/Graphing-the-Future-Harnessing-the-Power-of-Graph-Neural-Networks-for-CyberSecurity.php [Accessed 21 Jun. 2024].

El Naqa, I. and Murphy, M.J. (2015). What Is Machine Learning? Machine Learning in Radiation Oncology, [online] 1(1), pp.3–11. doi: https://doi.org/10.1007/978-3-319-18305-3_1.

Fetisov, E. and Talochka, A. (2022). 9 Companies Using Machine Learning: Tesla, Facebook, PayPal, and Others. [online] Available at: https://jaydevs.com/machine-learning-and-its-use-in-cybersecurity/ [Accessed 19 Jun. 2024].

Gerbet, T., Kumar, A. and Lauradoux, C. (n.d.). On the (In)security of Google Safe Browsing. [online] Available at: https://www.inrialpes.fr/planete/people/amkumar/papers/gsb-security.pdf [Accessed 29 Jun. 2024].

Grosse, K., Papernot, N., Manoharan, P., Backes, M. and McDaniel, P. (2017). Adversarial perturbations against deep neural networks for malware classification. Proceedings of the 2017 European Symposium on Research in Computer Security (ESORICS ’17), pp.62–79.

Hoffman, S. (2010). Pentagon Confirms 2008 Cyber Attack Against U.S. Military. [online] CRN. Available at: https://www.crn.com/news/security/227001109/pentagon-confirms-2008-cyber-attack-against-us-military [Accessed 23 Jun. 2024].

Kolosnjaji, B., Demontis, A., Biggio, B., Maiorca, D., Giacinto, G., Eckert, C. and Roli, F. (2018). Adversarial Malware Binaries: Evading Deep Learning for Malware Detection in Executables. [online] IEEE Xplore. doi: https://doi.org/10.23919/EUSIPCO.2018.8553214.

Makkar, A., Ghosh, U., Rawat, D.B. and Abawajy, J.H. (2021). FedLearnSP: Preserving privacy and security using federated learning and edge computing. IEEE Consumer Electronics Magazine, 11(2), pp.21–27.

Perlman, A. (n.d.). The Growing Role of Machine Learning in Cybersecurity. [online] Palo Alto Networks. Available at: https://www.paloaltonetworks.com/cybersecurity-perspectives/the-growing-role-of-machine-learning-in-cybersecurity.

Pratt, M. (2022). What is a cyber attack? Definition, types, and examples. [online] SearchSecurity. Available at: https://www.techtarget.com/searchsecurity/definition/cyber-attack [Accessed 19 Jun. 2024].

Roytman, M. (2024). The Future Of AI And ML In Cybersecurity. [online] Forbes. Available at: https://www.forbes.com/sites/forbestechcouncil/2024/03/05/the-future-of-ai-and-ml-in-cybersecurity/ [Accessed 20 Jun. 2024].

Sanchez-Lengeling, B., Reif, E., Pearce, A. and Wiltschko, A. (2021). A Gentle Introduction to Graph Neural Networks. Distill, [online] 6(8). doi: https://doi.org/10.23915/distill.00033.

SEON (n.d.). Graph Neural Network (GNN). [online] Available at: https://seon.io/resources/dictionary/graph-neural-network-gnn/#:~:text=Short%20for%20graph%20neural%20network [Accessed 21 Jun. 2024].

Wang, M. (2022). The Future of Machine Learning in Cybersecurity. [online] CIO. Available at: https://www.cio.com/article/406441/the-future-of-machine-learning-in-cybersecurity.html [Accessed 20 Jun. 2024].

Wolf, A. (2022). History of Cybercrime. [online] Arctic Wolf. Available at: https://arcticwolf.com/resources/blog/decade-of-cybercrime/#:~:text=Technically%2C%20the%20first%20cyber%20attack [Accessed 19 Jun. 2024].