According to a recent study published by GrandView Research, global investment in AI in cybersecurity is set to reach $93.75 billion by 2030. A 2022 PwC survey found that a significant number of security leaders plan to use it to forecast market predictions (57%), enhance supply chain operations (54%), monitor physical assets (45%), and make long-term strategic decisions (53%).
What accounts for the sudden adoption of AI in cybersecurity? There are several factors at play.
First, AI technology has matured tremendously in recent years. What was once limited to rule-based technology can now accurately detect anomalous user behavior with speed and precision. Second, data is being generated en masse, with over 328.77 million terabytes created each day, which is impossible to sort through using manual methods. More than 80% of security professionals in an IBM survey said that manually investigating threats was significantly slowing their response time. Finally, the cybersecurity industry is facing a global shortage of 3.4 million workers, and needs a solution to respond to the increasing number and sophistication of these attacks. Humans are unable to sufficiently secure an enterprise-level attack surface alone.
These factors work together to make AI an essential element in defending against third-party attacks.
What is Artificial Intelligence (AI) in Cyber Security?
AI in cybersecurity has a wide variety of applications today. As attack surfaces continue to expand and the need to analyze massive amounts of data increases, organizations have recognized the value of AI to quickly and accurately identify and respond to cybersecurity threats. These AI and machine learning models become more effective and accurate as they train on data with the goal of making better decisions over time.
For example, organizations might use AI to generate a knowledge graph that can identify suspicious IP addresses connected to your network, or which users have been infected by a particular malware tool – and what path the malware took to infect those users. Or they might use chatbots, natural language processing systems, that draw on a knowledge base of information to help security teams further research security issues.
Like most newly adopted technologies, however, AI systems are also used by attackers to launch more sophisticated attacks. For example, cybercriminals exploit the ChatGPT model by training it with social media posts to mimic the tone and voice of an author and convince users to give them payment information and other sensitive data. With these types of AI-powered solutions, they can launch these attacks on a far wider scale than ever before.
How Security Teams Are Using AI in Cybersecurity
Security teams are using AI to investigate, identify, report and further research any cybersecurity risks and potential security issues facing organizations today. This is critical as 68% of security teams today are responsible for responding to cybersecurity attacks on multiple fronts, another result of the global cybersecurity staff shortage.
These AI-based cybersecurity systems are used in:
- Threat detection. AI-based systems use machine-learning techniques to analyze network traffic and user behavior to identify emerging threats. For example, if a user attempts to access assets in your computer systems they don’t have access to, the AI-powered systems alert your security team of a possible insider threat.
- Direct incident response. The ability to analyze large amounts of data means that AI-based systems can detect and prioritize incidents, proactively responding to both known and unknown threats. They can also trace incidents back to their origin, resulting in more effective security measures.
- Endpoint protection. Traditional threat detection relies on signatures, or known user behavior. AI-based endpoint protection includes machine-learning algorithms that detect anomalous behavior after establishing a baseline, usually in real-time.
- Breach risk prediction. AI technology can take inventory of all of your organization’s IT assets and the different users that have various access and permissions to those assets. It can then use that information to predict the most likely method of attack and the potential entry point so that your organization can best defend itself.
- Network security. Analyzing network activity for unusual activity, such as unusually large data transmitted over the network that could be indicators of potential DDoS attacks. It can also quickly spot and patch vulnerabilities in the network infrastructure to mitigate against emerging threats such as zero-day vulnerabilities.
The Risks of Using AI in Cybersecurity
With the advantages of generative AI come a number of risks for security teams, from both the process of the AI systems and cybercriminals who leverage the technology for their own malicious purposes.
These risks include:
- Inaccurate results and/or false positives. AI technologies are based on datasets, so their accuracy is dependent on the amount of datasets they use in their training. Acquiring and investing in such massive datasets often requires more time and resources than many organizations have available. While training AI models on smaller datasets may be more cost-effective, they also render inaccurate results.
- Data leakage. All data entered into ChatGPT is stored and used to continue to train its model. With access to sensitive data, these highly distributed LLM models are at constant risk of data leakage. Between May 2022 and June 2023, more than 100,000 ChatGPT accounts were compromised by information-stealing malware.
- Prompt engineering. Text from the AI model can be structured for malicious purposes, including writing commands in an email text prompting it to forward to an attacker or writing injection-style text in a place where AI models can easily access the data.
- Phishing attacks. Cybercriminals can leverage generative AI tools such as ChatGPT to write convincing phishing emails, using social media as training data to mimic the tone and voice of the author. This lures users into giving sensitive data such as customer credentials and payment information. This leaked information can also lead to data breaches.
- Hallucinations. AI systems can generate answers even with a relatively low confidence level. This results in inaccurate, biased and false results. For example, if you ask a model to identify five examples of vulnerabilities in your system but only 3 exist, it may make up two to satisfy the request.
What are Best Practices for the Use of AI in Cybersecurity?
To mitigate these risks, organizations can put a number of best practices in place.
1. Implement a company-wide strategy for AI
Since AI delivers both benefits and risks to your security team, your organization should develop a policy for how it plans to integrate AI technology into its security architecture and processes.
Before doing so, however, you’ll need to determine:
- Your security objectives. Are you looking to improve your threat detection, identify third-party risk, or detect unknown vulnerabilities?
- Your business goals. How can AI help us align with our business goals? For example, can it assist in spotting emerging trends, enhance the security of your customer interactions or identify potential security issues before they become critical?
- Your organization’s current AI skillset. Does your team currently have the necessary AI skills to meet these objectives, or will it require hiring additional staff?
- Your measurement for success. After implementing AI, do you expect a decrease in the time your security spends on patching vulnerabilities? An improvement in your security posture?
2. Provide access to high-quality and accurate data
Since AI works by recognizing patterns in data, its models are only as good as the data it is given. Biased and outdated data can generate inaccurate results. ChatGPT, for example, trained its AI models on data from the web up until 2021, so it cannot achieve accurate results for questions related to current events. After enabling plugins and web browsing capabilities, the AI system now has access to the most recent web data. It also allows users to upload relevant documents needed to obtain the most accurate results.
3. Consider ethical implications
Data privacy and security are key concerns when it comes to AI. The technology can be used to monitor and track user behavior, violating their privacy. To guard against this, organizations should implement data anonymization techniques and create transparent guidelines to ensure that users have control over the collection of their data. To protect user security, organizations should conduct ongoing monitoring of the AI systems for suspicious activity and regular security assessments, including verifying that the AI systems are configured correctly. This is critical since attackers can leverage vulnerabilities in the system and use them to change AI algorithms, impacting their ability to deliver effective cybersecurity.
4. Continually test and update AI models
Since the models are trained with data and data changes over time, it’s important to keep your AI models updated with new data to ensure their accuracy. Otherwise, you’ll experience “model drift,” where AI models move away from their original performance levels over time. Continuous testing also identifies any model limitations that you may need to address. For example, it can help ensure that the AI model recognizes the latest threats so that your organization can mitigate cyber threats effectively.
How Panorays Utilizes AI for Third-Party Risk Management
According to research from the Identify Theft Resource Center, the number of data breaches in 2022 originating from supply chain attacks exceeded those from malware attacks by 40%.
As attack surfaces expand, it becomes increasingly challenging for your organization to gain visibility of its supply chain and understand the risks exposed to your organization from new technologies such as artificial intelligence. Panorays delivers this visibility by combining automated and contextualized security questionnaires with external attack surface assessments to gain a 360-degree rating of your supplier’s risk, including identifying third, fourth, and N-th party suppliers who are using AI, which assets are vulnerable to exploitation, and the likelihood of any given supplier could be breached through that vulnerability.
Not only can Panorays identify these risks, but its AI capabilities benefit third-party risk managers with AI-assisted responses to security questionnaires based on users’ previous answers, reducing friction for third parties and enabling quicker and more accurate answers. Panorays also uses AI to parse new data to stay on top of the latest data breaches so that your organization can mitigate risks proactively, rather than responding to any security incident once it’s too late.
Want to learn more about how you can leverage AI systems in your third-party risk management program? Get a demo today.
FAQs
Examples of AI in cybersecurity include:
Predictive breach detection. By detailing the number of users, devices and applications with different levels of access to your system, AI and machine learning models can predict where and how the next security breach will occur in your organization.
Threat detection. Detecting emerging threats through pattern recognition and identifying new tools for launching malware attacks and malicious user behavior that indicate a possible insider threat.
Endpoint protection. AI-based endpoint protection uses training models to continuously establish a baseline for normal behavior. It then uses that baseline to monitor and respond to any behavior that acts outside of that baseline.
Direct incident response. AI models can assist in identifying, prioritizing and proactively responding to emerging and existing security threats in your infrastructure. For example, it can identify a threat to a specific compromised device within your organization and isolate it to prevent a data breach.
AI is used in cybersecurity for several purposes. First, it is used to analyze large amounts of data and to discover anomalous user behavior or traffic that could pose a threat to your organization. For example, it is used in network security to monitor and analyze traffic patterns and detect when anomalous traffic patterns indicate a possible DDoS attack. Second, its ability to analyze large amounts of data is used to help make better security decisions. For example, it can predict where a future data breach might occur and respond proactively to defend against it. Third, it can help monitor and identify vulnerabilities in your organization’s supply chain by identifying which third parties are using AI and if they are meeting the proper compliance regulations to mitigate risk from those technologies.
As AI capabilities expand, they will continue to contribute to the improved cybersecurity of organizations even as cybercriminals leverage them for their malicious purposes.