Generative AI in cybersecurity presents the industry with a double-edged sword. Since ChatGPT was introduced in 2022, the cybersecurity industry has reported a 1,265% increase in malicious phishing and a 2,137% increase in deepfakes. The UK National Cyber Security Centre also expects generative AI to contribute to a rapid rise in ransomware over the next two years.
At the same time, however, supply chains and third-party vendors are increasingly adopting AI technologies that drive value through applications in demand forecasting, risk management, issue identification, and improved relationships with third parties.
Understanding Generative AI in Supply Chains
With its ability to automate conversations with customers, analyze massive amounts of data to predict future trends, and continually assess risks at scale, generative AI promises greater efficiency, allowing supply chain teams to focus on higher-value tasks such as strategic planning. It shouldn’t be a surprise that it has quickly been adopted across supply chains in organizations of all sizes and industries.
Generative AI offers organizations the ability to generate text quickly (whether by filtering through data to retrieve relevant information or by basing new text on prior templates) and to create complex documents at scale, such as contracts and third-party assessments. It also enables organizations to analyze massive amounts of data to discover patterns and trends, and to modify current risk strategies based on real-time data.
Cybersecurity Risks Introduced by Generative AI
Although generative AI delivers benefits for third-party risk management, it also introduces risks. Cybercriminals have leveraged AI to automate phishing attacks at scale, develop malware and deepfakes, and exploit vulnerabilities in AI-powered systems and algorithms. Biased data and hallucinations (the presentation of false or fabricated information) can result in poor security decision-making, such as improper identification of vulnerabilities and inadequate incident response. In addition, there is always a risk that data from models could leak, compromising company IP, exposing sensitive customer data, and causing violations of compliance frameworks such as HIPAA and PCI DSS.
Third-Party Risk in an AI-Driven Ecosystem
As more AI technology is incorporated into supply chains, it becomes increasingly challenging to identify and defend against its risks.
These third-party risks include:
- An expanded attack surface that demands greater visibility into the supply chain.
- Data privacy issues, including the risk of data leakage and failure to adhere to regulations, which become more challenging as data processing systems grow more complex.
- Vendor interdependence, which increases the risk of operational failure and of significant financial and reputational damage when relying on a critical service.
- Regulatory compliance challenges in a dynamic landscape affected by industry, geography, and target audience.
- Bias and fairness risks due to inaccurate and irrelevant data or statistically insignificant sample sizes.
Expanded Attack Surface
As organizations increasingly outsource AI technologies to third, fourth, and n-th parties, the attack surface expands, and organizations are often unaware of which suppliers and technologies are in their supply chain. Without identifying the business relationships in their supply chain, they cannot understand the criticality of each vendor, the cybersecurity risk it poses to them, or which areas of compliance it may be violating. It also becomes much harder to control critical aspects of security such as data security, which is one of the reasons cloud platforms such as AWS operate on a shared responsibility model, with responsibility for data security resting with the SaaS provider hosting its data on AWS rather than with AWS itself.
Data Privacy Issues
When AI models are trained on sensitive data or information, there is always a risk of the data being leaked. Leakage of financial statements or IP information can lead to damage of the organization’s brand and a loss in customer trust. Many regulations (e.g., HIPAA, GDPR, NYDFS) have explicit clauses related to the protection of data privacy in third-party technology, including security controls and the reporting of data breaches within specific time frames. In general, the use of generative AI in cybersecurity makes it harder for organizations to control how the data is stored, generated, or transferred in their supply chain. This issue is magnified in the case of complex data analysis where it is more difficult to understand how the data is being used and what privacy issues it might violate.
Vendor Interdependence
Since many of these AI technologies and applications in the supply chain are new, organizations often rely on a single vendor. When this vendor delivers a critical service, the inability to deliver service due to an operational failure, power outage, or data breach presents a serious third-party risk. Vulnerabilities in an AI technology shared by multiple vendors can create a security issue throughout the supply chain that provides the perfect opportunity for an attack. It can also make it harder for the organization to mitigate in the event of an attack since coordination is required among multiple vendors.
Regulatory Compliance Challenges
Regulations aren’t standardized across industries or geographic locations, so organizations may have to ensure that each third party or supplier adheres to a different set of regulations. These regulations are updated frequently, and it can be challenging to stay ahead of the changes, which may include compliance requirements aimed specifically at AI. The evolving regulatory landscape is still coming to terms with AI technologies, and we have only started to see rules targeting it directly, such as the EU AI Act and the U.S. Executive Order on AI.
Bias and Fairness Risks
Algorithms used in generative AI technology may be trained on inaccurate or biased data, and repeated retraining on the output of older models can render new models practically worthless (a phenomenon known as model collapse).
For example, training data in cybersecurity may include more data from specific networks or user profiles, leading to more false positives. The AI model developer may assume more threats from a specific type of user profile or state actor, and focus more on threats from specific countries such as Russia and China, enabling other threat actors to fly under the radar.
If AI models lack the ability to understand context, they may flag as abnormal behavior what those inside the industry recognize as normal variation. For example, employees who use casual conversation and abbreviations in email correspondence with one another may be falsely flagged as phishing threats.
Opportunities for Strengthening Cybersecurity with AI
Fortunately, AI has also created a number of opportunities that allow organizations to strengthen their cybersecurity across the supply chain. All of them also allow you to adopt a more proactive approach to cybersecurity and third-party risk management in particular.
Let’s examine these AI-driven opportunities in more detail.
AI-driven Tools for Continuous Third-Party Risk Monitoring
AI can quickly detect any new breaches or vulnerabilities in the supply chain, prioritizing them based on their risk exposure and criticality scores. Since it enables massive amounts of data to be gathered quickly, it allows for risk scoring to be updated in real time. For example, if your third party experiences a sudden increase in user traffic, AI can immediately detect this behavior and adjust your risk score accordingly. Advanced third-party risk scoring takes other conditions into account when updating your organization’s risk score, such as any recent data breaches in your supply chain, their adherence to compliance, and attack surface vulnerabilities.
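The real-time scoring idea above can be illustrated with a minimal sketch. The signal names and weights below are hypothetical assumptions for illustration, not Panorays' actual scoring model:

```python
from dataclasses import dataclass

@dataclass
class VendorSignals:
    """Hypothetical real-time signals gathered for a third party."""
    traffic_spike: bool   # sudden increase in user traffic
    recent_breach: bool   # breach recently disclosed in the supply chain
    compliance_gaps: int  # count of open compliance findings
    surface_vulns: int    # known attack-surface vulnerabilities

def risk_score(base: float, s: VendorSignals) -> float:
    """Recompute a 0-100 risk score from a base score plus live signals.

    Weights are illustrative; a real system would learn or tune them.
    """
    score = base
    if s.traffic_spike:
        score += 10
    if s.recent_breach:
        score += 25
    score += 5 * s.compliance_gaps
    score += 2 * s.surface_vulns
    return min(score, 100.0)
```

Each time a monitoring feed reports a new signal, the score is recomputed, so a traffic spike or breach disclosure is reflected in the vendor's risk score immediately rather than at the next scheduled assessment.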
Leveraging AI for Anomaly Detection and Threat Prediction in Vendor Systems
Since AI can analyze data at scale, it is ideal for sorting through the historical data of third parties to reveal trends and user behavioral patterns that might not otherwise be detected. Security and risk management teams can use this data to evaluate how the organization performed against similar risks in the past and improve the risk management framework to mitigate future risks. This type of behavioral analytics can defend against data breaches, non-compliance, and operational failures in advance, before they become major attacks and cause major damage. For example, natural language processing (NLP) technology can quickly examine thousands of emails and contracts to evaluate any language that could pose a risk to your organization.
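At its simplest, the anomaly-detection idea described above amounts to comparing new observations against a historical baseline. The sketch below uses a classic z-score test on a vendor metric (e.g., daily API traffic); production systems use far richer models, but the principle is the same:

```python
import statistics

def is_anomalous(history: list[float], latest: float, threshold: float = 3.0) -> bool:
    """Flag `latest` as anomalous when it deviates more than `threshold`
    standard deviations from the historical mean (a simple z-score test)."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        # No historical variation: anything different is anomalous.
        return latest != mean
    z = abs(latest - mean) / stdev
    return z > threshold
```

For instance, with a baseline of roughly 100 daily requests, a reading of 500 would be flagged, while 103 would pass as normal variation.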
Enhancing Third-Party Compliance with Automated Assessments
In advanced third-party risk management, AI can help streamline and automate vendor evaluations by gathering information from vendor documents and similar documented assessments. In addition, continuous monitoring with the help of AI ensures that you have up-to-date information, such as the most current risk score.
The streamlining of vendor evaluations saves the time and resources consumed by the traditional due diligence process when onboarding new vendors, which may become more frequent as your organization scales. As a company expands, continuous evaluation of each third party’s security posture is also critical, since supply chains become increasingly complex and dynamic and new vendors may need to be evaluated for different areas of compliance. AI can also automate parts of these assessments during a compliance audit, saving time and ensuring that you aren’t missing security gaps related to regulatory compliance.
Automated Security Protocols
AI can be used to automate malware scanning, network monitoring, and patch management for the early detection and prevention of threats, delivering a more proactive approach to cybersecurity. In addition, routine security tasks can be automated, reducing human error in time-consuming work such as updating software across the entire company or analyzing logs. Endpoint detection and response (EDR) can also be automated across all network devices, gathering enough data to quickly detect anomalous behavior and react quickly in the event of an attack. Over time, both detection and reaction time can improve as the AI continues to analyze massive amounts of data to understand patterns and behavior.
Enhanced User Authentication
AI can reduce friction in the user authentication process by introducing frictionless methods such as biometric or passwordless authentication. It can also enable continuous authentication rather than authentication only at login, strengthening security without interrupting the user experience. These authentication systems analyze large amounts of data, enabling them to detect and mitigate new and evolving threats. One AI-powered method of strengthening user authentication involves customizing authentication according to different user behaviors or internal requirements.
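Continuous, behavior-aware authentication often takes the form of risk-based decisions: combine behavioral signals into a risk level, then allow the session, require a step-up check, or block. The signals and thresholds below are illustrative assumptions, not any specific product's logic:

```python
def auth_decision(typing_speed_dev: float, new_device: bool, odd_hours: bool) -> str:
    """Risk-based continuous authentication sketch.

    typing_speed_dev: deviation of typing rhythm from the user's baseline
    new_device:       session comes from a device not seen before
    odd_hours:        activity outside the user's normal hours
    Returns 'allow', 'step-up' (e.g., biometric re-check), or 'block'.
    """
    risk = 0
    if typing_speed_dev > 2.0:  # behavior diverges from learned baseline
        risk += 2
    if new_device:
        risk += 2
    if odd_hours:
        risk += 1
    if risk >= 4:
        return "block"
    if risk >= 2:
        return "step-up"
    return "allow"
```

Because the decision runs on every signal update rather than only at login, a session that starts to look unusual mid-way can be challenged without adding friction for normal behavior.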
Generative AI Cybersecurity for Third-Party Risk Management
Generative AI in cybersecurity presents a balance of risks and opportunities in the supply chain. For cybercriminals, it offers methods of scaling attacks and precisely targeting organizations; for organizations, it delivers the ability to defend themselves using precise, automated methods that can be easily customized and scaled to save time and resources while avoiding human error. Most importantly, AI allows organizations to transform their third-party risk management from a reactive approach to a proactive one.
With the help of AI, Panorays reduces third-party risk with its contextual cyber management.
Some of the benefits it delivers include:
- Automatic identification of third, fourth, and n-th parties in your supply chain
- Identifying the criticality of each vendor based on their business impact using internal questionnaires and AI-driven classification
- Quick sending and verification of customized internal assessments using AI-based response autofill that gathers data from vendor documents and external sources
- Dynamic calculation of Risk DNA for each third-party relationship, including an AI-based risk prediction based on business impact and risk appetite
Want to learn more about how Panorays uses AI to manage cybersecurity risk in your supply chain? Get a demo today!
Generative AI Cybersecurity FAQs
- How is generative AI used in cybersecurity?
Generative AI has various applications in cybersecurity, including the scaling and automation of risk assessments, continuous updating of threat monitoring and risk scores, threat intelligence, anomaly detection, and data analysis for insights. It can also gather and analyze massive amounts of data to detect patterns in user behavior and predict future behavior, enabling organizations to become more proactive in their approach to cybersecurity.
- What are the main security risks of generative AI?
The main generative AI security risks include the expansion of your organization’s attack surface, data privacy issues, data bias, and fairness, failure to adhere to compliance and regulations, and vendor interdependence. Data and information used to develop the AI algorithm may also be leaked, which may contain sensitive IP or personal data. Generative AI is also used directly by attackers to develop malware, execute phishing attacks at scale, and execute business fraud with deepfakes.
- How can generative AI be used in third-party risk management?
Generative AI has a few main applications for third-party risk management. These include the automation of third-party risk assessments, anomaly detection, and threat prediction in vendor systems, and the automation of user authentication and security protocols. It can also analyze massive amounts of data to see how current security protocols measure up against future possible third-party attacks and make recommendations for strengthening your organization’s security posture.