Opportunities & Risks in AI: What Every CISO Should Know

(August 23, 2023)

As cyberattacks escalate in both volume and complexity, the digital security landscape is undergoing a fundamental transformation, increasingly powered by artificial intelligence (AI) and machine learning.  

Where traditional software systems are overwhelmed by the sheer influx of new malware appearing weekly, AI can quickly analyze vast data sets and unearth a plethora of cyber threats, from elusive malware to subtle indications of potential phishing attacks. In doing so, AI is not just augmenting but revolutionizing under-resourced security operations, mitigating the impact of a global shortfall of 3.4 million cybersecurity professionals. 

By assimilating billions of data artifacts, AI is honing its capacity to comprehend cyber risk and cybersecurity threats, swiftly establishing relationships between malicious files, suspicious IP addresses, and potential insider threats. This rapid analysis results in curated risk assessments, drastically reducing response times and facilitating more efficient critical decision-making and threat remediation. This paradigm shift has seen the market for AI in cybersecurity skyrocket, from over $10 billion in 2020 to a forecast $46.3 billion by 2027 (source: Statista).

That’s why AI platforms must follow the same path as other institutions that hold and pass along potentially compromising information as part of their business processes: they need security procedures and fail-safes that protect endpoints from malicious content.

In this article, I’ll explore the risks of present applications of AI in cybersecurity, as well as the promising new opportunities on the horizon. 

Risks of AI Applications in Cybersecurity 

Internal AI Threats: Employees Misusing LLMs 

Large Language Models (LLMs) like OpenAI's ChatGPT have emerged as powerful tools with a wide range of applications. However, without proper access controls, an employee might feed sensitive information into an LLM to get an explanation, to reformulate complex details, or simply out of curiosity about how the model responds. This can result in intellectual property, business strategies, customer data, or other sensitive corporate information being inadvertently or maliciously fed into these models, risking data leakage.

While platforms like OpenAI assure users that data provided to models like ChatGPT isn't stored or used for future model training, not all implementations of LLMs follow these stringent protocols. Thus, there's a risk that sensitive information may be retained, either intentionally or inadvertently, by less secure platforms. Even if a model itself doesn't retain the data, the platforms or infrastructures it operates on might. For instance, when using cloud-based instances of LLMs, if the cloud provider doesn't have strict data isolation or deletion policies, sensitive data could be exposed.

In addition, the autonomous generation of content might lead to unintentional violations of compliance policies or the creation of inappropriate or offensive material. To mitigate these risks, it's imperative that CISOs maintain strict usage guidelines, monitor interactions, and continually educate the workforce on responsible LLM usage. They can also look to solutions like Zero Trust to ensure that any harmful content encountered unwittingly doesn't make its way onto the organization's endpoints.

External AI Threats: AI-Based Malware Creation 

As AI technology evolves, so does the potential for its misuse by malicious actors. Generative AI tools are being used by hackers as well: they provide fluency in English to bolster phishing campaigns, they can be tricked into generating malware, and of course, there's the potential for ChatGPT itself to be breached. Palo Alto Networks reports a significant increase in scams related to ChatGPT, including a 910% increase in domain registrations that mimic ChatGPT and more than one hundred daily detections of ChatGPT-related malicious URLs. These alarming figures serve as a clarion call for CISOs: AI-driven risks are escalating, and a proactive, AI-informed security strategy is no longer optional; it's imperative.

Opportunities of AI Applications in Cybersecurity 

Internal AI Threats: Enabling Our Human Defenders with AI 

AI systems can analyze enormous datasets, identify patterns, and detect anomalies more efficiently than human analysts. Machine learning (ML), a subset of AI, allows systems to learn from data, enabling them to adapt and improve threat detection over time. These systems can highlight suspicious activity and potential threats, assisting your security team in acting swiftly and accurately. 
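
As a minimal illustration of the kind of anomaly detection described above, the sketch below trains scikit-learn's IsolationForest on synthetic activity data. The feature set (logins per hour, data transferred, distinct destination hosts) and the contamination setting are illustrative assumptions, not a production design.

```python
# Minimal anomaly-detection sketch using scikit-learn's IsolationForest.
# The features (logins per hour, MB transferred, distinct hosts contacted)
# are illustrative assumptions, not a prescribed schema.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" activity: [logins_per_hour, mb_transferred, distinct_hosts]
normal_activity = rng.normal(loc=[5, 50, 3], scale=[2, 15, 1], size=(500, 3))

# A few sessions to score: two look like data exfiltration, one looks routine
sessions = np.array([[40, 900, 60], [3, 45, 2], [55, 1200, 80]])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_activity)

# predict() returns 1 for inliers and -1 for anomalies
for row, label in zip(sessions, model.predict(sessions)):
    verdict = "ANOMALOUS" if label == -1 else "normal"
    print(f"session {row.tolist()} -> {verdict}")
```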

AI can also help security teams identify patterns in a macro's behavior or code, determining whether it's malicious or benign. It can monitor macros' activities, making it easier for human defenders to spot and prevent malicious activity. This improves overall security without forcing teams to disable document macros outright, keeping employees productive. 

External AI Threats: New Tools & Tech to Prevent Threats

Traditional anti-malware solutions rely on signature-based detection, making it challenging to detect new or modified malware. AI can step in to fill this gap. By using machine learning algorithms, AI can analyze the behavior and structure of files, allowing for effective detection of unknown malware. It's especially useful in fighting against polymorphic and metamorphic malware, which continually alter their code to evade detection. 
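
To make this concrete, here is a toy sketch of structure-based classification under assumed features: a random forest trained on synthetic static file attributes (size, byte entropy, imported-function count). Real pipelines extract far richer behavioral and structural features from actual samples.

```python
# Toy sketch: classifying files as malicious or benign from static features.
# The features and data below are synthetic placeholders, not real telemetry.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)

benign = np.column_stack([
    rng.normal(300, 80, 400),   # size_kb
    rng.normal(5.0, 0.6, 400),  # byte entropy
    rng.normal(120, 30, 400),   # imported functions
])
malicious = np.column_stack([
    rng.normal(150, 60, 400),
    rng.normal(7.4, 0.4, 400),  # packed/encrypted payloads tend to be high-entropy
    rng.normal(25, 10, 400),
])

X = np.vstack([benign, malicious])
y = np.array([0] * 400 + [1] * 400)  # 0 = benign, 1 = malicious

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```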

AI can also analyze millions of emails in real time, learning to identify the subtle clues that indicate a phishing attempt. It can spot anomalies in email headers, body text, and links, providing a robust defense against phishing attacks. This includes common email schemes that can snare even the most vigilant employees, such as impersonation attempts, business email compromise, and spear-phishing. 
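
As a minimal sketch of the text side of this, assuming a small labelled mail corpus, the example below scores messages with TF-IDF features and logistic regression. The handful of training emails is invented purely for illustration; production systems train on large labelled datasets and also weigh headers, sender reputation, and embedded links.

```python
# Minimal sketch of text-based phishing classification with scikit-learn.
# The tiny training corpus below is invented purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your account has been suspended, verify your password immediately here",
    "Urgent: wire transfer needed today, reply with banking details",
    "Meeting moved to 3pm, agenda attached",
    "Quarterly report draft for your review before Friday",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(emails, labels)

new_email = "Please confirm your password to avoid account suspension"
print("phishing probability:", round(clf.predict_proba([new_email])[0][1], 2))
```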

AI and machine learning can also be used to develop multi-factor authentication systems that analyze biometric data more effectively, including facial recognition, voice recognition, and fingerprint scanning. These systems could offer a higher level of security than traditional password-based systems.

Questions About AI that CISOs Should Be Asking

Am I up to date with the latest on AI? The CISO’s role is pivotal in leveraging the benefits of AI while mitigating the potential risks it presents. Staying abreast of the latest developments in AI technology will be crucial for maintaining a strong cybersecurity posture in the years to come. 

Have I explored available tools? Traditional anti-malware solutions, like AV, can’t always keep up with the latest threats, especially zero-days. Tools that leverage AI can better secure your perimeter. 

Are my teams aware of the risks? Educate employees about the potential risks of sharing sensitive information with external tools, including LLMs. Training should cover both inadvertent and intentional sharing. 

Have I implemented access controls? Set up robust access controls and use-case limitations for interacting with LLMs, especially when operating in environments dealing with sensitive information. 
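
One minimal way such a control could look in code, with the roles, use cases, and forwarding stub all assumed for illustration, is a gateway that checks the caller and the declared use case against an allow-list before any prompt leaves the organization:

```python
# Hedged sketch of a simple LLM gateway enforcing role/use-case allow-lists.
# Roles, use cases, and the forwarding stub are illustrative assumptions.
ALLOWED_USE_CASES = {
    "engineer": {"code-explanation", "log-summarization"},
    "analyst": {"report-drafting"},
}


def forward_to_llm(prompt: str) -> str:
    # Placeholder for the real call to your approved LLM provider.
    return f"[LLM response to {len(prompt)} chars of prompt]"


def submit_prompt(role: str, use_case: str, prompt: str) -> str:
    permitted = ALLOWED_USE_CASES.get(role, set())
    if use_case not in permitted:
        raise PermissionError(f"role '{role}' is not approved for '{use_case}'")
    return forward_to_llm(prompt)


print(submit_prompt("engineer", "code-explanation", "Explain this regex: ^a+b$"))
# submit_prompt("analyst", "code-explanation", "...") would raise PermissionError
```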

Am I monitoring AI-related activity? Regularly audit interactions with LLMs to detect and address potential data exposures promptly. Anomaly detection tools can help flag unusual queries or data submissions. 
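
A lightweight sketch of such an audit trail, with field names and the size threshold assumed for illustration: every interaction is appended to a JSON-lines log (recording prompt length rather than the prompt itself), and unusually large submissions are flagged for review.

```python
# Sketch of an audit log for LLM interactions (field names are assumptions).
# Each interaction is appended to a JSON-lines file; unusually large prompts
# are flagged for review as a crude anomaly signal.
import json
import time

AUDIT_LOG = "llm_audit.jsonl"
PROMPT_SIZE_THRESHOLD = 4000  # characters; tune to your environment


def log_interaction(user: str, prompt: str, model: str) -> bool:
    flagged = len(prompt) > PROMPT_SIZE_THRESHOLD
    record = {
        "timestamp": time.time(),
        "user": user,
        "model": model,
        "prompt_chars": len(prompt),  # length only, so the log holds no sensitive text
        "flagged": flagged,
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")
    return flagged


if log_interaction("j.doe", "Summarize this 20-page customer contract ..." * 200, "gpt-4"):
    print("Interaction flagged for review")
```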

Am I using data masking and tokenization? Before feeding data into external tools or systems, employ data masking or tokenization to obscure sensitive data elements. 
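
A minimal sketch of the masking idea, assuming regex detectors for just two identifier types: detected values are swapped for opaque tokens before the text leaves the organization, while the token map stays local so responses can be re-identified afterwards.

```python
# Sketch of regex-based data masking before text is sent to an external tool.
# The patterns cover only email addresses and 16-digit card numbers here;
# real deployments use broader detectors (names, IDs, secrets, etc.).
import re
from typing import Dict, Tuple

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){16}\b"),
}


def mask(text: str) -> Tuple[str, Dict[str, str]]:
    token_map: Dict[str, str] = {}

    def replace(kind: str, match: re.Match) -> str:
        token = f"<{kind}_{len(token_map) + 1}>"
        token_map[token] = match.group(0)  # kept locally for re-identification
        return token

    for kind, pattern in PATTERNS.items():
        text = pattern.sub(lambda m, k=kind: replace(k, m), text)
    return text, token_map


masked, mapping = mask("Contact jane.doe@example.com, card 4111 1111 1111 1111.")
print(masked)   # Contact <EMAIL_1>, card <CARD_2>.
print(mapping)  # local map used to restore originals after the response comes back
```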

Do I have protections in place for zero-day threats? As generative AI evolves, so do the threats created by it. This includes new variants of malware that have the potential to bypass detection or circumvent traditional sandboxing techniques. To combat this, look to more robust solutions like Content Disarm and Reconstruction (CDR) to protect endpoints from signature-less, unknown threats. 

The rapid evolution of AI promises a future where cybersecurity can become increasingly proactive, automated, and intelligent. By understanding and leveraging these technologies, you can ensure your organization is well-prepared to face the cyber threats of the future.