Exclusive AI Insights from Tech Leaders at RSA Conference 2024

12 min read
(May 13, 2024)
Exclusive AI Insights from Tech Leaders at RSA Conference 2024
18:34

This article is available in audio format, click play above to listen to the article. 

As the spotlight intensifies on how threat actors are leveraging AI to penetrate organizations' defenses, the call for investing in protection against AI attacks grows louder. A recent inquiry from a CISO brought forth a crucial question: what data exists to demonstrate the extent of AI's infiltration within a breach and the resulting damage caused by these malicious actors? Surprisingly, amidst the widespread coverage, the CISO found a scarcity of actionable real life data to guide their decision-making process.

cyber security tribe RSA conference 2024 article

In order to gather more insights, we consulted a panel of 16 technology experts who were in attendance at the RSA Conference 2024. They shared their perspectives on the utilization of AI by threat actors and how AI can aid security teams in safeguarding their organization's data and operations. 

In addition to this article, we are also producing a summary of the most important cybersecurity statistics that were released at the RSA Conference 2024 providing insights from surveys and newly released research.

RSA Conference 2024 Tech Leaders 

 

Rony Ohayon, CEO and Founder, DeepKeep

Threat actors leverage AI's capabilities to streamline attacks, employing multiple automated bots from diverse locations to exploit vulnerabilities. This approach drastically reduces costs and amplifies the challenge of detection, as these automated actions diverge from conventional human behavior. Consequently, traditional cybersecurity measures, such as blacklists, prove inadequate against these AI-driven threats. To effectively combat such sophisticated attacks, cybersecurity professionals must harness the power of AI and GenAI.

Adversarial attacks pose a significant concern, wherein malicious actors inject poisoned or backdoored models into repositories like Hugging Face, deceiving users into downloading compromised models. This underscores the necessity for comprehensive security solutions that span the entire AI lifecycle. From pre-model download assessments to scrutinizing training and validation phases, a multifaceted approach is imperative.

Furthermore, the deployment of malware within neural networks, including large language models and computer vision models, poses a formidable challenge. Such malware can operate stealthily, evading traditional detection methods and impacting performance without raising alarms. This necessitates specialized defenses tailored to the unique vulnerabilities of AI systems.

In response, cybersecurity professionals are developing bidirectional AI firewalls to fortify both input and response sides of AI models. These firewalls not only prevent data breaches and injection attacks but also address concerns regarding model hallucinations, toxicity, fairness, and biases. However, safeguarding AI systems extends beyond mere firewalls; it requires a holistic approach that encompasses the entire AI ecosystem.

Enterprises, in particular, must secure their AI infrastructure across multiple interfaces, APIs, and resources, mitigating risks throughout the AI lifecycle. This comprehensive strategy is essential to combatting the evolving threats in the cybersecurity landscape and ensuring the integrity and security of AI-driven technologies.

 

Dan Lohrmann, CISO, Presidio

Bad actors are capitalizing on AI methods initially used by good actors. For example, in Montgomery County, Maryland, AI is employed for multilingual assistance, yet it's also yielding some sophisticated fraud schemes and phishing attacks. These attacks have evolved from traditional phishing tactics to more targeted and sophisticated methods utilizing techniques like voice impersonation, leveraging AI to mimic trusted entities. Attackers leverage AI to analyze targets, exploit vulnerabilities in processes, and manipulate trust to gain unauthorized access.

AI is also used to help reduce fraudulent sales on platforms where threat actors may attempt to bypass multi-factor authentication. For instance, a scammer posed as a concerned buyer on Facebook Marketplace, attempting to reset the seller's password under the guise of a security check. This illustrates how attackers exploit trust to manipulate victims into compromising their security. AI can be used in detecting and mitigating such threats by monitoring online activities for suspicious behavior across various platforms.

 

Vivin Sathyan, Senior Technology Evangelist, ManageEngine

There is a growing use of AI by threat actors, particularly in the realm of malware development. It highlights the evolution from static malware definitions to dynamic, generative AI-powered variants, complicating detection and response efforts. There is a necessity for cybersecurity defenses to adapt by leveraging generative AI tools for proactive threat detection and mitigation.

In essence, threat actors are exploiting AI to create diverse strains of malware, challenging traditional defense mechanisms. To counter this, cybersecurity professionals are adopting similar AI technologies to enhance their defensive strategies. This approach includes developing specialized threat hunting processes tailored to detect and respond to AI-generated malware effectively.

There is a shift in cybersecurity paradigms, where defenders must harness AI to stay ahead of increasingly sophisticated threats. Insights from interactions with CIOs at events like the RSA Conference indicate a growing trend towards integrating AI into security operations, particularly in the development of malware detection and response capabilities.

 

Rodrigo Alves, Head of Product, Axur

The general landscape is that AI allows the threat actors to come up with very specialized techniques but in large scale. Scams are now orchestrated with AI-generated content, mimicking trusted sources to deceive targets. This fusion of specialized and large-scale attacks, such as spear phishing, presents a significant challenge in current cybersecurity scenarios.

However, AI also offers opportunities for defenders to scale their protective measures effectively. By leveraging generative AI, analysts can process vast amounts of data quickly, extracting actionable insights and issuing personalized warnings about specific vulnerabilities in seconds. This capability not only enhances the efficiency of threat detection and response but also enables proactive measures against emerging threats.

Moreover, advancements in multimodal multi-image models further enhance the detection of fraudulent activities, such as phishing websites or fake social profiles. Despite existing cost efficiency challenges, the future outlook is promising, as increasingly affordable AI models enable proactive fraud prevention measures, reducing reliance on reactive responses. 

 

Paul Reid, Global Head of Threat Intelligence, OpenText Cybersecurity

AI is increasingly leveraged by threat actors, particularly in targeted phishing and spear phishing campaigns, exploiting generative AI to craft convincing emails tailored to specific recipients. These attacks are becoming more sophisticated and timely, often capitalizing on current events or personal information gleaned from social media. While these threats are escalating, security measures are also advancing. Our software successfully quarantined billions of malicious emails last year, showcasing the effectiveness of AI in threat detection and mitigation. However, AI's role extends beyond phishing to the creation of innovative cyber attacks.

Generative AI can now generate malware, decode malicious code, and assess the legitimacy of PowerShell scripts, facilitating rapid threat analysis. While the prospect of AI generating zero-day exploits looms, adherence to fundamental security practices remains crucial. Effective cybersecurity solutions, like ArcSight Intelligence, focus on behavioral anomalies rather than attack vectors, enabling early detection and response to emerging threats. Additionally, tools like cyDNA offer proactive threat intelligence, empowering organizations to prioritize patching based on real-time adversarial signals. By leveraging AI-driven technologies and adopting a proactive security stance, organizations can effectively mitigate evolving cyber threats.

 

Elisa Costante, VP of Research at Forescout

AI is increasingly utilized by threat actors to accelerate the adaptation of different programming languages, facilitating the creation of malware tailored to specific targets. This enables threat actors to scale up their operations while reducing costs, as they seek less expensive and highly skilled individuals to develop malware. Additionally, AI is employed to enhance phishing attacks, resulting in emails with improved grammar and context, making them more convincing and difficult to detect. An example of this is the Dark Loader ransomware, which infiltrated systems through Teams messages impersonating company executives with remarkable accuracy. Furthermore, AI is leveraged for voice simulation in scams, enabling fraudsters to mimic individuals' voices and deceive targets into transferring funds under false pretenses.

To combat these threats, it is essential to educate organizations on the evolving tactics of cyber threat actors and empower them with AI-driven solutions. AI not only assists in summarizing vast amounts of data for better decision-making but also enriches incident handling and reporting by providing concise insights and actionable recommendations. As the industry progresses, the integration of AI into security solutions is becoming increasingly prevalent, offering enhanced capabilities in threat detection and response.

 

Nara Pappu, CEO, Zendata

Data breaches, primarily driven by data leakage, are a growing concern, especially with organizations aiming to leverage captured data for value generation, cost reduction, and enhanced customer service. Currently, identifying data flows and potential risks associated with data sharing is a manual and often disjointed process between engineering and security teams. AI presents an opportunity for the "good guys" to streamline this process by automatically identifying data flows, assessing risk postures, and detecting anomalies in data handling practices. By leveraging AI models, organizations can enhance transparency and mitigate bias in data-related processes, ensuring compliance with regulations even before deploying new systems or processes.

From the perspective of malicious actors, AI empowers them to identify and exploit risk vectors more efficiently, accelerating the pace of adversarial attacks such as prompt injection. This creates a dynamic cat-and-mouse scenario, where organizations must proactively defend against emerging threats or risk exploitation by adversaries. Acting preemptively to address vulnerabilities and strengthen defenses is crucial in thwarting potential breaches and safeguarding sensitive information from unauthorized access or exposure. Thus, AI's dual potential as both a defensive and offensive tool underscores the importance of proactive cybersecurity measures in mitigating risks posed by evolving threat landscapes.

 

Ira Winkler, CISO, CYE

Threat actors are leveraging AI tools to sift through large datasets and identify vulnerable targets, such as state and local government systems, by exploiting known vulnerabilities. They can use AI algorithms to analyze data from sources like Shodan or Google to pinpoint potential entry points. Additionally, AI aids in crafting sophisticated phishing messages by generating content tailored to exploit psychological vulnerabilities of specific individuals within targeted organizations. For instance, combining data from LinkedIn and Facebook profiles to identify psychologically vulnerable users and crafting personalized phishing messages.

On the other hand, the use of AI by "good guys" varies depending on the context. Many anti-malware tools utilize AI algorithms to detect malicious behavior rather than relying solely on signature-based detection methods. By analyzing behavioral patterns, these tools can identify anomalous activities indicative of cyber threats. Organizations leverage AI to analyze diverse datasets and determine the most probable vulnerabilities to exploitation, optimizing resource allocation for cybersecurity efforts. This approach involves utilizing complex mathematical algorithms to process vast amounts of data and tailor defensive strategies accordingly. Overall, AI plays a crucial role in enhancing both offensive and defensive cybersecurity capabilities, driving innovation and efficiency in threat detection and mitigation efforts.

 

Joe Ferrara, CEO, Security Journey

The introduction of AI into the Software Development Life Cycle (SDLC) offers immense potential for accelerating development processes and enhancing productivity. However, there's a significant challenge associated with this integration: the risk of introducing vulnerabilities into the system. Numerous studies examining the impact of AI on the SDLC have consistently highlighted this concern, indicating that the adoption of AI often results in the creation of more vulnerabilities rather than mitigating existing ones.

One of the primary reasons behind this issue is the knowledge gap among developers regarding the potential risks associated with AI implementation and their overall secure coding knowledge. Without a comprehensive understanding of AI's intricacies coupled with secure coding principles, developers will inadvertently introduce vulnerabilities into the system.  This risk is particularly pronounced when junior developers, who may lack sufficient experience and expertise, are tasked with utilizing AI in software development projects.

 

Bruce Snell, Cybersecurity Strategist, Qwiet AI

AI empowers cybercriminals with sophisticated tools for crafting convincing phishing emails at scale, bypassing traditional detection methods. Utilizing platforms like ChatGPT, they rapidly generate tailored messages, amplifying cybercrime.

Additionally, AI democratizes malware creation, enabling novices to develop potent threats via platforms like Github’s Copilot. This accessibility fuels a surge in AI-driven cybercrime, blurring the lines between skilled hackers and amateurs. As AI continues to evolve, combating such threats demands innovative strategies and heightened vigilance.

 

Larry Whiteside Jr., CISO, RegScale

When considering how threat actors leverage AI, phishing and deepfakes stand out as significant tools. Phishing tactics have evolved, becoming harder to detect due to AI-generated messages tailored to bypass traditional filters. Deepfake technology raises concerns about trusting visual and auditory cues, undermining the reliability of what we see and hear. Executives are particularly vulnerable, as AI can mimic their voices and appearances, facilitating elaborate scams.

Threat actors utilize AI for efficiency, automating tasks like identifying vulnerabilities and crafting malicious scripts. On the defensive side, AI empowers security teams by automating mundane tasks, allowing focus on genuine threats. It also enables deeper analysis of diverse datasets, uncovering hidden connections and enhancing threat detection capabilities. Ultimately, AI shapes both offensive and defensive strategies in cybersecurity, highlighting the need for continued innovation and vigilance.

 

Michael Assraf, CEO & Co-Founder at Vicarius

In exploring the utilization of AI by malicious actors, it's evident that automation plays a pivotal role in both identifying vulnerabilities and executing cyberattacks. Automated tools powered by AI, such as Evil GPT and Worm GPT, facilitate the discovery of software flaws and the orchestration of attacks with unprecedented efficiency. This automation significantly increases the potential damage, as demonstrated by incidents where attack simulations spiraled out of control.

On the defensive side, two primary challenges emerge: the inadequacy of existing cybersecurity tools and the complexity of managing sensitive data. AI offers solutions to these challenges by automating vulnerability remediation processes and enhancing data analysis capabilities. For instance, community-driven approaches to script development, coupled with AI-driven content generation, streamline the identification and mitigation of vulnerabilities.

Looking ahead, the future of cybersecurity operations is envisioned as highly automated and data-driven, with AI playing a central role in anomaly detection and threat response. However, achieving this vision requires addressing privacy concerns associated with data analysis. Hybrid deployment models, balancing data privacy with analytical capabilities, are poised to emerge as a solution to this challenge.

 

Guy Bejerano, CEO & Co-founder at SafeBreach

When assessing AI's implications as a threat, it's essential to consider both its positive and negative impacts. While AI can empower sophisticated threat actors to automate attacks and develop intricate models, it also poses risks by enabling less skilled attackers to execute more sophisticated attacks. Leveraging AI for security purposes presents opportunities to enhance mitigation strategies and address vulnerabilities effectively. For instance, AI-driven platforms can analyze data sets to identify gaps in security controls and recommend optimal mitigation measures. However, there's a concern that the market's fixation on AI as a buzzword may overshadow the need for genuine expertise and understanding of AI principles. Despite its potential, the efficacy of AI implementation in security contexts ultimately hinges on informed decision-making and a nuanced understanding of its capabilities and limitations.

 

Chris Scheels, VP, Gurucul

Overall, AI can help make security investigations more efficient, and reduce false positives in threat detection. AI can learn from events in the wild by scouring the internet, social media, and other public sources for news and details of successful attacks, new TTPs, and evolving threats. Then it can combine that with insights from analyzing everything happening inside the enterprise. Next, this data can be used to refine existing ML threat detection models, and even suggest new detection models entirely. Adversarial AI can also be used to train detection AI to keep up with the latest threats and trends.

Consider the impact of the SOC team wasting 40% - 60% less time on triaging, investigating and responding to false positives. This makes analysts more efficient and ultimately helps them stay ahead of the attackers.

 

Will Schroeder, Security Researcher at SpecterOps

The reality of bad guys utilizing AI is already here - attackers are using large language models to rapidly iterate phishing campaigns and optimize targeting, along with using deepfake technology to impersonate voice and video during campaigns. Here, companies are looking to evolve existing defenses to keep up with the pace of new vectors being introduced.

Organizations are also starting to investigate using large language models to increase efficiency in security operations, aiming to automate some of the initial repetitive work that often fatigues analysts. These models can help process and interpret the flood of incoming data within the framework of existing security controls, but from our perspective, most companies are still getting a grip on the best path forward.

 

Erez Tadmor, Field CTO, Tufin

The ultimate goal of AI is to make it easier to do more, faster, and at a greater scale. The idea is that it will free folks up from repetitive tasks and mundane work, allowing them to spend time on valuable tasks that truly matter in their work - and in their lives.

Unfortunately, cybercriminals understand this all too well. AI can help their efforts become better, and lowers the barrier to entry for a potential attacker. Why learn how to write a ransomware program when AI can do it for you? If AI can draft a persuasive phishing email or call script, then anyone can conduct an attack. AI can even automate the scanning for potential vulnerabilities - or help attackers to move quickly when a new zero day is discovered. AI can also help attackers hide their intentions from defenses, adjusting actions along the way to fool even the strongest of programs. When used like this, AI can also make attackers more profitable - they need less overhead and less people if AI can make them more efficient.

At the same time, AI is being harnessed by security teams to fortify defenses against such threats. The complexity of today's company networks has grown exponentially. No longer do any borders apply - the network is not just where employees work, it’s also wherever and whomever your employees are doing business with (partners, providers, etc.), and that can change on a daily basis. Anything that integrates with your network becomes a great entry point for a cyberattack.