The Dark Side of AI: Unmasking the Malicious LLMs Fueling Cybercrime

(May 6, 2024)

Generative AI is revolutionizing the way we interact with the digital world, offering unprecedented opportunities for innovation and problem-solving. From tackling climate change to combating human trafficking, AI has the potential to address some of the world's most pressing challenges. AI also has a dark side, however: it is fueling cybercrime.

Cybercriminals have been quick to recognize AI's potential and are adopting malicious counterparts of mainstream models such as ChatGPT and Google Gemini, commonly referred to as dark large language models (Dark LLMs), to unleash highly automated, sophisticated cyberattacks at unprecedented scale. Dark LLMs operate without the ethical guidelines, filters, or built-in limitations of their benign counterparts. Often trained on data from the dark web and other malicious sources, they give bad actors powerful tools for a wide range of cybercrime, from generating malware and phishing emails to automating complex attack sequences.

How are cybercriminals using AI?

Cybercriminals are using Dark LLMs for a wide range of malicious purposes, including:

  • Vulnerability research: Identifying and exploiting weaknesses in systems and software.
  • Enhancing scripting techniques: Automating and optimizing malicious code.
  • Informed reconnaissance: Gathering intelligence on potential targets.
  • Evading anomaly detection: Crafting attacks that can bypass security measures.
  • Refining operational command techniques: Improving the efficiency and effectiveness of attacks.
  • Technical translations: Adapting malware and other tools for different languages and platforms.
  • Malicious code development: Creating new malware and exploits.
  • Payload crafting: Tailoring attacks to specific targets and objectives.
  • Bypassing security features: Circumventing defenses like CAPTCHA.
  • Operational planning: Coordinating and executing complex, multi-stage attacks.

The implications are chilling. Imagine a threat actor using AI to generate thousands of personalized spear-phishing emails, each one carefully crafted to exploit the psychological vulnerabilities of its target.

Real-world examples of AI-powered cybercrime are already emerging. In April 2024, security researchers found evidence suggesting that an AI-generated PowerShell script was used to distribute the Rhadamanthys infostealer malware in a targeted phishing campaign against German businesses.

What are some examples of Dark LLMs?

Dark LLMs are proliferating at an alarming rate, with new models constantly emerging to cater to the diverse needs of cybercriminals. Each is specialized for specific malicious tasks, from generating convincing deepfake media to enabling the rapid deployment of botnets capable of crippling networks within minutes. Even more concerning, access to these powerful models can be purchased on dark web marketplaces for as little as $30 per month. As AI advances, Dark LLMs will only become more powerful, more dangerous, and more affordable for cybercriminals.

Though numerous Dark LLMs exist today, the following appear to be among the most prevalent and widely used.

  1. WormGPT: Generates malware and exploits that can evade security software, and creates fake content like fraudulent invoices.
  2. FraudGPT: Crafts persuasive phishing emails and social engineering scripts to trick victims into revealing sensitive information.
  3. DarkGPT: Conducts reconnaissance, identifies vulnerabilities, and automates complex attack sequences.
  4. ruGPT: Built for Russian-speaking users to create malicious code and deceptive content.
  5. DarkBART: Generates deepfakes and manipulated media for disinformation campaigns.
  6. FoxGPT: Produces fake news and propaganda to sow discord and confusion.
  7. WolfGPT: Develops stealthy, persistent malware that can lurk undetected in compromised networks.
  8. XXXGPT: Aids in the deployment of botnets, remote access trojans (RATs), and other malware, including ATM malware and crypto stealers.
  9. DarkLLaMa: Adapted to create malicious code and bypass security controls.
  10. ChaosGPT: Generates chaos-inducing content, from ransom notes to misinformation.
  11. ShadowGPT: Conducts covert surveillance and gathers sensitive information on targets.
  12. NinjaGPT: Creates malicious code to infiltrate networks and exfiltrate data while evading detection.

Nation-state threat actors are leveraging AI

The weaponization of AI is not limited to underground cybercriminals. Nation-state actors and organized crime groups have also recognized the potential of these tools to supercharge their attacks.

According to recent threat intelligence from Microsoft, state-sponsored threat actors from China (Charcoal Typhoon and Salmon Typhoon), Iran (Crimson Sandstorm), North Korea (Emerald Sleet), and Russia (Forest Blizzard, linked to the GRU military intelligence agency) have all been observed using LLMs for various malicious purposes. These range from generating content for spear-phishing campaigns impersonating NGOs to conducting technical research on satellite and radar technology that could be used to support real-world military operations.

The consequences of such AI-powered attacks could be devastating, from massive data breaches to critical infrastructure disruption and even loss of life.

What Now?

As AI continues to advance, even more sophisticated and dangerous models are likely to emerge. Cybersecurity leaders must remain vigilant, proactive, and ahead of the curve to safeguard their organizations and the wider digital ecosystem from the growing dangers posed by Dark LLMs. Here are a few strategies cybersecurity leaders should consider to bolster their organizational defenses against AI-powered threats:

  • Establish policies and guidance: Define acceptable and responsible use of AI-enabled products and services within your organization to set clear expectations and address AI risks.
  • Enhance training and awareness: Expand security awareness training to educate employees about the risks of AI-generated phishing emails, deepfakes, and social engineering. Run phishing simulations using AI-generated content to track changes in click and report rates and identify areas for improvement.
  • Strengthen email security: Implement email filtering and authentication protocols like DMARC, DKIM, and SPF to block phishing emails and spoofing attempts (example DNS records follow this list). Consider exploring advanced phishing detection solutions.
  • Monitor for data leaks: Watch for any leaks of sensitive data or intellectual property, such as source code, that could be used to train malicious Dark LLMs. Employ data loss prevention (DLP) tools and monitor dark web forums for potential leaks (a minimal secret-scanning sketch also follows this list).
  • Gather threat intelligence: Collaborate with industry peers, government agencies, and threat intelligence providers to share information on emerging AI-powered threats and tactics, techniques, and procedures. 
  • Improve overall cyber hygiene: Regularly conduct vulnerability scans and penetration tests to identify weaknesses and prioritize remediation of critical issues. Ensure that all systems and software are updated with the latest security patches.
  • Update incident response plans: Revise your incident response plan to incorporate AI-powered threat scenarios and test it through regular tabletop exercises. 
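
To make the email authentication guidance above concrete, here is a minimal sketch of the three DNS TXT records involved, using the placeholder domain example.com, a hypothetical DKIM selector ("mail"), and a hypothetical provider include; actual values depend entirely on your mail infrastructure.

```
; SPF: authorize only your legitimate sending servers (the provider include is a placeholder)
example.com.                  IN TXT "v=spf1 mx include:_spf.mailprovider.example -all"

; DKIM: publish the public key your mail server signs outbound messages with
; (selector "mail" is an example; the key value is a placeholder)
mail._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=<base64-encoded-public-key>"

; DMARC: reject mail that fails SPF/DKIM alignment and collect aggregate reports
_dmarc.example.com.           IN TXT "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
```

In practice, many organizations start DMARC at p=none to monitor aggregate reports, then tighten the policy to quarantine and finally reject once legitimate mail flows are verified.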
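To complement the data-leak monitoring point, below is a minimal illustrative sketch in Python of regex-based secret scanning over outbound text. The rule names and patterns here are simplified assumptions; production DLP platforms rely on far broader rule sets, entropy analysis, and contextual inspection.

```python
import re

# Simplified, illustrative patterns for common secret formats; real DLP
# rule sets are far more extensive than these three examples.
PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_api_key": re.compile(
        r"(?i)\b(?:api[_-]?key|secret)['\"]?\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
}

def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (rule_name, matched_snippet) pairs for any suspected secrets."""
    findings = []
    for name, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group(0)))
    return findings

if __name__ == "__main__":
    # Hypothetical outbound message containing an embedded credential
    sample = 'config = {"api_key": "abcd1234efgh5678ijkl"}'
    for rule, snippet in scan_text(sample):
        print(f"ALERT [{rule}]: {snippet}")
```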

In the escalating battle against AI-powered threats, our most powerful tools will be vigilance, a robust security culture, close collaboration, and strategic partnerships.