GenAI in Cyber - Automated Phishing, Detecting Anomalies and Awareness Training

September 24, 2024

As threat actors evolve, so must our defense strategies. The rise of Generative AI presents both new opportunities and challenges for security teams. This technology, which enables the mass creation of high-quality, human-like content, is empowering attackers to craft more sophisticated phishing attacks at an unprecedented scale. In response, cybersecurity professionals must rethink traditional approaches to threat detection and mitigation, particularly as account takeovers and social engineering techniques continue to increase in complexity.


AI-Automated Phishing

Phishing used to rely on high-volume, low-effort tactics, such as the “spray and pray” approach. Attackers would send out poorly constructed emails en masse, hoping that a few unsuspecting individuals would take the bait. As organizations and users became savvier, phishing evolved into more targeted and sophisticated spear-phishing attempts, where attackers would conduct research and craft personalized messages aimed at specific individuals. However, this more personalized approach was constrained by the time and effort required to craft each message. 

Enter Generative AI. Since the mainstream adoption of tools like ChatGPT, threat actors have gained the ability to produce well-written, convincing phishing emails at scale. Attackers who previously struggled to craft coherent messages can now easily produce persuasive content that is almost indistinguishable from legitimate communication. As a result, the volume of sophisticated phishing attacks has surged, and the traditional red flags, such as grammatical errors or awkward phrasing, are no longer reliable indicators of a phishing attempt.

Research published in 2024 by Harvard Business Review (HBR) showed that 60% of participants fell victim to AI-automated phishing, a success rate comparable to that of non-AI phishing messages created by human experts.

The real danger lies in the shift from traditional phishing, which often involved malicious links or attachments, to a more conversational approach. Attackers are no longer just sending harmful payloads upfront; instead, they are initiating conversations to build trust and familiarity. Once rapport is established, the victim is more likely to comply with the attacker’s requests, whether that’s clicking on a link or providing sensitive information. This subtle manipulation makes it much harder for even well-trained employees to recognize and avoid these threats. 

The Role of Security Awareness Training 

Given these developments, it’s clear that the traditional approach to security awareness training is no longer sufficient. For years, we’ve told users to look out for certain warning signs: suspicious links, unexpected attachments, and poor grammar. However, as phishing tactics evolve, these guidelines are becoming increasingly outdated.

Security awareness training needs to reflect the reality of today’s threat landscape. Users must understand that even a well-written email from a familiar contact could be fraudulent. Training should emphasize caution around any unexpected or urgent communication, regardless of how professional or legitimate it appears.  

Users should also be trained to recognize emerging attack tactics, such as malicious QR codes leveraged in QR code phishing (quishing) attacks, or impersonations of file-sharing services that are increasingly used in file-sharing phishing attacks.

While it's essential to improve security awareness, training alone isn’t enough. We can’t expect every employee to become a security expert, nor can we rely solely on human vigilance to catch these increasingly sophisticated attacks. That’s where the second part of the solution comes in: leveraging advanced AI technologies for threat detection. 

Harnessing AI to Detect Anomalies 

If AI has leveled the playing field for attackers, it’s up to security teams to harness that same technology to defend their organizations. AI can be a powerful tool in identifying malicious or anomalous behavior, enabling security systems to detect threats that may go unnoticed by traditional defenses. 

AI-based solutions can process vast amounts of data and identify patterns that are indicative of an attack. By baselining what constitutes “normal” behavior within an organization, AI algorithms can quickly flag deviations that may signal an account compromise or phishing attempt. This ability to detect subtle changes in behavior is crucial in preventing attacks that rely on social engineering or account takeovers. 
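To make the idea of baselining concrete, here is a minimal sketch of one common approach: an isolation forest trained on historical activity, using scikit-learn. The features, values, and thresholds below are hypothetical illustrations, not the detection logic of any particular product.

```python
# Minimal sketch of behavioral anomaly detection. All feature names
# and values are hypothetical, chosen only for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per user event:
# [hour_of_day, emails_sent_last_hour, new_recipients, files_accessed]
baseline_events = np.array([
    [9, 4, 0, 2], [10, 6, 1, 3], [14, 5, 0, 1],
    [11, 3, 0, 2], [15, 7, 1, 4], [9, 5, 0, 2],
])

# Fit a model of "normal" behavior from historical activity.
model = IsolationForest(contamination=0.05, random_state=42)
model.fit(baseline_events)

# Score new events: a prediction of -1 marks a deviation from the
# learned baseline, e.g. a 3 a.m. burst of email to many new recipients.
new_events = np.array([
    [10, 5, 0, 2],    # resembles normal working behavior
    [3, 40, 25, 30],  # unusual hour and volume, likely flagged
])
for event, label in zip(new_events, model.predict(new_events)):
    status = "ANOMALOUS" if label == -1 else "normal"
    print(event, "->", status)
```

In practice, production systems learn from far richer signals, such as sender relationships, device fingerprints, and message content, but the baseline-then-flag pattern is the same.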

Account takeovers, in particular, are becoming a major concern. Cloud email accounts, such as those provided by Microsoft 365, are prime targets for attackers. Once an attacker gains control of a cloud email account, they can access not only email communication but also a range of other services within the cloud ecosystem, including collaboration tools like Microsoft Teams and cloud storage like OneDrive. With this access, attackers can exploit trusted relationships within an organization, posing as a legitimate user to deceive others into divulging sensitive information or executing unauthorized transactions. 

AI is particularly well suited to defending against these types of attacks because it can detect anomalies in communication patterns, login activity, and data access. For example, if an attacker compromises an account and begins accessing files or sending emails outside of that user’s normal behavioral patterns, AI-driven security systems can flag the activity as suspicious and initiate a response.
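As an even simpler illustration of scoring a single behavioral signal, the sketch below applies a basic z-score rule to one hypothetical metric, a user’s daily file access volume, and flags values that fall far outside that user’s own history.

```python
# Minimal sketch of per-user baselining with a simple statistical rule.
# The metric (daily file accesses) and threshold are hypothetical.
from statistics import mean, stdev

def is_anomalous(observed: float, history: list[float],
                 z_threshold: float = 3.0) -> bool:
    """Flag the observation if it falls more than z_threshold standard
    deviations from this user's historical mean."""
    mu, sigma = mean(history), stdev(history)
    return abs(observed - mu) > z_threshold * sigma

# Hypothetical history: files accessed per day by one user.
history = [12, 9, 15, 11, 10, 13, 8, 14]

print(is_anomalous(11, history))   # False: within the user's normal range
print(is_anomalous(250, history))  # True: e.g. bulk access after a takeover
```

A real system would combine many such signals rather than rely on one toy rule, but even this example shows how a compromised account’s sudden change in behavior becomes statistically visible.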

The Human Element in a Tech-Driven Defense 

The most effective protection against cybercrime combines people, processes, and technology in a multi-layered defense strategy.  

Security teams need to ensure that employees are educated on the evolving tactics used by attackers and should foster a culture of security where employees feel empowered to report suspicious activity without fear of retribution.  

However, attacks are becoming more sophisticated by the day, and they target a vulnerability that is notoriously difficult to secure: human behavior. Security awareness and culture can only go so far. Security teams also need modern technology in place to prevent attacks from reaching inboxes at all.

This is where solutions that leverage AI to automate threat detection and response have a major role to play, pitting good AI against malicious AI and keeping organizations protected against evolving threats, both today and in the future.