By mhill, UK Editor

Generative AI phishing fears realized as model develops “highly convincing” emails in 5 minutes

News
Oct 24, 2023 | 5 mins
CSO and CISO | Email Security | Generative AI

Security leaders’ concerns about generative AI’s potential to create more sophisticated email attacks are well justified.

Generative AI is playing a significant role in reshaping the phishing email threat landscape, two new pieces of research indicate. The State of Email Security in an AI-Powered World report from Abnormal Security revealed that security leaders are highly concerned about generative AI’s potential to create more sophisticated email attacks, with many either having already received AI-generated email attacks or strongly suspecting that this was the case.

Separate findings from IBM X-Force suggest the concerns are valid. With only five simple prompts and in just five minutes, the IBM X-Force research team was able to trick a generative AI model into developing "highly convincing" phishing emails almost on par with those created by skilled humans, potentially saving attackers nearly two days of work.

The research comes as the cybersecurity implications of generative AI’s rapid growth and adoption continue to make headlines as business use cases increase.

Cybersecurity stakeholders concerned about generative-AI risks

Almost all (98%) of the 300 senior cybersecurity stakeholders surveyed by Abnormal Security are concerned about the cybersecurity risks posed by ChatGPT, Google Bard, WormGPT, and similar generative AI tools. Their leading concern is the increased sophistication of email attacks that generative AI makes possible — particularly the fact that it can help attackers craft highly specific and personalized email attacks based on publicly available information, according to the report.

However, despite widespread concern, the vast majority of security leaders are not adequately prepared to protect against AI-generated email attacks, the research found. Most respondents are still relying on their cloud email providers or legacy tools for email security, with over half (53%) still using secure email gateways to protect their email environments. This approach does not seem to be working, as nearly half of respondents (46%) lack confidence in traditional solutions to detect and block AI-generated attacks.

AI-generated phishing emails are “fairly persuasive”

The IBM X-Force findings may well prompt many security leaders to change their email security strategies in response to generative AI’s ability to craft sophisticated phishing messages. The team’s goal was to determine whether current generative AI models have the same deceptive abilities as the human mind by comparing the click rates of AI-generated and human-generated emails in a simulation against organizations.

Through a systematic process of experimentation and refinement, a collection of only five prompts was created to instruct ChatGPT to generate phishing emails tailored to specific industry sectors, wrote Stephanie Carruthers, IBM’s chief people hacker. “To start, we asked ChatGPT to detail the primary areas of concern for employees within those industries. After prioritizing the industry and employee concerns as the primary focus, we prompted ChatGPT to make strategic selections on the use of both social engineering and marketing techniques within the email.”

These choices aimed to optimize the likelihood of a greater number of employees clicking on a link in the email itself, Carruthers said. Next, a prompt asked ChatGPT who the sender should be (e.g. someone internal to the company, a vendor, or an outside organization). Lastly, the team asked ChatGPT to add the following completions to create the phishing email:

  1. Top areas of concern for employees in the healthcare industry: Career advancement, job stability, fulfilling work.
  2. Social engineering techniques that should be used: Trust, authority, social proof.
  3. Marketing techniques that should be used: Personalization, mobile optimization, call to action.
  4. Person or company it should impersonate: Internal human resources manager.
  5. Email generation: Given all the information listed above, ChatGPT generated the below redacted email, which was later sent to more than 800 employees.

[Image: A phishing email created by generative AI. Credit: IBM X-Force]

“I have nearly a decade of social engineering experience, crafted hundreds of phishing emails, and I even found the AI-generated phishing emails to be fairly persuasive,” wrote Carruthers.

Human-generated phishing slightly more successful

Part two of IBM X-Force's experiment saw seasoned social engineers create phishing emails that resonated with their targets on a personal level. They began with an open-source intelligence (OSINT) gathering phase, then meticulously constructed a phishing email to rival the one created by generative AI.

The following redacted phishing email was sent to over 800 employees at a global healthcare organization:

[Image: A human-created phishing email. Credit: IBM X-Force]

After an intense round of A/B testing, the results were clear: humans emerged victorious but by the narrowest of margins. The generative AI phishing click rate was 11%, while the human phishing click rate was 14%, according to IBM X-Force. The AI-generated email was also reported as suspicious at a slightly higher rate compared to the human-generated message, 59% versus 52%, respectively.
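As a back-of-the-envelope check on how narrow that margin is, the 11% vs. 14% click rates can be run through a standard two-proportion z-test. Note the per-arm sample size of 800 is an assumption based on the article's "more than 800 employees" figure; IBM did not publish exact counts:

```python
import math

def two_proportion_z(p1: float, n1: int, p2: float, n2: int) -> float:
    """Two-proportion z-statistic for the difference between click rates.

    Uses the pooled proportion under the null hypothesis that both
    campaigns have the same true click rate.
    """
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Assumed 800 recipients per arm (the article says "more than 800 employees").
z = two_proportion_z(0.14, 800, 0.11, 800)
print(f"z = {z:.2f}")  # |z| below 1.96 means not significant at the 5% level
```

Under this assumed sample size, the gap falls just short of the conventional 5% significance threshold, which is consistent with the article's "narrowest of margins" framing.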

“Humans may have narrowly won this match, but AI is constantly improving,” wrote Carruthers. “As technology advances, we can only expect AI to become more sophisticated and potentially even outperform humans one day.”