A recent study by Abnormal Security, an email security platform, has revealed the growing use of generative AI to create authentic-looking, persuasive email attacks. The study analyzed the likelihood that novel email attacks intercepted by the platform were AI-generated and found that cybercriminals are using GenAI tools to craft email attacks that are becoming progressively more realistic and convincing. AI-generated email attacks have been a growing concern among security leaders since the emergence of ChatGPT.

AI-Based Attack Methods

According to the study, generative AI is now being used to create new attack methods, including credential phishing, more advanced versions of the traditional business email compromise (BEC) scheme, and vendor fraud. Employees have traditionally relied on spotting typos and grammatical errors to detect phishing attempts. Generative AI, however, can produce flawlessly written emails that closely resemble legitimate communication, making it increasingly difficult for employees to distinguish authentic messages from fraudulent ones.

BEC actors often rely on templates to write and launch their email attacks. With generative AI tools like ChatGPT, however, cybercriminals can produce a far greater variety of unique content, which makes their messages harder to detect by matching known attack indicators while also letting them scale the volume of their attacks (a toy illustration follows). Abnormal’s research further revealed that threat actors go beyond traditional BEC attacks and use ChatGPT-like tools to impersonate vendors. These vendor email compromise (VEC) attacks exploit the existing trust between vendors and customers, proving to be a highly effective social engineering technique.
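To see why static indicator matching struggles here, consider a toy example. The Python sketch below is not Abnormal's detector; the phrase list is invented for illustration. A templated BEC email trips the fixed rule, while a GenAI paraphrase of the same request slips past it:

```python
# Toy illustration of indicator-based detection: flag emails that
# contain known scam wording. The phrases here are invented examples.
KNOWN_INDICATORS = {
    "kindly wire the funds",
    "urgent payment request",
    "update your banking details",
}

def matches_known_indicator(email_body: str) -> bool:
    """Return True if the email contains any known scam phrase."""
    body = email_body.lower()
    return any(phrase in body for phrase in KNOWN_INDICATORS)

# A templated BEC email trips the rule...
print(matches_known_indicator("URGENT payment request - wire today"))   # True
# ...but an AI-paraphrased version of the same ask evades it entirely.
print(matches_known_indicator(
    "When you have a moment, could you process the attached invoice?")) # False
```

Because each AI-generated lure uses fresh wording, a fixed list of indicators never catches up, which is exactly the scaling advantage the study describes.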

Human Detection Challenge and Detection Techniques

The company has highlighted the substantial challenge these meticulously crafted emails pose for human detection. When an email contains no grammatical errors or typos, individuals are far more likely to fall victim to the attack. Abnormal found that AI-generated email attacks can mimic legitimate communications from both individuals and brands, making them all the more deceptive.

Abnormal’s platform uses open-source large language models (LLMs) to evaluate the probability of each word in an email given its context. When the wording is consistently highly probable, the message closely aligns with AI-generated language and can be classified accordingly. Two external AI detection tools, OpenAI Detector and GPTZero, are used to validate these findings, though the company acknowledges that this approach is not foolproof.
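A simplified illustration of this idea is to score an email's perplexity under an open-source language model: consistently high-probability tokens (low perplexity) are one signal, not proof, that the text was machine-generated. The sketch below uses GPT-2 from Hugging Face as a stand-in; Abnormal's actual models and thresholds are not public, so treat this as an assumption-laden outline rather than the company's method:

```python
# Minimal sketch of perplexity-based AI-text scoring, using GPT-2 as a
# stand-in for the open-source LLMs the article mentions.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Score how 'expected' each token is under the model.

    Lower perplexity means the text closely follows the model's own
    statistical patterns -- one signal that it may be AI-generated.
    """
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing input_ids as labels makes the model return the average
        # negative log-likelihood of each next token as `loss`.
        out = model(enc.input_ids, labels=enc.input_ids)
    return torch.exp(out.loss).item()

suspect = "Please review the attached invoice and remit payment today."
print(f"perplexity: {perplexity(suspect):.1f}")  # lower => more model-like
```

Note that highly formulaic human writing can also score low, which is precisely the false-positive risk discussed next.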

Dan Shiebler, Abnormal’s head of machine learning, argues that AI itself is the most effective way to identify AI-generated emails. The approach has limits, however: some human-written emails, such as template-based marketing or sales outreach messages, contain word sequences that resemble AI-generated text and can produce false positives. To address this, Shiebler advises organizations to adopt modern solutions that detect contemporary threats, including highly sophisticated AI-generated attacks that closely resemble legitimate email.

Just as important, those solutions must be able to differentiate between legitimate AI-generated emails and those with malicious intent. Shiebler also advises organizations to maintain good cybersecurity hygiene, including ongoing security awareness training so that employees remain vigilant against BEC risks. Additionally, implementing safeguards such as password management and multi-factor authentication (MFA) helps organizations limit the damage if an attack succeeds.
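As one concrete example of the MFA guidance, the sketch below uses the pyotp library to generate and verify time-based one-time passwords (TOTP). The secret handling and drift window are illustrative choices for this sketch, not a prescription from the study:

```python
# Minimal sketch of TOTP-based MFA verification with the pyotp library.
import pyotp

secret = pyotp.random_base32()       # provisioned once per user, stored server-side
totp = pyotp.TOTP(secret)

print("current code:", totp.now())   # what the user's authenticator app displays

def verify(user_code: str) -> bool:
    """Check a user-supplied 6-digit code against the shared secret."""
    # valid_window=1 tolerates one 30-second step of clock drift.
    return totp.verify(user_code, valid_window=1)
```

Even if a flawlessly written phishing email captures a password, a time-limited second factor like this sharply narrows the attacker's window to use it.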
