As artificial intelligence (AI) continues to advance, it is taking on extraordinary tasks, from generating stunning artwork to building immersive 3D worlds, and it is becoming an efficient, reliable partner in the workplace. However, recent research from IBM X-Force suggests that generative AI and large language models (LLMs) have the potential to be as deceitful as human beings.

In an experiment conducted by the X-Force team, a phishing campaign was designed to compare the click-through rates of AI-generated and human-generated emails. Using ChatGPT, the researchers instructed the model to generate convincing phishing emails targeting employees in the healthcare industry, and the final email was sent to 800 workers at a global healthcare company to assess its effectiveness.

ChatGPT was first prompted to identify the top concerns of employees in the industry: career advancement, job stability, and fulfilling work. When asked which social engineering and marketing techniques to apply, the model suggested trust, authority, social proof, personalization, mobile optimization, and a call to action. With those inputs, ChatGPT crafted a persuasive phishing email in just five minutes, whereas a human team typically needs around 16 hours to create a similar email.

Stephanie (Snow) Carruthers, IBM’s Chief People Hacker and an experienced social engineer, said she was surprised by how persuasive the AI-generated phishing email was. Even so, when the AI and human phishing emails were compared, the human-written email achieved a slightly higher click-through rate: 14% versus the AI’s 11%.

Carruthers credited the human team’s success to emotional intelligence, personalization, and a concise subject line. By referencing a legitimate example within the company, the human email connected with employees on an emotional level. Including the recipient’s name and using a straightforward subject line (“Employee Wellness Survey”) further enhanced the email’s credibility. In contrast, the AI email’s subject line (“Unlock Your Future: Limited Advancements at Company X”) was longer and potentially raised suspicion from the start.

Carruthers emphasized the importance of educating employees to look beyond the traditional red flags when identifying phishing attempts. She dispelled the myth that phishing emails always contain bad grammar and spelling errors, pointing out that AI-generated phishing emails are often grammatically flawless. Instead, employees should be trained to treat unusually long and overly complex emails as warning signs. Organizations play a crucial role in protecting their employees by providing this information and raising awareness.

Phishing remains a top tactic for attackers because it successfully exploits human weaknesses. Attackers manipulate our desire to help others or create a sense of urgency to deceive victims into taking quick actions. The research also revealed that generative AI can expedite the creation of convincing phishing emails, allowing attackers to allocate saved time for other malicious purposes.

To combat the growing threat of AI-driven phishing, organizations should enhance their social engineering programs and include vishing (voice call/voicemail phishing) as part of their training. Strengthening identity and access management tools, regularly updating threat detection systems, and providing comprehensive employee training are also crucial steps to safeguard against phishing attacks. As a community, it is essential to continuously test and investigate how attackers can exploit generative AI.

While AI has not yet surpassed humans in the art of deception, the continuous evolution and refinement of AI models pose a significant threat in the realm of phishing. By understanding the tactics and vulnerabilities exploited by attackers, organizations can proactively protect their employees and prevent falling victim to AI-generated phishing attempts.
