The long-promised AI revolution is finally here. OpenAI’s ChatGPT, a language model with 175 billion parameters, has set a new record for the fastest-growing user base. The wave of generative AI has extended to other platforms, creating a massive shift in the technology world. It is changing the landscape of the threat and risk industry, and we are starting to see some of these risks come to fruition.
AI: A Tool for Attackers
Attackers are using AI to improve phishing and fraud. Meta’s 65-billion parameter language model got leaked, which will undoubtedly lead to new and improved phishing attacks. We see new prompt injection attacks on a daily basis. Users are putting business-sensitive data into AI/ML-based services, leaving security teams scrambling to support and control the use of these services. For example, Samsung engineers put proprietary code into ChatGPT to get help debugging it, leaking sensitive data. A survey by Fishbowl showed that 68% of people who are using ChatGPT for work aren’t telling their bosses about it.
AI Misuse: A Growing Concern
Misuse of AI is increasingly on the minds of consumers, businesses, and even the government. The White House announced new investments in AI research and forthcoming public assessments and policies. The AI revolution is moving fast and has created four major classes of issues.
The Future of AI-based Attacks
Over the next decade, we will see a new generation of attacks on AI/ML systems. Attackers will influence the classifiers that systems use to bias models and control outputs. They’ll create malicious models that will be indistinguishable from the real models, which could cause real harm depending on how they’re used. Prompt injection attacks will become more common, too.
The Need for Responsible Use of AI
The costs of building and operating large-scale models will create monopolies and barriers to entry that will lead to externalities we may not be able to predict yet. In the end, this will impact citizens and consumers in a negative way. Misinformation will become rampant, while social engineering attacks at scale will affect consumers who will have no means to protect themselves.
The federal government’s announcement that governance is forthcoming serves as a good start, but there’s so much ground to make up to get in front of this AI race. The nonprofit Future of Life Institute published an open letter calling for a pause in AI innovation. It got plenty of press coverage, with the likes of Elon Musk joining the crowd of concerned parties, but hitting the pause button simply isn’t viable. Even Musk knows this — he has seemingly changed course and started his own AI company to compete.
The silver lining is that this also creates opportunities for innovative approaches to security that use AI. We will see improvements in threat hunting and behavioral analytics, but these innovations will take time and need investment. Any new technology creates a paradigm shift, and things always get worse before they get better. We’ve gotten a taste of the dystopian possibilities when AI is used by the wrong people, but we must act now so that security professionals can develop strategies and react as large-scale issues arise.
AI has revolutionized the technology world, but it has also created new challenges. Attackers are increasingly using AI to improve phishing and fraud, while users put sensitive information into AI/ML-based services. Misuse of AI has become a growing concern, with the potential to impact citizens and consumers in a negative way. The need for responsible use of AI is more important than ever before. While there are opportunities for innovative approaches to security that use AI, we must act now so that security professionals can develop strategies to react to large-scale issues as they arise.
Leave a Reply