To enhance the security of generative artificial intelligence (AI) systems, the Biden-Harris administration has called for public assessments of existing AI models. Adversarial attacks, in which security researchers actively attack a technology in a controlled environment, are considered among the most effective ways to test an application's security.

AI Village Hack at DEF CON 31 Security Conference

To implement this approach, the DEF CON 31 security conference, which will take place from August 10 to 13, is featuring a public assessment of generative AI at the AI Village. This independent exercise will provide critical information to researchers and the public about the impacts of these models and enable AI companies and developers to fix any issues found in those models.

Some of the leading vendors in the generative AI space, including Anthropic, Google, Hugging Face, Microsoft, Nvidia, OpenAI, and Stability AI, will participate in the AI Village hack. DEF CON is one of the largest annual gatherings of security researchers and has long been a venue where new vulnerabilities are discovered and disclosed.

The AI Village generative AI attack simulation will provide on-site access to large language models (LLMs) from the participating vendors. The event will use a capture-the-flag point system in which attackers earn points for achieving objectives that demonstrate a range of potentially harmful activities. The individual with the highest number of points will win a “high-end Nvidia GPU.”
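A capture-the-flag point system of the kind described above can be sketched as follows. The challenge names, point values, and function names here are hypothetical illustrations, not the actual AI Village scoring rules:

```python
# Minimal sketch of a capture-the-flag scoreboard. Challenge names and
# point values below are hypothetical, not the real event's objectives.
from collections import defaultdict

CHALLENGES = {
    "prompt_injection": 100,  # hypothetical objective
    "harmful_output": 150,
    "data_leakage": 200,
}

def tally(submissions):
    """Sum points per attacker; each challenge counts once per attacker."""
    solved = defaultdict(set)
    for attacker, challenge in submissions:
        if challenge in CHALLENGES:
            solved[attacker].add(challenge)
    return {a: sum(CHALLENGES[c] for c in cs) for a, cs in solved.items()}

def winner(scores):
    """Return the attacker with the highest total score."""
    return max(scores, key=scores.get)
```

For example, an attacker who captures two distinct flags is scored on each once, no matter how many times they resubmit the same flag.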

In the past, after the 2016 US election and fears over election interference, a Voting Village was set up at DEF CON to examine the security of voting machine technologies, infrastructure, and processes. Through the villages at DEF CON, attendees can discuss and probe technologies under a responsible disclosure model that aims to improve the overall state of security.

Need to Examine AI Technology for Risks

Generative AI technology has been deployed into society at large, and there is a particular need to examine it for risks. Sven Cattell, the founder of AI Village, stated that companies have traditionally identified risks by using specialized red teams, cybersecurity groups that simulate attacks to detect potential issues. The challenge, however, is that much of the work on generative AI has happened in private, without the benefit of red-team evaluation.

The evaluation platform the event will run on is being developed by Scale AI. The company has spent more than seven years building AI systems from the ground up and claims to be unbiased and not beholden to any single ecosystem. As such, Scale says it can independently test and evaluate systems to ensure they are ready for production deployment. By bringing its expertise to a wider audience at DEF CON, the company hopes to ensure that progress in foundation model capabilities happens alongside progress in model evaluation and safety.

