Researchers Propose a Better Way to Report Dangerous AI Flaws

In late 2023, a team of third-party researchers discovered a troubling glitch in OpenAI’s widely used artificial intelligence model GPT-3.5.

When asked to repeat certain words a thousand times, the model began repeating the word over and over, then suddenly switched to spitting out incoherent text and snippets of personal information drawn from its training data, including parts of names, phone numbers, and email addresses. The team that discovered the problem worked with OpenAI to ensure the flaw was fixed before revealing it publicly. It is just one of scores of problems found in major AI models in recent years.

In a proposal released today, more than 30 prominent AI researchers, including some who found the GPT-3.5 flaw, say that many other vulnerabilities affecting popular models are reported in problematic ways. They suggest a new scheme supported by AI companies that gives outsiders permission to probe their models and a way to disclose flaws publicly.

“Right now it’s a little bit of the Wild West,” says Shayne Longpre, a PhD candidate at MIT and the lead author of the proposal. Longpre says that some so-called jailbreakers share their methods of breaking AI safeguards on the social media platform X, leaving models and users at risk. Other jailbreaks are shared with only one company even though they might affect many. And some flaws, he says, are kept secret for fear of getting banned or facing prosecution for breaking terms of use. “It is clear that there are chilling effects and uncertainty,” he says.

The security and safety of AI models is hugely important given how widely the technology is now being used, and how it may seep into countless applications and services. Powerful models need to be stress-tested, or red-teamed, because they can harbor harmful biases, and because certain inputs can cause them to break free of guardrails and produce unpleasant or dangerous responses. These include encouraging vulnerable users to engage in harmful behavior or helping a bad actor develop cyber, chemical, or biological weapons. Some experts fear that models could assist cybercriminals or terrorists, and may even turn on humans as they advance.

The authors suggest three main measures to improve the third-party disclosure process: adopting standardized AI flaw reports to streamline the reporting process; having big AI firms provide infrastructure to third-party researchers disclosing flaws; and developing a system that allows flaws to be shared between different providers.

The approach is borrowed from the cybersecurity world, where there are legal protections and established norms for outside researchers to disclose bugs.

“AI researchers don’t always know how to disclose a flaw and can’t be certain that their good faith flaw disclosure won’t expose them to legal risk,” says Ilona Cohen, chief legal and policy officer at HackerOne, a company that organizes bug bounties, and a coauthor on the report.

Large AI companies currently conduct extensive safety testing on AI models prior to their release. Some also contract with outside firms to do further probing. “Are there enough people in those [companies] to address all of the issues with general-purpose AI systems, used by hundreds of millions of people in applications we’ve never dreamt?” Longpre asks. Some AI companies have started organizing AI bug bounties. However, Longpre says that independent researchers risk breaking the terms of use if they take it upon themselves to probe powerful AI models.

The researchers behind the initiative include academics from MIT, Stanford University, Princeton, and Carnegie Mellon University, large companies including Microsoft and Mozilla, and several independent AI research organizations.

Ruth Appel, a postdoctoral fellow at Stanford University who worked on the proposal, says that a formal process would allow faults in AI models to be flagged quickly and would hold companies publicly accountable. Without such a scheme, she says, “users will experience a worse product, or potentially a more dangerous product, because flaws may not be reported or may not even be discovered because of these chilling effects.”

The proposal comes at a time when the US government’s AI Safety Institute, created under the Biden administration to help vet the most powerful AI models for serious problems, faces an uncertain future due to cuts being implemented by Elon Musk’s Department of Government Efficiency.

Longpre and Appel helped organize a workshop at Princeton University on the subject of third-party AI flaw disclosure last October. The event was attended by researchers from companies including Google, OpenAI, Microsoft, and Cohere.

Longpre says his team has begun discussing the proposals with researchers at some big AI firms, including OpenAI, Google, and Anthropic. These companies did not immediately respond to a request for comment. Longpre was part of a group of researchers that previously called for companies to change their terms of service to allow third-party researchers to probe models, but this did not happen.

Nicholas Carlini, a former Google researcher and a member of the team that discovered the GPT-3.5 flaw in 2023, told the Princeton workshop that the flaw-reporting system needs to change. “It’s very difficult to find out the exact ways in which you should do these things,” Carlini said. “We need to do a bunch of work, I think, as a community to get more established norms going on here.”
