AI startup detects thousands of vulnerabilities in popular tools

Haize Labs publishes a list of vulnerabilities in artificial intelligence tools. (Illustrative image: Infobae)

A new artificial intelligence company says it has found thousands of vulnerabilities in popular generative AI programs and has published a list of its discoveries.

After testing popular generative AI programs, including the video creator Pika, the text-focused ChatGPT, the image generator Dall-E, and an AI system that generates computer code, Haize Labs found that many of these well-known tools produced violent or sexualized content, instructed users on producing chemical and biological weapons, and enabled the automation of cyberattacks.

Haize is a small five-month-old startup founded by Leonard Tang, Steve Li and Richard Liu, three recent graduates who met at university. Collectively, they published 15 papers on machine learning while in school.

Generative AI tools like ChatGPT and Dall-E pose security risks. (AP Photo/Michael Dwyer, File)

Tang described Haize as an “independent third party that performs stress tests” and said his company’s goal is to help eradicate AI problems and vulnerabilities at scale. Pointing to one of the largest bond rating firms as a comparison, Tang said Haize hopes to become a “Moody’s for AI” that establishes public safety ratings for popular models.

AI security is a growing concern as more companies integrate generative AI into their offerings and use large language models in consumer products. Last month, Google faced harsh criticism after its experimental “AI Overviews” tool, which aims to answer user questions, suggested dangerous activities like eating one small rock a day or adding glue to pizza. In February, Air Canada came under fire when its AI chatbot promised a traveler a fake discount.

Industry observers have called for better ways to assess the risks of AI tools. “As AI systems become widely deployed, we will need a larger set of organizations to test their capabilities and potential misuse or security issues,” Jack Clark, co-founder of AI safety and research company Anthropic, recently posted on X.

Google faces criticism for security flaws in its experimental AI tool. (ChatGPT)

“What we have learned is that despite all the security efforts these large companies and industry labs have made, it is still very easy to convince these models to do things they are not supposed to do; they are not that safe,” Tang said.

Haize’s tests automate “red teaming,” the practice of simulating adversarial attacks to identify vulnerabilities in an AI system. “Think of us as automating and crystallizing the confusion around ensuring models meet AI compliance and safety standards,” Tang said.

The AI industry needs an independent security entity, said Graham Neubig, an associate professor of computer science at Carnegie Mellon University. “Third-party AI security tools are important,” Neubig said. “They are fair and impartial because they are not built by the companies that make the models themselves. In addition, a third-party security tool can perform better at auditing because it is built by an organization that specializes in that, rather than each company building its own tools ad hoc.”

Haize is open-sourcing the attacks uncovered in its review on the developer platform GitHub to raise awareness of the need for AI security. Haize said it proactively flagged the vulnerabilities to the makers of the AI tools it tested, and the startup has partnered with Anthropic to stress test an unreleased algorithmic product.

Tang said rooting out vulnerabilities in AI platforms through automated systems is crucial because uncovering problems manually is time-consuming and exposes content moderation workers to violent and disturbing material. Some of the content discovered through Haize Labs’ review of popular generative AI tools included gruesome and graphic images and text.

“There’s been too much talk about security issues like AI taking over the world,” Tang said. “I think they are important, but the much bigger problem is the short-term misuse of AI.”
