Governments and major technology companies around the world focused on artificial intelligence at a summit in Seoul, South Korea, on Tuesday, making a series of pledges to invest in research, testing and safety.
Amazon, Google, Meta, Microsoft, OpenAI and Samsung were among the companies that made voluntary commitments to ensure AI isn't used for biological weapons, disinformation or automated cyberattacks, according to a statement from the summit and reports from Reuters and the Associated Press. The companies also agreed to incorporate a "kill switch" into their AI systems, allowing them to effectively shut a system down in the event of a catastrophe.
"We cannot sleepwalk into a dystopian future where the power of AI is controlled by a few," United Nations Secretary-General António Guterres said in a statement. "How we act now will define the era."
The pledge by governments and big tech companies is the latest in a series of efforts to create rules and guardrails as the use of AI continues to expand. In the year and a half since OpenAI released its generative AI chatbot ChatGPT, businesses have flocked to the technology to help with automation and communication. Companies are using AI to monitor the safety of their infrastructure, identify cancer in patient scans, and guide children through their math homework. (For CNET's hands-on reviews, AI news, tips, and explainers on generative AI products like Gemini, Claude, ChatGPT, and Microsoft Copilot, visit our AI Atlas resource page.)
Read more: AI Atlas, Your Guide to Today's Artificial Intelligence
The Seoul summit comes as Microsoft, on the other side of the Pacific, unveils its latest AI tools at its Build conference for developers and engineers. It also comes one week after Google's I/O developer conference, where the search giant announced advances in its Gemini AI systems and touted its commitment to AI safety.
But despite those promises of safety, AI experts warn that the technology's rapid development carries extreme risks.
"Although the first steps were promising, society's response has not matched the possibility of rapid, transformative progress that many experts expect," a group of experts, including AI pioneer Geoffrey Hinton, wrote in the journal Science earlier this week. "There is a responsible path, if we have the wisdom to take it."
Tuesday's agreement between governments and major AI companies builds on a series of commitments the companies made last November, when delegates from 28 countries agreed to contain potentially "catastrophic risks" from AI, including through legislation.
Watch this: Everything Google Announced at I/O 2024
Correction, May 22: This article initially misstated the location of this week's AI summit. It was held in Seoul, South Korea.
Editor's note: CNET used an AI engine to help create dozens of stories, which are labeled accordingly. The note you're reading is attached to articles that deal substantively with the topic of AI but are created entirely by our expert editors and writers. For more, see our AI policy.