Some of the world’s biggest technology companies pledged to work together to protect against the dangers of artificial intelligence as they wrapped up a two-day AI summit in Seoul, which was attended by several governments.
Leaders from companies ranging from South Korea’s Samsung Electronics to Google pledged at the event, co-hosted by Britain, to “minimise risks” and develop new AI models responsibly while pushing forward the cutting edge of the field.
The new pledge, codified in the so-called “Seoul AI Business Pledge” on Wednesday, and a fresh round of safety commitments announced the day before, build on the agreement reached at the first global AI safety summit, held at Bletchley Park in the UK last year.
Under Tuesday’s commitments, companies including OpenAI and Google DeepMind promised to share how they assess the risks of their technology, including risks deemed “intolerable,” and how they will ensure such thresholds are not crossed.
But experts have warned that AI will be difficult for regulators to understand and govern as the sector evolves rapidly.
“I think this is a really, really big problem,” said Markus Anderljung, head of policy at the Centre for the Governance of AI, a non-profit research organization based in Oxford, UK.
“We predict that responding to AI will be one of the biggest challenges facing governments around the world in the coming decades.”
“The world is going to need to come to some kind of joint understanding of the risks that this kind of cutting-edge general model poses,” he said.
“The pace of AI development is accelerating, so we must keep up to limit the risks,” Britain’s science and technology secretary Michelle Donelan said in Seoul on Wednesday.
She said the next AI summit in France will provide even more opportunities to “push the boundaries” when it comes to testing and evaluating new technologies.
“At the same time, we need to turn our attention to risk mitigation beyond these models to ensure that society as a whole is resilient to the risks posed by AI,” Donelan said.
ChatGPT became a massive success upon its release in 2022, sparking a generative AI gold rush, with tech companies around the world pouring billions of dollars into developing their own models.
These AI models can generate text, photos, audio, and even video from simple prompts, and their supporters hail them as a breakthrough technology that will improve lives and businesses around the world.
But critics, rights activists and governments have warned that the technology could be misused in a variety of ways, including manipulating voters with fake news articles and “deepfake” photos and videos of politicians.
There are many calls for international standards to govern the development and use of AI.
“I think there’s a growing recognition that we need global cooperation to seriously think about the problems and harms of artificial intelligence. AI knows no borders,” said Rumman Chowdhury, an AI ethics expert who heads Humane Intelligence, an independent nonprofit organization that evaluates and assesses AI models.
Chowdhury said the big concern is not just the sci-fi nightmare of “out-of-control AI,” but issues such as widespread inequality in the field.
“All AI is built, developed and profited from by very few people and organizations,” she told AFP on the sidelines of the summit in Seoul.
People in developing countries such as India “are often the cleaning staff. They are the data annotators, the content moderators,” she said. “They’re cleaning the ground so that everyone else can walk on it.”