Tech companies agree on AI ‘kill switch’ to prevent Terminator-style risks

By i2wtc | May 21, 2024


You can’t put AI back into Pandora’s box. But the world’s largest AI companies are voluntarily working with governments to address the biggest concerns surrounding the technology and to allay fears that unchecked AI development could lead to sci-fi scenarios in which AI turns against its creators. Without strict legal provisions to back up governments’ AI efforts, however, the conversations will only go so far.

This morning, 16 influential AI companies, including Anthropic, Microsoft, and OpenAI, along with 10 countries and the European Union, met at a summit in Seoul to set out guidelines for responsible AI development. One of the big outcomes of the summit was that the AI companies in attendance agreed to a so-called kill switch: a policy under which they would halt development of their most advanced AI models if those models were deemed to have crossed certain risk thresholds. It is unclear how effective the policy will be in practice, however, given that the agreement carries no real legal weight and the specific risk thresholds have not been defined. AI companies that were not present, as well as competitors of the signatories, are not bound by the pledge.

“In extreme cases, organizations commit not to develop or deploy models or systems at all if mitigation measures cannot be applied to reduce risk below a threshold,” reads the policy document signed by AI companies including Amazon, Google, and Samsung. The summit is a follow-up to the Bletchley Park AI Safety Summit held last October, which attracted a similar group of AI developers but offered no viable short-term measures to protect humanity from the spread of AI, and was criticized as a result for being valuable but lacking teeth.

Following that summit, a group of participants wrote an open letter criticizing the forum’s lack of formal rulemaking and the outsized role of AI companies in shaping regulation of their own industry. “Experience has shown that the best way to address these harms is through enforceable regulatory mandates, rather than through self-regulation or voluntary action,” the letter said.

Writers and researchers have warned about the risks of powerful artificial intelligence for decades, first in science fiction and now in the real world. One of the best-known references is the “Terminator” scenario: the theory that, left unchecked, AI could become more powerful than its human creators and turn on them. The name comes from the 1984 Arnold Schwarzenegger film, in which a cyborg travels back in time to kill a woman whose unborn son will one day fight an AI system that plans to trigger a nuclear holocaust.

“AI offers an enormous opportunity to transform our economy and solve our greatest challenges, but I have always been clear that we will only be able to realise this full potential if we can grasp the risks posed by this rapidly evolving and complex technology,” said UK Technology Secretary Michelle Donelan.

AI companies themselves recognize that their cutting-edge products are venturing into technologically and morally uncharted territory. Sam Altman, CEO of OpenAI, defined artificial general intelligence (AGI) as AI that exceeds human intelligence and said it is “coming soon” but carries risks.

“AGI will also come with significant risks, including abuse, serious accidents, and social disruption,” OpenAI’s blog post says. “The benefits of AGI are so great that we do not believe it is possible or desirable for society to permanently halt its development. Instead, society and AGI developers need to figure out how to realize it properly.”

But so far, efforts to cobble together a global regulatory framework for AI have been scattered and largely lack legislative force. A UN policy framework calling on countries to guard against AI risks to human rights, monitor the use of personal data, and mitigate broader AI harms was approved unanimously last month, but it is not binding. The Bletchley Declaration, the centerpiece of the global AI safety summit held in the UK last October, likewise contained no specific regulatory commitments.

Meanwhile, AI companies themselves are starting to build their own organizations to shape AI policy. Yesterday, Amazon and Meta joined the Frontier Model Forum, an industry nonprofit “dedicated to improving the safety of frontier AI models,” according to its website. They join founding members Anthropic, Google, Microsoft, and OpenAI. The group has yet to put forward any firm policy proposals.

Individual governments have had more success. Administration officials hailed President Biden’s executive order on AI safety last October as a government first, containing strict legal requirements that go beyond the vague promises of other, similarly intended documents. Biden invoked the Defense Production Act, for example, to require AI companies to share the results of their safety tests with the government. The EU and China have also enacted formal policies addressing topics such as copyright law and the collection of users’ personal data.

U.S. states are also taking action: Colorado Governor Jared Polis yesterday announced a new bill banning algorithmic discrimination in AI and requiring developers to share internal data with state regulators to ensure compliance.

This is not the last chance for global AI regulation. France plans to host another summit early next year, following the meetings in Seoul and at Bletchley Park. By then, participants say, they expect to have developed a formal definition of the risk thresholds that would trigger regulatory action, a major step forward in a process that has so far been relatively tentative.

Subscribe to the Eye on AI newsletter to stay up to date on how AI is shaping the future of business. Sign up for free.


