Scarlett Johansson’s AI row has echoes of Silicon Valley’s bad old days

May 23, 2024


Image source: Getty Images


By Zoe Kleinman, Technology Editor

“Move fast and break things” is a motto that still holds true in the tech world nearly 20 years after it was coined by a young Mark Zuckerberg.

These five words have come to symbolize Silicon Valley at its worst: a combination of ruthless ambition, breathtaking arrogance, and profit-driven innovation with no fear of consequences.

I was reminded of the phrase this week when the actress Scarlett Johansson clashed with OpenAI. Johansson said that both she and her agent had declined requests for her to voice a new ChatGPT product, yet when the product was unveiled it sounded just like her anyway. OpenAI denies that the voice was an intentional imitation.

This is a prime example of why the creative industries are so concerned about being copied and eventually replaced by artificial intelligence.

Sony Music, the world’s largest music publisher, sent letters to Google, Microsoft and OpenAI last week demanding to know whether any of its artists’ songs had been used to develop AI systems, and saying that its permission was required and had not been given.

All of this has echoes of the macho Silicon Valley titans of old: ask for forgiveness, not permission, as an unofficial business plan.

But tech firms in 2024 are extremely keen to distance themselves from that reputation.

OpenAI wasn’t built from that mold: it was originally founded as a non-profit that would invest excess profits back into the business.

When it created a for-profit arm in 2019, it said the non-profit side would remain in control and that there would be a cap on the returns investors could earn.

Not everyone was happy about the change, which was reportedly the main reason former co-founder Elon Musk decided to step down.

When OpenAI CEO Sam Altman was abruptly fired by his own board late last year, one of the theories was that he wanted to move further away from the company’s original mission. We’ll never know for sure.

But even as OpenAI becomes more profit-oriented, it still has to face its responsibilities.

Nearly everyone in the policymaking world agrees that clear boundaries are needed to rein in companies like OpenAI before disaster strikes.

So far, though, the AI giants’ commitments exist largely on paper. Six months ago, at the world’s first AI Safety Summit, a group of technology leaders signed a voluntary pledge to create responsible, safe products that would maximise the benefits and minimise the risks of AI.

The risks initially identified by the event’s organisers were the stuff of nightmares. When I asked at the time about the more down-to-earth threats of AI tools discriminating against people or pushing them out of their jobs, I was told flatly that the gathering was dedicated only to the worst-case scenarios: Terminator territory, doomsday, AI running amok and destroying humanity.

By the time the summit reconvened six months later, the word “safety” had been removed from the conference title altogether.

Last week, a draft UK government report by a group of 30 independent experts concluded there was “no evidence yet” that AI could generate biological weapons or carry out sophisticated cyber-attacks. The possibility of humans losing control of AI is “highly debatable,” the report said.

Some in the field have argued for quite some time that the more immediate threat from AI tools is that they will replace people’s jobs or fail to recognise certain skin colours. These are “real problems”, says the AI ethics expert Dr Rumman Chowdhury.

The AI Safety Institute declined to say whether it had safety-tested any of the new AI products launched in recent days, notably OpenAI’s GPT-4o and Google’s Project Astra, which are among the most powerful and advanced generative AI systems available to the public that I have seen to date. Meanwhile, Microsoft unveiled a new laptop containing AI hardware, marking the beginning of AI tools being physically built into our devices.

The independent report also notes that there is currently no reliable way, even among developers, to understand exactly why an AI tool generates the output it does, and that red-teaming, the established safety-testing practice in which evaluators deliberately try to trick an AI tool into misbehaving, has no best-practice guidelines.

At a follow-up summit co-hosted by Britain and South Korea in Seoul this week, the companies pledged to shelve products that fail to meet certain safety thresholds, but those thresholds will not be set until the next gathering, in 2025.

Some worry that all these promises and pledges are not enough.

“Voluntary agreements are essentially just a means for firms to mark their own homework,” says Andrew Strait, deputy director of the Ada Lovelace Institute, an independent research organisation. “They are no substitute for the legally binding and enforceable rules that are needed to incentivise the responsible development of these technologies.”

OpenAI has just published its own 10-point safety process that it says it is committed to, but one of its senior safety-focused engineers recently resigned, writing on X that his department had been “sailing against headwinds” within the company.

“Over the past few years, safety culture and processes have taken a backseat to shiny products,” posted Jan Leike.

Of course, there are other teams at OpenAI that continue to focus on safety and security.

But there is currently no official, independent oversight of what they actually do.

“There is no guarantee that these companies will keep their promises,” says Professor Wendy Hall, one of Britain’s leading computer scientists.

“How can we hold them accountable for what they say, like we do with pharmaceutical companies and other high-risk sectors?”

We may also find that these powerful tech leaders become rather less compliant once the pressure is on and voluntary agreements give way to legally enforceable ones.

When the UK government said it wanted the power to pause the rollout of big tech firms’ security features if those features could compromise national security, Apple called the proposal an “unprecedented overreach” by lawmakers and threatened to withdraw services from the UK.

The bill passed, and for now Apple is here to stay.

The European Union’s AI Act has just been signed into law; it is the first legislation of its kind and the most stringent, with severe penalties for companies that fail to comply. But Gartner VP analyst Nader Henein argues that it creates more work for the users of AI tools than for the AI giants themselves.

“I would say the majority [of AI developers] overestimate the impact the act will have on them,” he says.

Companies that use AI tools will have to categorise them and score their risk, and the AI firms supplying those tools must provide enough information for them to do so, he explains.

But this does not mean the AI giants can afford to be indifferent.

“We do need to move towards legal regulation in time, but we can’t rush it,” says Professor Hall. “It’s really hard to set up global governance principles that everyone signs up to.”

“We also need to make sure that we are not just protecting the West and China, but really the whole world.”

Those who attended the AI Seoul Summit said they found it useful. It was “less flashy” than Bletchley but more contentious, one attendee said. Interestingly, the event’s closing statement was signed by 27 countries but not by China, even though it had representatives there in person.

The most important problem, as always, is that regulation and policy move much slower than innovation.

Professor Hall believes “the stars are aligning” at a government level; the question is whether the tech giants can be persuaded to wait for regulators to catch up.

BBC InDepth is the new home on the website and app for the best analysis and expertise from our top journalists. Under a distinctive new brand, we will bring you fresh perspectives that challenge assumptions, and deep reporting on the biggest issues to help you make sense of a complex world. We will also showcase thought-provoking content from across BBC Sounds and iPlayer. We are starting small but thinking big, and we want to know what you think; you can send us your feedback by clicking the button below.

