Scarlett Johansson’s AI controversy is reminiscent of the bad old days of Silicon Valley
By Zoe Kleinman, Technology Editor
“Move fast and break things” is a motto that still holds true in the tech world nearly 20 years after it was coined by a young Mark Zuckerberg.
These five words have come to symbolize Silicon Valley at its worst: a combination of ruthless ambition, breathtaking arrogance, and profit-driven innovation with no fear of consequences.
I was reminded of this phrase this week when actress Scarlett Johansson clashed with OpenAI. Johansson said that she, through her agent, had declined the company's request to voice a new ChatGPT product, yet when the product was unveiled, the voice sounded just like her anyway. OpenAI denies it was an intentional imitation.
This is a prime example of why the creative industries are so concerned about being copied and eventually replaced by artificial intelligence.
Sony Music, the world's largest music publisher, wrote to Google, Microsoft and OpenAI last week, demanding to know whether its artists' songs had been used to develop their AI systems and stating that it had not given permission for them to be used in that way.
All of this echoes the macho Silicon Valley titans of old, with "ask forgiveness, not permission" as an unofficial business plan.
But the tech companies of 2024 are extremely keen to distance themselves from that reputation.
OpenAI was not built from that mold: it was originally founded as a non-profit that would invest any excess profits back into the organization.
When it created a for-profit arm in 2019, the company said the non-profit side would remain in charge, and that there would be a cap on the returns investors could earn.
Not everyone was happy about the change, which was reportedly a key reason behind co-founder Elon Musk's decision to walk away.
When OpenAI CEO Sam Altman was abruptly fired by his own board late last year, one theory was that he wanted to push the firm further from its original mission. We will never know for sure.
But even as OpenAI becomes more profit-oriented, it still has to face its responsibilities.
Nearly everyone in the policymaking world agrees that clear boundaries are needed to rein in companies like OpenAI before disaster strikes.
So far, the AI giants have largely played along on paper. Six months ago, at the world's first AI Safety Summit, a group of technology leaders signed a voluntary pledge to create responsible, safe products that would maximize the benefits and minimize the risks of AI.
The risks initially identified by the event's organizers were the stuff of nightmares. When I asked back then about the more down-to-earth threats of AI tools discriminating against people or displacing them in their jobs, I was told quite firmly that this gathering was dedicated to discussing only the absolute worst-case scenarios: Terminator, Doomsday, AI-goes-rogue-and-destroys-humanity territory.
When the summit resumed six months later, the word “safety” had been completely removed from the conference title.
Last week, a draft UK government report by a group of 30 independent experts concluded there was “no evidence yet” that AI could generate biological weapons or carry out sophisticated cyber-attacks. The possibility of humans losing control of AI is “highly debatable,” the report said.
Some in the field have long argued that the more immediate threats from AI tools are that they will displace jobs or fail to recognize a diverse range of skin tones. Dr Rumman Chowdhury, an expert in AI ethics, says these are "the real issues".
The AI Safety Institute declined to say whether it had safety-tested any of the new AI products launched in recent days, notably OpenAI's GPT-4o and Google's Project Astra, both among the most powerful and advanced generative AI systems available to the public that I have seen to date. Meanwhile, Microsoft unveiled a new laptop containing AI hardware, the start of AI tools being physically embedded in our devices.
The independent report also notes that there is currently no reliable way, even among developers, to understand exactly why an AI tool generates the output it does, and that the established safety-testing practice of red-teaming, in which evaluators deliberately try to get an AI tool to misbehave, has no agreed best-practice guidelines.
At a follow-up summit co-hosted by the UK and South Korea in Seoul this week, the companies pledged to shelve any product that fails to meet certain safety thresholds, but those thresholds will not be set until the next gathering, in 2025.
Some worry that all these promises and pledges are not enough.
"Voluntary agreements are essentially just a means for firms to mark their own homework," says Andrew Strait, deputy director of the Ada Lovelace Institute, an independent research organization. "They are no substitute for the legally binding and enforceable rules that are needed to incentivize the responsible development of these technologies."
OpenAI has just published its own 10-point safety process which it says it is committed to, but one of its senior safety-focused engineers recently resigned, posting on X that his department had been "sailing against the wind" inside the company.
“Over the past few years, safety culture and processes have taken a backseat to shiny products,” posted Jan Leike.
Of course, there are other teams at OpenAI that continue to focus on safety and security.
But there is currently no official, independent oversight of what they actually do.
“There is no guarantee that these companies will keep their promises,” says Professor Wendy Hall, one of Britain’s leading computer scientists.
“How can we hold them accountable for what they say, like we do with pharmaceutical companies and other high-risk sectors?”
We may also find that these powerful tech leaders become a lot less amenable once push comes to shove and the voluntary agreements become a little more enforceable.
When the UK government said it wanted the power to pause the rollout of security features by big tech firms if there was a chance they could compromise national security, Apple called the proposal an "unprecedented overreach" by lawmakers and threatened to remove its services from the UK.
The legislation passed anyway, and for now, Apple is still here.
The European Union's AI Act has just been signed into law; it is both the first and the strictest legislation of its kind, and there are severe penalties for companies that fail to comply. But Gartner VP analyst Nader Henein argues it creates more work for AI users than for the AI giants themselves.
"I would say the majority [of AI developers] overestimate the impact the Act will have on them," he says.
Any company using AI tools will have to categorize them and assign them a risk score, and the AI firms supplying those tools will have to provide enough information for them to do so, he explains.
But that does not mean the giants are off the hook.
"We need to move towards legal regulation in time, but we can't rush it," says Professor Hall. "It's really hard to set up global governance principles that everyone signs up to."
"We also need to make sure it genuinely protects the whole world, and not just the Western world and China."
Those who attended the AI Seoul Summit say they found it useful. It was "less flashy" than Bletchley but more contentious, one attendee said. Interestingly, the event's final statement was signed by 27 countries, but not by China, even though it had representatives there in person.
The most important problem, as always, is that regulation and policy move much slower than innovation.
Professor Hall believes "the stars are aligning" at a government level; the question is whether the tech giants can be persuaded to wait for them.
BBC InDepth is the new home on the website and app for the best analysis and expertise from our top journalists. Under a distinctive new brand, we'll bring you fresh perspectives that challenge assumptions, and deep reporting on the biggest issues to help you make sense of a complex world. We'll also be showcasing thought-provoking content from BBC Sounds and iPlayer. We're starting small but thinking big, and we want to know what you think. You can send us your feedback by clicking on the button below.