The OpenAI logo is displayed on a mobile phone alongside an image on a computer screen generated by ChatGPT's DALL-E text-to-image model, December 8, 2023.
We live in interesting times: some (but not all) of the big tech companies are moving fast and breaking their own things.
Google is burning through its remaining reservoir of trust in Search with its new AI Overviews, which have provided incorrect answers to searches, including claims that Barack Obama was the first Muslim president of the United States and that it's safe to stare at the sun for five to 10 minutes a day (though the company has scaled back such answers following public outcry).
Microsoft has used up some of its remaining reservoir of trust in its cybersecurity with Recall, a feature that takes a screenshot of your computer every few seconds and stores that information in a database for future searches. (After a flurry of articles denouncing the feature as a "security disaster," Microsoft first announced that Recall would no longer be enabled by default in Windows, and then removed it entirely from the launch of the company's Copilot Plus PCs.)
Despite a research publication in which 67% of the remote workers interviewed said they "trust their colleagues more when they have their video on" in Zoom calls, Zoom's CEO now wants to fill video conferencing with AI deepfakes: he described to a journalist who interviewed him a "digital twin" that could join Zoom meetings in your place and even make decisions on your behalf.
Meanwhile, Amazon has hosted AI-generated knockoffs of a number of books, including a counterfeit version of "Artificial Intelligence: A Guide for Thinking Humans."
Meta didn’t have much of a credibility with me, Inserting AI-generated comments into conversations between members of Facebook groups (It occasionally features bizarre claims of AI parentage.) And X is trying hard not to be Twitter, but it’s already overrun with bots. Announced updated policy This means that the company will allow “consensually produced and distributed adult pornographic content,” including “AI-generated adult nudity and sexual content” (but not content that is “exploitative… or promotes objectification,” which of course AI-generated content does not do).
OpenAI, which kicked off the generative AI era with the initial release of ChatGPT, later launched the GPT Store, a platform where users can distribute what the company calls "custom versions of ChatGPT": software built on ChatGPT with specific features added. In its January announcement of the store, the company said that users had already created more than 3 million such versions. The reliability of these tools will also shape the trust users place in OpenAI.
Is generative AI a "personal productivity tool," as some tech executives claim, or is it primarily a tool for destroying trust in tech companies?
But in their rush to deploy their products, these companies are not only disrupting themselves. By over-hyping their generative AI products and encouraging adoption by people who don't understand the tools' limitations, they are undermining trust in access to accurate information, in privacy and security, in interactions with other humans, and in the wide range of organizations (including government agencies and nonprofits) that are adopting and deploying flawed generative AI tools.
Generative AI also has a huge negative impact on the environment. According to an article published by the World Economic Forum, "the computing power required to keep up with AI advances is doubling roughly every 100 days," and 80% of the environmental impact occurs during the "inference," or usage, phase rather than during the initial training of the algorithms. That "inference" pool includes all the AI-generated summaries in search, the AI-generated comments in social media groups, the AI-generated fake books on Amazon, and the AI-generated "adult" content on X. This pool, unlike the trust reservoir, is growing every day.
Many companies developing AI tools are doing so under internal ESG (environmental, social, and governance) and RAI (responsible AI) initiatives. But despite these efforts, generative AI is rapidly consuming energy, water, and scarce resources (including trust).
Irina Raicu is the director of the Internet Ethics Program at the Markkula Center for Applied Ethics at Santa Clara University.