AI startup Anthropic is changing its policies to allow minors to use its generative AI systems, at least in certain situations.
Announced in a post on the company’s official blog on Friday, the change means Anthropic will let tweens and teens use third-party apps (but not necessarily its own apps) powered by its AI models, as long as the developers of those apps implement certain safety features and disclose to users which Anthropic technologies they are leveraging.
In a support article, Anthropic lists the safety measures that developers building AI-powered apps for minors should include, such as age verification systems, content moderation and filtering, and educational resources on “safe and responsible” AI use for minors. The company also says it may make available “technical measures” intended to tailor AI product experiences for minors, such as a “child safety system prompt” that developers targeting minors would be required to implement.
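In practice, a system prompt is simply an instruction string passed alongside the user's messages when calling Anthropic's Messages API. The sketch below shows how a developer might wire such a prompt into a tutoring app; note that Anthropic has not published the actual child safety prompt text, so the prompt content, the `tutor_reply` helper, and the assumption that age verification has already happened elsewhere in the app are all illustrative, not Anthropic's specification.

```python
import os
import anthropic

# Hypothetical placeholder: Anthropic has not published its actual
# "child safety system prompt"; this text is illustrative only.
CHILD_SAFETY_SYSTEM_PROMPT = (
    "You are a tutoring assistant for users under 18. Keep answers "
    "age-appropriate, decline unsafe or mature topics, and suggest "
    "talking to a trusted adult when a question raises safety concerns."
)

client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

def tutor_reply(question: str, user_is_minor: bool) -> str:
    """Send a question to Claude, prepending the child-safety system
    prompt when the app's own (separately implemented) age-verification
    flow has established that the user is a minor."""
    system = (
        CHILD_SAFETY_SYSTEM_PROMPT
        if user_is_minor
        else "You are a helpful tutoring assistant."
    )
    message = client.messages.create(
        model="claude-3-haiku-20240307",
        max_tokens=512,
        system=system,
        messages=[{"role": "user", "content": question}],
    )
    # The Messages API returns a list of content blocks; take the text.
    return message.content[0].text

# Example: age verification elsewhere in the app flagged this user as a minor.
print(tutor_reply("Can you help me study for my algebra test?", user_is_minor=True))
```

Keeping the safety instructions in the `system` parameter, rather than mixed into user messages, means the app can enforce them server-side regardless of what the young user types.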
Developers using Anthropic’s AI models must also comply with “applicable” child safety and data privacy regulations, such as the Children’s Online Privacy Protection Act (COPPA), the U.S. federal law that protects the online privacy of children under 13. Anthropic says it plans to “regularly” audit apps for compliance, suspend or terminate the accounts of developers who repeatedly violate the compliance requirements, and oblige developers to “clearly state” on their public-facing sites and documentation that they are in compliance.
“There are specific use cases where AI tools can provide significant benefits for young users, such as test preparation and tutoring support,” Anthropic wrote in the post. “With this in mind, our latest policy allows organizations to incorporate our APIs into products intended for minors.”
Anthropic’s policy change comes as children and teens increasingly turn to generative AI tools for help not only with schoolwork but with personal problems, and as rival generative AI vendors, including Google and OpenAI, explore more use cases aimed at kids. This year, OpenAI announced a new team to research child safety and a partnership with Common Sense Media to co-create child-friendly AI guidelines. Google, meanwhile, made its chatbot Bard, since rebranded as Gemini, available in English to teens in some regions.
A Center for Democracy and Technology poll found that 29% of children report having used generative AI like OpenAI’s ChatGPT to deal with anxiety or mental health issues, 22% for problems with friends, and 16% for conflicts with family members.
Last summer, schools and universities rushed to ban generative AI apps, ChatGPT in particular, over fears of plagiarism and misinformation. Some have since lifted their bans. But not everyone is convinced of generative AI’s potential for good: surveys such as the UK Safer Internet Centre’s found that more than half of children (53%) report having seen peers use generative AI in negative ways, for example creating believable false information or images used to upset someone (including pornographic deepfakes).
Calls for guidelines governing children’s use of generative AI are growing.
Late last year, the United Nations Educational, Scientific and Cultural Organization (UNESCO) called on governments to regulate the use of generative AI in education, including age limits for users and the introduction of guardrails around data protection and user privacy. “Generative AI has the potential to offer great opportunities for human development, but it also has the potential to cause harm and prejudice,” UNESCO Director-General Audrey Azoulay said in a press release. “It cannot be integrated into education without public involvement and the necessary safeguards and regulations from governments.”