New guidance from the U.S. Department of Education's Office of Educational Technology outlines a “shared responsibility” approach to help ed tech providers build trust with school district leaders as they integrate artificial intelligence into their products and platforms.
That means education technology providers need to proactively manage the risks posed by rapidly evolving technology, the department said.
In the guidance, issued July 8, the department outlined nine “categories of risk” for the use of AI in schools:
- Race to release.
- Bias and fairness.
- Data privacy and security.
- Harmful content.
- Ineffective systems.
- Malicious use.
- Managing misinformation.
- Transparency and explainability.
- Underprepared users.
Reflecting current distrust of AI, the department quoted Patrick Gittisriboongul, assistant superintendent of technology and innovation for California's Lynwood Unified School District.
“Would I buy a generative AI product? Yes! But nothing that I can deploy right now,” he said. “There are unresolved issues such as equity of access, data privacy, model bias, and security. There is a lack of evidence of safety, a clear research base, and efficacy.”
In a 2023 executive order, President Joe Biden called on education technology providers to share responsibility with schools for bringing AI into the classroom. According to the executive order, mitigating the risks of AI will require a “whole-of-society effort involving government, the private sector, academia, and civil society.”
This month’s guidance for education technology providers also follows a 2023 Education Department report that recommended a “humans in the loop” approach to using AI in schools.
Still, the department said this month that “it is neither realistic nor fair to require educators to review all their use of AI or AI-based output.” That, it added, is why it’s important for edtech providers to take shared responsibility for reviewing AI use and output.
The agency outlined five key areas for edtech providers to consider in building shared responsibility with schools.
- Design for education. Developers need to start by understanding the specific values and challenges of education and incorporate educator and student feedback into every aspect of product development.
- Provide evidence of rationale and impact. Districts need evidence that edtech tools actually deliver on their advertised solutions.
- Promote equity and protect civil rights. Educational technology providers need to be aware of representation and bias in datasets, algorithmic discrimination, and how to ensure accessibility for students with disabilities.
- Ensure safety and security. Education technology providers need to explain how they will protect the safety and security of those using their AI tools.
- Promote transparency and gain trust. Building trust with district leadership requires collaboration between edtech providers, educators, and other stakeholders.
The department’s guidance noted that states and school districts are also developing their own AI usage guidelines. As of June, 15 states had published resources for integrating AI in education. The department added that edtech providers should review relevant school and state AI guidance when considering working with school districts.