Comparison with other safety initiatives
The agreement builds on earlier landmark agreements in which the EU, US, China and other countries recognized the need to cooperate on AI safety. The so-called Bletchley Declaration, made at the AI Safety Summit in the UK in November, establishes a shared understanding of the opportunities and risks posed by frontier AI and of the need for governments to work together to address the most significant challenges associated with the technology.
One clear difference between the Frontier AI Safety Commitments and the Bletchley Declaration is that the new agreement operates at the level of individual organizations, whereas the Bletchley Declaration was made between governments, which suggests future decision-making around AI is more likely to be shaped by regulation.
The Frontier approach of allowing "organizations to determine their own risk thresholds" may also prove less effective than having those thresholds set at a higher level, as in the EU AI Act, another attempt to regulate AI safety, pointed out Maria Koskinen, AI policy manager at AI governance technology vendor Saidot.
"The EU AI Act regulates risk management for general-purpose AI models that pose systemic risks, [which] … is unique to these high-impact, general-purpose models," she said.
The Frontier AI Safety Commitments therefore leave it to organizations to define their own thresholds, whereas the EU AI Act "introduces a definition of 'systemic risk' and provides guidance on this," Koskinen pointed out.
“This gives more certainty not only to the organizations implementing these initiatives, but also to organizations deploying AI solutions and the individuals affected by these models,” she said.