AI Safety Summit Shakes Up Global Policymaking!
In a week filled with AI-related news, the UK's AI Safety Summit took the spotlight. Held at Bletchley Park, the summit brought Western and Chinese officials together to discuss AI. Although several European leaders declined the invitation, European Commission President Ursula von der Leyen attended. EU universities were largely left out, with only one institution receiving an invitation, an exclusion that raised questions about whether it reflects doubts over the EU's AI capabilities or the UK's post-Brexit stance.
The summit's main outcome was the Bletchley Declaration, signed by 28 countries, including the US and China, along with the EU. It addresses both present-day risks, such as algorithmic discrimination, and longer-term concerns that powerful AI systems could cause serious harm or escape human control. The declaration aims to reframe the political debate on AI, stresses the importance of international cooperation, and positions the summit as a first step toward international AI governance.
Meanwhile, a joint warning from Chinese and Western AI experts highlighted the potential for leading-edge AI to help terrorists develop weapons of mass destruction, adding urgency to calls for global collaboration on AI safety.
Across the Atlantic, the White House issued a sweeping executive order on AI. The order covers a range of measures, including fairness and privacy in AI usage. Notably, it requires tech giants to share their safety test results with the US government, an initial step toward external oversight of AI models before they are deployed in real-world applications. The National Institute of Standards and Technology will develop guidelines for companies on testing AI systems for dangerous capabilities, including red-teaming exercises.
However, research by Oxford scholars shows that academic researchers often face barriers to accessing the AI models developed by major tech firms, hindering independent evaluation of what leading-edge systems can actually do. To address this, the US is launching the Artificial Intelligence Safety Institute, which will provide testing environments for researchers to assess AI risks.
The executive order also includes initiatives to support privacy research, establish a National AI Research Resource, and increase grants for AI research in various fields. Additionally, it emphasizes the need to expand opportunities for highly skilled immigrants and nonimmigrants with expertise in critical AI areas to study and work in the US.
As these significant developments in AI policymaking unfold, one question remains: how can international cooperation effectively address the risks associated with AI? Share your thoughts in the comments below!