HomeAI News"Artificial Intelligence's Ethical Dilemma: Is DeepMind Trustworthy?" | TechCrunch

“Artificial Intelligence’s Ethical Dilemma: Is DeepMind Trustworthy?” | TechCrunch

DeepMind, the AI research lab owned by Google, has released a paper proposing a framework for evaluating the societal and ethical risks of AI systems. The paper calls for involvement from AI developers, app developers, and the broader public in evaluating and auditing AI. It arrives just ahead of the AI Safety Summit, where the UK government will introduce a global advisory group on AI.

However, a recent study by Stanford researchers found that Google’s flagship text-analyzing AI model, PaLM 2, scores poorly on transparency. DeepMind has committed to providing the UK government with early access to its AI models, but some question whether the commitment is merely performative. The lab is also set to release its AI chatbot, Gemini, and will need to detail its weaknesses and limitations if it wants to be taken seriously on the AI ethics front.

In other AI news, a Microsoft study found flaws in OpenAI’s GPT-4, ChatGPT has added web searching and DALL-E 3 features, and challengers to GPT-4 are emerging. Additionally, an engineer is training an AI algorithm to play Pokémon, Google is releasing a language tutor feature, Amazon is testing a bipedal robot in its facilities, and Nvidia and Meta have made advancements in simulators for training AI agents. Finally, a Chinese startup is raising funds to compete with OpenAI, and the US has announced restrictions on Nvidia’s AI chip shipments to China.

IntelliPrompt curated this article. Read the full story at the original source, TechCrunch.
