FAKE AI? How Safe Are AI Systems, Really?
You won’t believe what’s been uncovered about the safety of AI systems! It turns out that the evaluations and standards for AI are still very much a work in progress. At the Knight Foundation’s recent INFORMED conference, experts discussed the need for meaningful evaluations of AI system safety. Sounds like a lot still needs to be done to ensure these systems are trustworthy and responsible.
During a panel at the conference, two important figures in the AI world, Janet Haven of Data & Society and Dr. Alondra Nelson, formerly of the White House Office of Science and Technology Policy, emphasized that AI systems are not only about data and algorithms but also about interactions with humans and their environmental impacts. NIST even launched its AI Risk Management Framework last year to help developers consider trustworthiness when designing, developing, and evaluating AI systems.
But here’s the shocker: a significant share of AI governance tools include faulty fixes that could undermine the fairness and explainability of the very AI systems they are meant to check. Now that’s a red flag! Kate Kaye, Deputy Director of the World Privacy Forum, led a review of AI governance tools from around the world and found that nearly 40% of them have serious issues.
Tech Policy Press’s Justin Hendrix spoke with Kate about her worldwide nerd-out session analyzing these AI governance tools. That’s right, she pored over all sorts of technical documents from Africa, Asia, Europe, and South America, and found emerging problems even in high-profile resources like NIST’s AI Risk Management Framework and its companion Playbook.
So, what’s your take? How safe do you believe AI systems are? Are you concerned about the standards they are being held to? Let us know what you think!