**New Tool AntiFake Provides a Defense Against Deepfakes**
These days, not even celebrities are safe from invasive artificial intelligence. Deepfakes, incredibly lifelike simulations of a person's face and voice, are on the rise and can be practically impossible to distinguish from the real thing. But researchers at Washington University in St. Louis have developed a new tool, AntiFake, to help people defend against them.
AntiFake, which is still in the early stages of research, scrambles audio signals, making it difficult for AI models to synthesize a convincing copy of a person's voice. The technology could eventually be made available to the public as an app or a web tool, offering a proactive way to protect a person's speech before it is ever cloned.
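The article does not detail AntiFake's internals, but the general idea behind this family of defenses, adding a small adversarial perturbation so a cloning model's notion of the speaker no longer matches the original voice, can be sketched in a few lines of Python. The snippet below is a conceptual toy only: the linear "encoder", the perturbation budget `eps`, and the gradient loop are illustrative stand-ins, not AntiFake's actual models or code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder one-second "recording" and a toy linear speaker encoder.
# Real voice-cloning systems use deep neural encoders; this stand-in only
# exists so the optimization loop below has something to run against.
SAMPLE_RATE = 16_000
x = rng.standard_normal(SAMPLE_RATE)
W = rng.standard_normal((128, SAMPLE_RATE)) / np.sqrt(SAMPLE_RATE)

def embed(signal):
    """Toy speaker embedding: a fixed linear projection of the waveform."""
    return W @ signal

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def protect(x, eps=0.05, alpha=0.005, steps=40):
    """Projected gradient descent: find a bounded perturbation delta that
    pushes the embedding of x + delta away from the embedding of x.
    eps is the perturbation budget (a loose toy value here; a real defense
    would keep it perceptually negligible)."""
    e0 = embed(x)
    delta = rng.uniform(-eps, eps, size=x.shape) * 0.01  # small random start
    for _ in range(steps):
        e = embed(x + delta)
        ne, n0 = np.linalg.norm(e), np.linalg.norm(e0)
        # Gradient of cosine(e, e0) with respect to e, then chain rule through W.
        grad_e = e0 / (ne * n0) - (e @ e0) * e / (ne**3 * n0)
        grad_delta = W.T @ grad_e
        delta = np.clip(delta - alpha * np.sign(grad_delta), -eps, eps)
    return x + delta

protected = protect(x)
print("similarity, original vs original :", cosine(embed(x), embed(x)))          # 1.0
print("similarity, original vs protected:", cosine(embed(protected), embed(x)))  # below 1.0: the toy encoder no longer matches the speaker
```

Against a deep, nonlinear speech encoder, carefully crafted perturbations of this kind can be far more disruptive than this linear toy suggests, which is what makes the approach attractive as a preventive defense.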
While this new tool sounds promising, other solutions already exist, including deepfake detection technologies that embed digital watermarks in video and audio. These can help identify AI-generated content, although they often only work once the content has already been published, making them reactive where AntiFake aims to be preventive.
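For contrast, here is a minimal sketch of the watermarking idea mentioned above: a secret-keyed, low-amplitude pattern is added to the audio when it is created, and a simple correlation check later reveals whether a clip carries the mark. The function names, key, and strength value are illustrative assumptions, not a description of any deployed watermarking product.

```python
import numpy as np

def embed_watermark(audio, key, strength=0.005):
    """Add a low-amplitude pseudo-random pattern derived from a secret key."""
    pattern = np.random.default_rng(key).choice([-1.0, 1.0], size=audio.size)
    return audio + strength * pattern

def watermark_score(audio, key):
    """Correlate the audio with the keyed pattern; marked clips score near
    the embedding strength, unmarked clips score near zero."""
    pattern = np.random.default_rng(key).choice([-1.0, 1.0], size=audio.size)
    return float(audio @ pattern) / audio.size

rng = np.random.default_rng(1)
clean = 0.1 * rng.standard_normal(16_000)   # placeholder one-second clip
marked = embed_watermark(clean, key=42)

print("score, unmarked clip:", watermark_score(clean, key=42))   # near zero
print("score, marked clip  :", watermark_score(marked, key=42))  # near the strength value, 0.005
```

The catch, as noted above, is that a watermark only helps identify content after the fact, and only if it was embedded in the first place.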
The increasing prevalence of deepfakes is also drawing legislative attention, with a bipartisan bill on the horizon. If passed, the “NO FAKES Act of 2023” would hold the creators of unauthorized deepfakes liable, aiming to protect those whose likenesses have been replicated without permission.
In the world of generative AI, consent remains key: people's voices and faces must be protected from misuse. At the same time, striking a balance is equally important, especially when generative AI is used to create new synthetic voices for people who have lost the ability to speak.
The question remains – are these solutions and regulations enough to keep AI deepfakes at bay? It’s time to take a stand to protect the voices and images of celebrities and individuals alike. What do you think? Will AntiFake and other tools effectively combat the rise of AI deepfakes? Comment below to share your thoughts!