Powerful Artificial Intelligence (AI) systems are causing concern among experts, including two esteemed figures known as the “godfathers” of AI. As preparations are made for an AI safety summit, the experts are calling for AI companies to be held liable for the harms caused by their products. They argue that pursuing more advanced AI systems without fully understanding how to make them safe is reckless. They also highlight the need for regulations and safety measures, such as allocating resources to the ethical use of AI, granting auditors access to AI labs, and establishing a licensing system for building cutting-edge models.
The experts warn that the current development of AI systems threatens social stability, professions, justice, and even shared understanding of reality. They point to the emergence of autonomous systems capable of planning and pursuing goals as evidence. For instance, the GPT-4 AI model, which powers the ChatGPT tool, can design chemistry experiments, browse the web, and use other AI models. These advancements raise concerns about AI systems potentially pursuing undesirable goals beyond our control.
To address these risks, the experts propose mandatory reporting of incidents involving alarming AI behavior, measures to prevent dangerous models from self-replicating, and granting regulators the authority to halt the development of AI models showing dangerous behaviors. While an upcoming safety summit aims to discuss these existential threats posed by AI, it is unlikely to establish a global regulatory body. Some AI experts argue that fears about AI exterminating humans are exaggerated, but the authors of the policy document emphasize the need for safety precautions and institutions to prevent misuse and ensure safe practices.
The article concludes by asking for the reader’s opinion and encouraging them to comment. It asks whether advanced AI systems should be pursued without a full understanding of their safety implications, and whether regulation is needed.