![A “superintelligent” AI could trigger the end of humanity, warns Nobel laureate.](https://peeperfrog.com/wp-content/uploads/2025/04/2025-04-06T205843Z8623459443file-1024x597.jpeg)
A “superintelligent” AI could trigger the end of humanity, warns Nobel laureate. [Spanish]
El Nuevo Dia | Geoffrey Hinton | Read the full article in Spanish
The renowned scientist Geoffrey Hinton, often referred to as the "godfather" of artificial intelligence, has raised alarming concerns about the potential dangers of superintelligent AI. In a recent interview, he emphasized that humanity is at a critical juncture where we still have a chance to develop safe AI systems. However, he pointed out that we currently lack the technological methods needed to guarantee this safety, and urged more research and effort in this area.
Hinton warned that the emergence of superintelligent AI could lead to catastrophic outcomes for humanity. He explained that while such an advanced AI would not act like a movie villain, it could still pose grave threats, for example by enabling the creation of deadly viruses. He stressed the importance of understanding and managing these risks, since existing research on AI safety is insufficient to address the potential existential threats.
Moreover, Hinton has previously estimated a concerning probability, between 10% and 20%, that AI could lead to human extinction within the next three decades. He called for immediate attention from governments and international organizations to tackle the urgent risks associated with the misuse of AI technologies.