A “superintelligent” AI could trigger the end of humanity, warns Nobel laureate
Geoffrey Hinton, a Nobel laureate, warns that the development of superintelligent AI could pose an existential risk to humanity. He stresses the urgent need for safety measures, pointing to the potential for AI to be used to create deadly viruses and autonomous weapons, and estimates a 10–20% chance of human extinction within the next 30 years.