Geoffrey Hinton, the computer scientist known as the “godfather of AI,” recently left his job at Google to speak openly about his concerns over the future of artificial intelligence. Hinton, whose pioneering work on neural networks helped lay the foundations of deep learning, believes that future versions of the technology could pose a real threat to humanity. He is worried about the pace of AI development, particularly the race among Big Tech companies to build ever more powerful systems, and warns that it is hard to prevent bad actors from using the technology for harmful purposes.
Experts’ Concerns about the Risks of AI

Hinton is not alone in his concerns. Other experts have also spoken out publicly about the risks associated with AI, among them Tristan Harris and Aza Raskin, co-founders of the Center for Humane Technology, and members of the Association for the Advancement of Artificial Intelligence. While they acknowledge that AI could be game-changing in fields such as healthcare, climate, education, and engineering, they warn about its limitations and risks, including the potential for AI systems to make errors, produce biased recommendations, threaten privacy, and hand bad actors powerful new tools.
Despite his concerns, Hinton remains optimistic about the potential of AI to do good. He believes the techniques he helped develop can deliver enormous benefits, such as detecting health risks earlier than doctors can and providing more accurate early warnings of disasters like earthquakes and floods. Nonetheless, Hinton and other experts urge caution in AI’s rapid advancement and deployment to ensure that it benefits humanity and does not cause harm.