Hundreds of scientists, tech industry executives and public figures – including leaders of Google, Microsoft and the maker of ChatGPT – are sounding the alarm about artificial intelligence, writing in a new public statement that fast-evolving AI technology could pose a risk to humankind on the scale of nuclear war or COVID-19-like pandemics.

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," said the one-sentence statement, which was released by the Center for AI Safety, or CAIS, a San Francisco-based nonprofit organization.

CAIS said it released the statement to encourage AI experts, journalists, policymakers and the public to talk more about urgent risks relating to artificial intelligence.

Among the 350 signatories were executives from the top four AI firms: OpenAI, Google DeepMind, Microsoft and Anthropic.

One of them is renowned researcher and "Godfather of AI" Geoffrey Hinton, who quit his job as a vice president of Google last month so he could speak freely about the dangers of a technology he helped develop.

Also signing the statement: Sam Altman, the chief executive of OpenAI, the firm behind the popular conversation bot ChatGPT, which has made AI accessible to millions of users and allowed them to pose questions to it. Demis Hassabis, who heads Google's AI division, signed as well.

Altman, Hinton and other industry leaders have become increasingly vocal about their concerns over AI and the need for technological guardrails, including government regulation. "I think if this technology goes wrong, it can go quite wrong," Altman said in a recent Senate Judiciary subcommittee hearing about potential oversight of AI. "We want to work with the government to prevent that from happening."