An anonymous reader quotes a report from The Guardian: The world must treat the risks from artificial intelligence as seriously as the climate crisis and cannot afford to delay its response, one of the technology’s leading figures has warned. Speaking as the UK government prepares to host a summit on AI safety, Demis Hassabis said oversight of the industry could start with a body similar to the Intergovernmental Panel on Climate Change (IPCC). Hassabis, the British chief executive of Google’s AI unit, said the world must act immediately in tackling the technology’s dangers, which included aiding the creation of bioweapons and the existential threat posed by super-intelligent systems.
“We must take the risks of AI as seriously as other major global challenges, like climate change,” he said. “It took the international community too long to coordinate an effective global response to this, and we’re living with the consequences of that now. We can’t afford the same delay with AI.” Hassabis, whose unit created the revolutionary AlphaFold program that depicts protein structures, said AI could be “one of the most important and beneficial technologies ever invented.” However, he told the Guardian a regime of oversight was needed and governments should take inspiration from international structures such as the IPCC.
“I think we have to start with something like the IPCC, where it’s a scientific and research agreement with reports, and then build up from there.” He added: “Then what I’d like to see eventually is an equivalent of a Cern for AI safety that does research into that — but internationally. And then maybe there’s some kind of equivalent one day of the IAEA, which actually audits these things.” The International Atomic Energy Agency (IAEA) is a UN body that promotes the secure and peaceful use of nuclear technology in an effort to prevent proliferation of nuclear weapons, including via inspections. However, Hassabis said none of the regulatory analogies used for AI were “directly applicable” to the technology, though “valuable lessons” could be drawn from existing institutions. Hassabis said the world was a long time away from “god-like” AI being developed but “we can see the path there, so we should be discussing it now.”
He said current AI systems “aren’t of risk but the next few generations may be when they have extra capabilities like planning and memory and other things … They will be phenomenal for good use cases but also they will have risks.”