Italy’s privacy watchdog said the bot had “no age verification system” to stop access by under 18s. Under European data rules, the regulator can impose fines of up to €20m (£18m), or 4pc of OpenAI’s turnover, unless it changes its data practices.
In response, OpenAI said it had stopped offering ChatGPT in Italy. “We believe we offer ChatGPT in compliance with GDPR and other privacy laws,” the company said.
France, Ireland and Germany are all examining similar regulatory crackdowns. In Britain, the Information Commissioner’s Office said: “There really can be no excuse for getting the privacy implications of generative AI wrong.”
However, while the privacy watchdog has raised red flags, so far the UK has not gone as far as to threaten to ban ChatGPT. In an AI regulation white paper published earlier this month, the Government decided against creating a formal AI regulator. Edward Machin, a lawyer at the firm Ropes & Gray, says: “The UK is striking its own path, it is taking a much lighter approach.”
Several AI experts told The Telegraph that the real concerns about ChatGPT, Bard and others were less about the long-term consequences of some kind of killer, all-powerful AI than about the damage the technology could do in the here and now.
Juan José López Murphy, head of AI and data science at tech company Globant, says there are near-term issues with helping people spot deep fakes or false information generated by chatbots. “That technology is already here… it is about how we misuse it,” he says.
“Training ChatGPT on the whole internet is potentially dangerous due to the biases of the internet,” says computer expert Dame Wendy Hall. She suggests calls for a moratorium on development would likely be ineffective, since China is rapidly developing its own tools.
OpenAI appears alive to the possibility of a crackdown. On Friday, it posted a blog which said: “We believe that powerful AI systems should be subject to rigorous safety evaluations. Regulation is needed to ensure that such practices are adopted.”
Marc Warner, of UK-based Faculty AI, which is working with OpenAI in Europe, says regulators will still need to plan for the possibility that a super powerful AI may be on the horizon.
“It seems general artificial intelligence might be coming sooner than many expect,” he says, urging labs to stop the rat race and collaborate on safety.
“We have to be aware of what could happen in the future, we need to think about regulation now so we don’t develop a monster,” Dame Wendy says.
“It doesn’t need to be that scary at the moment… That future is still a long way away, I think.”
Fake images of the Pope might seem a long way from world domination. But, if you believe the experts, the gap is starting to shrink.