Theft of editorial content is understood to have impacted a number of major news outlets, including The Guardian.
Earlier this year, Chris Moran, head of editorial innovation at The Guardian, said the fabrication of articles was “particularly troubling for trusted news organisations and journalists whose inclusion adds legitimacy and weight to a persuasively written fantasy”.
The NMA warned the creation of fake news articles could have even more catastrophic implications if new generative models start relying on AI data rather than human-made content.
This is known as “model collapse”, which the industry group warned could “pollute the human race’s collective pool of knowledge altogether”.
The warning echoes recent comments by Robert Thomson, the chief executive of News Corp and a close ally of Rupert Murdoch, who said AI could crush readers under the weight of a “maggot-ridden mind mould”.
Creative industries clash with AI
The growth in fake news articles represents a new front in the backlash against AI, which is being waged in all corners of the creative industries.
Publishers have already raised concerns that AI companies are using vast swathes of copyrighted material to train their tools without permission.
Earlier this year, the Telegraph revealed that The Daily Mail is gearing up for a legal battle with Google over claims the tech giant used hundreds of thousands of online news stories to train its Bard chatbot.
Revenue threat
Publishers are also concerned that tech firms could use AI to summarise articles, diverting traffic away from news websites and further depriving them of advertising revenues.
But while some outlets have threatened legal action, other parts of the news industry are hoping to establish landmark deals that would see tech giants such as Google, OpenAI and Microsoft pay to use their content.
The Associated Press has already struck a deal with OpenAI that will allow the ChatGPT maker to use the newswire’s content.
In return, AP will have access to OpenAI’s technology.
Mr Meredith, who also recently raised concerns about AI in a submission from the NMA to the House of Lords, said: “Tech firms must take responsibility and work with news publishers to ensure the right safeguards are in place.”