Section 230 Immunity Isn’t a Guarantee in a ChatGPT World


ChatGPT has benefited from much fanfare about how it will transform the way we share and receive information through the internet. And for good reason. Every day we read about new ways people are using this transformative technology, whether it be passing the bar exam or diagnosing obscure illnesses.

Given its quick adoption, it is unsurprising that we are already starting to read media reports of misrepresentations and defamatory comments generated by ChatGPT. In one report, ChatGPT falsely asserted that a law professor made suggestive comments and attempted to sexually assault one of his students.

The model cited a media article that had never been written and claimed the professor was part of the faculty of a school where he never taught. In another, an Australian politician was reportedly threatening suit over ChatGPT-generated false statements that he was convicted and served prison time for bribing officials in Southeast Asia over 20 years ago.

These early accounts presage novel legal questions surrounding generative artificial intelligence platforms, not the least of which is whether defamation and product liability theories are well suited to claims arising from false statements generated by these models.

It so happens that ChatGPT was released to the public as the US Supreme Court considered whether technology companies would continue to benefit from the immunity afforded by Section 230 of the Communications Decency Act.

To the delight of some and the dismay of others, for almost three decades Section 230 immunity has stymied nearly every civil claim brought against “big tech” for the republishing of third-party material. Within technology circles, the section is credited with enabling the exponential growth of modern internet platforms, with one author calling Section 230 the “twenty-six words that created the internet.”

Section 230 came of age in a world dominated by search engines and social media companies, which both rely on advertising revenue and user engagement derived from third-party created content.

It’s a blunt instrument that lets technology platforms quickly dismiss civil claims, obviating the need for discovery or any robust legal analysis of the technologies underpinning today’s internet platforms. This, in turn, has given attorneys and courts license to remain willfully blind to machine learning and artificial intelligence.

But new large language models, often called LLMs, such as ChatGPT operate in a fundamentally different way. They don’t present users with third-party material, but rather generate original content in response to user queries.

Moreover, these algorithms, like any technology solution, have inherent weaknesses, including “hallucinations,” inconsistent accuracy, and the risk of bias against underrepresented groups.

Are we on the cusp of a flood of lawsuits over actionable content created by LLMs such as ChatGPT? In answering this question, the conversation to date has focused on whether Section 230 applies to ChatGPT and other LLMs. There’s been less attention paid to what relief, if any, courts can provide future plaintiffs even if Section 230 is inapplicable.

In the absence of Section 230, the nature and extent of liability will depend on whether courts apply traditional publisher and distributor legal constructs to generative artificial intelligence platforms. This is easier said than done.

ChatGPT is not a person or legal entity. It’s a neural network that generates an original narrative based on the calculated probability that a sequence of text is relevant to the user’s inquiry.

Such a tool doesn’t manifest intent. Nor are the developers of such a tool necessarily negligent when the model misrepresents facts. LLMs are probabilistic models, meaning their narratives are not predetermined and their output carries a recognized, inherent error rate.

Given this known limitation, can a reasonable person rely on facts asserted by these models? Should developers be held to a strict liability standard when false statements are made?

Section 230 has insulated the courts and Big Tech from addressing these and similar questions for nearly three decades. But whether LLM-related defamation suits succeed won’t hinge on whether Section 230 applies.

Section 230’s time will soon pass as LLMs and other AI technologies replace first-generation recommender algorithms. Be that as it may, fundamental questions remain as to whether AI platforms generating original content through probabilistic, non-deterministic processes can trigger liability under traditional defamation or product liability theories.

This gap in existing law calls for AI-specific regulation that affords victims relief when LLMs make false statements.

Time will tell whether lawmakers pass legislation that follows in Section 230’s footsteps and similarly affords LLMs immunity, or, alternatively, legislation that sets forth legal standards for these models.

In the absence of legislation, courts will be forced to either shoehorn existing legal standards to fit ChatGPT and other LLMs, or devise new legal standards that recognize the unique nature of artificial intelligence.

Either way, with the arrival of LLMs, the days of technology providers using Section 230 to dispose of civil claims on a pre-discovery motion to dismiss appear numbered. Big Tech won’t be able to hide behind Section 230 as courts grapple with these questions.

Technology companies have sounded the alarm for years, warning that repealing Section 230 would trigger a flood of lawsuits and hamper the adoption of next-generation tools. We are about to find out whether they are right.

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.

Author Information

Kurt Drozd, director at Sionic Advisors, is an attorney and management consultant focused on technology, data analytics, and machine learning.

