When President Biden signed his sweeping executive order on artificial intelligence last week, he joked about the strange experience of watching a “deep fake” of himself, saying, “When the hell did I say that?”
The anecdote was significant, for it linked the executive order to an actual A.I. harm that everyone can understand — human impersonation. Another example is the recent boom in fake nude images that have been ruining the lives of high-school girls. These everyday episodes underscore an important truth: The success of the government’s efforts to regulate A.I. will turn on its ability to stay focused on concrete problems like deep fakes, as opposed to getting swept up in hypothetical risks like the arrival of our robot overlords.
Mr. Biden’s executive order outdoes even the Europeans by considering just about every potential risk one could imagine, from everyday fraud to the development of weapons of mass destruction. The order develops standards for A.I. safety and trustworthiness, establishes a cybersecurity program to develop A.I. tools and requires companies developing A.I. systems that could pose a threat to national security to share their safety test results with the federal government.
In devoting so much effort to the issue of A.I., the White House is rightly determined to avoid the disastrous failure to meaningfully regulate social media in the 2010s. With government sitting on the sidelines, social media technology evolved from a seemingly innocent tool for sharing personal updates among friends into an instrument of large-scale psychological manipulation, complete with a privacy-invasive business model and a disturbing record of harming teenagers, fostering misinformation and facilitating the spread of propaganda.
But if social networking was a wolf in sheep’s clothing, artificial intelligence is more like a wolf clothed as a horseman of the apocalypse. In the public imagination A.I. is associated with the malfunctioning evil of HAL 9000 in Stanley Kubrick’s “2001: A Space Odyssey” and the self-aware villainy of Skynet in the “Terminator” films. But while A.I. certainly poses problems and challenges that call for government action, the apocalyptic concerns — be they mass unemployment from automation or a superintelligent A.I. that seeks to exterminate humanity — remain in the realm of speculation.
If doing too little, too late with social media was a mistake, we now need to be wary of taking premature government action that fails to address concrete harms.
The temptation to overreact is understandable. No one wants to be the clueless government official in the disaster movie who blithely waves off the early signs of pending cataclysm. The White House is not wrong to want standardized testing of A.I. and independent oversight of catastrophic risk. The executive order requires companies developing the most powerful A.I. systems to keep the government apprised of safety tests, and it directs the secretary of labor to study the risks of and remedies for A.I. job displacement.
But the truth is that no one knows if any of these world-shattering developments will come to pass. Technological predictions are not like those of climate science, with a relatively limited number of parameters. Tech history is full of confident projections and “inevitabilities” that never happened, from the 30-hour and 15-hour workweeks to the demise of television. Testifying in grave tones about terrifying possibilities makes for good television. But that’s also how the world ended up blowing hundreds of billions of dollars getting ready for Y2K.
To regulate speculative risks, rather than actual harms, would be unwise, for two reasons. First, overeager regulators can fixate shortsightedly on the wrong target of regulation. For example, to address the dangers of digital piracy, Congress in 1992 extensively regulated digital audio tape, a recording format now remembered only by audio nerds, thanks to the subsequent rise of the internet and MP3s. Similarly, today’s policymakers are preoccupied with large language models like ChatGPT, which could be the future of everything — or, given their gross unreliability stemming from chronic falsification and fabrication, may end up remembered as the Hula Hoop of the A.I. age.
Second, pre-emptive regulation can erect barriers to entry for companies interested in breaking into an industry. Established players, with millions of dollars to spend on lawyers and experts, can find ways of abiding by a complex set of new regulations, but smaller start-ups typically don’t have the same resources. This fosters monopolization and discourages innovation. The tech industry is already too much the dominion of a handful of huge companies. The strictest regulation of A.I. would result in having only companies like Google, Microsoft, Apple and their closest partners competing in this area. It may not be a coincidence that those companies and their partners have been the strongest advocates of A.I. regulation.
Actual harm, not imagined risk, is a far better guide to how and when the state should intervene. A.I.'s clearest extant harms are those related to human impersonation (such as the fake nudes), discrimination and the addiction of young people. In 2020, thieves used an impersonated human voice to swindle a Japanese company in Hong Kong out of $35 million. Facial recognition technology has led to wrongful arrest and imprisonment, as in the case of Nijeer Parks, who spent 10 days in a New Jersey jail because he was misidentified. Fake consumer reviews have eroded consumer confidence, and fake social media accounts drive propaganda. A.I.-powered algorithms are used to enhance the already habit-forming properties of social media.
These examples aren’t quite as hair-raising as the warning issued this year by the Center for A.I. Safety, which insisted that “mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” But the less exciting examples happen to feature victims who are real.
To its credit, Mr. Biden’s executive order is not overly caught up in the hypothetical: Most of what it suggests is a framework for future action. Some of its recommendations are urgent and important, such as creating standards for the watermarking of photos, videos, audio and text created with A.I.
But the executive branch, of course, is limited in its power. Congress should follow the lead of the executive branch and keep an eye on hypothetical problems while moving decisively to protect us against human impersonation, algorithmic manipulation, misinformation and other pressing problems of A.I. — not to mention passing the online privacy and child-protection laws that, despite repeated congressional hearings and popular support, it keeps failing to enact.
Regulation, contrary to what you hear in stylized political debates, is not intrinsically aligned with one or another political party. It is simply the exercise of state power, which can be good or bad, used to protect the vulnerable or reinforce existing power. Applied to A.I., with an eye on the unknown future, regulation may be used to aid the powerful by helping preserve monopolies and burden those who strive to use computing technology to improve the human condition. Done correctly, with an eye toward the present, it might protect the vulnerable and promote broader and more salutary innovation.
The existence of actual social harm has long been a touchstone of legitimate state action. But that point cuts both ways: The state should proceed cautiously in the absence of harm, but it also has a duty, given evidence of harm, to take action. By that measure, with A.I. we are at risk of doing too much and too little at the same time.
Tim Wu (@superwuster) is a law professor at Columbia and the author, most recently, of “The Curse of Bigness: Antitrust in the New Gilded Age.”