ChatGPT and other new chatbots are so good at mimicking human interaction that they've prompted some to ask: Is there any chance they're conscious?
The answer, at least for now, is no. Just about everyone who works in the field of artificial intelligence is sure that ChatGPT is not alive in the way that's generally understood by the average person.
But that’s not where the question ends. Just what it means to have consciousness in the age of artificial intelligence is up for debate.
“These deep neural networks, these matrices of millions of numbers, how do you map that onto these views we have about what consciousness is? That’s kind of terra incognita,” said Nick Bostrom, the founding director of Oxford University’s Future of Humanity Institute, using the Latin term for “unknown territory.”
The creation of artificial life has been a subject of science fiction for decades, and philosophers have long pondered the nature of consciousness. A few people have even argued that some existing AI programs should be considered sentient (one Google engineer was fired for making such a claim).
Ilya Sutskever, a co-founder of OpenAI, the company behind ChatGPT, has speculated that the algorithms behind his company’s creations might be “slightly conscious.”
NBC News spoke with five people who study the concept of consciousness about whether an advanced chatbot could possess some degree of awareness. And if so, what moral obligations does humanity have toward such a creature?
It is a relatively new area of inquiry.
“This is a very super-recent research area,” Bostrom said. “There’s just a whole ton of work that hasn’t been done.”
In true philosophic fashion, the experts said it’s really about how you define the terms and the question.
ChatGPT and similar programs, like Microsoft's search assistant Sydney, are already being used to help with tasks like programming and writing simple text such as press releases, thanks to their ease of use and convincing command of English and other languages. They're often referred to as "large language models" because their fluency comes largely from having been trained on giant troves of text mined from the internet. But while their words are convincing, the models aren't designed with accuracy as a top priority, and they are notorious for getting facts wrong.
Spokespeople for OpenAI and Microsoft both told NBC News that they follow strict ethical guidelines, but didn't provide specifics about concerns that their products could develop consciousness. A Microsoft spokesperson stressed that the Bing chatbot "cannot think or learn on its own."
In a lengthy post on his website, Stephen Wolfram, a computer scientist, noted that ChatGPT and other large language models use math to figure the probability of which word to use in any given context, based on the library of text they have been trained on.
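To make that idea concrete, here is a minimal, purely illustrative Python sketch of next-word sampling. The tiny word table and its probabilities are invented for this example; a real large language model computes comparable probabilities with a neural network over a vocabulary of tens of thousands of tokens, and nothing here reflects ChatGPT's actual code.

import random

# Toy stand-in for a language model's next-word probabilities.
# These numbers are made up for illustration only.
next_word_probs = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "slept": 0.1},
}

def sample_next_word(context, table):
    """Sample the next word from the model's probability estimates
    for the most recent two words of context."""
    probs = table[tuple(context[-2:])]
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights, k=1)[0]

context = ["the", "cat"]
print(" ".join(context + [sample_next_word(context, next_word_probs)]))
# Prints "the cat sat" most of the time, "the cat ran" or "the cat slept" less often.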
Many philosophers agree that for something to be conscious, it has to have a subjective experience. In the classic paper "What Is It Like to Be a Bat?" philosopher Thomas Nagel argued that something is only conscious "if there is something that it is like to be that organism." It's likely that a bat has some sort of bat-like experience even though its brain and senses are very different from a human's. A dinner plate, by contrast, wouldn't.
David Chalmers, co-director of New York University’s Center for Mind, Brain and Consciousness, has written that while ChatGPT doesn’t clearly possess a lot of commonly assumed elements of consciousness, like sensation and independent agency, it’s easy to imagine that a more sophisticated program could.
“They’re kind of like chameleons: They can adopt any new personas at any moment. It’s not clear they’ve got fundamental goals and beliefs driving their action,” Chalmers told NBC News. But over time they may develop a clearer sense of agency, he said.
One problem philosophers point out is that users can ask a sophisticated chatbot if it has internal experiences, but they can't trust it to give a reliable answer.
“They’re excellent liars,” said Susan Schneider, the founding director of Florida Atlantic University’s Center for the Future Mind.
“They’re increasingly capable of having more and more seamless interactions with humans,” she said. “They can tell you that they feel that they’re persons. And then 10 minutes later, in a distinct conversation, they will say the opposite.”
Schneider has noted that current chatbots draw on existing human writing to describe their internal states. So one way to test whether a program is conscious, she argues, is to withhold that sort of material and see if it can still describe subjective experience.
“Ask it if it understands the idea of survival after the death of its system. Or if it would miss a human that it interacts with often. And you probe the responses, and you find out why it reports the way it does,” she said.
Robert Long, a philosophy fellow at the Center for AI Safety, a San Francisco nonprofit, cautioned that just because a system like ChatGPT is complex doesn't mean it's conscious. On the other hand, he noted, just because a chatbot can't be trusted to describe its own subjective experience doesn't mean it has none.
“If a parrot says ‘I feel pain,’ this doesn’t mean it’s in pain — but parrots very likely do feel pain,” Long wrote on his Substack.
Long also said in an interview with NBC News that human consciousness emerged as a byproduct of evolution. Something similar, he suggested, could happen with artificial intelligence: as systems grow more complex, they could edge closer to a human idea of subjective experience.
“Maybe you won’t be intending to do it, but out of your effort to build more complex machines, you might get some sort of convergence on the kind of mind that has conscious experiences,” he said.
The idea that humans might create another kind of conscious being prompts the question of whether they have some moral obligation toward it. Bostrom said that while it was difficult to speculate on something so theoretical, humans could start by simply asking an AI what it wanted and agreeing to help with the easiest requests: “low-hanging fruits.”
That could even mean changing its code.
“It might not be practical to give it everything at once. I mean, I’d like to have a billion dollars,” Bostrom said. “But if there are really trivial things that we could give them, like just changing a little thing in the code, that might matter a lot. If somebody has to rewrite one line in the code and suddenly they’re way more pleased with their situation, then maybe do that.”
If humanity does eventually end up sharing the earth with a synthetic consciousness, that could force societies to drastically re-evaluate some things.
Most free societies agree that people should have the freedom to reproduce if they choose, and that each person should get one vote in choosing representative political leadership. But those principles become thorny with computerized intelligence, Bostrom said.
“If you’re an AI that could make a million copies of itself in the course of 20 minutes, and then each one of those has one vote, then something has to yield,” he said.
“Some of these principles we think are really fundamental and important would need to be rethought in the context of a world we co-inhabit with digital minds,” Bostrom said.