This is an audio transcript of the Tech Tonic podcast episode: ‘Superintelligent AI — Transhumanism etc.’
Madhumita Murgia
We’re going to start off today’s episode with somebody named Anders Sandberg.
Anders Sandberg
I’m a senior research fellow at the Future of Humanity Institute at the University of Oxford. And I’m somewhat of a futurist, somewhat of a philosopher. I deal with questions about emerging technologies, the big risks to humanity, and what we can say about the long-term future.
Madhumita Murgia
For some people, people like Sandberg, the development of powerful AI isn’t just about creating a useful technological tool. They see it as the key to the evolution of the human species. Sandberg calls himself an extropian.
Anders Sandberg
So extropian is a version of transhumanism, which is a wider term for the idea that the human condition is great but we could probably use technology to make it even better — extend our lives, become smarter, make our bodies fit our own visions, etc.
Madhumita Murgia
Extropy is the opposite of entropy. So instead of the idea that the world around us is always descending into chaos, extropy suggests that things could get better over time. And the extropian thinking is that artificial intelligence could become powerful enough to improve itself without the need for humans, propelling us into a brighter, more optimistic future.
Anders Sandberg
If you have machines that can do roughly what humans can do, and humans can make machines that can do smart things, then you should expect to be able to automate the making of smarter machines that make smarter machines. And then it might take off in some kind of intelligence explosion.
Madhumita Murgia
That could mean a kind of AI god. At its most radical, these visions of a world brought about by advances in AI are barely imaginable today.
Anders Sandberg
The vision of a transhumanist future would be a future where humanity is not just modifying its external environment, but also modifies itself. It controls its own evolution, which might mean that me, individually, I enhance myself. I make my body and mind work according to my life project. But we might also, as societies, figure out better ways of organising ourselves, both connecting our minds through technology or markets or something else, and also having an expansive humanity, a kind of cosmist vision that humanity belongs in the universe and can go out there and explore it, terraform planets, add its own character to the vast cosmic symphony.
Madhumita Murgia
Technology will allow us to cheat death, move beyond the limits of our biological bodies, upload our brains into the cloud, and travel deep into space. With AI, people like Sandberg believe this future is getting closer. And increasingly, these ideas about the future are having a big influence on the AI discussions that we’re having today.
John Thornhill
This is Tech Tonic from the Financial Times with me, John Thornhill.
Madhumita Murgia
And me, Madhumita Murgia.
John Thornhill
The top AI companies believe they’re inching ever closer to artificial general intelligence: AI that can do everything a human can, but faster and better. No one seems quite certain whether that’s a good thing. So in this series, we’re asking whether we really are close to creating a superintelligent machine.
Madhumita Murgia
In this episode, the fringe beliefs that are helping to fuel the drive towards superintelligent AI.
[MUSIC PLAYING]
The extropians might have some outlandish proposals, but they also have humble origins. Sandberg grew up in Sweden as a big science fiction fan. When he arrived at university, the internet was still in its infancy. In those days, the way like-minded people communicated online was through group mailing lists, which is how he got involved with the extropians.
Anders Sandberg
And various people started coming together, mostly on the west coast of America at first, discussing these questions. Where are the new technologies taking us? Can we make a better world? But the extropians’ list became something very, very special, because it became this confluence of a lot of bright, bickering types discussing the long-run future. And those conversations have actually made a lasting impression on a lot of the technology we have today.
John Thornhill
So this mailing list was a sort of club for people who wanted to talk about how technology might radically impact the future of the human race.
Madhumita Murgia
Exactly. The extropians had their own secret handshake. They had big meetups where they would hold conferences in Silicon Valley. Sandberg actually flew in for some of those conferences and they had these passionate discussions about some pretty farfetched ideas.
Anders Sandberg
What about disassembling all the planets in the solar system to build a giant computer and housing quadrillions and quadrillions of uploaded minds?
Madhumita Murgia
OK. So having a solar system-based consciousness with all human minds in it.
Anders Sandberg
My first academic publication was actually literally looking into the physics of what I call Jupiter brains. How would it actually work to have a computer that’s planet-sized or solar system-sized? How do you move energy and information around? How do you cool the thing? How fast can it be? Where are the structural problems? So the core idea here was you take these technologies, you take our understanding of physics, and see how far you can push it.
Madhumita Murgia
The extropians were very into the prospect of uploading their brains into a computer, and also what’s called the singularity, where machine intelligence overtakes that of humans. And a lot of the extropians were hoping they could actually accelerate the adoption of these new radical technologies. A co-founder of the extropian mailing list went on to run a cryonics lab called Alcor, which is still operating today.
John Thornhill
As in the freezing of dead bodies in order to bring them back to life later?
Madhumita Murgia
Yup. According to their website, they have 222 patients frozen at their facility in Arizona. I actually found this promo video they’d made, which is pretty great.
[ALCOR ADVERTISEMENT PLAYING]
Madhumita Murgia
Part of the extropian philosophy was this idea of boundless expansion, just a continual pushing of the limits. Extropians thought that with rational thinking you could overcome anything. One of the big ways they were hoping that they could achieve these futuristic visions was through human-level artificial intelligence, which at the time was still a distant prospect.
Anders Sandberg
Remember, in the 1990s AI was a field that was kind of in the doldrums. AI has had many bursts of bubbles, but this was kind of the winter time. Not very many academics were interested in it. We were thinking, what if you actually succeed? And one of the obvious things is there is no reason to expect human intelligence to be the highest level of intelligence you can have in the universe. So what happens when you get superintelligence? You have these machines that can modify themselves and their goals. Could you make that stable?
Madhumita Murgia
And perhaps most interestingly, the extropians’ commitment to taking superintelligent AI seriously, it led them to talk a lot about the risks of the technology and how they could mitigate them.
John Thornhill
Sounds very similar to what the big AI companies are saying about the technology today.
Madhumita Murgia
I think that’s absolutely right. In a lot of ways, the AI safety debate as we know it now really follows on from the debates taking place on this, frankly, quite odd mailing list. So Eliezer Yudkowsky, whom we heard from in episode one, a big AI doomer, was one of the prominent members of the extropians. And then there were these spin-offs from the original mailing list, which included people like DeepMind’s co-founder Shane Legg, and these ideas found their way into AI research communities. The best way to think about it is that the extropians were an important pivot point between science fiction and the tech industry.
John Thornhill
So they brought these far out ideas down to earth, so to speak.
Madhumita Murgia
Exactly. But as well as the kind of wacky ideas about the future, sometimes these mailing list conversations went to some pretty unsettling places.
Anders Sandberg
There were discussions about everything, and some of them, of course, went off the deep end. Some people were thinking about radically different ways of running society. Some went off in a weird neo-reactionary direction and felt like you should have some kind of monarchy, where you take a corporate CEO to run society. Some discussions about genetic enhancement led to very tough questions about, well, are there differences between different races? Could there be better or worse races?
Madhumita Murgia
There’s one particularly well-known example of this. Nick Bostrom is a prominent philosopher who heads up the Future of Humanity Institute at Oxford, where Sandberg works. He was also an early member of the extropians, and he wrote a book on superintelligence that’s been highly influential. But earlier this year, Bostrom made headlines for a very different reason, after someone unearthed an email he’d sent on the extropian list back in the mid-1990s. Before we play an automated reading of an excerpt, a warning: it’s really offensive.
Automated email excerpt
Blacks are more stupid than whites.
Madhumita Murgia
Bostrom wrote in that email.
Automated email excerpt
I like that sentence, and I think it’s true.
Madhumita Murgia
He added, going on to use a racial slur and discuss IQ differences between races, a theory that’s now been thoroughly debunked. When the email came out 26 years later, Bostrom put out a statement distancing himself from those comments. They were disgusting, he said. Idiotic and insensitive. But it’s concerning because it shows how the extropian discussions about intelligence and its importance for changing the human species sometimes led to some pretty abhorrent conclusions.
John Thornhill
What does Anders Sandberg say about the fact that racist ideas were being put out on the mailing list?
Madhumita Murgia
Well, Sandberg says that the true extropian principles went against ideas of so-called racial superiority. But the Bostrom incident is part of what’s made some people really worried about extropianism. People are concerned that the racist ideas are part of a much wider ideology that’s seeped into the AI mainstream.
John Thornhill
So we’ve talked about how the extropians were looking towards artificial intelligence as a way to bring about an evolution in human society. But how exactly is that affecting the way the technology is developing today?
Madhumita Murgia
Well, for that, we went to talk to someone from a newer wave of AI start-ups.
Connor Leahy
We have our very nice view of the London skyline off our balcony.
Madhumita Murgia
Connor Leahy runs an AI company called Conjecture here in London.
Connor Leahy
Let’s take a look. Pretty nice weather today. Recently got some more plants. Nice. And yeah, let’s go to my office.
Madhumita Murgia
Conjecture is basically trying to do the exact opposite of what the extropians wanted for AI.
Connor Leahy
We’re still in a pretty early phase, but ultimately we want to build useful, narrower AI products. So, systems that can help with various business processes. And in particular, we want to limit things to be below human intelligence.
Madhumita Murgia
Which is interesting, right? They’re choosing specifically not to build artificial general intelligence and instead trying to make better narrow AI like the kind we already have today.
John Thornhill
They’re just rejecting the idea outright.
Madhumita Murgia
Yeah. And part of the reason for this is that Leahy is a pretty big AI doomer. He’s really concerned about companies like OpenAI and Google building superintelligent AI.
Connor Leahy
If you build systems that are smarter than humans, well, the human era is over. Humans no longer matter. People ask, oh, which jobs will you replace? This is a stupid question. There will be no more jobs. There will be no more economy. What are you talking about? Humans will be useless. This is like asking, oh, what jobs will the chimps have in the human economy? They don’t have any jobs.
Madhumita Murgia
Chimps aside, Leahy is worried that the whole fixation on building AGI goes back to the extropian list. He thinks we wouldn’t even be considering AGI as a goal if it weren’t for these ideologies.
Connor Leahy
You can see a direct ideological genealogy that goes, to a shocking degree, very strangely all the way back to one weird niche mailing list in the 90s called the extropians. It was ideologues, like religious believers in a sense, who believed in these visions of AI, of transhumanism, of the future: uploading your mind into the computer, living forever, immortality and all these kinds of beliefs.
Madhumita Murgia
Leahy is worried that through their influence in the AI industry, the modern day extropians are imposing their vision of the future on all of us.
Connor Leahy
They’re mostly very nice people, and they have, many of their ideas are good. Many of them are altruistic and genuinely so. But they have some very bad ideological baggage as well, and beliefs about the world that I think are predictably playing out in the real world right now. They’ve always wanted to build AGI. This is who they are. They want to build the future. They don’t trust government. They don’t trust democratic processes. And they are willing to do anything to make the thing happen that they want.
John Thornhill
So we’ve got this fringe ideology involved with some of the most well capitalised companies in the world.
Madhumita Murgia
Right. And from my perspective, covering AI, I think the biggest effect it’s had is reframing the debate around this technology. Instead of talking about the technology itself and its immediate impact on people, the focus has become these far-flung futuristic ideas.
Timnit Gebru
I just started becoming increasingly concerned that the entire conversation about AI is being derailed to existential risks with AGI.
Madhumita Murgia
It’s something Timnit Gebru thinks about a lot.
Timnit Gebru
I am the founder and executive director of the Distributed AI Research Institute.
Madhumita Murgia
Gebru trained as an electrical engineer and spent time working in the field of computer vision. She was a co-leader of Google’s Ethical AI team until 2020, when she was fired from the company, allegedly for a paper she’d published outlining the risks of AI. For Gebru, the main thing is to remember that researchers simply weren’t talking about AGI 15 years ago. It’s a relatively new goal for these companies to be pursuing. But all of a sudden she started seeing these ideas about AGI and these strange futuristic ideologies popping up everywhere.
Timnit Gebru
Virtually all of the companies and researchers that I can think of whose explicit goal is to build AGI are adherents of these ideologies.
Madhumita Murgia
She says it’s not just extropianism, but a bunch of related ideologies: cosmism, the idea that we should explore new planets; longtermism, a utilitarian idea that focuses on impacts on future generations; and effective altruism, a kind of philanthropic philosophy.
John Thornhill
I’ve heard of that one. Sam Bankman-Fried, of FTX notoriety, was an effective altruist. We know that some people on the board of OpenAI are effective altruists, too.
Madhumita Murgia
Yeah, they’re everywhere in the AI industry. Gebru sees this whole raft of ideologies about the future of humanity as being overlapping and intertwined and sort of infecting the companies building this technology.
Timnit Gebru
Sam Altman has also written about the singularity and also cosmism; you can look at his blog posts. DeepMind got money from Peter Thiel after they were at one of these singularity summits. Elon Musk says that he is very heavily influenced by longtermism. And Elon Musk and Peter Thiel gave so much money to OpenAI.
Madhumita Murgia
And ultimately Gebru thinks the AI industry is controlled by people who have this very particular vision of the future.
John Thornhill
OK. But I guess some people might say these are still pretty far-off ideas. How concerning does Gebru really think it is that people have these strange beliefs?
Madhumita Murgia
Well, for Gebru, who was trained as an engineer, one of the issues is that the rush towards AGI is basically incompatible with proper safety measures.
Timnit Gebru
We have a race to create systems that are marketed to people as being able to do everything for everyone. How is it even possible to make such a system safe? How do you even test what it’s supposed to do or not do? If you are building specialised systems, and this is one of the tenets of engineering, you scope out your system. You ask: OK, what is the input? What is the output? What is it supposed to do? What is it not supposed to do? Right? This is extremely dangerous.
Madhumita Murgia
Because AGI is so generalised, it’s hard to check whether a system might do something that could reflect societal prejudices or pose a privacy problem.
Another reason she’s concerned is that these ideologies tend to be very fixated on intelligence rather than any other human attributes. It’s a cold, raw, analytical approach to the world. It goes back to Nick Bostrom and those debates happening on the extropian list. That emphasis on intelligence can lead to some dark places.
Timnit Gebru
Even if you look at building some sort of, quote unquote, generally intelligent system, then you have to try to define what intelligence is. And they use literal eugenicist definitions of intelligence. They cite people who founded a religion called Beyondism, which is an explicitly eugenicist religion. They cite people who came to the defence of Charles Murray’s 1994 book, The Bell Curve, basically talking about how black people are not even employable because they have an IQ of less than 80, etc, etc. So already you have a very racist and eugenicist foundation.
Madhumita Murgia
I should say that there are some people that we’ve mentioned on this program who would be appalled by being labelled eugenicists and would strongly reject that charge. The broader point here is that if you have a transformative technology like AI, it matters what the people building it really believe. We should be worried if people at the leading AI companies believe this stuff because it’s going to have an enormous impact on society and those ideas will seep into the technology itself. Their belief systems matter.
Timnit Gebru
Utopia for them is where they have completely dominated what they call the cosmos, colonised it. You have mind-uploading technology. They want to, quote unquote, leave biology behind. So to me, it is an incredibly inhuman future, and it’s a future that is explicitly about colonising and expansion, rather than coexisting and knowing when something is enough. They don’t have the concept of enough in this ideology, right? It’s always more. And to me, it’s an explicitly joyless, anti-human kind of future.
[MUSIC PLAYING]
John Thornhill
How does Anders Sandberg feel about the legacy of the extropian list and the fact that it’s so influential today?
Madhumita Murgia
Well, he said that the extropian list eventually fractured into different groups, and some of the people who were involved in the group went on to work on existential risk in university settings like Sandberg himself. But I did ask him directly what he thought of where the AGI debate had got to today.
Anders Sandberg
I think AGI is a very worthy goal to be pursuing. I just wish we were pursuing it a little bit more carefully. As somebody who was working in the world of neural networks, I’m surprised by the recent progress. And many people inside the field are also shocked and surprised by how rapidly things are moving, a bit more rapidly than is comfortable. Artificial intelligence, if it is as powerful as we believed it was, and I still think it is, means that the company that actually gets self-improving AI might have tremendous power over the world. And this is, of course, a complicated back and forth. OpenAI and Google DeepMind and Anthropic and Microsoft and all the others, they sit on tremendous power, and potentially they could get way more power.
Madhumita Murgia
And actually, for Sandberg, it’s the fact that these companies have become the centre of this research, which is so worrying.
Anders Sandberg
Many of us who were on the list, we think, yeah, humanity matters. We can make the human condition even better. But we’re still doing it because life is worth living, and should be worth living even more. We want to have a complex, interesting world that is open for all to explore. That is an important mission, but it’s slightly tricky to combine with some corporate or political issues.
Madhumita Murgia
So, John, do you see these ideas having influence in Silicon Valley and in AI today?
John Thornhill
I think it definitely has had an influence on the thinking of the people who are trying to develop AGI. But it has to be said that there are lots of AI researchers who don’t have the goal of trying to get to AGI itself, and there are lots of AI researchers who don’t believe in any of the philosophies or -isms that we’ve been talking about today. That said, it’s a constantly mutating field, and one of the new emerging ideologies, as it were, of Silicon Valley is effective accelerationism. I think this is a new divide that is emerging in Silicon Valley. You have the so-called decel movement, or the decelerationists, who believe that you ought to slow down; there were all these AI scientists earlier this year who signed a letter saying that we ought to take a pause in developing AI. And then you have the effective accelerationists, who very much believe that you should push full speed ahead with developing the technology. People like Marc Andreessen, co-founder of the venture capital firm Andreessen Horowitz, who wrote this techno-optimist manifesto, for example, are saying that AI is the salvation, in many respects, for so many of the world’s problems: that we can solve the productivity crisis we’re all facing, we can cure disease, we can solve climate change. We should absolutely be using AI to the best of our abilities and accelerating its development as fast as we can, and the decel movement is therefore actually jeopardising the future of humanity.
Madhumita Murgia
I’ve seen this (inaudible) acronym popping up across X or Twitter . . .
John Thornhill
Exactly.
Madhumita Murgia
. . . where people are sort of putting their stake in the ground and forming these groups, aren’t they?
John Thornhill
You interviewed a lot of people in this episode about the ideologies that exist in Silicon Valley. What struck you most about that debate?
Madhumita Murgia
I think what was really fascinating to me is to see how often these ideas keep coming up across entrepreneurs and some of the richest businesspeople in Silicon Valley. They’re all obsessed with this idea of longevity, of extending lifespan and of existing for decades and centuries into the future beyond our physical body. So, you know, this sort of makes sense that it isn’t necessarily just about making money, but that the true goal is to push the limits of biology somehow.
John Thornhill
And do you think it’s possible that one day in the future we might be able to upload our brains to a computer and live forever?
Madhumita Murgia
It’s something that I thought about, whether it’s even possible. And I guess the brain is ultimately an electrical machine made up of all these firing neurons and I can’t see why we couldn’t recreate it. Doesn’t mean it would be conscious or human in any way. But, you know, I see the possibility. Or maybe I’ve just been talking to people in Silicon Valley for too long. (laughter)
John Thornhill
So we can have an eternal Madhu.
[MUSIC PLAYING]
In our next episode of this season of Tech Tonic, we delve into one of the largest unanswered questions in all of science.
Interviewee
We don’t have a way of saying whether something actually is conscious or not.
John Thornhill
Could artificial intelligence one day be conscious?
Interviewee
I think AI consciousness is certainly possible, and I would be surprised if we don’t have plausibly conscious AI systems by the end of this decade.
John Thornhill
That’s next time, on the final episode of our series. You’ve been listening to Tech Tonic from the Financial Times. I’m John Thornhill.
Madhumita Murgia
And I’m Madhumita Murgia. Our senior producer is Edwin Lane. Our producer is Josh Gabert-Doyon. Our executive producer is Manuela Saragosa. Sound design and mixing by Breen Turner and Samantha Giovinco. Original music by Metaphor Music. Cheryl Brumley is our head of audio. If you want to read more about the world of AI on FT.com, we’ve made some stories free to read. Just follow the links in the show notes.