Silicon Valley has entered the Hail Mary phase of its business cycle — the stretch deep in a tech-industry downturn where desperation can turn into recklessness.
The biggest players of the last decade are facing an existential crisis as their original products lose steam and seismic shifts in the global economy force them to search for new sources of growth. Enter generative AI — algorithms like the viral program ChatGPT that seem to mimic human intelligence by spitting out text or images. While everyone in Silicon Valley is suddenly, ceaselessly talking about this new tech, it is not the kind of artificial intelligence that can power driverless cars or Jetsons-like robot servants, or bring about the singularity. The AI that companies are deploying is not at that world-changing level yet, and candidly, experts will tell you it's unclear if it ever will be. But that hasn't stopped the tech industry from trying to ride the wave of excitement and fear around this new innovation.
As soon as it was clear that OpenAI, the creator of ChatGPT, had a cultural hit, it was off to the races. Hoping to cash in on the craze, Microsoft poured $10 billion into OpenAI in January and launched an AI-powered version of its search engine, Bing, soon after. Google has scrambled to keep up, launching its own AI chatbot, Bard, in March. Nearly every other major tech company has followed suit, insisting that its business will be at the forefront of the AI revolution. Venture capitalists — who've been miserly with their money since the market turned last year — have started writing checks for AI startups. And in a surefire sign that something has exploded beyond recognition, Elon Musk started claiming the whole thing was his idea all along.
All of this hype is more of a billionaire ego brawl than a real revolution in technology, an AI-startup consultant and longtime researcher told me; they spoke on the condition of anonymity in order to discuss products in development candidly. "I hate to frame the story as another gang of bros, but that's what OpenAI is," they said. "They're going through riffs and tiffs." To get a piece of that sweet AI-craze money, even the most powerful tech moguls are trying to make it seem as if their company is the real leader in AI, embracing the timeless truth passed down by Will Ferrell's fictional race car driver Ricky Bobby: "If you ain't first, you're last."
Wall Street, never one to miss a trend, has also embraced the AI hype. But as Daniel Morgan, a senior portfolio manager at Synovus Trust, said in an interview with Bloomberg TV, “This AI hype doesn’t really trickle down into any huge profit growth. It’s just a lot of what can happen in the future.” AI-driven products are not bringing in big bucks yet, but the concept is already pumping valuations.
That is what makes the hype cycle a Hail Mary: Silicon Valley is hoping and praying that AI hype can keep customers and investors distracted until their balance sheets can bounce back. Sure, rushing out an unproven new technology to distract from the problems of the tech industry and global economy may be a bit ill-advised. But, hey, if society suffers a little along the way, well — that’s what happens when you move fast and break things.
Don’t fear the robots
To understand the Hail Mary moment, it’s important to understand the actual capabilities of technology these tech titans are touting. Companies are claiming AI-powered tech can revolutionize everything from travel to dating. And every CEO trying to sell investors on the AI future is playing up its supposedly fearsome power.
Take Sundar Pichai, the CEO of Google's parent company, Alphabet. He gave a rare in-depth interview to CBS' "60 Minutes" and leaned heavily into the potential for AI to teach itself to think and move like a human. He made it sound as if the technology is advancing so fast that Google, one of the world's richest and most powerful companies, is helpless in the face of it: It's coming whether humans want it or not.
Pichai's proclamations were light fare compared to what Musk has to say about AI. Even though he helped start OpenAI in 2015, Musk exited the board in 2018 and missed out on the company's ChatGPT explosion. Left out in the cold, the Tesla-Twitter-SpaceX CEO went on Tucker Carlson's recently departed Fox News show in April to tell the world that the former Google CEO Larry Page — a former friend — is trying to make a godlike AI that could destroy civilization. Musk also emphasized that he alone would create a more responsible version of AI, never mind those other guys. Again, what Musk is describing is artificial general intelligence — something much more advanced than the generative AI that OpenAI is building at the moment.
Contrary to these claims of earth-shattering tech, the current crop of AI products is fairly limited. They can operate as pseudo personal assistants, sell consumers more targeted ads, or teach themselves how to make computer programs and work processes more efficient. Instead of globe-shaking new ways of working, experts told me, expect to see more AI wingdings and widgets.
“We’re not there at all,” the AI startup consultant told me, referring to the predictions of human-surpassing general intelligence. “For them to take what we have now — they’re framing it as something scarier than it actually is.” But framing new tech as scary is more powerful — and more lucrative — than admitting that it is limited.
In the interview, Pichai had a hard time explaining why all this AI business was popping up now. The technology behind large-language models has been around since roughly 2018. And we know generative AI is imperfect: It tells lies, and it can be used as a tool to spread misinformation. Even Pichai could not assure the interviewer that it is 100% “safe.” Despite these limitations, it’s no accident that this technology is thundering toward popular commercialization right now. The release of ChatGPT turned AI into a buzzword, and there’s nothing Silicon Valley loves more than raising money on a buzzword.
When making money on awesome growth is impossible, Silicon Valley will settle for making money on awesome growth potential — regardless of how far in the future it may be and regardless of the consequences. This is what makes the sudden surge in AI interest actually dangerous: AI itself is a neutral power. Humans can use it for good or for ill. But the more quickly and carelessly it is scaled, the more likely it is to cause mayhem.
Desperate times
What has changed the state of play for generative AI is not technological advancement, but the advanced state of Silicon Valley’s malaise. The pandemic opened a spigot of cash for the sector as more people relied on tech products to get through isolation. Venture capitalists couldn’t spray their money around fast enough — any startup with crypto, blockchain, or metaverse in the name was moving to Miami and swimming in the warm, churning South Florida waters. Then the money dried up. Layoffs have swept the industry, even at blue-chip companies like Meta, Google, and Amazon. Tiger Global — a Wall Street hedge fund turned cash-drunk venture-capitalist pirate — has started unwinding its investments. In March, Silicon Valley Bank collapsed — along with smaller banks that served the tech community, such as Signature and Silvergate. Venture-capital funding is now at a six-year low. Thanks to higher interest rates, money is drying up all over the world, but nowhere is it drying up faster than in tech.
So far in 2023, tech stocks — especially big ones — have rallied from last year’s crash, but there are still clear signs that tech remains in a dry season. Taiwan Semiconductor Manufacturing Co., the world’s biggest chip producer, missed sales expectations in April. Since TSMC produces chips for everything from phones to missiles, sliding semiconductor demand could signal weakening consumer appetite for all kinds of technology. During the pandemic, when governments were handing out cash and people were stuck at home, the world bought everything Silicon Valley was selling. Now it isn’t.
During this dismal period, AI has been the one oasis in the tech desert. When Silicon Valley saw the explosive growth of ChatGPT — which became the fastest-growing consumer-tech tool in history — it realized that generative AI’s humanlike qualities are enough to draw curiosity and excitement from popular culture. That kind of attention can always be used to drag money out of investors, whether or not a product is particularly useful. So now the tech industry wants the world to believe the emerging technology could be used for anything from healthcare to human resources and from news reporting to writing legal drafts. This hope that AI is the “next big thing” has triggered familiar tales of FOMO across Silicon Valley. Venture capitalists told Fortune’s Anne Sraders that it’s hard to see “how much crazier it could get.” VC funding for most startups dried up between 2021 and 2022. AI investment, however — especially in early-stage companies — continued at a steady clip, falling only to $4.5 billion in 2022 from $4.8 billion in 2021. According to PitchBook, generative-AI investments totaled $1.6 billion in the first quarter of 2023.
And it’s not just the startups and venture capitalists betting on this emerging tech — it’s the big players, too. Beyond Microsoft’s giant OpenAI stake, Google invested $400 million in the ChatGPT rival Anthropic in its attempt to boost Bard. Meta is making noise about AI, too: Last month, its CEO, Mark Zuckerberg, published a note saying that AI was the “single largest investment” at the company (so much for the metaverse). Given the rapid deterioration of the tech industry’s fortunes, anything that promises to replicate the heady growth of the previous decade is more than welcome. From 2020 to 2022, Silicon Valley was drunk on low interest rates and bubble behavior. But now the party’s over, and AI is helping ease the hangover with a little hair of the dog.
Un peu pump
None of this is to say that AI isn’t useful or never will be. It’s just that — as a group of Stanford University researchers wrote in a 2021 report — “we currently lack a clear understanding of how they work, when they fail, and what they are even capable of due to their emergent properties.” That means everyone from researchers to developers is learning what AI products are capable of in real time, and the speed with which the products are being rolled out to the public could become a serious problem.
Most of the consumer-facing products you’re about to come across are built on large-language models, and the information the bots spit out is only as good as the information they ingest and the algorithms that interpret that information. For instance, models owned by Google, OpenAI, and Microsoft are using information from Reddit to teach themselves. That should be concerning to anyone who has read some of the more vicious, offensive, or juvenile material on that platform. It’s also concerning to Reddit’s executives, who seem to have just realized that they’ve been raising a golden goose and giving its eggs away for free.
The quality of large-language models also depends on the foresight of the people who make them. Neil Sahota, an IBM vet and AI advisor to the United Nations, told me that there’s actually little programming involved in making this tech. So the Silicon Valley titans who cut their teeth on programming — like the Zuckerbergs of the world — may not know as much about it as they think they do. OpenAI is literally offering to pay people up to $20,000 to find bugs in its systems. If that doesn’t tell you that this tech may not have been stress-tested for mass consumption, I don’t know what does.
Making a model that does more good than harm requires having a cross-disciplinary team of social scientists and ethicists on call. But those are exactly the types of workers getting laid off in Silicon Valley right now. And given their shrinking pool of money, big companies may not want to invest in the unsexy safeguards Sahota suggested during our talk: It’s possible, but expensive, to build in protocols that force the AI to lay out exactly how it’s operating and executing tasks, so we can get a clear picture of why the bots are doing what they do.
“It’s like doing a math test and just showing the answer, not showing your work,” he said. “And that’s a problem.”
The recklessness of bubble behavior only adds to the danger of taking generative AI to the commercial market. In their paper, the Stanford researchers warned that “poorly constructed foundations are a recipe for disaster.” But the Silicon Valley machine only knows one speed, and that’s growth at all costs. It sees no other option but an AI Hail Mary. And all of us will have to suffer the consequences for that.
Linette Lopez is a senior correspondent at Insider.