Oh, Good, OpenAI’s Biggest Rival Has a Weird Structure Too

This article is from Big Technology, a newsletter by Alex Kantrowitz.

Life got interesting for Anthropic two weeks ago, when OpenAI nearly lit itself on fire. Anthropic had been operating comfortably in OpenAI’s shadow, collecting billions in investment from Amazon, Google, and others as it developed similar technology with an increased focus on safety. Then, as the chaos rolled on, companies that built their products entirely on top of OpenAI’s GPT-4 model looked for a hedge. And Anthropic was there, waiting for them.

Anthropic is now in prime position to take advantage of OpenAI’s misstep, in which the nonprofit board that controls the company fired CEO Sam Altman, only for him to be rehired five days later. But Anthropic has its own untraditional board structure to contend with. The company is a public benefit corporation, with a board that serves its shareholders. But a separate “long-term benefit trust” will select most of its board members over time, with a mandate to focus on A.I. safety. The trust has no direct authority over the CEO, but it can influence the company’s direction, setting up another novel governance structure in an industry now painfully aware of them.

“The LTBT could not remove the CEO or president (or any other employee of Anthropic),” an Anthropic spokesperson told me. “The LTBT elects a subset of the board (presently one of five seats). Even once the LTBT’s authority has fully phased in and it has appointed three of five director seats, the LTBT would not be able to terminate employees.”

Several OpenAI employees left in late 2020 to start Anthropic. With serious technical ability and concern about the dangers of A.I., the group raised $7 billion, expanded to around 300 employees, and built Claude, an A.I. chatbot and underlying large language model. Anthropic now works with 70 percent of the largest banks and insurance companies in the U.S. and has high-profile clients including LexisNexis, Slack, and Pfizer. It announced billion-dollar investments from Google and Amazon this fall.

The founders of Anthropic claim to be even more concerned with safety than OpenAI, but they were aware of the pitfalls of their ex-employer's board structure. So they created a traditional board accountable to shareholders and installed the LTBT to pick board members, a departure from OpenAI's nonprofit model.

The trust consists of "five financially disinterested members" there to help "align our corporate governance with our mission of developing and maintaining advanced AI for the long-term benefit of humanity," the company said. Effectively, it's an effort to sync Anthropic's governance with its mission while insulating the company from dogmatic chaos.

“Stability is a key,” said Gillian Hadfield, a University of Toronto law professor and ex–OpenAI policy adviser who spoke with Anthropic as it was structuring the trust. “They don’t want their company to fall apart.”

The trust is not risk-free. Board members will have responsibilities to shareholders, but they won’t easily forget those who nominated them and why they did it. They’ll have to find a way to balance the two. The structure should make Anthropic more stable than OpenAI but not entirely immune to a repeat of the Altman situation.

"Could you see it happen with Anthropic? Yes, I think we could," Hadfield said. "I'm proud and supportive of the fact that these companies are thinking deeply and structurally about the potential risks. And they're thinking about 'How would we distribute the benefits?' "

Anthropic's leadership is also close to the effective altruism movement, which has ties to ex-FTX CEO Sam Bankman-Fried (who directed his crypto hedge fund to invest in Anthropic, likely with his customers' funds), as well as to some of the board members who ousted Altman two weeks ago. The long-term benefit trust has at least two members connected to effective altruism. Paul Christiano, the founder of the Alignment Research Center, is a prolific writer on EA forums. Zach Robinson, the interim CEO of Effective Ventures US, runs a firm tied directly to the movement.

Many effective altruists subscribe to a philosophy called longtermism, which holds that the lives of people deep in the future are as valuable as lives today. So they tend to approach A.I. development with exceptional caution. The theory sounds righteous on the surface, but its critics contend that it's hard to predict the state of the world generations from now, which can lead longtermists to act rashly.

Yale Law School's John Morley, who helped design the trust's structure, declined to comment. Amazon declined to comment. Google didn't respond to a request for comment. Amy Simmerman, a partner at Wilson Sonsini Goodrich & Rosati who also worked on developing the trust, didn't respond either.

Anthropic’s governance should be stable enough to make customers feel comfortable working with the company, at least in the coming years. That’s a significant benefit after OpenAI’s chaos showed the risks of relying on a single A.I. model. And those betting on the company seem to be aware of its structure and happy it’s in place, even if it adds some uncertainty.

“This long-term benefit trust is a little bit different. My sense is there’s some level of security in implementing something like that,” said Paul Rubillo, an investor who participated in Anthropic’s Series C round in May. “We’re in uncharted waters, right?”
