The writer is founder of Sifted, an FT-backed site about European start-ups
The failings of OpenAI’s corporate governance regime have been laid brutally bare in recent days. On Friday, the four independent board members of the world’s hottest start-up fired its chief executive, Sam Altman, for misleading them. A new interim CEO was appointed and then almost immediately ditched for another.
But by Tuesday, following howls of protest from OpenAI’s employees and investors, the board concluded that Altman was in fact trustworthy enough to be reinstated as chief executive, and three board directors quit instead. Only a figure skater on speed could compress more pirouettes into such a short routine.
It would be tempting to conclude that OpenAI’s experiment of having a not-for-profit board, responsible for ensuring the safe development of AI, overseeing a for-profit commercial business, should be scrapped. Microsoft, which has invested $13bn in OpenAI, has already called for the governance structure to change. And OpenAI’s rapidly reconstituted three-man board, which now includes a former chief executive of Salesforce and a former US Treasury secretary, seems better suited to carrying out traditional fiduciary responsibilities.
Yet it would be a tragic mistake to abandon attempts to hold the leadership of the world’s most important artificial intelligence companies to account for the impact of their technology. Even the bosses of those companies, including Altman, accept that, despite AI’s immense promise, it also poses potentially catastrophic risks.
One entrepreneur who is close to OpenAI says the board was “incredibly principled and brave” to confront Altman, even if it failed to explain its actions in public. “The board is rightly being attacked for incompetence,” the entrepreneur told me. “But if the new board is composed of normal tech people, then I doubt they’ll take safety issues seriously.”
The old board certainly took those responsibilities seriously. According to several reports, one former member, Helen Toner, a Georgetown University academic, told OpenAI executives that the board’s mission was to ensure that AI “benefits all of humanity” even if it meant destroying the company. OpenAI’s organisational “kill switch” would now appear to have fused.
At the core of the recent turmoil is the tension between the commercial drive of any company to make money and concerns about the collateral damage that technology can cause.
OpenAI was explicitly founded in 2015 on the promise of prioritising safety over profit. But the company quickly realised that if it wanted to attract the best researchers and develop the biggest AI models, it would need access to colossal amounts of capital and computing power. It therefore created a capped-profit business arm that made OpenAI investable by venture capital firms and Microsoft. The increasing commercialisation of OpenAI appeared to prompt some senior researchers, led by Daniela and Dario Amodei, to quit the company in 2020 and found Anthropic as a public-benefit corporation.
Still, most of OpenAI’s employees and investors cheered Altman’s return. As the former head of Y Combinator, which has incubated many of Silicon Valley’s most successful start-ups, Altman has an impressive fan club. The applause of OpenAI’s employees was no doubt amplified by the prospect of cashing in secondary share sales near the company’s $86bn pre-turmoil valuation.
Altman also has a magnetic ability to motivate employees, mobilise capital and cultivate powerful allies. As Paul Graham, the co-founder of Y Combinator, once said: “Sam is extremely good at becoming powerful.”
Yet corporate America’s history has repeatedly shown the dangers of over-mighty CEOs and compliant boards. And even the most well-intentioned corporate boss might cut corners for the glory of winning the race to develop human-level AI.
Next year, OpenAI may well release an even more powerful generative AI model, GPT-5. At OpenAI’s developer day this month, Altman said: “What we launch today is going to look quaint relative to what we’re busy creating for you now.”
In March, hundreds of researchers warned in an open letter of the “dangerous” arms race that was developing in AI and called for a pause in the development of frontier models until stronger governance could be put in place. Since then, the race has only accelerated. OpenAI’s revamped board has little time to show it is up to the challenge of aligning innovation, profit and safety.