Opinion: The conflict at the heart of OpenAI

Sam Altman, then-CEO of OpenAI, attends the Asia-Pacific Economic Cooperation (APEC) CEO Summit in San Francisco on Nov. 16. CARLOS BARRIA/Reuters

Jordan Jacobs is co-founder and managing partner of Radical Ventures, an AI-focused venture capital firm.

The news of Sam Altman’s sudden firing from OpenAI (and his move to Microsoft Corp. MSFT-Q, a major backer of the organization) caught much of the tech world by surprise. But on closer inspection, the roots of this fissure were always visible.

At its heart, OpenAI’s problem is not just about how much attention to pay to safety (that is, alignment with human values) and thus how fast or slow to go in commercializing its products. The tensions at the board level that resulted in Mr. Altman’s ouster reflect a misalignment between the organization’s stated mission and the subsequent commercial execution that created ChatGPT and a company worth tens of billions of dollars. (OpenAI Inc., a non-profit research organization, operates the for-profit OpenAI Global LLC.)

OpenAI’s mission, as defined by its charter, is to build artificial general intelligence (AGI) and to ensure it “benefits all of humanity.” It is not a mission that can be easily reconciled with commercializing AI products, which is why OpenAI was initially designed as a non-profit research organization rather than a company.

As OpenAI pursued AGI, it became apparent that the compute costs necessary to train large language models (LLMs) at huge scale would require funding that could perhaps only come from investors seeking a commercial return. (Of course, opening ChatGPT to the world also has the benefit of generating user feedback that helps train the models at scale.)

OpenAI’s efforts have had extraordinary impact in accelerating global understanding and adoption of AI and have likely reshaped the technology landscape forever. However, the speed and focus needed to win the ultracompetitive race to create LLM products and platforms that benefit the company come at the expense of a mission that is a research-focused quest to achieve AGI that benefits all.

In other words, how do you reconcile winning the AI products and platform race for the benefit of one company while at the same time trying to create AGI that is beneficial for everyone? When viewed through this lens, the boardroom drama reflects this contradiction between stated mission and practical execution.

The tension that led to Mr. Altman’s unexpected departure was not a sudden rupture, but rather the culmination of an inherent misalignment. The conflict underscores the challenge faced by companies striving to align their mission with their execution – a challenge not unique to OpenAI. We may see similar upheavals at companies where inconsistencies between the original mission and the practical execution create cracks within teams whose members joined for different purposes. Those cracks may, over time, widen into chasms.

When investing, we think a lot about a founding team’s mission and the alignment of that mission with how things will unfold practically in building a business. It is not impossible to have a seemingly utopian mission inside a company. But it does require a lot of thought and communication to reach a common understanding among the founding team (and subsequent investors and other stakeholders) about how the mission will be realized and, in the case of very capital-intensive businesses, what you must be prepared to do in order to raise that capital.

We began working with the founders of the AI startup Cohere before its incorporation in 2019, and we had already perceived this conflict at the heart of OpenAI, one we thought could be very hard to reconcile over the long term. Cohere’s mission was therefore defined very clearly: to make transformer-based generative language AI safely and securely available to businesses globally, and to build a cloud-agnostic platform and products in service of that mission. So, while the company is deeply committed to building safe and responsible AI, there is no tension between its stated mission and the work it pursues.

For startups, the “why” of the mission cannot conflict with the “what” and “how” of its execution. To succeed at building a generational business, a team must be unified in believing that what and how they are building supports why they are building it. It is not impossible to realign when these two core elements fall out of sync (something I’ve experienced as an AI company founder). But getting back in sync is unlikely to be a smooth ride for anyone involved.

Starting with that alignment – and ensuring that challenges are met with constant open communication – helps to give a team the unified clarity of purpose that is foundational to success.
