AI Companies Can Define Their Public Purpose Through Governance


Artificial intelligence, particularly generative AI, is mostly developed, owned, and controlled by a small, powerful group of private companies operating under US laws and management structures. This means US corporate governance is crucial to regulating AI in a safe manner that broadly benefits humanity.

To meet this challenge, lawmakers should build on existing US corporate governance infrastructure. In addition to federal regulatory guidelines, corporate governance should de-emphasize shareholder primacy and embrace a stakeholder model akin to the Delaware public benefit corporation, or PBC.

Current US corporate governance—the laws, duties, organizational structures, incentives, and practices that allow a corporation to fulfill its purpose—is largely designed around the Nobel laureate economist Milton Friedman’s concept of shareholder primacy. Simplified, this is the notion that a corporation’s principal purpose should be wealth generation for its owners or shareholders.

But an alternative theory—stakeholder capitalism—is gaining prominence in management strategies, investment philosophies, and governance structures. Built on the work of Edward Freeman at the University of Virginia’s Darden School of Business, stakeholder capitalism recognizes the interconnected relationships vital to a healthy company, economy, and society. It holds that every company depends on serving its unique network of stakeholders, which might include founders, management, employees, customers, users, suppliers, partners, local or impacted communities, or even the natural environment.

This is far from a novel theory. Many senior corporate executives and investors—including Marc Benioff of Salesforce and Larry Fink of BlackRock—openly discuss the importance of the stakeholder approach to managing operations and assets, as well as risk. Benioff summarized in the latest annual Salesforce Stakeholder Impact Report that “when we focus on stakeholder value as well as shareholder value, our companies will be more successful, our communities will be more equal, our societies will be more just, and our planet will be healthier.”

State lawmakers also have worked to evolve corporate purpose. Delaware’s PBC offers a worthwhile framework for how directors and executives can embed stakeholder governance. Established in 2013, the PBC form has proven quite workable, even popular. Mainstream investors have poured billions of dollars into PBCs, and there are nearly two dozen publicly traded PBCs on US stock exchanges—including Allbirds, Warby Parker, Lemonade, Planet Labs, and Laureate Education—and more if including public companies with subsidiary PBCs.

Among a PBC’s central features is the obligation that its directors and managers consider the “interests of those materially affected by the corporation’s conduct” and balance them with the interests of its shareholders and the public benefits identified in the company’s charter. Under PBC rules, there is no fiduciary obligation to prioritize one set of interests over another so long as decisions are rational, informed, and disinterested—that is, free of conflicts of interest.

As a result, PBC governance structure actualizes the theory of stakeholder capitalism through a relatively simple procedure—requiring directors and managers to identify and communicate with stakeholders and allowing them to raise issues that fall outside the scope of traditional fiduciary duties.

The success of PBCs suggests that a path toward reasonable AI governance can be built around transparency and trust. Using the PBC as a model, companies that use or produce generative AI technologies should be required to build internal management and oversight systems to scope impacts and identify materially affected stakeholders; establish metrics to track and mitigate harms; gather objective information from those metrics; and report to shareholders, government, and the public.

This isn’t unworkable. Anthropic, a generative AI company formed in 2021 as a PBC, embraces stakeholder capitalism with a public mission to “build reliable, interpretable, and steerable AI systems.” Anthropic’s value has skyrocketed; it recently entered a $4 billion partnership with Amazon.com Inc. and is reportedly considering other billion-dollar deals.

To Anthropic, the PBC requirements are a strength, not a weakness. The company says the PBC structure “gives our board the legal latitude to weigh long- and short-term externalities of decisions—whether to deploy a particular AI system, for example—alongside the financial interests of our stockholders.”

No jurisdictional guidance exists yet on how to balance these interests when they come into tension. But failure to perform this consideration and balancing process may subject Anthropic to shareholder litigation. Furthermore, Anthropic has established a separate Delaware purpose trust, aligned with Anthropic’s public purpose, that holds special governance rights to ensure that the company is adhering to its PBC obligations. How this structure operates in practice remains to be seen.

As society grapples with momentous changes driven by novel AI technologies, we have an opportunity to re-conceive corporate purpose. Building directly from the foundation of Delaware’s PBC model, any new corporate governance rules applied to AI companies should incorporate the proven, tangible benefits of stakeholder capitalism—and AI companies may find that doing so provides a competitive advantage.

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.

Author Information

Jesse Finfrock is an attorney and adviser who teaches social enterprise law at the University of California Berkeley School of Law.


