CORPORATE STRATEGIES FOR AI SAFETY AND GOVERNANCE

Governments rarely agree, so it was quite a surprise to see 28 national governments leading in artificial intelligence (AI) sign a declaration to promote AI safety. The declaration follows on the heels of President Joe Biden’s Executive Order, which established new AI security standards. Despite the hype around AI, both governments and prominent figures in AI development, such as Elon Musk and Sam Altman, now acknowledge that AI poses grave risks to humanity.

Ritwik Gupta, an AI researcher and member of UC Berkeley’s Center for Security in Politics, told me he cautiously welcomed the move. “These agreements are a meaningful step forward towards the responsible governance of AI in society,” he said. “Among many things, the Executive Order instructs government agencies to monitor and guide activities that could impact national security.”

But are they enough to ensure AI safety? I’d argue not.

There are two reasons. First, these agreements are directed primarily at government leaders and AI developers, not at corporate leaders seeking to deploy AI in or through their businesses. It’s as if the agreements offer guidance on how to make hammers, but not on how to use them.

Second, these agreements generally focus on the technology itself rather than on the people the technology puts at risk, a gap that matters all the more because the technology changes so rapidly. For technology to serve society, people need to govern how it is used.

This article turns the spotlight on the corporate leaders who use or deploy AI and argues that any AI safety measure needs to put people at the center. It offers corporate leaders guidance on what AI safety is, why it matters, and how to manage AI safely.

WHAT IS AI SAFETY?

AI safety means ensuring that AI causes no harm to people or the planet. Breaches include generating and releasing false or distorted information, exposing private information, amplifying biases, and displacing jobs.

The newness of the technology makes AI safety risks particularly difficult for corporate executives to monitor, measure, and manage. New safety risks arise with the release of each new AI model.

Further, given the newness of the technology, executives find it hard to attribute the source of harms inflicted. Did the AI algorithm distort facts, or the person issuing the prompts? Did an employee lose his job because he lacked the skills needed, or because AI made him redundant? It’s hard to manage risks when the causes are elusive and unattributable.

WHY IS AI SAFETY A CONCERN FOR BUSINESS?

Few companies remain untouched by AI, which is disrupting businesses of all sorts, from high-tech firms such as software developers and banks to businesses in the ‘real’ economy, such as farming, mining, and manufacturing. What’s more, AI is touching every aspect of business, from production to internal operations to retail and the customer experience.

Corporate leaders are worried they will lose competitive advantage if they do not quickly deploy AI in their business. So, executives everywhere are scrambling to figure out ways to do so.

Yet if AI is not deployed well, companies open themselves to security risks, employee or customer backlash, and reputational damage. It is important for every company to deploy AI safely.

BEST PRACTICES FOR AI SAFETY

I started formulating this article by asking ChatGPT-4, “What are the best practices for deploying AI safely?” It was quick to offer advice to corporate leaders. Yet the advice it offered was, not surprisingly, generic.

ChatGPT-4 said corporate leaders should improve governance, assess risks, manage those risks, be transparent about what they are doing, and audit their practices. This advice could apply to managing safety in any business domain, from automobile manufacturing to financial reporting.

To deploy AI safely, people and planet must be placed at the center of the control mechanisms. Many AI safety risks arise because people and planet are an afterthought. And because AI technologies are changing so quickly, procedures and processes need to be agile and adaptive: new risks must be identified, and AI management processes adjusted, quickly.

Below, I offer three guiding principles.

1. Use AI to solve problems, not chase opportunities

Executives feel pressure to apply AI to their businesses. They are either excited about the potential opportunities or terrified of being left behind by the competition.

When I asked executives of a major pension fund what problem they hoped to solve with AI, they could not give me a straight answer. One executive said “lower costs” while another said “gain a competitive advantage.” But these aren’t problems; they are goals. So when I asked again what problem they were trying to solve, they started hunting for problems in the organization that AI could solve. They had the hammer and were looking for the nail.

Without a problem to solve, AI will inevitably create unintended consequences. It is in the deployment of new technologies that firms inadvertently create harm, as their enthusiasm about the opportunities blinds them to the costs.

AI needs to be seen as just one more tool in the toolbox, albeit a pretty powerful one. If the pension fund, for example, were trying to improve its research capabilities, its leaders would first need to figure out what the current problems in the research department are. It might turn out that the problems in the research group have nothing to do with AI and everything to do with weak management. Deploying AI would simply aggravate those problems.

2. Make AI critical to the corporation’s commitment to social responsibility

Most companies have departments dedicated to corporate social responsibility (CSR). If taken seriously, these departments are run by people who think carefully about how the company can operate responsibly, including tackling issues like sound labor practices, diversity initiatives, and carbon emissions.

Yet AI is often deployed by tech or innovation departments with little insight or oversight from CSR. By engaging their CSR departments, companies looking to deploy AI can ensure that people who understand responsibility systematically consider the implications of AI processes for people and planet.

When AI deployment is driven purely by business considerations, the company risks making mistakes that put it under the public spotlight. Google was criticized heavily when its lead of AI governance, Jen Gennai, pressured Google developers to relax safety standards in deploying AI. She wanted developers to treat AI deployment as experimentation, not product development. Had the CSR team been involved, it could have examined more carefully what Gennai was asking of developers and ensured her requests aligned with the company’s overall responsibility strategy.

By treating AI the same way it treats other strategic issues that can affect people and planet, the company is more likely to systematically explore new and emerging safety risks.

3. Embed AI safety into corporate culture

If AI safety is not front and center in executives’ and employees’ minds, it will inevitably take a back seat to business concerns. Gupta reinforces this point, noting that safety can easily be sidelined in the race for profit. He said that “when AI companies pay high wages and have a high burn rate, it’s not hard to imagine that the social mission can be subsumed by the desire to meet corporate objectives.”

To manage their applications of AI safely, corporate leaders must continuously reinforce the company’s commitment to people and planet. There are a number of ways to embed AI safety into corporate culture, including:

  • Making room in meetings for senior leadership teams and boards to discuss AI safety;
  • Creating multidisciplinary AI teams that involve both software engineers and social scientists; and,
  • Encouraging AI teams to take workshops and professional development programs on the emerging frontiers of AI safety.

All of these practices elevate the importance of AI safety and allow the company to move with agility within the changing AI landscape.

AI SAFETY IS NOT GOING AWAY

There is little doubt society is seeing just the front edge of AI development. While the future is uncertain, there is general consensus that more powerful, prescient models are around the corner.

Unless corporate leaders handle AI safety now, such developments will wreak havoc on humankind. On the other hand, well-managed artificial general intelligence could potentially help address some of humanity’s greatest challenges, such as public health crises, biodiversity loss, and climate change.

For AI to be safe, it must be developed and deployed with people and planet top of mind.
