OpenAI gives veto power to board on risky AI



OpenAI has expanded its internal safety process to tackle the threat of harmful artificial intelligence (AI) as governments look to clamp down on the technology.

The Microsoft-backed start-up, maker of the hugely successful chatbot ChatGPT, said it would establish a dedicated team to oversee technical work, along with an operational structure that allows the board to reverse safety decisions.

“We are creating a cross-functional safety advisory group to review all reports and send them concurrently to leadership and the board of directors. While leadership is the decision-maker, the board of directors holds the right to reverse decisions,” the company said on its website on Monday.

The company updated its “preparedness framework,” saying it would also invest in diligent capability evaluations and forecasting to better detect emerging risks. It also said it would continually update “scorecards” for its models.

“We will evaluate all our frontier models, including at every 2x effective compute increase during training runs. We will push models to their limits,” OpenAI said.

The potential dangers of the technology are a concern to the public, politicians and AI researchers as generative AI can spread disinformation.

OpenAI, which is the most valuable start-up in the United States, was thrown into turmoil in November when its co-founder and CEO Sam Altman was removed by the board and the majority of the company’s staff threatened to quit unless he was reinstated.

Several days before Altman was fired by the board, several staff researchers wrote a letter to the board of directors warning of a powerful AI discovery that could threaten humanity, Reuters reported in November, citing two people familiar with the matter.

Governments around the world are starting to regulate the technology, with the European Union becoming the first to agree on a comprehensive AI rulebook.
