By Siddharth Pai
About a month has passed since the kerfuffle at OpenAI that saw Sam Altman booted out as its boss, only to return triumphantly three days later. The last bastion of ‘responsible’ artificial intelligence development appears to have fallen.
Let me explain: OpenAI began as a non-profit research lab whose mission was to safely develop AI at or beyond human level, often called artificial general intelligence or AGI (the “singularity” being the point at which AI surpasses human intelligence).
The emphasis was on “safety”, to avoid what historian Yuval Noah Harari warned of some years ago in the Financial Times: “Once big data knows me better than I know myself, authority will shift from humans to algorithms.”
The structure was governed by the original non-profit board, which answered only to the goals of the original mission and had the power to fire Altman, as he himself once said in an interview. The truth, as we have seen, is that Altman was not in fact replaceable by the board. Now that the old boss is back and most of the old board is gone, a new and probably more pliant board has taken over. This means we are in a free-for-all.
We are now back to the point where we must look to the voluntary safeguards put in place by these companies (OpenAI, Amazon and other such firms).
For instance, OpenAI has now announced early results from its “superalignment” team, its voluntary, in-house initiative to prevent a computer with superintelligence—one that can outsmart human beings—from going off unsupervised and creating harm.
Today, the primary method used to align these models (chiefly generative AI models such as ChatGPT and Google’s Gemini) is reinforcement learning from human feedback. Simply stated, human testers give the model high scores when they see output they like and low scores when the output seems subpar to them as human beings. This is partly what makes their output so engaging.
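For readers who want the intuition in concrete form, here is a minimal, hypothetical Python sketch of the idea: human ratings stand in for the feedback signal, a toy lookup function stands in for the learned reward model, and a simple selection step stands in for the policy update. None of the names, strings or numbers come from OpenAI; they are purely illustrative.

```python
# A toy sketch of reinforcement learning from human feedback (RLHF).
# Everything here is a hypothetical stand-in, not OpenAI code.

# Step 1: human testers rate candidate outputs (1 = poor, 5 = excellent).
human_ratings = {
    "Here is a clear, step-by-step answer to your question.": 5,
    "idk, google it": 1,
    "The answer is 42, and here is why that follows from your premises.": 4,
}

# Step 2: a crude "reward model" learned from those ratings. It just
# compares words against rated examples; real systems train a neural
# network so the reward generalises to outputs no human has scored.
def reward(text: str) -> float:
    scores = [r for rated, r in human_ratings.items()
              if any(word in rated for word in text.split())]
    return sum(scores) / len(scores) if scores else 2.5  # neutral default

# Step 3: the model proposes candidates and is steered towards the one
# the reward model prefers -- a stand-in for the actual policy update.
candidates = [
    "idk, google it",
    "Here is a clear, step-by-step answer to your question.",
]
best = max(candidates, key=reward)
print("Preferred output:", best)
```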
The issue with this method, according to OpenAI’s superalignment team, co-led by Ilya Sutskever, OpenAI’s chief scientist (who was supposedly behind the board’s push to fire Altman), is that it requires human beings to tell the model what is desirable and what is undesirable in the first place. The team believes that a superintelligent model will eventually go off and do things that no human is able to score.
The team proposes to use older models to supervise newer ones. So, for example, instead of human beings, GPT-2 (a five-year-old product) would supervise GPT-4. The flaw in this reasoning is plain to see, but according to OpenAI’s superalignment team, if this could be pulled off, it may be evidence that similar techniques could let humans supervise superhuman models. I’m still foxed; maybe you can understand this reasoning better than I do.
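To make the weak-to-strong idea concrete, here is a toy, hypothetical sketch: a crude heuristic plays the part of the weak supervisor (GPT-2 in OpenAI’s analogy), a more capable rule plays the strong student (GPT-4), and we check whether the student ends up more accurate than the supervisor that taught it. This is my illustration of the intuition, not OpenAI’s method or code.

```python
# Toy weak-to-strong supervision: can a strong student, guided only by a
# weak supervisor's imperfect labels, still beat that supervisor?

# Ground truth: is a number prime? (the task the strong model must learn)
def is_prime(n: int) -> bool:
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

data = list(range(2, 200))

# Weak supervisor: a crude heuristic that gets many cases wrong.
def weak_label(n: int) -> bool:
    return n % 2 == 1 and n % 3 != 0      # "odd and not divisible by 3"

# Strong student: defers to the weak supervisor's labels, except where its
# own greater capability (trial division by small primes) says otherwise.
def strong_predict(n: int) -> bool:
    if any(n % d == 0 for d in (2, 3, 5, 7)) and n not in (2, 3, 5, 7):
        return False                       # student's own stronger reasoning
    return weak_label(n)                   # otherwise follow the supervisor

def accuracy(predict) -> float:
    return sum(predict(n) == is_prime(n) for n in data) / len(data)

print(f"weak supervisor accuracy: {accuracy(weak_label):.2f}")
print(f"strong student accuracy:  {accuracy(strong_predict):.2f}")
```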
Apart from such internal and voluntary methods, there is nothing else seeking to rein in the horrors that unbridled AI development may rain on us (save a few bleatings by governments).
Now, hope is centred on the European Union’s AI Act, which would require extra checks for “high risk” uses, those with the most potential to harm humans. Europeans led the way on data privacy legislation a few years ago, and the hope is that they will light this technology path yet again. Two and a half years after it was mooted, after gruelling negotiations and much back and forth, the EU’s lawmakers have reached a deal that will move the AI Act into law. As with data, the Europeans will become the world’s guard dogs around AI. The rest of us will follow.
Not to be outdone, the US also issued a proclamation on October 30, 2023, in the form of an executive order. I say proclamation because the US’s legislative bodies are in polarised disarray and can barely stay in business, let alone ratify laws, and a change of President next year could well topple the current executive order.
That said, in the current executive order, President Joe Biden says: “In the end, AI reflects the principles of the people who build it, the people who use it, and the data upon which it is built.” He has also said “We need to govern this technology. There’s no other way around it in my view. It must be governed.”
The executive order makes many statements like these and touts voluntary safeguards, but in my opinion, somewhat like my columns, it lacks enough specifics on how the rules should be enforced.
AI can become a horror long before we reach the point of singularity, which is when machines theoretically become smarter than humans. Jobs are already at risk.
According to The Independent, the UN says military drones may have already attacked humans without being instructed to. As Ifeoma Ajunwa, an expert in this field, says, “The actual present danger is not AI becoming too intelligent. It’s more that humans are using AI in ways that are counter to our democratic beliefs about equal opportunity and equal protection.” And that’s exactly the point. Let’s have humans govern humans first—before we worry about machines governing machines.
(The author is a technology consultant and venture capitalist)