The stock prices are soaring. Everyone is still amazed by the way generative AI algorithms can whip up impressive artwork in any style and then turn on a dime to write long essays with great grammar. Every CIO and CEO has a slide or three in their deck ready to discuss how generative AI is going to transform their business.
The technology is still in its infancy, but the capabilities are already undeniable. The next wave of computing will involve generative AI, probably in several places along the workflow. The ride is going to be unstoppable.
What could possibly go wrong? Well, many things. The doomsayers imagine the total destruction of the economy and the enslavement of humans along with a good fraction of the animal world, too.
They’re probably hyperventilating. But even if the worst cases never come to pass, it doesn’t mean that everything will be perfect. Generative AI algorithms are very new and evolving rapidly, but it’s already possible to see cracks in the foundation. Look deeply into the algorithms and you’ll find places where they fail to deliver on the hype.
Here are seven of the dark secrets of generative AI algorithms to keep in mind when planning how to incorporate the technology into your enterprise workflow.
They conjure mistakes out of thin air
There’s something almost magical about the way large language models (LLMs) write 1,000-word essays on obscure topics like the mating rituals of sand cranes or the importance of crenellations in 17th century Eastern European architecture. But the same magic also enables them to conjure up mistakes out of nothing. They cruise along, conjugating verbs and deploying grammar with the skill of a college-educated English major. Many of the facts are completely correct. Then, voilà, they make something up like a fourth grader trying to fake it.
The structure of LLMs makes this inevitable. They use probabilities to learn just how the words go together. Occasionally, the numbers choose the wrong words. There’s no real knowledge or even ontology to guide them. It’s just the odds and sometimes the dice rolls come up craps. We may think we’re mind melding with a new superior being but we’re really no different from a gambler on a bender in Las Vegas looking for a signal in the sequence of dice rolls.
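To make the dice-roll analogy concrete, here is a minimal Python sketch of how next-token sampling works. The vocabulary and probabilities are toy values invented for the example, not anything a real model produced; the point is only that sampling occasionally surfaces a low-probability token, which is how a fluent sentence can veer into fabrication.

    import random

    def sample_next_token(probabilities, temperature=1.0):
        # Temperature reshapes the distribution: higher values flatten it,
        # making unlikely (often wrong) tokens more likely to be picked.
        weights = {tok: p ** (1.0 / temperature) for tok, p in probabilities.items()}
        tokens = list(weights)
        return random.choices(tokens, weights=[weights[t] for t in tokens])[0]

    # Toy distribution for the next word in "Sand cranes mate in the ___"
    next_token_probs = {"spring": 0.70, "wetlands": 0.25, "opera": 0.05}

    # Most samples say "spring," but every so often the dice land on "opera."
    print([sample_next_token(next_token_probs, temperature=1.2) for _ in range(10)])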
They are data sieves
Humans have tried to create an elaborate hierarchy of knowledge in which some details are known only to insiders and some are shared with everyone. This wishful hierarchy is most apparent in the military’s classification system, but many businesses have one as well. Maintaining these hierarchies is often a real hassle for the IT department and the CIOs who manage them.
LLMs don’t do so well with these classifications. While computers are the ultimate rule followers and can keep catalogs of almost infinite complexity, the structure of LLMs doesn’t really allow for some details to be secret and some to be shareable. It’s all just a huge collection of probabilities and random walks down the Markov chains.
There are even creepy moments when an LLM will glue together two facts using its probabilities and infer some fact that’s nominally secret. Humans might even do the same thing given the same details.
There may come a time when LLMs are able to maintain strong layers of secrecy, but for now the systems are best trained with information that’s very public and won’t cause a stir if it leaks out. Already there are several high profile examples involving company data leaks and LLM guardrails being circumvented. Some companies are trying to turn AI into a tool to stop data leaks but it will take some time before we understand the best way to do that. Until then, CIOs might do better to keep a tight leash on the data that’s fed to them.
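One way to keep that leash tight is to scrub prompts before they ever reach an outside model. The sketch below is a bare-bones illustration of the idea; the patterns, the project codename, and the function name are all invented for the example, and a real deployment would need far more than a handful of regular expressions.

    import re

    # Illustrative patterns only; the project codename is hypothetical.
    REDACTION_PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "CODENAME": re.compile(r"\bProject Falcon\b", re.IGNORECASE),
    }

    def redact(prompt: str) -> str:
        # Replace anything matching a known-sensitive pattern before it leaves the building.
        for label, pattern in REDACTION_PATTERNS.items():
            prompt = pattern.sub(f"[{label} REDACTED]", prompt)
        return prompt

    print(redact("Summarize Project Falcon's Q3 numbers for jane.doe@example.com"))
    # -> "Summarize [CODENAME REDACTED]'s Q3 numbers for [EMAIL REDACTED]"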
They proliferate laziness
Humans are very good at trusting machines, especially if they save work. When the LLMs prove themselves correct most of the time, the humans start to trust them all the time.
Even asking humans to double-check the AIs doesn’t work too well. Once humans get used to the AIs being right, they start drifting off and simply trust that the machines will be right.
This laziness spreads through the organization. Humans stop thinking for themselves, and eventually the enterprise sinks into a low-energy stasis where no one wants to think outside the box. It can be relaxing and stress-free for a bit, until the competition shows up.
Their true cost is unknown
No one knows the correct cost of using an LLM. Oh, there’s a price tag for many of the APIs that spells out the cost per token, but there are indications that the amount is heavily subsidized by venture capital. We saw the same thing happen with services like Uber: the prices were low until the investors’ money ran out, and then they soared.
There are signs that today’s prices aren’t the prices that will eventually dominate the marketplace. Renting a good GPU and keeping it running can be much more expensive than those per-token rates suggest. You can save a bit of money by running your LLMs locally on a rack of video cards, but then you lose the advantages of turnkey services, like paying for the machines only when you need them.
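A rough back-of-the-envelope comparison shows why the answer is murky. Every number in this sketch is a placeholder assumption, not a quoted price; the point is only that the math flips depending on utilization, because idle GPUs keep billing by the hour while metered APIs do not.

    # Every number below is a placeholder assumption; plug in your own quotes.
    api_price_per_1k_tokens = 0.002      # hypothetical metered rate, in dollars
    tokens_per_month = 500_000_000       # hypothetical workload

    gpu_hourly_rate = 2.50               # hypothetical rental price for one GPU
    gpus_needed = 4                      # hypothetical cluster for the same workload
    hours_per_month = 24 * 30

    api_cost = tokens_per_month / 1_000 * api_price_per_1k_tokens
    gpu_cost = gpu_hourly_rate * gpus_needed * hours_per_month

    print(f"Metered API: ${api_cost:,.0f} per month")
    print(f"Rented GPUs: ${gpu_cost:,.0f} per month, running around the clock")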
They are a copyright nightmare
There are some nice LLMs on the market already that can handle general chores like doing high school homework assignments or writing college admissions essays that emphasize a student’s independence, drive, writing ability, and moral character — oh, and their ability to think for themselves.
But most enterprises don’t have these kinds of general chores for AI to undertake. They need to customize the results for their specific business. The basic LLMs can provide a foundation but there’s still a great deal of training and fine-tuning needed.
Few have figured out the best way to assemble this training data. Some enterprises are lucky enough to have big datasets they control. Most, however, are discovering that the legal issues around copyright are far from settled. Some authors are suing because they weren’t consulted about their writing being used to train an AI. Some artists feel plagiarized. Issues of privacy are still being sorted out.
Can you train your AI on your customers’ data? Are the copyright issues settled? Do you have the right legal forms in place? Is the data available in the right format? There are so many questions that stand in the way of creating a great, customized AI ready to work in your enterprise.
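On the format question, at least, there is a common convention to start from: many fine-tuning pipelines expect example pairs packaged as JSON Lines. The sketch below assumes a simple prompt/completion schema and made-up records; the exact field names and file layout vary by vendor, so check yours before committing to a format.

    import json

    # Field names follow a common prompt/completion convention; real vendors
    # may want chat-style roles or different keys. The records are made up.
    training_examples = [
        {"prompt": "Summarize ticket #4521 for the support dashboard.",
         "completion": "Customer reports a login loop after a password reset; escalated to the auth team."},
        {"prompt": "Draft a renewal reminder for an enterprise customer.",
         "completion": "Your annual plan renews on the 1st. Reply to this note to review options."},
    ]

    # One JSON object per line, the usual JSONL layout for training files.
    with open("finetune_train.jsonl", "w", encoding="utf-8") as f:
        for example in training_examples:
            f.write(json.dumps(example) + "\n")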
They may invite vendor lock-in
In theory, AI algorithms are generalized tools that have abstracted away all the complexity of user interfaces. They’re supposed to be standalone, independent, and able to handle whatever life, or the idiot humans they serve, throws their way. In other words, they’re not supposed to be as rigid and inflexible as an API. In theory, this means it should be easy to switch vendors quickly because the AIs will just adapt. There won’t be a need for a team of programmers to rewrite the glue code and do all the things that cause trouble when it’s time to switch vendors.
In reality, though, there are still differences. The APIs may be simple, but they still differ, down to the JSON structures used for invocations. The real differences, however, are buried deep inside. Writing prompts for generative AIs is a real art form, and the AIs don’t make it easy to get the best performance out of them. There’s already a job description for smart people who understand the idiosyncrasies and can write better prompts that deliver better answers. Even if the API differences are minor, the quirks of prompt structure make it hard to switch AIs quickly.
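A thin adapter layer can at least contain the API differences, even if it can’t contain the prompt-engineering quirks. The sketch below invents two payload shapes for hypothetical vendors A and B; neither matches any real provider’s API, but it shows where the vendor-specific glue code tends to accumulate.

    from dataclasses import dataclass

    @dataclass
    class CompletionRequest:
        system: str             # instructions that frame the task
        user: str               # the actual question or document
        max_tokens: int = 256

    def to_vendor_a(req: CompletionRequest) -> dict:
        # Hypothetical vendor A wants one flat prompt string.
        return {"prompt": f"{req.system}\n\n{req.user}", "max_length": req.max_tokens}

    def to_vendor_b(req: CompletionRequest) -> dict:
        # Hypothetical vendor B wants role-tagged messages.
        return {"messages": [{"role": "system", "content": req.system},
                             {"role": "user", "content": req.user}],
                "max_tokens": req.max_tokens}

    req = CompletionRequest(system="Answer in one sentence.", user="What is a Markov chain?")
    print(to_vendor_a(req))
    print(to_vendor_b(req))

Even with an adapter like this in place, the prompts themselves usually need retuning whenever the model underneath changes.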
Their intelligence remains shallow
The gap between a casual familiarity with the material and a deep, intelligent understanding has long been a theme in universities. Alexander Pope wrote, “A little learning is a dangerous thing; / Drink deep, or taste not the Pierian spring.” That was in 1709.
Other smart people have noted similar problems with the limits of human intelligence. Socrates concluded that for all his knowledge, he really knew nothing. Shakespeare thought that the wise man knows himself to be a fool.
The list is long, and most of these insights into epistemology apply, in one form or another, to the magic of generative AI, often to a much greater extent. CIOs and tech leadership teams have a difficult challenge ahead of them. They need to leverage the best that generative AIs can, well, generate, while avoiding the intellectual shoals that have long been a problem for intelligences of every kind, whether human, alien, or computational.