OpenAI’s announcement this week of an enterprise-grade version of its ChatGPT service reflects a reality about the spread of this year’s hottest new technology: if workers are tapping into AI-powered chatbots to help with everyday tasks, it makes sense to adapt them to appeal to risk-averse corporate IT departments.
The safe-for-work version of ChatGPT, however, only scratches the surface of the benefits workers could get from using generative AI, and raises an interesting question about the way the technology will make itself felt in working life.
Will the rapid improvements being recorded by general-purpose AI models such as those from OpenAI make them increasingly effective, even if they haven’t been given specific training on the business task at hand? Or will the biggest advances come from companies shaping the technology to their own ends by training their own, targeted AI models using their own data?
Based on the email accounts used to sign up, OpenAI estimates that employees at 80 per cent of large US companies already use ChatGPT while at work. That is despite the fact that many companies have told their workers not to use the chatbot, fearful that a system that learns from the queries it receives will suck up their corporate data and might leak it to others.
Like other consumer internet services that creep into work life, ChatGPT also lacks many business-grade attributes, from a guaranteed level of availability and speed to the tools that IT departments need to monitor and control how the service is being used.
The version of ChatGPT launched this week goes some way to dealing with these issues. OpenAI said that it would not use business data or conversations at organisations using the service to train its AI models — implying that this is not the case with its free consumer service and a $20-a-month enhanced version. The new service also has greater capabilities, including the ability to handle longer responses and to understand more of the context around a query.
Yet generic models trained on mountains of data scraped from the internet can only go so far. They don’t have the expert knowledge to make them useful in particular settings, whether that is finance or healthcare. They also lack the specific insight into an organisation that would come from being trained on its internal processes or details about its products and customers.
Rob Thomas, head of software at IBM, says this points to a paradox about the use of generative AI in business. The race is on to make general-purpose language models more useful, building ever-larger models and training them on ever-bigger mountains of data. But when it comes to dealing with specific tasks inside a company, according to Thomas, “the narrower the model, the more accurate and the better [it is]”.
That requires retraining a general-purpose, or “base”, model with a company’s own data and giving it a deeper understanding of the domain it needs to work in.
Not surprisingly, when AI systems are trained with critical data and take on important tasks, this quickly becomes a board-level concern. The reaction from most companies considering generative AI, according to Thomas at IBM, is to try to bring the AI model “on premise” — in other words, keep it inside a company’s own data centre. Issues of governance — from understanding what data a model has been trained on to dealing with potential bias — also loom large.
Yet as their workers develop a familiarity with AI-powered chatbots such as ChatGPT in their personal lives, most employers will probably struggle to stop them bringing the technology to work, creating a ready market for an enterprise-grade service.
One question this raises is how far OpenAI will go in trying to become an enterprise technology company. The business software market represents an attractive source of revenue for a company that has been burning through billions of dollars (though it has not published pricing details for its new service).
Yet adapting its products further for business use, as well as developing the sales and service capabilities to deal with business customers, would amount to a major new initiative for a young company that is already stretched. Following ChatGPT’s millions of users to the office makes sense, but is unlikely to turn OpenAI into the next power in enterprise technology.
richard.waters@ft.com