While debate continues to rage about the ethics of using artificial intelligence across the industrial gamut, a new study suggests UK consumers are increasingly on board with the technology. Almost three-quarters of UK residents would trust AI to provide them with written content, health tips and relationship advice.
Artificial intelligence is a field combining computer science and large datasets to enable machines to solve problems. Mainstream media debate has tellingly shifted to sensationalist headlines about future ‘robot uprisings’ – but for now at least, the most overt threat the machines pose is to people’s jobs. Increasingly, the hype around generative AI is being harnessed by bosses looking to push down the pay demands of staff – particularly in the creative sector – thanks to AI’s ability to ingest old information and rephrase it, creating quick ‘new’ content at a fraction of the price.
According to a new study from Capgemini, the applications of generative AI are also being embraced by many consumers, who are using the technology to churn out academic essays and work reports, search for information, or even seek out life advice. A resounding 74% of UK consumers would trust AI to undertake such tasks – ahead of the global average of 73% – according to Capgemini’s survey.
This is a marked change from the recent findings of KPMG – with the Big Four firm suggesting most consumers would only be comfortable using AI in functions it is already associated with, like Alexa-style voice assistants. Meanwhile, more than a fifth of consumers would not use it for anything at all.
In the case of Capgemini’s survey of 10,000 people across Australia, Canada, France, Germany, Italy, Japan, the Netherlands, Norway, Singapore, Spain, Sweden, the US and the UK, however, most consumers were much more adventurous with their AI usage. At present, only 10% of respondents said they had never used AI tools.
Meanwhile, the majority of consumers had already used AI creatively. Some 52% said they had used models like ChatGPT to generate emails, essays, stories and poems – far ahead of any other use for AI. The next most common use was ‘creative brainstorming’ – using the AI to generate prompts that could inspire human creativity – while 23% used it to find information on topics such as science, history, business or technology. Much further down the list, only 16% were using it to fine-tune their content, and just 11% to analyse text to better understand it.
This may be a sign that initially cautious consumers have gone too far in the other direction – suggesting they are simply happy to let AI do all the work, and take its creations at face value. In the context of an essay, for example, this may mean in the best-case scenario they put out accurate content they don’t understand, and in the worst, they put out inaccurate information without being aware of it. To that end, 49% of consumers said they were unconcerned that AI might generate ‘fake news’ in its content, and only 33% were concerned that generated products did not recognise the contribution of artists, writers and experts whose works were fed into AI’s algorithms.
The issue with this – aside from it debatably being a form of intellectual property theft – is that a large number of AI users are uncritical of where their information might have come from. With generated text allowing users to avoid citing sources, verifying its authenticity and holding inaccurate sources accountable become increasingly difficult. According to Capgemini, this is something consumers and companies alike must keep in mind, as they rush to deploy AI technologies in their daily lives.
“The awareness of generative AI amongst consumers globally is remarkable, and the rate of adoption has been massive, yet the understanding of how this technology works and the associated risks is still very low,” commented Niraj Parihar, CEO of the insights and data global business line at Capgemini. “Generative AI is not ‘intelligent’ in itself; the intelligence stems from the human experts who these tools will assist and support. The key to success, therefore, as with any AI, is the safeguards that humans build around these tools to guarantee the quality of their output.”