Mike O’Donnell is a professional director, writer and strategy adviser, and a regular opinion contributor.
OPINION: One thing that I have learned in 10 years of writing a weekly business column is that you can’t tell what readers will love and what leaves them cold.
You can spend many hours labouring and fine-tuning an 800-word opinion piece into what my old friend Pattrick Smellie describes as “words of spun gold”, only to have it fall flat, without a ripple. Or worse, be blindsided by new facts after publishing that make you look like a schmuck.
On the other hand, often a quick and spontaneous column, submitted in a hurry as you try to fit your writing around a myriad of other tasks, connects with people, and you get a truckload of response.
READ MORE:
* Revolutionary AI is coming for a job near you, could it do yours?
* ChatGPT is coming to Slack, and it will help write your messages
* Investors are going nuts for ChatGPT-ish artificial intelligence
That’s what happened last week, when my quick download of observations from the South By South West conference in Austin resulted in a deluge of texts, emails and calls.
It seems artificial intelligence (AI), and more specifically generative AI which takes a command and generates synthetic content (be it text, image or audio), is a digital scab that everyone wants to pick at.
My column last week resulted in speaking invites, consulting offers and even the offer of a trip overseas.
It also resulted in a good set of disbelievers who reckon that while generative AI is a party trick with language, it will never take off.
I guess the news for them is that it already has. All the major search engines are now generative AI-enabled to some extent, and the delineation between search engines and AI bots will largely disappear.
In retail, it’s already being used. It was announced this week that clothing maker Levi Strauss is partnering with an AI company.
Levi’s has confirmed that, for its new brochureware and e-commerce operations, it will supplement human models with AI-generated models to show off its clothes. Talking of clothes, Levi’s has clothed the move as part of a digital journey of diversity, inclusion and equity.
But if you think about it, really they are choosing to use AI to generate the illusion of inclusion rather than paying non-traditional models to be the real thing. Which is weird.
Meanwhile, the BBC is using AI to swap the faces of anti-government protesters in places like Hong Kong, to protect identities while maintaining all the facial and lip movements and expressions.
Effectively the AI delivers a digital mask that emulates but hides a person’s identity. Sure beats pixelisation.
There are already whole businesses built around selling photographs of people who do not exist – businesses like This Person Does Not Exist or Generated.photos – which give you synthetic humans.
GPT-4 scores better on law exams, and can tell you what to cook from a picture of your fridge, AI commentator Paul Duignan says. (Photo: David Unwin)
Generated.photos is particularly impressive as you dial in your preferences around gender, ethnicity, age and emotion. Then choose to make them more or less beautiful. God forbid.
But I challenge you to tell me that any of them don’t look as real as a heart attack. That’s a sobering prospect if you currently run a modelling agency or work as a photographer.
Rather than handwringing at the horror, an interesting question is what will the rules of engagement be in this new reality?
What’s your commercial approach to enabling informed use but preventing abuse?
Let’s start with sourcing. When do you declare that generative AI was used in the creation of content, an idea or a plan?
If you are a consultant working for a big consulting house like Deloitte or PwC and you use ChatGPT or Bard to suggest some reading on a job you are doing for a client, do you have to declare it? Probably not.
Or if you use it to critique a draft set of recommendations or a plan for a client, do you have to declare that? Maybe.
Then, if you use AI to do the first draft of the plan for the client, do you have to declare that? Almost certainly.
The problem is that between these three examples there are 50 shades of grey, and knowing when to declare your sourcing is not so straightforward.
Likewise, if you are a photographer, and you use an image generator like Dall-E or Midjourney to run an emotional filter over the images you take, do you have to declare that? Or is that just going to be the way professional image makers work?
Another question is around fact-checking. If you are going to use generative AI to create content for commercial purposes, do you fact-check the results every time before publishing? If so, do you need to disclose that it has been fact-checked?
Then there is the question of biased or stereotypical results: specifically, how do you account for, and mitigate, bias?
I watched an example of this at South By South West, when a speaker struggled to get Dall-E to find a non-male chief executive when asked for chief executive images. Then, when it did finally offer one up, it was the “CEO Barbie” doll.
As always human use of a tool will precede the development of business rules around that use. Then the courts will try to establish some precedents (good luck with that).
Meanwhile, if you run a business that produces content of any sort, there’s a pretty good chance your people are already using generative AI. So it’s probably time to pull the finger on your approach to use and abuse.