Identity theft, non-consensual porn content, scams, misinformation… Is it possible to use AI deepfake technology for something other than causing harm?
The dual potential of deepfake technology is “like talking about knives,” according to John Egan, the CEO of L’Atelier BNP Paribas.
“If we’re only discussing knife crime, we’re not going to talk about how knives are used in the kitchen for cooking, right?”
Egan discusses the two facets of this emerging technology in a new episode of Euronews Tech Talks.
John Egan is the CEO of L’Atelier BNP Paribas, a company specialising in quantitative foresight, solutions, and strategic intelligence for European governments. It assists in formulating emerging technology policies and determining capital investments aimed at technology deployment.
Challenging grief and death
In a 2013 “Black Mirror” episode, a character named Martha grapples with loss by interacting with a digital replica of her deceased partner.
Merely a decade later, the line between fiction and reality has blurred, with examples such as Kim Kardashian sharing a video showcasing a remarkably lifelike digital version of her late father, Robert Kardashian.
Bringing your loved ones back digitally is one possible application of deepfake technology.
“There are lots of commercial opportunities,” says Egan.
For example, he envisions a global company using AI to create a multilingual bot, facilitating seamless communication across offices worldwide.
Beyond business, Egan proposes an AI assistant for older people that replicates a familiar face, providing reminders ranging from medication prompts to home security checks.
“So there are plenty of different positive use cases for this type of technology that extend beyond just entertainment and translation and the kind of fundamental corporate needs right into the very personal and kind of intimate familiarity associated with friendship and combating endemic loneliness,” says Egan.
Protecting against threats
In an era of easily accessible and cost-effective deepfakes, Egan warns of the rapid growth of potentially misleading or harmful information.
As he points out, the sex industry, historically an early adopter of new technologies, provides a stark example of how quickly these advancements can become commercially viable.
For instance, deepfakes have been increasingly used to create porn content without people’s consent.
How do we protect against these threats? There’s no single solution, says Egan, but there are ways to mitigate the risk. For instance, he suggests becoming more effective at managing your online profile.
“The less information that’s out there about you, the less there is to take advantage of,” he says.
“Avoid putting your kid’s information online, including photos and videos. Be more effective at protecting those spaces. On an individual level, that’s about as much as you can do.”
In addressing the prevalence of deepfakes, Egan offers practical advice on spotting them, emphasising clues such as incongruity in skin tone, unusual blinking patterns, and peculiar facial features.
However, he acknowledges the evolving sophistication of these manipulations, particularly in low-fidelity versions.
“We need to understand that the pace of change is unsettling for most, except for those who are driving it at that moment,” he says.
Journalist • Marta Rodriguez Martinez