An anonymous reader quotes a report from Ars Technica: Getty Images CEO Craig Peters told The Verge that he has found a solution to one of AI’s biggest copyright problems: creators suing because AI models were trained on their original works without consent or compensation. To prove it’s possible for AI makers to respect artists’ copyrights, Getty built an AI tool using only licensed data, designed to reward creators more as the tool grows in popularity over time. “I think a world that doesn’t reward investment in intellectual property is a pretty sad world,” Peters told The Verge. The conversation happened at Vox Media’s Code Conference 2023, with Peters explaining why Getty Images — which manages “the world’s largest privately held visual archive” — has a unique perspective on this divisive issue.
In February, Getty Images sued Stability AI over copyright concerns regarding the AI company’s image generator, Stable Diffusion. Getty alleged that Stable Diffusion was trained on 12 million Getty images and even imitated Getty’s watermark — controversially seeming to lend Getty’s authenticity to fake AI images. Now, Getty has rolled out its own AI image generator, trained differently from most popular image generators. Peters told The Verge that because of Getty’s ongoing mission to capture the world’s most iconic images, “Generative AI by Getty Images” was intentionally designed to avoid the major copyright concerns swirling around AI images — and to compensate Getty creators fairly.
Rather than crawling the web for data to feed its AI model, Getty’s tool is trained exclusively on images that Getty owns the rights to, Peters said. The tool was created out of rising demand from Getty Images customers who want access to AI generators that don’t carry copyright risks. […] With that as the goal, Peters told Code Conference attendees that the tool is “entirely commercially safe” and “cannot produce third-party intellectual property” or deepfakes, because the AI model has no references from which to produce such risky content. Getty’s AI tool “doesn’t know what the Pope is,” Peters told The Verge. “It doesn’t know what [Balenciaga] is, and it can’t produce a merging of the two.” Peters also said that if there are any lawsuits over AI images generated by Getty, Getty will cover customers’ legal costs. “We actually put our indemnification around that so that if there are any issues, which we’re confident there won’t be, we’ll stand behind that,” Peters said.

When asked how Getty creators will be paid for AI training data, Peters said there currently isn’t a tool for Getty to assess which artist deserves credit every time an AI image is generated. “Instead, Getty will rely on a fixed model that Peters said determines ‘what proportion of the training set does your content represent? And then, how has that content performed in our licensing world over time? It’s kind of a proxy for quality and quantity. So, it’s kind of a blend of the two,'” reports Ars.
“Importantly, Peters suggested that Getty isn’t married to using this rewards system and would adapt its methods for rewarding creators by continually monitoring how customers are using the AI tool.”
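For illustration only: below is a minimal sketch of how a fixed revenue-share model like the one Peters describes might work, blending each contributor’s share of the training set with that content’s historical licensing performance. All names, numbers, and the 50/50 blend weight are assumptions; Getty has not published its actual formula.

```python
# Hypothetical sketch of a fixed revenue-share model: each contributor's payout
# weight blends (a) the proportion of the training set their content represents
# and (b) how that content has performed in licensing over time.
# Every name, figure, and the blend weight below is an assumption for illustration.

def payout_weights(contributors, blend=0.5):
    """Return each contributor's share of an AI-revenue pool.

    contributors: dict mapping name -> (training_set_share, licensing_revenue)
    blend: weight given to training-set share vs. licensing performance.
    """
    total_licensing = sum(rev for _, rev in contributors.values()) or 1.0
    weights = {}
    for name, (train_share, licensing_rev) in contributors.items():
        licensing_share = licensing_rev / total_licensing
        weights[name] = blend * train_share + (1 - blend) * licensing_share
    # Normalize so the shares sum to 1.0
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# Example: split a hypothetical $100,000 quarterly pool among three contributors.
pool = 100_000
shares = payout_weights({
    "alice": (0.20, 50_000),   # 20% of training images, $50k lifetime licensing
    "bob":   (0.05, 120_000),  # small volume, strong licensing history
    "carol": (0.75, 30_000),   # large volume, weaker licensing performance
})
for name, share in shares.items():
    print(f"{name}: {share:.1%} of pool -> ${pool * share:,.0f}")
```

The design choice matches Peters’ description of blending “quality and quantity”: training-set proportion rewards volume, while licensing performance stands in for the commercial value of a contributor’s work.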