Mon, Apr 6
Using celebrity images generated via LLMs for apps and services

Your AI Avatar App Is Training on Taylor Swift's Face


Optimist View

AI-generated celebrity likenesses democratize entertainment and marketing, allowing small developers to create engaging apps without expensive licensing deals. Technology companies argue these tools enable creative expression and accessibility, with platforms like Midjourney and DALL-E implementing safeguards against malicious deepfakes. The technology could revolutionize personalized content and reduce barriers for indie creators who previously couldn't afford celebrity endorsements.

Sources: Industry advocacy based on general AI platform policies

VS

Skeptic View

Celebrity AI image generation violates personality rights and creates legal problems similar to those in music copyright. Privacy advocates warn that these tools enable identity theft, non-consensual deepfakes, and economic harm to celebrities whose likenesses generate revenue without compensation. The same recognition failures likely affect image generation systems: The Verge reported in April 2026 that Suno struggles to prevent use of copyrighted music despite policies against it.

Sources: The Verge (April 05, 2026)

Industry Reality

Major AI companies quietly implement celebrity detection systems while publicly claiming their models don't train on copyrighted material, yet these safeguards routinely fail in practice. App developers exploit legal gray areas around transformative use and parody protections, knowing that most celebrities lack resources to pursue individual cases. The industry operates on a 'generate now, litigate later' model, similar to how music streaming platforms initially handled copyright.

Sources: Industry practices inferred from copyright enforcement patterns

What Your Feed Is Hiding

Celebrity AI image apps operate in the same broken enforcement environment that allows Suno's music copyright violations. While platforms claim robust detection systems, The Verge's April 2026 reporting on Suno revealed that 'no system is perfect' at recognizing copyrighted material, and the same technical limitations plague recognition of celebrity likenesses. The uncomfortable reality is that these apps rely on training data scraped from the open internet, meaning your favorite celebrity avatar app has almost certainly processed thousands of paparazzi photos, red carpet images, and social media posts without consent or compensation.

Key data: Suno's imperfect copyright recognition system failing to prevent copyrighted music usage despite explicit policies against it

Where They Actually Agree

Both technology advocates and privacy critics agree that current AI detection systems are fundamentally flawed and that existing copyright frameworks are inadequate for generative AI. All sides acknowledge that the technology has outpaced legal protections, creating a wild west environment where enforcement depends more on legal resources than actual rights violations.

Community Pulse

Should AI platforms be required to obtain explicit consent before generating celebrity likenesses?

AI-generated analysis based on published sources. TheOtherFeed does not take political positions.
