Protecting Public Figures and Artists' Likeness in the Age of AI

Lauren Hendrickson
February 5, 2025

Key Takeaways:

  • AI’s ability to generate hyper-realistic images, videos, and other content is challenging traditional notions of ownership and authenticity. Artists and public figures are particularly vulnerable to exploitation in this new landscape.
  • Licensing provides a way for creators to control how their likeness is used across various media and ensures they are compensated for its commercial use. Without proper licensing, artists risk losing income and facing damage to their reputation.
  • Tools that promote transparency, alongside strong regulations, will be critical in protecting creators’ rights and ensuring the ethical use of AI-generated content.


AI’s ability to generate hyper-realistic images and videos is more than just a new development. It raises important questions about ownership, authenticity, and the protection of individual identity. For artists and public figures, the stakes are higher than ever. In the age of AI, the creative industry is changing rapidly, altering the way we think about ownership and creativity.

In our previous article, “How AI is Affecting the Music Industry,” we looked at how AI is transforming music creation. But its impact doesn’t stop there. AI is now making its mark on other parts of the creative industry, pushing boundaries and prompting real change. From realistic avatars to deepfake videos, AI has the power to disrupt on a large scale. This article explores what steps we can take to address these challenges.

What are Public Figures’ and Artists’ Likenesses?

A public figure’s and artist’s likeness refers to the visual, auditory, or even behavioral representations of their identity that are recognized by the public. This can include:

  1. Visual Likeness: This typically includes an individual’s face, body, or other distinguishing features that are recognizable and associated with that person. It can also involve their style, appearance, or unique mannerisms.
  2. Auditory Likeness: AI can mimic or clone a person’s voice to replicate how they sound.
  3. Behavioral Likeness: AI can replicate or manipulate a public figure’s or artist’s mannerisms, speech patterns, and other characteristics, especially in digital formats.

For artists, likeness also extends to their art style, signature, and public perception in their creative work. Protecting these likenesses is crucial because they are tied to personal branding, reputation, and intellectual property rights. In the age of AI, unauthorized use of an artist’s or public figure’s likeness, such as through deepfakes, can distort their image and deprive them of potential revenue.

Protecting Public Figures and Artists from AI Exploitation

The market for AI-generated images, art, and avatars is projected to exceed $0.92 billion by 2030, emphasizing the growing importance of controlling and licensing likenesses. However, over 96% of deepfake content today involves the non-consensual use of individuals’ likenesses, often for exploitative purposes. Unauthorized use, if left unregulated, could severely damage personal brands and compromise the creative industry’s integrity, depriving artists of income generated from their own likenesses. We are already seeing how deepfakes are impacting media authenticity daily, highlighting the urgent need for protections.

Licensing likenesses for legitimate purposes ensures creators are compensated for the commercial use of their image, voice, or video. It guarantees that the rightful owner benefits from their likeness. Without clear licensing agreements, artists and public figures risk significant financial loss and damage to their personal brand. As AI technology advances, the importance of licensing increases, ensuring fair use and protecting creators from exploitation.

By licensing their likeness across various media forms—such as voice, images, and video—creators can maintain control over how their identity is used, ensuring fair compensation. Without this framework, creators are more susceptible to exploitation, misrepresentation, and financial loss as AI technologies evolve.

Why Licensing Likeness is Essential for Fair Compensation and Control

Licensing likenesses is crucial for protecting creators’ intellectual property and ensuring fair compensation. Key benefits include:

  • Control and Integrity: Licensing gives artists and public figures control over how their likeness is used, allowing them to approve or deny specific uses. This helps prevent misuse that could damage their brand or reputation.
  • Fair Compensation: Licensing guarantees that creators are compensated for the commercial use of their likeness, ensuring they are rewarded for the value their identity generates.
  • Opportunity in AI: Licensing also opens up opportunities for artists to leverage AI technology for collaboration and income generation, while maintaining control over how their likeness is used.

By licensing their likeness, creators protect their identities and maximize their financial opportunities, ensuring fair compensation while maintaining control over their personal brand.

Public Figures and Artists Collaborating with AI

Not all artists are opposed to working with AI; many are open to it, provided the right measures are in place. They recognize the role AI will play in the future and are willing to embrace it, as long as they are fairly compensated and retain control over how their likeness is used.

Ethical AI Initiatives Protecting Artists’ Likeness 

Companies are recognizing the potential of AI and stepping up with solutions to protect artists’ likenesses. Here are some noteworthy examples of companies leading the way in safeguarding creators’ identities:

1. YouTube’s DreamTrack: Empowering Artists with AI-Powered Music Creation

YouTube, in collaboration with DeepMind (Google’s AI research division), introduced DreamTrack, one of the first public collaborations between artists and AI. DreamTrack is an AI-powered tool that lets creators generate unique soundtracks for their YouTube Shorts, using the voices of popular artists. By working directly with artists, YouTube ensures that their likenesses are used with permission, and that they are fairly compensated for their participation. With a major platform like YouTube leading the way, this initiative sets a precedent for future collaborations in the industry. Moreover, AI-generated soundtracks produced through DreamTrack are clearly labeled as such, promoting transparency and helping to inform listeners.

2. ElevenLabs: Ethical AI Use in Audio

While YouTube’s DreamTrack is an innovative step forward, ElevenLabs, a startup focused on the audio world, also takes a strong ethical stance regarding AI technology. Their platform prioritizes the safe use of AI for generating audio content while ensuring artists’ likenesses and voices remain protected. ElevenLabs’ commitment to ethical practices sets a standard for emerging AI platforms. By collaborating with artists, they safeguard their rights and promote the responsible use of AI-generated content.

3. FameFlow: Celebrity Likeness in Marketing

FameFlow offers a unique platform that combines celebrity likenesses with AI technology for marketing purposes. Trusted by over 400 celebrities, FameFlow allows advertisers to select their preferred celebrity for a campaign. FameFlow uses AI image and video generation tools to create advertisements, but celebrities have the final say, approving commercials before publication. By allowing celebrities to control how their likeness is used, FameFlow protects their identities while giving businesses a new way to leverage celebrity authority to boost sales.

Identifying AI-Generated Content Through Digital Watermarking

Before the advent of AI, every piece of digital content was a clear representation of human creativity and ingenuity. However, this is no longer the case. AI technology now plays a key role in generating art, music, videos, and images, blurring the lines between what is human-created and machine-made. As a result, distinguishing between real and manipulated content has become increasingly difficult. This has created a need for tools like digital watermarks to separate “human reality” from “AI manipulation.”

A digital watermark is an embedded code or identifier, often invisible to the naked eye, that indicates the origin or author of digital content. These watermarks serve as a “digital fingerprint,” allowing artists, platforms, and consumers to verify the source and monitor usage. They also help prevent unauthorized reproductions. Watermarking works by embedding metadata or imperceptible patterns within content. Robust watermarks are designed to resist typical editing operations such as resizing, compression, or filtering, so they persist across multiple formats.
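
To make the idea of an imperceptible pattern concrete, here is a toy sketch of least-significant-bit (LSB) watermarking, one of the simplest embedding techniques. The function names and pixel values are illustrative, not from any real watermarking product, and a plain LSB scheme would not survive compression; production systems use more robust frequency-domain or spread-spectrum methods.

```python
def embed_watermark(pixels, bits):
    """Hide a bit string in the least significant bits of grayscale pixel values.

    Toy illustration only: changing each pixel by at most 1 keeps the
    watermark invisible, but this simple scheme is fragile to compression.
    """
    marked = list(pixels)
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & ~1) | int(bit)  # overwrite the pixel's LSB
    return marked

def extract_watermark(pixels, length):
    """Read the hidden bit string back out of the first `length` pixels."""
    return "".join(str(p & 1) for p in pixels[:length])

# Usage: hide the 8-bit tag "10110010" in a row of grayscale pixel values.
original = [200, 13, 77, 154, 92, 61, 240, 33]
marked = embed_watermark(original, "10110010")
print(extract_watermark(marked, 8))  # recovers "10110010"
```

Because each pixel value changes by at most 1, the marked image is visually indistinguishable from the original, which is what makes such patterns “invisible to the naked eye.”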

Applications of Watermarking in Various Media Forms

Digital watermarking is not limited to one type of content. Creators and organizations can apply it across various media to ensure authenticity and maintain transparency, especially as AI-generated content becomes more prevalent.

1. Watermarking for Images

In images, watermarks can take the form of imperceptible pixel patterns or metadata tags that specify the creator’s identity or the AI model used to generate the content. According to a 2024 survey by Deloitte, over 70% of consumers expressed concern about the authenticity of AI-generated content, highlighting the need for clear differentiation between AI-generated images and human creations. Tools like Stable Diffusion embed metadata (watermarks) in images, which can help protect against likeness infringement and maintain journalistic integrity, especially when used in legal contexts. With 87% of Americans reporting difficulty distinguishing facts from fiction in online AI content, the need for watermarking is more urgent than ever.
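
The metadata-tag approach mentioned above can be sketched in a few lines. This is a simplified illustration of the underlying idea, not the actual format used by Stable Diffusion or the Content Authenticity Initiative: a manifest records the creator and the generating model, and a hash binds that manifest to the exact content bytes. Real standards such as C2PA additionally sign the manifest cryptographically so the tag itself cannot be forged.

```python
import hashlib
import json

def tag_content(content: bytes, creator: str, generator: str) -> str:
    """Produce a provenance tag that binds metadata to the content's hash."""
    manifest = {
        "creator": creator,
        "generator": generator,  # e.g. the AI model that produced the content
        "sha256": hashlib.sha256(content).hexdigest(),  # ties tag to the bytes
    }
    return json.dumps(manifest, sort_keys=True)

def verify_tag(content: bytes, tag: str) -> bool:
    """Check that the content has not been altered since it was tagged."""
    manifest = json.loads(tag)
    return manifest["sha256"] == hashlib.sha256(content).hexdigest()

# Usage: tag some image bytes, then detect tampering.
image_bytes = b"\x89PNG...example image data"
tag = tag_content(image_bytes, "Jane Artist", "ExampleDiffusion v2")
print(verify_tag(image_bytes, tag))         # True: untouched content passes
print(verify_tag(image_bytes + b"x", tag))  # False: any edit breaks the check
```

The hash is what gives the tag its value in disputes: it proves the metadata describes this exact file, which is why such tags can help in legal contexts around likeness infringement.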

2. Watermarking for Videos

For videos, companies like Truepic are using watermarking to identify manipulated or AI-generated content. YouTube has explored using watermarking systems for AI-generated videos to ensure transparency, while platforms like Adobe’s Content Authenticity Initiative (CAI) employ watermarking to tag AI-generated images, videos, and designs, ensuring the authenticity of visual content. 

Additionally, platforms like TikTok are addressing the need for transparency by labeling AI-generated videos. Although this isn’t traditional watermarking, clearly labeling synthetic media allows users to easily identify AI-generated content. This enhances trust and transparency while helping consumers make informed decisions.

3. Watermarking in Music

In music, creators embed watermarks that carry information about the composer, production platform, and licensing terms within the audio file. For example, Audible Magic adds fingerprints or watermarks to audio files, enabling content identification and tracking for royalty reporting and payment administration.

AI-Related Regulations and Their Gaps in Addressing Artists’ Likeness

As AI technology rapidly evolves, existing laws struggle to keep pace with the challenges posed by AI-generated likenesses. Below are some of the U.S. laws that address content creation, including AI-generated media, and the gaps that leave artists and public figures vulnerable:

  • Copyright Act: The Copyright Act protects creators’ rights to their original works, but it does not extend to AI-generated content. Since AI creations do not meet traditional human authorship requirements, AI-generated likenesses fall outside the scope of copyright law. As a result, artists’ and public figures’ likenesses are left unprotected from unauthorized AI use.
  • Federal Trade Commission Act: This act addresses deceptive practices, including AI-generated content. However, its protections are limited to commercial contexts and do not fully cover the unauthorized use of individuals’ likenesses in AI media. This leaves artists and public figures unprotected in non-commercial contexts or instances where their likeness is exploited without consent.
  • Right of Publicity (State Laws): The “Right of Publicity” laws in 36 states grant individuals control over the use of their likenesses, allowing them to protect against unauthorized commercial exploitation. However, these laws do not specifically address AI-generated likenesses, creating inconsistent enforcement across states. Consequently, artists and public figures face fragmented protection, with AI-generated content often remaining unregulated.
  • Deepfake Legislation (State Laws): Several states have enacted deepfake legislation to address issues such as election interference, defamation, and fraud. While these laws offer some protection to public figures, they do not comprehensively cover the broader issue of AI-generated likenesses. Furthermore, the variation in state laws leaves gaps in how AI-generated likenesses are treated, resulting in uneven protections for artists and public figures.

Emerging Legislation to Protect Public Figures and Artists

Recognizing the gaps in existing laws, new regulations are being introduced to better protect artists and public figures from the misuse of their likenesses in AI-generated content. These emerging laws aim to close the gaps and establish clearer protections.

1. Ensuring Likeness, Image, and Voice Security (ELVIS) Act

Enacted in Tennessee on March 21, 2024, the ELVIS Act is one of the first state laws to specifically address the unauthorized use of a person’s likeness, image, and voice, especially in AI-generated content. The act expands the scope of Tennessee’s Personal Rights Protection Act of 1984. It provides protections against the unauthorized replication of a person’s voice using AI. Violations of the ELVIS Act can result in both civil and criminal penalties, including the possibility of a Class A misdemeanor charge for unauthorized use.

2. No Artificial Intelligence Fake Replicas And Unauthorized Duplications (No AI FRAUD Act)

In January 2024, Rep. Salazar introduced the No AI FRAUD Act, a federal bill aimed at addressing unauthorized AI-generated impersonations of public figures, artists, and everyday individuals. The bill would criminalize the creation and distribution of AI-generated content that mimics someone’s likeness or voice without consent. By establishing national protections against deceptive AI content, it aims to safeguard public figures and protect consumers from scams and identity theft.

3. Nurture Originals, Foster Art, And Keep Entertainment Safe (NO FAKES Act)

The NO FAKES Act, a bipartisan bill introduced in 2024, would prohibit the unauthorized use of a person’s voice or likeness using generative AI in commercial, political, or deceptive contexts. The bill would give artists and public figures control over their digital personas and protect them from exploitation, while also shielding the public from misleading AI-generated content.

4. California’s Assembly Bill 2602 (AB 2602)

California passed AB 2602 in September 2024, requiring explicit contracts for the use of a performer’s digital replica. This ensures transparency and consent in how their likeness is used. The law also mandates that artists have professional representation when negotiating such contracts, helping protect them from exploitation and preventing the unauthorized or misleading use of their digital personas. The law took effect on January 1, 2025.

5. California’s Assembly Bill 1836 (AB 1836)

AB 1836, passed alongside AB 2602, extends protections to deceased individuals by prohibiting the use of their digital likenesses without consent from their estate. This law ensures that artists and public figures maintain posthumous control over their digital legacy, preventing exploitation and providing families and estates with the legal tools to protect their loved one’s likenesses.

The Path Forward In Protecting Creativity in the AI Era

To truly protect creativity in the age of AI, there must be a balance between technological advancement and regulation. While tools like watermarking and labeling AI-generated content are already in place, their effectiveness relies on strong, clear regulations to support them. Governments must step in to enforce accountability and establish clear penalties for misuse. It’s not enough for organizations to simply adopt these practices; they must also be held accountable for how they develop and distribute AI tools. Transparency should be the gold standard for both content creators and the platforms that distribute their work. Without clear regulations, the line between authentic and AI-generated content will continue to blur, leaving artists, public figures, and the public at large increasingly vulnerable to exploitation.

Identity.com

Identity.com helps businesses provide their customers with a hassle-free identity verification process through our products. We envision a user-centric internet where individuals maintain control over their data, and this commitment drives Identity.com to actively contribute to that future through innovative identity management systems and protocols.

As members of the World Wide Web Consortium (W3C), we uphold the standards for the World Wide Web and work towards a more secure and user-friendly online experience. Identity.com is an open-source ecosystem providing access to on-chain and secure identity verification. Our solutions improve the user experience and reduce onboarding friction through reusable and interoperable Gateway Passes. Please get in touch for more information about how we can help you with identity verification and general KYC processes using decentralized solutions.
