YouTube said Tuesday that, following a pilot phase, its likeness detection technology has officially rolled out to creators eligible for the YouTube Partner Program. The technology allows creators to request the removal of AI-generated content that uses their likeness.
This is the first wave of the rollout, a YouTube spokesperson told TechCrunch, adding that affected creators received an email this morning.
YouTube’s detection technology identifies and manages AI-generated content that features a creator’s likeness, such as their face or voice.
This technology is designed to prevent people from having their likeness misused to endorse products or services they have not agreed to support or to spread misinformation. There have been numerous examples of AI likeness abuse in recent years, such as Elecrow using an AI clone of YouTuber Jeff Geerling’s voice to promote its products.

The company provided instructions on how creators can use the technology on its Creator Insider channel. To begin onboarding, creators go to the Likeness tab, consent to data processing, and use their smartphone to scan the QR code displayed on screen. Scanning the code opens a web page where the creator verifies their identity, which requires a photo ID and a short selfie video.
Once YouTube grants access to the tool, creators can view all detected videos and submit removal requests under YouTube’s privacy guidelines, or file copyright requests. There is also an option to archive a video.

Creators can opt out of using this technology at any time, and YouTube will stop scanning their videos 24 hours after opting out.
The likeness detection technology has been in trials since the beginning of this year. YouTube first announced last year that it had partnered with Creative Artists Agency (CAA) to help celebrities, athletes, and creators identify content on the platform that uses AI-generated versions of their likeness.
In April, YouTube announced its support for the NO FAKES Act, a bill that would address AI-generated replicas that imitate a person’s image or voice to deceive others or produce harmful content.