As AI-generated content posted online continues to grow, YouTube is working on new tools to give creators more control over content on the platform that copies their voice or likeness using generative AI.
The move builds on the updated content rules YouTube implemented in November 2023, which ask creators to be transparent when they use AI in their content.
In its announcement post, the video-sharing platform said the new likeness management technology will help creators safeguard their content, as part of its commitment to promoting responsible AI development.
YouTube described the first tool as synthetic-singing identification technology within Content ID, which will let creators automatically detect and manage content on YouTube that simulates their singing voices using generative AI.
Moreover, as the growing accessibility of generative AI music tools has stoked fears among artists about plagiarism and copyright infringement, over 200 artists wrote an open letter to YouTube earlier this year demanding that it take action.

Manage AI deepfakes with YouTube's new tool
The second tool is a new technology in development that will allow people across a range of industries, including creators, actors, musicians, and athletes, to detect and manage deepfakes: AI-generated content that shows their faces on YouTube.
Both tools are still in development, and YouTube is expected to share further details about their rollout later this year.