OpenAI is developing a new tool called Media Manager that will give content creators greater control over how their work is used to train generative AI. The company aims to have the tool in place by 2025 and says it is working with creators, content owners, and regulators toward establishing a standard.
The company has recently come under fire for using publicly available data scraped from the internet to train its AI models, prompting legal action from prominent U.S. newspapers. OpenAI has defended the practice, arguing that access to copyrighted material is essential for building effective AI models. As a concession to content creators, however, it allows them to opt out of having their work used.
Despite this attempt to meet creators halfway, some have criticized OpenAI's opt-out process as burdensome and argued that the company does not pay enough to license content. In response to these concerns, third parties are developing tools to help artists control how their work is used in AI model training. These tools include measures to block scraping attempts, apply barely perceptible watermarks, and disrupt AI model training by "poisoning" image data.
As the debate over content usage in AI development continues to evolve, it remains to be seen how OpenAI and other AI developers will navigate these issues. Stay tuned to Road Rug Cars for further updates on this developing story.