OpenAI outlines strategy for responsible AI data usage and collaborations with creators

1. OpenAI has announced Media Manager, a tool that will let creators opt in to or out of AI training.
2. It remains unclear how effective the tool will be and whether creators' data will be actively identified.
3. Pressure is mounting on OpenAI and other AI companies to handle data more ethically, with Noyb launching legal action against OpenAI over ChatGPT inaccuracies.

OpenAI has announced a new approach to data and AI, focusing on responsible AI development and partnerships with creators and content owners. The company aims to build AI systems that expand opportunities for everyone while respecting the choices of creators and publishers. It is developing a tool called Media Manager to allow creators to control how their works are used in machine learning research and training, with the goal of having it in place by 2025.

The details of how Media Manager will work remain unclear, but it appears to be a self-service tool through which creators can opt in to or out of generative AI training. Some speculate about whether OpenAI will use machine learning to actively identify creators' works within its datasets. Critics also question the logic of an opt-out option: if OpenAI believes training on publicly available data is fair use, it arguably should have developed tools to filter out copyrighted material from the outset.

By 2025, OpenAI aims to build a substantial foundational dataset of copyrighted works, suggesting potential partnerships with publishers such as the Financial Times and Le Monde. Along with other AI firms, the company is under mounting pressure to handle data more ethically, as evidenced by legal action from the European privacy advocacy group Noyb over inaccuracies generated by ChatGPT. Overall, OpenAI's initiative with Media Manager reflects a growing awareness of the importance of ethical data usage in AI development.