1. Adobe’s artificial intelligence image generator, Firefly, was trained primarily on its own library of licensed stock images.
2. Some of the images used to train Firefly may have come from competitor Midjourney, which may have scraped images from the internet without licensing.
3. Despite the revelations, Adobe maintains that all non-human images generated with Firefly remain copyright safe, citing a rigorous moderation process.
Adobe’s artificial intelligence image generator, Firefly, has been marketed as ethical AI, trained primarily on Adobe’s own licensed stock images. However, a new Bloomberg report suggests that some of the images used to train Firefly may have come from competitor Midjourney, raising concerns about how the training data was sourced. Adobe has stated that only about 5% of the training images fell into this category and that they went through a rigorous moderation process.
The revelation that some of Firefly’s training data may not have been sourced ethically has raised questions about the copyright safety of the images it generates. When Firefly first launched, Adobe offered its enterprise customers indemnity against copyright infringement claims, positioning the tool as a safer alternative to other AI models. Despite the concerns raised by the Bloomberg report, Adobe maintains that all non-human images generated by Firefly remain copyright safe and go through a thorough moderation process.
While the controversy surrounding Firefly’s training data may have raised doubts about its copyright safety, Adobe is reportedly taking a more rigorous approach with its AI video generator. Rumors suggest that artists are being paid per minute for the video clips used to train it, a sign that Adobe is trying to address concerns about how data for its AI models is sourced.
Overall, the episode highlights the importance of ethically sourcing and moderating training data for AI models in order to ensure copyright safety and avoid potential legal issues.