IEEE Spectrum, the magazine of the Institute of Electrical and Electronics Engineers (IEEE), has published research showing that some AI-generated images may infringe on copyrighted material. The investigation, led by AI expert Gary Marcus and digital illustrator Reid Southen, found instances in which systems such as OpenAI’s DALL-E 3 produce images that closely resemble copyrighted scenes from movies and video games. The findings raise pressing legal and ethical questions about the use of copyrighted content to train AI models and generate images.
Legal Implications and Industry Response
The report suggests that OpenAI and Midjourney likely used copyrighted material to train their image-generation models, but the legality of that practice remains contested. Critics argue that businesses profiting from subscription-based image-generation services could be held liable for copyright infringement. As the industry grapples with these concerns, OpenAI has responded to a lawsuit filed by The New York Times by asserting that its training practices fall under fair use. Meanwhile, Midjourney’s terms of service warn users against infringing on intellectual property rights.
Future of AI and Copyright Law
Legal experts debate who bears responsibility for potentially infringing outputs: the AI developers or the users who prompt for specific results. Professor Tyler Ochoa offered The Register a nuanced view, suggesting that contributory infringement should be considered when users intentionally solicit copyrighted scenes. As the industry navigates the complex terrain of copyright law, companies like OpenAI argue that using copyrighted content in AI training is necessary to build modern, competitive models.