The Associated Press (AP) has released guidelines concerning the use of artificial intelligence (AI) in its newsrooms. The guidelines state that generative AI tools — such as ChatGPT and other applications built on large language models (LLMs) like GPT-4, as well as AI image generators — should not be used to create publishable content or images. Staff members are, however, encouraged to familiarize themselves with the technology.
Details of the Guidelines
Amanda Barrett, Vice President of News Standards and Inclusion at AP, stated in a blog post that while AI can serve the values of accuracy, fairness, and speed, it will not replace journalists. She emphasized the responsibility of AP journalists to ensure the accuracy and fairness of the information they share. The guidelines mention the following best practices:
- “AP has a licensing agreement with OpenAI, the maker of ChatGPT, and while AP staff may experiment with ChatGPT with caution, they do not use it to create publishable content.
- Any output from a generative AI tool should be treated as unvetted source material. AP staff must apply their editorial judgment and AP's sourcing standards when considering any information for publication.
- In accordance with our standards, we do not alter any elements of our photos, video or audio. Therefore, we do not allow the use of generative AI to add or subtract any elements.
- We will refrain from transmitting any AI-generated images that are suspected or proven to be false depictions of reality. However, if an AI-generated illustration or work of art is the subject of a news story, it may be used as long as it is clearly labeled as such in the caption.
- We urge staff to not put confidential or sensitive information into AI tools.
- We also encourage journalists to exercise due caution and diligence to ensure material coming into AP from other sources is also free of AI-generated content.
- Generative AI makes it even easier for people to intentionally spread mis- and disinformation through altered words, photos, video or audio, including content that may have no signs of alteration, appearing realistic and authentic. To avoid using such content inadvertently, journalists should exercise the same caution and skepticism they would normally, including trying to identify the source of the original content, doing a reverse image search to help verify an image's origin, and checking for reports with similar content from trusted media.”
Concerns and Precautions
Because generative AI can produce altered content that appears realistic and authentic, journalists are advised to apply their usual caution and skepticism: identifying the source of the original content, conducting reverse image searches to verify an image's origin, and checking for similar reports from trusted media. If there is any doubt about the authenticity of the material, it should not be used.
The AP's move comes as various news organizations are beginning to set rules on how to integrate AI tools like ChatGPT into their operations. The Poynter Institute, a journalism think tank, has called this a “transformational moment” and urged news organizations to establish AI standards and share them with their audiences.
Generative AI can create text, images, audio, and video on command, but it cannot reliably distinguish fact from fiction. As a result, material produced by AI should be vetted as carefully as content from any other unverified source.
AP's Ongoing Relationship with AI
The AP has been experimenting with simpler forms of AI for a decade, using it to create short news stories from sports scores or corporate earnings reports. The organization recently announced a licensing agreement with OpenAI, allowing the AI company to train on AP's archive of news stories.
The guidelines reflect AP's commitment to maintaining the highest standards of journalism in an era of rapid technological advancement.