BBC Takes a Stand against Data Scraping While Embracing AI in Journalism

BBC plans to work with tech firms, media groups, and regulators for the safe use of generative AI.

The British Broadcasting Corporation (BBC), the United Kingdom’s largest news organization, has detailed the principles it intends to uphold with regard to the use and exploration of generative AI. This comes as part of a wider effort to leverage technology for enhanced news production, archival work, and the tailoring of personalised experiences.

The corporation stated in the blog post that it believes such technology presents an opportunity to deliver improved value to its audiences and society at large. To that end, the BBC has outlined three guiding principles driving its approach to artificial intelligence: acting consistently in the public’s best interests, prioritizing talent and creativity while respecting artists’ rights, and ensuring openness and transparency regarding AI-produced output.

Collaborative Approach towards Generative AI in Journalism

The BBC indicated its intention to collaborate with technology companies, other media entities, and regulatory bodies to ensure the safe development and application of generative AI. Highlighting the importance of maintaining trust in the news industry, it stated, “In the next few months, we will start several projects exploring the use of Gen AI in our production and work methodology.”

The projects will investigate how generative AI could aid, complement, or even drastically change the BBC’s operations across various fields, including journalistic research and production, content discovery, archival work, and the crafting of personalised experiences. The corporation did not further elaborate on the specifics of these forthcoming projects.

Choosing to Block Data Scraping

In a move that mirrors actions by CNN, The New York Times, and Reuters, the BBC has barred web crawlers from OpenAI and Common Crawl from accessing its websites. The step is seen as the broadcaster’s attempt to prevent unauthorized access to its copyrighted material.

Rhodri Talfan Davies, the BBC’s director of nations and author of the blog post, said the action was taken to “safeguard the interests of license fee payers”, asserting that training AI models on BBC data without its explicit permission does not align with the public interest.

Other news organizations have started drafting their own guidelines for the technology; The Associated Press, for instance, published its guidelines earlier this year. Unlike the BBC, it opted to partner with OpenAI, allowing its stories to be used for training GPT models.

Worries Over the Role AI will Play in Content

While news and broadcasting companies are adopting AI, they are also taking a stance on protecting their data. In July, a group of leading news publishers considered suing AI companies over copyright infringement. The publishers allege that AI firms are infringing on their rights and undermining their business model by scraping, summarizing, or rewriting their articles and distributing them across platforms such as websites and apps.

Last Updated on November 8, 2024 10:44 am CET

Source: BBC
Luke Jones
Luke has been writing about Microsoft and the wider tech industry for over 10 years. With a degree in creative and professional writing, Luke looks for the interesting spin when covering AI, Windows, Xbox, and more.
