Google Warns Employees about Using Bard, ChatGPT and Other AI Chatbots

Alphabet has advised its employees not to input confidential materials into AI chatbots.


Alphabet Inc., the parent company of Google, has issued a warning to its employees about the use of AI chatbots, including its own chatbot, Google Bard. The warning comes even as the company markets Bard around the world as a program designed to hold human-like conversations and answer a wide range of user prompts.

Confidentiality Concerns

According to Reuters, Alphabet has advised its employees not to input confidential materials into AI chatbots. The company confirmed this, citing a long-standing policy on safeguarding information. The concern arises from the fact that human reviewers may read the chats, and the AI could reproduce the data it absorbed during training, creating a potential leak risk. In a similar step, Apple has banned the use of generative AI tools such as ChatGPT and GitHub Copilot by its employees.

Risks of AI-Generated Code

Alphabet also cautioned its engineers against the direct use of computer code that chatbots can generate. The company said that Bard can make undesired code suggestions but still helps programmers, and emphasized its aim to be transparent about the limitations of its technology.

Google’s cautionary stance reflects a growing trend in corporate security. Companies are increasingly warning their personnel about using publicly available chat programs. Several businesses, including Samsung, Amazon.com, and Deutsche Bank, have set up guardrails on AI chatbots. Apple, which reportedly has similar restrictions, did not return requests for comment.

Bard’s Global Rollout and Privacy Concerns

Google is currently rolling out Bard to more than 180 countries and in 40 languages, billing it as a springboard for creativity, even as its warnings extend to Bard’s code suggestions. In response to a report about the postponement of Bard’s EU launch due to privacy concerns, Google told Reuters it is addressing regulators’ questions and has had detailed conversations with Ireland’s Data Protection Commission.

Controversies about Microsoft’s Bing Chat

Microsoft’s Bing Chat AI search chatbot, which uses an adjusted version of ChatGPT, has had a rollercoaster journey so far. Most recently, Bing Chat faced criticism when it was found to be providing misleading information to users who searched for “Chrome” in Microsoft’s Edge browser. Instead of providing information about Google’s browser, it redirected users to news articles about Bing features. The incident raised questions about the trustworthiness of Bing Chat and Microsoft’s approach to promoting its own products.

Regulatory Hurdles for AI Chatbots

The regulatory landscape for AI chatbots has also been evolving. In May 2023, OpenAI CEO Sam Altman stated that the company might have to cease operations in the EU because of the bloc’s proposed AI Act. However, he later clarified that OpenAI had no plans to leave the EU and welcomed a proactive approach to AI governance.

Despite the challenges, AI chatbots continue to evolve rapidly. In May 2023, Microsoft issued a significant update for Bing Chat, introducing multimodal capabilities, chat history, and Edge actions. OpenAI’s ChatGPT now supports plugins, and Microsoft has announced an upcoming plugin ecosystem for Bing, ChatGPT, and other Microsoft products.


Source: Reuters
Markus Kasanmascheff
Markus has been covering the tech industry for more than 15 years. He holds a Master’s degree in International Economics and is the founder and managing editor of Winbuzzer.com.
