
Report: AI Chatbots like ChatGPT and Bing Chat Could Pose Major Privacy Risk

In a new report, Team8 says that AI chatbots like Bing Chat and ChatGPT could leave organizations open to privacy and data risks.


In recent weeks we have seen growing pushback against artificial intelligence such as ChatGPT – and, by association, Microsoft's Bing Chat. There is concern about the ongoing development of AI and even about the regulatory situation around the technology. In a new report, Team8 says that chatbots such as ChatGPT could be breaching customer privacy.

The Israel-based venture firm says that AI solutions that generate content could leave organizations vulnerable to lawsuits and data leaks. In the report, which was given to Bloomberg, Team8 highlights how chatbots could be targeted by threat actors.

There is also the possibility that employees give the chatbots sensitive information as part of their prompts. AI such as ChatGPT and Bing Chat relies on the user inputting information as a query. The tool then draws on data scraped from the web to provide a natural-language, original answer.
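To illustrate the prompt-leakage risk described above, here is a minimal sketch of a pre-filter that masks obviously sensitive strings before a prompt ever leaves the organization. The patterns, names, and placeholder format are all hypothetical examples, not anything prescribed by the Team8 report.

```python
import re

# Hypothetical illustration only: a few regexes that catch common sensitive
# strings (email addresses, card-like numbers, "sk-"-style API keys) before
# a prompt is forwarded to an external chatbot service.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "API_KEY": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
}

def redact(prompt: str) -> str:
    """Replace each match with a labeled placeholder so the prompt stays readable."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

# Example: the email address and key are masked before the prompt is sent.
print(redact("Summarize the ticket from jane.doe@example.com, key sk-abcdefghijklmnopqrstuv"))
```

A real deployment would need far more than regexes (named-entity detection, document classification, allow-lists), but even a simple filter like this shows where a control point can sit between employees and a GenAI API.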

Microsoft is leveraging OpenAI – the developer of ChatGPT – across its services, including GPT-4, the AI engine that powers ChatGPT and Bing Chat. Microsoft has invested billions into OpenAI, and now uses GPT-4 in Office (Microsoft 365 Copilot), Bing (Bing Chat and Bing Image Creator), Microsoft Cloud (Azure OpenAI Service), CRM/ERP (Dynamics 365 Copilot), and programming (GitHub Copilot X).

AI Chatbots Could Pose a High-Risk Threat

Microsoft, then, has committed to AI in a major way across enterprise and consumer products. Considering Microsoft is the biggest enterprise software company in the world, AI is likely to become part of organizations around the world. However, Team8 says the use of generative AI poses many problems that have not yet been addressed:

“Enterprise use of GenAI may result in access and processing of sensitive information, intellectual property, source code, trade secrets, and other data, through direct user input or the API, including customer or private information and confidential information,” the report says, and describes the risk as “high”.

“As of this writing, Large Language Models cannot update themselves in real-time and therefore cannot return one's inputs to another's response, effectively debunking this concern. However, this is not necessarily true for the training of future versions of these models,” the report adds. 

“On the user side, for example, third-party applications leveraging a GenAI API, if compromised, could potentially provide access to email and the web browser, and allow an attacker to take actions on behalf of a user,” Team8 says.

Tip of the day: For the most part, Windows apps are stable, but they can still be thrown out of whack by updates or configuration issues. Many users boot their PC to find their Microsoft Store isn't working or their Windows apps aren't opening. Luckily, Windows 11 and Windows 10 have an automatic repair feature for apps that can resolve such issues.

Source: Bloomberg
Luke Jones
Luke has been writing about all things tech for more than five years. He is following Microsoft closely to bring you the latest news about Windows, Office, Azure, Skype, HoloLens and all the rest of their products.
