Zoom has reversed a policy that would have allowed the company to use customer data to train its artificial intelligence (AI) models. The policy, which was announced in March, was met with backlash from privacy advocates and users who were concerned about how their data would be used.
Zoom wants to make its video calls smarter with artificial intelligence, but it also wants to own the data that powers it. The company updated its terms of service in March, claiming all the rights to the data generated by users during their calls. This data includes not only audio and video, but also text, images, and emotions. Zoom plans to use this data to train and improve its new AI features, which it calls “Zoom IQ”.
Zoom IQ is a set of generative AI features that Zoom has been introducing throughout the year. These features aim to enhance the user experience and productivity of video calls. For example, Zoom IQ can automatically generate call summaries, transcriptions, and action items. It can also create realistic avatars, backgrounds, and filters for users.
To achieve this, Zoom uses its own language model, as well as those developed by OpenAI and Anthropic, two leading AI research labs. Zoom says that it uses a “federated” approach to AI, which means that it does not centralize the data from users, but rather distributes the learning process across different devices and servers.
Stepping Back and Reversing the Decision
In a blog post on Monday, Zoom makes its policy change clear with the following commitment: “For AI, we do not use audio, video, or chat content for training our models without customer consent.”
Under the new policy, Zoom will only use customer data to train its AI models if customers have given their consent. This consent can be given through a new setting in the Zoom app. Customers who do not give their consent will not have their data used to train AI models. Zoom says that the new policy will go into effect on August 15. The company also says that it will delete any customer data that has already been used to train AI models without consent.
The reversal of Zoom's policy is a victory for privacy advocates and users who were concerned about how their data would be used. It is also a sign that companies are starting to take privacy more seriously in the wake of recent data breaches and privacy scandals.
Generative AI Privacy Concerns
How AI models such as chatbots and coding tools handle user data is a hot topic. There are concerns about how data is accessed and used, even amongst the biggest players in the industry. Whether it is OpenAI and ChatGPT, Google Bard, or Microsoft's Bing Chat, there are plenty of answered questions about AI data acquisitions.
In March 2023, OpenAI launched a plugin platform for ChatGPT that allows developers to create plugins that extend the functionality of ChatGPT. However, there has been recent evidence that shows ChatGPT plugins cause privacy risks. One of the main risks is that plugins could be used to inject malicious code into ChatGPT sessions. This could allow attackers to steal data, install malware, or even take control of a user's computer.
Google Bard made its debut in Europe last month following months of delay due to privacy concerns. Bard's launch in the EU required Google to adhere to rules within the bloc. Google dealt with regulatory and legal challenges from the European Commission and other authorities, who were concerned about Bard's potential impact on data protection, privacy, competition and intellectual property rights. Google had to demonstrate that Bard complied with the EU's strict rules and standards, such as the General Data Protection Regulation (GDPR) and the Digital Services Act (DSA).