Slack Adjusts Privacy Language to Address AI Training Concerns

In updated privacy language, Slack says that the data it gathers from users remains in-house and is used only to train its own AI/ML models, not third-party systems.

Salesforce's Slack has faced criticism over its privacy policies, which allowed the platform to analyze user data for AI training unless users opted out. The company clarified that this data remains within the platform and is not used to train third-party models.

Slack explained that its machine learning models operate at a platform level to enhance functionalities like channel and emoji recommendations and search results. These models do not access original message content from direct messages, private channels, or public channels for generating suggestions. The company emphasized that its models are not designed to memorize or reproduce customer data.

Generative AI and Customer Data

Slack utilizes generative AI in its Slack AI product, employing third-party large language models (LLMs). According to the company, no customer data is used to train these third-party LLMs. Instead, Slack uses off-the-shelf LLMs that do not retain customer data. These models are hosted on Slack's AWS infrastructure, ensuring that customer data does not leave Slack's trust boundary, and the LLM providers do not have access to this data.
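The pattern Slack describes, running an off-the-shelf model inside its own cloud environment rather than sending prompts to a vendor's API, can be illustrated with a minimal sketch. This is not Slack's implementation: the endpoint name, payload shape, and response format below are hypothetical and assume a self-hosted SageMaker endpoint in the company's own AWS account.

```python
import json
import boto3

# Hypothetical sketch: query an LLM endpoint hosted inside the company's own
# AWS account, so prompt text (which may contain customer data) never leaves
# that trust boundary or reaches the model vendor.
runtime = boto3.client("sagemaker-runtime", region_name="us-east-1")

def summarize(text: str) -> str:
    payload = {
        "inputs": f"Summarize the following conversation:\n{text}",
        "parameters": {"max_new_tokens": 256},
    }
    response = runtime.invoke_endpoint(
        EndpointName="internal-llm-endpoint",  # hypothetical endpoint name
        ContentType="application/json",
        Body=json.dumps(payload),
    )
    # Response format depends on the hosted model container; this assumes a
    # typical text-generation container returning [{"generated_text": ...}].
    return json.loads(response["Body"].read())[0]["generated_text"]
```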

The privacy principles, last updated in 2023, originally included language stating that Slack's systems analyze customer data, such as messages, content, and files, to develop AI/ML models. That broad wording led to significant user backlash, with many expressing concern over their data being used for AI training.

User Concerns and Company Response

Slack has insisted that data will not leak across workspaces, although it acknowledged that its global models draw on customer data and that messages within individual workspaces are analyzed. The company revised its privacy principles to state: “To develop non-generative AI/ML models for features such as emoji and channel recommendations, our systems analyze Customer Data.”

A Slack spokesperson emphasized that the company's policies and practices have not changed; only the language has been updated for clarity. The data analysis feature is enabled by default, which may raise regulatory concerns. Workspace owners must email Slack's customer experience team to opt out, though the company has not specified how long this process takes.

Opting out means that customers will still benefit from globally trained models without contributing their data to these models. Slack explained that the data is used to improve features like query parsing, autocomplete, and emoji suggestions. The company believes that these personalized improvements are only possible by studying user interactions with Slack.

Journalistic Scrutiny and Documentation Updates

Journalists covering the issue have reported receiving warnings from Slack about “inaccuracies” in their articles, which were based on Slack's own documentation. The company has since updated this documentation to clarify its practices. As of May 17, Slack explicitly states: “We do not develop LLMs or other generative models using customer data,” and specifies that its systems analyze customer data, including files, to “develop non-generative AI/ML models for features such as emoji and channel recommendations.”

Slack's updated documentation also notes that for autocomplete, “suggestions are local and sourced from common public message phrases in the user's workspace. Our algorithm that picks from potential suggestions is trained globally on previously suggested and accepted completions. We protect data privacy by using rules to score the similarity between the typed text and suggestion in various ways, including only using the numerical scores and counts of past interactions in the algorithm.”
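As an illustration of that kind of numeric-only ranking, the sketch below scores candidate completions using a similarity score and past acceptance counts rather than the raw message text. The features and weights are invented for the example and do not reflect Slack's actual algorithm.

```python
from difflib import SequenceMatcher

# Illustrative only: rank candidate completions using numeric signals
# (a similarity score and an acceptance count), not the raw text itself.
def rank_suggestions(typed: str, candidates: dict[str, int]) -> list[str]:
    """candidates maps a suggestion string to how often it was accepted before."""
    def score(suggestion: str, accepted: int) -> float:
        similarity = SequenceMatcher(None, typed.lower(), suggestion.lower()).ratio()
        # Combine the two numeric signals; the weights are arbitrary here.
        return 0.7 * similarity + 0.3 * min(accepted, 10) / 10

    return sorted(candidates, key=lambda s: score(s, candidates[s]), reverse=True)

# Example: phrases sourced from the workspace, with past acceptance counts.
print(rank_suggestions("thanks for the up",
                       {"thanks for the update": 8, "thanks for the invite": 2}))
```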

Industry Context and Comparisons

The incident, which led numerous users to shut down their Slack workspaces, highlights the challenges software companies face in communicating how they use customer data for generative AI. The complexities of explaining retrieval-augmented generation (RAG) workflows and other machine learning approaches in a privacy policy pose a reputational risk for companies.
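Part of the communication problem is that a RAG workflow is easy to conflate with training: customer text is handed to the model inside the prompt at query time, but the model's weights are never updated with it. The deliberately simplified sketch below illustrates that distinction; the keyword retriever and the generic llm callable are stand-ins, not any vendor's actual pipeline.

```python
# Hypothetical RAG sketch: customer text is retrieved and placed in the prompt
# at query time; no training or fine-tuning step ever touches that data.
def answer(question: str, documents: list[str], llm) -> str:
    # 1. Retrieve: naive keyword overlap stands in for a real vector search.
    def overlap(doc: str) -> int:
        return len(set(question.lower().split()) & set(doc.lower().split()))

    context = max(documents, key=overlap)

    # 2. Generate: the retrieved text only appears in the prompt, so the
    #    model consumes it transiently instead of learning from it.
    prompt = f"Answer using this context:\n{context}\n\nQuestion: {question}"
    return llm(prompt)
```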

Slack stated: “Our guiding principle as we build this product is that the privacy and security of Customer Data is sacrosanct, as detailed in our privacy policy, security documentation, and SPARC and the Slack Terms.” However, a review on May 17 by The Stack noted that none of these documents mention generative AI or machine learning.

Slack also collects user data to “identify organizational trends and insights,” according to its privacy policy. The company has yet to respond to questions about what kind of organizational trends it pulls from customer data.

Dropbox faced a similar issue in December 2023, when confusion over a new default toggle set to “share with third-party AI” caused an uproar. AWS's CTO publicly flagged his privacy concerns to Dropbox, which later clarified that “only [their] content relevant to an explicit request or command is sent to our third-party AI partners [OpenAI] to generate an answer, summary, or transcript… your data is never used to train their internal models, and is deleted from OpenAI's servers within 30 days.”

Source: Slack
Luke Jones
Luke has been writing about Microsoft and the wider tech industry for over 10 years. With a degree in creative and professional writing, Luke looks for the interesting spin when covering AI, Windows, Xbox, and more.
