A striking disconnect exists between the rapid adoption of artificial intelligence in the workplace and the way employees actually use it, according to a comprehensive global study by KPMG and the University of Melbourne. Surveying nearly 50,000 individuals across 47 countries between late 2024 and early 2025, the research found that a startling 57% of employees conceal their use of AI tools, often passing off AI-generated material as their own original work.
This widespread secrecy points to deeper issues than simple convenience. The report, “Trust, attitudes, and use of artificial intelligence: A global study 2025,” details how this hidden usage often involves risky practices. Two-thirds (66%) of employees using AI admit they rely on its output without checking it for accuracy. Almost half (48%) confess to feeding potentially sensitive company information or copyrighted material into public AI tools, a practice that often directly violates company policy.
Furthermore, 56% of employees in the KPMG survey reported making work errors due to AI use, and 72% admitted to putting less effort into tasks because they knew they could rely on AI. This phenomenon, sometimes termed “Shadow AI” by industry watchers like Gartner, is seen as a growing enterprise risk. Sam Gloede, KPMG International’s global trusted AI transformation leader, stated, “That is really concerning because that’s where the organization is exposed to significant risk.”
Governance Gaps and Trust Deficits Fuel Risky Behavior
The prevalence of these risky behaviors appears directly linked to inadequate organizational preparedness. The study found a major gap in AI literacy and governance: only 47% of workers globally reported receiving any AI-related training.
Compounding this, just 41% stated their organization has a policy or provides guidance on using generative AI tools like ChatGPT, and only 55% of employees in AI-using organizations feel adequate safeguards are in place to ensure responsible use.
Professor Nicole Gillespie from the University of Melbourne, a lead author, highlighted the consequences: “This hidden usage creates significant risks for organizations, including data security breaches, errors, copyright infringements, and diminishes the potential for learning and innovation.” Gillespie added that employees feel pressure to adopt AI to remain competitive and find a “seductive element” in its benefits that encourages use, sometimes regardless of the rules.
Intriguingly, this period of rapid, often ungoverned adoption has coincided with shifting attitudes. Comparing the findings to a late 2022 survey across 17 countries (detailed in the KPMG report’s appendices), the researchers found that the overall perceived trustworthiness of AI systems and willingness to rely on them have generally decreased, while employee worry about AI has markedly increased.
This suggests that hands-on experience is leading to a more measured, and perhaps more realistic, assessment of AI’s current limitations and risks, rather than fostering greater comfort. That shift reinforces the urgency of better oversight.
The AI Divide and Future Challenges
The study underscores a pronounced difference between advanced and emerging economies. Workers in emerging nations generally report higher AI adoption rates, greater trust, better AI literacy, and perceive stronger organizational support for responsible AI use compared to their counterparts in advanced economies.
Demographically, younger workers (under 35), those with higher incomes, and individuals who have received AI training are the most frequent users and the most trusting. Paradoxically, they also report higher rates of complacent and inappropriate use, suggesting that literacy alone does not prevent risky behavior without strong governance.
While organizations grapple with governing current tools, which a recent Anthropic report suggested are used more for augmenting productivity than for full automation, the next generation of AI presents further governance headaches. Anthropic’s security chief, Jason Clinton, recently predicted the arrival of autonomous “virtual employees” within a year, raising complex issues around security, accountability, and the management of non-human identities (NHIs), a category already estimated to outnumber human accounts 46-to-1 in many firms.
Industry Responds as Organizations Urged to Act
In response, technology providers are developing more robust enterprise controls. Microsoft, for instance, detailed significant updates to its Copilot Control System (CCS) in late April 2025. These aim to give IT departments better visibility and management capabilities, including risk assessment via Purview integration, usage tracking via Copilot Analytics (with agent-specific reports expected by June 2025), and enhanced controls for deploying specific AI agents.
Such developments signal an industry acknowledgment that providing powerful AI tools necessitates corresponding enterprise-grade governance frameworks. The increasing sophistication of AI, such as ChatGPT’s ability to use conversational memory to personalize web searches, further highlights the need for careful data handling.
The KPMG report concludes by emphasizing four key actions for organizations grappling with these challenges: fostering transformational leadership around AI, enhancing trust through transparency and assurance mechanisms, boosting AI literacy via comprehensive training, and strengthening governance frameworks to guide responsible use.