Productivity
Employees are using AI in harmful ways — and companies may be in the dark

Almost half of professionals surveyed admit to using AI inappropriately on the job, and 63% say they've seen colleagues do the same
Artificial intelligence has swept into the workplace with disruptive force, and many organizations are now grappling with the consequences.

A growing concern is the rise of “shadow AI” — employees using artificial intelligence tools in unauthorized or inappropriate ways, sometimes without fully understanding the risks. A recent study conducted by the University of Melbourne and KPMG found that 47% of career professionals admitted to using AI inappropriately at work, while 63% reported witnessing colleagues doing the same. Misuse ranges from relying on AI during internal assessments to uploading sensitive corporate information into public AI systems.

The risk exposure is significant. The study warns that invisible or shadow AI usage not only increases vulnerability but also undermines an organization’s ability to identify and mitigate emerging threats.

The Illusion of Competence

Experts argue that the fundamental shift is not increased dishonesty among employees, but the speed, polish, and invisibility AI enables.

Zahra Timsah, CEO of i-GENTIC AI, explains that AI can instantly produce high-quality outputs that mask shallow understanding. Managers may interpret polished deliverables as evidence of expertise, creating a false sense of productivity. In some cases, employees present AI-generated analysis confidently but cannot explain or defend the underlying reasoning. This dynamic risks organizations making decisions based on work that no one fully understands — eroding internal capability while leadership believes independent thinking is intact.

The Melbourne/KPMG data underscores the scale of the issue:

  • 44% of U.S. employees are using AI tools without authorization.

  • 46% have uploaded sensitive company data or intellectual property to public AI platforms.

  • 64% admit putting in less effort because AI can do the work for them.

  • 57% report making mistakes due to unchecked AI usage.

  • 53% conceal their AI use, presenting AI-generated work as their own.

Nick Misner, COO of Cybrary, describes the trend as a governance failure rather than isolated misconduct. While AI accelerates development and content creation, it can also introduce technical debt, security vulnerabilities, and compliance gaps if used without oversight.

He points to findings from Gallup’s State of the Global Workplace report, which estimates that 79% of employees worldwide are disengaged or minimally engaged. In that environment, providing powerful AI tools without clear guidance often leads not to innovation, but to corner-cutting and increased risk.

Recent incidents illustrate the irony. In one case at KPMG Australia, part of the same firm that co-conducted the study, 28 employees were caught using AI to cheat on internal exams, including a partner fined $10,000 for misconduct on an AI ethics assessment.

Bringing AI Out of the Shadows

Leaders must proactively address AI usage through structured governance rather than reactive enforcement.

1. Learn from Past Technology Waves

Joe Schaeppi, co-founder of Solsten, notes that similar patterns emerged during the early adoption of the internet and search engines. New tools inevitably produce gray-area behavior until policies mature. Companies such as Anthropic are already building enterprise-grade safeguards to mitigate misuse. Ultimately, organizations must align policy enforcement with culture and accountability.

2. Maintain Human Oversight

Companies should audit data access controls and clearly define which information can interact with AI systems. Synthetic datasets can help model scenarios without exposing sensitive information. Most importantly, a human review process should precede any high-stakes deployment or decision based on AI-generated outputs.
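These controls can be made concrete in code. The sketch below is purely illustrative, not any vendor's implementation: the `label_for` classifier is a hypothetical stand-in for a real data-classification system, and the sign-off gate simply refuses to release AI output until a human has reviewed it.

```python
import re

# Hypothetical sensitivity labels; a real deployment would pull these
# from the organization's data-classification system, not a regex.
BLOCKED_LABELS = {"confidential", "client", "financial"}

def label_for(text: str) -> str:
    """Toy classifier: flag text containing obvious sensitive markers."""
    if re.search(r"\b(ssn|account\s*number|confidential)\b", text, re.I):
        return "confidential"
    return "public"

def may_send_to_ai(text: str) -> bool:
    """Only data labeled as safe may interact with an external AI system."""
    return label_for(text) not in BLOCKED_LABELS

def require_human_signoff(ai_output: str, reviewer_approved: bool) -> str:
    """High-stakes AI output stays unusable until a human reviews it."""
    if not reviewer_approved:
        raise PermissionError("AI output requires human review before use")
    return ai_output
```

The point of the pattern, rather than the toy rules, is the shape: classification happens before data reaches an AI system, and a human gate sits between AI output and any consequential decision.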

3. Establish Clear, Practical Rules

Ambiguous policies are ineffective. Organizations should:

  • Provide approved internal AI tools.

  • Explicitly prohibit uploading confidential, financial, client, or proprietary data into public AI platforms.

  • Monitor sensitive data flows, including copy-paste behavior into external systems.

  • Redesign performance evaluations to emphasize reasoning and accountability, not merely polished output.
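The monitoring bullet above is the most mechanical of these rules, and a minimal sketch shows the idea: scan outbound text, such as a clipboard paste bound for a public AI platform, against a handful of sensitive-data patterns. The patterns here are deliberately simple placeholders; commercial data-loss-prevention tools use far richer detectors.

```python
import re

# Illustrative detectors only, not production-grade DLP rules.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
}

def flag_outbound(text: str) -> list[str]:
    """Return the names of sensitive patterns found in outbound text."""
    return [name for name, pat in SENSITIVE_PATTERNS.items()
            if pat.search(text)]
```

A hit would typically be logged for audit and the paste blocked or warned on, which operationalizes the policy rather than leaving it to memory.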

Training must be concrete and scenario-based. For example, using AI to refine a generic email is acceptable. Uploading contracts or financial data into public systems is not. Brainstorming with AI is appropriate; presenting AI-generated analysis without understanding it is not.

4. Recognize When Misuse Becomes Legal Exposure

AI misuse crosses into legal territory when it involves intentional deception, data leakage, intellectual property theft, financial manipulation, or fraud. At that stage, companies may need to involve regulators or law enforcement. The key threshold is whether misuse results in measurable harm, exposure, or deliberate concealment.

5. Prioritize Consistent Enforcement and Training

From an employment law perspective, consistency is critical. Disciplining one employee for AI misuse while overlooking another creates discrimination and retaliation risks. Clear policies and uniform enforcement reduce litigation exposure.

Employees must also understand that AI does not shift responsibility. They remain accountable for the accuracy of outputs and for ensuring that no confidential or regulated information is improperly disclosed.

The Strategic Imperative

AI adoption is inevitable. The strategic risk is not employee usage itself, but unmanaged usage. Organizations that fail to implement governance frameworks, training programs, and accountability standards will face escalating operational, legal, and reputational consequences.

The companies that succeed will not be those that ban AI, but those that integrate it deliberately — pairing technological acceleration with human judgment, transparency, and disciplined oversight.