Shadow AI: The Hidden Governance Crisis

Written by Daniel J Glover
Published 4 January 2026 · 13 min read

This is Part 2 of a 7-part series on Business AI Enablement for IT Leaders. The series covers why enablement matters, building an enablement framework, employee training, tool selection, governance controls, and concludes with a 90-day implementation roadmap.


There is an AI revolution happening inside your organisation. You probably cannot see it.

According to Cisco's 2025 research, approximately 60% of organisations acknowledge they may be unable to identify shadow AI usage. This is not a failure of security tools. It is a fundamental visibility gap created by how employees access AI - through personal accounts, browser extensions, and mobile applications that never touch corporate infrastructure.

Shadow AI is the natural successor to shadow IT. But where shadow IT typically involved file sharing or project management tools, shadow AI involves systems that process sensitive information, generate business content, and increasingly make recommendations that influence decisions.

The stakes are higher. The visibility is worse. And the problem is growing faster than most IT leaders realise.

The Invisible AI Revolution

Shadow AI has reached a scale that should concern every technology leader. Netskope research found that more than 73% of work-related ChatGPT queries were processed using accounts not approved for corporate use. Employees are not just experimenting with AI - they are integrating it into daily work through channels IT cannot monitor.

The speed of growth is remarkable. In sectors like healthcare, manufacturing, and financial services, shadow AI tool usage surged more than 200% year over year according to Zendesk's CX Trends 2025 report. This is not a gradual adoption curve. It is a flood.

How shadow AI enters organisations:

| Entry Point | Visibility | Common Examples |
| --- | --- | --- |
| Personal accounts on corporate devices | None to minimal | ChatGPT Plus, Claude Pro, Gemini Advanced |
| Browser extensions | None without endpoint monitoring | AI writing assistants, grammar tools with AI |
| Mobile applications | None on personal devices | AI chatbots, voice assistants, productivity apps |
| Embedded AI in approved tools | Partial | AI features in email, documents, CRM systems |
| API integrations by power users | Varies | Custom scripts, Zapier automations, low-code tools |

The challenge is that each entry point has different visibility characteristics. Corporate-managed devices might reveal browser extension usage, but only if endpoint monitoring is configured for it. Mobile devices used for work bypass corporate controls entirely.

The result is that IT leaders are making governance decisions based on incomplete information about actual AI usage patterns.

Where Shadow AI Hides

Understanding where shadow AI concentrates helps prioritise governance efforts. Different business functions have different AI use cases - and different risk profiles.

Marketing and Communications

Marketing teams adopted generative AI faster than almost any other function. The use cases are obvious: content creation, social media posts, email campaigns, ad copy. Shadow AI in marketing typically involves:

  • Content drafting and editing with ChatGPT or Claude
  • Image generation for campaigns
  • Competitive analysis from AI-summarised research
  • Customer persona development

Risk profile: Moderate. The primary risks are brand inconsistency, factual errors in public content, and potential copyright issues with AI-generated material.

Analytics and Business Intelligence

Data analysts discovered that AI could accelerate insight generation significantly. Shadow AI in analytics includes:

  • Natural language queries against datasets
  • Automated report narrative generation
  • Pattern identification in unstructured data
  • Code generation for analysis scripts

Risk profile: High. Analysts often work with sensitive business data. Uploading datasets to public AI tools creates substantial data leakage risk.

Customer Service and Support

Frontline support staff use AI to handle volume and complexity. Common shadow uses:

  • Drafting customer responses
  • Summarising case history
  • Troubleshooting assistance
  • Translation for international customers

Risk profile: High. Customer data frequently flows through these interactions. Regulatory implications vary by industry but are significant in financial services and healthcare.

Software Development

Developers were early adopters of AI coding assistants. Shadow AI in development involves:

  • Code generation and completion
  • Debugging assistance
  • Documentation creation
  • Code review and refactoring suggestions

Risk profile: Critical. Proprietary code and system architecture details may be exposed. As I explored in my analysis of vibe coding security, AI-generated code also introduces security vulnerabilities if not properly reviewed.

Human Resources

HR teams use AI for tasks ranging from job descriptions to policy drafting. Shadow uses include:

  • Writing and improving job postings
  • Drafting employee communications
  • Performance review assistance
  • Policy document creation

Risk profile: High. Employee data is sensitive, and AI-assisted hiring decisions may have legal implications around bias and discrimination.

Finance and Procurement

Finance teams leverage AI for analysis and documentation. Shadow applications:

  • Financial report drafting
  • Contract review and summarisation
  • Vendor research and comparison
  • Budget modelling assistance

Risk profile: Critical. Financial data and contract terms are highly sensitive. Errors in AI-assisted financial analysis could have material business impact.

The Real Risks of Unmanaged AI

Shadow AI creates risks that compound over time. The longer unmanaged AI operates, the more these risks accumulate.

Data Leakage

Every prompt sent to a public AI service potentially exposes information. For consumer-grade AI tools, this data may be used for model training, stored for extended periods, or accessible to the AI provider's employees.

Consider what employees routinely share with AI:

  • Customer names and account details
  • Proprietary business strategies
  • Unreleased product information
  • Employee performance data
  • Financial projections and results

A single employee pasting customer data into ChatGPT may not seem catastrophic. But multiply that by hundreds of employees across months of usage, and the aggregate exposure becomes substantial.

The Samsung incident in 2023 - where employees inadvertently exposed proprietary source code through ChatGPT - demonstrated how quickly a shadow AI problem can become a security incident. The company subsequently banned ChatGPT entirely, a reactive measure that created its own productivity costs.
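Screening prompts before they leave the organisation is one practical mitigation. The sketch below shows the shape of a pattern check that a web proxy or browser plugin could apply - a rough illustration only, with invented patterns and an invented example prompt, not a substitute for a proper DLP product.

```python
import re

# Illustrative patterns only - a real deployment would rely on a proper
# DLP engine with validated detectors, not a handful of regexes.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "API key or token": re.compile(r"\b(?:sk|pk|AKIA)[A-Za-z0-9_-]{16,}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

if __name__ == "__main__":
    findings = screen_prompt(
        "Summarise this complaint from jane.doe@example.com, "
        "card 4111 1111 1111 1111."
    )
    if findings:
        print("Hold for review - matched:", ", ".join(findings))
```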

Compliance Violations

Regulatory frameworks increasingly address AI specifically. The EU AI Act, with full enforcement in 2026, creates obligations that shadow AI directly undermines:

  • Documentation requirements: Organisations must document AI systems used for certain purposes. Shadow AI is undocumented by definition.
  • Risk assessment obligations: High-risk AI applications require assessment. Shadow AI bypasses this entirely.
  • Data protection integration: GDPR requirements apply to data processed by AI. Shadow AI often processes personal data without appropriate safeguards.

For organisations in regulated industries, the compliance risk is acute. Healthcare organisations using AI for any patient-related purpose face HIPAA implications. Financial services firms face supervisory scrutiny of AI in customer interactions and decision-making.

Quality and Consistency Failures

AI outputs require verification, but without training and governance employees often accept them uncritically. This creates quality risks:

  • Factual errors: AI confidently generates incorrect information. Without verification processes, these errors propagate into business documents, customer communications, and decisions.
  • Inconsistency: Different employees using different AI tools produce inconsistent outputs. Brand voice varies. Data interpretations differ. Customer experiences diverge.
  • Hallucinations in critical contexts: AI fabricating citations, statistics, or precedents can have serious consequences in legal, financial, or customer-facing contexts.

Security Vulnerabilities

Shadow AI creates security exposure beyond data leakage:

  • Credential exposure: Employees may share API keys, passwords, or access tokens with AI tools to get assistance. Once shared, those credentials should be treated as compromised.
  • Malicious output: AI can be manipulated to produce harmful code, phishing content, or misleading information. Without security awareness training, employees may not recognise these risks.
  • Supply chain risk: Browser extensions and third-party AI integrations may themselves be security risks, operating with permissions that exceed their apparent function.

Why Employees Turn to Shadow AI

Understanding why employees bypass official channels is essential for solving the problem. Blame is counterproductive. Employees using shadow AI are typically trying to do their jobs better.

Legitimate Options Are Missing

The most common driver of shadow AI is simply that organisations have not provided approved alternatives. When IT has no official AI tools available - or the approval process takes months - employees find their own solutions.

This is particularly acute when employees see competitors or peers at other companies using AI effectively. The productivity gap creates pressure to find solutions regardless of official policy.

Approved Tools Do Not Meet Needs

Sometimes organisations have approved AI tools, but they do not serve the use cases employees actually need. A customer service AI that cannot help with marketing content creation pushes marketing teams toward shadow alternatives.

Friction Is Too High

Even when appropriate tools exist, friction kills adoption. If using the approved AI requires multiple approvals, VPN connections, or cumbersome interfaces, employees will gravitate toward the consumer tool that works immediately.

Employees Do Not Know Alternatives Exist

Poor communication about approved tools drives shadow AI as much as poor tools themselves. Employees may not know what is available, how to access it, or what use cases it supports.

Fear of Appearing Incompetent

Some employees hide AI use because they fear it will be seen as cheating or a sign they cannot do their jobs. This creates a particularly insidious form of shadow AI where employees actively conceal their usage.

Gaining Visibility Without Surveillance

The immediate reaction to shadow AI is often surveillance: deploy monitoring tools, scan network traffic, inspect browser history. This approach has severe limitations.

Technical constraints: Much shadow AI occurs through personal devices, personal accounts, and encrypted connections. Traditional monitoring cannot see what it cannot access.

Cultural damage: Aggressive surveillance destroys trust. Employees who feel monitored become less likely to engage transparently with IT - the opposite of what you need for effective AI governance.

False precision: Even comprehensive monitoring only shows tool usage, not the content or risk level of that usage. An employee querying ChatGPT about lunch options looks the same in logs as one uploading customer data.

Better approaches:

Anonymous Usage Surveys

Ask employees directly what AI tools they use and for what purposes. Anonymous surveys generate more honest responses than identifiable ones. The goal is understanding patterns, not identifying individuals.

Network-Level Discovery

While you cannot inspect encrypted content, you can identify connections to known AI services. This provides aggregate usage data without content surveillance. Useful for understanding scale, not individual behaviour.
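As a concrete starting point, a script can tally connections to well-known AI domains from a proxy log export. A minimal sketch, assuming a CSV export with a 'domain' column - the column name, the file name, and the domain list are all illustrative, and real exports vary by vendor:

```python
import csv
from collections import Counter

# Illustrative, incomplete list - in practice, maintain this from a
# URL-category or threat-intelligence feed, since new AI services
# appear constantly.
KNOWN_AI_DOMAINS = {
    "chatgpt.com", "chat.openai.com", "claude.ai",
    "gemini.google.com", "perplexity.ai", "copilot.microsoft.com",
}

def ai_service_summary(proxy_log_csv: str) -> Counter:
    """Count connections per AI service from a proxy log export.

    Aggregate counts only - no usernames, no request content - in
    keeping with visibility rather than surveillance.
    """
    hits: Counter = Counter()
    with open(proxy_log_csv, newline="") as f:
        for row in csv.DictReader(f):
            domain = row.get("domain", "").lower()
            if any(domain == d or domain.endswith("." + d)
                   for d in KNOWN_AI_DOMAINS):
                hits[domain] += 1
    return hits

if __name__ == "__main__":
    for domain, count in ai_service_summary("proxy_export.csv").most_common():
        print(f"{domain}: {count} connections")
```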

Expense Analysis

Many employees expense AI subscriptions. Expense reports reveal shadow AI spend that licensing reviews miss.
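A simple keyword pass over an expense export often surfaces this spend. A minimal sketch, assuming a CSV with 'description' and 'amount' columns (an assumption - finance systems vary):

```python
import csv

# Vendor keywords to search for in expense descriptions - an illustrative
# starting list, not a complete one.
AI_VENDOR_KEYWORDS = (
    "openai", "chatgpt", "anthropic", "claude", "midjourney",
    "perplexity", "jasper", "copilot",
)

def flag_ai_expenses(expenses_csv: str) -> list[dict]:
    """Return expense rows whose description mentions a known AI vendor."""
    with open(expenses_csv, newline="") as f:
        return [row for row in csv.DictReader(f)
                if any(keyword in row.get("description", "").lower()
                       for keyword in AI_VENDOR_KEYWORDS)]

if __name__ == "__main__":
    for row in flag_ai_expenses("expenses_export.csv"):
        print(row.get("description"), row.get("amount"))
```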

Access Log Analysis

For corporate-managed identity systems, authentication logs to AI services reveal usage patterns. This works only for services accessed through SSO; employees using personal accounts will not appear in these logs.
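A sketch of the idea, assuming a sign-in log export with 'app' and 'user' columns - invented field names, since every identity provider formats these differently:

```python
import csv
from collections import defaultdict

def sso_ai_usage(signin_csv: str, ai_app_names: set[str]) -> dict[str, int]:
    """Count distinct users signing in to each AI app via corporate SSO.

    Personal-account usage never appears in these logs, so treat an
    absence here as a blind spot, not as evidence of no usage.
    """
    users_per_app: defaultdict[str, set] = defaultdict(set)
    with open(signin_csv, newline="") as f:
        for row in csv.DictReader(f):
            app = row.get("app", "")
            if app in ai_app_names:
                users_per_app[app].add(row.get("user", ""))
    return {app: len(users) for app, users in users_per_app.items()}

if __name__ == "__main__":
    usage = sso_ai_usage("signin_export.csv", {"ChatGPT Enterprise", "Claude"})
    for app, user_count in sorted(usage.items()):
        print(f"{app}: {user_count} distinct users")
```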

Open Dialogue

Often the most effective approach is simply asking. Town halls, team meetings, and informal conversations can surface shadow AI usage when employees feel safe discussing it.

The goal of visibility is not punishment. It is understanding current state so you can design solutions that actually address employee needs. This is particularly important when AI is reshaping how software itself gets built - the governance challenge extends well beyond chat-based AI tools into the development pipeline itself.

The 32% Control Gap

Netskope research found that only 32% of organisations have formal controls in place for AI usage. This control gap explains much of the shadow AI problem.

What formal controls typically include:

  • Acceptable use policies specific to AI
  • Data classification guidance for AI inputs
  • Approved tool catalogues with access provisioning
  • Training requirements before AI access
  • Monitoring and audit capabilities
  • Incident response procedures for AI-related issues
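Two of these controls - the approved tool catalogue and the data classification guidance - can usefully be expressed in machine-readable form so that provisioning and monitoring systems can enforce them. A minimal sketch, with an invented catalogue and an invented classification scheme:

```python
# Hypothetical catalogue mapping each approved tool to the highest data
# classification it may process. Tools and tiers are invented examples.
APPROVED_TOOLS = {
    "enterprise-copilot": "confidential",
    "internal-llm": "restricted",
    "public-chatbot": "public",
}

# Classification tiers ordered from least to most sensitive.
CLASSIFICATION_ORDER = ["public", "internal", "confidential", "restricted"]

def is_use_permitted(tool: str, data_classification: str) -> bool:
    """Check a proposed AI use against the approved-tool catalogue."""
    ceiling = APPROVED_TOOLS.get(tool)
    if ceiling is None:
        return False  # unlisted tools are not approved for any data
    return (CLASSIFICATION_ORDER.index(data_classification)
            <= CLASSIFICATION_ORDER.index(ceiling))

assert is_use_permitted("enterprise-copilot", "internal")
assert not is_use_permitted("public-chatbot", "confidential")
assert not is_use_permitted("unknown-tool", "public")
```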

Why controls lag:

  • Speed of adoption: AI adoption outpaced governance development at most organisations
  • Uncertainty about risk: Without clear risk frameworks, governance teams hesitated
  • Lack of ownership: AI governance often falls between IT, security, legal, and business - and belongs clearly to none
  • Competing priorities: Other security and compliance priorities consumed governance capacity

What happens without controls:

Without formal controls, AI governance becomes ad hoc. Different business units develop different practices. Inconsistent risk management creates compliance gaps. Employees lack clear guidance on appropriate use.

The 32% figure is not just a governance gap. It is a value gap. Organisations without controls cannot systematically improve AI effectiveness because they lack the feedback loops that governance provides.

Quick Reference: Shadow AI Discovery Checklist

Use these steps to assess shadow AI in your organisation:

Baseline Assessment:

  • Survey employees anonymously about AI tool usage
  • Analyse network traffic for connections to known AI services
  • Review expense reports for AI-related subscriptions
  • Audit authentication logs for AI service access
  • Interview business unit leaders about team AI practices

Risk Identification:

  • Map shadow AI by department and use case
  • Identify data types flowing through shadow channels
  • Assess regulatory implications by usage type
  • Evaluate security exposure from identified tools
  • Prioritise risks by likelihood and impact
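Prioritisation does not need to be elaborate: a likelihood-times-impact score over the mapped findings is usually enough to order the work. A minimal sketch, with invented scales and example findings:

```python
from dataclasses import dataclass

@dataclass
class ShadowAIFinding:
    department: str
    use_case: str
    likelihood: int  # 1 (rare) to 5 (routine) - an invented scale
    impact: int      # 1 (negligible) to 5 (severe) - an invented scale

    @property
    def risk_score(self) -> int:
        return self.likelihood * self.impact

findings = [
    ShadowAIFinding("analytics", "datasets uploaded to a public chatbot", 4, 5),
    ShadowAIFinding("marketing", "AI-drafted social media posts", 5, 2),
]

# Highest-scoring findings get interim controls first.
for f in sorted(findings, key=lambda f: f.risk_score, reverse=True):
    print(f"{f.risk_score:>2}  {f.department}: {f.use_case}")
```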

Root Cause Analysis:

  • Document why employees chose shadow tools over alternatives
  • Identify gaps in official tool offerings
  • Assess friction in approved tool access
  • Review communication about available resources
  • Understand cultural factors driving concealment

Immediate Actions:

  • Address highest-risk shadow AI with interim controls
  • Communicate current policies clearly to all employees
  • Establish safe reporting channel for AI concerns
  • Begin planning for comprehensive enablement programme

Shadow AI discovery is not a one-time exercise. As new AI tools emerge and employee needs evolve, regular reassessment is essential.

From Visibility to Action

Discovering shadow AI is only the first step. The insight is valuable only if it drives action.

The temptation is to respond with restrictions - blocking services, banning tools, enforcing compliance. This approach fails for the same reasons that drove shadow AI in the first place. Employees have needs that AI addresses. Blocking tools does not eliminate those needs.

The effective response is enablement that makes shadow AI unnecessary. Provide approved tools that meet employee needs. Offer training that builds confidence. Implement governance that enables rather than restricts.

Part 3 provides the framework for this enablement approach - a structured method for building the access, training, governance, and support that make shadow AI obsolete.

The goal is not zero shadow AI. Some experimentation with new tools will always occur, and that experimentation often identifies valuable capabilities. The goal is reducing shadow AI to the point where visibility is achievable and risks are manageable.

Organisations that achieve this balance gain the benefits of AI adoption without the accumulating risks of unmanaged usage.


Addressing Shadow AI in Your Organisation

Understanding your shadow AI landscape is the first step toward effective governance. My IT compliance services help organisations assess current AI usage, identify risks, and develop governance frameworks that enable rather than restrict.

Get in touch to discuss how to gain visibility into AI usage and build controls that actually work.


Previous: Part 1 - Why Business AI Enablement Matters Now

Next: Part 3 - Building Your AI Enablement Framework

About the author

Daniel J Glover

IT Leader with experience spanning IT management, compliance, development, automation, AI, and project management. I write about technology, leadership, and building better systems.
