
AI Governance: Controls That Work

12 min read
Written by Daniel J Glover

Practical perspective from an IT leader working across operations, security, automation, and change.

Published 8 January 2026



This is Part 6 of a 7-part series on Business AI Enablement for IT Leaders. The series covers why enablement matters, shadow AI risks, building an enablement framework, employee training, tool selection, and concludes with a 90-day implementation roadmap.


Governance has an image problem. To many employees, governance means bureaucracy, delays, and the word "no" repeated in various forms. This reputation is often deserved.

But AI governance done well looks different. It provides clarity that enables faster decision-making. It creates guardrails that allow confident action. It removes uncertainty that otherwise paralyses adoption.

The 68% of organisations without formal AI controls are not more innovative than those with them. They are simply operating blind, accumulating risks they cannot see, and missing opportunities to learn from experience.

This article provides a governance framework that enables rather than restricts.

The Governance Paradox

Governance and enablement seem opposed. More rules mean less freedom. Tighter controls mean slower action. This framing is wrong.

Consider an analogy. Traffic lights slow individual vehicles at intersections. But traffic lights enable higher overall throughput because they eliminate the chaos of uncoordinated intersections. The constraint creates capability.

AI governance works the same way. Clear rules about data handling eliminate the uncertainty that makes employees hesitant. Defined approval paths are faster than ad hoc escalation. Established verification processes catch errors before they cause damage.

Governance that fails:

  • Prohibits without providing alternatives
  • Requires approval for everything regardless of risk
  • Changes frequently without clear communication
  • Punishes compliance failures without addressing root causes
  • Creates burdens disproportionate to risks addressed

Governance that works:

  • Enables safe action by clarifying boundaries
  • Matches controls to actual risk levels
  • Remains stable with predictable updates
  • Treats failures as learning opportunities
  • Creates minimal overhead for low-risk activities

The goal is not minimal governance. The goal is appropriate governance: controls proportionate to risk that enable productive work.

The AI Policy Framework

Every organisation needs a foundational AI policy. This document establishes principles, defines responsibilities, and points to detailed guidance for specific areas.

Policy Structure

An effective AI policy includes:

Purpose and scope. Why the policy exists and who it covers. This should emphasise enablement alongside risk management.

Principles. The values guiding AI use in the organisation:

  • Transparency about when AI is used
  • Human responsibility for AI outputs
  • Privacy and data protection priority
  • Continuous learning and improvement
  • Safety and quality standards

Roles and responsibilities. Who is accountable for what:

  • Executive sponsor for AI enablement
  • IT responsibility for approved tools and security
  • Business unit responsibility for appropriate use
  • Individual employee responsibility for policy adherence
  • Champions and their support role

Approved tools. Reference to the authorised tool catalogue with access procedures.

Prohibited uses. Clear boundaries on what is not permitted:

  • Using non-approved tools for business purposes
  • Processing prohibited data types with AI
  • Automated decisions without human review for specified categories
  • Representing AI output as human work where disclosure is required

Compliance requirements. How the policy relates to regulations:

  • GDPR and data protection obligations
  • Industry-specific requirements
  • Emerging AI regulations

Enforcement and exceptions. How violations are addressed and how exceptions are requested:

  • Progressive response to violations
  • No-blame approach to good-faith errors
  • Exception request process with criteria
  • Appeals mechanism

Review and updates. How the policy stays current:

  • Annual review minimum
  • Trigger events for interim updates
  • Communication of changes

Policy Principles

Several principles make AI policies effective:

Clarity over comprehensiveness. A shorter policy that employees actually read and understand beats a comprehensive policy they ignore. Link to detailed guidance rather than including everything.

Principles over rules. Rules cannot cover every situation. Principles help employees make good decisions when specific guidance is absent.

Enable by default. The policy should help employees do things, not primarily stop them. Prohibitions should be few and justified.

Proportionate enforcement. Minor infractions should not receive the same treatment as serious violations. Good faith matters.

Data Classification for AI Inputs

Data classification determines what information can be used with which AI tools. Without clear classification, employees either avoid AI entirely (missing value) or use it recklessly (creating risk).

Classification Tiers

  • Public: information intended for public distribution. AI use: any approved tool.
  • Internal: business information not intended for public disclosure. AI use: enterprise-grade tools with data protection agreements.
  • Confidential: sensitive information requiring protection. AI use: restricted tools with enhanced controls; may require approval.
  • Prohibited: information that must not be processed by external AI. AI use: no external AI processing permitted.

Public data includes published content, marketing materials, and information already in the public domain. This data can be used with any approved AI tool without restriction.

Internal data includes business information, internal communications, and operational data. Enterprise-grade tools with appropriate data protection agreements can process this data.

Confidential data requires heightened protection. This may include customer personal information, strategic plans, financial projections, and proprietary methods. Use with AI requires either restricted tools with enhanced controls or case-by-case approval.

Prohibited data must not be processed by external AI under any circumstances. This category typically includes:

  • Highly sensitive personal data (health records, financial details)
  • Trade secrets and critical intellectual property
  • Security credentials and access keys
  • Information subject to specific regulatory restrictions

Classification in Practice

Employees need practical guidance to classify data quickly:

  • Provide examples for each classification tier relevant to common work
  • Create decision trees that help with ambiguous cases
  • Establish escalation paths when classification is uncertain
  • Train on classification as part of AI enablement programmes

The goal is confident, rapid classification that does not slow work but does prevent inappropriate exposure.
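The tier-to-guidance mapping above can be sketched as a simple lookup. This is a hypothetical illustration, not a real policy engine: the names `ALLOWED_TOOLS` and `ai_use_guidance` are invented here, and the key design point is that an unknown classification escalates rather than defaulting to a permissive tier.

```python
# Hypothetical sketch of the four-tier classification guidance as a lookup.
# Tier names follow the article; helper names are illustrative.

ALLOWED_TOOLS = {
    "public": "any approved tool",
    "internal": "enterprise-grade tools with data protection agreements",
    "confidential": "restricted tools with enhanced controls; may require approval",
    "prohibited": "no external AI processing permitted",
}

def ai_use_guidance(tier: str) -> str:
    """Return the AI-use guidance for a classification tier.

    Uncertain or unrecognised classifications escalate rather than
    defaulting to a permissive tier, matching the escalation-path advice.
    """
    guidance = ALLOWED_TOOLS.get(tier.lower())
    if guidance is None:
        return "unclassified: escalate before using AI"
    return guidance
```

Encoding the guidance as data rather than prose also makes it easy to publish the same tiers in training material, decision trees, and tooling without drift.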

Output Verification and Quality Controls

AI outputs are not automatically trustworthy. Verification requirements should match the risk of using unverified output.

Verification Levels

Level 1: Spot Check (Low Risk)

For internal productivity outputs where errors have limited consequences:

  • Review output for obvious errors
  • Confirm general alignment with intent
  • No formal documentation required

Examples: Internal email drafts, meeting notes, personal research summaries.

Level 2: Quality Review (Medium Risk)

For outputs that influence decisions or reach internal audiences:

  • Verify factual claims
  • Check logical consistency
  • Confirm alignment with organisational standards
  • Brief documentation of review

Examples: Internal reports, analysis summaries, policy drafts.

Level 3: Expert Review (High Risk)

For outputs affecting customers or significant decisions:

  • Subject matter expert verification
  • Comprehensive fact-checking
  • Consistency review against standards
  • Documented sign-off

Examples: Customer communications, published content, code deployment.

Level 4: Formal Validation (Critical Risk)

For outputs with significant business or regulatory implications:

  • Multi-person review
  • Compliance verification
  • Full documentation and audit trail
  • Leadership sign-off

Examples: Financial statements, legal documents, regulatory submissions.
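The four verification levels can be expressed as a risk-keyed table, as in this sketch. The risk labels and step lists are illustrative abbreviations of the article's levels; the one deliberate behaviour is that an unrecognised risk rating defaults upward to formal validation, never downward.

```python
# Illustrative mapping from output risk to verification level.
# Step lists are abbreviated from the article; names are a sketch.

VERIFICATION_LEVELS = {
    "low": {"level": 1, "name": "Spot Check",
            "steps": ["review for obvious errors", "confirm intent"],
            "documented": False},
    "medium": {"level": 2, "name": "Quality Review",
               "steps": ["verify factual claims", "check consistency",
                         "confirm organisational standards"],
               "documented": True},
    "high": {"level": 3, "name": "Expert Review",
             "steps": ["SME verification", "comprehensive fact-checking",
                       "consistency review", "documented sign-off"],
             "documented": True},
    "critical": {"level": 4, "name": "Formal Validation",
                 "steps": ["multi-person review", "compliance verification",
                           "audit trail", "leadership sign-off"],
                 "documented": True},
}

def required_verification(risk: str) -> dict:
    """Unknown risk ratings default up to critical, never down."""
    return VERIFICATION_LEVELS.get(risk.lower(), VERIFICATION_LEVELS["critical"])
```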

Common Verification Failures

Training should address frequent verification mistakes:

  • Trusting confident tone. AI presents incorrect information as confidently as correct information. Confidence is not an accuracy signal.

  • Skipping source verification. AI may cite sources that do not exist or do not support claimed conclusions. Sources require independent verification.

  • Assuming consistency. AI may contradict itself within the same output. Long outputs need internal consistency review.

  • Overlooking omissions. AI may not mention important considerations it does not know about. Outputs are not comprehensive without explicit prompting.

Monitoring and Compliance Enforcement

Governance without monitoring is just documentation. Effective monitoring provides visibility without surveillance overreach.

What to Monitor

Usage patterns. Aggregate analytics on AI tool adoption:

  • Number of active users
  • Frequency of use
  • Use case patterns
  • Tool-specific adoption

This data informs capacity planning, training priorities, and tool evaluation. It does not require content inspection.

Policy indicators. Signals that suggest policy issues:

  • Access attempts to non-approved tools
  • Unusual data volumes
  • Off-hours usage patterns
  • Error rates suggesting untrained users

These indicators prompt investigation without requiring comprehensive surveillance.
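A minimal indicator check over aggregate statistics might look like the sketch below. The field names and thresholds are invented for illustration; real monitoring would tune thresholds against an observed baseline rather than hard-code them. Note that every signal is pattern-level, so no content inspection is needed.

```python
# Hypothetical indicator check over aggregate usage statistics.
# Field names and thresholds are illustrative, not a real schema.

def policy_indicators(stats: dict) -> list:
    """Return indicator flags that warrant investigation, using
    pattern-level signals only (no content inspection)."""
    flags = []
    if stats.get("blocked_tool_attempts", 0) > 0:
        flags.append("access attempts to non-approved tools")
    if stats.get("daily_upload_mb", 0) > 500:    # illustrative threshold
        flags.append("unusual data volumes")
    if stats.get("off_hours_sessions", 0) > 10:  # illustrative threshold
        flags.append("off-hours usage pattern")
    if stats.get("error_rate", 0.0) > 0.25:      # may indicate untrained users
        flags.append("elevated error rate")
    return flags
```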

Compliance events. Specific incidents requiring attention:

  • Reported violations
  • Security incidents involving AI
  • Customer complaints related to AI use
  • Audit findings

Monitoring Boundaries

Monitoring should respect employee privacy and trust:

  • Be transparent. Employees should know what is monitored and why.
  • Focus on patterns, not content. Usage statistics are less intrusive than content inspection.
  • Investigate with cause. Deep inspection should be triggered by indicators, not applied universally.
  • Protect whistleblowers. Employees reporting concerns should not face retaliation.

Heavy-handed monitoring damages the trust that effective AI enablement requires. The goal is sufficient visibility for governance, not comprehensive surveillance.

Enforcement Approach

Policy violations need appropriate response. The approach should be:

Proportionate. Minor first-time violations do not warrant the same response as repeated serious violations.

Educational. Many violations result from misunderstanding, not malice. Response should include training.

Consistent. Similar violations should receive similar responses regardless of who commits them.

Documented. Actions taken should be recorded for consistency and appeals.

Progressive. Responses should escalate for repeated violations:

  1. Informal guidance and training
  2. Formal warning with documentation
  3. Restricted access pending retraining
  4. Disciplinary action as per HR policy

No-blame for good faith. Employees who make genuine mistakes while trying to work appropriately should be supported, not punished.
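The progressive ladder above reduces to a lookup keyed by prior documented violations, with good-faith errors bypassing it entirely. This is a sketch of the logic, not a disciplinary tool; the function and list names are invented here.

```python
# Sketch of the progressive-response ladder. Good-faith errors bypass
# the ladder entirely, per the no-blame principle. Names are illustrative.

ESCALATION_LADDER = [
    "informal guidance and training",
    "formal warning with documentation",
    "restricted access pending retraining",
    "disciplinary action per HR policy",
]

def enforcement_response(prior_violations: int, good_faith: bool) -> str:
    """Return the next response step for a policy violation."""
    if good_faith:
        return "support and retrain; no disciplinary record"
    # Clamp to the final rung rather than indexing past the ladder.
    step = min(prior_violations, len(ESCALATION_LADDER) - 1)
    return ESCALATION_LADDER[step]
```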

Incident Response for AI Failures

AI-related incidents will occur. Preparation enables effective response.

Incident Categories

Data exposure. Sensitive information shared with inappropriate AI tools.

  • Immediate assessment of data involved
  • Vendor notification if relevant
  • Regulatory reporting if required
  • Affected party notification if necessary

Quality failures. AI outputs that caused business problems.

  • Document the failure and impact
  • Identify root cause
  • Implement verification improvements
  • Update training if needed

Security incidents. Compromises involving AI tools or data.

  • Standard security incident response applies
  • Additional focus on data scope and AI-specific factors
  • Vendor involvement as appropriate

Compliance violations. Regulatory requirements breached through AI use.

  • Legal and compliance engagement
  • Regulatory notification as required
  • Remediation planning
  • Control enhancements

Incident Response Process

A structured process ensures consistent handling:

  1. Detection and reporting. Clear channels for identifying and escalating issues.

  2. Initial assessment. Rapid evaluation of scope and severity.

  3. Containment. Immediate actions to prevent further impact.

  4. Investigation. Understanding what happened and why.

  5. Remediation. Addressing immediate damage and preventing recurrence.

  6. Review. Learning lessons and improving controls.

  7. Documentation. Recording the incident for compliance and learning.

As I explored in the incident response discussion in the CISO series, effective incident response is a core organisational capability.
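The seven steps above can be sketched as an incident record that only advances through stages in order, which is one simple way to keep handling consistent. The class and field names are illustrative assumptions, not a reference to any real incident tooling.

```python
# The seven-step process, sketched as an incident record that advances
# through stages in order. Class and field names are illustrative.

STAGES = ["detection", "assessment", "containment", "investigation",
          "remediation", "review", "documentation"]

class AIIncident:
    def __init__(self, category: str, summary: str):
        self.category = category   # e.g. "data exposure", "quality failure"
        self.summary = summary
        self.stage_index = 0
        self.log = ["opened: " + summary]

    @property
    def stage(self) -> str:
        return STAGES[self.stage_index]

    def advance(self, note: str) -> str:
        """Record a note for the current stage, then move to the next.

        Documentation is terminal: further notes attach there, so the
        record for compliance and learning is never skipped.
        """
        self.log.append(self.stage + ": " + note)
        if self.stage_index < len(STAGES) - 1:
            self.stage_index += 1
        return self.stage
```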

Quick Reference: AI Governance Policy Template

Use this template as a starting point for your organisation's AI policy:

Purpose

  • Statement of policy intent emphasising enablement and safety
  • Scope covering all employees and AI use

Principles

  • Transparency in AI use
  • Human accountability for outputs
  • Data protection priority
  • Continuous improvement commitment

Approved Tools

  • Reference to tool catalogue
  • Access request procedures
  • Criteria for new tool requests

Data Classification

  • Four-tier classification with descriptions
  • Examples for each tier
  • AI use guidance by tier
  • Escalation for uncertain classification

Acceptable Use

  • Permitted use cases
  • Prohibited uses with rationale
  • Output verification requirements
  • Attribution and disclosure requirements

Roles and Responsibilities

  • Executive sponsor
  • IT responsibilities
  • Business unit responsibilities
  • Individual responsibilities
  • Champion role

Compliance

  • Regulatory alignment statements
  • Audit and reporting requirements
  • Training requirements

Enforcement

  • Violation response framework
  • Exception request process
  • Appeals mechanism

Governance

  • Policy owner
  • Review schedule
  • Change communication process
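The template can also be held as a machine-readable skeleton, for example to track which sections a draft policy has actually covered. This is purely illustrative; the structure mirrors the headings above and the helper name is invented.

```python
# The policy template as a machine-readable skeleton (illustrative).
# Useful for checking a draft's coverage, not a substitute for the policy.

POLICY_TEMPLATE = {
    "purpose": ["intent statement", "scope"],
    "principles": ["transparency", "human accountability",
                   "data protection", "continuous improvement"],
    "approved_tools": ["tool catalogue reference", "access procedures",
                       "new tool request criteria"],
    "data_classification": ["four tiers", "examples per tier",
                            "AI use guidance", "escalation path"],
    "acceptable_use": ["permitted use cases", "prohibited uses",
                       "verification requirements", "disclosure requirements"],
    "roles": ["executive sponsor", "IT", "business units",
              "individuals", "champions"],
    "compliance": ["regulatory alignment", "audit requirements", "training"],
    "enforcement": ["violation framework", "exceptions", "appeals"],
    "governance": ["policy owner", "review schedule", "change communication"],
}

def missing_sections(draft_sections: set) -> set:
    """Return template sections the draft has not yet addressed."""
    return set(POLICY_TEMPLATE) - draft_sections
```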

Governance Evolution

AI governance is not static. As AI capabilities evolve, governance must adapt.

Near-Term Developments

Regulatory pressure. The EU AI Act and emerging regulations will create new compliance requirements. Governance frameworks need to accommodate these requirements.

Agentic AI. AI systems that take autonomous actions raise governance questions current frameworks do not address. Decision authority, override mechanisms, and accountability need clarification.

Embedded AI. As AI becomes invisible within other tools, governance must account for AI use employees may not recognise as AI.

Governance Maturity

Organisations progress through governance maturity levels:

Level 1: Ad Hoc. No formal governance. Decisions made case by case.

Level 2: Defined. Policies exist but are inconsistently applied.

Level 3: Managed. Policies are consistently enforced with monitoring and improvement.

Level 4: Optimised. Governance continuously improves based on experience and external developments.

Most organisations are at Level 1 or 2. The framework in this article targets Level 3, with processes for progressing to Level 4.

Building Governance Capability

Effective governance requires investment:

  • Dedicated ownership. Someone accountable for AI governance as a significant responsibility.

  • Cross-functional coordination. Regular engagement across IT, legal, compliance, HR, and business units.

  • Continuous learning. Staying current with regulatory, technology, and industry developments.

  • Employee engagement. Governance that does not consider employee needs will not be followed.

Governance is a capability, not a document. The documents enable the capability but are not the capability themselves.


Developing Your AI Governance Framework

Building governance that enables rather than restricts requires balancing business needs with risk management. My IT compliance services help organisations develop AI governance frameworks that support productive adoption while maintaining appropriate controls.

Get in touch to discuss how to build AI governance that works.


Previous: Part 5 - Selecting AI Tools for Business Units

Next: Part 7 - AI Enablement: Your 90-Day Roadmap


About the author


Daniel J Glover

IT Leader with experience spanning IT management, compliance, development, automation, AI, and project management. I write about technology, leadership, and building better systems.

