AI Governance and the EU AI Act: What UK Businesses Need to Know

The EU AI Act is now in force. The general-purpose AI (GPAI) obligations became effective in August 2025, and the high-risk system requirements follow in August 2026. For UK businesses, the instinct may be to dismiss this as European regulation that does not apply domestically. That instinct is wrong. If your AI-powered product or service is used by customers in the EU — or if the outputs your AI systems produce are used within the EU — the AI Act’s obligations reach you regardless of where your servers are located or where your company is incorporated.

This article is not legal advice. It is a practical guide for development teams and technical leaders who need to understand what the EU AI Act requires, how the UK regulatory approach differs, and what concrete steps your engineering organisation should be taking now. At McKenna Consultants, we help businesses implement AI systems with appropriate governance built in from the architecture level, and the patterns described here reflect real-world implementation experience.

The EU AI Act: Structure and Scope

The AI Act takes a risk-based approach to regulation. It classifies AI systems into four tiers, each with different obligations.

Unacceptable Risk (Prohibited)

Certain AI applications are banned outright within the EU. These include social scoring systems, real-time biometric identification in public spaces (with narrow law enforcement exceptions), and AI that exploits vulnerabilities of specific groups. Most UK B2B software companies will not encounter these prohibitions, but it is worth understanding the boundary.

High-Risk AI Systems

This is the category that will affect the largest number of UK technology companies. High-risk AI systems include those used in:

  • Employment and worker management: Automated CV screening, interview analysis, performance monitoring, task allocation
  • Access to essential services: Credit scoring, insurance risk assessment, benefit eligibility determination
  • Education: Automated grading, admission decisions, learning pathway assignment
  • Critical infrastructure: Energy grid management, water treatment, transport systems
  • Law enforcement and justice: Predictive policing, evidence analysis, sentencing support

If your product includes AI capabilities in any of these domains and is used by EU customers, the high-risk obligations apply from August 2026. The requirements are substantial: conformity assessments, technical documentation, risk management systems, human oversight provisions, accuracy and robustness testing, and post-market monitoring.

Limited Risk (Transparency Obligations)

AI systems that interact with people must disclose that they are AI. This covers chatbots, AI-generated content, and emotion recognition systems. The obligation is primarily about transparency rather than technical compliance. Users must know they are interacting with an AI system.

Minimal Risk

AI systems that do not fall into the above categories — spam filters, AI-powered search, recommendation engines — face no specific regulatory obligations under the AI Act, though general product safety and data protection rules still apply.

GPAI Obligations: What Changed in August 2025

The general-purpose AI model obligations are distinct from the risk-based classification above. They apply to the providers of foundation models and large language models — the companies that train and distribute the base models. However, they have downstream implications for every business that builds on top of those models.

For GPAI Model Providers

Providers of general-purpose AI models must now:

  • Maintain and make available technical documentation describing the model’s training process, data sources, and capabilities
  • Provide downstream deployers with sufficient information to comply with their own obligations
  • Implement a copyright compliance policy
  • Publish a sufficiently detailed summary of the training data

Models classified as posing “systemic risk” (broadly, models trained with more than 10^25 FLOPs) face additional obligations including adversarial testing, incident reporting, and cybersecurity assessments.

What This Means for Deployers

If you are building AI features using OpenAI’s GPT models, Anthropic’s Claude, Google’s Gemini, or similar foundation models, you are a “deployer” rather than a “provider.” Your obligations differ, but you are not exempt. You must:

  • Ensure your use of the model complies with the provider’s terms and the documentation they supply
  • Maintain records of how the model is integrated into your product
  • Implement appropriate human oversight for decisions that affect individuals
  • Be transparent with users about AI involvement in outputs that reach them

The practical implication is that your documentation and governance processes must capture how your AI features work end-to-end: which models you use, what prompts and system instructions shape their behaviour, how outputs are validated, and what human review occurs before AI-generated outputs affect real decisions.
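
In practice, much of this can be captured as a structured audit record written whenever an AI feature produces an output that reaches a user or a decision. The sketch below is illustrative rather than prescriptive: the record fields and the log_ai_call helper are hypothetical, and a real implementation would write to your own audit store rather than standard output.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class AICallRecord:
    """One AI-assisted output. Field names are illustrative; adapt them to your own stack."""
    feature: str             # which product feature invoked the model
    model: str               # provider and model identifier, including version
    prompt_version: str      # version of the system prompt or instructions used
    input_reference: str     # pointer to the stored input (avoid logging personal data directly)
    output_reference: str    # pointer to the stored output
    validation: str          # e.g. "automated check passed" or "rejected by validator"
    human_reviewed: bool     # whether a person approved the output before it took effect
    timestamp: str = ""

    def __post_init__(self) -> None:
        if not self.timestamp:
            self.timestamp = datetime.now(timezone.utc).isoformat()

def log_ai_call(record: AICallRecord) -> None:
    # Stdout keeps the sketch runnable; in production, write to a durable audit store.
    print(json.dumps(asdict(record)))
```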

How the UK Regulatory Approach Differs

The UK has explicitly chosen not to replicate the EU AI Act. Instead, the UK government has adopted a “pro-innovation” framework based on five cross-sectoral principles: safety, security, and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.

These principles are not yet enshrined in legislation. They are enforced through existing sector-specific regulators — the FCA for financial services, the ICO for data protection, the CMA for competition, Ofcom for communications. Each regulator interprets and applies the principles within its own domain.

What This Means in Practice

For UK businesses operating purely domestically, the regulatory burden is currently lighter than the EU’s prescriptive requirements. However, there are several reasons not to treat this as a free pass.

Regulatory convergence is likely. The UK government has signalled that binding AI regulation will follow. Building governance structures now means you will not be scrambling to retrofit compliance when UK legislation arrives.

The ICO is already active. The Information Commissioner’s Office has been clear that GDPR applies to AI systems processing personal data. Automated decision-making under Article 22, data protection impact assessments, and the right to explanation all create existing legal obligations for AI systems in the UK.

Procurement requirements. Enterprise customers, particularly in financial services, healthcare, and the public sector, are increasingly requiring AI governance documentation as a procurement condition. Even without prescriptive regulation, market expectations are driving governance requirements.

EU market access. If you sell to EU customers — or plan to — the AI Act applies to you. Building governance to the EU standard from the outset is significantly cheaper than retrofitting it later.

Practical Steps for Technical Teams

This is where the article moves from regulatory overview to engineering practice. The following steps represent the governance infrastructure that development teams should be implementing now.

1. Create an AI System Inventory

You cannot govern what you do not know about. Start by cataloguing every AI system, feature, or component in your product portfolio. For each entry, document:

  • What it does. A plain-language description of the AI capability.
  • Which models it uses. Foundation model provider, model version, fine-tuning status.
  • What data it processes. Input data types, sources, whether it includes personal data.
  • What decisions it influences. Does the output inform a human decision, or does it trigger an automated action?
  • Who it affects. End users, employees, third parties.
  • Risk classification. Based on the EU AI Act categories above, what risk tier does this system fall into?

This inventory becomes the foundation for all subsequent governance activities.
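
The inventory does not need specialist tooling to start with; a structured record per system is enough. The Python sketch below shows one possible shape, with hypothetical field names mirroring the bullet points above and an example entry invented purely for illustration.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List

class RiskTier(Enum):
    """EU AI Act risk categories as discussed above."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemEntry:
    name: str
    description: str              # plain-language description of the capability
    models: List[str]             # foundation model provider, version, fine-tuning status
    data_processed: List[str]     # input data types and sources
    contains_personal_data: bool
    decisions_influenced: str     # informs a human decision, or triggers an automated action?
    affected_parties: List[str]   # end users, employees, third parties
    risk_tier: RiskTier
    owner: str                    # team accountable for the system

# An invented example entry: CV screening is an employment use case, hence high risk.
inventory: List[AISystemEntry] = [
    AISystemEntry(
        name="cv-screening-assistant",
        description="Ranks incoming CVs against a role profile to support shortlisting",
        models=["hosted LLM via provider API, no fine-tuning"],
        data_processed=["CV text", "role descriptions"],
        contains_personal_data=True,
        decisions_influenced="Informs a recruiter's shortlisting decision",
        affected_parties=["job applicants"],
        risk_tier=RiskTier.HIGH,
        owner="recruitment-platform-team",
    )
]
```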

2. Implement Human-in-the-Loop Governance

Human-in-the-loop AI governance is not a checkbox exercise. It requires architectural decisions that shape how your AI features are built.

For high-stakes decisions, the architecture must ensure that a qualified human reviews AI outputs before they take effect. This means:

  • Confidence thresholds. AI outputs below a defined confidence score are automatically routed for human review. Outputs above the threshold may proceed, but are subject to sampling and audit, as sketched after this list.
  • Explanation infrastructure. The human reviewer must be able to understand why the AI produced a particular output. This requires logging the inputs, the model’s reasoning chain (where available), and the key factors that influenced the output.
  • Override mechanisms. Humans must be able to override AI decisions and have those overrides recorded and fed back into system improvement.
  • Escalation paths. When a human reviewer is uncertain, there must be a clear escalation route to a more senior decision-maker.
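
A minimal sketch of the confidence-threshold routing follows, assuming your AI component exposes a confidence score and that you already have a review queue and an audit log to hand. The threshold and sampling rate are placeholders to be tuned for your own use case.

```python
import random
from typing import Any, Callable, Dict

REVIEW_THRESHOLD = 0.85   # placeholder: outputs below this confidence go to a human reviewer
AUDIT_SAMPLE_RATE = 0.05  # placeholder: fraction of auto-approved outputs sampled for audit

def route_output(output: Dict[str, Any], confidence: float,
                 send_to_human_review: Callable[[Dict[str, Any]], None],
                 record_for_audit: Callable[[Dict[str, Any]], None]) -> str:
    """Route an AI output based on its confidence score."""
    if confidence < REVIEW_THRESHOLD:
        send_to_human_review(output)   # a person approves, overrides, or escalates
        return "pending-human-review"
    if random.random() < AUDIT_SAMPLE_RATE:
        record_for_audit(output)       # sampled for after-the-fact audit
    return "auto-approved"
```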

For lower-stakes applications — content recommendations, search ranking, formatting suggestions — the governance model can be lighter, but transparency requirements still apply. Users should know when AI is influencing what they see.

3. Build Technical Documentation

The EU AI Act’s documentation requirements are specific. Even if you are not yet obligated to produce them, building this documentation practice now creates a durable governance asset.

For each AI system, maintain:

  • System architecture documentation. How the AI component fits into the broader product architecture. Data flows, API boundaries, deployment infrastructure.
  • Training and evaluation data documentation. For fine-tuned models, document the training data sources, preprocessing steps, and evaluation metrics. For prompt-engineered systems, document the system prompts, few-shot examples, and evaluation benchmarks.
  • Risk assessment. A structured assessment of potential harms, including bias, accuracy failures, adversarial manipulation, and unintended use cases. Include mitigation measures for each identified risk.
  • Performance metrics. Ongoing measurement of accuracy, precision, recall, fairness metrics, and failure rates. These metrics should be monitored in production, not just evaluated at launch.
  • Change log. A record of every material change to the AI system — model upgrades, prompt changes, training data updates, threshold adjustments — with the rationale for each change.
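
The change log can be as simple as an append-only list of structured entries. The record below is a hypothetical shape, not a standard schema; the point is that every material change carries its rationale and sign-off with it.

```python
from dataclasses import dataclass

@dataclass
class AISystemChange:
    """Hypothetical change-log entry: one record per material change to an AI system."""
    system: str        # inventory name of the AI system
    change_type: str   # "model-upgrade", "prompt-change", "training-data-update", "threshold-adjustment"
    description: str   # what changed, in plain language
    rationale: str     # why the change was made
    evaluation: str    # how the change was validated before release
    approved_by: str   # who signed off
    date: str          # ISO date the change reached production
```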

4. Implement Bias and Fairness Testing

AI systems can produce discriminatory outcomes even when they are not explicitly designed to consider protected characteristics. Bias testing must be a standard part of your development and deployment pipeline.

  • Pre-deployment testing. Evaluate model outputs across demographic groups (where the use case involves decisions about people) to identify disparate impact.
  • Ongoing monitoring. Bias can emerge over time as input data distributions shift. Implement automated monitoring that flags statistical anomalies in outcomes across relevant groups.
  • Remediation process. When bias is detected, have a documented process for investigating the root cause, implementing corrections, and validating that the fix is effective.
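
As an illustration of the pre-deployment check, the sketch below compares selection rates across groups and flags any group whose rate falls below a chosen fraction of the best-performing group. The 0.8 ratio used here is a common screening heuristic (the "four-fifths rule"), not a legal threshold, and the function names are invented for this example.

```python
from collections import defaultdict
from typing import Dict, Iterable, Tuple

def selection_rates(outcomes: Iterable[Tuple[str, bool]]) -> Dict[str, float]:
    """Approval rate per group, computed from (group, approved) pairs."""
    totals: Dict[str, int] = defaultdict(int)
    approvals: Dict[str, int] = defaultdict(int)
    for group, approved in outcomes:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {group: approvals[group] / totals[group] for group in totals}

def flag_disparate_impact(outcomes: Iterable[Tuple[str, bool]],
                          ratio_threshold: float = 0.8) -> Dict[str, float]:
    """Groups whose selection rate falls below ratio_threshold of the best-off group."""
    rates = selection_rates(outcomes)
    if not rates:
        return {}
    best = max(rates.values())
    if best == 0:
        return {}
    return {g: rate / best for g, rate in rates.items() if rate / best < ratio_threshold}
```

The same ratios, recomputed over a rolling window of production outcomes, give you the ongoing monitoring signal described above.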

5. Establish an AI Governance Board

Technical governance must be connected to organisational governance. An AI governance board — which may be a standing committee or a function within an existing risk and compliance structure — provides the decision-making authority for:

  • Approving new AI deployments
  • Reviewing risk assessments
  • Setting policies on acceptable AI use cases
  • Responding to incidents and near-misses
  • Liaising with external regulators and auditors

The board should include technical, legal, and business representatives. Governance that lives exclusively within the engineering team will lack the business context to make proportionate decisions. Governance that lives exclusively within legal will lack the technical understanding to be practical.

AI Governance for Agentic Systems

The rise of agentic AI in enterprise automation — autonomous agents that plan and execute multi-step tasks — introduces new governance challenges that the EU AI Act does not yet fully address but that responsible businesses must consider.

AI agents for business process automation are fundamentally different from single-prompt AI features. They make sequential decisions, use tools, access external systems, and can take actions with real-world consequences. Governance for agentic systems requires:

  • Action boundaries. Define what the agent is permitted to do. Can it send emails? Can it modify database records? Can it authorise expenditure? These boundaries must be enforced at the infrastructure level, not just through prompt instructions.
  • Audit trails. Every action an agent takes must be logged with sufficient detail to reconstruct its reasoning chain. This is essential for both regulatory compliance and debugging.
  • Breakpoints. For high-consequence actions, the agent should pause and request human approval before proceeding. The definition of “high-consequence” should be configurable and regularly reviewed.
  • Rollback capability. Where possible, agent actions should be reversible. Design your integration architecture so that an agent’s mistakes can be undone without manual data surgery.
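
The action-boundary and breakpoint ideas can be combined in a single gate that sits between the agent and the systems it touches. Below is a minimal sketch, assuming every tool call is dispatched through a function like the hypothetical execute_agent_action; the allowlist, handlers, and approval callback are all placeholders to be replaced with your own infrastructure.

```python
from typing import Any, Callable, Dict

# Placeholders: the agent's permitted actions, and which of them need a human breakpoint.
ALLOWED_ACTIONS = {"read_record", "draft_email", "update_record", "authorise_spend"}
REQUIRES_HUMAN_APPROVAL = {"update_record", "authorise_spend"}

def execute_agent_action(action: str, params: Dict[str, Any],
                         handlers: Dict[str, Callable[..., Any]],
                         request_approval: Callable[[str, Dict[str, Any]], bool],
                         audit_log: Callable[[str, Dict[str, Any], str], None]) -> Any:
    """Enforce action boundaries and breakpoints in infrastructure, not in the prompt."""
    if action not in ALLOWED_ACTIONS:
        audit_log(action, params, "blocked: action not permitted")
        raise PermissionError(f"Agent may not perform '{action}'")
    if action in REQUIRES_HUMAN_APPROVAL and not request_approval(action, params):
        audit_log(action, params, "blocked: human approval declined")
        return None
    result = handlers[action](**params)
    audit_log(action, params, "executed")
    return result
```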

Auditing Existing AI Deployments

If your organisation has already deployed AI features — and most technology companies have, whether they have formally catalogued them or not — an audit is the essential first step.

Step 1: Discovery

Identify every AI capability in your product and internal tools. Check for AI features that may have been introduced informally — a developer who added GPT-powered summarisation to an internal tool, a data team that built a classification model for customer support tickets, a marketing team using AI to generate content.

Step 2: Classification

Map each discovered AI capability to the EU AI Act risk categories. Be conservative in your classification. If a system is borderline between limited risk and high risk, classify it as high risk and build governance accordingly.

Step 3: Gap Analysis

For each AI system, compare your current governance posture against the requirements for its risk classification. Document the gaps. Common gaps include: no technical documentation, no bias testing, no human oversight mechanism, no incident response process, insufficient transparency to end users.

Step 4: Remediation Planning

Prioritise gap remediation based on risk classification and exposure. High-risk systems used by EU customers should be addressed first. Build a realistic timeline — governance remediation is not a weekend project — and allocate engineering resources accordingly.

Building Governance Into the Development Lifecycle

The most effective approach to AI governance is not to bolt it on after deployment but to integrate it into your development lifecycle from the start.

  • Requirements phase. Include governance requirements alongside functional requirements. What risk classification does this feature fall into? What documentation is required? What human oversight is needed?
  • Design phase. Architect the AI feature with governance hooks: logging, confidence scoring, human review workflows, and override mechanisms.
  • Testing phase. Include bias testing, accuracy benchmarking, and adversarial testing alongside functional and performance tests.
  • Deployment phase. Ensure documentation is complete and approved by the governance board before the feature reaches production.
  • Operations phase. Monitor accuracy, fairness, and usage metrics in production. Review and update documentation as the system evolves.
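
One way to make the testing- and deployment-phase checks concrete is a release gate run in CI before an AI feature ships. The sketch below assumes your evaluation pipeline already produces a metrics dictionary; the metric names and thresholds are placeholders, not prescribed values.

```python
from typing import Any, Dict, List

ACCURACY_FLOOR = 0.90          # placeholder release threshold
FAIRNESS_RATIO_FLOOR = 0.80    # placeholder minimum selection-rate ratio between groups

def governance_gate(metrics: Dict[str, Any]) -> List[str]:
    """Return reasons to block the release; an empty list means clear to ship."""
    failures: List[str] = []
    accuracy = metrics.get("accuracy", 0.0)
    fairness_ratio = metrics.get("worst_group_selection_ratio", 0.0)
    if accuracy < ACCURACY_FLOOR:
        failures.append(f"accuracy {accuracy:.2f} is below the release floor of {ACCURACY_FLOOR}")
    if fairness_ratio < FAIRNESS_RATIO_FLOOR:
        failures.append(f"worst selection-rate ratio {fairness_ratio:.2f} is below {FAIRNESS_RATIO_FLOOR}")
    if not metrics.get("documentation_approved", False):
        failures.append("technical documentation has not been approved by the governance board")
    return failures
```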

Conclusion

The EU AI Act creates clear, enforceable obligations for any business whose AI systems touch EU customers. The UK’s lighter-touch approach does not eliminate governance requirements — it merely distributes them across existing regulators and, increasingly, across customer procurement expectations.

For development teams, the practical response is to build governance infrastructure now: system inventories, documentation practices, human-in-the-loop architectures, bias testing pipelines, and organisational governance structures. This investment protects your EU market access, positions you well for forthcoming UK regulation, and — most importantly — ensures that your AI systems are reliable, fair, and trustworthy.

McKenna Consultants helps businesses design and implement AI systems with enterprise AI governance built in from the architecture level. Whether you are auditing existing deployments, building new agentic AI capabilities, or preparing for the EU AI Act’s high-risk obligations in August 2026, we bring the technical governance expertise that development teams need. Get in touch to discuss your requirements.
