AI First Governance and the EU AI Act: What Businesses Need to Know

The EU AI Act is now in force. The general-purpose AI (GPAI) obligations became effective in August 2025, and the high-risk system requirements follow in August 2026. For UK businesses, the instinct may be to dismiss this as European regulation that does not apply domestically. That instinct is wrong. If your AI-powered product or service is used by customers in the EU, or if the outputs your AI systems produce are used within the EU, the AI Act's obligations reach you regardless of where your servers are located or where your company is incorporated.

Artificial intelligence was established as an academic discipline in 1956 and has since moved through cycles of optimism and disappointment before the rapid advances of recent years. Modern AI rests on artificial neural networks and, in particular, deep neural networks whose multiple hidden layers can model complex relationships in data; these architectures underpin today's deep learning applications. AI algorithms now power a wide range of products and services, enabling automation, data analysis, and personalized experiences. Generative AI can produce not only text and images but also video, audio, and software code. AI systems excel at specific, well-bounded tasks, such as playing games, operating in industry-specific applications, or executing short, goal-oriented actions. In transportation, AI enables autonomous vehicles to perceive their environment and make complex driving decisions; in marketing, it helps brands develop more engaging customer experiences. The knowledge a model gains from training data improves its performance and, to a degree, its ability to transfer to new problems, and specialized models and training approaches have been developed for tasks such as mathematical problem solving. The Turing test remains a historical benchmark for evaluating machine intelligence, and while AI systems can simulate emotions convincingly, they do not possess genuine consciousness.

This new reality requires organizations to adapt quickly to the rapid advances in AI and the societal shifts they bring. Becoming an AI-first organization means fundamentally redesigning strategies, workflows, and culture around AI capabilities: rethinking jobs, developing new skills, and securing early wins. Bold practitioners within organizations are already experimenting with AI and driving innovation, and some leaders have mandated AI experimentation for all employees to ensure the change reaches the whole organization. The chief digital officer plays a key role in leading digital transformation and leveraging emerging technologies, as seen in the coffee giant's mobile payment solution, which combined an innovative payment method with brand strategy. Tech visionaries and former chief digital officers have shaped digital transformation and technology strategy, inspiring companies to embrace AI-driven change.

This article is not legal advice. It is a practical guide for development teams and technical leaders who need to understand what the EU AI Act requires, how the UK regulatory approach differs, and what concrete steps your engineering organisation should be taking now. At McKenna Consultants, we help businesses implement AI systems with appropriate governance built in from the architecture level, and the patterns described here reflect real-world implementation experience.

Introduction to Artificial Intelligence

Artificial intelligence (AI) is transforming the way businesses operate, enabling computer systems to perform tasks that once required human intelligence—such as problem solving, decision making, and learning from experience. At its core, artificial intelligence leverages advanced algorithms and vast amounts of data to analyze information, recognize complex patterns, and make predictions, often without being explicitly programmed for every scenario.

AI systems come in many forms, from machine learning models that identify trends in data, to natural language processing tools that understand and generate human language, to computer vision systems that interpret images and video. AI researchers continue to push the boundaries of what these systems can achieve, drawing inspiration from the human brain and the intricacies of human intelligence to develop artificial neural networks and multi-layer deep learning architectures capable of tackling a broad range of real-world applications.

One of the most significant developments of recent years is the rise of generative AI. Generative AI tools, such as large language models, can create new content (text, images, music, and even computer code) at minimal marginal cost. These applications are already reshaping marketing, enabling creative professionals and marketers to launch campaigns, analyze data, and engage customers in ways that once belonged to science fiction. AI agents, including virtual assistants, can now perform tasks such as scheduling, customer support, and data analysis, using natural language processing and machine learning to interact with users in a human-like manner.

The rapid rise of AI has been fueled by advances in computing power, the availability of massive training datasets, and breakthroughs in deep learning and neural networks. Tech leaders such as Microsoft CEO Satya Nadella and OpenAI's leadership have made bold claims about the potential of artificial intelligence to reshape industries and society. From self-driving cars that use computer vision and deep neural networks to navigate roads, to the coffee giant's AI-powered mobile payment and loyalty programs, the impact of AI is already visible in daily life.

However, the deployment of AI systems is not without challenges. AI models can inadvertently perpetuate algorithmic bias if trained on skewed or incomplete data, leading to unfair or discriminatory outcomes. There are also concerns about job displacement as AI tools and autonomous agents take on repetitive tasks and even some creative or problem-solving roles previously reserved for humans. As AI systems become more capable, and speculation grows about artificial general intelligence, businesses must grapple with the implications for their workforce, brand strategy, and long-term competitiveness.

Despite these challenges, the benefits of adopting an AI-first strategy are significant. AI can help organizations future-proof their operations, unlock new opportunities for innovation, and solve complex problems in fields ranging from healthcare and finance to eCommerce and education. Companies that embrace an AI-first approach, investing in AI research, integrating AI tools into their platforms, and reimagining how they solve problems, are well positioned to achieve early wins and thrive through digital transformation.

For both large enterprises and smaller businesses, the adoption of AI is no longer optional. Whether deploying AI-powered chatbots to improve customer service, using machine learning to analyze new data and optimize marketing campaigns, or building agentic AI systems to automate business processes, the possibilities are vast. By understanding the fundamentals of artificial intelligence and its potential applications, organizations can rethink their brand strategy, engage customers more creatively, and lead in an AI-first market.

As the rest of this article will explore, the rapid evolution of AI brings new regulatory and governance challenges. Understanding the basics of AI is the first step toward building responsible, scalable, and secure AI systems that deliver real value—while navigating the complex landscape of compliance, risk, and opportunity.

The EU AI Act: Structure and Scope

The AI Act takes a risk-based approach to regulation. It classifies AI systems into four tiers, each with different obligations.

Unacceptable Risk (Prohibited)

Certain AI applications are banned outright within the EU. These include social scoring systems, real-time biometric identification in public spaces (with narrow law enforcement exceptions), and AI that exploits vulnerabilities of specific groups. Most UK B2B software companies will not encounter these prohibitions, but it is worth understanding the boundary.

High-Risk AI Systems

This is the category that will affect the largest number of UK technology companies. High-risk AI systems include those used in:

  • Employment and worker management: Automated CV screening, interview analysis, performance monitoring, task allocation

  • Access to essential services: Credit scoring, insurance risk assessment, benefit eligibility determination

  • Education: Automated grading, admission decisions, learning pathway assignment

  • Critical infrastructure: Energy grid management, water treatment, transport systems

  • Law enforcement and justice: Predictive policing, evidence analysis, sentencing support

If your product includes AI capabilities in any of these domains and is used by EU customers, the high-risk obligations apply from August 2026. The requirements are substantial: conformity assessments, technical documentation, risk management systems, human oversight provisions, accuracy and robustness testing, and post-market monitoring.

Limited Risk (Transparency Obligations)

AI systems that interact with people must disclose that they are AI. This covers chatbots, AI-generated content, and emotion recognition systems. The obligation is primarily about transparency rather than technical compliance. Users must know they are interacting with an AI system.

Minimal Risk

AI systems that do not fall into the above categories — spam filters, AI-powered search, recommendation engines — face no specific regulatory obligations under the AI Act, though general product safety and data protection rules still apply.

Generative AI and GPAI Obligations: What Changed in August 2025

The general-purpose AI model obligations are distinct from the risk-based classification above. They apply to the providers of foundation models and large language models — the companies that train and distribute the base models. However, they have downstream implications for every business that builds on top of those models.

For GPAI Model Providers

Providers of general-purpose AI models must now:

  • Maintain and make available technical documentation describing the model’s training process, data sources, and capabilities

  • Provide downstream deployers with sufficient information to comply with their own obligations

  • Implement a copyright compliance policy

  • Publish a sufficiently detailed summary of the training data

Models classified as posing “systemic risk” (broadly, models trained with more than 10^25 FLOPs) face additional obligations including adversarial testing, incident reporting, and cybersecurity assessments.

What This Means for Deployers

If you are building AI features using OpenAI’s GPT models, Anthropic’s Claude, Google’s Gemini, or similar foundation models, you are a “deployer” rather than a “provider.” Your obligations differ, but you are not exempt. You must:

  • Ensure your use of the model complies with the provider’s terms and the documentation they supply

  • Maintain records of how the model is integrated into your product

  • Implement appropriate human oversight for decisions that affect individuals

  • Be transparent with users about AI involvement in outputs that reach them

The practical implication is that your documentation and governance processes must capture how your AI features work end-to-end: which models you use, what prompts and system instructions shape their behaviour, how outputs are validated, and what human review occurs before AI-generated outputs affect real decisions.
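One lightweight way to capture that end-to-end picture is a structured record per AI feature, kept in version control alongside the code. The sketch below is illustrative only, in Python for concreteness; the AIFeatureRecord class, its field names, and the example values are our own assumptions rather than anything the AI Act prescribes.

    # Illustrative sketch: the class, field names, and example values are assumptions,
    # not terminology defined by the EU AI Act.
    from dataclasses import dataclass, field

    @dataclass
    class AIFeatureRecord:
        """Describes how one AI feature works end-to-end."""
        feature_name: str                  # internal identifier for the feature
        model_provider: str                # e.g. "OpenAI", "Anthropic", "Google"
        model_version: str                 # exact model identifier pinned in production
        system_prompt_ref: str             # version-controlled reference to the prompt
        output_validation: list[str] = field(default_factory=list)  # automated checks on outputs
        human_review: str = ""             # when and by whom outputs are reviewed
        user_disclosure: str = ""          # how users are told AI is involved

    record = AIFeatureRecord(
        feature_name="support-ticket-summariser",         # hypothetical example
        model_provider="Anthropic",
        model_version="claude-sonnet-x",                   # placeholder; pin the real version
        system_prompt_ref="prompts/ticket_summary/v3.yaml",
        output_validation=["schema check", "PII redaction scan"],
        human_review="Weekly sample reviewed by the support team lead",
        user_disclosure="Summary labelled 'AI-generated' in the agent console",
    )

Keeping records like this in the repository means they can be reviewed in pull requests and updated whenever the model, prompt, or validation logic changes.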

How the UK Regulatory Approach Differs

The UK has explicitly chosen not to replicate the EU AI Act. Instead, the UK government has adopted a “pro-innovation” framework based on five cross-sectoral principles: safety, security, and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.

These principles are not yet enshrined in legislation. They are enforced through existing sector-specific regulators — the FCA for financial services, the ICO for data protection, the CMA for competition, Ofcom for communications. Each regulator interprets and applies the principles within its own domain.

What This Means in Practice

For UK businesses operating purely domestically, the regulatory burden is currently lighter than the EU’s prescriptive requirements. However, there are several reasons not to treat this as a free pass.

Regulatory convergence is likely. The UK government has signalled that binding AI regulation will follow. Building governance structures now means you will not be scrambling to retrofit compliance when UK legislation arrives.

The ICO is already active. The Information Commissioner’s Office has been clear that GDPR applies to AI systems processing personal data. Automated decision-making under Article 22, data protection impact assessments, and the right to explanation all create existing legal obligations for AI systems in the UK.

Procurement requirements. Enterprise customers, particularly in financial services, healthcare, and the public sector, are increasingly requiring AI governance documentation as a procurement condition. Even without prescriptive regulation, market expectations are driving governance requirements.

EU market access. If you sell to EU customers — or plan to — the AI Act applies to you. Building governance to the EU standard from the outset is significantly cheaper than retrofitting it later.

Practical Steps for Technical Teams

This is where the article moves from regulatory overview to engineering practice. The following steps represent the governance infrastructure that development teams should be implementing now.

1. Create an AI System Inventory

You cannot govern what you do not know about. Start by cataloguing every AI system, feature, or component in your product portfolio. For each entry, document:

  • What it does. A plain-language description of the AI capability.

  • Which models it uses. Foundation model provider, model version, fine-tuning status.

  • What data it processes. Input data types, sources, whether it includes personal data.

  • What decisions it influences. Does the output inform a human decision, or does it trigger an automated action?

  • Who it affects. End users, employees, third parties.

  • Risk classification. Based on the EU AI Act categories above, what risk tier does this system fall into?

This inventory becomes the foundation for all subsequent governance activities.
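As a starting point, the inventory itself can be a structured file in version control rather than a spreadsheet that drifts out of date. A minimal sketch, assuming illustrative field names of our own choosing and risk-tier labels that mirror the Act's categories:

    # Minimal inventory sketch; the enum values mirror the AI Act's risk tiers,
    # all other names are illustrative assumptions.
    from dataclasses import dataclass
    from enum import Enum

    class RiskTier(Enum):
        PROHIBITED = "unacceptable risk"
        HIGH = "high risk"
        LIMITED = "limited risk (transparency)"
        MINIMAL = "minimal risk"

    @dataclass
    class AISystemEntry:
        name: str                    # plain-language identifier
        description: str             # what the AI capability does
        models: list[str]            # foundation models, versions, fine-tuning status
        data_processed: list[str]    # input data types; flag personal data explicitly
        decisions_influenced: str    # informs a human decision or triggers automated action?
        affected_parties: list[str]  # end users, employees, third parties
        risk_tier: RiskTier

    inventory = [
        AISystemEntry(
            name="cv-screening-assistant",   # hypothetical example entry
            description="Ranks inbound CVs against job descriptions",
            models=["gpt-4o (no fine-tuning)"],
            data_processed=["CV text (personal data)", "job descriptions"],
            decisions_influenced="Shortlisting recommendation reviewed by a recruiter",
            affected_parties=["job applicants"],
            risk_tier=RiskTier.HIGH,         # employment use cases sit in the high-risk tier
        ),
    ]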

2. Implement Human-in-the-Loop Governance

Human-in-the-loop AI governance is not a checkbox exercise. It requires architectural decisions that shape how your AI features are built.

For high-stakes decisions, the architecture must ensure that a qualified human reviews AI outputs before they take effect. This means:

  • Confidence thresholds. AI outputs below a defined confidence score are automatically routed for human review. Outputs above the threshold may proceed, but are subject to sampling and audit.

  • Explanation infrastructure. The human reviewer must be able to understand why the AI produced a particular output. This requires logging the inputs, the model’s reasoning chain (where available), and the key factors that influenced the output.

  • Override mechanisms. Humans must be able to override AI decisions and have those overrides recorded and fed back into system improvement.

  • Escalation paths. When a human reviewer is uncertain, there must be a clear escalation route to a more senior decision-maker.

For lower-stakes applications — content recommendations, search ranking, formatting suggestions — the governance model can be lighter, but transparency requirements still apply. Users should know when AI is influencing what they see.
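To make the confidence-threshold pattern concrete, here is a minimal routing sketch. The threshold, the audit sampling rate, and the function names are illustrative assumptions; real systems should calibrate these values against measured error rates.

    # Illustrative human-in-the-loop routing; thresholds and names are assumptions.
    import logging
    import random

    logger = logging.getLogger("ai_governance")

    REVIEW_THRESHOLD = 0.85   # below this, always route to a human reviewer
    AUDIT_SAMPLE_RATE = 0.05  # fraction of auto-approved outputs sampled for audit

    def route_output(output_id: str, confidence: float) -> str:
        """Decide whether an AI output proceeds automatically or goes to human review."""
        if confidence < REVIEW_THRESHOLD:
            logger.info("%s routed to human review (confidence=%.2f)", output_id, confidence)
            return "human_review"
        if random.random() < AUDIT_SAMPLE_RATE:
            logger.info("%s auto-approved, sampled for audit (confidence=%.2f)", output_id, confidence)
            return "auto_approved_audited"
        return "auto_approved"

    def record_override(output_id: str, reviewer: str, reason: str) -> None:
        """Log a human override so it can be fed back into system improvement."""
        logger.warning("Override of %s by %s: %s", output_id, reviewer, reason)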

3. Build Technical Documentation

The EU AI Act’s documentation requirements are specific. Even if you are not yet obligated to produce them, building this documentation practice now creates a durable governance asset.

For each AI system, maintain:

  • System architecture documentation. How the AI component fits into the broader product architecture. Data flows, API boundaries, deployment infrastructure.

  • Training and evaluation data documentation. For fine-tuned models, document the training data sources, preprocessing steps, and evaluation metrics. For prompt-engineered systems, document the system prompts, few-shot examples, and evaluation benchmarks.

  • Risk assessment. A structured assessment of potential harms, including bias, accuracy failures, adversarial manipulation, and unintended use cases. Include mitigation measures for each identified risk.

  • Performance metrics. Ongoing measurement of accuracy, precision, recall, fairness metrics, and failure rates. These metrics should be monitored in production, not just evaluated at launch.

  • Change log. A record of every material change to the AI system — model upgrades, prompt changes, training data updates, threshold adjustments — with the rationale for each change.
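The change log in particular benefits from being machine-readable, so entries can be generated or validated automatically when the system changes. A minimal sketch, with field names and an example entry that are our own assumptions:

    # Minimal change-log entry sketch; field names and the example are illustrative.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class AISystemChange:
        changed_on: date
        change_type: str     # e.g. "model upgrade", "prompt change", "threshold adjustment"
        description: str     # what changed, in plain language
        rationale: str       # why the change was made
        approved_by: str     # governance sign-off
        evaluation_ref: str  # pointer to the benchmark run that validated the change

    change = AISystemChange(
        changed_on=date(2025, 9, 1),     # hypothetical entry
        change_type="threshold adjustment",
        description="Raised the human-review confidence threshold from 0.80 to 0.85",
        rationale="Audit sampling showed an elevated error rate near the old threshold",
        approved_by="AI governance board, September minutes",
        evaluation_ref="evals/run-142",
    )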

4. Implement Bias and Fairness Testing

AI systems can produce discriminatory outcomes even when they are not explicitly designed to consider protected characteristics. Bias testing must be a standard part of your development and deployment pipeline.

  • Pre-deployment testing. Evaluate model outputs across demographic groups (where the use case involves decisions about people) to identify disparate impact.

  • Ongoing monitoring. Bias can emerge over time as input data distributions shift. Implement automated monitoring that flags statistical anomalies in outcomes across relevant groups.

  • Remediation process. When bias is detected, have a documented process for investigating the root cause, implementing corrections, and validating that the fix is effective.
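As a simple illustration of pre-deployment testing, the sketch below compares favourable-outcome rates across groups and flags any group whose rate falls below a fixed fraction of the best-performing group's rate. The 0.8 threshold (echoing the "four-fifths" rule of thumb) and the record layout are assumptions; real fairness testing should use metrics chosen for the specific use case.

    # Simplified disparate-impact check; the 0.8 threshold and record layout are assumptions.
    from collections import defaultdict

    def selection_rates(records: list[dict]) -> dict[str, float]:
        """Compute the favourable-outcome rate per group from decision records."""
        totals, favourable = defaultdict(int), defaultdict(int)
        for r in records:
            totals[r["group"]] += 1
            favourable[r["group"]] += 1 if r["favourable"] else 0
        return {g: favourable[g] / totals[g] for g in totals}

    def disparate_impact_flags(records: list[dict], threshold: float = 0.8) -> list[str]:
        """Flag groups whose rate is below `threshold` times the highest group's rate."""
        rates = selection_rates(records)
        best = max(rates.values())
        return [g for g, rate in rates.items() if best > 0 and rate / best < threshold]

    # Example with synthetic records: group B's rate is half of group A's, so B is flagged.
    records = [
        {"group": "A", "favourable": True}, {"group": "A", "favourable": True},
        {"group": "B", "favourable": True}, {"group": "B", "favourable": False},
    ]
    print(disparate_impact_flags(records))  # ['B']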

5. Establish an AI Governance Board

Technical governance must be connected to organisational governance. An AI governance board — which may be a standing committee or a function within an existing risk and compliance structure — provides the decision-making authority for:

  • Approving new AI deployments

  • Reviewing risk assessments

  • Setting policies on acceptable AI use cases

  • Responding to incidents and near-misses

  • Liaising with external regulators and auditors

The board should include technical, legal, and business representatives. Governance that lives exclusively within the engineering team will lack the business context to make proportionate decisions. Governance that lives exclusively within legal will lack the technical understanding to be practical.

AI Governance for AI Agents and Agentic Systems

The rise of agentic AI in enterprise automation — autonomous agents that plan and execute multi-step tasks — introduces new governance challenges that the EU AI Act does not yet fully address but that responsible businesses must consider.

AI agents for business process automation are fundamentally different from single-prompt AI features. They make sequential decisions, use tools, access external systems, and can take actions with real-world consequences. Governance for agentic systems requires:

  • Action boundaries. Define what the agent is permitted to do. Can it send emails? Can it modify database records? Can it authorise expenditure? These boundaries must be enforced at the infrastructure level, not just through prompt instructions.

  • Audit trails. Every action an agent takes must be logged with sufficient detail to reconstruct its reasoning chain. This is essential for both regulatory compliance and debugging.

  • Breakpoints. For high-consequence actions, the agent should pause and request human approval before proceeding. The definition of “high-consequence” should be configurable and regularly reviewed.

  • Rollback capability. Where possible, agent actions should be reversible. Design your integration architecture so that an agent’s mistakes can be undone without manual data surgery.
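At the infrastructure level, action boundaries usually reduce to an allow-list plus an approval gate in front of high-consequence tools. The sketch below shows one possible shape for such a gate; the tool names, the policy sets, and the injected request_approval and run_tool callables (standing in for your approval workflow and integration layer) are all assumptions.

    # Illustrative agent action gate; tool names and policy sets are assumptions.
    import logging

    logger = logging.getLogger("agent_audit")

    ALLOWED_TOOLS = {"search_tickets", "draft_email", "send_email", "update_record"}
    REQUIRES_APPROVAL = {"send_email", "update_record"}  # high-consequence actions pause here

    class ActionNotPermitted(Exception):
        pass

    def execute_action(tool: str, args: dict, request_approval, run_tool) -> dict:
        """Enforce action boundaries, pause for approval where required, and audit every call."""
        if tool not in ALLOWED_TOOLS:
            logger.error("Blocked disallowed tool call: %s %s", tool, args)
            raise ActionNotPermitted(tool)
        if tool in REQUIRES_APPROVAL and not request_approval(tool, args):
            logger.info("Approval declined for %s %s", tool, args)
            return {"status": "declined"}
        result = run_tool(tool, args)                            # the actual integration call
        logger.info("Executed %s %s -> %s", tool, args, result)  # audit trail entry
        return {"status": "executed", "result": result}

Because the gate sits in the execution path rather than in the prompt, a confused or manipulated agent cannot talk its way past it.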

Auditing Existing AI Deployments

If your organisation has already deployed AI features — and most technology companies have, whether they have formally catalogued them or not — an audit is the essential first step.

Step 1: Discovery

Identify every AI capability in your product and internal tools. Check for AI features that may have been introduced informally — a developer who added GPT-powered summarisation to an internal tool, a data team that built a classification model for customer support tickets, a marketing team using AI to generate content.

Step 2: Classification

Map each discovered AI capability to the EU AI Act risk categories. Be conservative in your classification. If a system is borderline between limited risk and high risk, classify it as high risk and build governance accordingly.
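The "be conservative" rule can be encoded directly so that borderline calls default upwards rather than downwards. A minimal sketch, assuming an illustrative (and deliberately non-exhaustive) list of high-risk domains:

    # Conservative classification helper; the domain list is illustrative, not exhaustive.
    HIGH_RISK_DOMAINS = {
        "employment", "credit", "insurance", "education",
        "critical infrastructure", "law enforcement",
    }

    def classify(domains: set[str], interacts_with_people: bool) -> str:
        """Default upwards: any high-risk domain makes the whole system high risk."""
        if domains & HIGH_RISK_DOMAINS:
            return "high risk"
        if interacts_with_people:
            return "limited risk (transparency)"
        return "minimal risk"

    print(classify({"customer support", "employment"}, interacts_with_people=True))  # high risk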

Step 3: Gap Analysis

For each AI system, compare your current governance posture against the requirements for its risk classification. Document the gaps. Common gaps include: no technical documentation, no bias testing, no human oversight mechanism, no incident response process, insufficient transparency to end users.

Step 4: Remediation Planning

Prioritise gap remediation based on risk classification and exposure. High-risk systems used by EU customers should be addressed first. Build a realistic timeline — governance remediation is not a weekend project — and allocate engineering resources accordingly.

Building Governance Into the Development Lifecycle

The most effective approach to AI governance is not to bolt it on after deployment but to integrate it into your development lifecycle from the start.

  • Requirements phase. Include governance requirements alongside functional requirements. What risk classification does this feature fall into? What documentation is required? What human oversight is needed?

  • Design phase. Architect the AI feature with governance hooks: logging, confidence scoring, human review workflows, and override mechanisms.

  • Testing phase. Include bias testing, accuracy benchmarking, and adversarial testing alongside functional and performance tests.

  • Deployment phase. Ensure documentation is complete and approved by the governance board before the feature reaches production.

  • Operations phase. Monitor accuracy, fairness, and usage metrics in production. Review and update documentation as the system evolves.
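One way to make the deployment-phase gate enforceable rather than aspirational is a pre-release check in CI that fails the build when governance artefacts are missing. The sketch below is illustrative; the file paths and the list of required artefacts are assumptions about how a team might lay out its repository.

    # Illustrative CI governance gate; paths and required artefacts are assumptions.
    import sys
    from pathlib import Path

    REQUIRED_ARTEFACTS = [
        "governance/system_architecture.md",
        "governance/risk_assessment.md",
        "governance/bias_test_report.md",
        "governance/board_approval.md",
    ]

    def governance_gate(repo_root: str = ".") -> int:
        """Fail the build if any required governance artefact is missing."""
        missing = [p for p in REQUIRED_ARTEFACTS if not (Path(repo_root) / p).exists()]
        if missing:
            print("Governance gate failed; missing artefacts:")
            for p in missing:
                print(f"  - {p}")
            return 1
        print("Governance gate passed.")
        return 0

    if __name__ == "__main__":
        sys.exit(governance_gate())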

The EU AI Act creates clear, enforceable obligations for any business whose AI systems touch EU customers. The UK’s lighter-touch approach does not eliminate governance requirements — it merely distributes them across existing regulators and, increasingly, across customer procurement expectations.

For development teams, the practical response is to build governance infrastructure now: system inventories, documentation practices, human-in-the-loop architectures, bias testing pipelines, and organisational governance structures. This investment protects your EU market access, positions you well for forthcoming UK regulation, and — most importantly — ensures that your AI systems are reliable, fair, and trustworthy.

McKenna Consultants helps businesses design and implement AI systems with enterprise AI governance built in from the architecture level. Whether you are auditing existing deployments, building new agentic AI capabilities, or preparing for the EU AI Act’s high-risk obligations in August 2026, we bring the technical governance expertise that development teams need. Get in touch to discuss your requirements.
