
AI Coding Assistants in the Enterprise: Governance, Security, and IP for Claude Code, Cursor, and GitHub Copilot

Three years ago, AI coding assistants were a productivity curiosity that a small share of developers used informally. By the start of 2026 they are standard developer tooling. Claude Code, Cursor, GitHub Copilot, JetBrains AI Assistant, Amazon Q Developer, and a long tail of integrated assistants are present in the daily workflow of most enterprise development teams — sometimes officially adopted, often introduced bottom-up by individual developers, occasionally tolerated rather than approved.

The technology has moved faster than enterprise governance. Most large organisations McKenna Consultants engages with have a clear official position on data leaving the corporate boundary, a clear official position on open-source licence compliance, and a clear official position on third-party software in production environments. Almost none of them have a coherent, approved, and operationally enforced position on AI coding assistants — even though those assistants are now reading proprietary source code, generating production code, and influencing security-critical decisions on a daily basis.

This article addresses AI coding assistant enterprise governance: the risks, the policy structure, the configuration baselines, and the operational disciplines that enterprises need to put in place. It is written for CISOs, heads of engineering, legal counsel, and the architecture leads who are typically asked to draft the policy. It is deliberately practical — McKenna delivers AI coding assistant adoption engagements to enterprise customers and the framework here is the framework we use.

Why This Question Has Become Urgent

A few converging pressures have made AI coding assistant governance an immediate priority for most enterprise IT functions.

Adoption has crossed the tipping point. When a tool is used by ten percent of developers it can be governed informally. When it is used by sixty or seventy percent — which is the norm in most enterprises by 2026 — it must be governed formally. The existing policy structures that govern third-party software typically do not cover AI coding assistants cleanly, because the tool’s behaviour straddles development tooling, data processing, and intellectual property creation in ways that older categories did not anticipate.

The capabilities have outgrown the original framing. When the original AI coding assistants were autocomplete on steroids, the governance question was narrow: do the suggestions leak code outside the boundary? With agentic coding assistants — Claude Code being the most prominent — the model now opens shells, edits multiple files, runs tests, and commits to branches. The governance question is now substantially broader.

Regulatory expectations are catching up. The EU AI Act high-risk classifications take effect in August 2026; while AI coding assistants are not generally classified as high-risk, the regulatory atmosphere of 2026 means CISOs are routinely being asked, by boards and auditors, to demonstrate that AI tooling in development is governed.

Visible incidents are accumulating. Through 2024 and 2025, several public incidents involved AI assistants producing or surfacing leaked credentials, proprietary code patterns, or licence-tainted suggestions. The probability that any individual organisation will experience a material incident from poor AI coding assistant governance is now non-trivial.

The question is no longer whether to govern. It is how.

The Five Risk Categories

A coherent governance position starts with a structured view of the risks. Five categories cover the practical surface.

1. Source Code Exfiltration

Some AI coding assistants send your codebase, or fragments of it, to a model provider for processing. Some run entirely locally. Some operate in a hybrid mode where context windows are sent to the model but no persistent storage occurs. The exfiltration risk varies dramatically across tools and across configurations of the same tool.

The governance position must distinguish:

  • Which tools, in which configurations, send code outside the organisation’s boundary at all.
  • Where that boundary is — for cloud-deployed code, the boundary is your cloud tenant; for on-premises code, it is your data centre.
  • What contractual and architectural protections exist when code does leave the boundary (zero data retention agreements, encryption in transit, encryption at rest, model training opt-outs).
  • What classes of code are categorically prohibited from being sent to external models — typically code in repositories holding cryptographic material, customer data, or the most sensitive business logic.

The principle is not “no AI assistance”; it is “AI assistance with explicit data handling guarantees aligned to the sensitivity tier of the code.”
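
To make the categorical prohibition operational rather than purely documentary, some organisations wrap assistant launch in a pre-flight check against a per-repository policy marker. The sketch below illustrates the idea; the .ai-policy file name, its tier field, and the wrapper approach are assumptions for this example, not features of any particular tool.

```python
#!/usr/bin/env python3
"""Illustrative pre-flight check: refuse to launch an external-model assistant
in a repository whose policy tier prohibits it. Assumes a hypothetical
.ai-policy marker at the repository root containing a line like "tier: 1";
neither the marker nor the wrapper is a feature of any real tool."""
import pathlib
import subprocess
import sys

# Tiers whose code must never be sent to an external model.
EXTERNAL_MODELS_PROHIBITED = {1}


def repo_tier(repo_root: pathlib.Path) -> int:
    """Read the hypothetical .ai-policy marker; treat a missing or unreadable
    marker as the most restrictive tier (fail closed)."""
    policy = repo_root / ".ai-policy"
    try:
        for line in policy.read_text().splitlines():
            if line.strip().startswith("tier:"):
                return int(line.split(":", 1)[1])
    except (OSError, ValueError):
        pass
    return 1


def main() -> int:
    args = sys.argv[1:]
    if not args:
        print("usage: ai-wrapper <assistant command> [args...]", file=sys.stderr)
        return 2
    tier = repo_tier(pathlib.Path.cwd())
    if tier in EXTERNAL_MODELS_PROHIBITED:
        print(f"Tier {tier} repository: external-model assistants are prohibited "
              "here. See the AI coding assistant policy.", file=sys.stderr)
        return 1
    # Hand over to whatever approved assistant command was requested.
    return subprocess.call(args)


if __name__ == "__main__":
    sys.exit(main())
```

The useful property is that the check fails closed: a repository without the marker is treated as the most sensitive tier.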

2. Generated Code Provenance and IP Ownership

When an AI coding assistant generates code, who owns it? The simple answer is your organisation, because your developers wrote (or accepted) the code in the course of their employment. The complete answer is more nuanced:

  • Some tools’ terms of service explicitly assign ownership of generated code to the user.
  • Some tools have language about “model training on customer code” that, if enabled, can result in your code patterns appearing in suggestions to other customers.
  • Generated code can incorporate patterns that closely match training-set code under restrictive open-source licences. The legal status of this is unsettled, but the operational answer is the same: your organisation must have a process for screening and addressing it.

The governance position must specify:

  • Which tools’ terms of service have been reviewed and accepted.
  • Whether model training on your code is permitted (almost always: no).
  • What licence-screening process applies to generated code — at minimum, the same open-source licence scanning that applies to manually-written code in your codebase.

3. Third-Party Licence Contamination

Related but distinct: AI coding assistants can suggest patterns that originate from training data under licences incompatible with your codebase. A snippet that reproduces GPL-licensed code in a closed-source application is a problem regardless of who wrote it, but the AI-generated case requires its own controls because the developer is less likely to recognise the snippet’s origin.

The governance position should require:

  • Existing static analysis and licence scanning to run on AI-suggested code with the same rigour as on manually-written code.
  • Specific guidance to developers about reviewing AI-generated suggestions for “verbatim large blocks” (the highest-risk pattern for licence contamination).
  • A reporting mechanism for when developers identify AI-suggested code that closely resembles known third-party patterns.
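
A lightweight supplement for the "verbatim large blocks" pattern is to flag unusually large contiguous additions in a change for manual licence review before the standard scanners run. The following sketch is a heuristic only; the 50-line threshold, the diff-parsing approach, and the comparison against origin/main are assumptions, and it does not replace proper licence scanning.

```python
"""Illustrative heuristic: flag large contiguous blocks of added lines in a
diff for manual licence review. The threshold and comparison base are
assumptions; this supplements, not replaces, real licence scanning."""
import subprocess
import sys

BLOCK_THRESHOLD = 50  # contiguous added lines that trigger a review flag


def large_added_blocks(diff_text: str, threshold: int = BLOCK_THRESHOLD):
    """Yield the lengths of contiguous runs of added lines exceeding the threshold."""
    run = 0
    for line in diff_text.splitlines():
        if line.startswith("+") and not line.startswith("+++"):
            run += 1
        else:
            if run >= threshold:
                yield run
            run = 0
    if run >= threshold:
        yield run


if __name__ == "__main__":
    base = sys.argv[1] if len(sys.argv) > 1 else "origin/main"
    diff = subprocess.run(["git", "diff", base, "--unified=0"],
                          capture_output=True, text=True, check=True).stdout
    blocks = list(large_added_blocks(diff))
    if blocks:
        print(f"{len(blocks)} added block(s) of {BLOCK_THRESHOLD}+ contiguous lines "
              "found; route the change to manual licence review.")
        sys.exit(1)
    print("No unusually large added blocks detected.")
```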

4. Security Review Burden

AI-generated code often compiles, often passes initial tests, and often looks correct — while containing subtle vulnerabilities. SQL injection, cross-site scripting, hard-coded credentials, weak cryptographic primitives, and incorrect authorisation logic appear in AI-generated suggestions at non-zero rates, and many of these patterns are only detectable by careful review.

The governance position should specify:

  • Code review obligations for AI-generated code that are at least equal to those for manually-written code. Some organisations apply higher obligations to AI-generated code in security-critical paths.
  • Mandatory SAST (static application security testing) coverage for repositories that accept AI-generated contributions.
  • Specific guidance for high-risk code patterns where AI suggestions are prohibited outright (cryptographic primitives, authentication flows, anything dealing with raw credentials).
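
Alongside mandatory SAST, some teams add a deliberately simple pre-commit check for the most obvious instances of the patterns named above. The sketch below is such a check; its regular expressions are illustrative assumptions and will miss far more than they catch, so it supplements rather than replaces the SAST requirement.

```python
"""Illustrative pre-commit check for a handful of the high-risk patterns named
above (hard-coded credentials, weak cryptographic primitives). The patterns
are deliberately crude examples; mandatory SAST coverage still applies."""
import re
import subprocess
import sys

SUSPECT_PATTERNS = {
    "possible hard-coded credential": re.compile(
        r"(password|secret|api[_-]?key)\s*=\s*['\"][^'\"]+['\"]", re.I),
    "weak hash primitive": re.compile(r"\b(md5|sha1)\s*\(", re.I),
}


def staged_files() -> list[str]:
    """Return the paths staged for the current commit."""
    out = subprocess.run(["git", "diff", "--cached", "--name-only"],
                         capture_output=True, text=True, check=True).stdout
    return [f for f in out.splitlines() if f]


def main() -> int:
    findings = []
    for path in staged_files():
        try:
            text = open(path, encoding="utf-8", errors="ignore").read()
        except OSError:
            continue
        for label, pattern in SUSPECT_PATTERNS.items():
            for match in pattern.finditer(text):
                findings.append(f"{path}: {label}: {match.group(0)[:60]}")
    for finding in findings:
        print(finding, file=sys.stderr)
    return 1 if findings else 0


if __name__ == "__main__":
    sys.exit(main())
```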

5. Audit Trail and Reproducibility

Regulated industries — financial services, healthcare, public sector — operate under audit obligations that pre-date AI coding assistants. The governance position must address:

  • Whether AI-generated code is identifiable as such in the codebase. (Most enterprise tools support metadata or commit conventions that mark AI-generated contributions; some organisations choose not to expose this in the code itself but maintain an out-of-band record.)
  • Whether the AI tool’s invocation parameters, prompts, and responses are retained — and for how long, where, and under what access control.
  • How this evidence is produced if requested by an auditor, regulator, or in legal discovery.

For organisations subject to formal regulatory audit, a clear position on this point is non-negotiable.
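
Where the out-of-band record takes the form of commit metadata, a trailer convention plus a small extraction script is usually enough to produce the evidence on demand. The AI-Assisted trailer name below is an illustrative convention this sketch assumes, not an established standard.

```python
"""Illustrative audit extraction: list commits carrying a hypothetical
AI-Assisted trailer so evidence can be produced for an auditor on request.
The trailer name is a convention this sketch assumes, not a standard."""
import csv
import subprocess
import sys


def ai_assisted_commits(since: str = "1 year ago"):
    """Yield (sha, author, date, tool) for commits carrying the assumed trailer."""
    log = subprocess.run(
        ["git", "log", f"--since={since}",
         "--format=%H%x1f%an%x1f%aI%x1f%(trailers:key=AI-Assisted,valueonly)"],
        capture_output=True, text=True, check=True).stdout
    for record in log.splitlines():
        if not record.strip():
            continue
        sha, author, date, trailer = (record.split("\x1f") + [""])[:4]
        if trailer.strip():
            yield sha, author, date, trailer.strip()


if __name__ == "__main__":
    writer = csv.writer(sys.stdout)
    writer.writerow(["commit", "author", "date", "tool"])
    for row in ai_assisted_commits():
        writer.writerow(row)
```

Under this assumed convention, a commit made with assistant involvement would carry a line such as "AI-Assisted: claude-code" in its message, added by the developer or by tooling, and the script produces a CSV of those commits for the audit file.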

Tier Differences: Consumer-Tier vs Enterprise-Tier Deployments

Every major AI coding assistant has at least two deployment tiers, and the security posture differences are substantial. Critical points to know:

Claude Code. Anthropic offers Claude Code as part of the Claude developer platform. Enterprise deployments use Anthropic’s API with zero data retention agreements, the option to use Amazon Bedrock or Google Cloud Vertex AI as the model serving infrastructure, and the option to deploy Claude through customer-controlled cloud accounts. Consumer-tier use of Claude Code carries different terms; Anthropic’s commercial API terms include explicit commitments against using customer prompts for model training, but the contractual and operational details vary by tier and should be reviewed as part of tool approval.

GitHub Copilot. GitHub offers Copilot Business and Copilot Enterprise tiers in addition to Copilot Individual. Enterprise tiers offer zero data retention, optional codebase awareness with controlled scope, and integration with the broader GitHub Enterprise governance model. Individual subscriptions sit outside the corporate governance boundary entirely and should be assumed to be off-limits for any enterprise codebase work.

Cursor. Cursor offers Business and Enterprise plans with privacy-mode configuration, zero data retention, and SAML SSO integration for centralised identity. The non-enterprise tiers operate under different terms.

JetBrains AI Assistant and Amazon Q Developer have similar tiered structures.

The pattern is universal: the tier substantially changes the governance posture. The operational implication is that enterprise governance must specify the permitted tiers, not just the permitted tools, and must back this with configuration enforcement.

A Reference Policy Structure

The policy framework McKenna recommends to enterprise clients has six sections.

Section 1: Scope and Definitions

What counts as an “AI coding assistant” for the purposes of this policy. The scope should explicitly include autocomplete-style assistants (Copilot, Cursor inline), agentic assistants (Claude Code, Cursor Composer), conversational coding interfaces (claude.ai, chatgpt.com when used for code), and IDE-integrated chat assistants. It should explicitly exclude general office productivity AI tools (which are governed by a separate policy).

Section 2: Permitted Tools and Tiers

A specific list of approved tools, the approved tier for each, and the configuration baseline that must be applied. This section should be reviewed quarterly, because the tooling landscape changes faster than annual review cycles support.

Section 3: Repository Tiering

Not all code carries the same sensitivity. A working classification:

  • Tier 1 (highest sensitivity): Cryptographic implementations, authentication and authorisation core code, customer data processing logic, regulated workloads (PCI-DSS, HIPAA, equivalent). Restricted AI assistant use, often limited to local-only models.
  • Tier 2 (high sensitivity): Production application code, infrastructure-as-code, database migration code. Approved enterprise-tier AI assistants permitted with strict configuration.
  • Tier 3 (general): Application development, internal tooling, prototypes, documentation. Approved AI assistants permitted in standard configuration.

The policy should specify the AI assistant configuration that applies in each tier.
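
One way to make the tier-to-configuration mapping enforceable is to express it as data that enforcement and audit tooling can consume, rather than prose alone. The field names, tool identifiers, and values below are assumptions for illustration; the authoritative baseline is the approved tools and tiers in Section 2.

```python
"""Illustrative tier-to-configuration mapping expressed as data. Field names,
tool identifiers, and values are assumptions for this sketch."""
from dataclasses import dataclass


@dataclass(frozen=True)
class TierConfig:
    external_models_allowed: bool       # may code leave the boundary at all?
    approved_tools: tuple[str, ...]     # tool-plus-tier identifiers (illustrative)
    zero_data_retention_required: bool
    training_on_code_permitted: bool = False  # effectively always False


TIER_BASELINES = {
    1: TierConfig(external_models_allowed=False,
                  approved_tools=("local-only-model",),
                  zero_data_retention_required=True),
    2: TierConfig(external_models_allowed=True,
                  approved_tools=("claude-code-enterprise", "copilot-enterprise"),
                  zero_data_retention_required=True),
    3: TierConfig(external_models_allowed=True,
                  approved_tools=("claude-code-enterprise", "copilot-enterprise",
                                  "cursor-business"),
                  zero_data_retention_required=True),
}


def is_tool_permitted(tier: int, tool: str) -> bool:
    """Fail closed: an unknown tier is treated as Tier 1."""
    baseline = TIER_BASELINES.get(tier, TIER_BASELINES[1])
    return tool in baseline.approved_tools
```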

Section 4: Generated Code Review Requirements

Code review obligations for AI-generated contributions, with specific attention to security-critical patterns. The position McKenna typically recommends is that AI-generated code receives at least the standard review process plus a documentation note in the commit (or PR description) marking its provenance.

Section 5: Developer Training and Awareness

Training is non-negotiable. Developers using AI coding assistants need to understand:

  • Which configurations are mandatory and why.
  • The risk patterns that require human review.
  • The reporting process for suspected licence contamination, security issues, or other policy concerns.
  • The boundary cases — for example, that pasting code into the consumer-tier of any tool is a policy violation regardless of the tool’s general approval status.

A short, mandatory training module — typically 30 minutes, refreshed annually — is the standard McKenna recommends.

Section 6: Audit and Review

A defined audit cycle, with specific control points:

  • Quarterly review of tool configurations to ensure enterprise-tier baselines are maintained.
  • Random sampling of repositories for evidence of policy compliance (configuration files, commit conventions).
  • Annual review of the policy itself in light of tooling changes.
  • Specific incident response procedures for suspected violations.
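
For the random-sampling control point, a short script run each quarter can select repositories at random and record whether the expected policy artefacts are present. The artefact names checked below (the .ai-policy file and the AI-Assisted commit trailer) follow the illustrative conventions used in the earlier sketches and are assumptions, not standards.

```python
"""Illustrative quarterly sampling: pick repositories at random and record
whether the expected policy artefacts are present. Artefact names follow the
hypothetical conventions assumed in the earlier sketches."""
import pathlib
import random
import subprocess

SAMPLE_SIZE = 5


def has_ai_assisted_commits(repo: pathlib.Path) -> bool:
    """True if at least one commit message carries the assumed AI-Assisted trailer."""
    out = subprocess.run(["git", "-C", str(repo), "log", "--grep=^AI-Assisted:",
                          "--oneline", "-n", "1"],
                         capture_output=True, text=True)
    return bool(out.stdout.strip())


def sample_repositories(root: pathlib.Path, k: int = SAMPLE_SIZE):
    """Pick up to k repositories at random from clones under the given root."""
    repos = [p.parent for p in root.glob("*/.git")]
    return random.sample(repos, min(k, len(repos)))


if __name__ == "__main__":
    root = pathlib.Path("/srv/git")  # assumed location of mirrored clones
    for repo in sample_repositories(root):
        policy_present = (repo / ".ai-policy").exists()
        print(f"{repo.name}: policy file={'yes' if policy_present else 'no'}, "
              f"AI-Assisted trailers={'yes' if has_ai_assisted_commits(repo) else 'no'}")
```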

The Build-vs-Buy-vs-Self-Host Question

A growing share of enterprise customers ask whether they should self-host AI coding assistants — running open-source models on internal infrastructure rather than relying on cloud API tools. The honest answer is “rarely, at least for now.”

The case for self-hosting is straightforward: data never leaves your boundary, you control the model and its updates, you can audit every interaction. For organisations with the most extreme data sensitivity requirements (intelligence services, certain defence and critical-national-infrastructure environments), this can be the only acceptable answer.

The case against self-hosting is also straightforward: the open-source models that can run on internal infrastructure are typically a generation behind the frontier models that power Claude Code, Copilot, and Cursor. Developer productivity benefits scale with model capability, and the gap in capability is large. For organisations where the cloud tools’ enterprise-tier protections are sufficient, self-hosting trades a meaningful capability disadvantage for a marginal additional security benefit.

For most enterprise customers, the right answer is enterprise-tier cloud tools with appropriate configuration, repository tiering, and review obligations — with self-hosted models reserved for the highest-sensitivity tier.

What McKenna Delivers in This Space

McKenna Consultants delivers two distinct engagements around AI coding assistants for enterprise customers:

Policy and adoption engagements. We work with the CISO, head of engineering, and legal counsel to draft the AI coding assistant policy, define the tiered configuration baselines, specify the developer training, and stand up the audit cycle. The output is a production-ready policy and the operational artefacts to enforce it. Typical engagement length: six to ten weeks.

Technical implementation engagements. We work with engineering teams to apply the policy: configure the tools, deploy enterprise-tier subscriptions, integrate with the corporate identity model (SAML SSO, zero data retention, model selection), and instrument the audit evidence. We also deliver developer training where it has not already been arranged. Typical engagement length: four to eight weeks, often run in parallel with the policy engagement.

We are an experienced AI consultancy and the McKenna engineering team uses these tools daily — including Claude Code in our own engineering workflow. We bring practitioner experience to the governance question, not a purely advisory perspective.

If your organisation is drafting or reviewing its AI coding assistant policy in 2026, contact us to discuss the engagement model.
