
Rolling Out Claude Code to a PE Portfolio Without Losing the Audit Trail

Deploy Claude Code across PE portfolio companies securely. SSO, allowlists, transcript retention, and SOC 2 evidence for audit-ready AI governance.

The PADISO Team · 2026-05-02


Table of Contents

  1. Why Claude Code Matters for PE Portfolios
  2. The Audit Trail Problem
  3. Core Governance Architecture
  4. Single Sign-On (SSO) and Identity Management
  5. Allowlists and Access Control
  6. Hook-Enforced Policies
  7. Transcript Retention and Logging
  8. Building SOC 2 Evidence
  9. Implementation Roadmap
  10. Common Pitfalls and How to Avoid Them
  11. Measuring Success and Compliance
  12. Next Steps

Why Claude Code Matters for PE Portfolios

Private equity firms are deploying AI-powered code generation tools across their portfolios to accelerate time-to-market, reduce engineering costs, and unlock competitive advantages. Claude Code, backed by Claude 3.5 Sonnet, has emerged as a leading choice for portfolio-wide rollouts because of its advanced coding capabilities, tool-use architecture, and enterprise-grade safety features.

But here’s the tension: PE firms need speed and cost reduction, yet their limited partners and exit buyers demand ironclad governance, audit trails, and compliance proof. If you roll out Claude Code without proper controls, you risk:

  • Audit failures at exit: Buyers conducting technology due diligence will reject portfolios without documented AI governance and transcript retention.
  • Regulatory exposure: SOC 2 Type II auditors will flag missing access logs, policy enforcement, and code provenance.
  • IP leakage: Engineers using Claude Code might inadvertently expose proprietary code, secrets, or customer data to third-party API calls.
  • Uncontrolled sprawl: Without allowlists and SSO, teams spin up parallel Claude instances, fragmenting governance and creating blind spots.

The solution isn’t to ban Claude Code—it’s to architect governance that enables safe, auditable adoption. This guide walks you through the exact pattern PADISO uses to help PE firms and their portfolio companies roll out Claude Code at scale whilst maintaining the audit trail that buyers and regulators expect.

The Audit Trail Problem

Most PE firms underestimate how deeply AI tooling now factors into due diligence. When you’re exiting a portfolio company, limited partners and strategic buyers will ask:

  • Who used Claude Code, when, and for what?
  • What code was generated, and did it touch production systems?
  • Where are the transcripts and decision logs?
  • How do you prove this tool didn’t compromise security or IP?

Without answers, you’ll face extended diligence timelines, valuation haircuts, or deal kills. According to PwC’s AI Predictions 2026, audit trails and governance transparency are now table stakes for AI-driven tools in financial services and PE portfolios.

The audit trail problem breaks into three layers:

1. Identity and Access Layer: You must know who accessed Claude Code, from which IP, at what time, and with what credentials. SSO and multi-factor authentication (MFA) are non-negotiable.

2. Activity and Transcript Layer: Every prompt, every code output, every tool invocation must be logged and retained. This isn’t optional—it’s the difference between passing and failing a SOC 2 Type II audit.

3. Policy Enforcement Layer: You can’t rely on engineers to “remember” to follow policies. Policies must be enforced at the API level, before Claude Code even runs. This is where hook-enforced policies come in.

Without all three layers working together, you have governance theatre, not actual governance. And auditors know the difference.

Core Governance Architecture

Here’s the pattern that works: a three-tier architecture that sits between your engineers and Claude’s API, enforcing identity, policy, and audit logging at every step.

Engineer → SSO Provider (Okta / Azure AD) → API Gateway / Proxy → Policy Engine → Claude API
                                                    │
                                                    ▼
                                             Logging Service
                                                    │
                                                    ▼
                                           Transcript Database

Tier 1: Identity (SSO). All access to Claude Code flows through your enterprise SSO provider. No direct API keys. No shared credentials. Every session is tied to a named individual and MFA-verified.

Tier 2: Policy Engine. Before any request reaches Claude’s API, it passes through a policy engine that checks:

  • Is this user on the allowlist for this model/feature?
  • Does this prompt violate content policies (e.g., “no secrets in prompts”)?
  • Is this request coming from an approved IP or VPN?
  • Has the user acknowledged the data handling agreement?

Tier 3: Logging and Retention. Every request and response is logged to a tamper-proof, immutable store. Logs include:

  • User identity, timestamp, IP address
  • Full prompt and response transcript
  • Tool calls and external API invocations
  • Outcome (success, blocked, error)
  • Data classification (e.g., “contains customer PII”)

This architecture is not novel—it’s the same pattern used for financial trading systems, healthcare data platforms, and other highly regulated environments. The novelty is applying it to Claude Code at portfolio scale.

Single Sign-On (SSO) and Identity Management

SSO is your foundation. Without it, you have no identity, no audit trail, and no way to enforce policies.

Choosing Your SSO Provider

Most PE firms already use Okta, Azure AD (Entra ID), or Google Workspace. Pick the one you already have; don’t introduce new identity silos. If you’re using Okta, you’ll integrate Claude Code access through Okta’s application catalogue or a custom SAML/OIDC connector.

Okta Integration:

  • Create a new application in Okta for “Claude Code Enterprise.”
  • Configure SAML 2.0 or OIDC (OIDC is preferred for modern apps).
  • Set up group-based access control: only users in the “Engineering” or “Data” groups get Claude Code access.
  • Enforce MFA (push notification or TOTP) for all Claude Code sessions.
  • Log all authentication events to your SIEM (Splunk, Datadog, or similar).

Azure AD Integration:

  • Register Claude Code as an enterprise application in Azure AD.
  • Use conditional access policies to enforce MFA, trusted device checks, and geographic restrictions (e.g., allow access only from office IPs or approved VPNs).
  • Sync group memberships from Azure AD to your API gateway so policies can reference them.
  • Enable sign-in risk detection to flag anomalous access patterns.

MFA and Session Management

SSO alone isn’t enough. Layer on MFA:

  • For interactive use: Require push notification or TOTP (time-based one-time password) on every session initiation.
  • For API/CI-CD use: Use short-lived tokens (15–60 minutes) issued by your SSO provider. Revoke tokens on logout or after inactivity.
  • Session timeout: Set idle session timeout to 30 minutes for interactive use, 5 minutes for API use.

This prevents credential stuffing and limits the blast radius if a token is compromised.

Attribute-Based Access Control (ABAC)

SSO gives you identity. ABAC gives you fine-grained control. Configure your SSO provider to emit custom attributes in the SAML or OIDC token:

User: alice@acmecorp.com
Department: Engineering
Role: Senior Engineer
CostCenter: 4521
DataClassification: CanAccessCustomerData
Team: Platform

Your policy engine then uses these attributes to decide what Claude Code features are available. For example:

  • Only users with DataClassification: CanAccessCustomerData can use Claude Code on customer databases.
  • Only users in the Platform team can use Claude Code to modify infrastructure-as-code.

This is far more scalable than managing individual allowlists and adapts automatically as people move teams.
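The ABAC decision described above can be sketched in a few lines. This is a minimal illustration, assuming the SSO token’s custom attributes arrive as a plain dictionary; the claim names mirror the example above and the feature names are hypothetical, not a fixed API.

```python
# Sketch of an ABAC decision over SSO token attributes.
# Claim names follow the example above; feature names are illustrative.
def can_use_feature(claims: dict, feature: str) -> bool:
    """Return True if the user's token attributes permit the feature."""
    rules = {
        # Feature -> predicate over token claims
        "customer-db-access": lambda c: c.get("DataClassification") == "CanAccessCustomerData",
        "infra-as-code": lambda c: c.get("Team") == "Platform",
    }
    check = rules.get(feature)
    return bool(check and check(claims))

claims = {
    "Department": "Engineering",
    "Role": "Senior Engineer",
    "DataClassification": "CanAccessCustomerData",
    "Team": "Platform",
}
print(can_use_feature(claims, "customer-db-access"))  # True
print(can_use_feature(claims, "infra-as-code"))       # True
```

Because the rules reference attributes rather than named individuals, a team move in the SSO provider updates access automatically with no allowlist edits.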

Allowlists and Access Control

Not everyone needs Claude Code. Rolling it out to your entire portfolio without guardrails is like handing everyone root access to production—it feels empowering until it isn’t.

Building Your Allowlist

Start narrow, expand thoughtfully. Your initial allowlist should include:

  1. Pilot teams: 2–3 high-trust engineering teams with strong security practices.
  2. Security and compliance leads: They’ll help design the governance model.
  3. Architects and tech leads: They’ll define safe use cases and review generated code.

For each pilot team, document:

  • Use cases: What problems are they solving with Claude Code? (e.g., “boilerplate generation,” “test suite creation,” “documentation”)
  • Data sensitivity: What data will they feed to Claude Code? (e.g., “internal code only,” “no customer data,” “sanitised logs”)
  • Review process: Who approves generated code before it ships to production?
  • Rollback plan: If something goes wrong, how do you revert?

Tiered Access Levels

Instead of binary (allowed/not allowed), use tiers:

Tier 1 (Sandbox): Users can experiment with Claude Code on non-production code and data. No secrets, no customer data, no infrastructure-as-code. This is your onboarding tier.

Tier 2 (Development): Users can use Claude Code on internal codebases and development databases. Still no production access, but higher trust.

Tier 3 (Production): Users can use Claude Code on production systems, customer data, and critical infrastructure. Requires explicit approval, SOC 2 training, and code review.

Tier 4 (Unrestricted): Only for architects and security leads who design and audit Claude Code usage across the portfolio.

Users start at Tier 1 and earn promotion based on demonstrated maturity. This creates accountability and prevents the “everyone gets everything” problem.

Preventing Allowlist Bypass

Engineers are creative. They’ll try to:

  • Use a personal Anthropic or OpenAI account (you can’t prevent this, but you can detect it via network monitoring).
  • Share credentials with colleagues (your logging should flag this).
  • Spin up Claude Code in a different region or cloud account (enforce organisation-wide API keys, not user keys).

Mitigate with:

  • Network monitoring: Flag outbound traffic to OpenAI, Anthropic, or other LLM APIs outside your approved gateway.
  • Code scanning: Use static analysis to detect API keys in commits and block them.
  • Periodic audits: Review SSO logs and transcript logs monthly to find anomalies.
  • Training: Explain why the governance exists (audit readiness, IP protection, regulatory compliance). Engineers who understand the “why” are less likely to bypass controls.

Hook-Enforced Policies

Policies written in a document are ignored. Policies enforced at the API level are followed. This is the difference between governance and theatre.

What Are Hooks?

Hooks are webhooks or middleware that intercept requests before they reach Claude’s API. They’re your policy enforcement layer. When an engineer submits a prompt to Claude Code, the request first hits your hook, which:

  1. Validates the user’s identity and permissions.
  2. Scans the prompt for secrets, customer data, or other violations.
  3. Logs the request for audit purposes.
  4. Either allows the request through or blocks it with a clear error message.
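The four steps above can be sketched as a single hook function. This is a minimal, self-contained illustration, assuming a secret regex, an in-memory allowlist, and a list-based audit log; names like `PolicyViolation` are hypothetical, and a real hook would run inside your gateway or middleware.

```python
# Minimal sketch of the four hook steps: validate, scan, log, allow/block.
import re
import time

# Narrow example patterns: AWS access key ids and PEM private keys.
SECRET_RE = re.compile(r"(AKIA[0-9A-Z]{16}|-----BEGIN [A-Z ]*PRIVATE KEY-----)")

class PolicyViolation(Exception):
    pass

def hook(user: str, allowlist: set, prompt: str, audit_log: list) -> bool:
    entry = {"user": user, "prompt_chars": len(prompt), "ts": time.time()}
    # 1. Validate the user's identity and permissions.
    if user not in allowlist:
        entry["outcome"] = "blocked: not on allowlist"
        audit_log.append(entry)
        raise PolicyViolation("user not on allowlist")
    # 2. Scan the prompt for secrets before it leaves the building.
    if SECRET_RE.search(prompt):
        entry["outcome"] = "blocked: secret detected in prompt"
        audit_log.append(entry)
        raise PolicyViolation("secret detected in prompt")
    # 3. Log the request for audit purposes.
    entry["outcome"] = "allowed"
    audit_log.append(entry)
    # 4. Allow the request through to Claude's API.
    return True
```

Note that blocked requests are logged before the exception is raised, so the audit trail captures violations as well as successes.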

Implementing Hooks

You have two main approaches:

Approach 1: API Gateway (Recommended for PE Portfolios)

Deploy an API gateway (e.g., Kong, Apigee, or AWS API Gateway) between your engineers and Claude’s API. The gateway:

  • Terminates all Claude Code requests from your organisation.
  • Validates SSO tokens and enforces access control.
  • Runs policy checks (secret scanning, prompt validation).
  • Logs all activity to your audit database.
  • Forwards approved requests to Claude’s API.
  • Returns Claude’s responses to the engineer.

This approach is transparent to engineers—they use Claude Code as normal, but all requests flow through your gateway. You control the gateway; you control the governance.

Approach 2: SDK Wrapper (For Integrated Environments)

If your teams use Claude Code via a custom SDK (e.g., in your internal developer platform), wrap the SDK with policy checks:

from padiso_claude_wrapper import ClaudeCodeClient

client = ClaudeCodeClient(
    org_id="acmecorp",
    user_id="alice@acmecorp.com",
    allowed_models=["claude-3-5-sonnet"],
    policy_engine="https://policy.acmecorp.internal"
)

# This request is automatically validated, logged, and audited
response = client.code_generation(
    prompt="Write a function to validate email addresses",
    context="internal-tools"
)

The wrapper ensures every call is validated and audited; if an engineer attempts to skip or disable a policy check, the wrapper logs the attempt.

Policy Rules to Enforce

Here are the policies that most PE portfolios enforce:

1. Secret Detection: Block any prompt containing AWS keys, database credentials, API tokens, or other secrets. Use regex patterns or a service like TruffleHog to scan prompts before they reach Claude.

2. Data Classification: Block prompts containing customer PII, financial data, or other sensitive information unless the user has explicit permission.

3. Code Review Requirement: For production-bound code, require a second engineer to review Claude’s output before it’s merged. Log the review decision.

4. Rate Limiting: Prevent abuse by limiting requests per user per hour. For example, 100 requests/hour for Tier 1 users, 500/hour for Tier 3.

5. Model Restriction: Only allow specific Claude models (e.g., Claude 3.5 Sonnet for code, Claude 3 Opus for reasoning). This prevents cost overruns and ensures you’re using tested, supported models.

6. Content Policy: Block prompts asking Claude to help with illegal activities, malware, or other harmful uses. Anthropic’s usage policies already cover this, but enforcing it in your hook prevents the request from even reaching their API.

Logging Hook Activity

Every hook invocation is an audit event. Log:

  • User identity and SSO session ID
  • Timestamp and request ID
  • Policy checks performed and their outcomes (passed, blocked, warning)
  • Reason for any blocks (e.g., “secret detected in prompt”)
  • Request and response sizes
  • Latency (how long the policy check took)

Store these logs in an immutable, tamper-proof store (e.g., S3 with object lock, or a dedicated audit database). This becomes your evidence for SOC 2 auditors.
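One way to make an audit log tamper-evident, independent of the storage backend, is to hash-chain the records: each entry carries the hash of its predecessor, so any edit or deletion breaks the chain. The sketch below illustrates the idea in pure Python; in practice you would pair it with S3 object lock as described above, and the function names here are illustrative.

```python
# Hash-chained audit log: each record commits to the previous one,
# so verify_chain() detects any tampering or deletion.
import hashlib
import json

def append_event(chain: list, event: dict) -> dict:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps(event, sort_keys=True)
    record = {
        "event": event,
        "prev_hash": prev_hash,
        "hash": hashlib.sha256((prev_hash + body).encode()).hexdigest(),
    }
    chain.append(record)
    return record

def verify_chain(chain: list) -> bool:
    prev = "0" * 64
    for rec in chain:
        body = json.dumps(rec["event"], sort_keys=True)
        if rec["prev_hash"] != prev:
            return False
        if rec["hash"] != hashlib.sha256((prev + body).encode()).hexdigest():
            return False
        prev = rec["hash"]
    return True
```

Handing an auditor a chain they can independently verify is stronger evidence than an access-controlled database alone.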

Transcript Retention and Logging

Transcripts are the heart of your audit trail. Without them, you have no proof of what Claude Code was used for, what it generated, or whether it followed policies.

What to Log

For every Claude Code interaction, log:

Request Data

  • User identity (from SSO token)
  • Timestamp and timezone
  • Request ID (unique identifier for tracing)
  • Model used (e.g., “claude-3-5-sonnet”)
  • Full prompt text
  • System prompt (if any)
  • Temperature and other model parameters
  • Tools enabled (if using Tool Use with Claude)

Response Data

  • Full response text
  • Tokens used (input and output)
  • Model’s reasoning (if available)
  • Tool calls made (name, arguments, results)
  • Stop reason (“end_turn”, “tool_use”, “max_tokens”, etc.)
  • Latency

Context Data

  • IP address and user agent
  • VPN or network context (office, home, approved IP range)
  • Data classification (was customer data involved?)
  • Approval status (was this request approved by a manager?)
  • Outcome (accepted, rejected, modified before commit)
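The request, response, and context fields above can be captured as a single record type. This is one illustrative schema, not a fixed standard; field names are assumptions, and a frozen dataclass is used here to echo the immutability requirement that follows.

```python
# One way to capture the transcript fields above as a single record.
# Field names are illustrative; frozen=True makes records immutable once created.
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class TranscriptRecord:
    # Request data
    user: str
    timestamp: str          # ISO 8601, with timezone
    request_id: str
    model: str
    prompt: str
    # Response data
    response: str
    input_tokens: int
    output_tokens: int
    stop_reason: str
    # Context data
    ip_address: str
    data_classification: str = "internal"
    outcome: str = "accepted"

rec = TranscriptRecord(
    user="alice@acmecorp.com",
    timestamp="2026-05-02T09:30:00+01:00",
    request_id="req-0001",
    model="claude-3-5-sonnet",
    prompt="Write a function to validate email addresses",
    response="def validate_email(...): ...",
    input_tokens=12,
    output_tokens=48,
    stop_reason="end_turn",
    ip_address="10.0.0.5",
)
```

`asdict(rec)` yields a plain dictionary ready for JSON serialisation into the immutable store and the search index.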

Storage and Retention

Transcripts must be:

  1. Immutable: Once written, they cannot be deleted or modified. Use append-only logs or S3 object lock.
  2. Encrypted: At rest (AES-256) and in transit (TLS 1.3).
  3. Indexed: You need to search transcripts by user, date, model, or keyword for audits and investigations.
  4. Retained: SOC 2 auditors expect 2–3 years of retention. Compliance frameworks often mandate longer.
  5. Accessible: You must be able to retrieve a specific transcript in under 5 minutes for audit purposes.

Storage Architecture:

Claude Code Request → Logging Service → Immutable Store (S3 + Object Lock)
                                                   │
                                                   ▼
                               Indexing Service (Elasticsearch / OpenSearch)
                                                   │
                                                   ▼
                                     Audit Dashboard (Read-Only)

The logging service writes to S3 with object lock enabled. This prevents accidental or malicious deletion. A separate indexing service reads from S3 and populates a searchable index so you can query transcripts by user, date, or content.

Handling Sensitive Data in Transcripts

If an engineer accidentally includes customer data or secrets in a prompt, that data is now in your transcript logs. You need to:

  1. Redact it: Automatically replace secrets and PII with placeholders (e.g., [REDACTED_SECRET]).
  2. Alert it: Notify the user and their manager that sensitive data was detected.
  3. Investigate it: Log the incident and determine if data was exposed to Claude’s API.
  4. Remediate it: Rotate the secret, notify affected customers if necessary, and document the incident.

Use tools like Nightfall or Presidio to automatically detect and redact secrets and PII in transcripts.
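The redaction step can be sketched with simple regex substitution, following the `[REDACTED_*]` placeholder convention above. This is a deliberately narrow illustration, assuming two example patterns; production redaction should use a dedicated detector such as Presidio or Nightfall, which cover far more entity types.

```python
# Regex-based redaction sketch using the placeholder convention above.
# Two narrow example patterns only; real deployments need broader coverage.
import re

REDACTIONS = [
    (re.compile(r"AKIA[0-9A-Z]{16}"), "[REDACTED_SECRET]"),        # AWS access key id
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED_EMAIL]"),  # email address
]

def redact(text: str) -> tuple[str, int]:
    """Return the redacted text and how many substitutions were made."""
    total = 0
    for pattern, placeholder in REDACTIONS:
        text, n = pattern.subn(placeholder, text)
        total += n
    return text, total
```

Keeping the substitution count lets the logging service raise the “sensitive data detected” alert and incident record described above whenever the count is non-zero.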

Building SOC 2 Evidence

SOC 2 compliance is non-negotiable for PE exits. Buyers and their counsel will demand a SOC 2 Type II report before closing. Claude Code governance is now part of that report.

The SOC 2 Framework

SOC 2 auditors assess your controls across five trust service criteria:

  1. Security: Are systems protected from unauthorised access?
  2. Availability: Are systems available when needed?
  3. Processing Integrity: Are transactions complete, accurate, and authorised?
  4. Confidentiality: Is sensitive data protected from unauthorised disclosure?
  5. Privacy: Is personal data collected, used, and retained appropriately?

Claude Code governance touches all five. Here’s how to build evidence for each:

Security Controls (Access, Authentication, Encryption)

Evidence to collect:

  • SSO configuration documentation (SAML/OIDC setup, MFA enforcement)
  • Access control policies (allowlists, role definitions, approval workflows)
  • Network architecture diagram showing API gateway and encryption in transit
  • Encryption key management policies (who has access, rotation schedule)
  • Incident logs (any unauthorised access attempts, blocked requests)

Audit questions you’ll face:

  • How do you ensure only authorised users access Claude Code?
  • How are credentials managed and rotated?
  • How do you detect and respond to unauthorised access attempts?

Your answers:

  • All access flows through Okta/Azure AD with MFA. Every session is logged.
  • API keys are rotated quarterly and stored in a secrets manager (AWS Secrets Manager, HashiCorp Vault).
  • Anomalous access patterns trigger alerts and are investigated within 24 hours.

Processing Integrity (Audit Trails, Policy Enforcement)

Evidence to collect:

  • Hook-enforced policy rules (document each policy and how it’s enforced)
  • Transcript logs showing request-response pairs
  • Policy violation logs (blocked requests, reasons, user notifications)
  • Code review logs (who reviewed Claude-generated code, when, outcome)
  • Change logs for policy rules (when were policies updated, who approved, why)

Audit questions:

  • How do you ensure Claude Code requests are authorised and complete?
  • How do you prevent unauthorised or incomplete transactions?
  • How do you detect and respond to policy violations?

Your answers:

  • Every request is validated against policies before reaching Claude’s API. Invalid requests are blocked and logged.
  • Transcripts are immutable and retained for 3 years. They’re searchable and audit-ready.
  • Policy violations trigger alerts and are reviewed by security leads within 48 hours.

Confidentiality (Data Protection, Secrets Handling)

Evidence to collect:

  • Data classification policy (what data is sensitive, who can access it)
  • Secret detection and redaction logs
  • Incident logs (any data leaks or near-misses involving Claude Code)
  • Encryption policies for data at rest and in transit
  • Vendor agreements with Anthropic (data handling, retention, deletion)

Audit questions:

  • How do you prevent sensitive data from being exposed to Claude Code?
  • How do you handle data breaches involving Claude Code?
  • What agreements do you have with Anthropic about data handling?

Your answers:

  • Prompts are scanned for secrets and PII before reaching Claude’s API. Violations are blocked and logged.
  • We have a data incident response plan that includes notifying affected parties within 72 hours.
  • We use Anthropic’s enterprise agreement, which includes data retention guarantees and deletion on request.

Privacy (Personal Data Handling)

Evidence to collect:

  • Privacy policy covering Claude Code usage
  • User consent logs (did users agree to Claude Code’s data handling?)
  • Data retention and deletion policies
  • Vendor privacy agreements with Anthropic
  • Breach notification procedures

Audit questions:

  • How do you inform users about Claude Code’s data handling?
  • How do you handle requests to delete personal data?
  • What happens to data after it’s used?

Your answers:

  • Users sign a data handling agreement before accessing Claude Code. This is logged in SSO.
  • Deletion requests are processed within 30 days. We have a documented procedure.
  • Data is retained for 3 years for audit purposes, then deleted. Anthropic does not retain data longer than necessary.

Availability (Monitoring, Incident Response)

Evidence to collect:

  • Uptime metrics for Claude Code infrastructure (API gateway, logging service)
  • Incident response logs (outages, how they were resolved, impact)
  • Backup and disaster recovery plans
  • Monitoring and alerting configuration

Audit questions:

  • How do you ensure Claude Code is available when needed?
  • How do you respond to outages?
  • How do you recover from failures?

Your answers:

  • We maintain 99.9% uptime with automated failover. Incidents are logged and reviewed within 24 hours.
  • We have a documented incident response plan with clear escalation paths.
  • Data is backed up daily and tested quarterly.

Preparing for the Audit

When a SOC 2 auditor shows up, they’ll want:

  1. Policy documentation: Print your Claude Code governance policies and have them signed off by leadership.
  2. Audit logs: Provide 3–6 months of access logs, policy violation logs, and transcript samples.
  3. Interviews: Be ready for auditors to interview engineers, security leads, and managers about Claude Code usage.
  4. Evidence of monitoring: Show dashboards and reports demonstrating ongoing monitoring and incident response.
  5. Remediation evidence: If violations were found, show how they were fixed and prevented going forward.

If you’ve been logging everything from day one, this is straightforward. If you’re scrambling to reconstruct logs after the fact, you’ll fail.

Implementation Roadmap

Rolling out Claude Code governance across a PE portfolio takes 12–16 weeks. Here’s the phased approach that works:

Phase 1: Foundation (Weeks 1–4)

Goals: Design the architecture, set up SSO, and deploy the API gateway.

Tasks:

  • Audit your existing SSO setup (Okta, Azure AD, or Google Workspace).
  • Design the three-tier architecture (identity, policy, logging).
  • Choose an API gateway (Kong, AWS API Gateway, or Apigee).
  • Set up MFA and session management in your SSO provider.
  • Configure SAML/OIDC for Claude Code access.
  • Design your logging schema and storage architecture.

Deliverables:

  • Architecture diagram signed off by security and engineering leads.
  • SSO configuration with MFA enabled.
  • API gateway deployed in a staging environment.
  • Logging infrastructure running (S3 + Elasticsearch).

Phase 2: Pilot (Weeks 5–8)

Goals: Run a controlled pilot with 2–3 teams and refine policies.

Tasks:

  • Select pilot teams (high-trust, security-conscious).
  • Document use cases and data sensitivity for each team.
  • Deploy Claude Code access through the API gateway for pilot teams.
  • Implement initial policy rules (secret detection, rate limiting).
  • Train pilot teams on responsible usage.
  • Monitor logs and adjust policies based on real-world usage.

Deliverables:

  • 3 pilot teams using Claude Code through the API gateway.
  • Policy rules enforced and logged.
  • Transcript logs retained and searchable.
  • Incident log (any policy violations, how they were handled).

Phase 3: Expansion (Weeks 9–12)

Goals: Expand to more teams based on pilot learnings.

Tasks:

  • Refine policies based on pilot incidents and feedback.
  • Expand allowlist to 10–15 teams (still selective).
  • Implement tiered access levels (Sandbox, Development, Production).
  • Set up audit dashboards for monitoring and compliance.
  • Conduct SOC 2 readiness assessment.
  • Train security and compliance teams on audit procedures.

Deliverables:

  • 10–15 teams with Claude Code access.
  • Tiered access levels implemented.
  • Audit dashboards showing real-time policy compliance.
  • SOC 2 readiness report identifying gaps.

Phase 4: Hardening (Weeks 13–16)

Goals: Close SOC 2 gaps and prepare for audits.

Tasks:

  • Implement missing SOC 2 controls (e.g., encryption key rotation, incident response procedures).
  • Conduct a mock SOC 2 audit with an external auditor.
  • Document all policies, procedures, and evidence.
  • Train portfolio company leadership on Claude Code governance.
  • Set up quarterly compliance reviews.
  • Prepare exit documentation (SOC 2 evidence, audit logs, governance playbook).

Deliverables:

  • SOC 2 Type II readiness (all controls in place).
  • Mock audit report with zero critical findings.
  • Governance playbook for portfolio companies.
  • Quarterly compliance review schedule.

Timeline for Multi-Company Rollout

If you’re rolling out to 5+ portfolio companies, stagger the rollout:

  • Month 1: Foundation and pilot at the largest/most mature portfolio company.
  • Month 2–3: Expansion at 2–3 more companies, refining policies.
  • Month 4: Rollout to remaining companies using proven playbook.

This prevents bottlenecks and lets you reuse learnings across the portfolio.

Common Pitfalls and How to Avoid Them

Pitfall 1: Logging Without Retention

You log everything for 30 days, then delete it. Six months later, an auditor asks for transcripts, and you have nothing.

Fix: Commit to 3-year retention from day one. Use immutable storage (S3 object lock) to prevent accidental deletion. Calculate storage costs upfront (typically $500–$2,000/month for a 50-person engineering team).

Pitfall 2: Allowlist Creep

You start with 5 people, then it’s 50, then it’s “everyone.” Without a formal approval process, governance collapses.

Fix: Require explicit approval for each tier promotion. Document the approval and the reason. Audit allowlist changes monthly.

Pitfall 3: Policies That Don’t Enforce

You write a policy: “engineers must not include customer data in prompts.” But there’s no technical enforcement, just training. Inevitably, someone includes customer data, and your policy fails.

Fix: Every policy must be hook-enforced. If you can’t enforce it technically, it’s not a policy—it’s a guideline. And guidelines fail.

Pitfall 4: Transcripts Without Redaction

Your transcripts contain secrets, API keys, and customer PII. This is now a liability, not evidence.

Fix: Implement automatic secret detection and redaction in your logging service. Use tools like Presidio or Nightfall. Audit redaction logs to ensure coverage.

Pitfall 5: No Incident Response Plan

An engineer includes customer data in a Claude Code prompt. It reaches Anthropic’s API. You don’t know what happened, how to notify customers, or how to prevent it next time.

Fix: Document a data incident response plan before you roll out Claude Code. Include:

  • Detection (how you find incidents)
  • Triage (severity assessment)
  • Containment (stop the bleeding)
  • Notification (who to tell, when)
  • Root cause analysis (what went wrong)
  • Remediation (how to prevent recurrence)

Pitfall 6: Auditor Surprise

You’re weeks away from exit, and the SOC 2 auditor says, “You don’t have evidence that Claude Code usage was authorised.” You scramble and miss the close.

Fix: Start building evidence from day one. Don’t wait until the audit. Conduct a mock audit at 6 months to find gaps early.

Measuring Success and Compliance

How do you know your governance is working? Track these metrics:

Adoption Metrics

  • % of allowlisted engineers using Claude Code: Target: 70%+ (not everyone will use it, and that’s okay).
  • Requests per engineer per week: Baseline: 10–20 for active users. Spike = potential abuse.
  • Time-to-value: How long from allowlist to first production use? Target: 2–4 weeks.

Quality Metrics

  • % of Claude-generated code that passes code review: Target: 80%+. Lower = quality issues.
  • % of Claude-generated code that reaches production: Target: 60%+. Lower = engineers not trusting the output.
  • Bugs introduced by Claude-generated code: Track and compare to baseline. Should be lower than human-written code.

Compliance Metrics

  • % of requests that passed policy validation: Target: 99%+. Lower = policy tuning needed.
  • % of requests with complete transcripts: Target: 100%. Missing transcripts = logging failures.
  • Time-to-redaction for secrets: Target: <1 minute. Longer = risk of exposure.
  • Incident response time: Target: <24 hours from detection to root cause analysis.
  • Audit readiness score: Track gaps found in mock audits. Should trend toward zero.

Cost Metrics

  • Cost per request: Baseline with Anthropic’s pricing. Should be predictable and within budget.
  • Cost of governance infrastructure: API gateway, logging, indexing. Should be <10% of Claude API costs.
  • Engineering time saved: Estimate hours saved by Claude Code vs. manual coding. Target: 20–30% reduction in coding time for applicable tasks.

Dashboard Setup

Create a real-time compliance dashboard showing:

  • Total requests and policy violations (by type)
  • Allowlist size and tier distribution
  • Transcript retention and storage usage
  • Incident log and resolution status
  • SOC 2 control status (green/yellow/red)

Update this dashboard weekly and review it with security and engineering leadership. This is your evidence that governance is active and effective.
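The compliance roll-up behind such a dashboard can be sketched as a small aggregation over hook audit records. The event field names here are illustrative assumptions; the percentages correspond to the “passed policy validation” and “complete transcripts” metrics above.

```python
# Sketch of the weekly compliance roll-up a dashboard might compute
# from hook audit records; field names are illustrative.
def compliance_summary(events: list[dict]) -> dict:
    total = len(events)
    passed = sum(1 for e in events if e["outcome"] == "allowed")
    with_transcript = sum(1 for e in events if e.get("transcript_id"))
    return {
        "total_requests": total,
        "pct_passed_policy": round(100 * passed / total, 1) if total else 0.0,
        "pct_with_transcript": round(100 * with_transcript / total, 1) if total else 0.0,
    }
```

A `pct_passed_policy` below 99% signals policy tuning is needed, and any `pct_with_transcript` below 100% points to a logging failure worth investigating.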

Next Steps

If you’re a PE firm or portfolio company ready to roll out Claude Code securely, here’s what to do:

1. Assess Your Current State

  • What SSO provider do you use?
  • Do you have an API gateway in place?
  • What compliance frameworks apply (SOC 2, ISO 27001, HIPAA)?
  • Who owns security and who owns engineering?

2. Design Your Architecture

  • Map the three-tier architecture to your environment.
  • Choose your API gateway, logging service, and storage backend.
  • Define initial policies (secret detection, rate limiting, data classification).
  • Plan for 3-year transcript retention.

3. Build Your Governance Playbook

  • Document SSO setup, MFA enforcement, and access control.
  • Write policy rules and enforcement procedures.
  • Create an incident response plan.
  • Design your audit evidence collection.

4. Run a Pilot

  • Select 2–3 pilot teams.
  • Deploy Claude Code through your API gateway.
  • Monitor logs and refine policies.
  • Document learnings for portfolio-wide rollout.

5. Prepare for Audit

  • Conduct a mock SOC 2 audit.
  • Close any gaps.
  • Train teams on governance.
  • Set up quarterly compliance reviews.

6. Plan Your Exit

  • Document all governance policies and evidence.
  • Prepare a SOC 2 Type II report (or readiness assessment).
  • Create a governance playbook for the buyer.
  • Ensure transcript logs are audit-ready.

If you’re managing a portfolio and need guidance on Claude Code governance, data security, or SOC 2 readiness, consider working with a partner who understands both AI and compliance. PADISO’s Platform Design & Engineering service helps PE firms and their portfolio companies architect secure, auditable AI systems. We’ve helped teams implement AI Strategy & Readiness programmes that pass SOC 2 audits and scale across multiple companies.

For deeper context on how AI adoption fits into broader portfolio modernisation, see PADISO’s 100-Day Tech Playbook for PE-Owned Companies, which covers technology due diligence, quick wins, and 3-year value creation. If you’re evaluating AI tools more broadly, Agentic AI vs Traditional Automation explains when Claude Code and agentic AI make sense vs. legacy RPA.

The PE firms that win in the next 18 months will be those that deploy AI tools—including Claude Code—at scale whilst maintaining the governance that limited partners and exit buyers demand. This guide gives you the blueprint. Execution is up to you.


Summary

Rolling out Claude Code to a PE portfolio without losing the audit trail requires three things:

  1. Identity and Access (SSO): All access flows through enterprise SSO with MFA. No exceptions.
  2. Policy Enforcement (Hooks): Policies are enforced at the API level before requests reach Claude. Violations are logged and investigated.
  3. Audit Logging (Transcripts): Every request and response is logged immutably, retained for 3 years, and indexed for search.

This architecture is not novel—it’s proven in financial trading, healthcare, and other regulated industries. The novelty is applying it to Claude Code at portfolio scale.

Start with a 4-week foundation phase (SSO, API gateway, logging), move into an 8-week pilot with 2–3 teams, then expand based on learnings. By week 16, you’ll have SOC 2-ready governance that satisfies auditors, enables speed, and protects your exit.

The cost is real (governance infrastructure, logging, audits), but it’s far cheaper than a failed audit, a valuation haircut, or a deal kill. And it’s a one-time investment that pays dividends across your entire portfolio.

Start now. Auditors aren’t getting more lenient.