[release] 6 min · Apr 3, 2026

Anthropic Compliance API — Audit Claude, Not the Prompts

Anthropic's Compliance API gives enterprise admins a real-time programmatic audit feed for Claude Platform. Here's what it logs, what it doesn't, and why the gap matters.

#enterprise-security #claude #compliance #ai-governance #audit

If your developers have been using Claude without governance enabled, you have a blind spot — and Anthropic just shipped the tool that makes that blind spot visible, without filling it. The Compliance API for Claude Platform landed on March 24, 2026, giving enterprise admins a programmatic, real-time feed of audit events across their Claude workspace. It pipes into your existing SIEM. It covers admin and system activity. And it logs zero inference content — meaning the actual prompts your developers are sending to Claude are not in the feed.

TL;DR

  • What shipped: Programmatic audit log API for Claude Platform — admin and system events only
  • What’s missing: Inference activity (prompt content, model outputs) is explicitly excluded
  • Critical caveat: Logging starts at enablement only — no retroactive history
  • The governance reality: Compliance API + server-managed policy settings + OpenTelemetry = the actual stack needed; none of these alone is sufficient

What Happened

The March 24 announcement led with Claude Code coming to Team and Enterprise plans, but the feature that security teams should focus on was buried in the governance section: the Compliance API for Claude Platform. According to Anthropic, the API “gives organizations programmatic access to usage data and customer content for better observability, auditing, and governance.”

In practice, that means admin and system events: member additions and removals, API key creation and revocation, account setting changes, workspace access modifications. Security teams can pull logs over the API, filter by time window, user, or API key, and route that data into SIEMs, GRC tools, or custom dashboards — per Anthropic’s announcement and secondary reporting, monitoring tools such as Splunk, Datadog, or Grafana are the intended destinations.
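Anthropic has not publicly documented the audit-event schema, so any code has to treat field names as assumptions. A minimal Python sketch of the filter step, with hypothetical event fields (`timestamp`, `actor`, `action`) standing in for whatever the real feed returns:

```python
from datetime import datetime, timezone

# The field names below ("timestamp", "actor", "action") and the sample
# events are assumptions -- Anthropic has not published the real schema.
def filter_events(events, start, end, actor=None):
    """Select audit events in [start, end), optionally for one acting user."""
    selected = []
    for ev in events:
        ts = datetime.fromisoformat(ev["timestamp"])
        if start <= ts < end and (actor is None or ev["actor"] == actor):
            selected.append(ev)
    return selected

events = [
    {"timestamp": "2026-03-25T09:00:00+00:00",
     "actor": "admin@example.com", "action": "api_key.created"},
    {"timestamp": "2026-03-27T14:30:00+00:00",
     "actor": "ops@example.com", "action": "member.removed"},
]
window = filter_events(
    events,
    start=datetime(2026, 3, 25, tzinfo=timezone.utc),
    end=datetime(2026, 3, 26, tzinfo=timezone.utc),
)
print([ev["action"] for ev in window])  # ['api_key.created']
```

In a real pipeline, `events` would come from a query made with the elevated Compliance API key, and the filtered batch would be forwarded to your SIEM's HTTP collector rather than printed.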

The timing is pointed. In the weeks before the announcement, engineering security teams had been rattled by a string of AI tooling incidents — a remote code execution vulnerability in Claude Code via config files, questions about GitHub Copilot’s training data handling, and broader supply chain concerns around AI-assisted development workflows. “What is Claude actually doing in my org?” had stopped being a theoretical question. Enterprises were making auditability a contract prerequisite.

This is Anthropic’s answer.

The Compliance API is not self-serve. You need to work with an Anthropic account team to enable it, after which admins generate an elevated API key to query the activity feed. If you are evaluating Claude Platform for enterprise deployment, factor this into your procurement timeline — you cannot flip it on after signing and expect immediate access.

Why This Matters

The Compliance API solves a specific procurement problem: enterprise buyers need demonstrable governance controls before they will sign contracts to deploy AI tools broadly. They need to answer “who did what, when, in our Claude workspace” — and they need that answer in a format that feeds into the audit infrastructure they already have. Before this API existed, answering that question required manual admin console reviews or support tickets to Anthropic. Now it is a queryable feed.

For organizations already running Claude Enterprise, the unified parent-org filter is the most immediately useful piece. Per Anthropic, enterprises can “place Claude Platform usage under the same parent organization and filter activity across both from a single feed.” If you have both Claude Enterprise and Claude Platform workspaces, you no longer need to correlate two separate audit logs manually. That is a real operational improvement, and it signals that Anthropic is thinking about governance at the workspace-portfolio level, not just per-product.

The activation caveat deserves more attention than it typically gets in release coverage. Logging starts at the moment you enable the API — there is no retroactive reconstruction of historical events. Teams that have been running Claude broadly for months before enabling governance controls have a permanent blind spot in their audit trail. If your organization has already deployed Claude Code to a developer team without the Compliance API active, that historical activity is gone. You are starting clean from the enablement date forward. For many organizations, the right time to enable was during initial rollout, not after the fact — and that window may already have closed.
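The cost of late enablement is easy to quantify: it is simple date arithmetic. The dates below are hypothetical, not from Anthropic’s announcement; the point is only that the gap is frozen the moment you enable:

```python
from datetime import date

# Hypothetical dates for illustration -- neither comes from the announcement.
deployment_date = date(2025, 11, 1)   # developers first got Claude access
enablement_date = date(2026, 4, 15)   # Compliance API turned on

# Everything in this window is permanently unauditable via the API.
blind_spot_days = (enablement_date - deployment_date).days
print(blind_spot_days)  # 165
```

Run this against your own rollout dates before a compliance audit does it for you.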

The inference exclusion is the structural gap that security-conscious teams need to understand before deployment. The Compliance API captures admin and system events. What it does not capture is inference activity — the actual content of what your developers are typing into Claude prompts, what the model is responding with, and the semantic content of conversation turns. Anthropic has confirmed the API “does not log inference activity, meaning it doesn’t capture the content of every conversation or prompt by default.” Content retrieval is handled via a separate data export feature; the exact schema of what metadata appears in the audit feed — whether conversation identifiers are logged alongside system events, for instance — has not been publicly documented by Anthropic.

This creates a compliance posture that looks solid from the outside — you have an audit API, you have SIEM integration, you have a queryable feed — but has a meaningful gap when someone asks “what was the developer actually asking Claude to do?” That question matters in a post-incident investigation. It matters during a security audit. It matters when a developer accidentally pastes credentials into a Claude prompt. The Compliance API will tell you a conversation happened. It will not tell you what was in it.

For Claude Code deployments specifically: terminal-level activity from Claude Code sessions logs locally to ~/.claude/ on the developer’s machine — entirely outside the Compliance API’s scope. The actual governance stack for Claude Code at scale is the combination of server-managed policy settings, the Compliance API for admin-level events, and OpenTelemetry instrumentation for inference-level observability. No single piece covers the full picture.
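If you want a rough sense of how much activity lives outside the API’s reach, you can inventory those local files. This sketch carries a loud assumption: Claude Code’s on-disk layout under ~/.claude/ is not a documented interface, and the `*.jsonl` glob below may not match your installed version.

```python
import os
from pathlib import Path

def inventory_local_sessions(root: Path) -> dict:
    """Count per-directory *.jsonl files under an assumed Claude Code log root.

    The layout of ~/.claude/ is undocumented and may change; this only
    shows how much local session data sits outside the Compliance API.
    """
    if not root.exists():
        return {}
    counts: dict = {}
    for path in root.rglob("*.jsonl"):
        counts[path.parent.name] = counts.get(path.parent.name, 0) + 1
    return counts

print(inventory_local_sessions(Path(os.path.expanduser("~/.claude"))))
```

The count tells you scale only; actual inference-level observability still requires the OpenTelemetry instrumentation described above.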

The approach Anthropic has taken on inference exclusion is a deliberate position, not an oversight. Competing enterprise AI platforms handle this differently — some log inference activity by default and give organizations controls to limit retention; others take Anthropic’s position that prompt content is sensitive enough that it should not flow into an audit API without explicit additional configuration. Neither approach is obviously correct. It is a real tension between auditability and privacy — auditability demands you know what was said, privacy demands you not store it all by default. Teams adopting Claude Platform should make that tradeoff consciously, not discover it during their first compliance audit.

The integration architecture Anthropic chose — a programmatic API that feeds into existing SIEM infrastructure rather than a vendor-managed dashboard — is the right call for enterprise buyers. Security teams already have tooling. They do not want another pane of glass. A feed they can pipe into their existing monitoring stack is operationally preferable to yet another vendor dashboard requiring its own login, its own alert configuration, and its own data silo to maintain.

Compare this to the approach some SaaS vendors take of offering a “compliance portal” as a separate product: Anthropic’s feed-first design means your security team stays in the tools they already know, and you are not dependent on Anthropic’s UI roadmap for your audit workflows. That architectural decision matters more than it sounds when you are trying to respond to an incident at 2am.

If you are building the governance stack for a Claude Platform rollout, treat the Compliance API as the admin-event layer only. Pair it with Claude’s server-managed policy settings for behavioral controls and OpenTelemetry or equivalent instrumentation for inference-level visibility. The three together give you something defensible. Any one of them alone does not.

The Take

This is the feature that unblocks enterprise Claude contracts, and Anthropic knows it. The Compliance API gives procurement and security teams the checkbox they need: programmatic audit logging, SIEM integration, demonstrable admin event tracking. Those conversations were stalling deals; now they do not have to.

I’d be watching the inference gap carefully. The deliberate exclusion of prompt content from the audit feed is framed as a privacy tradeoff, and there is a reasonable argument for it. But it also happens to leave a future product surface available — inference-level observability as an additional capability, at additional cost, for organizations that need it. Whether that is cynical or pragmatic depends on whether Anthropic builds it and at what price point.

The more immediate concern for teams rolling out Claude Code at scale: do not confuse “we have the Compliance API enabled” with “we have governance covered.” You have admin-event governance covered. Inference activity is not in the feed. Terminal-level Claude Code activity is not covered. Historical data before enablement does not exist. Define what your actual audit requirements are before deployment, enable logging on day one, and build the full governance stack — not just the API piece — before your first compliance audit asks questions this API cannot answer.

If you are planning a Claude Platform rollout: enable the Compliance API before your developers touch the product, not after. Once that historical window closes, it does not reopen. And if you are evaluating secrets management tooling alongside this rollout — which you should be — the same “enable before you deploy” logic applies there too.