[launch] 5 min · Apr 30, 2026

OpenAI on AWS Bedrock — Lock-In Didn't Go Away, It Moved

OpenAI ships Codex, GPT-5.5, and Managed Agents to AWS Bedrock in limited preview. The multicloud story sounds great until you read the procurement fine print.

#openai #aws #codex #cloud-lock-in #bedrock #enterprise-ai

On April 28, OpenAI and AWS launched three offerings on Amazon Bedrock, all in limited preview: the latest OpenAI models (GPT-5.5 and GPT-5.4); Codex via the CLI, desktop app, and VS Code extension; and Amazon Bedrock Managed Agents powered by OpenAI. This is the first concrete product to emerge from OpenAI’s amended deal with Microsoft — the one where Microsoft gave up exclusivity and, in return, stopped paying OpenAI a revenue share. The headline reads “multicloud.” The fine print reads “AWS commitment credits.”

TL;DR

  • What: OpenAI models, Codex, and Managed Agents all land on AWS Bedrock in limited preview as of April 28
  • Why it happened: Microsoft’s license to OpenAI IP is now non-exclusive through 2032, freeing OpenAI to distribute through any cloud
  • The catch: Usage counts toward existing AWS cloud commitments, auth is IAM, inference runs through Bedrock — this is AWS infrastructure with OpenAI on top
  • Action: If you’re evaluating Codex for your team, the real question isn’t which model — it’s which hyperscaler you’re committing to for the next five years

OpenAI on Bedrock — What Happened

The launch has three components, and it matters that they shipped together. First, OpenAI’s frontier models — both GPT-5.5 and GPT-5.4 — are available through Bedrock’s model catalog. Customers authenticate with AWS credentials and run inference through Bedrock. All customer data is processed by Amazon Bedrock, not OpenAI’s APIs directly.
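
In practice, calling these models should look like any other Bedrock invocation. Here is a minimal sketch using boto3’s Converse API, assuming a hypothetical model ID, since the preview identifiers haven’t been published:

```python
# Minimal sketch of invoking an OpenAI model through Bedrock's runtime API.
# The model ID is hypothetical; check the Bedrock model catalog for real IDs.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="openai.gpt-5.5-v1:0",  # hypothetical placeholder
    messages=[
        {"role": "user", "content": [{"text": "Summarize this design doc."}]}
    ],
    inferenceConfig={"maxTokens": 512, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])
```

Note what’s absent: no OpenAI API key, no OpenAI endpoint. Credentials, region, and quota are all AWS concepts.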

Second, Codex — the autonomous coding agent that now has more than four million weekly users — is accessible through Bedrock via the same interfaces developers already use: the Codex CLI, the desktop app, and the VS Code extension. The difference is that underneath, everything routes through AWS infrastructure.

Third, and most architecturally significant: Amazon Bedrock Managed Agents powered by OpenAI. This combines OpenAI’s agent harness with AWS’s AgentCore compute environment. Every agent gets its own identity, persistent memory, and per-action logging, and runs inside your AWS environment with all inference on Bedrock. If that description sounds familiar, it should — it mirrors the structure of Anthropic’s Managed Agents, which launched with similar lock-in trade-offs.
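
Neither company has published the Managed Agents API yet, so the following is purely an illustrative sketch of the shape that description implies. Every name in it is hypothetical:

```python
# Illustrative only: the shape implied by "per-agent identity, persistent
# memory, per-action logging." Nothing here is a published AWS or OpenAI API.
import logging
import uuid
from dataclasses import dataclass, field

logging.basicConfig(level=logging.INFO)

@dataclass
class ManagedAgent:  # hypothetical type, not a real SDK class
    name: str
    identity: str = field(default_factory=lambda: str(uuid.uuid4()))  # per-agent identity
    memory: dict = field(default_factory=dict)  # stands in for persistent memory

    def act(self, action: str, detail: str) -> None:
        # Per-action logging: every step is attributable to one agent identity.
        logging.info("agent=%s identity=%s action=%s", self.name, self.identity, action)
        self.memory[action] = detail  # state that survives across actions

agent = ManagedAgent(name="invoice-triage")
agent.act("classify", "invoice #4821 routed to accounts payable")
```

The point of the sketch is the coupling: identity, memory, and logs all live inside the managed runtime, which is exactly where the switching cost accrues.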

All three components are in limited preview. There is no public GA date. If you are planning production workloads around this, you are planning around access you may not have for weeks or months.

Why This Matters

The backstory here is entirely about money. On April 27 — one day before the AWS launch — OpenAI and Microsoft jointly announced the next phase of their partnership. Microsoft’s license to OpenAI IP continues through 2032, but it is no longer exclusive. In exchange, Microsoft no longer pays a revenue share to OpenAI. That amended deal is what made the AWS launch legally possible.

But the financial entanglement doesn’t stop at Microsoft. To claim the full $35 billion in new financing, OpenAI committed to consuming two gigawatts of Amazon’s Trainium accelerators. This launch is as much about OpenAI fulfilling a capital deal as it is about developer distribution. When a company commits to two gigawatts of proprietary silicon from a specific cloud provider, the word “multicloud” starts to feel aspirational rather than descriptive.

For enterprise teams, the practical implication is straightforward: usage of both OpenAI models and Codex can be applied toward existing AWS cloud commitments. That’s the real hook. Companies with multi-million-dollar AWS enterprise agreements can now run Codex without routing through Azure or OpenAI’s direct API. No new vendor relationship needed. No separate billing. No additional security review for a non-AWS data processor.

This solves a genuine procurement problem. If you’re in a regulated industry with data sovereignty requirements or a FedRAMP posture, direct OpenAI API usage was often a non-starter. Having all inference processed by Amazon Bedrock — with AWS credentials, AWS logging, AWS compliance controls — removes that specific blocker. For the subset of enterprises that were blocked by data residency concerns rather than model quality concerns, this is a real unlock.

But here’s where the “model choice” narrative falls apart. You’re not getting a neutral pass-through to OpenAI. You’re getting AWS infrastructure with OpenAI packaged on top. Your authentication is IAM. Your billing counts toward AWS commits. Your agent compute runs in AgentCore. Your compliance posture is AWS’s compliance posture. Switch to a different cloud provider and you’re unwinding all of that — which is exactly what lock-in means.
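
“Your authentication is IAM” is a concrete claim: access to these models is governed like any other AWS resource. A sketch of what scoping that access might look like; the bedrock:InvokeModel actions are real IAM actions, while the model ARN below is a hypothetical placeholder:

```python
# Sketch: scoping Bedrock model access with a standard IAM policy.
# The bedrock:InvokeModel actions are real; the model ARN is hypothetical.
import json

import boto3

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream",
            ],
            # Hypothetical ARN; real preview model IDs aren't published.
            "Resource": "arn:aws:bedrock:us-east-1::foundation-model/openai.gpt-5.5-v1:0",
        }
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="codex-bedrock-invoke-only",
    PolicyDocument=json.dumps(policy_document),
)
```

Convenient if you live in AWS. But every policy like this is one more artifact you rewrite if you ever leave.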

If your organization already has significant AWS commitments and was previously blocked from using OpenAI by data residency or security review requirements, this launch removes a concrete obstacle. That’s the narrow but real win here.

The Managed Agents piece deserves particular scrutiny. Per-agent identity, per-action logging, and persistent memory running inside your AWS environment — these are features that create deep integration surfaces. Every agent you build against this architecture creates switching cost. It’s the same pattern we flagged with Anthropic’s managed agent runtime: the convenience of a managed environment is paid for with portability you didn’t know you were giving up.

Compare this to running models via a neutral proxy like LiteLLM or routing through your own inference layer. You lose the managed agent features and the AWS commitment credits. You gain the ability to swap models and providers without re-architecting. The question is which trade-off matches your organization’s actual constraints — not which sounds better in a press release.
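
To make that trade-off concrete, here’s what the proxy approach looks like with LiteLLM, where switching providers is a one-string change. The model IDs are illustrative placeholders, not confirmed preview identifiers:

```python
# LiteLLM routes the same call to different providers via the model string,
# so swapping clouds is a config change, not a re-architecture.
# Model IDs are illustrative placeholders.
from litellm import completion

messages = [{"role": "user", "content": "Refactor this function for readability."}]

# Through Bedrock, using your AWS credentials:
bedrock_reply = completion(model="bedrock/openai.gpt-5.5-v1:0", messages=messages)

# Through OpenAI's API directly, using OPENAI_API_KEY:
openai_reply = completion(model="openai/gpt-5.5", messages=messages)

print(bedrock_reply.choices[0].message.content)
```

The cost is visible in the sketch too: you handle credentials, retries, and observability yourself, and none of the spend counts toward an AWS commitment.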

The competitive dynamic is worth examining too. AWS now hosts frontier models from OpenAI and Anthropic side by side on Bedrock. On paper, this is great for buyers — compare Claude and GPT-5.5 in the same environment. In practice, it’s great for AWS. Every model you evaluate deepens your Bedrock integration. The models compete with each other. AWS wins regardless of which model you pick.

The Take

The headline sells this as “model choice.” I’m not buying it. When Codex usage counts toward your AWS cloud commitments and all inference runs through Bedrock, you haven’t escaped a walled garden — you’ve walked into a different one. The lock-in vector didn’t disappear with the Microsoft exclusivity amendment. It moved from Azure to AWS, and in some ways it got stronger: two gigawatts of Trainium commitment means OpenAI is now structurally incentivized to make the AWS path the best-supported path.

If you’re evaluating Codex for your team, the governance question isn’t “OpenAI vs. Anthropic.” It’s “which hyperscaler do I want Codex bolted to for the next five years?” For teams already deep in AWS with existing commitments, this launch is genuinely useful — it removes a procurement barrier that was real and annoying. For everyone else, it’s a reminder that multicloud is a strategy only the cloud providers benefit from. You don’t get flexibility. You get two sets of integration surfaces to maintain.

The GPT-5.5 launch on Bedrock is the shiny object. The Managed Agents architecture is where the real lock-in lives. Pay attention to the boring part.