Cursor Review: The AI Code Editor That Took Over Silicon Valley
Cursor is the most polished AI code editor on the market, with genuinely useful multi-file editing and autonomous agents. But context window truncation (70K-120K vs advertised 200K) and pricing complexity after the June 2025 credit system change make it expensive and occasionally frustrating. Best for serious developers working 4+ hours daily who value visual feedback and can justify $20-200/month.
Cursor is an AI-first code editor built on VS Code by Anysphere, the fastest-growing SaaS company in history. Launched in March 2023, it reached $1 billion in annual recurring revenue in less than 24 months and secured a $29.3 billion valuation by November 2025. With approximately 2 million users—including over half of Fortune 500 companies—Cursor has become the most commercially successful AI code editor to date.
Unlike GitHub Copilot (which adds AI to your existing editor) or Claude Code (which operates as a CLI agent), Cursor rebuilt VS Code from the ground up with AI as a first-class citizen. It offers multi-file generation via Composer, autonomous agents with subagents, and access to cutting-edge models from OpenAI, Anthropic, Google, and xAI—all switchable per conversation.
But the hype isn’t the full story. After using both Cursor and Claude Code daily on production codebases for this review, Cursor delivers genuine productivity gains—but with significant caveats around pricing complexity, context window truncation, and frequent breaking changes.
What Is Cursor?
Cursor is an AI-native IDE—a complete standalone fork of VS Code, not an extension. It operates on Windows, macOS, and Linux, and supports all VS Code extensions, themes, and keybindings. You can import your entire VS Code configuration with one click.
The key difference from standard VS Code: every aspect of the editor is built with AI capabilities in mind. Project-level indexing, semantic search, multi-file awareness, and autonomous agents are baked into the architecture, not bolted on afterward.
Cursor includes its own proprietary Composer model alongside third-party options (GPT-5, Claude Opus 4.6, Gemini 3 Pro, Grok Code). You can switch models mid-conversation—a unique advantage. Common pattern: GPT-5 for creative problem-solving, Claude for precise refactoring, Gemini for data processing.
Key Features
Tab Autocomplete
Cursor’s Tab feature uses a specialized completion model optimized for speed and precision. Latency averages 100-200ms even on 50,000+ line files—fast enough to feel instantaneous.
Unlike GitHub Copilot (which excels at common patterns and boilerplate), Cursor’s Tab model is trained specifically for the full-context codebase understanding that Cursor provides through its indexing system. In practice: Tab completions feel more accurate because they’re aware of your project structure, not just the current file.
Cmd+K (Inline Editing)
Select code, press Cmd+K, describe changes—Cursor modifies the selected block inline. Or use Cmd+K without selection to generate new code at your cursor position.
This is the fastest way to iterate on small changes: “Add error handling here,” “Refactor this to use async/await,” “Generate a TypeScript interface for this object.”
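To make the second prompt concrete, here is the kind of transformation Cmd+K performs—a promise chain rewritten to async/await. The `fetchJson` helper and its return shape are invented for the example, not part of any real API:

```typescript
// Hypothetical helper standing in for a real API client, so the
// example runs on its own; a real app would call fetch() here.
async function fetchJson(url: string): Promise<{ id: number }> {
  return { id: 42 };
}

// Before: the promise-chain block you would select before pressing Cmd+K.
function loadUserId(url: string): Promise<number> {
  return fetchJson(url).then((user) => user.id);
}

// After: the inline rewrite Cursor typically produces for
// "Refactor this to use async/await".
async function loadUserIdAsync(url: string): Promise<number> {
  const user = await fetchJson(url);
  return user.id;
}
```

Both versions behave identically; the value of the rewrite is readability, especially once error handling gets layered in.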
Chat Sidebar
Standard AI chat interface for quick questions, debugging, and codebase exploration. You can @-mention files, folders, or @Codebase for project-wide context.
In practice: Chat is where you explore APIs, ask “how does X work in this codebase?”, or debug issues without leaving the editor. It’s not revolutionary—every AI code editor has this now—but Cursor’s version is fast and well-integrated.
Composer 1.5: Multi-File Generation
Composer is Cursor’s killer feature. It’s a full panel (not sidebar) for creating or editing multiple files at once. Unlike Chat (which suggests changes), Composer modifies code autonomously without asking for approval.
Version 1.5 (launched February 17, 2026) adjusts thinking time based on problem difficulty and provides 3x more agent operations than Composer 1 under standard limits.
Real-world experience: Composer genuinely works for multi-file tasks like “Create a new API endpoint with tests” or “Refactor this component into smaller modules.” You review diffs afterward, but the agent does the heavy lifting. It’s not perfect—sometimes it misses edge cases or invents non-existent APIs—but it gets you 70% of the way to a working implementation.
The key limitation: context window truncation (more on this below) means Composer struggles with very large codebases.
Agent Mode & Subagents
Agent Mode lets you delegate entire features to Cursor: “Build a user authentication flow,” “Write integration tests for the payments module,” “Refactor the database layer to use Prisma.”
Subagents (added in v2.4) are specialized agents for discrete tasks that run in parallel with their own context:
- Terminal Subagent: Runs commands
- Docs Subagent: Scans documentation
- Test Subagent: Runs and writes tests
- Refactor Subagent: Code changes
You can configure custom subagents with specific prompts, tool access, and model selection. Subagents can spawn nested subagents for complex workflows.
What this looks like in practice: You ask Composer to build a feature. Behind the scenes, it spawns a Docs subagent to check your framework’s API, a Terminal subagent to run tests, and a Refactor subagent to clean up generated code—all in parallel. The main conversation stays focused while subagents handle grunt work.
The catch: Subagents multiply token consumption. One source mentions Claude Code’s agent teams use ~7x more tokens than standard sessions. Cursor’s subagent token multiplier isn’t publicly documented, but expect similar overhead. This burns through your credit pool fast.
Cloud Agents with Computer Use
Added February 24, 2026. Cloud Agents run on their own virtual computers to execute tasks autonomously, processing screen recordings and exploring multiple files independently of your local machine.
This is bleeding-edge agentic functionality. We haven’t tested it extensively yet—it launched days before this review—but early reports show it’s powerful for tasks requiring OS-level interaction (e.g., debugging UI issues by observing the running app).
Cursor CLI with Agent Modes
Launched January 16, 2026. Terminal interface with:
- Plan Mode: Designs approach before coding
- Ask Mode: Q&A in terminal
- Cloud Handoff: Start a task in terminal, hand off to IDE
One developer noted the CLI “feels like how Claude Code used to feel: quick startup, responsive.” This is significant because Claude Code’s startup time has degraded to 5+ seconds in early 2026.
Rules System (.cursorrules)
Custom instructions for Cursor AI. Think of it as a system prompt for the LLM.
Cursor reads rules files first when AI works—it’s the first piece of context loaded. You can define coding standards, framework preferences, and behavioral constraints.
Three types of rules (2026):
- Project Rules: Stored in .cursor/rules/ as .mdc files (Markdown with YAML frontmatter), version-controlled and scoped to the codebase
- User Rules: Global preferences in Cursor Settings → Rules, apply across all projects
- Legacy .cursorrules: Backward compatibility (will eventually be removed; migration to Project Rules recommended)
Best practice: Keep rules under 500 lines, focused and composable. Community collection available at github.com/PatrickJS/awesome-cursorrules with examples for React, TypeScript, Python, Rust, etc.
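A minimal Project Rules file might look like the sketch below. The frontmatter field names (description, globs, alwaysApply) reflect the .mdc format at the time of writing—verify against the current Cursor docs before relying on them, and treat the rule content as illustrative:

```markdown
---
description: React component conventions
globs: ["src/components/**/*.tsx"]
alwaysApply: false
---

- Use function components with typed props; no class components.
- Style with Tailwind utility classes, never inline styles.
- Co-locate tests as ComponentName.test.tsx beside each component.
```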
Memories Feature
Automatically generated rules based on your conversations in Chat, scoped to your project and maintained across sessions.
Key advantage: Persistent context awareness. If you tell Cursor once “use Tailwind classes, not inline styles,” it remembers for future sessions. Claude Code’s memory retention was cited as a weakness (improved 3x in v2.1 but still inferior to Cursor’s Memories).
Gotcha: Context switching issues. If you switch from Project A to Project B without explicitly changing the memory project, the AI may still operate within Project A's context, saving files in the wrong place. Start multi-step prompts with an explicit command to switch to the desired memory project.
Multi-Model Selection
Switch between GPT-5, Claude Opus 4.6, Gemini 3 Pro, Grok Code, and Cursor’s proprietary Composer model—per conversation, not globally.
Supported models (2026):
OpenAI: GPT-5, GPT-5.3, GPT-5.1-Codex-Max, GPT-4.1 (up to 1M tokens), GPT-4.1 Mini
Anthropic: Claude Opus 4.6, Claude Sonnet 4.5, Claude Opus 4.1, Claude Sonnet 3.7, Claude Sonnet 3.5
Google: Gemini 3 Pro, Gemini 2.5 Pro, Gemini 2.5 Flash
Other: Grok Code (xAI), Cursor Composer, DeepSeek models
Why this matters: Different models excel at different tasks. GPT-5 for creative problem-solving. Claude for precise refactoring. Gemini for data processing. The ability to switch mid-project is a genuine competitive advantage.
How We Tested
We’ve used Cursor daily for three weeks alongside Claude Code on production codebases (TypeScript/React frontend, Node.js backend, Python data pipelines). Projects ranged from 5,000 to 80,000 lines of code.
What we tested:
- Multi-file refactoring with Composer (migrating API calls to a new client library)
- Autonomous feature development with Agent Mode (building a webhook handler from scratch)
- Context management on large codebases (adding @Codebase context and measuring response quality)
- Pricing impact (tracking credit pool consumption on Pro plan)
- Performance under load (50,000+ line TypeScript monorepo)
- Comparison with Claude Code and GitHub Copilot on identical tasks
Key finding: Cursor’s visual feedback and inline editing genuinely feel faster for deliberate, focused work. Claude Code excels at autonomous multi-file tasks where you don’t need to see every character typed. Most power users—including our team—use both.
Pricing & Plans Breakdown
Cursor shifted from request-based billing to a token-based credit system in June 2025. This change reduced Pro plan capacity from ~500 fast requests to ~225 premium requests (55% reduction) and caused community backlash due to poor communication.
Individual Plans
Hobby (Free): Limited Agent requests, limited Tab completions. Useful for trying the platform; not viable for daily work.
Pro ($20/month): Extended Agent limits, unlimited Tab, Cloud Agents, maximum context windows, $20 credit pool. Credit pool covers ~225 Sonnet 4 requests, ~550 Gemini requests, or ~650 GPT-4.1 requests based on median token usage. This is the recommended starting tier for serious developers.
Pro+ ($60/month): Everything in Pro, 3x usage on all models (~675 premium requests). Recommended if you hit credit limits regularly.
Ultra ($200/month): Everything in Pro, 20x usage (~4,500 premium requests), priority access to new features. For heavy users or small teams pooling usage.
Business Plans
Teams ($40/user/month): Everything in Pro + shared chats/commands/rules, centralized billing, usage analytics, org-wide privacy controls, SAML/OIDC SSO, role-based access control.
Enterprise (Custom Pricing): Everything in Teams + pooled usage, invoice/PO billing, SCIM seat management, AI code tracking API, audit logs, granular admin and model controls, priority support and account management.
The Hidden Costs
Overages: Once you exhaust your monthly credit pool, you pay cost + 20% margin for additional tokens. Heavy users report $10-20 daily overages. Average annual cost per user: ~$500.
Token consumption is opaque: The exact cost per operation depends on model, context size, and computation required. Third-party models reportedly cost more via Cursor than purchasing direct from the provider (e.g., Anthropic API).
Comparative cost: Reports conflict. Multiple sources say Cursor is 4x more expensive than Claude Code for comparable tasks, and one user's test showed Claude Code costing ~$8 for 90 minutes of work; another source claimed the opposite, that Cursor was 4x cheaper. The difference likely comes down to usage patterns and model selection.
Practical advice: Start with Pro ($20/month). Monitor usage in Settings → Account → Usage. If you hit 80% credit pool by mid-month, upgrade to Pro+ ($60/month). If you’re consistently burning $60+ in credits, evaluate whether Claude Code (which charges per token transparently) or GitHub Copilot ($10/month flat) might be more cost-effective.
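A rough model of that math—illustrative only, since actual credit accounting depends on model, context size, and computation. The 20% overage margin comes from Cursor's stated policy; the daily spend figure and the simplification that pool credits map 1:1 to provider cost are assumptions:

```typescript
// Sketch of monthly cost under the credit system: plan fee, included
// credit pool, and overage tokens billed at cost + 20% margin.
function estimateMonthlyCost(
  planFee: number,         // e.g. 20 for Pro
  creditPool: number,      // included credits in dollars, e.g. 20
  dailyTokenSpend: number, // assumed raw provider-token cost per day
  days = 30
): number {
  const rawSpend = dailyTokenSpend * days;
  const overage = Math.max(0, rawSpend - creditPool);
  return planFee + overage * 1.2; // 20% margin applied to overage only
}

// A heavy user burning $15/day in tokens on Pro:
// 20 + (450 - 20) * 1.2 = 536 dollars/month
```

Even at the low end of reported heavy usage, overages dominate the plan fee—which is why the Pro+ and Ultra tiers exist.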
Context Window Truncation: The Critical Issue
Cursor advertises 200K token context windows (with some models supporting up to 1M tokens like GPT-4.1). In practice, users consistently report 70K-120K usable context after internal truncation.
Multiple forum threads document this long-standing problem:
- Files are condensed to fit within constraints
- Context window overflow (e.g., 326.6k/200k) causes API response truncation
- Truncated API responses prevent proper context management/clearing, creating a circular failure
How it works: Cursor shows an outline of attached files, then pulls in parts of the file (or entire file if needed) via tool calls. The system limits what gets shown per user message so the model has room to pull in context without filling up the full context window too quickly.
Comparison: Claude Code delivers the full 200K token context reliably. Independent testing shows Claude Code uses 5.5x fewer tokens than Cursor for identical tasks.
Real-world impact: For small-to-medium projects (< 50,000 lines), this isn’t a dealbreaker. For large codebases, Cursor’s context truncation limits its effectiveness. You’ll add @Codebase context, but the agent misses important dependencies or produces inconsistent code across large projects.
Verdict: This is Cursor’s biggest technical limitation. If you’re working on a 100,000+ line codebase, Claude Code’s reliable 200K context makes it a better choice for multi-file tasks.
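A quick way to sanity-check whether your working set fits: the common ~4 characters-per-token heuristic. Both the heuristic and the 70K conservative bound are approximations based on the user reports above, not Cursor internals:

```typescript
// Estimate tokens from character count (~4 chars/token is a rough
// rule of thumb for English text and code; real tokenizers vary).
function estimateTokens(totalChars: number): number {
  return Math.ceil(totalChars / 4);
}

// Check against the conservative end of the reported 70K-120K usable range.
function fitsUsableWindow(totalChars: number, usableTokens = 70_000): boolean {
  return estimateTokens(totalChars) <= usableTokens;
}

// ~5,000 lines at ~60 chars/line is ~300,000 chars ≈ 75,000 tokens,
// already past the conservative 70K bound.
```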
Performance & Limitations
Speed
Tab completions: 100-200ms latency—feels instantaneous.
Startup time: 1.0-1.5 seconds (vs VS Code’s 0.8-1.2s). Negligible difference.
AI response times: GPT-4 Turbo shows 40% reduction in latency vs earlier versions. cursor-small model shows 5x speed improvement for lightweight tasks (with 15% accuracy decrease).
Performance Issues
Larger codebases: Editor sometimes lags or freezes. 20-60 second delays for simple code generation on paid plans. GPU spikes to 90% during code application. Typing, clicking, and quitting can become painfully slow.
Memory usage: Idle: 200-280MB (vs VS Code’s 150-200MB). Large projects: 500-800MB (vs VS Code’s 400-600MB). RAM overhead: ~500MB-1GB more than standard VS Code due to embedding models.
Indexing: Medium-sized repos (20k-50k lines): 5-10 minutes for initial embedding. Runs in background without blocking the editor. Minimal performance impact once indexing completes.
Project-aware capabilities depend on local indexing. Older machines or very large repos may experience longer indexing times.
AI Accuracy
Cursor gets you about 70% of the way to a working app. The last 30% typically requires professional development for production quality.
Common issues:
- Misunderstands context
- Invents non-existent APIs
- Produces subtle bugs that look correct
- Misses edge cases (e.g., forgetting that 0! = 1 in a factorial function)
The faster the agent works, the more important your review process becomes. Read diffs carefully. Don’t merge blindly.
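The factorial example is worth spelling out, because the buggy version is exactly the kind of code that looks right in a diff:

```typescript
// Plausible agent output: correct for n >= 1, silently wrong at n = 0.
function factorialBuggy(n: number): number {
  let result = n;                          // starts at 0 when n === 0
  for (let i = n - 1; i > 1; i--) result *= i;
  return result;                           // factorialBuggy(0) → 0, not 1
}

// After review: handles the 0! = 1 edge case explicitly.
function factorial(n: number): number {
  if (n < 0) throw new RangeError("n must be non-negative");
  let result = 1;
  for (let i = 2; i <= n; i++) result *= i;
  return result;                           // factorial(0) → 1
}
```

A spot-check at n = 5 passes for both versions, which is why this class of bug survives casual review.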
Known Bugs & Security Issues
Security vulnerability (CVE-2025-59944): Case-sensitivity flaw allowed attackers to bypass file protections. Untrusted content could modify configuration files. In some cases, led to remote code execution. Fixed in version 1.7 by normalizing file paths and case-insensitive comparison.
Recent bugs (2026): Chat management issues, focus and navigation problems, model switching glitches, copy/paste bugs, code integrity issues, Agent mode sometimes deletes and recreates files instead of editing.
Service incidents (February 2026): Bugbot and Cloud Agents degradation due to heavy traffic causing rate limits with GitHub integration. Authentication system degradation (Feb 24, 2026).
Developer complaints: “Obsessed with shipping new features while critical bugs introduced by your own updates are completely ignored” (user feedback). Rapid UI changes causing friction. Recent release focused entirely on bug fixes and stability (acknowledgment of issues).
No Roadmap
“The Cursor team does not really have a roadmap because the world is changing faster and faster, there’s new models dropping every day.”
Implications for teams:
- Difficult to plan long-term adoption
- Frequent breaking changes and UI shifts
- Uncertainty about feature stability
If you need a stable platform with predictable updates, Cursor isn’t it. This is a fast-moving product optimized for early adopters willing to tolerate breaking changes.
Learning Curve & Developer Experience
Initial Adoption
One of the smoothest transitions from traditional code editors. Familiar VS Code interface eliminates typical learning curve. Complete VS Code configuration imports with zero conflicts (extensions, keybindings, themes). Installation takes under 3 minutes with zero configuration. Project onboarding is automatic.
Time to Productivity
Basic features: Immediately productive with Tab completions and Cmd+K inline editing.
Advanced features: ~2 weeks to get comfortable with Composer, Agent modes, subagents, Rules, and Memories (for senior developers). Beginners face a steep learning curve with AI features and unique interface.
Documentation
Official docs: cursor.com/docs (clear, well-organized)
Community: forum.cursor.com (active for bug reports, discussions, feedback)
@Docs integration: Access official documentation for popular frameworks directly in the editor
Community resources: Multiple guides and tutorials, including github.com/PatrickJS/awesome-cursorrules and github.com/digitalchild/cursor-best-practices
Productivity Gains
Reported improvements: ~30% coding speed increase for routine tasks (developer testimonial after one month). 30-40% acceleration in development cycles (recommendation for 4+ hours daily coders). Cursor was noticeably faster than Copilot in benchmarks: 62.95 seconds vs 89.91 seconds average task completion.
Real-world use cases: Fast-moving development environments (MVP scaffolding, billing APIs), test-driven development (agent writes code until verification succeeds), iterative improvements (paste problematic code from logs, run tests, update until all pass).
Strategic recommendation: Most power users employ a multi-tool strategy—Cursor for main IDE work with visual feedback, Claude Code for autonomous multi-file operations, Copilot for speed and boilerplate.
Best Practices from Power Users
Context Management
- Close all editor tabs, open only the ones you need, and add them with / and “Reference Open Editors”
- Tag relevant files with @ and use web links
.cursorindexignore - Use
.cursorignoreto exclude files from indexing
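Both files use .gitignore-style patterns. A typical split looks like this—the specific paths are placeholders, and the exact semantics of each file are worth confirming against the current Cursor docs:

```
# .cursorignore — excluded from indexing and AI context entirely
node_modules/
dist/
*.min.js

# .cursorindexignore — skipped by background indexing, but still
# attachable manually with @ when you need it
docs/large-spec.md
```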
Configuration
- Add a .cursorrules file to the project root (upfront work that pays back)
- Write focused, composable .mdc rules under 500 lines
- Write instructions.md before starting AI-based work
- Always ask Cursor to confirm its understanding first
Workflow
- If debugging exceeds ~20 messages, context is polluted—start NEW chat with summary
- For tricky issues, tell Cursor to “add logs to the code” and feed log results back
- Use Notepads for common tasks (e.g., “AddingNewRoute”) and reference them with @notepad
- In Agent mode, always commit before sessions (Cursor sometimes deletes and recreates files)
Planning Before Coding
The most impactful change: plan before coding. Study from University of Chicago found experienced developers are more likely to plan before generating code, and agent success rate improves significantly with specific instructions.
Review carefully: AI-generated code can look right while being subtly wrong. Read diffs carefully. The faster the agent works, the more important your review process becomes.
Who Should Use Cursor?
Best For
Serious developers working 4+ hours daily on complex projects. If you’re shipping production code regularly, Cursor’s multi-file capabilities and agent modes genuinely accelerate development.
Teams needing centralized AI usage management. Teams plan ($40/user/month) includes role-based access control, usage analytics, and org-wide privacy controls. Engineering managers get control over AI usage.
Developers who value visual feedback. If you want to see every character as AI generates code, Cursor’s IDE-first approach beats terminal-first tools like Claude Code.
Users comfortable with VS Code ecosystem. Full extension compatibility means zero friction for VS Code users.
Budgets allowing $20-200/month individual or $40/month per team member. Cursor isn’t cheap, but if your time is worth $50+/hour, the productivity gains justify the cost.
Not Ideal For
Large codebases exceeding context limits. Context truncation (70K-120K usable vs advertised 200K) limits effectiveness on 100,000+ line projects. Claude Code’s reliable 200K context is better here.
Cost-conscious solo developers. $240-2400/year is steep. GitHub Copilot at $10/month offers better value for casual coders (5 hours/week).
Teams needing long-term stability guarantees. No roadmap, frequent UI changes, breaking updates. If you need predictable releases, look elsewhere.
Beginners overwhelmed by AI features. 2-week learning curve for advanced features. If you’re new to coding, start with simpler tools.
Cursor vs Alternatives
vs Claude Code
Philosophical difference: Cursor is IDE-first (visual feedback, inline control). Claude Code is agent-first (autonomous, multi-file operations, CLI-first, editor-agnostic).
Performance: Cursor CLI “feels like how Claude Code used to feel: quick startup, responsive.” Claude Code has become “genuinely slow lately” with 5+ second startup times (early 2026).
Context: Claude Code delivers full 200K token context reliably. Cursor advertises 200K but users report 70K-120K usable. Independent testing shows Claude Code uses 5.5x fewer tokens than Cursor for identical tasks.
Code quality: No significant difference in output quality; quality mostly determined by how clearly you plan the task.
When to use each: Cursor for main IDE work and visual, hands-on coding. Claude Code for large-scale refactoring, automated testing across files, complex project setup, multi-file operations requiring entire codebase understanding.
vs GitHub Copilot
Price: Copilot: $10/month (significantly cheaper). Cursor Pro: $20/month. Annual savings: $120/year with Copilot.
Features: Cursor has multi-file edits as diffs, agent modes (Composer, subagents), complete codebase indexing, checkpoints with rollbacks, multi-model selection. Copilot has native GitHub.com advantages (PR summary, review assistance, commit descriptions), Code Scanning Autofix, security workflows integration, broader IDE support (VS Code, Visual Studio, JetBrains, Neovim).
Speed: Cursor: 62.95 seconds average task completion (faster). Copilot: 89.91 seconds average.
Accuracy: Copilot: 56.5% resolution rate (283/500). Cursor: 51.7% resolution rate (258/500). Copilot research: developers complete tasks 55% faster, 78% completion rate vs 70% without.
When to use each: Cursor for serious developers working on complex projects, large codebases requiring project-aware features, building new products from scratch. Copilot for accelerating day-to-day coding, autocomplete and boilerplate, learning new languages, fast/affordable MVP development, tight GitHub integration needs.
vs Windsurf
Key difference: Windsurf handles context automatically—analyzes codebase and chooses right file. Cursor requires manual context addition or tagging codebase. Windsurf generally better for large codebases due to automatic context indexing through Cascade technology.
Performance: Windsurf’s proprietary SWE-1.5 model: 950 tokens/second (13x faster than Sonnet 4.5, 6x faster than Haiku 4.5).
Context: Cursor: effective context ~120K tokens (advertised 200K, truncated). Windsurf: effective context ~100K tokens.
Pricing: Windsurf: $15/month individual, $30/user/month teams. Cursor: $20/month individual, $40/user/month teams. Windsurf is cheaper at every tier.
Code quality: Cursor tends to produce higher quality results. For production-ready code with working backend, payments integration, and authentication, more fine-grained control in Cursor results in higher quality code.
When to use each: Cursor for production-ready code requiring high quality, developers who prefer hands-on coding with strong AI assistance, teams valuing fine-grained control. Windsurf for large monorepos or enterprise codebases, beginners (don’t need to think about context much), full task delegation preferred.
The Verdict
Cursor is the most polished, well-funded, and commercially successful AI code editor on the market. Its multi-file editing with Composer genuinely works. Its autonomous agents save hours on repetitive tasks. Its full VS Code compatibility means zero friction for existing VS Code users.
But it’s not perfect. Context window truncation (70K-120K usable vs advertised 200K) limits effectiveness on large codebases. Pricing shifted to a token-based credit system in June 2025 with poor communication, reducing Pro plan capacity by 55% and causing confusion. Heavy users burn through credit pools fast, racking up $10-20 daily overages. No public roadmap means frequent UI changes and breaking updates frustrate teams needing stability.
If you’re a serious developer working 4+ hours daily on complex projects, Cursor is worth $20-60/month. The productivity gains justify the cost, and Composer’s multi-file capabilities genuinely accelerate feature development.
If you’re cost-conscious, working on massive codebases, or need long-term stability, look elsewhere. GitHub Copilot ($10/month) offers better value for casual coders. Claude Code ($30-200/month) delivers reliable 200K context for large refactoring tasks. Windsurf ($15/month) provides automatic context handling at a lower price.
Strategic recommendation: Most power users—including our team—use multiple tools. Cursor for main IDE work with visual feedback. Claude Code for autonomous multi-file operations. Copilot for speed and boilerplate. No single tool replaces engineering judgment.
Cursor has earned its $29.3 billion valuation by solving real problems for real developers. But it’s a tool, not a revolution. Use it where it excels. Know its limits. Keep evaluating alternatives as the landscape evolves.
FAQ
Is Cursor worth $20/month compared to GitHub Copilot at $10/month?
If you code 4+ hours daily on complex projects, yes. Cursor’s multi-file editing, agent modes, and codebase-aware autocomplete justify the extra $10/month. If you code casually (5 hours/week) or just need inline autocomplete, Copilot is better value.
Does Cursor really have a 200K context window?
Advertised: yes. Reality: users report 70K-120K usable context after internal truncation. Multiple forum threads document this long-standing issue. Claude Code delivers the full 200K reliably.
Can I use Cursor with my existing VS Code extensions?
Yes. Cursor is a full fork of VS Code and supports all extensions, themes, and keybindings. Import your VS Code config with one click. Note: Microsoft blocked some extensions from Cursor’s marketplace in April 2025, requiring manual .vsix installation for certain Microsoft-specific extensions.
How much does Cursor actually cost for heavy users?
Pro plan ($20/month) includes $20 credit pool (~225 premium requests). Heavy users report $10-20 daily overages, bringing real monthly cost to $320-620. Average annual cost per paying user: ~$500. Pro+ ($60/month) provides 3x usage. Ultra ($200/month) provides 20x usage.
Is Cursor better than Claude Code?
Different tools for different tasks. Cursor: IDE-first, visual feedback, better for focused work. Claude Code: agent-first, CLI, better for autonomous multi-file operations and large-scale refactoring. Most power users use both.
## Pricing
Hobby (Free):
- Limited Agent requests
- Limited Tab completions
- Basic access to platform

Pro ($20/month):
- Extended Agent limits
- Unlimited Tab completions
- Cloud Agents
- Maximum context windows
- $20 credit pool (~225 premium requests)

Pro+ ($60/month):
- Everything in Pro
- 3x usage on all models
- ~675 premium model requests

Ultra ($200/month):
- Everything in Pro
- 20x usage on all models
- Priority access to new features
- ~4,500 premium requests

Teams ($40/user/month):
- Everything in Pro
- Shared chats/commands/rules
- Centralized billing
- Usage analytics
- SAML/OIDC SSO

Enterprise (custom pricing):
- Everything in Teams
- Pooled usage
- Invoice billing
- SCIM seat management
- AI code tracking API
- Granular admin controls
Last verified: 2026-03-02.
## The Good and the Not-So-Good
+ Strengths
- Multi-file editing with Composer actually works—modifies code autonomously without approval prompts
- Supports multiple frontier models (GPT-5, Claude Opus 4.6, Gemini 3 Pro, Grok Code) with per-conversation switching
- Full VS Code compatibility—import all extensions, themes, keybindings with one click
- Fastest growing SaaS in history ($1B+ ARR in <24 months)—proven product-market fit
- Visual feedback as AI generates code—better for seeing changes in real-time than terminal-first tools
- Subagents can parallelize specialized tasks (Terminal, Docs, Test, Refactor)
- Tab completions in the 100-200ms range—feel instantaneous even on large files
− Weaknesses
- Context window truncation: advertises 200K tokens but users report 70K-120K usable after internal truncation
- Pricing shifted from request-based to token-based in June 2025 with poor communication—Pro plan reduced from ~500 to ~225 premium requests (55% reduction)
- Heavy users report $10-20 daily overages; 4x more expensive than Claude Code for comparable tasks
- No public roadmap—frequent UI changes and breaking updates frustrate teams needing stability
- Performance issues: 20-60 second delays for simple code generation, GPU spikes to 90%, slow UI on larger codebases
- Security vulnerability (CVE-2025-59944) allowed remote code execution via case-sensitivity flaw (fixed in v1.7)
- Microsoft blocked some VS Code extensions from Cursor's marketplace in April 2025, requiring manual .vsix installation
## Who It's For
Best for: Serious developers working 4+ hours daily on complex projects, teams needing centralized AI usage management, developers who value visual feedback and VS Code ecosystem, users comfortable with $20-200/month individual plans or $40/month per team member
Not ideal for: Cost-conscious solo developers ($240-2400/year is steep), large codebases exceeding context limits, teams needing long-term stability guarantees (no roadmap), beginners overwhelmed by AI features (2-week learning curve for advanced features)