# Methodology

How we score and review tools.

Every score on Cybernauten is the result of hands-on testing, not spec comparison. Here's exactly what goes into it and how we stay independent.

## The Score

Every tool gets a score from 0 to 10. It's a weighted composite across six dimensions. No single factor dominates — a tool with an amazing feature set but predatory pricing will not score a 9.

| Dimension | What we look at | Weight |
|---|---|---|
| Features & Capability | Does it do what it claims? How well? How completely? | 25% |
| Pricing & Value | Is the price justified? How does it compare to alternatives? | 20% |
| UX & Developer Experience | How fast can you go from install to productive? What are the rough edges? | 20% |
| Security & Privacy | Open source? Audited? Zero-knowledge? Data residency? Known breaches? | 15% |
| Reliability & Support | Uptime history, quality of docs, responsiveness to issues. | 10% |
| Longevity & Momentum | Is this thing growing or dying? VC-backed or sustainable? Team size and roadmap. | 10% |
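For the curious, the weighting works out to a plain weighted average. This is an illustrative sketch, not our actual implementation; the dimension keys and the example per-dimension scores below are made up for the demo, but the weights match the table above.

```python
# Hypothetical sketch of the weighted composite described above.
# Weights mirror the table; per-dimension scores (0-10) are example values.

WEIGHTS = {
    "features": 0.25,
    "pricing": 0.20,
    "ux": 0.20,
    "security": 0.15,
    "reliability": 0.10,
    "longevity": 0.10,
}

def composite_score(scores: dict[str, float]) -> float:
    """Weighted average of per-dimension scores, each on a 0-10 scale."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights must sum to 100%
    return round(sum(WEIGHTS[d] * scores[d] for d in WEIGHTS), 1)

# A tool with an amazing feature set but predatory pricing:
example = composite_score({
    "features": 10, "pricing": 2, "ux": 8,
    "security": 7, "reliability": 8, "longevity": 7,
})
```

Because no dimension carries more than a quarter of the weight, even a perfect 10 on features cannot pull a tool with a 2 on pricing anywhere near a 9 overall.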

Scores are not permanent. A major pricing change, security incident, or product pivot can move a score significantly. We re-review on any material change.

## How we test

We install and use every tool we review. For developer tools, that means running real projects through them — not toy examples. For SaaS products, it means going through actual workflows, including onboarding, support, and cancellation where relevant.

We read the documentation. We check changelogs and release notes. We look at community discussions on GitHub, Reddit, and Discord to understand real-world pain points that don't appear in official docs. We cross-reference security claims against actual audit reports, not marketing pages.

For tools we use ourselves in production — which is increasingly the case for the agentic developer stack we cover — our assessment reflects sustained daily usage, not a first-impression demo.

## Pricing

Pricing is verified against official sources at the time of publication. Every tool profile shows a "last verified" date on the pricing section. Pricing changes frequently in this space — especially for AI tools — and we update when we catch it.

If you find a pricing discrepancy, let us know and we'll correct it.

## Independence

No tool pays for placement, review coverage, or a higher score. We don't do sponsored reviews. We don't do "partnerships" where a company pays to be included in a best-of list.

Affiliate links exist on some tool profiles and are clearly disclosed at the top of every page that has them. The decision to add an affiliate link is made after the score and verdict are set — never before. A tool we rate 4/10 keeps its 4/10 regardless of whether it has an affiliate program.

## What we don't cover

We don't review tools we can't test. We don't publish assessments based solely on documentation or press releases. If we don't have direct access to a tool, we'll note that explicitly or skip it.

We focus on tools relevant to developers, founders, and technical teams. Enterprise-only tools with no self-serve or trial access are generally outside our scope.