You know quality when you see it.

⊨ lets you measure it.

Bring samples, a reference, or a brief — ⊨ turns your standard into a scoring model that judges any content against it, consistently and at scale.

Try it ↓

How good is this pun? Rate it, then compare your score with the AI's. The scoreboard tracks rounds, average gap, bias, and streak.

Sample puns are AI-generated.

Models

That was puns — but quality has dimensions everywhere. Each model below learns what "good" looks like from real-world samples, then scores anything against those standards.

Use it anywhere

No signup, no API key, no rate limits. Bring scoring into any workflow.

MCP + editor plugins

Works with Claude Code, Cursor, Windsurf, or anything that speaks MCP.

$ claude mcp add --transport http u22a8 \
    https://u22a8.ai/mcp

API

POST text or a URL, get JSON back. No auth required.

$ curl -s -d "https://github.com/astral-sh/uv" \
    https://u22a8.ai/p/u22a8.compelling-readme
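
For scripted use, here is a minimal Python sketch against the same endpoint, using only the standard library. The response is printed as-is rather than picked apart, since the exact JSON schema isn't shown here.

import json
import urllib.request

# POST a URL (or raw text) as the request body. No auth header needed.
endpoint = "https://u22a8.ai/p/u22a8.compelling-readme"
payload = b"https://github.com/astral-sh/uv"

req = urllib.request.Request(endpoint, data=payload, method="POST")
with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read().decode())

# Inspect the full response to see which score fields the API returns.
print(json.dumps(result, indent=2))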

How it works

No LLM in the scoring loop. Just math, meaning, and good judgment.

1. Models: curate examples of what "good" looks like. The system discovers quality dimensions automatically.
2. Traits: each trait is a learned spectrum, a direction in meaning space that separates better from worse. Not rules. Patterns.
3. Scores: content gets projected onto each trait and mapped to 0–100, with confidence bands. Deterministic, sub-second. A sketch of this projection follows below.
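
To make step 3 concrete, here is a toy sketch of projecting content onto a trait, assuming content is embedded as a vector and each trait is a learned direction in that space. The function, the calibration bounds, and the vector size are all illustrative stand-ins, not ⊨'s actual pipeline.

import numpy as np

# Toy illustration of "projecting content onto a trait."
# Names, sizes, and calibration bounds here are hypothetical.

def score_against_trait(embedding, trait_direction, low=-5.0, high=5.0):
    """Project an embedding onto a trait axis, rescale to 0-100, clamp."""
    unit = trait_direction / np.linalg.norm(trait_direction)
    position = float(embedding @ unit)       # where the content sits on the spectrum
    t = (position - low) / (high - low)      # normalize by a calibration range
    return 100.0 * min(max(t, 0.0), 1.0)     # clamp to the 0-100 scale

# Random vectors stand in for a real embedding and a learned trait direction.
rng = np.random.default_rng(0)
print(score_against_trait(rng.normal(size=384), rng.normal(size=384)))

The same inputs always produce the same score, which is where the determinism claim comes from: no sampling, no LLM call, just a dot product and a rescale.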

Scores are approximate — not a substitute for human judgment.