Breaks

A break is a trained threshold on the 0–100 score axis for one trait. Each trait has three breaks — developing, solid, strong — which together partition the axis into the four tiers.

§1 Definition

The three breaks on a trait are quantiles of that trait's training distributions:

Break       Derivation                                              What it marks
developing  75th percentile of the negative training distribution   The upper edge of content the model was trained to reject. A score at or above this break is no longer clearly negative.
solid       25th percentile of the positive training distribution   The lower edge of content the model was trained to accept. A score at or above this break meets the trained standard.
strong      75th percentile of the positive training distribution   The upper quartile of content the model was trained to accept. A score at or above this break is among the strongest examples seen in training.

Table 1. The three breaks per trait and their source quantiles. Each break answers a different question about the score; only solid answers "does this meet the trained standard."
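As a minimal sketch of the derivation in Table 1 — the sample data and the linear-interpolation quantile estimator are assumptions here; the source does not specify either — the three breaks can be read straight off the two training distributions:

```python
def quantile(samples, q):
    """Linear-interpolation quantile of a sample list, 0 <= q <= 1."""
    xs = sorted(samples)
    pos = q * (len(xs) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(xs) - 1)
    return xs[lo] + (pos - lo) * (xs[hi] - xs[lo])

# Hypothetical training scores for one trait.
negative_scores = [10, 18, 25, 30, 35, 40, 44, 50]  # content trained to reject
positive_scores = [55, 60, 63, 66, 70, 74, 80, 85]  # content trained to accept

developing = quantile(negative_scores, 0.75)  # upper edge of the negative distribution
solid = quantile(positive_scores, 0.25)       # lower edge of the positive distribution
strong = quantile(positive_scores, 0.75)      # upper quartile of the positive distribution
print(developing, solid, strong)  # 41.0 62.25 75.5
```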

[Figure: a trait's score axis with break markers at developing = 42, solid = 65, strong = 80.]
Figure 1. Example breaks on one trait at 42, 65, and 80. The same three breaks on a different trait may sit at, for example, 38, 58, 74. A single score is interpreted against the breaks of the trait that produced it, not against a global scale.

§2 Mechanism

§2.1 Monotonicity

The three breaks on a trait are ordered: developing ≤ solid ≤ strong. When the underlying quantiles would invert — which can occur if the positive and negative distributions overlap — breaks are floored in sequence so the ordering is preserved:

Break       Computed value
developing  negative_p75
solid       max(positive_p25, developing)
strong      max(positive_p75, solid)

Table 2. Break derivation with the monotonicity floor. When floors apply, two adjacent breaks become equal and the tier between them has zero width.
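The sequential floor in Table 2 is a two-line computation. A sketch (the function name is hypothetical):

```python
def compute_breaks(negative_p75, positive_p25, positive_p75):
    """Derive (developing, solid, strong) with the monotonicity floor
    applied in sequence, so developing <= solid <= strong always holds."""
    developing = negative_p75
    solid = max(positive_p25, developing)
    strong = max(positive_p75, solid)
    return developing, solid, strong

# Overlapping distributions: positive_p25 sits below negative_p75, so the
# solid break is floored up to developing and the tier between them
# collapses to zero width.
print(compute_breaks(48, 42, 70))  # (48, 48, 70)

# Well-separated distributions: no floor applies.
print(compute_breaks(41, 62, 76))  # (41, 62, 76)
```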

§2.2 Per-trait

Each trait in a model has its own training distributions and therefore its own breaks. Breaks are not shared across traits and are not derived from a global scale. A score of 66 may be Solid on one trait and Developing on another because each trait's breaks were set by its own training data.
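Under the inclusive "at or above" reading from §1, mapping a score to a tier is three comparisons against that trait's own breaks. A sketch — the label for scores below the developing break is not named in this section and is an assumption here:

```python
def tier(score, breaks):
    """Map a score to its tier using one trait's (developing, solid, strong)
    breaks. Boundaries are inclusive: a score at a break clears it."""
    developing, solid, strong = breaks
    if score >= strong:
        return "Strong"
    if score >= solid:
        return "Solid"
    if score >= developing:
        return "Developing"
    return "Negative"  # name of the bottom tier is an assumption

trait_a = (42, 65, 80)  # breaks from Figure 1
trait_c = (50, 68, 82)  # hypothetical trait with higher breaks
print(tier(66, trait_a))  # Solid
print(tier(66, trait_c))  # Developing
```

The same score of 66 lands in different tiers because each lookup uses only the breaks of the trait that produced the score.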

§3 Interpretation

Each break carries a distinct meaning. Reading a score against the three breaks answers three separate questions:

  • developing — Has the score escaped the negative training distribution? Scores below this break fall within the range the model was trained to reject.
  • solid — Does the score meet the trained standard? Scores at or above this break fall within the bulk of the positive training distribution.
  • strong — Is the score among the strongest trained examples? Scores at or above this break sit in the upper quartile of the positive distribution.

The score card surfaces the next break as a marker on the trait's bar. The score-to-marker distance is reported separately as headroom.
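The next-break marker and the headroom it implies can be computed together. A sketch (the function name and the behavior above the strong break are assumptions):

```python
def next_break_and_headroom(score, breaks):
    """Return (next break above the score, distance to it).
    Returns (None, None) once the score clears the strong break,
    since no break remains above it."""
    for b in sorted(breaks):
        if score < b:
            return b, b - score
    return None, None

breaks = (42, 65, 80)  # breaks from Figure 1
print(next_break_and_headroom(66, breaks))  # (80, 14): next marker is strong
print(next_break_and_headroom(40, breaks))  # (42, 2): next marker is developing
```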

§4 Edge cases

§4.1 Collapsed breaks

When the positive and negative distributions overlap, the monotonicity floor causes one or more breaks to collapse (developing = solid, or solid = strong). The tier between collapsed breaks has zero width; no score can fall inside it. Scores near the collapse region are reported with a null tier and low confidence because the model cannot reliably discriminate in that range.
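Detecting which tier has collapsed reduces to equality checks on adjacent breaks. A sketch, assuming the tier between two breaks takes the lower break's label (consistent with the inclusive lookup in §2.2):

```python
def collapsed_tiers(breaks, eps=1e-9):
    """Return the names of zero-width tiers produced by the
    monotonicity floor. A small epsilon guards float comparisons."""
    developing, solid, strong = breaks
    out = []
    if abs(solid - developing) < eps:
        out.append("Developing")  # developing == solid: no score can be Developing
    if abs(strong - solid) < eps:
        out.append("Solid")       # solid == strong: no score can be Solid
    return out

print(collapsed_tiers((48, 48, 70)))  # ['Developing']
print(collapsed_tiers((42, 65, 80)))  # []
```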

§4.2 Degenerate distributions

If a training distribution has fewer than a handful of samples or is constant, its quantile estimates are unstable and overlap with the opposite distribution is likely. The downstream effect is the same as §4.1: collapsed breaks and low confidence in the overlap region. No separate sample-size signal is emitted; the effect is reflected in confidence.

§5 Related concepts

  • Tiers — the labels the breaks partition the score axis into.
  • Headroom — the score distance to the next break above the current score.
  • Confidence — the reliability signal that pairs with breaks, low when breaks collapse.
Scores are approximate — not a substitute for human judgment.