A break is a trained threshold on the 0–100 score axis for one trait. Each trait has three breaks — developing, solid, strong — which together partition the axis into the four tiers.
The three breaks on a trait are quantiles of that trait's training distributions:
| Break | Derivation | What it marks |
|---|---|---|
| developing | 75th percentile of the negative training distribution | The upper edge of content the model was trained to reject. A score at or above this break is no longer clearly negative. |
| solid | 25th percentile of the positive training distribution | The lower edge of content the model was trained to accept. A score at or above this break meets the trained standard. |
| strong | 75th percentile of the positive training distribution | The upper quartile of content the model was trained to accept. A score at or above this break is among the strongest examples seen in training. |
Table 1. The three breaks per trait and their source quantiles. Each break answers a different question about the score; only solid answers "does this meet the trained standard."
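The four-way partition can be sketched as a comparison chain. This is a minimal illustration, not the implementation; the function name `tier_for` is invented, and the label for the bottom tier (here `"negative"`, since it is the range the model was trained to reject) is an assumption the document does not name:

```python
def tier_for(score: float, developing: float, solid: float, strong: float) -> str:
    """Map a 0-100 score to one of the four tiers defined by a trait's three breaks.

    Boundary scores fall into the upper tier, matching the table's
    "at or above this break" wording. The bottom tier's name is assumed.
    """
    if score >= strong:
        return "strong"
    if score >= solid:
        return "solid"
    if score >= developing:
        return "developing"
    return "negative"  # below the developing break: within the rejected range
```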
The three breaks on a trait are ordered: developing ≤ solid ≤ strong. When the underlying quantiles would invert — which can occur if the positive and negative distributions overlap — breaks are floored in sequence so the ordering is preserved:
| Break | Computed value |
|---|---|
developing | negative_p75 |
solid | max(positive_p25, developing) |
strong | max(positive_p75, solid) |
Table 2. Break derivation with the monotonicity floor. When floors apply, two adjacent breaks become equal and the tier between them has zero width.
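Table 2 translates directly into code. A sketch, assuming linear-interpolation percentiles (NumPy's default) and a hypothetical function name `derive_breaks`:

```python
import numpy as np

def derive_breaks(negative_scores, positive_scores):
    """Compute a trait's three breaks from its training score samples,
    applying the monotonicity floor from Table 2 in sequence so that
    developing <= solid <= strong always holds."""
    developing = float(np.percentile(negative_scores, 75))
    solid = max(float(np.percentile(positive_scores, 25)), developing)
    strong = max(float(np.percentile(positive_scores, 75)), solid)
    return developing, solid, strong
```

With heavily overlapping distributions the floors fire in sequence and all three breaks can collapse to the same value, giving zero-width tiers between them.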
Each trait in a model has its own training distributions and therefore its own breaks. Breaks are not shared across traits and are not derived from a global scale. A score of 66 may be Solid on one trait and Developing on another because each trait's breaks were set by its own training data.
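The score-of-66 example can be made concrete. The break values below are invented for two hypothetical traits; only the mechanism (per-trait breaks, shared tiering rule) comes from the text:

```python
# Invented breaks for two hypothetical traits, as (developing, solid, strong).
TRAIT_BREAKS = {
    "clarity": (40.0, 60.0, 80.0),
    "structure": (55.0, 72.0, 88.0),
}

def tier(score: float, trait: str) -> str:
    """Tier a score against one trait's own breaks; tier names follow the text."""
    developing, solid, strong = TRAIT_BREAKS[trait]
    if score >= strong:
        return "Strong"
    if score >= solid:
        return "Solid"
    if score >= developing:
        return "Developing"
    return "Below developing"  # bottom tier's name is assumed
```

With these invented breaks, a score of 66 clears clarity's solid break (60) but only structure's developing break (55), so the same score tiers as Solid on one trait and Developing on the other.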
Each break carries a distinct meaning. Reading a score against the three breaks answers three separate questions:
- developing — Has the score escaped the negative training distribution? Scores below this break fall within the range the model was trained to reject.
- solid — Does the score meet the trained standard? Scores at or above this break fall within the bulk of the positive training distribution.
- strong — Is the score among the strongest trained examples? Scores at or above this break sit in the upper quartile of the positive distribution.

The score card surfaces the next break as a marker on the trait's bar. The score-to-marker distance is reported separately as headroom.
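The next-break marker and headroom reduce to finding the first break the score has not yet reached. A minimal sketch, with an invented function name and an assumed convention that a score at or above strong has no next break:

```python
def next_break_and_headroom(score, developing, solid, strong):
    """Return the first break above the score and the distance to it (headroom).

    Breaks are checked in order, so the first one the score falls below is
    the marker shown on the trait's bar. At or above strong there is no next
    break; returning (None, None) for that case is an assumption.
    """
    for value in (developing, solid, strong):
        if score < value:
            return value, value - score
    return None, None
```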
When the positive and negative distributions overlap, the monotonicity floor causes one or more breaks to collapse (developing = solid, or solid = strong). The tier between collapsed breaks has zero width; no score can fall inside it. Scores near the collapse region are reported with a null tier and low confidence because the model cannot reliably discriminate in that range.
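One way the null-tier rule could look in code. The epsilon window defining "near the collapse", the confidence labels, and the bottom-tier name are all illustrative assumptions; only the collapse behavior itself comes from the text:

```python
def classify(score, developing, solid, strong, epsilon=1e-9):
    """Tier a score, reporting a null tier with low confidence near a collapse.

    When the monotonicity floor makes two adjacent breaks equal, the tier
    between them has zero width; a score sitting at that shared break value
    cannot be reliably placed, so tier is None and confidence is "low".
    The epsilon window and the confidence labels are assumptions.
    """
    collapsed_at = {b for a, b in ((developing, solid), (solid, strong)) if a == b}
    if any(abs(score - b) <= epsilon for b in collapsed_at):
        return None, "low"
    if score >= strong:
        tier = "strong"
    elif score >= solid:
        tier = "solid"
    elif score >= developing:
        tier = "developing"
    else:
        tier = "below developing"  # bottom tier's name is assumed
    return tier, "normal"
```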
If a training distribution has fewer than a handful of samples or is constant, its quantile estimates are unstable and overlap with the opposite distribution is likely. The downstream effect is the same as §4.1: collapsed breaks and low confidence in the overlap region. No separate sample-size signal is emitted; the effect is reflected in confidence.