A model is a named, trained artifact that scores content along one or more traits. Each model encodes a specific standard for a specific kind of content; scoring the same text against different models produces different results, because the models measure different things.
A model consists of a handle, a description, and a set of traits with trained parameters. The handle identifies the model for the API and MCP; the description summarizes what the model measures; the traits define the axes on which content is scored. A model is self-contained: once trained, scoring against it requires only the model and the content.
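The shape described above can be sketched as a minimal record. This is an illustrative sketch only; the field and type names here are assumptions, not the product's actual schema.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Trait:
    """One scoring axis; names and structure are illustrative."""
    name: str
    # Trained parameters (axis fit, break values) are set during training
    # and opaque to the user; a plain dict stands in for them here.
    params: dict = field(default_factory=dict)

@dataclass(frozen=True)
class Model:
    handle: str            # identifies the model to the API and MCP
    description: str       # summarizes what the model measures
    traits: tuple[Trait, ...]  # axes on which content is scored

model = Model(
    handle="tech-writing-v2",  # hypothetical handle
    description="Scores technical writing.",
    traits=(Trait("clarity"), Trait("accuracy")),
)
```

Because the model is self-contained, this record plus the content is everything a scoring call needs.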
When content is scored against a model, each declared trait produces a score on the 0–100 scale, along with its tier label, confidence level, and headroom. A composite, computed as the harmonic mean of the per-trait scores, is returned alongside the per-trait results. The format of the combined output is described on the score card page.
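The composite can be computed directly with the standard-library harmonic mean. The trait names and scores below are illustrative, not output from a real model.

```python
from statistics import harmonic_mean

# Hypothetical per-trait scores on the 0-100 scale.
trait_scores = {"clarity": 80, "accuracy": 60}

# The composite is the harmonic mean of the per-trait scores,
# which pulls the result toward the weakest trait.
composite = harmonic_mean(trait_scores.values())
print(round(composite, 1))  # → 68.6
```

Note that the harmonic mean sits below the arithmetic mean (70 here) whenever the trait scores differ, so a single weak trait drags the composite down more than it would a simple average.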
A model's trait axes and break values are fit during training from the supervision supplied at that time. The trained parameters are stored on the model and used at scoring time. A model is not a set of rules or weights the user adjusts; changes to scoring behavior come from retraining with updated supervision, not from reconfiguring a deployed model.
Models are narrow. A model trained to score technical writing measures a different set of traits than a model trained to score landing pages, and both differ from a model trained on policy documents. A given piece of content scored against two models receives two unrelated results, because the two models are answering two different questions.
A model is addressed by its handle across every scoring surface — the REST API, the MCP server, the GitHub Action, and the web UI. The same model produces the same output for the same content regardless of which surface issued the scoring call; scoring is deterministic given a fixed model and fixed content.
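The determinism guarantee means scoring behaves as a pure function of the model handle and the content. The sketch below illustrates only that property; the hash is a stand-in for the trained model, not actual scoring behavior.

```python
import hashlib

def score(handle: str, content: str) -> int:
    """Pure function of (handle, content): same inputs, same output.

    A SHA-256 digest stands in for the trained model here, purely to
    demonstrate determinism; real scores come from trained parameters.
    """
    digest = hashlib.sha256(f"{handle}\n{content}".encode()).digest()
    return digest[0] % 101  # a deterministic value on the 0-100 scale

# Same model handle + same content → same result, whichever surface
# (REST API, MCP server, GitHub Action, web UI) issues the call.
a = score("tech-writing-v2", "Install the CLI before scoring.")
b = score("tech-writing-v2", "Install the CLI before scoring.")
print(a == b)  # → True
```

Any caching or retry logic on the client side can therefore treat (handle, content) as the full cache key for a scoring result.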