# Verbosity modes
sift’s full `VectorizeOutput` for a 10-result SERP is ~2.4K tokens. About 52% of that is augmentation: the per-result `quality_vector` and `signals[]` are the bulk. For most downstream agents the secondary fields aren’t behaviorally load-bearing, so sift exposes three verbosity modes.
## The three modes

| Mode | Per-result shape | Aggregate | Typical use | Token delta |
|---|---|---|---|---|
| `concise` (default) | `url`, `title`, `description`, `quality.{tier, reason, confidence}`, `recommended_action`, `safety_flag` | `tier_distribution`, `vendor_dominance_ratio`, `diversity_entropy` | Agent consumption; the ablation-validated sweet spot | ~33% smaller |
| `full` | Every `quality_vector` field + `signals[]` | Complete | Audit, debugging, learning-loop extracts | Baseline |
| `summary` | `url`, `title`, `description`, `safety_flag` only | Complete | SERP landscape dashboards, diversity monitoring | ~45% smaller |
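Rendered as types, the table’s three per-result shapes look roughly like the sketch below. Field names come from the table and the example at the bottom of this page; the value types on the `full`-mode fields are assumptions, not sift’s published schema.

```ts
// Sketch of the three per-result shapes. Field names are from the table;
// the value types on FullResult are assumptions, not sift's schema.

interface SummaryResult {
  url: string;
  title: string;
  description: string;
  safety_flag: string | null;
}

interface ConciseResult extends SummaryResult {
  quality: {
    tier: string;        // e.g. "peer_reviewed"
    reason: string;      // e.g. "academic journal"
    confidence: number;  // e.g. 0.95
  };
  recommended_action: string; // e.g. "keep"
}

interface FullResult extends ConciseResult {
  editorial_standards: number;      // type assumed (aggregate exposes a mean)
  commercial_intent: number;        // type assumed
  self_promoting: boolean;          // type assumed
  third_party: boolean;             // type assumed
  domain_content_mismatch: boolean; // type assumed
  authoritative_weight: number;     // type assumed (aggregate exposes a mean)
  signals: string[];                // element type assumed
}
```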
## How the default was chosen

A Phase 5 ablation (see the sift-token-economy design note) ran 3 query types × 3 payload shapes × 3 agent models (Haiku 4.5 / Llama 3.3 70B / Sonnet 4.6) = 27 runs. On each run the agent was asked to summarize the SERP in under 300 characters.
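For concreteness, the grid reads as a nested loop. Everything below except the three payload shapes, the three model names, and the 300-character prompt is hypothetical: `vectorize`, `runAgent`, and the query list are stand-ins, not sift or model-provider APIs.

```ts
// Hypothetical harness for the Phase 5 grid (3 × 3 × 3 = 27 runs).
// vectorize and runAgent are stand-ins, not real APIs.
declare function vectorize(query: string, opts: { verbosity: string }): Promise<unknown>;
declare function runAgent(model: string, payload: unknown, prompt: string): Promise<string>;

const shapes = ["concise", "full", "summary"];
const models = ["haiku-4.5", "llama-3.3-70b", "sonnet-4.6"]; // identifiers assumed
const queries = ["q1", "q2", "q3"]; // one per query type; the real queries aren't listed here

async function runGrid(): Promise<string[]> {
  const summaries: string[] = [];
  for (const query of queries) {
    for (const shape of shapes) {
      for (const model of models) {
        const payload = await vectorize(query, { verbosity: shape });
        summaries.push(
          await runAgent(model, payload, "Summarize the SERP in under 300 characters.")
        );
      }
    }
  }
  return summaries; // 27 entries, compared across shapes per (query, model) pair
}
```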
Finding: `concise` and `full` produced indistinguishable downstream behavior. `summary` lost the agent’s ability to weight individual sources but preserved SERP-level awareness (dashboards still work). So the default is `concise`: enough to weight source quality correctly, at a third less input cost.
## Verbosity is input-only

Observations always record the full payload regardless of what the caller requested. The learning-loop substrate needs every dimension for post-hoc analysis; it would be a mistake to lose `signals[]` and `authoritative_weight` just because the agent at call time didn’t need them.
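The invariant, sketched. `classifySerp`, `recordObservation`, and `project` are illustrative names for sift’s internals, assumed here only to show where the full payload is persisted relative to where the response is shaped.

```ts
// Illustrative sketch: the stored observation is always the full payload;
// verbosity only shapes the response. All three function names are assumed.
type FullOutput = { results: unknown[]; aggregate_vector: unknown };

declare function classifySerp(query: string): Promise<FullOutput>;
declare function recordObservation(payload: FullOutput): Promise<void>;
declare function project(payload: FullOutput, verbosity: string): unknown;

async function handleVectorize(query: string, verbosity = "concise"): Promise<unknown> {
  const full = await classifySerp(query); // compute every dimension, every time
  await recordObservation(full);          // learning loop keeps signals[], authoritative_weight, ...
  return project(full, verbosity);        // caller only pays for what it asked for
}
```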
## Picking a mode

- Start with `concise`. It’s the default and covers ~95% of agent use cases.
- Switch to `full` when auditing a specific classification, running a validation suite, or needing `signals[]` to trace reasoning.
- Use `summary` for dashboards or when you only need the SERP-level picture (e.g., a side panel showing how biased the current query’s results are).
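At the call site, each choice looks roughly like this. `vectorize` and its options object are stand-ins for whatever client surface you use to reach sift; the query is a placeholder.

```ts
// Hypothetical call sites; vectorize and its options object are stand-ins.
declare function vectorize(query: string, opts?: { verbosity?: string }): Promise<unknown>;

const agentView = await vectorize("example query");                           // concise by default
const auditView = await vectorize("example query", { verbosity: "full" });    // trace signals[]
const dashView  = await vectorize("example query", { verbosity: "summary" }); // SERP-level only
```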
## Example — same query, three modes

`concise` response (trimmed):

```json
{
  "verbosity": "concise",
  "results": [
    {
      "url": "https://tandfonline.com/doi/full/...",
      "title": "...",
      "description": "...",
      "quality": { "tier": "peer_reviewed", "reason": "academic journal", "confidence": 0.95 },
      "recommended_action": "keep",
      "safety_flag": null
    }
  ],
  "aggregate_vector": {
    "tier_distribution": { "peer_reviewed": 10, ... },
    "vendor_dominance_ratio": 0,
    "diversity_entropy": 0
  },
  "summary_hints": ["Primary / peer-reviewed sources are present (10 of 10)..."]
}
```

`full` adds per-result `editorial_standards`, `commercial_intent`, `self_promoting`, `third_party`, `domain_content_mismatch`, `authoritative_weight`, and `signals[]`, plus `mean_editorial_standards` and `mean_authoritative_weight` on the aggregate.

`summary` strips all per-result quality fields, keeping only `url`/`title`/`description`/`safety_flag` plus the full aggregate and hints.
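Those differences amount to two field projections. A sketch using the illustrative types from earlier on this page; sift’s real projection logic is internal.

```ts
// Sketch of the projections the example implies: concise keeps the quality
// triple and recommended_action; summary keeps only the four base fields.
// SummaryResult / ConciseResult / FullResult are the illustrative types above.

function toConcise(r: FullResult): ConciseResult {
  const { url, title, description, safety_flag, quality, recommended_action } = r;
  return { url, title, description, safety_flag, quality, recommended_action };
}

function toSummary(r: FullResult): SummaryResult {
  const { url, title, description, safety_flag } = r;
  return { url, title, description, safety_flag };
}
```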