Expert agreement
How closely major reviews, institutional assessments, and qualified reviewers align on the core conclusion.
- Strong agreement
- Broad but qualified agreement
- Divided interpretations
- Frontier debate
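The four levels above can be made machine-checkable with a small enum. This is a sketch, not a published schema; the class and constant names are assumptions.

```python
from enum import Enum

class Agreement(Enum):
    """Illustrative labels for the four levels above; not a fixed schema."""
    STRONG = "strong agreement"
    QUALIFIED = "broad but qualified agreement"
    DIVIDED = "divided interpretations"
    FRONTIER = "frontier debate"

# Most-to-least aligned ordering, useful for sorting claim pages.
ALIGNMENT_ORDER = [Agreement.STRONG, Agreement.QUALIFIED,
                   Agreement.DIVIDED, Agreement.FRONTIER]
```

An explicit ordering keeps "strong" and "frontier" comparable in code instead of relying on string sorting.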
Methods playbook
This page makes the evidence model legible. The site should not ask readers to trust tone alone; it should show how agreement, certainty, sourcing, and updates are handled.
Consensus should not collapse evidence quality and agreement into a single fuzzy label.
- Agreement: how closely major reviews, institutional assessments, and qualified reviewers align on the core conclusion.
- Certainty: how stable the underlying body of evidence looks once bias, inconsistency, indirectness, and imprecision are considered.
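Keeping agreement and certainty as separate fields, rather than one fuzzy label, can be sketched as a two-field record. Field names and value vocabularies here are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ConsensusRating:
    # Two separate axes instead of one collapsed label; names illustrative.
    agreement: str  # "strong" / "qualified" / "divided" / "frontier"
    certainty: str  # "high" / "moderate" / "low" / "very low"

# The split lets a page say "experts agree, but the evidence is thin":
rating = ConsensusRating(agreement="strong", certainty="low")
```

A single merged label could not express that last combination, which is exactly the case the text warns about.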
A claim page should behave like a lightweight evidence synthesis, not a confident opinion post.
Readers should be able to inspect the method without opening a dense appendix first.
Every canonical claim page should say which databases were searched, what the search cutoff was, and how the team narrowed the field.
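The search-scope disclosure above maps naturally onto a small record. This is a hypothetical shape, assuming the team tracks databases, a cutoff date, and screening criteria; none of these field names come from the site itself.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class SearchScope:
    # Hypothetical field names, not the site's actual schema.
    databases: list[str]            # e.g. ["PubMed", "Cochrane Library"]
    search_cutoff: date             # last date the search covers
    inclusion_criteria: list[str]   # how the team narrowed the field
    exclusion_criteria: list[str] = field(default_factory=list)

scope = SearchScope(
    databases=["PubMed", "Cochrane Library"],
    search_cutoff=date(2024, 6, 30),
    inclusion_criteria=["randomized trials", "systematic reviews"],
)
```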
The public page should say which evidence can carry the bottom line and which material only provides context.
The review should show how the team judged the quality of trials, observational studies, and evidence syntheses.
Each page should expose reusable question-and-outcome summaries with direction, certainty, and limitations instead of relying on a flat source count.
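One such question-and-outcome summary could look like the record below; it carries direction, certainty, and limitations rather than a flat count. Field names and example values are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class OutcomeSummary:
    # One question-and-outcome row; field names are illustrative.
    question: str
    direction: str          # "benefit" / "harm" / "no effect" / "unclear"
    certainty: str          # GRADE-style: high / moderate / low / very low
    limitations: list[str]  # e.g. ["small samples", "indirect population"]

row = OutcomeSummary(
    question="Does the intervention reduce hospitalizations?",
    direction="benefit",
    certainty="moderate",
    limitations=["few trials", "short follow-up"],
)
```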
Readers should be able to see which guideline bodies, assessments, or consensus panels define the baseline for the claim.
Readers should be able to see when a page changed, why it changed, and whether cornerstone citations were checked for retractions or corrections.
Not every citation should carry equal weight in the public summary.
- Guidelines and consensus statements: use these to define the shared baseline among major expert bodies. They anchor the public-facing answer.
- Systematic reviews and meta-analyses: use these to summarize the body of evidence and avoid cherry-picking isolated studies.
- Landmark primary studies: use these when one trial or landmark paper materially changed the field or clarifies an active disagreement.
- Background sources: use these for mechanism, history, and terminology. They should not outrank higher-tier synthesis sources.
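The tier structure above can be made explicit in data, so unequal weighting is enforced rather than remembered. The tier names and the cutoff rule below are assumptions drawn from the descriptions, not the site's actual model.

```python
from enum import IntEnum

class SourceTier(IntEnum):
    # Illustrative tier names; lower number = higher weight.
    GUIDELINE = 1    # expert-body guidelines and consensus statements
    SYNTHESIS = 2    # systematic reviews and meta-analyses
    LANDMARK = 3     # pivotal individual trials or papers
    BACKGROUND = 4   # mechanism, history, terminology

def can_carry_bottom_line(tier: SourceTier) -> bool:
    """Only the top two tiers anchor the public-facing answer."""
    return tier <= SourceTier.SYNTHESIS
```

With `IntEnum`, tiers compare numerically, so the weighting rule is one comparison instead of a lookup table.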
Corrections and review cadence are part of the evidence model, not a footnote.
The backend should support reliable discovery, integrity checks, and visible provenance.
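A visible update trail with retraction checks could be modeled as a list of changelog rows rendered newest first. This is a sketch under assumed field names; the backend's real model is not specified here.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RevisionEntry:
    # One visible changelog row; a sketch, not the backend's real model.
    changed_on: date
    reason: str                 # why the page changed
    retractions_checked: bool   # cornerstone citations re-verified

def update_trail(entries: list[RevisionEntry]) -> list[str]:
    """Reader-facing lines, newest first."""
    newest_first = sorted(entries, key=lambda e: e.changed_on, reverse=True)
    return [f"{e.changed_on.isoformat()}: {e.reason}" for e in newest_first]

entries = [
    RevisionEntry(date(2023, 1, 5), "initial review", True),
    RevisionEntry(date(2024, 3, 2), "new meta-analysis added", True),
]
```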
Why this matters
When a reader can see the rubric, the search scope, the source tiers, and the update trail, the page stops sounding like a black box and starts reading like a method that can be audited.