Methods playbook

The public rubric behind each canonical claim page.

This page makes the evidence model legible. The site should not ask readers to trust tone alone; it should show how agreement, certainty, sourcing, and updates are handled.

The two-axis rubric

Consensus should not collapse evidence quality and agreement into a single fuzzy label.

Expert agreement

How closely major reviews, institutional assessments, and qualified reviewers align on the core conclusion.

  • Strong agreement
  • Broad but qualified agreement
  • Divided interpretations
  • Frontier debate

Evidence certainty

How stable the underlying body of evidence looks once bias, inconsistency, indirectness, and imprecision are considered.

  • High
  • Moderate
  • Low
  • Very low
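The two axes above can be kept separate in data as well as in prose. A minimal sketch (names and the display format are illustrative assumptions, not the site's actual schema):

```python
from enum import Enum

class Agreement(Enum):
    STRONG = "Strong agreement"
    BROAD_QUALIFIED = "Broad but qualified agreement"
    DIVIDED = "Divided interpretations"
    FRONTIER = "Frontier debate"

class Certainty(Enum):
    HIGH = "High"
    MODERATE = "Moderate"
    LOW = "Low"
    VERY_LOW = "Very low"

def rubric_label(agreement: Agreement, certainty: Certainty) -> str:
    # Display both axes side by side rather than collapsing them
    # into one fuzzy label.
    return f"{agreement.value} / {certainty.value} certainty"

print(rubric_label(Agreement.STRONG, Certainty.MODERATE))
# Strong agreement / Moderate certainty
```

Keeping the axes as distinct types makes it impossible to publish a page with only one of the two judgments filled in.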

Minimum publishability checklist

A claim page should behave like a lightweight evidence synthesis, not a confident opinion post.

  • Frame the claim as a testable question with a clear scope.
  • Document which databases were searched and where the search cutoff date falls.
  • Prefer guidelines, consensus assessments, systematic reviews, and meta-analyses over isolated novelty.
  • Judge study quality explicitly instead of letting every citation count the same.
  • State what is settled, what is unsettled, and what would change the page.
  • Declare conflicts of interest, reviewer roles, and editorial independence on the public page.

What each page should expose

Readers should be able to inspect the method without opening a dense appendix first.

Search trail

Every canonical claim page should say which databases were searched, what the search cutoff was, and how the team narrowed the field.
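A search trail can be a small, typed record attached to each page. The field names and the sample values below are hypothetical, offered only to show the shape:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class SearchTrail:
    databases: list[str]     # where the evidence search happened
    cutoff: date             # last date the search covers
    narrowing_notes: str     # how the team narrowed the field

# Hypothetical example record for one claim page.
trail = SearchTrail(
    databases=["PubMed", "Europe PMC", "OpenAlex"],
    cutoff=date(2024, 6, 1),
    narrowing_notes="Excluded conference abstracts; kept syntheses from 2015 on.",
)
```

Because the record is structured, the public page can render it directly instead of paraphrasing it by hand each time.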

Inclusion rules

The public page should say which evidence can carry the bottom line and which material only provides context.

Appraisal tools

The review should show how the team judged the quality of trials, observational studies, and evidence syntheses.

Evidence summary objects

Each page should expose reusable question-and-outcome summaries with direction, certainty, and limitations instead of relying on a flat source count.
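One way to sketch such a summary object, assuming a simple dataclass shape (the field names are illustrative, not a published schema):

```python
from dataclasses import dataclass, field

@dataclass
class EvidenceSummary:
    question: str            # the testable question this summary answers
    outcome: str             # the specific outcome being measured
    direction: str           # e.g. "benefit", "harm", "no clear effect"
    certainty: str           # High / Moderate / Low / Very low
    limitations: list[str] = field(default_factory=list)

# Hypothetical example: one question-and-outcome pair on a claim page.
summary = EvidenceSummary(
    question="Does intervention X reduce outcome Y in adults?",
    outcome="Y incidence at 12 months",
    direction="benefit",
    certainty="Moderate",
    limitations=["imprecision", "indirect population"],
)
```

A page that exposes a list of these objects gives readers direction, certainty, and caveats per outcome, rather than a flat count of sources.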

Institutional anchors

Readers should be able to see which guideline bodies, assessments, or consensus panels define the baseline for the claim.

Corrections discipline

Readers should be able to see when a page changed, why it changed, and whether cornerstone citations were checked for retractions or corrections.

Source stack logic

Not every citation should carry equal weight in the public summary.

Tier 1 · Guidelines and consensus statements

Use these to define the shared baseline among major expert bodies. They anchor the public-facing answer.

Tier 2 · Systematic reviews and meta-analyses

Use these to summarize the body of evidence and avoid cherry-picking isolated studies.

Tier 3 · Pivotal primary studies

Use these when one trial or landmark paper materially changed the field or clarifies an active disagreement.

Tier 4 · Context and background

Use these for mechanism, history, and terminology. They should not outrank higher-tier synthesis sources.
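The tier ordering can be enforced in code so that lower tiers never outrank higher-tier synthesis. A minimal sketch, with assumed names:

```python
from enum import IntEnum

class Tier(IntEnum):
    GUIDELINE = 1   # guidelines and consensus statements
    SYNTHESIS = 2   # systematic reviews and meta-analyses
    PIVOTAL = 3     # pivotal primary studies
    CONTEXT = 4     # background, mechanism, terminology

def can_carry_bottom_line(tier: Tier) -> bool:
    # Only tiers 1-3 may anchor the public summary;
    # context sources inform but never outrank them.
    return tier <= Tier.PIVOTAL

print(can_carry_bottom_line(Tier.CONTEXT))  # False
```

Using an `IntEnum` makes the "should not outrank" rule a simple numeric comparison rather than a convention reviewers must remember.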

Update and corrections rules

Corrections and review cadence are part of the evidence model, not a footnote.

  • Trigger updates when a new landmark synthesis changes the direction or size of the effect.
  • Trigger updates when a major institution changes guidance or uncertainty language.
  • Check cornerstone citations for retractions and major corrections on a visible schedule.
  • Use living review status only on topics that actually have an update workflow behind them.
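The trigger rules above can be sketched as a small predicate over incoming events. The event kinds below are illustrative labels, not an existing API:

```python
# Hypothetical event kinds matching the update triggers above.
UPDATE_TRIGGERS = {
    "landmark_synthesis_changed_effect",   # new synthesis shifts direction or size
    "institution_changed_guidance",        # major body revises guidance or uncertainty
    "cornerstone_retraction_or_correction" # scheduled integrity check fires
}

def needs_update(event: dict) -> bool:
    """Return True when an event matches one of the update triggers."""
    return event.get("kind") in UPDATE_TRIGGERS

print(needs_update({"kind": "institution_changed_guidance"}))  # True
print(needs_update({"kind": "minor_citation_added"}))          # False
```

Wiring "living review" status to a predicate like this keeps the label honest: a page can only claim the status if an actual workflow feeds it events.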

Evidence operations stack

The backend should support reliable discovery, integrity checks, and visible provenance.

  • Ingest metadata through OpenAlex, Crossref, PubMed, and Europe PMC when appropriate.
  • Normalize core identifiers such as DOI, PMID, PMCID, ORCID, and institutional IDs.
  • Run integrity checks for retractions, corrections, and guideline updates.
  • Keep a visible public trail: source tiers, identifiers, search cutoff, review dates, and change log.
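Identifier normalization is the kind of step the stack above implies. A minimal sketch for two of the listed identifiers, assuming plain string inputs (real pipelines would validate against the registries as well):

```python
import re

def normalize_doi(raw: str) -> str:
    # Strip common DOI URL prefixes; DOIs are case-insensitive,
    # so lowercase for stable matching across sources.
    doi = raw.strip()
    doi = re.sub(r"^https?://(dx\.)?doi\.org/", "", doi, flags=re.IGNORECASE)
    return doi.lower()

def normalize_pmid(raw: str) -> str:
    # A PubMed ID is purely numeric; drop labels and punctuation.
    return re.sub(r"\D", "", raw)

print(normalize_doi("https://doi.org/10.1000/ABC123"))  # 10.1000/abc123
print(normalize_pmid("PMID: 12345678"))                 # 12345678
```

Normalizing identifiers once at ingest means that records arriving from OpenAlex, Crossref, PubMed, and Europe PMC can be deduplicated and cross-checked reliably downstream.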

Why this matters

Trust comes from visible process.

When a reader can see the rubric, the search scope, the source tiers, and the update trail, the page stops sounding like a black box.