Editorial standards

How the site decides when a claim is ready to publish.

The platform is not a live opinion board. Canonical claim pages must reflect the evidence stack, the limits of that evidence, and what would actually change the current view.

Source hierarchy

What the editorial layer prefers to rely on.

  • Systematic reviews and meta-analyses come first whenever they exist.
  • Consensus statements, clinical guidelines, and major institutional reviews anchor the public summary.
  • Single studies can inform context, but they do not outrank the broader literature.
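The hierarchy above can be sketched as a simple tier ranking. This is an illustrative sketch only: the tiers come from the list above, but the numeric weights, type names, and the `prefer` helper are assumptions made for the example, not the site's real logic.

```python
# Tier 0 outranks tier 1, which outranks tier 2. The tier names mirror
# the source hierarchy above; the numbers are illustrative assumptions.
SOURCE_TIER = {
    "systematic_review": 0,    # reviews and meta-analyses come first
    "meta_analysis": 0,
    "consensus_statement": 1,  # guidelines and institutional reviews
    "clinical_guideline": 1,
    "single_study": 2,         # context only; never outranks syntheses
}

def prefer(a: str, b: str) -> str:
    """Return the source type the editorial layer should rely on."""
    return a if SOURCE_TIER[a] <= SOURCE_TIER[b] else b
```

On a tie, `prefer` keeps the first argument, which matches the intent that sources within a tier carry equal editorial weight.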

When a claim is settled enough to publish

Publishing thresholds should be explicit.

  • A claim should only be published as consensus when multiple independent reviews or major institutions converge on the same bottom line.
  • If the evidence is still split, the page should be labeled as active debate or unclear rather than forced into a false certainty.
  • When institutions disagree, publish the shared baseline as the settled core and treat the remaining gap as an open question rather than picking a side.
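The three thresholds above reduce to a small labeling rule. A minimal sketch, assuming hypothetical counts of agreeing and dissenting independent reviews; the label strings and the cutoff of two agreeing reviews are illustrative, not a formal policy.

```python
def consensus_label(reviews_agreeing: int, reviews_dissenting: int) -> str:
    """Map review convergence onto a publication label.

    Illustrative thresholds only: "multiple independent reviews" is
    modeled here as two or more with no dissent.
    """
    if reviews_agreeing >= 2 and reviews_dissenting == 0:
        return "consensus"      # independent reviews converge
    if reviews_agreeing > 0 and reviews_dissenting > 0:
        return "active debate"  # evidence is split; no false certainty
    return "unclear"            # too little synthesis to publish a verdict
```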

Update triggers

The site should update on evidence changes, not only on a calendar.

  • A new landmark review materially changes the direction, size, or mechanism of the current summary.
  • A major professional society updates its formal guidance.
  • A heavily cited paper used on the page is retracted or materially corrected.
  • An editorial review date passes and the claim is flagged as needing a refresh.
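The triggers above are event-driven checks, not a publishing calendar. The sketch below models them as flags on a claim page; every field name is an assumption for the example, not a real schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ClaimPage:
    # Hypothetical fields mirroring the four update triggers above.
    next_review_due: date
    landmark_review_changes_summary: bool = False  # direction/size/mechanism shifted
    society_guidance_updated: bool = False         # formal guidance changed
    cited_paper_retracted: bool = False            # retraction or material correction

def needs_refresh(page: ClaimPage, today: date) -> bool:
    """True when any editorial update trigger has fired."""
    return (
        page.landmark_review_changes_summary
        or page.society_guidance_updated
        or page.cited_paper_retracted
        or today >= page.next_review_due
    )
```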

Source and citation policy

What kinds of materials are allowed to carry the public summary.

  • Core public claims should be backed by peer-reviewed syntheses or official reports, not by blogs, anonymous commentary, or isolated preprints.
  • Systematic reviews, meta-analyses, and major institutional guidance should outrank individual studies unless a landmark trial is the reason the field changed.
  • When evidence is early or thin, the page should say that clearly instead of forcing a settled summary.

Trusted institution stacks

The site should anchor each cluster in the bodies that already synthesize the field.

Health and medicine

Cochrane, WHO, CDC, and NASEM are the main anchors when a claim affects care, safety, or public-health policy.

Nutrition and diet

The American Heart Association, American College of Cardiology, and Academy of Nutrition and Dietetics help define the mainstream evidence-backed baseline.

Climate and environment

IPCC assessment reports, NOAA, NASA, and environmental evidence reviews are the main reference points for the settled core.

Genetics and biotechnology

WHO, FAO, National Academies reviews, and major regulator safety assessments are the main starting points for GMO and biotechnology safety claims.

Neuroscience and psychology

The Campbell Collaboration, APA, and domain societies are more useful than pop-psychology summaries or isolated lab claims.

Historical case studies

Landmark trials, Surgeon General or equivalent public-health reports, and retrospective reviews help explain how past consensus shifts actually happened.

Handling disagreement among experts or institutions

Disagreement should be explained, not flattened into confusion.

  • If major bodies disagree, identify the shared baseline first and then explain the narrower margin of disagreement.
  • Do not present a minority expert view as though it were one side of a 50-50 split when the broader literature is not balanced.
  • Separate disputes about effect size, scope, or policy from disputes about whether the core phenomenon exists.

Reviewer and publication expectations

Trust depends on visible review standards, not just confident writing.

  • Pages should be reviewed by people with relevant domain or editorial expertise before being treated as canonical.
  • Reviewer qualifications should be visible enough to build trust without turning the page into a credential contest.
  • Claim pages should state what kind of evidence would change the current summary so the review standard stays falsifiable.

Trust signals shown on public pages

Users should not have to guess how much review a page received.

  • Original publish date and last evidence review date
  • Consensus band and confidence score
  • Source count and evidence-stack links
  • Plain-language bottom line before the deeper nuance
  • A clear statement that public sentiment is not the same thing as expert consensus
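The signals listed above amount to a fixed metadata shape that every public page should carry. A hypothetical sketch of that shape; the field names and types are illustrative assumptions, not the site's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class TrustSignals:
    # Hypothetical field names mirroring the trust signals listed above.
    published: str                  # original publish date (ISO string)
    last_evidence_review: str       # last evidence review date
    consensus_band: str             # e.g. "consensus" / "active debate" / "unclear"
    confidence_score: float         # illustrative 0.0-1.0 scale
    source_count: int
    evidence_links: list = field(default_factory=list)
    bottom_line: str = ""           # plain-language summary shown first
    sentiment_disclaimer: str = (
        "Public sentiment is not the same thing as expert consensus."
    )
```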

Editorial independence

Confidence should come from legible process, not from posture.

  • Canonical claim pages must track the evidence stack, not the loudest public argument.
  • Community threads do not vote a claim into or out of consensus.
  • Source transparency and update logic should be visible enough that users can inspect the reasoning.

Why this matters

Transparency is part of the product, not a footnote.

Public trust does not come from sounding confident. It comes from showing the evidence hierarchy, naming the uncertainty honestly, and making the review logic legible.