Table of Contents
- Quick refresher: where the “19 errors” even came from
- Part Two’s real subject isn’t 19; it’s trust
- What counts as an “error” (and what’s just a disagreement)
- Why youth gender medicine becomes an “error-magnet” topic
- How to read an “errors list” without losing your mind
- What editors and writers can learn from Part Two (even if they never touch this topic again)
- Conclusion: fewer dunk contests, more durable knowledge
- Experience-Based Lessons: what “error storms” teach content teams
- A) The first 30 minutes are about psychology, not publishing
- B) Small errors become big because they’re symbolic
- C) “It’s technically true” is a trap phrase
- D) Corrections work best when they’re boring
- E) You can’t “tone” your way out of a sourcing problem
- F) The audience you’re really serving is the quiet reader
Somewhere on the internet, a counter is always ticking. Not a countdown to launch day or a timer on your air fryer.
I mean the other kind: the “I found X mistakes” scoreboard that turns every complicated topic into an accuracy
cage match. And if you’ve spent any time near the debate over youth gender medicine, you already know the vibe:
the stakes are high, the emotions are higher, and the margin for sloppy writing is basically zero.
That’s why “About those ‘19 Errors,’ Part Two” matters, even if you never read a single footnote in the whole saga.
It’s not just a rebuttal in a niche online dispute. It’s a case study in what happens when science communication,
journalism norms, and culture-war gravity all collide… and then someone starts counting.
Quick refresher: where the “19 errors” even came from
The short version (because we all have lives): a prominent science-communication site published commentary on
Abigail Shrier’s book Irreversible Damage. The site later retracted a favorable review and ran a follow-up series
critiquing the book’s claims about transgender youth and medical care. A journalist, Jesse Singal, published a lengthy
critique arguing that the follow-up posts contained roughly “19 errors,” including serious claims such as inaccurate
characterizations of sources and even “made-up” quotations attributed to the book.
Part One responded to the criticisms aimed at one author; Part Two responded to the remainder, especially
the criticisms aimed at A.J. Eckert’s post. The headline problem wasn’t just the number 19. It was the implication:
“If they missed this many things, why trust them on anything?”
And that’s the real plot twist: the fight is ostensibly about mistakes, but the prize is credibility.
Part Two’s real subject isn’t 19; it’s trust
“Errors” lists can be useful. They can also be rhetorical napalm. When you label a bundle of disputes as a numbered
set of “errors,” you’re not merely reporting inaccuracies; you’re making an argument about competence, motive, and
editorial responsibility. The number itself is sticky. Nineteen sounds like a lot (because it is), and the human brain
hears “a lot” and translates it into “unreliable.”
Part Two pushes back on that framing. Its core message is: “Most of these aren’t actually errors, the real errors were
small and corrected, and the substantive research critique still stands.” Whether you agree with that claim or not,
the move is recognizable: shift the conversation from a punch-list of alleged gotchas to the underlying evidentiary
dispute.
But here’s the thing: the moment “made-up quotes” enters the chat, it stops being a normal internet argument.
It becomes a trust emergency. Because quotation marks aren’t decoration. They’re a promise.
What counts as an “error” (and what’s just a disagreement)
If you want to keep your sanity while reading any “19 errors”-style post, Part Two included, you need a triage system.
Not everything that’s wrong is equally wrong, and not everything labeled “wrong” is actually wrong.
1) Hard errors: the stuff that should trigger a correction notice
These are the big ones: misquotations, incorrect numbers, attributing claims to a paper that the paper doesn’t make,
or stating that something appears in a source when it doesn’t. If a reader can check a claim in a primary source and
quickly see the claim is inaccurate, that’s a hard error.
Hard errors are fixable, but only if you fix them like you mean it: prompt, clear corrections; transparency about what
changed; and a willingness to say “Yep, that was on us.”
2) Soft errors: technically defensible, practically misleading
Soft errors live in the gray zone. Think: quoting a statistic without the caveats that limit its meaning; summarizing
a contentious area of research as if it’s settled; using an umbrella term that collapses important differences; or
citing a study correctly but implying a certainty the study doesn’t support.
Soft errors are common in high-conflict topics because writers feel pressure to simplify. But simplification is not
neutral. It always chooses a shape, and some shapes distort.
3) Framing disputes: not “wrong,” but not innocent either
This is where many “error” lists get spicy. One side says a statement is “false.” The other says it’s a
good-faith interpretation. Sometimes both are half-right.
Example: If two legitimate medical organizations emphasize different risks and benefits, it can be tempting to say
“the science says X.” But a more accurate (and less tweetable) version is: “Here’s what the best-known guidelines
recommend, where evidence is strong, where it’s weaker, and what’s still being debated.”
Framing disputes aren’t trivial. They can change policy conversations, clinical expectations, and the way families
understand options. But labeling every framing dispute an “error” can also be a strategy: it turns interpretation into
an indictment.
Why youth gender medicine becomes an “error-magnet” topic
Some subjects almost beg writers to trip. Youth gender medicine is one of them, for three reasons:
it’s clinically complex, politically weaponized, and scientifically uneven across outcomes.
Guidelines exist, but evidence quality varies across questions
Major professional organizations have published clinical guidance and policy statements supporting gender-affirming
approaches to care. At the same time, systematic reviews and methodological critiques continue to debate the strength
of evidence for specific outcomes (physical effects, mental health trajectories, long-term satisfaction, and the
frequency and meaning of detransition in various populations).
That mix (strong clinical commitments plus ongoing evidence debates) creates the perfect conditions for error inflation.
A writer can cite a guideline correctly and still oversell certainty. Or a critic can highlight uncertainty and imply
there’s no basis for care at all. Both moves can be misleading.
Definitions are not universally stable (and everyone pretends they are)
People argue past each other using the same words with different meanings: “gender-affirming care,” “puberty blockers,”
“social transition,” “watchful waiting,” “rapid-onset gender dysphoria,” “desistance,” “regret,” and “evidence.”
If you don’t define terms up front, you’re basically hosting a debate in which the participants are speaking
different dialects and insisting it’s all “plain English.”
The incentives reward certainty, not accuracy
The internet rewards confident takes. Clinical reality rewards humility. The result is a writing environment where
careful nuance feels like “weakness,” and caveats get edited out because they “kill the pacing.”
(So does being wrong, but that’s apparently a tomorrow problem.)
How to read an “errors list” without losing your mind
Whether you’re reading Part Two, the original critique, or any future installment of the “Count the Mistakes” Olympics,
here’s a practical method that works surprisingly well:
Step 1: Sort the claims by severity
- Severity A: misquotes, wrong attributions, wrong numbers, false statements about what a source says.
- Severity B: incomplete context, questionable inference, overconfident summary of mixed evidence.
- Severity C: style complaints, tone complaints, or “I wouldn’t have phrased it that way.”
If an “errors” list is mostly Severity C, it’s not really about accuracy. It’s about winning.
If it includes Severity A, then the discussion needs to pause and deal with that first.
Step 2: Look at correction behavior, not just the initial mistake
The adult move in publishing isn’t “never make mistakes.” It’s “correct mistakes promptly and transparently.”
Sites with clear correction practices earn more trust over time, even when they slip.
A reliable signal is whether corrections are easy to find, explained clearly, and applied consistently, especially when
the mistake is embarrassing. Anybody can fix a typo. The question is whether they fix the things that hurt trust.
Step 3: Ask what happens to the core argument if every minor point is granted
This is the hidden test. Imagine a critic is right about five minor issues. Does that overturn the main claim?
Sometimes yes (if the main claim depends on those points). Often no (if the main claim rests on multiple lines of
evidence and those points are peripheral).
“Error” rhetoric often tries to smuggle in a bigger conclusion: “Because you were wrong here, you’re wrong everywhere.”
That’s not logic. That’s a vibe.
Step 4: Watch for asymmetry
A frequent pattern in polarized debates: one side demands perfect precision from opponents while accepting sweeping
generalizations from allies. If you only notice errors on one team, you’re not fact-checking; you’re cheerleading.
What editors and writers can learn from Part Two (even if they never touch this topic again)
The “19 errors” episode isn’t unique to gender medicine. It’s a universal publishing lesson wearing a very loud outfit.
Here are the takeaways worth keeping:
1) Quotation marks are hazardous materials: handle with gloves
If you can’t point to the exact language in the source, don’t use quotation marks. Paraphrase instead, and say it’s
a paraphrase. The credibility cost of a misquote is wildly disproportionate to the convenience of sounding punchy.
2) Don’t claim consensus when what you mean is “a prominent guideline”
Guidelines matter. So do systematic reviews and evidence quality assessments. Good writing distinguishes:
what clinicians commonly do, what guidelines recommend, what the evidence strongly supports, and what remains uncertain.
Collapsing these categories is how “soft errors” are born.
3) Be careful when naming individuals and describing their work
Claims about researchers’ motives or professional conduct escalate risk fast. If you must critique an individual’s
work, focus on the methods and the claims, then cite precisely. In contentious domains, a sloppy characterization
becomes gasoline for the entire discourse.
4) Build a correction lane before you need it
The best time to plan how you’ll correct mistakes is before you’re trending for the wrong reason.
A visible corrections policy and a consistent process for updates aren’t optional in the modern attention economy;
they’re basic infrastructure.
Conclusion: fewer dunk contests, more durable knowledge
“About those ‘19 Errors,’ Part Two” sits at the crossroads of science writing and internet accountability.
The argument will keep going, because the underlying topic, how best to care for transgender and gender-diverse youth,
is still evolving, still contested, and still deeply consequential.
But the broader lesson is surprisingly simple: accuracy is not a vibe. It’s a practice. And on topics where people are
scared, politicized, or personally invested, that practice has to be unusually disciplined. Not because perfection is
possible, but because trust is fragile, and the internet is always counting.
Experience-Based Lessons: what “error storms” teach content teams
You don’t need to live through a public “19 errors” pile-on to learn from it. You just need to recognize the pattern.
In many content teams (newsrooms, medical comms, nonprofit education shops), an “error storm” tends to unfold the same way:
a piece goes live, a critic assembles a receipt-heavy thread, and suddenly everyone is staring at one paragraph like it’s
the Zapruder film.
Here are the experience-based lessons that teams repeatedly discover the hard way, especially on topics that blend
medicine, identity, and policy:
A) The first 30 minutes are about psychology, not publishing
The immediate impulse is either (1) defensive denial (“They’re acting in bad faith!”) or (2) panicked overcorrection
(“Delete everything!”). Neither is ideal. The productive move is slower: isolate the claims, categorize them by severity,
and assign a calm reviewer who was not the original author. Separating ego from evaluation is the difference between
“we corrected a mistake” and “we started a weeklong meltdown.”
B) Small errors become big because they’re symbolic
A typo doesn’t usually go viral. A misquote can. Not only because it’s “worse,” but because it signals something larger:
“Did you even read the source?” In high-conflict topics, critics often treat small factual failures as proof of a hidden
agenda. Teams learn to protect their credibility by locking down the basics: quotes, numbers, dates, names, and what
specific sources actually say.
C) “It’s technically true” is a trap phrase
Teams often defend a contested line with technicalities, only to realize that readers are reacting to the implication.
For example, a sentence can be literally accurate but framed in a way that invites a misleading conclusion. In medicine,
this happens when writers blur correlation and causation, overstate certainty, or summarize mixed evidence as if it’s a
single clean story arc. The fix is not to write timidly. It’s to write precisely: define terms, specify populations,
and state what is known versus hypothesized.
D) Corrections work best when they’re boring
The goal of a correction is not to win an argument. It’s to restore the record. That means:
(1) state what was wrong, (2) state what is now correct, (3) note when the change was made, and (4) avoid rhetorical
flourishes. If a correction reads like a clapback, audiences interpret it as a PR move. If it reads like a calm update,
audiences interpret it as stewardship.
E) You can’t “tone” your way out of a sourcing problem
Teams sometimes try to fix legitimacy issues by softening language: swapping “false” for “misleading,” “dangerous” for
“concerning,” “debunked” for “contested.” That can help, but it doesn’t solve the core issue if the citations don’t match
the claim. Critics don’t care that you were polite; they care whether the source supports the sentence. The durable habit
is claim-first drafting: write the claim, then find the best evidence for it, then revise the claim until it matches the
evidence you actually have.
F) The audience you’re really serving is the quiet reader
In an “error storm,” the loudest voices are often not persuadable. The quiet reader, the one who’s genuinely trying to
learn, is the person who benefits from clarity, definitions, and transparent updates. Content teams that keep that reader
in mind tend to make better decisions: they correct quickly, avoid performative fights, and improve their process so the
next piece is stronger.
If you take nothing else from the “19 errors” saga, take this: high-stakes topics demand high-discipline writing.
Precision isn’t a personality trait. It’s a workflow.
