neuralcosmology
Essays
June 15, 2026 · 4 min

Fluency on the cheap: epistemic hygiene in the LLM era

Before 2023, the price of a paragraph that read like expert analysis was paid in cognitive labour. Now it costs twenty dollars a month. What an author who still has a conscience ought to do about it.

A practical problem arrived in 2023. Until then, writing a paragraph that looked like a specialist working through an argument required someone to think. The structure, the vocabulary, the hierarchy of claims, the references: all assembled in the head of whoever stood behind the text. After 2023 the assembly is done by machines, and done well enough that telling machine prose from human expert writing now takes deliberate, separate effort.

The LLM is a powerful tool and I use one daily — for code review, translation, literature search, fast first drafts of an argument. The worry isn't with the tool. The worry is with a new public rhetoric in which a surface match with expert writing has begun to be confused with epistemic weight.

There's an old word for it that travels poorly across languages but the image survives. Picture a small-time reseller who has talked his way into selling a thermonuclear reactor out of a yard sale. The reactor is real. The labels are right. The leaflet is well written. The reseller isn't lying. He simply doesn't understand what's in front of him and can't answer a single second-order question about it. Until recently this class of figure was rare at the upper levels of public conversation, because the entry threshold filtered them out. Now the threshold is a twenty-dollar-a-month subscription.

Symptoms

A paragraph written by a machine and a paragraph written by a human who leans on a machine without carrying any cognitive load themselves read the same. The signs:

— No exposed risks. Any substantive claim in science has the shape "here is a condition under which I am wrong." The yard-sale text never carries that condition, because the reseller doesn't know the condition under which his thesis fails.

— Smooth completeness. Real reasoning stumbles, retreats, catches its own counterexamples. The yard-sale text glides — it's generated as a stylistically consistent surface, and nothing past surface consistency survives a push.

— Names doing no work. In a serious text, a mention of Friston or Tononi or Levin either rests on a substantive connection (here is what is in their work, here is how that bends the argument) or it doesn't belong. In the yard-sale text, names function as "I'm familiar with the field" signals, and the actual work of the cited author plays no role in the argument.

— Quantitative non-contact. Any claim about reality has to land somewhere in a number — an effect size, a power level, a range, a timescale. Yard-sale text routes around numbers, because numbers are check-points.

What sits under this

The "can a reader tell ChatGPT prose from a human's" test has mostly failed in its current form: the prose is good, manual fact-checking is slow, editors tire. The only filter that scales is the author, and what the author filters is not the output text but their own position on the subject before they sit down to write.

Epistemic hygiene in this sense is a simple discipline. Before any public claim, four questions:

  1. Under what condition would I be wrong? If the answer is a concrete observable outcome, keep going. If there is no answer, retract the claim.

  2. Where does this land in a number? Order of magnitude is enough: "10⁻⁴ of effect width," "10⁵ trials," "10² galaxies." Without a number the claim is rhetoric.

  3. What is the strongest counterargument against me? Not a strawman, but the most technically loaded counterargument an actual specialist would bring. If you can't state it at the level the specialist would, you don't understand the subject.

  4. If the strongest counterargument failed, what would that do for my thesis? If nothing, the thesis is theological. If something, name it now, in advance.

The four questions aren't for every text. Fiction doesn't get tested this way. Personal essays don't either. But any text that claims epistemic content — "here is how reality works," "here is what the experiment shows," "here is what follows from the theory" — needs to pass this filter before publication.
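The four-question filter is a checklist, and a checklist can be made mechanical. A minimal sketch in Python, under the assumption that each question reduces to "is there a concrete answer on record"; the `Claim` fields and function names are illustrative, not part of any published tooling:

```python
from dataclasses import dataclass
from typing import Optional, List

@dataclass
class Claim:
    """A public claim plus its on-record answers to the four questions.

    All field names are hypothetical labels for this sketch:
      failure_condition  -- Q1: concrete observable under which the claim is wrong
      magnitude          -- Q2: order-of-magnitude anchor, e.g. "10^5 trials"
      strongest_counter  -- Q3: the counterargument an actual specialist would bring
      gain_if_counter_fails -- Q4: what the thesis gains if that counter fails
    """
    text: str
    failure_condition: Optional[str] = None
    magnitude: Optional[str] = None
    strongest_counter: Optional[str] = None
    gain_if_counter_fails: Optional[str] = None

def hygiene_gaps(claim: Claim) -> List[str]:
    """Return the unanswered questions; an empty list means the claim passes."""
    gaps = []
    if not claim.failure_condition:
        gaps.append("no condition under which the claim is wrong (retract)")
    if not claim.magnitude:
        gaps.append("no quantitative contact (the claim is rhetoric)")
    if not claim.strongest_counter:
        gaps.append("no specialist-grade counterargument on record")
    if not claim.gain_if_counter_fails:
        gaps.append("nothing gained if the counter fails (theological thesis)")
    return gaps
```

The point of the sketch is the return type: not a pass/fail boolean but the list of gaps, because the discipline is about naming what is missing before publication, not about scoring.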

Why I'm writing this to myself

I'm not writing this for the reader directly. I'm writing it for myself, and for the version of me one year from now who will lose discipline and start publishing smooth, unanchored paragraphs about consciousness and physics. When that happens, the essay should sit there as a check. "Under what condition was I wrong? Where is it in a number? What is the strongest counter? What would strengthen me?"

The same logic explains the shape of this site. The SPARC preprint gives numbers on 171 galaxies, open code, AIC comparison against MOND. The falsifier table publishes in July — seven conditions under which the programme dies. The essay on PEAR/GCP describes the protocol that puts it under fire. The essay on Levin's bioelectric memory issues a biological prediction that resolves or breaks in a single lab cycle. All of this is self-checking infrastructure built outside the author's head, because the inside-the-head check is the first one to weaken.

What I ask of the reader

One thing. When you read a text that claims substantive knowledge of the world — mine or anyone else's — put at least the first question to the author. Under what condition would you be wrong? If there is no answer, the text carries no epistemic weight. However persuasive, however elegant, however well cited — it carries no weight. It is yard-sale fusion.

In an era when surface literacy is cheap, the filter has moved inwards. Without it the public conversation about reality will stop meaning anything in short order. With it there is still a chance to tell a programme that can be checked from a piece of well-written persuasion.

epistemics · llm · methodology · rhetoric · discipline