A small observation from an old and uncomfortable literature. Place a hardware random number generator next to a person concentrating on a particular outcome and the output statistics shift toward that outcome by something on the order of one part in ten thousand. A shift that small is distinguishable from pure noise only in the limit of very many trials.
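The scale of that claim can be made concrete. For an idealised binary RNG with per-bit hit probability 0.5 + ε (a simplified model for illustration; the actual PEAR statistics are more involved), the expected z-score over N bits grows as 2ε√N, so a bias of one part in ten thousand needs hundreds of millions of bits before it clears five sigma:

```python
import math

def expected_z(eps: float, n_bits: int) -> float:
    """Expected z-score under H0 (p = 0.5) for a coin with p = 0.5 + eps.
    Excess hits = eps * n; standard deviation under H0 = 0.5 * sqrt(n)."""
    return 2.0 * eps * math.sqrt(n_bits)

def bits_for_sigma(eps: float, z_target: float) -> int:
    """Number of bits needed for the expected z to reach z_target."""
    return math.ceil((z_target / (2.0 * eps)) ** 2)

eps = 1e-4  # one part in ten thousand
print(bits_for_sigma(eps, 5.0))  # 625_000_000 bits to expect a 5-sigma excess
```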
Robert Jahn's Princeton Engineering Anomalies Research lab collected this kind of data from 1979 to 2007 — several million sessions. Mean per-session z-shift around 0.2σ, which compounds to roughly five sigma across the corpus. The lab closed with an honest statement: the effect is robust, the mechanism is unknown, the academic community did not accept the methodology. They never claimed a mechanism.
The paradigm continued in the Global Consciousness Project (Roger Nelson, 1998–present). A network of about seventy hardware RNGs scattered around the world records continuously. When something happens that draws mass attention — September 11, an Olympic opening ceremony, the death of a public figure — Nelson asks whether the aggregate statistic deviates from expectation. Analyses of more than five hundred events have been published, with a composite z around 6.4σ.
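Composite figures of this kind are standardly produced by Stouffer's method: each event yields one z-score, and under the null the sum divided by √n is again standard normal. A minimal version, with an illustrative mean per-event z (my number, chosen only to show how the compounding works, not taken from the GCP data):

```python
import math

def stouffer(z_scores: list[float]) -> float:
    """Pool independent per-event z-scores into one composite z.
    Under H0 each z ~ N(0, 1), so sum(z) / sqrt(n) ~ N(0, 1) as well."""
    return sum(z_scores) / math.sqrt(len(z_scores))

# Illustrative: 500 events averaging z ~ 0.29 each compound to roughly 6.5 sigma.
# This is how a per-event effect invisible on its own becomes a large composite.
print(round(stouffer([0.29] * 500), 1))
```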
This is uncomfortable for two reasons. First, the effect looks like a violation of independence in physical processes across a geographically distributed network. Second, the standard criticisms — file-drawer effect, post-hoc selection, flexible event definition — are fair. They don't close the question, but they reduce confidence in the PEAR/GCP meta-analyses to the point where no conservative physicist will cite them.
I am not asking you to take the data on faith. I am asking you to notice the shape of the prediction that follows from them.
What pointer architecture predicts
If reality is organised as a distributed computational system with local nodes and shared state, then, like every such system, it has bounded coordination bandwidth. When many observers query the same part of state at once — say, watching a single broadcast — the synchronisation load on that region rises.
What should show up under load? The same thing that shows up in any distributed architecture: rising latency, small desynchronisations, local deviations from stationary statistics. Not "mass consciousness moves particles." Just slightly less efficient maintenance of the local stationary process during peak load.
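The distributed-systems half of that claim is textbook queueing behaviour. In the simplest model, an M/M/1 queue in front of a shared region, mean time in system is W = 1/(μ − λ), which blows up as offered load λ approaches service capacity μ. A toy illustration of the load–latency curve (units arbitrary; this is the analogy, not a physical model):

```python
def mm1_latency(arrival_rate: float, service_rate: float) -> float:
    """Mean time in system for an M/M/1 queue: W = 1 / (mu - lambda).
    Valid only for lambda < mu; latency diverges as load nears capacity."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable at or above capacity")
    return 1.0 / (service_rate - arrival_rate)

mu = 100.0  # queries/s the shared region can coordinate (arbitrary units)
for lam in (50.0, 90.0, 99.0):  # rising synchronisation load
    print(f"load {lam / mu:.0%}: mean latency {mm1_latency(lam, mu) * 1000:.0f} ms")
```

Nothing here predicts moving particles; it predicts the queue's signature of degraded bookkeeping at the loaded region.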
The effect size in this picture has to be very small and very persistent. That is exactly what PEAR and GCP find. It is also exactly what is hard to distinguish from selection artefacts without a preregistered protocol.
What it would take to either confirm this or kill it honestly
One experiment. Preregistered on OSF. A pre-specified list of thirty upcoming global-scale events with a built-in attention metric (e.g. forecast broadcast audience). A pre-specified time window: ±90 minutes around peak. A pre-specified statistic: composite z-score across a network of at least fifty hardware RNGs of varied geography. A pre-specified threshold: aggregate statistic exceeds 4σ at the GCP-claimed effect size for n=30.
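The analysis half of that protocol is short enough to preregister verbatim. A sketch of the decision rule, assuming one z-score per event has already been computed from its ±90-minute window across the RNG network (the window extraction and the per-RNG statistic are placeholders here, not GCP's actual pipeline):

```python
import math

N_EVENTS = 30          # length of the pre-specified event list
SIGMA_THRESHOLD = 4.0  # pre-registered decision boundary

def composite_z(event_z: list[float]) -> float:
    """Stouffer combination: one z per pre-registered event, pooled into a
    single composite that is N(0, 1) under the null of no attention effect."""
    return sum(event_z) / math.sqrt(len(event_z))

def decide(event_z: list[float]) -> str:
    """Apply the pre-registered rule to the full event list and nothing else."""
    if len(event_z) != N_EVENTS:
        raise ValueError("the rule runs only on the complete pre-specified list")
    return ("effect supported" if composite_z(event_z) > SIGMA_THRESHOLD
            else "probably file-drawer")
```

The point of fixing N_EVENTS and SIGMA_THRESHOLD before the first event is that neither the event list nor the statistic can be adjusted after the data arrive, which is exactly the flexibility the standard criticisms attack.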
If the threshold is met, you have a serious argument that synchronisation in physical systems depends on something that correlates with mass attention. If it is not met, twenty years of PEAR/GCP literature gets a clean "probably file-drawer."
Either way, we know more than we do now.
Why bother with this essay
Within the pointer architecture programme, the galactic half is already on testable ground: the preprint reports the first comparative AIC test on SPARC, with code released. The biological half needs cooperation with labs of the Michael Levin tier — that won't happen on a solo timeline.
Between the two sits an observational test that should be cheaper: equipment that already exists in a distributed network, and a protocol that fits inside one mid-sized grant.
I am not claiming the effect is there. I am claiming that it ought to be if pointer architecture is right, that we have the instrumentation to check, and that a serious check is a methodologically available step nobody has taken.
The companion book, Celestial Code, walks through the full prediction set. The preprint gives the numbers on galaxies. This essay is about the cheapest of the tests — the one nobody has yet run properly.
