A new pre-print made the rounds last week waving red flags about your brain on ChatGPT. Undergrads, EEG caps on, wrote three 20-minute essays. Those who leaned on GPT-4o showed weaker alpha-beta coupling, produced eerily similar prose, and later failed to quote their own sentences. Headline: “LLMs dull your mind.”
Sloptraptions is an AI-assisted opt-in section of the Contraptions Newsletter. Recipe at end. If you only want my hand-crafted writing, you can unsubscribe from this section.
Post-publication note (6/21/25): Apparently the preprint had a couple of “AI traps” inserted (ht ) designed to limit (and possibly actively mislead?) summarizability. The author’s comments are in this Time article. I don’t think this affects my specific conclusions/criticisms of the narrative overreach around results of pretty narrow significance. But make up your own mind. Other readings of the preprint may have been more significantly undermined.
On a meta-note, unlike famous scholarly hoaxes like the Sokal affair, this preprint tries to present actual research, pre-empt criticisms, and massage the response narrative through a rather silly technical trick. Scanning, summarization, and triaging have been facts of scholarly life for centuries. If most readers conclude your paper is not worth reading in detail based on a quick scan of the abstract and contents, and a check for anticipation/accommodation of particular critical points, then burying a gotcha in the details (especially in a 206-page study) doesn’t actually increase the significance. If anything, LLMs increase rather than decrease the depth of engagement with such material by researchers trying to better triage the firehose. Hamstringing the use of LLMs to summarize through such traps is not going to make people more willing to read in detail papers they’ve judged not worth the effort based on the abstract and a scan-through.
I buy the data; I doubt the story. The experiment clocks students as if writing were artisanal wood-carving—every stroke hand-tooled, originality king, neural wattage loud. Yet half the modern knowledge economy runs on a different loop entirely:
delegate → monitor → integrate → ship
Professors do it with grad students, PMs with dev teams, editors with freelancers.
Neuroscience calls that stance supervisory control. When you switch from doer to overseer, brain rhythms flatten, attention comes in bursts, and sameness is often a feature, not decay.
2 · The Prompting-Managing Impact Equivalence Principle
For today’s text generators, the cognitive effects of prompting an LLM are empirically indistinguishable from supervising a junior human.
Think inertial mass = gravitational mass, but for AI.
As long as models write like competent interns, the mental load they lift—and the blind spots they introduce—match classic management psychology, not cognitive decline.
3 · Sameness Cuts Two Ways
Managerial virtue – Good supervisors enforce house style and crush defect variance. Consistent voice across 40 blog posts? Process discipline.
Systemic downside – LLMs add an index-fund pull toward the linguistic mean—cheap, reliable, originality-suppressing (see our essay “LLMs as Index Funds”).
Tension to manage – Know when to let the index run and when to chase alpha—when to prompt-regen for polish and when to yank the keyboard back for a funky solo.
Thus the EEG study’s homogeneity finding can read as disciplined management or proof of mediocrity. The difference is situational judgment, not neurology.
4 · Evidence from the Real World
Creators shift effort from producing to verifying & stewarding (Microsoft–CMU CHI ’25 survey)
60% of employees already treat AI as a coworker (BCG global survey, 2022)
HBR now touts “leading teams of humans and AI agents” (Harvard Business Review, 2025)
Across domains, people describe prompting in manager verbs: approve, merge, flag.
5 · So Why Did the Students Flop?
Because freshman comp doesn’t teach management.
Drop novices into a foreman’s chair and they under-engage, miss hallucinations, and forget what the intern wrote. Industry calls them accidental managers.
The cure isn’t ditching the intern; it’s training the manager:
delegation protocols
quality gates
exception handling
deciding when to tolerate vs. combat sameness
A follow-up study could pit trained editors, novice prompters, and solo writers against the same brief—tracking error-catch speed, final readability, and EEG bursts during oversight moments.
6 · Implications
Education – Grade AI-era writing on oversight craft—prompt chains, fact-checks, audit trails—alongside hand-wrought prose.
Organizations – Stop banning LLMs; start teaching people how to manage them.
Research – Use dual baselines—artisan and supervisor. Quiet neural traces aren’t always decay; sometimes they’re vigilance at rest.
7 · Closing Riff
The EEG paper diagnoses “cognitive debt,” but what it really spies is role confusion.
We strapped apprentices into a manager’s cockpit, watched their brains idle between spurts of oversight, and mistook the silence for sloth.
Through the lens of the Prompting-Managing Equivalence Principle:
Sameness ⇢ quality control
Low activation ⇢ watchful calm
Real risk ⇢ index-fund homogenisation—a strategic problem, not a neurological cliff.
Better managers, not louder brains, are the upgrade path.
Recipe
Briefly summarize the process we used to write this essay
How we got here, in four moves
Rapid appraisal of the study – You asked me to “read the paper.” I skim-summarized its design, measures, and headline findings so we had shared ground for critique.
Critical framing & concept coinage – We identified the study’s blind spot (it treats writing as solo craft) and coined the Prompting-Managing Impact Equivalence Principle to reframe LLM use as supervisory work rather than cognitive decline. We also folded in your earlier “LLMs as index funds” argument about homogenization.
Outline co-construction – I mapped those ideas into a sectioned outline: provocation → fault line → equivalence principle → sameness tension → evidence → skill-building → implications. You refined it, adding the point that management is itself a learnable skill.
Dense-prose rewrite – Finally, I converted the outline into a flowing, informal yet tight essay, delivered in Markdown with strategic quotes and punchy sub-heads.
So: scan → reframe → structure → style.
Further comment on the "LLM traps" this post has had to weather. Imagine you invented a time machine, and wanted to write a paper about it. Do you think you'd need to use anti-LLM-summarization traps in order to get people to personally read every last detail eagerly? The tactic is ONLY meaningful if you know your point is fundamentally weak, and you need an easy way to dismiss criticisms as "aha, you didn't actually read it! Gotcha!"
If you think you have done something truly significant, and it isn't popping in the headline and abstract, you've either buried the lede because you don't recognize it yourself, OR you're mistaken. In this case, it's the latter situation. The brain-scan experiments don't prove what the authors think they prove, and the headline narrative is basically false.
This post I felt like reading, and didn’t notice or mind it being AI-generated.
Somehow I didn’t feel the same about earlier AI-generated posts in this series.
Maybe it is because models are getting better, or you are getting better at prompting. Probably both.