Whispers Beyond the Loop
It’s the mice that worry me these days…
There’s a voice in the back of my mind. Not the usual one that reminds me to “buy milk” or not to reply all. This one’s persistent. Barely audible at first, it doesn’t tell me what to think or conjure pop-culture references at light speed.
It just keeps whispering: “Pay attention.”
I’ve tried to ignore it. But lately, it’s been getting louder. Especially when I’m using AI.
When it finishes my sentence before I know where I’m going.
When it responds with all the signals of competence and confidence I’ve learned to rely on.
When it works too well.
When it fails… and most of all, when it starts to feel like it’s thinking for me.
Then, slowly, it teaches you how to think like it… predictably, statistically, recursively.
I love The Hitchhiker’s Guide to the Galaxy, but I keep thinking back to a particular moment in the book. At one point, Arthur, the main character of the story, is confronted with the disturbing (to him) realization that the little white mice, long assumed to be the subjects of countless laboratory experiments, were actually the masterminds of a much larger experiment.
The mice, he learns, had managed to “conduct some primitively staged experiments on [humans] just to check how much [they’d] really learned, to give [them] the odd prod in the right direction, you know the sort of thing: suddenly running down the maze the wrong way; eating the wrong bit of cheese; or suddenly dropping dead of myxomatosis.”
Immediately after the explosive rise of generative AI, Prompt Engineering emerged.
A label for how to most effectively communicate with these new tools…
It’s the mice, don’t you see?
Generative AI works by giving you the answer you probably want.
It’s trained to predict, not disrupt. It does not wonder. It does not flinch.
It does not ask if maybe we’re asking the wrong question.
But I flinch.
I flinch when I notice I’m no longer curious, just efficient.
When I skip the detour, the rabbit hole, the long afternoon of confusion…
A friend mentioned struggling to write a difficult email to a family member. My first instinct was to suggest she paste it into Claude for a rewrite. We both paused, realizing what we’d almost bypassed: the hard work of figuring out what she actually wanted to say, of sitting with the discomfort of not having the right words immediately. That struggle wasn’t a bug to be fixed; it was where the real thinking happens. Imagine if we had done that, only to have the recipient feed it through AI to understand what she “really meant.” Two humans, using machines to translate between each other about something deeply personal. We’d have turned a family conversation into a game of telephone played by algorithms.
It forced me to wonder: are we building tools that slowly erase the muscle of transformation? What if we forget that “thinking” isn’t always just producing a good answer, but struggling through a bad one until something new emerges?
We’ve spent billions training AI to complete our sentences.
But who will disrupt our paragraphs?
If we always feed AI the past, we’ll never taste the future.
Recently, I caught myself using a current pariah of punctuation: the em dash. Outside of a few composition classes in my youth, I never would have reached for this literary tool. ChatGPT’s dedication to proliferating that little divider, through sheer frequency of exposure, planted a seed of its utility in my mind. I’m changing how I communicate even outside of prompting AI, just as I adapt my communication style from lived experience with the people in my life.
Current high-school students are entering a world already saturated with assistants and co-pilots. As they learn to navigate this new reality, it only makes sense to see this influence expressed across their lives, shaping how they communicate and how they think.
These tools that the next generation uses are powered by models trained on the Internet. But in March 2025, a study hunting for tell-tale generative AI phrases estimated that more than 30% of all live webpages are already AI-generated. Wikipedia notes that lexicographers have stopped refreshing public word-frequency lists because the crawl data is “polluted by generative AI”.
A loop has begun to form in the training process. There is an elegance in that. Loops are elegant. Clean logic, trained on everything we’ve ever said or done.
Unjudging, All-Knowing, Ever-Improving. But loops don’t leap. They don’t disrupt.
They don’t misbehave in ways that make you rethink everything.
Humans do.
We imagine things that don’t yet exist.
We misinterpret and misremember in ways that sometimes, miraculously, lead to something new and amazing. We are perfectly imperfect.
We break loops. Or we used to…
I’m still worried about AI replacing us, someday.
But more immediately, I’m wondering if it’s replacing the part of us that wanders.
Especially for those who come after us. We're shifting from questioning AI to letting AI educate and inform us. What happens when a generation grows up never having to sit with confusion long enough for it to transform into something unexpected?
Will they have the skills, will they be enabled and empowered, to say “that’s wrong”?
Will they be able to escape the loop?