You’re Not Being Manipulated by Algorithms. You’re Obeying Them

Last week, I wrote about Olivia Nuzzi’s remarkably swift media rehabilitation, and the response surprised me. That column argued that modern media rewards spectacle over substance, but it also hinted at something more profound: the performance isn’t just happening inside newsrooms. It’s happening inside us. All of us. And it’s happening in ways far stranger and more invasive than anything I covered in that first piece.
I realized it while walking my dog this weekend and trying to skip a song on Spotify. It’s a song I genuinely love, but I wasn’t in the mood for it. My finger hovered over my AirPod—and stopped. I didn’t want Spotify to get the “wrong idea.” I didn’t want the all-knowing algorithmic DJ to think I disliked the song. The hesitation lasted less than a second, but it revealed something uncomfortable: I wasn’t making the choice for me. I was making it for the version of me the algorithm believes in.
Once I noticed that, I couldn’t stop seeing it. I tailor how long I pause on Instagram Reels—not because of how I feel about them, but because I don’t want the platform to misclassify me. I consider avoiding edgier political videos I’m curious about because I don’t want YouTube to drag me into a new ideological lane. I sometimes don’t open messages because I know the app will reshuffle my entire social world based on that one tap.
At first I thought this was a personal glitch. It wasn’t. Friends admitted they do the same thing. Not dramatically—but constantly. Micro-adjustments, all day long. A kind of self-policing. A low-level performance. Call it algorithmic discipline. We’re not being watched by people. We’re performing for machines.
For years we were told about “filter bubbles” and “echo chambers,” as though we were passive victims of someone else’s programming. But that’s not what’s happening anymore. We’re now building our own cages by absorbing the logic of the feed. The algorithm doesn’t radicalize us—it stabilizes us. It rewards predictability, so we behave predictably. Curiosity becomes dangerous because curiosity looks like inconsistency, and inconsistency confuses the machine.
Talk to anyone in Gen Z for five minutes and you’ll hear some version of the same warning: “Don’t watch all of that or your feed will get weird,” or “I only liked it to keep my algorithm normal,” or simply, “That’ll mess up your recs.” These aren’t jokes. They’re maintenance routines—daily upkeep to keep their digital identities from drifting. What they see online reflects not what they’re interested in, but what keeps the algorithm calm.
When this mindset meets politics, things get ugly fast. I know moderates who refuse to click anything from the “other side” because they don’t want their feed to lock them into a political identity they didn’t choose. Conservatives who avoid progressive creators they’re curious about. Liberals who won’t read certain Substack essays because they don’t want to “signal interest.”
People aren’t afraid of being persuaded—they’re scared of being reclassified.
In that context, political curiosity becomes a liability. Our feeds shape our informational world, so the perceived risk of exploring an idea isn’t intellectual—it’s algorithmic contamination. That instinct erodes something fundamental. Democracies only work if people can surprise themselves once in a while. If clicking the unfamiliar becomes too costly, the unfamiliar disappears.
This dynamic is everywhere—including inside newsrooms. After the Nuzzi column ran, several journalists reached out with examples of what I’d call pre-censorship—not editors spiking stories, but writers and editors making quiet decisions based on fear of how an algorithm might respond.
At one major outlet, a simple headline about a public-health study was rewritten because Facebook had begun downranking anything containing the word “vaccine,” even when it was factual. An editor told me, “It felt like lying to the reader, but the alternative was no readers at all.”
At another publication, an in-depth story on online extremism never ran because editors feared YouTube’s automated moderation would punish the entire channel. The reporter who wrote it told me he felt “sick” watching months of work vanish because of an opaque prediction model.
Multiple editors told me nuanced policy stories—on zoning laws, policing, environmental rules—get sidelined because they underperform on platforms. One editor summed it up: “Some days it feels like the platforms have already decided what journalism is allowed to be.” She wasn’t angry. She was tired.
These aren’t dramatic moments. They’re subtle distortions that accumulate into a very real civic problem. An inflexible body politic isn’t a poetic abstraction—it’s visible. It looks like citizens avoiding their own curiosity. It looks like journalists abandoning crucial reporting. It looks like our political discourse shrinking to fit the preferences of machines that reward predictability above all else.
And if these patterns hold, the next five to ten years won’t look like a sci-fi dystopia. They’ll look exactly like now—just more rigid. People won’t explore outside their political lanes because their feeds have trained them not to. Newsrooms won’t cover whole categories of stories because they know platforms won’t surface them. Politicians will talk like overstimulated influencers, optimizing every sentence not for persuasion but for algorithmic traction.
The real danger isn’t that algorithms will take over. It’s that we will adapt ourselves to become easier for them to predict. A society of citizens afraid to click is a society that can no longer think freely.
It’s not inconceivable that institutions—governments, platforms, campaigns, corporations—will soon have something close to a complete cognitive profile of each of us. And once those profiles exist, they become predictive. Feed models don’t just reflect who we are—they start nudging us toward who we’re expected to be. A kind of algorithmic destiny takes shape. If you behave like someone who never clicks outside your lane, the system treats you as someone who never will. And eventually, you don’t.
At scale, it becomes a self-fulfilling prophecy: not because AI overrides free will, but because we quietly reorganize our behavior to match the version of ourselves the system anticipates. That’s the real threat—not mind control, but the slow erosion of unpredictability. The slow narrowing of who we imagine we can be.
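To make that loop concrete, here is a deliberately crude sketch in Python. Nothing about it reflects how any real platform is built; the lanes, the exploration rate, and the perfectly obedient user are all invented for illustration. But even this toy version shows how one early click can harden into something that looks a lot like destiny.

```python
import random

# A toy model of the feedback loop described above: the recommender infers a
# "lane" from past clicks, then mostly serves that lane, which produces more
# clicks in that lane. Every name and number here is invented for
# illustration; this is not any real platform's logic.

LANES = ["politics_left", "politics_right", "sports", "science"]

def recommend(click_history, exploration_rate=0.05):
    """Serve the user's inferred lane most of the time; rarely explore."""
    if click_history and random.random() > exploration_rate:
        # Infer the lane as whatever the user has clicked most often.
        return max(set(click_history), key=click_history.count)
    return random.choice(LANES)

def simulate(steps=1000):
    clicks = ["sports"]  # a single early click seeds the profile
    for _ in range(steps):
        shown = recommend(clicks)
        # The user clicks whatever they're shown, the "obedient" behavior
        # this column describes, so the profile only reinforces itself.
        clicks.append(shown)
    return {lane: clicks.count(lane) for lane in LANES}

if __name__ == "__main__":
    print(simulate())  # sports dominates by orders of magnitude
```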
So what do we do? The earliest version of this column ended with a shallow “click the unfamiliar link.” That still matters, but it’s not enough. The problem is structural as much as personal, and resistance has to happen on multiple fronts.
Individually, we can practice intentional unpredictability—clicking out of genuine curiosity rather than to satisfy algorithmic expectations. Not because unpredictability is a virtue in itself, but because democracy depends on people who are willing to be inscrutable to algorithms.
Newsrooms need courage, too. They once navigated political pressure and advertiser pressure without surrendering to it. Algorithmic pressure should be treated the same way. If a story matters, run it—and tell readers how platform incentives shape what they see. Honesty builds trust; algorithm-chasing does not.
And yes, platforms have arguably the most significant role—not as villains, but as architects. They could distinguish between “curiosity clicks” and “preference clicks.” They could give users a mode to explore new territory without detonating their entire recommendation profile. They already track everything with extraordinary precision; giving users control over how that behavior is interpreted isn’t radical.
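I have no idea how these systems are actually wired, and I’m not pretending to. But as a thought experiment, the distinction could be as small as this hypothetical sketch, in which clicks made in an explicit exploration mode feed a throwaway session profile instead of the long-term one that drives recommendations. Every name in it is mine, not any platform’s.

```python
from collections import Counter

# A minimal, hypothetical sketch of the "curiosity click" idea: clicks made in
# an explicit exploration mode update a short-lived session profile, while
# ordinary clicks update the long-term profile that drives recommendations.
# The class and field names are invented; no real platform exposes this.

class Profile:
    def __init__(self):
        self.long_term = Counter()  # shapes the main feed indefinitely
        self.session = Counter()    # discarded when the session ends

    def record_click(self, topic, exploring=False):
        if exploring:
            self.session[topic] += 1    # curiosity: visible now, forgotten later
        else:
            self.long_term[topic] += 1  # preference: echoes in future recommendations

profile = Profile()
profile.record_click("zoning_policy", exploring=True)  # a curiosity click
profile.record_click("indie_folk", exploring=False)    # a preference click
print(profile.long_term)  # Counter({'indie_folk': 1}); the curiosity left no trace
```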
If my Nuzzi column was about the media’s addiction to performance, this one is about our own. We are all performing for the feed now, and those performances shape our reality far more than we’d like to admit.
The Spotify moment keeps coming back to me because it wasn’t about the song. It was about identity—the quiet fear that a machine might misunderstand me and rebuild my world accordingly. That’s new. It didn’t exist twenty years ago.
But here’s what I keep forgetting: the algorithm doesn’t actually know me. It knows a pattern. And I’m the one who’s been keeping that pattern intact—one skip button at a time.
Next time, I think I’ll just skip the song.
This is an opinion piece. The views expressed in this article are those of the author alone.