How to Save Democracy—and Journalism—After Algorithms Broke the Truth

Something broke in 2025. Not journalism—journalism was collateral damage. What broke was the information system that makes shared reality possible. Without some shared sense of what’s real, democracy doesn’t function for long. Alarm bells should be ringing, warning us what’s coming.
Earlier this week, I detailed how journalism lost its ability to set the terms of public reality—dismantled not by its own failures, but by algorithmic systems that reward emotional intensity over accuracy and affirmation over verification. Editorial judgment, imperfect yet guided by civic purpose, has been replaced by engagement-optimized algorithms. The result is fragmentation severe enough to undermine shared reality itself—or worse, to convince us objective truth no longer exists.
The diagnosis is clear. The question now is whether we do anything about it.
Calling this a “post-news era,” as Axios founder Jim VandeHei has, matters only if it leads to reform rather than resignation. Improving clarity and speed inside a broken system may help people cope—and, incidentally, flatter Axios’ editorial mission—but it does not repair the system that broke reality in the first place.
The problem is not how to make journalism trend again. It is how to rebuild an information environment where shared reality can exist at all. Without that foundation, disagreement stops being productive as politics becomes a fight over what is real. No democratic system sustains itself under those conditions.
If we are serious about fixing this, the path forward is not mysterious.
* * * * *
First: stop treating algorithmic amplification as free speech.
For years, we have blurred the line between hosting expression and engineering viral reach. One protects speech. The other distributes power.
Free expression does not require forced amplification. No one has the constitutional right to be injected into millions of feeds. A serious reform framework draws a firm boundary: platforms may host nearly anything, while the content they choose to algorithmically promote—especially political material and information that predictably distorts reality—faces clear constraints. Reach is power. Power carries responsibility.
In practice, that means limiting how often unverified claims are recommended before human review. It means explaining why content appears in a feed. It means adding friction—waiting periods, verification steps—before material reaches viral scale. Speech remains available—unencumbered and free. Automatic acceleration does not.
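To make "friction before reach" concrete, here is a minimal sketch, in Python, of how such a gate might work. The thresholds, field names, and waiting period are illustrative assumptions, not any platform's actual policy:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical thresholds: the real numbers would be a policy decision.
VIRAL_THRESHOLD = 10_000          # projected impressions that count as "viral scale"
REVIEW_WAIT = timedelta(hours=6)  # minimum waiting period before acceleration

@dataclass
class Post:
    created_at: datetime
    verified: bool            # has the claim passed human or source review?
    projected_reach: int      # impressions the recommender wants to add

def may_accelerate(post: Post, now: datetime) -> bool:
    """Decide only whether the recommender may inject this post into
    other people's feeds. The post stays visible to followers either way."""
    if post.projected_reach < VIRAL_THRESHOLD:
        return True                       # ordinary reach: no friction
    if post.verified:
        return True                       # reviewed claims can be amplified
    return now - post.created_at >= REVIEW_WAIT  # otherwise, wait
```

Nothing in this sketch removes speech; the system simply declines to push an unreviewed claim to strangers until it has been checked or has aged.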
Second: regulate information platforms like other systems that shape public life.
This is not about government control over content. It is about transparency and accountability when ranking systems cause systemic harm.
We regulate financial institutions because their failures cascade. Information systems now operate the same way, only faster and with fewer safeguards. We don't allow motorboats in reservoirs that supply drinking water; the same logic should apply to systems that supply public information. Platforms above a certain size must disclose how their recommendation systems weigh engagement, recency, and source credibility. Independent researchers need audit access to evaluate whether those systems amplify misinformation during elections or public health crises.
For example, during an election or public health emergency, researchers could determine in near real time whether false claims are being algorithmically boosted faster than verified reporting—and regulators could require immediate changes to ranking behavior.
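Here is a rough sketch of the kind of metric an independent auditor could compute, assuming access to hourly impression counts per item. That data feed, and the field names, are hypothetical, not any platform's real API:

```python
# Compare how fast flagged (likely false) claims gain impressions
# versus verified reporting, given hourly impression counts per item.

def growth_rate(impressions: list[int]) -> float:
    """Average hour-over-hour growth in impressions for one item."""
    if len(impressions) < 2:
        return 0.0
    deltas = [later - earlier for earlier, later in zip(impressions, impressions[1:])]
    return sum(deltas) / len(deltas)

def amplification_gap(flagged_items: dict[str, list[int]],
                      verified_items: dict[str, list[int]]) -> float:
    """Positive values mean flagged claims are gaining impressions
    faster, on average, than verified reporting."""
    flagged = [growth_rate(series) for series in flagged_items.values()]
    verified = [growth_rate(series) for series in verified_items.values()]
    if not flagged or not verified:
        return 0.0
    return sum(flagged) / len(flagged) - sum(verified) / len(verified)
```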
When demonstrable harm occurs, reliable and respected regulators should be able to demand correction. Transparency is the goal. Editorial control is not.
Third: rebuild shared information spaces by default.
Democracies do not require consensus. They require overlap—shared reference points and baseline facts.
Total personalization has hollowed out that overlap. Reversing it means establishing public-interest defaults during elections, emergencies, and moments of national consequence. Discovery layers can prioritize verified reporting without eliminating personalization. Users retain the option to opt out. The default simply recognizes that shared moments require shared information.
In practice, that could mean that during elections or emergencies, feeds (like X’s “For You”) prioritize verified reporting and official information before opinion-driven or purely engagement-optimized content. This would not be the law per se, but an agreed-upon standard for all social media platforms.
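A minimal sketch of what such a default might look like as ranking logic, with invented weights standing in for whatever a cross-platform standard would actually specify:

```python
# Hypothetical public-interest default. During a declared civic window
# (election, emergency), source credibility dominates the ranking score
# unless the user has explicitly opted out.

def ranking_score(engagement: float, source_credibility: float,
                  civic_window: bool, user_opted_out: bool) -> float:
    if civic_window and not user_opted_out:
        # Default during shared moments: credibility outweighs engagement.
        return 0.3 * engagement + 0.7 * source_credibility
    # Ordinary times, or opted-out users: engagement-driven ranking.
    return 0.9 * engagement + 0.1 * source_credibility
```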
Finally, acknowledge the values embedded in information systems.
Every ranking system encodes priorities. Engagement-first design elevates intensity over accuracy, speed over context, affirmation over understanding. Denying this reality has allowed those values to shape public life without public consent.
Regulation is a choice about which values come first. A healthier information system will be less addictive and less lucrative. That tradeoff deserves to be stated plainly.
Recommendation engines optimized for watch time reward rage and conspiracy because those maximize attention. A civic-stability approach prioritizes accuracy and source credibility, even when engagement declines. This is not a technical constraint. It is a business decision.
Platforms should offer users a clear, visible choice—one feed optimized for engagement, another optimized for verified information and source credibility. At present, that choice does not exist.
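Sketched as code, that choice is simply two ranking objectives applied to the same candidate posts, with the user, not the platform, deciding which one governs their feed. The scores and field names here are assumptions for illustration:

```python
# Two feeds from the same pool of posts: one ranked by predicted
# engagement, one ranked by source credibility and verification.

def rank_feed(posts: list[dict], mode: str) -> list[dict]:
    """mode is 'engagement' or 'civic'; the user picks it explicitly."""
    if mode == "civic":
        # Credible, verified sources first; engagement breaks ties.
        def key(p): return (p["source_credibility"], p["engagement"])
    else:
        # The status quo: predicted engagement first.
        def key(p): return (p["engagement"], p["source_credibility"])
    return sorted(posts, key=key, reverse=True)
```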
* * * * *
It’s worth acknowledging there is a real and understandable fear that immediately follows any call for limits on amplification: the idea of regulators, or any centralized authority, deciding what deserves promotion and what does not. For anyone familiar with George Orwell, that concern is not frivolous. The prospect of an official arbiter of truth should make people uneasy. It also makes me uncomfortable.
That said, I believe much of the recent panic over “Big Tech censorship” has been overstated. Twitter, before Elon Musk’s takeover, did make a series of regrettable moderation decisions that reinforced long-standing caricatures of Bay Area groupthink. In several cases, the act of censorship itself drew far more attention to certain stories than they likely would have received on their own, a textbook example of the Streisand effect.
Musk’s response was to swing to the opposite extreme. He stripped away most of that moderation framework and rebranded Twitter as X in the name of free speech. The result was not a flourishing marketplace of ideas. It was a platform increasingly saturated with baseless conspiracy theories, coordinated harassment, and outright hate speech. Important information still exists there, but it is buried beneath a volume of noise so overwhelming that the signal is difficult to detect.
These two failures illustrate the same point from opposite directions. Heavy-handed moderation backfires by eroding trust. Total abdication turns a major information platform into a polluted commons where reality competes on equal footing with fantasy—and often loses.
I don’t pretend that identifying who should decide what gets amplified is easy. It may even be impossible to get perfectly right. The proposals outlined here are not a blueprint for a truth commission. They are a starting framework for a discussion that has been avoided for too long. They focus on systems, incentives, and transparency rather than content-level judgment.
What’s needed is a formal convening—a commission with genuine authority and cross-partisan legitimacy that can establish baseline standards for algorithmic transparency and responsible amplification. Not to become an arbiter of truth, but to create a framework within which platforms, researchers, and regulators can operate with clarity and competence, rather than constant improvisation.
If we cannot even have a serious, cross-partisan conversation about what responsible amplification should look like, then we should be honest about the alternative. It means accepting defeat. It means conceding that an information system governed entirely by opaque machines, optimized for engagement and chaos, will define the limits of our democratic life. At that point, the only remaining hope is that our algorithmic overlords turn out to be benign.
That is not a strategy. It is surrender.
* * * * *
Taken together, the guidelines listed above prioritize reasonable friction before virality, transparency in ranking systems, independent audits with authority, public-interest defaults at critical moments, and meaningful user control. No single fix. A shift away from pure engagement and toward civic function.
Elements of this approach are already emerging. Europe’s Digital Services Act advances algorithmic transparency. Some platforms experiment with chronological feeds or quality signals. What remains absent is a coordinated response that treats the information ecosystem as democratic infrastructure rather than a neutral marketplace.
That absence was evident this week during a congressional hearing with FCC Chair Brendan Carr. The discussion centered on agency independence and allegations of bias. The systems actually shaping public reality—recommendation algorithms, engagement incentives, automated amplification—received little attention. That disconnect explains why the alarm keeps rising while the action stalls.
The alternative is straightforward: democratic institutions continue to erode as shared facts disappear, and governance becomes impossible when reality itself is contested. This is already happening.
None of this restores a mythical golden age of journalism. That era never existed. It does restore something more essential: the ability to argue, disagree, and govern without reality itself being up for negotiation.
Journalism can survive in that environment. It cannot survive in a system designed to bury it beneath content engineered to inflame and distract.
We are living in a bizarre future, untethered from any relevant wisdom or experience, running an experiment no democracy has ever run: governing through algorithmic systems that reward confusion over clarity.
Our shared reality was dismantled by choice. Rebuilding it requires one as well. The tools exist. The remaining question is whether we use them before the damage becomes irreversible.
This is an opinion piece. The views expressed in this article are those of the author alone.