Editor’s note (April 2026): This article is part of the Blog Herald’s editorial archives. Originally published in 2007, it has been revised and updated to ensure accuracy and relevance for today’s readers.
There was a moment in the mid-2000s when it felt like the internet could democratize news. Platforms such as Digg, Reddit, Netscape, and Newsvine delegated the power of editorial choice to ordinary users. You voted for what mattered. The stories with the most votes rose to the top. No editor, no gatekeeper – just collective judgement.
It was an idealistic model. And in practice, it immediately ran into one of the most persistent tensions in online communities: what happens when you give people power over each other’s visibility, but not responsibility for how they use it?
This question, which fueled early debates about social news platforms, is not only unresolved. It has become more consequential than ever.
The original transparency problem
Each of the early social news sites took a different approach to voting transparency. Digg let you see who had upvoted a story but hid downvotes behind a veil of anonymity. Netscape showed everything: every upvote and downvote on every submission, fully visible to all users. Reddit hid all individual vote data, showing only the totals.
Each approach had predictable side effects. Digg’s semi-transparency allowed observant users to catch coordinated voting rings, but the anonymity of negative votes gave rise to what became known as the “Bury Brigade”: groups that systematically buried stories or users they disliked, with no accountability. Netscape’s complete transparency enabled a degree of community self-policing, but it also created social friction: users took downvotes on their posts personally, and grudges would spread through the community. Reddit’s complete opacity left it vulnerable to manipulation by trolls and bots, while giving communities no mechanism to identify the problem.
What these experiments revealed, even then, was a structural truth: transparency creates accountability, but accountability creates friction. Designers had to choose between communities that could police themselves and communities that felt safe.
The same dilemma, at a completely different scale
The stakes for this design choice have grown tremendously. The social news platforms of the 2000s had modest audiences and limited cultural influence. Today’s successors—Reddit, X, YouTube, Facebook, TikTok—reach billions of people, and the opacity of their ranking systems has become a serious public concern.
Having survived the social news era to become one of the most visited sites on the web, Reddit still struggles with the same vote-manipulation dynamics that plagued Digg. Brigading – coordinated campaigns to mass-upvote or mass-downvote in order to steer debate – continues to distort the natural flow of conversation, undermine trust, and alienate community members. Reddit’s tools for combating it have improved, but the underlying trade-off is the same: because the platform ranks content by community votes rather than personalizing feeds for each user, the system is relatively transparent, but it is also more exposed to manipulation such as vote rings.
At the same time, the larger algorithmic platforms have moved far beyond simple voting mechanics. What you see on Facebook, TikTok, or YouTube is determined not by community voting but by proprietary ranking models trained on engagement signals, shaped by commercial incentives, and almost entirely opaque to the people they affect. If algorithms are making decisions that affect society, then it matters who makes those decisions and how; ensuring that those decisions benefit society requires transparency in algorithmic design.
This is not just an academic concern. A 2024–2025 field experiment that recruited 1,256 participants on X during the US presidential campaign found that reordering feeds to reduce content expressing partisan hostility measurably changed users’ feelings toward the opposing party – direct causal evidence that feed algorithms shape political attitudes. Platforms’ ranking choices are not neutral. They shape how polarized or cooperative the information environment becomes.
When accountability structures fall apart
The Bury Brigade problem of 2007 seems almost quaint compared to what followed. Vote manipulation has evolved from petty community-level squabbles into an organized tactic deployed at political scale. The persistence of this dynamic confirms something the original social news designers understood but underestimated: anonymous negative power without accountability doesn’t just enable abuse; it invites it.
This is where the transparency question connects to something deeper than platform design. At issue is whether the systems that shape public attention owe any obligation to the people they affect. For years, the platforms’ dominant answer was no. They self-regulated, issued transparency reports that were difficult to verify, and treated their ranking systems as proprietary assets to be protected.
That era is coming to an end – at least in some jurisdictions. The EU’s Digital Services Act, in full force since February 2024, is explicitly designed to end the period in which tech companies essentially regulated themselves, by forcing platforms to be more transparent about how their algorithmic systems work and to take responsibility for the societal risks stemming from their services. Under the DSA, users now have the right to understand the basis on which platforms rank content in their feeds and to opt out of personalized recommendations – obligations that have already prompted TikTok, Facebook, and Instagram to offer options to turn off personalized feeds.
Between 2023 and 2024, 35 US states introduced legislation addressing social media algorithms; more than a dozen bills have been signed into law, although many face ongoing legal challenges. Globally, the direction of travel is toward greater transparency—even if enforcement is controversial.
What Digg’s revival tells us
This story has a coda worth noting. Digg – one of the original platforms that sparked this debate – relaunched in 2026. Its return comes with explicit commitments to transparency, public trust, and search discoverability, and appears to be a direct response to growing frustration on Reddit with opaque decision-making and moderator burnout.
Whether Digg 2.0 succeeds is almost beside the point. What the revival signals is appetite: users, creators, and publishers are increasingly unhappy with platforms that wield enormous influence over visibility while barely explaining how that influence works. The nostalgia driving Digg’s reboot isn’t really nostalgia for the old site; it’s nostalgia for the idea that you could understand the system you participate in.
A lasting lesson for content creators
If you publish online, the transparency question is not abstract. The degree to which a platform makes its ranking logic legible to you directly affects your ability to build sustainable reach. Opaque systems breed dependency: you optimize for the signals you can observe without understanding the mechanism, and you remain vulnerable to invisible changes.
The designers of the original social news platforms wrestled with a real trade-off: accountability versus friction, visibility versus privacy. They didn’t solve it cleanly, and no one has since. But the conversation has matured. We now have better evidence of what algorithmic opacity costs in polarization, manipulation, and user trust. We have regulatory frameworks that are starting to demand answers.
What we still lack is a cultural expectation that platforms owe their users a legible account of how the choices that shape their visibility are made. That expectation, first articulated – however imperfectly – in the Bury Brigade debates of twenty years ago, remains a work in progress.