Editor’s note (April 2026): This article is part of the Blog Herald’s editorial archives. Originally published in August 2005, it has been revised and updated to ensure accuracy and relevance for today’s readers.
In August 2005, Google quietly added a feature to the Blogger navigation bar: a "Flag as Objectionable" button. The idea was simple enough. Readers who encountered spammy blogs or genuinely harmful content could report their concerns, and Google would use that aggregate signal to decide what to do next. Community moderation. Crowdsourcing.
Within weeks, there were reports that the system was being used differently – not to flag spam, but to silence criticism. It raised a question that has only become more relevant in the two decades since: what happens when you give a crowd the power to silence a voice?
A flaw baked into the "community standards"
Blogger’s documentation at the time was admirably candid about the intent while quietly acknowledging the risk. According to the platform, the flag button "is not censored and cannot be controlled by angry mobs". That reassurance appeared in the first sentence of the help page. The fact that the platform needed to say it at all suggests the designers understood, on some level, what they were building.
The mechanics created a structural problem. The volume of flags determined the outcome. Human review came second, if at all. This meant that a coordinated group – or simply people who disagreed with a writer’s views – could, in practice, push a blog toward removal without any individual claim being evaluated on its merits.
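The structural flaw is easy to see in miniature. The sketch below is a hypothetical illustration of volume-driven moderation, not Blogger's actual code; the class names and the threshold value are invented for the example:

```python
# Hypothetical sketch of volume-driven moderation. The threshold and
# names are illustrative assumptions, not any platform's real logic.

FLAG_THRESHOLD = 3  # takedown trigger; real platforms tune this value


class Post:
    def __init__(self, url: str):
        self.url = url
        self.flags = 0
        self.removed = False


def flag(post: Post) -> None:
    """One reader reports the post. Note what is missing: no claim is
    evaluated on its merits. Only the running count matters."""
    post.flags += 1
    if post.flags >= FLAG_THRESHOLD:
        post.removed = True  # automated takedown; human review later, if at all


# A coordinated group of three accounts is enough to remove the post,
# whether the content is spam or merely unpopular.
post = Post("example-blog/post-1")
for _ in range(3):
    flag(post)
print(post.removed)  # True
```

The point of the sketch is that nothing in the takedown path inspects the content itself; whoever can organize the most flags wins by construction.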
That was the tension at the heart of the feature, and it was never really resolved. Distinguishing between "this content is harmful" and "this content offends me" is one of the harder problems in platform governance. Delegating that distinction to aggregate crowd behavior doesn’t solve it – it hands it to whoever can organize the biggest response.
The pattern was repeated
That early Blogger episode previewed dynamics that would recur across twenty years of social-platform history. In 2012, a New York conservative blogger’s account was deleted within hours of a politically controversial post. The content was eventually restored, but an explanation was never given. In 2016, writer Dennis Cooper lost years of work – and his entire Blogger account – after a single anonymous complaint led to its deletion. It took months of public pressure before Google returned his material.
A peer-reviewed study published in 2024 found that flagging tools on major platforms are routinely used to silence users already vulnerable to platform censorship – activists, journalists, sex workers, LGBTQIA+ creators – often through coordinated reporting campaigns designed not to identify harm but to trigger automated takedown systems. Meta’s own Oversight Board has found that in some cases as few as three reports are enough to remove non-violating content from Instagram. The gap between "flagged" and "actually harmful" remains as wide as it was in 2005.
The problem is not specific to any platform or political persuasion. What the Blogger episode identified early was a design weakness: the easier it is to trigger a review and the more automated the response, the more useful the system becomes as a tool for harassment or brigading, regardless of the platform’s stated values.
Where does moderation stand now?
The scale of the problem has changed dramatically since 2005. Platforms now process content at volumes no human team can review manually. TikTok’s transparency reporting for 2024 indicated that more than 96% of content removed for policy violations was taken down by automated systems before any human review. Meta’s automated systems account for roughly 90% of removals of violent and graphic content.
The efficiency gains of automation are real. The problem is that automation inherits the same structural flaw as the original flag button, only faster and at larger scale. Systems trained to respond to volume are still vulnerable to coordinated abuse. An account targeted by an organized campaign faces the same functional outcome Ashok Banker’s blog faced in 2005: deletion first, questions later, limited appeal, and no transparent explanation.
The Electronic Frontier Foundation has repeatedly documented this dynamic: appeals processes that are slow, opaque, or effectively inaccessible; automated enforcement decisions that affect livelihoods without meaningful human review; and platforms under no obligation to explain why a particular piece of content was removed or an account suspended.
The European Union’s Digital Services Act, which came into full effect in 2024, seeks to address part of this by requiring major platforms to provide transparency about moderation decisions and appeals processes accessible to EU-based users. Though its coverage is geographically limited and its enforcement uneven, it is a meaningful step.
What should bloggers and creators take away from this?
The clearest lesson from two decades of this history is one many independent creators resisted until it was too late: platform dependency is a structural risk.
If your posts, audience, and archive live on someone else’s infrastructure, the rules can change around you without warning. A flag campaign, a policy update, an algorithmic change – any of these can effectively end your public presence on a given platform. Bloggers who dismissed concerns about the flag button in 2005 were making the same calculation writers later made about Twitter, Facebook, and Tumblr in the years before each of those platforms changed what it was willing to host.
This isn’t an argument against using platforms – it’s an argument for a clear-eyed understanding of the relationship. Publishing on a free hosting service means accepting the platform’s moderation logic as a condition of access. That was true of Blogger in 2005. It is true of Substack, Medium, and every other intermediary today.
Building some independence into your publishing operation – your own domain, a direct email relationship with readers, an archive you control – doesn’t eliminate platform risk. But it changes the terms of exposure. A writer whose primary relationship with their audience runs through something they own is in a fundamentally different position from one who has only ever published on someone else’s platform.
The flag button has long since been removed from Blogger’s interface, and Blogger itself is a much-diminished platform. But the question it raised isn’t going anywhere: when a crowd can silence a voice, whose standards govern what gets heard? Twenty years later, we’re still working out the answer.