WELL, ACTUALLY

Censoring WDBJ Footage Isn’t as Simple as Rebuking Journalists for Doing So

Within hours of the shooting deaths of two WDBJ employees on live television, hot takes of every color, shape, and size popped up in the mainstream media and elsewhere.

This isn’t all that surprising, which is why Rolling Stone published a tongue-in-cheek blog post titled, “How to Respond to the Latest Mass Shooting” — essentially providing four different types of hot takes for writers to mix and match. The first two — “Don’t politicize this!” and “Blame black people” — were already out in force thanks to Virginia Gov. Terry McAuliffe and Breitbart, respectively.

The third and fourth options — the equally inevitable “Should we post this footage?” and “Why aren’t we posting this footage?” takes — didn’t begin in earnest until after the suspected shooter posted first-person videos of the attack on Facebook and Twitter. Many opinions about whether to air portions of the original live broadcast had already been published, but the alleged killer’s personal, publicly available footage on social media added dramatically to the pile.

Just as Rolling Stone predicted, nearly all of the takes that followed options #3 and #4 put undue emphasis on the words “should” and “we” in the context of journalism. That’s especially true of Harry Siegel‘s New York Daily News column, which was published around the same time the paper announced its unsurprising cover for Thursday’s edition. Between the title and the article itself, the piece uses the annoyingly royal “we” 10 times.

So what’s wrong with this word in particular? It represents the most glaring error in Siegel’s main argument, and in the similar anti-censorship arguments made by other news organizations. As the title suggests, Siegel asserts that “we can’t censor snuff films” and similar content in the media, but who is this “we” he refers to?

Journalists, especially the social media-savvy reporters who saw the suspect’s posts and demanded they be taken down:

I did see a half-dozen journalists and many others — including some who had the footage auto-play in their feed without warning — call on Twitter to take down that snuff film along with the account the killer posted it from, which it did.

This observation leads Siegel to believe that it was primarily journalists who were responsible for Twitter’s and Facebook’s removal of the first-person videos from their platforms. It’s an idea he’s not entirely comfortable with:

The idea that Twitter or any ad-supported social media site — not the virtual public commons they like to sell themselves as but rather, like shopping malls, spaces open for business — should decide what violence we can see is nuts.

It’s here in the seventh paragraph that Siegel first adopts the royal “we.” After that, he drops it another eight times in a string of points that chastise those journalists who wanted the footage removed, and the social media “algorithms” that obliged.

The result is the assumption that “we” journalists, reporters, writers, and bloggers were chiefly responsible for condemning the videos, along with anyone or anything that retweeted or shared them, and that assumption bleeds into the finer points of Siegel’s argument. That’s a shame, because a few paragraphs later he rightly notes that “there’s a simple, crucial separation between what you choose to watch and what you demand no one can watch.”

Censorship in such instances can be, and often is, greatly abused, which is why Siegel feels the need to ridicule his colleagues. Yet neither they nor Twitter and Facebook were the only parties with dueling interests in whether the posted videos stayed or went.

CNN Money later confirmed that both Twitter and Facebook had suspended the accounts and removed the videos. Twitter did so “within eight minutes,” and Facebook followed suit “as soon as the videos were flagged.” The same article then references both companies’ terms of service:

Both social networks rely on users to flag inappropriate posts, which they then review and determine if an account should be suspended, or if a post should be removed.

In other words, the platforms don’t rely solely on users who happen to be journalists to determine whether posted content is offensive. Anyone who uses Twitter or Facebook can flag another user’s post, and if enough valid complaints are tallied and the content in question breaks either company’s terms of service, the post goes.

“Neither platform pre-screens posts before they’re published,” notes CNN Money, before citing Twitter’s own media policy: “We do not mediate content.”

So Siegel and his fellow self-appointed arbiters of journalistic integrity can argue all they want about whether or not “we” should censor heinously violent content like the shooting deaths of Alison Parker and Adam Ward, but to do so in a vacuum accomplishes nothing. If anything, it merely dilutes the discussion, rendering it as meaningless and wasteful as all the typical hot takes that such events routinely generate.

[Image via screengrab]

— —
>> Follow Andrew Husband (@AndrewHusband) on Twitter

