‘Ghostbusters’ Twitter harassment raises free speech questions

Ghostbusters star Leslie Jones

Comedian Leslie Jones had had enough of Twitter, so she quit. After coming under cyber-attack from so-called internet “trolls” who sent racist, misogynistic, and threatening tweets, the Ghostbusters star decided it was time to remove herself from a social media platform that tends to thrive on the anonymity of the internet. Before closing her account, though, Jones called out Twitter for not doing enough to protect members from harassment by other accounts. Freedom of speech, she said, has its limits.

Does she have a point? Social media has given a voice to millions of people and raised awareness on a global scale. But does all of that vocal power come with a price? Should social media content – on Twitter, Instagram, Facebook, Snapchat – be policed?

It’s a slippery slope, to be sure. On one hand, social media has given the public a voice that never existed in the past. Think about it: the average person opens a Facebook account the same way a big-name celebrity does. The accounts may be managed differently, but the process is the same. You don’t even need your own computer, tablet, or smartphone to use social media; any shared or public device will do.

If speech starts being regulated on social accounts, where are the boundaries drawn? When hate speech is used? When emojis are used the wrong way? When someone is offended by someone else?

None of this is meant to diminish what happened to Leslie Jones, who was attacked unfairly; no one should have to endure that type of harassment, online or in person. Free speech on the internet is new territory and difficult to navigate. All social media platforms have ways to report abuse or fraud, but few have proactive approaches to preventing that type of behavior in the first place. How can they? There is really no way to predict what people will say at any given moment. Or is there?

It seems that technology could be a solution to the free speech-harassment problem. Twitter and similar platforms could analyze previous reports of abuse and identify trigger words that prompt an immediate review. It may even be possible to hold content containing those words in a queue until it is moderated. Those who didn’t want to wait to post could edit the content so it passed a standards test. Some abusers would still slip past the rules, but the rapid-fire insults, particularly racist or misogynistic ones, would slow down. The technology already exists to find topics by hashtags and to target consumers based on interests, so it doesn’t seem like a stretch to use some of that same savvy to make the internet a little less hateful.
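To make that idea concrete, here is a minimal sketch in Python of how a trigger-word filter with a moderation queue might work. The TRIGGER_WORDS list, the submit_post function, and the queue are all illustrative assumptions for this article, not a description of Twitter’s actual systems.

```python
# Minimal sketch of a trigger-word moderation queue.
# The word list and the queue behavior are illustrative assumptions,
# not any platform's real implementation.
import re
from collections import deque

# Hypothetical trigger words, imagined as learned from prior abuse reports.
TRIGGER_WORDS = {"slur1", "slur2", "threat"}  # placeholder terms

# Posts held here until a human moderator reviews them.
moderation_queue = deque()


def tokenize(text: str) -> set[str]:
    """Lowercase the text and split it into word tokens."""
    return set(re.findall(r"[a-z']+", text.lower()))


def submit_post(author: str, text: str) -> str:
    """Publish a post immediately, or hold it if it contains a trigger word."""
    if tokenize(text) & TRIGGER_WORDS:
        moderation_queue.append((author, text))
        return "held for review"  # the author could edit and resubmit instead
    return "published"


if __name__ == "__main__":
    print(submit_post("user1", "Loved the new Ghostbusters!"))   # published
    print(submit_post("user2", "this is a threat against you"))  # held for review
```

A real system would of course need far more than a static word list – context, misspellings, and coded language all defeat simple keyword matching – but even a filter this crude illustrates how rapid-fire abuse could be slowed down before it reaches its target.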

Unless, of course, the issue isn’t technology but the principle of censorship. Who decides what should be allowed in social media content? That’s the real question. What happened to Leslie Jones is abhorrent, but how do we keep it from happening again?
