Word Vomit Wednesday - Flagged

Welcome to Word Vomit Wednesday! A series of blog posts about random thoughts or a specific topic from current events that I, and sometimes the rest of the Internet, ruminate obsessively about. All thoughts/opinions/experiences are my own (unless otherwise indicated); I don’t claim anything that I write to represent anyone other than myself.

*At the time of posting this, I was, and still am, partially banned from sharing things on Facebook, with no explanation from Facebook as to why.*

Last week I was flagged on Facebook for “hate speech,” much to my surprise and to the surprise of pretty much everyone I know. And, if I’m being honest here, I actually felt more inconvenienced than upset about it. I probably should have felt more upset, because it’s indicative of an enormous and terrifying cultural trend: censoring critical thought and expression while protecting harassment, threats, and bigotry. I have no idea who reported me, or what about me or anything I posted was found to be problematic (I only posted twice last week, and I rarely engage with Facebook anymore except to post this blog, do work for Female Frequency, and maybe “like” some of my friends’ posts), which is part of the problem with reporting on social media platforms. No one has any idea what the fuck is going on and nothing is actually accomplished.

I talked to my friend (I’ll call her Viv for this piece) who co-founded and worked for an organization whose intention was to provide support for victims of online harassment. What she found while working with Twitter and their global trust and safety departments was pretty abysmal. First of all, these are very small departments that employ very few people, which makes discerning legitimate reports and enforcing consequences for the hundreds of thousands of claims that come in weekly virtually impossible. And because these social media companies don’t want to shell out the money for more manpower on this issue, we’re left at the whims of algorithms that end up doing more harm than good. Having bots that are programmed to find keywords and then trigger a ban based on those words removes any kind of discussion about First Amendment rights and protections.

Algorithms have no concept of context and nuance. You can’t define hate speech and symbols without also discussing context, and you can’t pretend to care about the First Amendment if no actual people are working on cases and having the discussions that determine what speech is protected and what warrants consequences. By setting up these algorithms you may be able to pick up on that Neo-Nazi’s multiple profiles, but you’re probably also lumping people who educate about World War II in with the bigots, as if the two were in any way equivalent. They’re obviously not even close. That’s one way in which these social media platforms are doing a great disservice to their community members.
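To make this concrete, here’s a toy sketch of what keyword-based flagging amounts to. This is my own illustration, not Facebook’s or Twitter’s actual system (which none of us get to see); the keywords, posts, and function name are all made up:

```python
# A toy keyword filter: the kind of blunt instrument I'm describing.
# Everything here is invented for illustration; I have no idea what
# Facebook's real moderation code looks like.

FLAGGED_KEYWORDS = {"nazi", "holocaust", "white power"}

def is_flagged(post: str) -> bool:
    """Flag a post if it contains any keyword, with zero regard for context."""
    text = post.lower()
    return any(keyword in text for keyword in FLAGGED_KEYWORDS)

posts = [
    "White power! Check out my three new profiles.",                     # the actual bigot
    "Today my WWII students asked hard questions about the Holocaust.",  # the educator
]

for post in posts:
    print(is_flagged(post), "->", post)

# Output: True for both. The filter can't tell the history teacher
# from the Neo-Nazi, which is exactly the problem.
```

A human reviewer would sort these two posts in seconds; the bot treats them identically, and that gap is where the harm happens.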

While Viv was working at her organization, she had the rare opportunity to personally and directly bring cases to global trust and safety, which expedited the process for her clients significantly. Even then, there were still many obstacles. No two social media platforms handle reports in a uniform way, and they all require different types of “evidence” from the users filing complaints, which users are either not aware of or have no idea how to obtain. Not only that, but harassment is still just not taken seriously. According to Viv, even after she personally brought forward very serious cases involving death threats, it still took 48 hours for any action to be taken.

The excuse for not doing more, organizationally and even legislatively, is this bullshit idea that the Internet moves too fast to even think about putting real protections in place for people around hate speech, threats of violence, threats to reputation, privacy, and consent. Excuse me, but that’s just fucking lazy. So lazy and unwilling to do the work are these social media companies that they opened this country up to major national security threats (hello, Russia Investigation). And it’s appalling that the people on the Internet who do cause harm and who express themselves with violence are only ever given a slap on the wrist. Why even have a reporting system if no one is going to be held accountable for their actions? Which brings me to my next point. Oftentimes the act of reporting someone (as in my case last week) is itself the harassment.

Trolls employ reporting as a harassment tactic CONSTANTLY. My first personal experience with it was last week, but I have seen it happen over and over again, in particular to BIPOC (Black, Indigenous, and people of color) activists and advocates (mostly women) that I follow on various social media platforms. And it is enraging every time that these people, who are educating, observing, asking, or sharing, are policed at virtually every turn. THAT’S FUCKED UP AND REALLY NOT NECESSARY. But because there is no real discussion, and no real people discerning the difference between hate speech and a truth that may make someone feel some discomfort, reporting is abused and used violently against marginalized people, much in the same way all our other institutions are set up to uphold those same white supremacist and patriarchal standards.

If our society is going to progress in any way, we need to get this mess sorted out. Free speech does not mean freedom from consequences. If someone is being abused, they should feel like they’re going to be heard when they reach out. When someone has been flagged, they need to be given specific reasons why something they did or said was deemed inappropriate and be held accountable appropriately, not just handed a link to the site’s guidelines. And if someone uses the reporting system in a violent way, they should not only be held appropriately accountable for that but also have it communicated to them why what they reported was not considered hate speech. Fostering discussion and education through healthy communication practices is something we definitely need in these spaces. If these platforms continue to rely on these algorithms instead of having qualified humans facilitate these decisions, we are never going to have the resources or professional support that we deserve.

Katie Louchheim suggests that if an opinion makes you uncomfortable, you go see a therapist before projecting your bullshit inappropriately onto others.