Editorial: Twitter can’t fix hatred

On Friday, Oct. 2, President Donald Trump announced that he and first lady Melania Trump had tested positive for COVID-19. Politicians from both sides of the aisle, including Joe Biden, expressed their wishes for the swift recoveries of the president and first lady, but many of Twitter’s 330 million monthly users made posts expressing their desire for the president to die. In response, Twitter issued a number of statements from its Twitter Communications (Comms) profile condemning those who would use the platform to wish serious harm on the president or any other person and said that any such posts would be taken down. Many, including Rep. Alexandria Ocasio-Cortez of New York and Rep. Ilhan Omar of Minnesota, reacted to Twitter’s statements with anger, pointing to a clear double standard in the company’s treatment of online death threats against the president as opposed to those against women, people of color and disabled individuals. While some of this criticism rightfully falls on the shoulders of Twitter, the fact of the matter is that users must take responsibility for the abuse of social media platforms as well.

As Twitter clarified in a tweet on Friday, its public policies and guidelines on abusive behavior state that “tweets that wish or hope for death, serious bodily harm or fatal disease against *anyone* are not allowed and will need to be removed. this [sic] does not automatically mean suspension.” This was in response to another tweet that said the platform would suspend those who expressed wishes for Trump’s death. Conveying the anger and frustration of many, Rep. Ocasio-Cortez retweeted the post with the caption, “So… you mean to tell us you could’ve done this the whole time?” Rep. Omar and many others, including film director Ava DuVernay, echoed Ocasio-Cortez’s sentiments in similar tweets.

This criticism of Twitter’s enforcement of its rules on abusive behavior is not new; the platform has long struggled with how to promote “healthy” discourse, especially with regard to the treatment of women and other marginalized groups online. In 2018, Amnesty International published research that found that while women want to use Twitter for the same reasons anyone does — to promote ideas, communicate with friends, meet new people and so on — many “are no longer able to express themselves freely on the platform without fear of violence or abuse.” Amnesty argues that Twitter has a “human rights” responsibility to be transparent in how it enforces policies against abuse and threats, and to collect and publish data relating to such violations.

Twitter has a transparency page with data on the accounts it has taken action against, and, according to Recode, it has dedicated funding to finding a way to measure the “health” of the discourse on the platform. In 2019, Twitter published a blog post reporting that 38% of abusive tweets had been brought to the attention of Twitter’s review teams via algorithms rather than individual flagging by users, which, according to Recode, was up from 0% in 2018. This progress has been important and helpful, but it is insufficient, and Twitter CEO Jack Dorsey acknowledged as much in 2017: “We see voices being silenced every day….We updated our policies and increased the size of our teams. It wasn’t enough.”

For all of the justified criticism and all of Twitter’s efforts, it’s fair to say that it will probably never be enough. The nature of Twitter is that it is instantaneous, the id of society in 280 characters or fewer. Twitter, as it currently exists, cannot review tweets before they are posted, only after they are recognized as abusive by its algorithms or flagged by a user. With 330 million monthly users, the task begins to seem like a fool’s errand.

Additionally, it is arguable whether it is in Twitter’s best interest to muffle, and thus alienate, users who would express their desire for the death or harm of a public figure, as well as users who would see that as protected under free speech. The fact of the matter is that Twitter, for all of the public relations jargon about bringing people together, is a business, and users are the product. In the 2020 documentary “The Social Dilemma,” former Facebook and Google engineer Justin Rosenstein argues that social media platforms are “not free, they’re paid for by advertisers….They pay in exchange for showing their ads to us. We’re the product.” To alienate users would be to lose users, and to lose users would be to lose revenue from advertisers. This is not to say that Twitter is purposefully permissive of hate speech or abusive content, but it is hard to believe that ad revenue is not a factor in its decision-making and enforcement.

A Saturday tweet from Twitter Safety reported that “more than 50%” of abusive content had been “caught through automated systems” on the platform, which represents significant progress from 2019. However, Amnesty is absolutely justified in arguing that Twitter should be transparent with its collection of this sort of information, which is not currently available on Twitter’s Transparency page. With that being said, Twitter is not solely culpable for all of the unchecked hateful and abusive content, especially the significant amount that is directed toward marginalized groups. That is, ultimately, a reflection of our society. Twitter is no more permissive of hate speech than our community or our politicians. For all the capabilities of modern-day technology, speaking out against Twitter, and against perceived inequalities and intolerance, as Rep. Ocasio-Cortez and Rep. Omar have, still seems to be just about the only effective way to combat hatred.

Read more here: https://mainecampus.com/2020/10/editorial-twitter-cant-fix-hatred/