Coronavirus is a vibe. At the very least, the mysterious virus originating from Wuhan, China, can help you get noticed on Instagram or jack up the price of cryptocurrency. Which is to say, coronavirus isn’t just a global epidemic; it’s also an opportunity for influencers, brands and clout chasers to garner tens of thousands of new clicks and follows.
As a result, #coronavirus has become filled with as much bullshit (a mix of misinformation, conspiracy theories and potentially dangerous pranks) as it has useful news about the ever-growing public health crisis.
Facebook, Twitter and TikTok have responded by promising that they have teams dedicated to fact-checking these erroneous claims and rooting out cynical ploys, and they’ve already taken down thousands of posts they feel violate their terms and conditions and put public safety at risk.
But accounts from small brands and influencers are less easy to purge; they aren’t technically breaking any rules, but they are using coronavirus and its related hashtags to amplify themselves. Basically, that means if you see #coronavirus trending, some of the first posts you’ll read likely won’t be about the number of people affected or quarantine plans; instead, you’ll learn how the virus has moved the price of Bitcoin, followed by updates on popular Turkish dramas and how you can take part in an iPhone 11 giveaway.
Some of this, of course, is the work of bots. But Joan Donovan, director of the Technology and Social Change Research Project at Harvard’s Shorenstein Center, told NBC News that a surprising number of the posts using #coronavirus come from very real people engaged in “keyword squatting,” in which influencers and the extremely online “use ecological crises and other significant events to raise money for themselves” as well as to gain more followers.
At first glance, fixing keyword squatting should be easy. After all, if tech platforms are indeed putting more resources into combating misinformation and suspending spam accounts, clearing out keyword searches shouldn’t be that arduous. But in practice, it’s much trickier, says Alex Micu, a London-based digital director who works in advertising and marketing. “Hashtags are quite a democratic affair. So there’s not much policing going on. You can literally use any hashtag you want.”
What he’s saying is that there’s no hierarchy by which posts are ranked according to their value, meaning a small account that’s legitimately talking about the public health implications of coronavirus might end up ranked lower than an account with a slightly larger following that’s merely promoting a Wish.com shop. “Anyone who says that they know exactly how the algorithm works is a liar,” Micu tells me. “All we know is that it isn’t chronological, and that posts are shown to your closest friends or most engaged followers first, and based on that performance, they spread out accordingly.”
The whole process, then, is unpredictable, and far harder to track when a hashtag is giant or when the account disseminating the (dis)information is large. “When you have a million followers, it’s hard to say what works and what doesn’t,” Micu explains, adding that even if there’s a “low level of engagement percentage wise,” the post will likely be considered more valuable if the number of people who saw it is high.
In short, it’s difficult to stop accounts from taking advantage of newsy hashtags to serve more nefarious ends. BuzzFeed, for example, has reported that keyword squatting has been used to direct trolls toward targeted harassment campaigns against activists and journalists. So, again, while Facebook and Twitter have publicly said that they’re putting greater security measures in place to prevent keyword squatting, it’s still led to activists in India and Burma being chased off social media and, in some cases, forced into hiding.
And so, online at least, coronavirus is quickly becoming a plague of a totally different kind.