We promise this article is not actually filled with a ton of annoying ads for clickbait. Bear with us.
How good are you at spotting fake news? Probably pretty good, you might say. Awareness surrounding the issue has taught us not to trust websites with questionable URLs and titles, but in order to spot fake news we first have to understand what it is. When experts point out that the internet has a fake news problem, most people scoff at the thought of being gullible enough to fall for it.
The problem, though, is not the obviously fake information circulating the web. Elementary education has stressed the importance of finding credible information since the birth of the internet (remember when your English teacher's biggest warning was not to use Wikipedia?), and increasing coverage of fake news is reminding everyone to reconsider what they read on the web. If everyone is aware that fake news exists, how is it still an issue?
A major player in the fake news issue is not the presence of false information itself. It’s the way in which it’s presented. Companies like Outbrain, which generate millions of dollars in revenue simply by connecting people to relevant stories on media sites, are at the forefront of this. The service works by allowing media sites to pay to have their content advertised on other websites. If you’ve ever noticed blocks of articles at the bottom of a page, that’s probably the work of Outbrain. Even Snapchat has become a ruthless culprit of marketing useless information.
But these links aren’t necessarily reputable, and they often read like clickbait, promising jaw-dropping content with less-than-average information. There’s nothing necessarily malicious about the presence of these articles. That is, until they get political. With scandals like Cambridge Analytica and its impact on elections around the world coming to light, speculation is stirring about how lawmakers will combat clickbait and misleading information in the future.
An article by Wired, released last month, highlighted the issue of clickbait in relation to fake news and politics. Outbrain sees as many as 600,000 new content links each day. Only artificial intelligence can handle that level of fact-checking, and even then things aren’t always right, because even true information can be misleading. When it comes to elections, clickbait works to fuel a political agenda that will sway voters one way or another. Even reputable sources have employed Outbrain in the past to increase web traffic, so there’s no reliable way to say which sources are responsible for manipulative or misleading content.
Clickbait wouldn’t be an issue at all if everyone read the entirety of every single article they saw online. The problem is, we just don’t have the time to do that. Most people are guilty of sharing an article without having actually opened or read it (just the other day, I myself shared an article with a friend before realizing it was two years old). Most people’s reading levels don’t stretch beyond that of a 7th grader, and that doesn’t account for the fact that our attention spans are noticeably shorter thanks to social media and the age of telling a story in 140 characters or fewer.
Weaponizing Social Media
When these clickbait stories get shared over and over, their headlines seem to make more of an impact than their content does. Clickbait articles have, in the past, suggested things like celebrity deaths, heartwarming stories of human compassion, and other “you’ll have to see it to believe it” headlines that are simply not true. This type of content strategy is known as like-farming. It’s generally used by marketers and strategists as a way of generating traffic in order to sell something. In Outbrain’s case, this is the content itself.
Issues with sites like Outbrain are nothing in comparison to how social media is being weaponized politically. However, it is the most blatant example of how information is skewed in order to attract attention. When the Cambridge Analytica story broke in 2018, it proved that the depth of misinformation went far beyond harvesting clicks and likes to generate cash flow—it became clear that it was being used as a political weapon. And still is—to this day.
Thousands of social media profiles are regularly created to spread misinformation online. The share button, while useful, has been exploited as a political weapon in recent years by strategic communications organizations seeking to sway the opinions of the masses. While the Mueller probe may not have turned up solid evidence of coordination, other reports show that Russian organizations interfered in the 2016 election and used social media to deepen the divide between right-wing conservatives and the left.
Meanwhile, Facebook announced a plan this week to stop interference in the Australian elections. In the Philippines, Facebook recently removed hundreds of accounts tied to political propaganda. As one door closes, ten more open.
The battle against misinformation is only going to get more complicated as technology advances. Governments are working to regulate social media enough to slow the spread of propaganda, but the tactics behind that propaganda only get more sophisticated.
The next time you read a politically charged tweet, no matter how innocent it may seem, take a moment to consider whether there is a real person behind that profile. Services like Outbrain will exist as long as people keep clicking. The only thing you can do is learn to question what lies behind the veil of the internet.