Forget Reddit. YouTube is the cesspool of the internet.
Much has been made in this hand-wringing chapter of discovering how bad social media has become because of Reddit, the discussion site that is a haven for conspiracy theories and right-wing hatred. And it is bad. So much so that Reddit – which has prided itself on not “interfering” in its crazy readers’ discourse – has shut down numerous subreddits linked to these hate-mongering and outlandish theories.
The spark for this was the horrific shooting at Marjory Stoneman Douglas high school, where 17 people were killed by yet another mentally unbalanced person who could freely buy an assault rifle and the kind of high-capacity magazines used by soldiers in conflict, and who nursed a grudge against the system.
By an extraordinary fluke of this modern world’s six degrees of separation, I have actually been to that school. It’s where my three nephews in Florida went to high school. The brave coach who shielded kids with his own body played football with my sister’s middle son. They were still friends when the recent attack took place.
Immediately after this unspeakable tragedy, a maelstrom of propaganda emerged about so-called “crisis actors” and other patent nonsense – and was easily found on YouTube.
I’d never heard of this odd phrase, which will go down in the aeons as one of the nastiest forms of propaganda. What kind of sick mentality would label the teenage survivors of a gun massacre, like David Hogg, as actors pretending to be victims in a plot to “take our guns away”?
The nexus for spreading these conspiracy theories, it turns out, was YouTube – a free-for-all of blatant untruths, deranged conspiracy theories and other absurd disinformation. The video service last month said it would post Wikipedia entries next to videos to debunk the most obvious conspiracy theories.
YouTube, once considered the greatest of ways to share videos online, is now the cesspool of the internet.
Last year it was rocked by scandals when major brands pulled their advertising after their adverts were displayed alongside videos depicting hate speech, antisemitism and extremism. YouTube says it tries to control the spread of this hate-mongering and fake news but that these evil communities are quick to adapt and circumvent its attempts at censoring such nonsense.
So Google can build artificial intelligence (AI) that can spot a diabetic eye disease that leads to blindness, or beat the world’s best player at Go, but it can’t solve hate speech on its own video service. Does anyone believe that?
YouTube has 1.5bn viewers around the world, and the video viewing service has algorithms that try to ensure its viewers stay online for longer. It does that through the panel of recommended “up next” videos. Numerous commentators have reported how these suggested videos become increasingly violent, or controversial, or more outlandishly fake news-ish.
“YouTube is the most overlooked story of 2016. Its search and recommender algorithms are misinformation engines,” tweeted Zeynep Tufekci, an associate professor at the University of North Carolina and New York Times contributor.
Guillaume Chaslot, a former Google engineer who worked at YouTube, wrote in the run-up to the US presidential election in November 2016 that “80% of recommended videos were favourable to Trump, whether the initial query was ‘Trump’ or ‘Clinton’. A large proportion of these recommendations were divisive and fake news.”
He also found that, starting from the search phrase “is the earth flat or round?” and “following recommendations five times, we find that more than 90% of recommended videos state that the earth is flat”.
That about sums it up. We live in a disinformation age where Facebook, Google and Twitter may have been usurped by “bad actors”, as Mark Zuckerberg said last week of Cambridge Analytica and Russian internet trolls, but they are not themselves blameless in corroding our privacy. Or, in YouTube’s case, it has been usurped by its own give-us-your-eyeballs-for-longer recommendation algorithms.
This column first appeared in Financial Mail