Spotify has acquired Kinzen, a company that will serve one purpose for the streamer: sifting through the million and one generic “friends sitting around and talking” podcasts and reporting any harmful content it finds.
Fortunately for Kinzen’s staff, there isn’t a human team manually combing through every podcast. That would be far too tedious. Instead, Kinzen uses machine learning, coupled with ‘human expertise’, to find hate speech and other harmful content on the platform.
Spot-ifying problems on the platform
This isn’t the first time Spotify and Kinzen have worked together. Back in 2020, during the US elections, Spotify hired Kinzen to help stop the spread of misinformation across the platform. We’re not sure why misinformation only matters during election season. But hey, we don’t run a company valued at $18.1 billion.
As with most machine learning-based services, Kinzen should only get better at identifying and reporting content it believes is harmful. It’s a difficult job, even for a machine, considering how slang and jargon shift from country to country.
Read More: Spotify now has more than 300,000 audiobooks but you have to pay for them
It’s possible that Spotify is trying to get in early and avoid any more PR nightmares, such as Neil Young pulling his music from the platform, which sparked plenty of outrage earlier this year. That scandal stemmed from the spread of COVID-19 misinformation in podcasts, to which Spotify responded by placing a content advisory warning at the beginning of such episodes.
We’ll have to see how well Kinzen performs over the coming months. It’s entirely possible that harmful content will be published and promoted to the general public before Kinzen tracks it down. But at least Spotify is trying.
Source: TechCrunch