YouTube’s recommendation algorithm has faced intense scrutiny this year over radicalization, pedophilia, and generally “toxic” content — which is especially problematic given that 70 percent of the platform’s viewing time comes from recommendations. That’s why Mozilla launched the #YouTubeRegrets project: to highlight the issue and urge YouTube to change its practices. The stories of the darker side of YouTube’s recommendations are chilling, and they put a spotlight on whether the feature’s benefits justify its harms. “The stories show the algorithm values engagement over all else — it serves up content that keeps people watching, whether or not that content is harmful,” Ashley Boyd, Mozilla’s VP of Advocacy, told TNW.
Gore, violence, and hate
Many of the stories describe the effects of recommendations on more vulnerable groups such as children: users can’t turn recommendations off, so children can be fed problematic content without any means of steering clear of it. But that doesn’t mean adults are unaffected; often the recommendations run completely counter to the viewer’s interests in harmful and upsetting ways:

Credit: Mozilla

Conspiracy theory videos also come up frequently in the stories: they’re recommended so often that students are misinformed, elderly people are duped, and the paranoia of people with mental health problems is fed. Mozilla acknowledges that the stories are anecdotal rather than hard data, but they do highlight the bigger issue at hand. “We believe these stories accurately represent the broad problem with YouTube’s algorithm: recommendations that can aggressively push bizarre or dangerous content,” Boyd explains. “The fact that we can’t study these stories more in-depth — there’s no access to the proper data — reinforces that the algorithm is opaque and beyond scrutiny.”

And therein lies the issue. YouTube has denounced the methodologies employed by critics of the recommendation algorithm, but doesn’t explain why they’re inaccurate. Mozilla points out that YouTube hasn’t even provided data for researchers to verify the company’s own claim that it has reduced recommendations of “borderline content and harmful misinformation” by 50 percent. So there’s no way to know whether YouTube has actually made any progress.
Solution?
Judging by these personal stories and recent news reports, it does seem like something needs to happen — and fast. Earlier this year, Guillaume Chaslot, a former Google employee, told TNW the “best short-term solution is to simply delete the recommendation function.” While that particular solution might not be realistic, Mozilla presented YouTube with three concrete steps the company could take to improve its service in late September:
1. Provide independent researchers with access to meaningful data, including impression data (e.g. the number of times a video is recommended and the number of views resulting from a recommendation), engagement data (e.g. the number of shares), and text data (e.g. creator name, video description, transcription, and other text extracted from the video)
2. Build simulation tools for researchers that allow them to mimic user pathways through the recommendation algorithm
3. Empower, rather than restrict, researchers by changing its existing API rate limit and providing researchers with access to a historical archive of videos
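To make the first request more concrete, here is a minimal, purely illustrative sketch of how such impression, engagement, and text records might be structured for researchers. All type and field names below are assumptions made for the example; they do not correspond to any actual YouTube API or dataset.

```python
from dataclasses import dataclass

# Hypothetical schema for the kinds of data Mozilla asks YouTube to share with
# researchers. These names are illustrative assumptions, not a real API.

@dataclass
class ImpressionData:
    video_id: str
    times_recommended: int            # how often the video was surfaced as a recommendation
    views_from_recommendations: int   # views attributable to those recommendations

@dataclass
class EngagementData:
    video_id: str
    shares: int                       # e.g. number of times the video was shared

@dataclass
class TextData:
    video_id: str
    creator_name: str
    description: str
    transcription: str                # text extracted from the video itself

@dataclass
class VideoRecord:
    impressions: ImpressionData
    engagement: EngagementData
    text: TextData

# Example of the kind of record a researcher could then analyze:
record = VideoRecord(
    impressions=ImpressionData("abc123", times_recommended=5000, views_from_recommendations=1200),
    engagement=EngagementData("abc123", shares=340),
    text=TextData("abc123", "Example Channel", "An example description", "transcribed speech ..."),
)
```

With records like these, researchers could, for instance, compare how often harmful videos are recommended relative to how often they are actively searched for — exactly the kind of analysis Mozilla says is currently impossible.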
Boyd says YouTube’s representatives acknowledged that they have a problem with their recommendation algorithm and said they’re working to fix it. “But, we don’t think this is a problem that can be solved in-house. It’s too serious and too complex. YouTube must empower independent researchers to help solve this problem,” says Boyd.

You can read all the stories on Mozilla’s website. And if you’re looking to get rid of some algorithms in your life, try an extension called Nudge, which removes addictive online features like Facebook’s News Feed and YouTube recommendations.

Update: A YouTube spokesperson responded to Mozilla’s initiative, saying the company cannot verify the stories as it doesn’t have access to the data in question. YouTube also points out that only a tiny fraction of the content on the platform is harmful and that its Community Guidelines clearly prohibit violent, graphic, and hateful content. The company has also taken steps to improve how it connects users to content, including how it suggests videos in search results and through recommendations.