YouTube Making Changes To Its Recommendation Algorithm

After years of being accused of exposing people to misleading information, YouTube – a division of Alphabet Inc.’s Google – has announced that it is retooling its recommendation algorithm to prevent the promotion of false information. In a blog post, the company said that it will “begin reducing recommendations of borderline content and content that could misinform users in harmful ways”. As examples, the company cited “videos promoting a phony miracle cure for a serious illness, claiming the earth is flat, or making blatantly false claims about historic events like 9/11.”

The change is the result of a six-month technical effort to improve the company’s recommendation algorithms. The recommendation feature suggests videos to users based on what they have previously watched, weighing views, average watch time, likes, dislikes, and other metrics when deciding which content to surface. Unfortunately, this engagement-driven approach has allowed conspiracy theories and politically motivated attack ads to be highlighted alongside otherwise unassuming content.
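To make the engagement-driven ranking concrete, here is a minimal Python sketch of how such a score might be computed. The metric names, weights, and example videos are all invented for illustration; YouTube’s actual system is a large, proprietary machine-learning pipeline, not a hand-weighted formula.

```python
# Hypothetical sketch of an engagement-weighted ranking score.
# The metrics and weights below are invented for illustration;
# YouTube's production ranking is a proprietary ML system.
from dataclasses import dataclass


@dataclass
class VideoStats:
    views: int
    avg_watch_time_sec: float
    likes: int
    dislikes: int


def engagement_score(v: VideoStats) -> float:
    """Combine raw engagement metrics into a single ranking score."""
    like_ratio = v.likes / max(v.likes + v.dislikes, 1)
    return (
        0.5 * v.avg_watch_time_sec   # watch time is a strong interest signal
        + 0.3 * like_ratio * 100     # reward well-liked videos
        + 0.2 * v.views ** 0.5       # dampen raw view counts
    )


videos = {
    "how-to-bake-bread": VideoStats(120_000, 310.0, 4_800, 150),
    "flat-earth-proof": VideoStats(450_000, 540.0, 9_200, 3_100),
}
ranked = sorted(videos, key=lambda k: engagement_score(videos[k]), reverse=True)
print(ranked)  # ['flat-earth-proof', 'how-to-bake-bread']
```

Note how the sensational video, simply by holding viewers longer, outranks the mundane one in this toy example; that is exactly the dynamic the new policy targets.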

Many of the videos that the new policy will affect don’t necessarily violate YouTube’s Community Guidelines. Instead, they are considered “borderline”: within the letter of the rules, yet untruthful in potentially harmful ways. YouTube says that limiting the reach of these videos will provide a better experience for its users.
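As a rough illustration of what “limiting the reach” of a video might look like, the sketch below demotes a video’s ranking score when a classifier flags it as borderline, without removing it from the platform. The classifier scores, threshold, and penalty factor are all hypothetical; YouTube has not published implementation details.

```python
# Hypothetical sketch of demoting, rather than removing, borderline content.
# The classifier scores, threshold, and penalty are invented for illustration.

# Stand-in for a trained classifier: probability that a video is borderline
# misinformation (0.0 = clearly fine, 1.0 = clearly borderline).
BORDERLINE_SCORES = {
    "how-to-bake-bread": 0.02,
    "flat-earth-proof": 0.93,
}


def adjusted_score(video_id: str, raw_score: float,
                   threshold: float = 0.8, penalty: float = 0.1) -> float:
    """Demote videos flagged as borderline; leave everything else untouched."""
    if BORDERLINE_SCORES.get(video_id, 0.0) >= threshold:
        # The video stays available; it just keeps only a small fraction
        # of its ranking score, so it is rarely recommended.
        return raw_score * penalty
    return raw_score


print(adjusted_score("flat-earth-proof", 250.0))   # 25.0  -> demoted
print(adjusted_score("how-to-bake-bread", 180.0))  # 180.0 -> unchanged
```

A soft demotion like this is what distinguishes the policy from outright removal: the content stays on the site, it just stops being amplified.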

YouTube said the change would apply only to English-language videos and affect less than 1 percent of the site’s content. The company’s post continued, “We think this change strikes a balance between maintaining a platform for free speech and living up to our responsibility to users.”

These videos will not be removed from the platform. The company clarified, “To be clear, this will only affect recommendations of what videos to watch, not whether a video is available on YouTube.” So if a user subscribes to channels that post such content, those videos may still appear in their search results or recommendations.