YouTube defines 'hate speech' and clarifies which videos can earn money through ads
It has been a few weeks since advertisers started to pull out of YouTube over concerns about where their ads were appearing. Since then, the company has been scrambling to win back their trust, saying it has "held thousands of productive conversations with advertisers, and implemented additional controls to restore advertiser confidence."
Now YouTube has set out what it classifies as hate speech, saying that any content falling into this category will not be eligible for monetization. The move is designed to calm advertisers' fears, but there is concern that the new rules are too strict and could hit the incomes of large numbers of YouTubers.
In a blog post, YouTube's vice president of product management, Ariel Bardin, tells content creators that the company needs their help. The post shares details of "the types of content that brands have told us they don't want to advertise against" in order to "help you to make more informed content decisions." The new guidelines see YouTube taking a "tougher stance" on a range of content.
The blog post details three broad categories:
Hateful content: Content that promotes discrimination or disparages or humiliates an individual or group of people on the basis of the individual’s or group's race, ethnicity, or ethnic origin, nationality, religion, disability, age, veteran status, sexual orientation, gender identity, or other characteristic associated with systematic discrimination or marginalization.
Inappropriate use of family entertainment characters: Content that depicts family entertainment characters engaged in violent, sexual, vile, or otherwise inappropriate behavior, even if done for comedic or satirical purposes.
Incendiary and demeaning content: Content that is gratuitously incendiary, inflammatory, or demeaning. For example, video content that uses gratuitously disrespectful language that shames or insults an individual or group.
YouTube has already proved that the automated algorithms it uses to flag videos are far from perfect, and some users of the site have voiced concerns that the company is not allowing content-makers enough flexibility. While many of the criteria that could make a video ineligible for monetization are not really open to interpretation, that is not true of all of them -- and this is likely to be a talking point for some time to come. One person's interpretation of "gratuitously disrespectful language", for instance, will be very different from another's. The guidelines also raise questions, such as why "veteran status" is included in the list of possible hate speech targets.
Image credit: Alexey Boldin / Shutterstock