YouTube has mechanisms to identify and remove malicious content from the platform, something it has done since its early days but has enhanced significantly in recent years. Artificial intelligence, human moderators, and collaboration with other companies are some of the steps taken to foster a healthy environment.
What does YouTube consider harmful content?
In September 2019, YouTube established new platform enhancement measures, which it called “The 4 Rs of Responsibility”. Each “R” is a basic principle for maintaining a healthy service, free of harmful content and toxic users, while favoring and encouraging creators who add value to the ecosystem.
The “four Rs” methodology was inspired by the 5S quality method, and so far only the first one, “Remove”, has been detailed. It governs all guidelines on content analysis, identification of inappropriate videos, and their subsequent removal.
YouTube does not tolerate videos that violate its Terms of Service, which cover the following categories:
- Dangerous challenges: such as the Soap Challenge and the like, which pose a risk of injury;
- Dangerous or threatening pranks: pranks that lead victims to believe they are in real danger or that cause emotional distress;
- Instructions to kill or injure: such as bomb-making tutorials or instructions on how to carry out attacks on other people’s lives;
- Drug promotion: depicting consumption of illicit drugs or encouraging their use. Legal drugs, such as alcohol or cigarettes, are tolerated depending on the context;
- Eating disorders: content in which the creator encourages unhealthy weight loss or glorifies disorders such as anorexia or bulimia. Videos that treat these disorders as medical conditions (for example, to encourage viewers to seek treatment) are allowed;
- Promotion of violent events: videos that promote or praise violent events, such as school shootings;
- Instructions for criminal activity: videos that teach how to steal money or goods, bypass security systems (hacking), or trick users into handing over data or installing malware (phishing);
- Promotion of miracle cures: promoting substances, treatments, or sessions that promise cures without scientific proof.
YouTube is also fiercely battling conspiracy theory videos, a crackdown that has already seen big channels, such as Alex Jones’s Info Wars, banned from the platform.
How are inappropriate videos removed from YouTube?
According to YouTube, the process of identifying and removing harmful content has been in place since the platform was founded in 2005, but it has intensified since 2016.
The platform has implemented solutions such as machine learning, flagging tools provided to NGOs, human review teams that monitor publications, measures that prevent minors from live streaming without an adult present (to deter toxic viewers), and restrictions that keep adult-themed content from being suggested to family accounts.
YouTube works closely with creators and reviewers to correctly identify a video as inappropriate before flagging it for removal. The aim is to reduce false positives, in which legitimate posts are erased and their creators punished while toxic content slips through the sieve.
This is where one of the hardest parts comes in: identifying hate speech. The main challenge for algorithms is distinguishing how a word is used according to context, so that a legitimate video containing a particular expression can remain on the service, while another that uses the same expression with a hateful connotation is removed as soon as it is identified.
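To make the context problem concrete, here is a minimal, purely illustrative sketch. It is not YouTube’s moderation system; the phrases, labels, and model choice are invented for the example. It only shows why a classifier trained on surrounding words can treat the same term differently depending on context, where simple keyword matching cannot:

```python
# Toy illustration only: a tiny context-aware text classifier.
# The training phrases, labels, and model are invented for this sketch
# and bear no relation to YouTube's real moderation pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# The same potentially flagged word ("attack") appears in both
# harmful and legitimate contexts.
texts = [
    "tutorial on how to attack and hurt people",        # harmful
    "we should attack this group of people",            # harmful
    "documentary about the attack and its survivors",   # legitimate
    "how to defend your network against an attack",     # legitimate
]
labels = [1, 1, 0, 0]  # 1 = flag for human review, 0 = leave up

# Word and bigram features let the model weigh the words around the
# keyword instead of reacting to the keyword alone.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
model.fit(texts, labels)

# With only four training examples the output is merely illustrative;
# the point is that both inputs contain "attack", yet the surrounding
# words are what drive the decision.
print(model.predict([
    "history lecture about the attack",
    "video showing how to attack someone",
]))
```

In practice the scale is very different: production systems learn from millions of human-reviewed examples, which is exactly why the collaboration with reviewers described above matters.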
According to YouTube, the service has collaborated since 2017 with various users and technology companies to correctly identify harmful content via algorithms, contributing shared databases that increase correct identifications while reducing errors. According to official data, 87% of the 9 million videos removed in the second quarter of 2019 were initially flagged by algorithms.
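For a sense of scale, that share works out to roughly 7.8 million videos caught first by automated systems. A quick back-of-the-envelope check of the figures quoted above:

```python
# Back-of-the-envelope check of the official Q2 2019 figures quoted above.
removed_total = 9_000_000      # videos removed in Q2 2019
auto_flagged_share = 0.87      # share first flagged by algorithms
print(f"{removed_total * auto_flagged_share:,.0f} videos flagged automatically")
# -> 7,830,000 videos flagged automatically
```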