Google is implementing new measures to remove violent, terrorist, and extremist content posted on YouTube, the company confirmed on Sunday in a blog post. According to the post, Google will put significantly more effort into identifying and removing videos that contain extremist content. Videos that do not clearly violate company policies but contain supremacist or inflammatory religious content will be placed behind a warning and will not be monetized or recommended for user endorsement.
Beyond this, Google says it will devote more engineering resources and expand its use of technology to help identify such videos, including training new content classifiers to speed up the identification and removal of extremist content. The company will also collaborate with counter-extremist groups to identify content being used to radicalize and recruit extremists. In addition, Google will try to reach potential Islamic State recruits through targeted online advertising and redirect them toward anti-terrorist videos in a bid to change their minds about joining.
Recently, shootings and bombings by Islamic militants in countries including Britain, France, and Germany have killed and wounded many civilians. These countries have pressed social media platforms such as Facebook, Google, and Twitter to step up their efforts to remove militant content and hate speech. Facebook has likewise highlighted its efforts to remove terrorist content; according to the company, it has begun using artificial intelligence techniques such as language understanding and image matching to identify and remove such content quickly. Google's General Counsel, Kent Walker, said, "While we and others have worked for years to identify and remove content that violates our policies, the uncomfortable truth is that we, as an industry, must acknowledge that more needs to be done now."