Promises to take active approach to preventing on-site activity.
Another day, another headline about a terrorist attack… but in this digital world we live in, there’s a key player involved that some critics feel really isn’t taking its role seriously. Social media platforms have been under intense scrutiny for the accidental part they’re playing in enabling large-scale attacks to continue, events that have caused catastrophic loss of life and injury. But Facebook at least claims it will now take a more active role in thwarting terrorist activity on its site.
In a classic “you can’t have it both ways” situation, sites like Facebook and Twitter have long been blamed for not removing hate speech. Breastfeeding photos get your account blocked, but calling for the torture and deaths of entire demographics of people is protected. Facebook has said it’s not having that anymore; the company announced it would be using artificial intelligence to seek out and remove terror-related speech and content, prompting some to decry what they see as an attack on freedom of speech.
Fancy a job?
Speaking of jobs no one could possibly want, Facebook is one of several platforms that already pays human content moderators to vet users’ content and take down offensive images and videos. Entire outsourced teams of moderators sit at computers all day and sift through images to weed out things like animal cruelty, criminal activity, hardcore porn, and more. It’s no great leap to add hate speech and terrorism-related content to that list.
Whether or not this is a genuine move to make the world a kinder place, Facebook finally has an incentive to act. Several world leaders have called for some form of “regulation” of the internet to stop the communications that allow large-scale attacks to take place. It’s either police yourselves or let the government do it for you… and no one wants that.