It's been over a month since a gunman opened fire at two mosques in Christchurch, New Zealand, killing 50 people and livestreaming the massacre on Facebook. It appears the social network, as well as Facebook-owned Instagram, is still showing videos of the attack, according to a Friday report by Motherboard.
Some of the videos on the platforms are shorter clips of the original 17-minute footage, the report says. One Facebook video, which shows the gunman killing people from a first-person perspective, carries a warning that it may contain "violent or graphic content" but remains on the platform, according to Motherboard. Users can reportedly click through the warning to watch the video.
Facebook and Instagram didn't immediately respond to a request for comment.
Facebook has struggled to keep footage of the mass shooting off its site. The social network was criticized after its artificial intelligence systems couldn't automatically detect video of the terrorist attack. Facebook relies heavily on user reports to flag inappropriate content, and says it didn't receive a report during the live broadcast. The first user report came 12 minutes after the livestream ended, Facebook's Guy Rosen wrote in a blog post.