In the wake of the attack on mosques in Christchurch, New Zealand, Second Century Initiative Professor Tony Lemieux spoke with Time about the difficulty social media companies face in taking down extremist and violent content.
Time reported that the attacker live-streamed the assault, and that Facebook, YouTube and other companies struggled to block and delete the footage. Copies nonetheless continued to spread to other sites such as Twitter, Instagram and Reddit, and back to Facebook itself.
Time reporter Billy Perrigo spoke with Lemieux, who is also the Director of Georgia State’s Global Studies Institute, and wrote:
“It becomes essentially like a game of whack-a-mole,” says Tony Lemieux, professor of global studies and communication at Georgia State University.
Facebook, YouTube and other social media companies have two main ways of checking content uploaded to their platforms. First, there’s content recognition technology, which uses artificial intelligence to compare newly uploaded footage to known illicit material. “Once you know something is prohibited content, that’s where the technology kicks in,” says Lemieux. Social media companies augment their AI technology with thousands of human moderators who manually check videos and other content. Still, social media companies often fail to recognize violent content before it spreads virally, letting users take advantage of the unprecedented and instantaneous reach offered by the very same platforms trying to police them.
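For readers curious about the mechanics, the matching step described above can be illustrated with a minimal sketch. The Python example below, using only the standard library and entirely hypothetical names, fingerprints uploads and checks them against a blocklist of known prohibited content. It is not any platform’s actual system; real deployments rely on perceptual fingerprints designed to survive re-encoding, cropping and other edits.

```python
# A minimal illustrative sketch of hash-based content matching, the simplest
# form of the "content recognition" approach described above. Real platforms
# use perceptual fingerprints robust to re-encoding; this exact-hash version
# catches only byte-identical copies. All names here are hypothetical.
import hashlib

# Hypothetical blocklist of fingerprints for known prohibited content.
known_prohibited_hashes: set[str] = set()

def fingerprint(data: bytes) -> str:
    """Compute a fingerprint for an uploaded file (exact SHA-256 here)."""
    return hashlib.sha256(data).hexdigest()

def register_prohibited(data: bytes) -> None:
    """Once content is identified as prohibited, record its fingerprint."""
    known_prohibited_hashes.add(fingerprint(data))

def check_upload(data: bytes) -> bool:
    """Return True if the upload matches known prohibited content."""
    return fingerprint(data) in known_prohibited_hashes

# Usage: flag a re-upload of previously identified footage.
original = b"...video bytes..."
register_prohibited(original)
assert check_upload(original)                 # an exact copy is caught
assert not check_upload(original + b"\x00")   # any alteration slips past
```

The final line hints at the “whack-a-mole” problem Lemieux describes: an exact-hash blocklist misses even trivially altered copies, which is why platforms invest in fuzzier perceptual matching and human review.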
To read the full article, visit http://time.com/5552367/new-zealand-shooting-video-facebook-youtube-twitter/.
Lemieux also spoke with The Today Show, NBC Nightly News, CNN International and WABE.
Lemieux arrived at Georgia State under the Second Century Initiative, the predecessor to the Next Generation Program.
– Jeremy Craig, Manager of Marketing & PR, Office of the Provost