Monday, December 07, 2015

Social media companies have to assess how to identify violent or terror-promoting content


Social networking sites, especially Facebook, Twitter and Instagram, are coming under increased pressure to screen material for terror-inciting content, according to a Wall Street Journal article Monday by Deepa Seetharaman, Alistair Barr and Yoree Koh.
Until recently, companies allowed posts depicting past terror acts on the theory that they were important news.  Now, the companies must consider whether such posts were made merely as incitement.  Computerized algorithms have a hard time making that distinction.
 
It's easier to identify child pornography, since the National Center for Missing & Exploited Children (NCMEC) maintains a database of known images with digital fingerprints.  No comparable system exists for terrorism content.  Screening of this kind could eventually extend to private cloud storage.
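The fingerprint-matching idea can be sketched roughly as follows. This is only an illustration, not how any real service works: production systems (e.g. Microsoft's PhotoDNA, used with the NCMEC database) rely on perceptual hashes that survive resizing and re-encoding, whereas this sketch uses a plain cryptographic hash that matches only byte-identical files. The hash set and function names here are hypothetical.

```python
import hashlib

# Hypothetical shared database of fingerprints of known prohibited images.
# (The value below is just the SHA-256 of an empty file, for illustration.)
KNOWN_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def fingerprint(data: bytes) -> str:
    """Compute a digital fingerprint (here, SHA-256) of raw file bytes."""
    return hashlib.sha256(data).hexdigest()

def is_known(data: bytes) -> bool:
    """Check an uploaded file's fingerprint against the shared database."""
    return fingerprint(data) in KNOWN_HASHES

print(is_known(b""))       # matches the sample fingerprint above
print(is_known(b"photo"))  # an unlisted file does not match
```

The point of the article is precisely that no such shared database exists for terror content, and that unlike known child-pornography images, newly posted propaganda has no fingerprint to match against.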
 
The service most under pressure seems to be Twitter.  (Facebook quickly removed a post by Tashfeen Malik after the attacks.)  Twitter also has to deal with how users interpret the dynamics of its service: some people now consider certain replies from "unrequited" followers (those they don't follow back) to be "stalking," or at least rude, while others don't.  I discussed this on my main blog Friday (Dec. 4).
