Tuesday, July 23, 2013

Ransomware Trojan exploits the child pornography risk; can ordinary retail computer repairs be legally risky for users?

On Nov. 11, 2009, I discussed here the grim possibility that hackers could plant child pornography on an unsuspecting user’s computer.  The Associated Press was conducting a study on the issue in 2009, but not much news came of it.

On July 20, I posted on the “protecting minors” blog (see the left side of my “doaskdotell.com” site) a story about some states requiring computer repair technicians to report c.p. that they encounter to authorities.  This could put technicians in the position of making legal judgments about content that they are not prepared to make.  As with general concerns about surveillance, a police investigation could involve searching all of a user’s or business’s computers, or could migrate into other areas of possibly illegal content (such as copyright infringement), so there could be a broad issue here.

This problem should not be confused with a new “ransomware” Trojan that tells the user that his or her computer has c.p. on it and will not be unlocked until the user pays a “ransom”.  The only way to get the computer back up is to remove the virus (typical removal guidance, from Anvisoft, here).  If the user does not know how, she must take the computer to a technician.  But understand the difference: the computer does not actually have illegal content on it; the only offending item is the ransomware itself.  A technician would not have to report the user to police for this problem.
  
Some time in the last couple of years, Geek Squad (part of Best Buy) began including a provision that the consumer signs when turning in any hardware for repair, to the effect that the consumer is “on notice that a product containing child pornography will be turned over to the authorities”.  Hopefully authorities could quickly clear a problem that is not actually illegal, but otherwise the hardware’s owner could probably expect a knock on the door at home or work from the police.  (That is probably more likely than the theatrical scenario of an arrest when picking up the item.)
  
What seems objectionable is the obvious “Catch-22”: a user with an infected computer who has never seen any illegal content appear cannot know for sure that it is not there.  In most states (as with federal law), to break the law the user must “knowingly” acquire and view the illegal content, but in practice the defense could be very difficult and expensive, and the user’s arrest might be reported widely in the media, destroying his reputation even if he is later cleared because the material came from a virus infection.  (In the more distant past, some states, like Arizona, seemed to treat owner responsibility as a matter of absolute liability.)
    
How commonly does this issue occur?  There has not been a lot written about it since 2009.  But, regarding technician reporting, there is a story from Houston (link) and another from Hartford (link), both in 2012.  It's not clear what would have led technicians even to see the illegal images; normal scanning with repair tools would not show them, and technicians are not supposed to go looking for them deliberately.
  
Apparently, a user who runs P2P file-sharing software may be at greater “risk” than one who does not (as with a case in Wyoming in 2007, discussed in the Nov. 2009 posting).  But even without it, a deliberate infection by a hacker would be possible in much the same way that a machine can be turned into a zombie for a denial-of-service attack.  Usually, though, there is very little motive to do so; ironically, that point can make a defense more difficult.
  
One possible route to illegal possession would be visiting supposed "adult sites", where the user believes that all performers are 18 or older (the federal standard, regardless of local age of consent), but some in fact are not, and the images become cached on the machine.  Users normally depend on federal laws requiring "adult video" producers to verify the age of actors to protect them legally; that protection might not always work.
   
It’s also possible for people to be ensnared if their machines are distributing illegal material without their knowledge (often through P2P), since the National Center for Missing and Exploited Children in Alexandria works with the FBI to monitor such traffic.  More automated tools to detect known illegal images by digital fingerprints or watermarks during distribution, or even in "the cloud" (as with automated disk backups like Carbonite), may become common in the future.
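
To illustrate in very rough terms how that kind of automated scanning might work, here is a minimal sketch in Python of hash-set matching, under the assumption that a clearinghouse supplies fingerprints (cryptographic hashes) of known illegal images.  Everything here is hypothetical: the hash list, the scan directory, and the helper names are made up for illustration, and real systems are considerably more sophisticated.

import hashlib
from pathlib import Path

# Hypothetical database of SHA-256 fingerprints of known contraband files.
# In practice such lists are maintained by clearinghouses, not by end users.
KNOWN_BAD_HASHES = {
    "0" * 64,  # placeholder value, not a real fingerprint
}

def sha256_of_file(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 fingerprint of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def scan_directory(root: Path) -> list:
    """Return paths of files whose fingerprints appear in the known-bad set."""
    matches = []
    for path in root.rglob("*"):
        if path.is_file() and sha256_of_file(path) in KNOWN_BAD_HASHES:
            matches.append(path)
    return matches

if __name__ == "__main__":
    for hit in scan_directory(Path("/tmp/backup_staging")):  # hypothetical scan root
        print("Matched known fingerprint:", hit)

Note that an exact-hash match says nothing about how a file arrived on the machine, which is precisely the dilemma for the innocent user described above; exact matching also misses images that have been resized or re-encoded, which is why large providers rely on more robust perceptual-hashing techniques.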
  
Society (the law enforcement and prosecutorial community, and the media as a whole) is very intolerant of this problem because minors can be the ultimate victims.  Unfortunately, innocent people can be ruined in the crossfire, although this seems to be rather rare.  It seems there needs to be more attention to consumer responsibilities, that is, what users should be expected to do to keep their computers and noses clean.  Formal training and adult education come into play, as would the idea of certification or an “Internet driver’s license”.  A few states, like Florida, have become more diligent in spelling out user responsibilities.  For example, users who inadvertently find c.p. on the web are expected to call police immediately to clear themselves (link).

