Tackling online child sex abuse images

 

Next Tuesday Maria Miller MP, the Secretary of State for Culture, Media and Sport, will convene a meeting with senior executives drawn from some of the many large and diverse players who make up what we loosely call the “internet industry”.

The meeting is being held in the aftermath of the revelations in the trials of child murderers Stuart Hazell and Mark Bridger. In both instances the internet appears to have played a part in feeding their interest in violent child sex abuse. Hazell and Bridger were each found to have collections of child sex abuse images on their computers. This sparked a huge debate in the media and elsewhere about the responsibilities of online service providers. What more can and should they do to rid the internet of child abuse images altogether or, if that’s not on the cards, what could they do at least to reduce the volumes accessible in the UK?

Why do these images matter?

Let us first remind ourselves why the images matter. For the avoidance of doubt, we should also be clear that an image may be a photograph or a video, or may be delivered by live streaming.

By definition a young person has been harmed in the making of a child abuse image. We therefore need to find the images as fast as we can in order to identify, locate, safeguard and get appropriate help to the child wherever he or she is and as quickly as possible.

Forensic examination of the images may also assist law enforcement in identifying and apprehending anyone involved in the abuse or in distributing or downloading copies of it.

Apprehending the abusers and those involved in distributing or downloading is important in its own right but it matters too in terms of justice for the child victims, perhaps helping to bring some sense of closure. Crucially it is also about finding ways to prevent perpetrators from re-offending, hurting the same or other children in the future.

Staying with this theme of preventing new acts of child abuse, even though the workings of the causal chain are imperfectly understood, there is unquestionably a raised probability of a person who engages with child abuse images going on to commit contact offences against their own or other people’s children. That puts a very high premium indeed on rendering any discovered child abuse images inaccessible or limiting their accessibility as much as we can, again as quickly as we can. We need to reduce the chances of anyone new finding and perhaps getting involved with child abuse images for the first time. That initial offence is the one that will set too many on the road to perdition, putting more children in danger.

However, there is another extremely important reason for making the images inaccessible on as large a scale as possible as rapidly as possible. We owe it to the abused child. This takes us back to the earlier point about how best to help victims. It is bad enough that a young person has been raped or molested to create an image, but to know it is freely circulating on the internet can add greatly to the child’s distress, their sense of personal humiliation and loss of control. That could make recovery a lot harder to achieve.

Acknowledge the good work that is already being done

I am sure Maria Miller will open up Tuesday’s meeting by noting the excellent work already being done by many of the companies likely to be in the room.

If a child abuse image is discovered on a web site by anyone in the UK it should be reported to the Internet Watch Foundation (IWF), the UK’s hotline, funded by the internet industry and the European Union. If the image is on a UK based server it will be deleted at source and, typically, be gone in 60 minutes. Few countries can match that.

However, if an image has been published on the web from somewhere outside the UK and, after the IWF has notified the proper authorities, it is not removed promptly, it can still be rendered inaccessible to most people in the UK by being placed on a url (web address) blocking list which the IWF maintains. That is an acceptable second best.

Every online business I know in Britain uses the url blocking list provided by the IWF to block images on overseas web sites. Some even manage to incorporate lists provided by foreign hotlines and police agencies. Url blocking is online child protection 101.

Several of the businesses Maria Miller will be meeting also deploy hashing technologies. A hash is, in effect, a unique digital fingerprint of a file. Every already-known child abuse image is identifiable via its hash. Several companies that provide online storage space have access to large databases of these hashes. If a user tries to store such an image it can be identified via its hash and appropriate action taken. In addition the IWF monitors Usenet Newsgroups and issues ISPs with a block list of those found to contain or advertise child abuse images.
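The basic idea of hash matching can be sketched very simply. This is a minimal illustration only, using a hypothetical database of known hashes and SHA-256 as a stand-in; production systems such as PhotoDNA use robust perceptual hashes that continue to match an image after resizing or re-encoding, which a plain cryptographic hash cannot do.

```python
import hashlib

# Hypothetical database of hashes of already-known images.
# (This example value is simply the SHA-256 digest of the bytes b"test".)
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def file_hash(data: bytes) -> str:
    """Return the SHA-256 digest of a file's bytes as a hex string."""
    return hashlib.sha256(data).hexdigest()

def is_known_image(data: bytes) -> bool:
    """Check an upload against the hash database before storing it."""
    return file_hash(data) in KNOWN_HASHES

# An upload whose hash appears in the database would be flagged
# for appropriate action; anything else passes through.
print(is_known_image(b"test"))   # → True
print(is_known_image(b"other"))  # → False
```

The membership test is cheap regardless of how large the database grows, which is one reason this approach scales well for storage providers.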

So there is no question that most of the big internet companies are doing something to help fight child abuse images online, and in the main they have been doing it for quite a while. Everybody has a zero tolerance policy towards this kind of material.

But…

Despite these highly commendable voluntary efforts, it is regrettably beyond all reasonable doubt that there are now more child abuse images circulating on the internet than ever before. Current approaches are not working well enough. We have to step up a gear or two.

In 1995, arguably the UK internet’s Year Zero, the British police knew of the existence of only 7,000 child abuse images in total. Eighty-one people were convicted of offences relating to child abuse images. Today we are a long way from that.

Following a series of freedom of information requests we learned that five local forces in England and Wales between them seized a whopping 26 million child abuse images in a two year period ending April, 2012. One rough and ready estimate suggested this could mean, in the same period, perhaps over 300 million images were seized by all the police forces in England and Wales. Also the police told us, at the last count, they knew 50,000-60,000 individuals in the UK had been involved in downloading child abuse images from the internet. Since records began, in no year have arrests in this space ever exceeded 2,500. Think about that. Do the math.

While the web and Usenet Newsgroups remain an important source of child abuse images, those astonishing numbers seem to derive principally from the explosion in the use of Peer2Peer networks, and perhaps parts of the Darknet, by paedophiles and others exchanging child abuse images. Url and Newsgroup blocking is no use here, and the use of hashes seems not to have taken off. Whether or not that is for technical reasons I cannot say at the moment but I intend to find out. Watch this space. I will report back.

There is yet another ingredient that needs to be added into the mix for Tuesday’s meeting, not so much for discussion, more as part of the new background.

I referred earlier to the number of arrests made for child abuse offences and compared it to the number of persons known to have engaged with such images. On current trends that would mean some of the guys the police know about today might not be arrested until 2037. That’s absurd.

The good news is the UK police, or at any rate CEOP, recently acknowledged openly that within their existing levels of resources, and probably within any reasonably foreseeable resource scenario in these times of austerity, they simply cannot cope with the volumes of images and offenders. Interpol has also admitted that

no police force in Europe is on top of this problem

My guess is they could just as easily have said “the world”.

These statements by the police are important because they mean we can at last have an open and honest discussion about the realities of the situation not the myths. The emphasis, rightly, must therefore shift towards looking at what more can be done at a technical level by the industry, particularly the larger, richer enterprises, to stem or reverse the tide.

No need for a major public enquiry

In the UK the Byron and the Bailey Reviews did not look at how child abuse images and related issues were handled because at the time it was assumed everything was working well. I do not think we need to reopen either or both of these Reviews, nor do we need to establish a third. This is a relatively compact area of policy. Most of the parameters are well defined.

Yes we should seek a wide range of views on next steps and listen to a range of opinions but it is clear that several uncontroversial, obvious steps can be taken immediately or fairly quickly which will help ameliorate the unacceptable status quo.

Stay tightly focused

We need industry’s help to find ways to reduce the number of offenders and reduce the availability of the images. Here is my plea: I hope on Tuesday everyone will stay tightly focused on this agenda. If the meeting turns into a wide ranging discussion about any and every perceived online ill affecting children it risks achieving nothing other than headlines.

For example, with one exception (see Messages for Miscreants below), if the meeting tries to go too far into issues of offender management, safeguarding, police resources, how to improve international co-operation and the public education aspects of policy, hugely important though they are, it will be going in the wrong direction. Tuesday should be about horses for courses. Let’s see and hear what industry, uniquely, can do to help here and now. Below is my version of what I think they could do.

Peer2Peer – a call to action

The IWF has done a fantastic job over the years in keeping UK based web sites and Usenet Newsgroups comparatively free of child abuse images. However, as we have seen, a great deal of the growth has been on and around Peer2Peer. The IWF’s terms of reference preclude it from engaging with Peer2Peer. The view has always been that Peer2Peer is solely a matter for the police. See above.

Here I will make a rare criticism of the IWF, of which I was once a Director. The IWF is the UK’s lead agency in the field of online child abuse images. It is the organization people turn to first for advice, guidance and leadership on this subject. I think it has been far too timid in articulating the need to expand its terms of reference and upscale the wider effort specifically to tackle Peer2Peer networks. While I am sure there are many things the IWF can and should do within the frame of its existing terms of reference, unless, somehow or other, a new focus and a new urgency are given to tackling Peer2Peer networks we will have missed a major target by a large margin.

At least one specific proposal about how we might make progress with Peer2Peer has previously been put to CEOP and the Home Office. It got nowhere. If funding is an issue the music and film industries ought to be approached. They did (once) express some sympathy towards the idea of helping out. The same Peer2Peer software being used to rip them off is also often used to exchange child abuse material.

Incidentally, having said that the IWF has been too timid in pinpointing the need for more action to deal with Peer2Peer I think the police also have some explaining to do.

A problem that falls through the cracks

The misuse of Peer2Peer networks is a classic example of a problem that falls through the cracks. It cannot be laid at the door of any single company or indeed any single part of the internet value chain. It is not specifically an ISP issue, neither do search engines have a singular vantage point. Not even the companies that develop the Peer2Peer software in the first place can be blamed although one wonders how much more they could be drawn into helping prevent blatantly illegal use.

As ever, if everyone is responsible, no one is.

File Hosting sites, deep packet inspection and web crawlers

If Peer2Peer networks are top of my list for action, file hosting or online storage sites run them a close second. Every file hosting site, image sharing service and online service provider that supplies storage space to users ought to deploy hashing technologies such as PhotoDNA or similar. This should happen whether the storage space is provided for a fee or as part of a “free” service. Let me put that another way. Why wouldn’t a company use PhotoDNA or similar? In the case of PhotoDNA Microsoft give it away. Not using hashing in such a setting is a bit like saying you don’t care what your customers put on your machines. You should care.

In the USA a large proportion of the reports of child abuse images that the National Center for Missing & Exploited Children (NCMEC) receives have resulted from hashing technologies deployed in member companies. I hope every UK based business will follow suit.

More generally it would be useful to have an update on the efficiency of deep packet inspection (DPI) programmes, which might be able to detect known illegal content in the stream, i.e. as it moves across a network, without having to wait for it to “arrive” on someone’s computer or on a hosting site.

My understanding is that the music and film industries have developed a range of DPI-type measures which have been deployed by some online service providers to intercept and interdict copyright-infringing traffic or prevent copyrighted material from being re-uploaded. How might these be adapted for use in the fight against child abuse images? Up to now detecting videos which contain child abuse images has been difficult, and dealing with live streaming also presents challenges.

Is any web crawling technology or similar being used to hunt down sites and online spaces containing illegal images? Several years ago I seem to recall that either the Munich police or the German Federal Police claimed to have something of the sort that they were using.

Do we need to look at IRC channels or the file-swapping capabilities linked to any number of messaging or chat services?

Then there’s encryption. Let’s not go there. Yet.

Messages for miscreants

ISPs and other access providers should cease forthwith to use Error 404 messages. They do little or nothing to deter a would-be offender and they are potentially deceptive. I believe they may in any case be about to become unlawful under Article 25 (2) of the EU Directive on combating the sexual abuse of children and child pornography.

My suggestion is this: where anyone enters a url known to contain child abuse images, or enters terms in a search engine or browser which indicate they are looking for child abuse material, child abuse networks or contacts associated with child abuse, or types in words which suggest they are trying to locate information implying a clearly murderous intent towards children or some other grossly malevolent purpose, algorithms should pick this up and an appropriate message should flash up on their screen. Norton 360 already does something similar for malware. That’s the kind of thing I had in mind.

Unambiguous and in your face

As with Norton 360 the messages to miscreants should be prominent and “arresting” (so to speak). In your face, not displayed only at the margins of the screen or at the bottom of the page where many people might never look. The message could also include a copy of the IP address then being used by the person making the search. The precise wording of the message is up for discussion but it may include, for example, information about where to find help if the searcher is worried about their own potentially deviant or illegal behaviour.

People who have a legitimate interest in searching for material associated with child abuse or child murders will understand the beneficial purpose and intent behind the message and will click past it regardless. Many of those with different motives will immediately desist, anxious that their behaviour might be traced back to them. Maybe if a web cam is attached to the computer or device being used it could be turned on and a picture taken? Is that too spooky or intrusive? Some won’t think so.

Anecdotally it seems at least 50% of all the men arrested for child abuse image offences were, at least in the beginning, quite weakly attached to finding the pictures. Almost anything that could have knocked them off course, deflected them or made it clear that they were not searching anonymously would have stopped substantial numbers in their tracks. Immediately. That is a prize well worth having.

Imagine if, overnight, 50% or more of the police’s potential case load was to vanish as the low level would-be offenders took flight or never arrived. That would leave law enforcement with more time and resources to go after the other 50% who, presumably, have a stronger interest in child sex abuse material. In a similar vein research carried out by Professor Richard Wortley and others shows there is often a highly opportunistic element to child abuse image related crimes i.e. if the images are easy to get at some guys will get at them and if they are not easy to get at they won’t.

Email service providers

Assuming it is legal in the UK, email service providers should be encouraged to follow AOL’s lead and scan attachments to emails to check images against a database of hashes of known child abuse images.

It is well known that several online service providers scan the contents of emails and postings looking for clues about what advertisements they ought to serve up to a user. The AOL idea seems to me to be taking that reasoning simply a step further.

Tightening search engines’ rules

Search engines should tighten their rules and their algorithms to halt the illegal advertising of child abuse images. I am not going to repeat the strings I found when I searched using terms which were not themselves illegal but one, for example, appeared to provide a direct link to

teen porn young 13 year old sex video

Under UK law, s.1 (1) (c), Protection of Children Act, 1978, it is an offence to advertise the availability of child abuse images, even if you don’t actually have any to sell or provide. The link I found should not have been there. It is illegal.

Section 3 of the same Act is also interesting. This is what it says

Where a body corporate is guilty of an offence under this Act and it is proved that the offence occurred with the consent or connivance of, or was attributable to any neglect on the part of, any director, manager, secretary or other officer of the body, or any person who was purporting to act in any such capacity he, as well as the body corporate, shall be deemed to be guilty of that offence and shall be liable to be proceeded against and punished accordingly.

And what are we to make of the allegations in Amanda Platell’s piece in the Daily Mail on 24th May? How could she reach such horrific and illegal material so quickly? That’s another reason why search engines should look again at the sort of syntax they will allow or provide in their returns.

Along similar lines, if the engineering is smart enough it ought to be possible to detect urls and images even if they are not already on a blocking list or an existing database of hashes. This could be done by analysing context sensitive variables on the fly. I appreciate that, in the beginning, mistakes will be made and that this may not be easy. But if no one tries, if no one wants it to happen it never will.

Sexual Offences Act 2003

s.12 of the Sexual Offences Act, 2003, makes it a criminal offence for an adult to cause a child to watch a sexual act, and that includes images of such an act. A transgressor could face up to ten years in prison.

There are two key elements to the crime. For the purposes of their own sexual gratification the adult must intend to cause the child to watch the sexual act. Clearly that rules out practically every online service provider but we are in proximate territory. Motivation is important in terms of establishing criminal intent but in relation to the potentially harmful impact on the child it hardly matters.

“Barely legal” and “teen sex”

Can anything be done about search engine returns or links containing terms such as “barely legal”, “teen sex” or similar?

What about urls which imply, for example, that the sex scenes on offer on the site will be taking place in a school classroom or are between a pupil and a teacher? If these are not illegal they must be quite close. Even if they aren’t in themselves illegal, terms like these are often code. They will take you to other sites which then lead on to child abuse material. I know of at least two experts who have said many hard core porn sites get you to child abuse images within “six or seven” clicks of the home page.

Moreover in contrast to US law, under UK law what matters is not how old a person in an image actually is, it is whether or not they appear to be a child. Following a decision of the Court of Appeal in R v Land this is a matter for the jury as a finding of fact. Expert testimony is not allowed.

Therefore, assuming a judge agrees that an individual within a sex abuse image could be under 18, if he or she looks like a child to 12 good men and women true then he or she is a child. I agree it is unlikely that anyone would prosecute if it was discovered that someone in an image was actually 30 years old but that’s a different matter.

In that context a significant proportion of the material that is found on some so called adult sites is at least borderline illegal here in the UK because the models or actors look very young. I know these things can be extremely difficult to judge but my point is we should not blithely assume that just because a site calls itself an adult site that this necessarily means nothing on the site is child pornographic under UK law.

Not all adult porn sites are completely legal anyway

Leaving aside considerations of whether or not the persons appearing in the images are of legal age, or whether some sites will lead you to child abuse images, we anyway need to explode the myth that all genuinely and obviously adult pornography is bound to be legal. It isn’t. Some of the material on well known porn sites would never be granted an 18 certificate, which means it could not be shown in UK cinemas, but neither would it be granted an R18 licence so it could not even be sold in licensed sex shops. In other words a portion is criminally obscene. UK law enforcement may not take action against these sites but that is largely because of resource constraints or operational challenges.

Moreover, following the Court of Appeal decision in R v Perrin there is no doubt that sites which fail to take steps to remove grossly pornographic material from their home pages, thus rendering it accessible to anyone who lands on it, including children, are committing a criminal offence. So why are they still so easy to find?

The fact that the police do not knock on their doors with arrest warrants does not give companies a licence to do nothing. That would be putting them uncomfortably close to simply taking advantage of the police service’s poverty or the difficulties of mounting such actions internationally, whereas I am sure better businesses would wish to be seen straining every muscle to uphold higher standards.

Could search engines favour porn sites within the .xxx domain? As I understand things the registry closely monitors the behaviour of sites within its ambit.

Default-on blows away anonymity

If we accept that hard core porn sites are an important route for some men to find their way to child abuse images, an (admittedly) gigantic step would be for the search engines to create a default which blocked access to such sites.

To get access all one would have to do, à la UK mobile phone networks since 2004 and UK gambling sites since 2007, is ask for the default block to be lifted, which you could only do by proving you are over 18. It would not be necessary to say “gimme the porn, I’m a pervert”. Once age had been verified the adult barrier would go permanently for anyone logging in on that account.

As most of the bad guys rely on what they believe is the anonymity of the internet they would baulk at such a process. I acknowledge that there may be privacy concerns for the rest of us but, if there is a will, a way could be found to address them.

Adopting such a policy would have the added but entirely separate advantage of stopping kids accessing hard core porn sites.

Belt, braces, string, padlocks – a step too far?

In truth a similar and potentially much better result could be achieved if the ISPs, device manufacturers and WiFi providers implemented a default block on porn. That is being progressed through UKCCIS now so one could regard a suggestion that the search engines do the same as being overkill or redundant. That said we still do not actually know what the final result is going to be of these UKCCIS processes.

I am agnostic on this point but I think a default block on all porn sites that can only be lifted following an age verification process would be a tremendous step in the right direction from several points of view. Where and how it is done is, of course, a very important detail but what matters more is that it is done somewhere in the line of access.

Big databases preferred

If databases of known images or their hashes, or databases of urls, are to be used anywhere in these processes it is important that they are as large as possible and not narrowly restricted in the way some lists are e.g. some agencies only issue lists where the child depicted in the image has been located in real life, the perpetrator arrested and convicted. As a result such lists can be quite small.

In relation to urls INHOPE should be pressed to provide a consolidated list of all known urls. The European Commission funds INHOPE and is thought to be sympathetic to this point of view.
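The consolidation being asked of INHOPE is conceptually straightforward: merge the lists each hotline issues into one deduplicated master list. A minimal sketch, using placeholder urls and list names:

```python
# Placeholder source lists from two hotlines (hypothetical values).
hotline_list_a = {"example.invalid/a", "example.invalid/b"}
hotline_list_b = {"example.invalid/b", "example.invalid/c"}

# The consolidated list is simply the union of all source lists,
# normalised to lower case and deduplicated.
consolidated = sorted(url.lower() for url in hotline_list_a | hotline_list_b)

# Each url appears once, however many source lists carry it.
print(len(consolidated))  # → 3
```

The hard part, of course, is not the merging but the institutional agreement to share; the point of the sketch is that once lists are shared, combining them is trivial.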

No smoke without action?

I appreciate that the above measures will increase our capacity to gather more information about criminal activity. But if there is no extra law enforcement or other follow-through, what is the point? Well, there is one.

Of course I would like to see the cops with more money so they could do more, but that is not a get-out for the industry: even if the police did nothing beyond what they currently do, there may still be a lot to be gained from several of the measures outlined above. They are intended to deter would-be bad guys from continuing with, or taking up, a course of illegal behaviour.

Naming and shaming

The IWF should either publicly name every company they identify as hosting child abuse images or establish a threshold of some sort e.g. determined by a matrix of frequency and quantity. Any company falling on the wrong side of the threshold should be publicly identified. This should apply to companies the IWF identifies both in the UK and overseas.

The IWF should press INHOPE to do something similar.

CEOP should also explain the basis on which it decides whether or not to publicise the names of companies that its officers are constantly having to contact because of apparent breaches in the law, either in respect of child abuse images or grooming.

Independent audit

A means should be devised to validate and confirm that a range of companies are taking some or all of the actions referred to above. Might there be some scope for identifying and praising companies taking extra or exceptional measures?

Reserve powers

Maybe this is no longer an issue, but it seems to me the case is now even stronger for giving Ofcom or its successor, in the forthcoming Communications Bill, reserve powers to give directions to any business in the UK that is involved in providing access to the internet or is providing services on the internet to persons within the UK.

Ofcom or its successor should be able to act either of its own volition or at the direction of the Secretary of State, but of course in exercising its powers it must act entirely independently of Government and according to its own previously announced and published criteria. In relation to how decisions are made about how to classify content Ofcom might usefully link up with either the BBFC or ATVOD, or both.

A minor but important digression: well done Facebook

Facebook is to be commended for its recently announced intention to look again at its policy on hate speech, particularly in respect of “gender-based hate”. I’d say several of the hard core porn sites I have seen would get close to qualifying as “gender-based” hate.

Is Facebook’s move a harbinger of changing attitudes among the big players on the internet? Might a greater willingness be emerging to dive in and look more carefully at the content that is being served up and is associated with the company’s brand?

The internet does not just hold up a mirror to the world. It also magnifies and amplifies what is going on in it.

Once more, apologies for the length of the blog.

About John Carr

John Carr is a member of the Executive Board of the UK Council on Child Internet Safety, the British Government's principal advisory body for online safety and security for children and young people. In the summer of 2013 he was appointed as an adviser to Bangkok-based ECPAT International. Amongst other things John is or has been a Senior Expert Adviser to the United Nations, ITU, the European Union, a member of the Executive Board of the European NGO Alliance for Child Safety Online, and Secretary of the UK's Children's Charities' Coalition on Internet Safety. John has advised many of the world's largest internet companies on online child safety. In June, 2012, John was appointed a Visiting Senior Fellow at the London School of Economics and Political Science. More: http://johncarrcv.blogspot.com