France goes for 15 as the Article 8 age – a hodgepodge cometh

The GDPR comes into full force in just over two months. One of the small number of derogations the GDPR allows is for a Member State to choose a minimum age of consent for the purposes of processing data online. That means Member States are allowed to determine when young people in their country can decide for themselves whether or not to join Twitter, Instagram and the like.

A country can adopt any minimum age between 13 and 16, with 16 as the default. Before deciding which age to go for, no Government or DPA commissioned research to try to understand the nature and extent of children’s comprehension of the modern internet. Instead, quite a few, the UK among them, opted for the de facto status quo, which is 13. A larger number were content to let the default become the standard. A variety of influences appear to have been at play in guiding each country’s decision. As far as I can tell, the single most common was a reference to some pre-existing law, regulation or practice which bears no relation to what is actually happening to children online.

The decision in France

Last week in France, the National Assembly appears to have decided to ignore the advice of CNIL, the French DPA. CNIL wanted to stick with the default of 16. French Parliamentarians prefer 15. This puts France alongside Greece and Croatia.

A children’s rights activist from Paris told me she thought it crazy that a child can consent to sex at 15 but cannot choose to go on Snapchat for another year. I can see that looks odd, but there is a big and obvious downside. See below.

Meanwhile Belgium went for 13 and Holland, technically, stuck with 16. Spain already had 14 and apparently is not budging. Austria has chosen 14. Previously it was reported that Ireland was going for 13, but the word is moves are afoot to raise it to 16.


In the case of Germany, the age will be 16 when the GDPR kicks in, because the election cycle meant it was simply not possible to consider any alternatives in the time available. Now that there is a Government, they are going to look at this again in July.

The hodgepodge cometh

It is clear that in Europe every available age between 13 and 16 is going to be in use. The hodgepodge cometh, as Shakespeare or Eugene O’Neill, or both, might have said. No one knows what the consequences will be, and they are hard to foresee. Maybe I am worrying over nothing.

The obligation to carry out age verification?

If an online service simply draws a line and says it will not accept anyone below the minimum age, which has largely been the position up until now, how far will companies be expected to go to verify the ages of those who do join? Or will we see a continuation of huge numbers of children simply ticking a box to misrepresent their age? That can be done with or without parental collusion.

We know every service provider is meant to carry out a risk assessment in relation to their product, but absent any guidance about how to determine what the risks are or what weight to attach to them, one can imagine a wide variety of outcomes.

Undermining the grooming law

The obligation to verify, or not, will matter particularly in those countries, such as France, where the age of consent to sex is the same as or lower than the age of consent for data purposes. In those places everyone who goes to a social media site will be able to say they had reason to believe the person with whom they were communicating was old enough to agree to meet for sex.

That could make prosecutions for grooming a great deal harder. Children below the age of consent to sex who have been manipulated into agreeing to it will be brought to the witness stand to be asked whether they lie about their age all the time or only when they want to trick someone into having sex with them. Either way this is not a good outcome for children.

And if, having obtained parental consent, sites decide to accept children below the age of consent to sex, will this be apparent on the child’s visible profile? If it isn’t, the same issue could arise. But of course, signifying a user’s age in this way may not be without significant disadvantages.

Differences in national capabilities to verify age or parental consent

While we’re on it, when it comes to age verification or, for that matter, obtaining parental consent, what are the implications of different countries having different capabilities to do either or both? Things get hodgier and podgier.

The only “safe” option

From the grooming law standpoint, the only “safe” minimum age for data purposes is one which is lower than the age of consent to sex. That is a strong argument for 13. I don’t think any country in Europe has an age of consent to sex which is as low as 13. I guess the alternative is for every social media site to make it plain that membership cannot be taken to imply that every user is old enough to meet for sex. That’ll be a challenge for the marketing department.

Not a once and for all decision

However, getting back to the minimum age, the decision on the Article 8 age is not a once-and-for-all thing. Member States can change their minds. In the UK the DPA is funding the sort of research that should have been carried out before anyone took a decision in the first place. By the time the research is completed we will also have some actual experience of how the new hodgepodge regime is working in practice.

Watch this space.

Posted in Age verification, Consent, Default settings, Internet governance, Privacy, Regulation, Self-regulation

Child sex abuse images matter

Child sex abuse images which have been posted on the internet are important in and of themselves.  We do not need to justify taking action against them on any grounds other than the fact that they exist. Let’s remind ourselves why.

Every act of sexual abuse involving a child will harm that child. However, if an image of the abuse is also created and then distributed over the internet, potentially to be viewed by anyone in the world, that will unquestionably add to, change and expand the harm. The crushing sense of loss of one’s human dignity can be severe, even overwhelming. The constant, nagging fear that your classmates or people from your neighbourhood may already have seen the images, or might in the near future, can be deeply corrosive of a young person’s self-confidence and self-esteem. And that’s on top of the harm caused by being sexually abused in the first place.

Every picture of a child being sexually abused is a picture of a child who needs help.

A view from America

James Marsh is probably the USA’s leading lawyer when it comes to representing victims of child sex abuse where an image of the abuse has appeared online. He represented Amy in Amy v Paroline. It went all the way up to the US Supreme Court. Marsh is quite clear. Victims want every copy or version of an image of their humiliation to be found and deleted as quickly as possible. Pending that, they want access to be restricted to the greatest extent achievable, again as quickly as possible. Knowing this is being done can be crucial to a victim’s prospects of recovery and getting back to any kind of normal life. It is reassuring evidence that the harm done to them is acknowledged, that every practical step is being taken to make things as right as they can be, that justice is being delivered.

No false choices

The business of addressing the perpetrators of child sex abuse, including people seemingly involved “only” in downloading images, has to remain high on everyone’s agenda. However, it is a separate and discrete challenge. Dealing with perpetrators or speedily removing or restricting access to child sex abuse material are not alternatives. Each is important on its own terms, without more.

Technology can do this

Once an image has been confirmed as containing child abuse material, finding, deleting or restricting online access to it can now be achieved largely by technical measures. Inter alia, the use of proactive searching technology based on hashes is revolutionizing the space. Even if this were all we did, it would be worth doing, not only because of our obligations to the victims depicted in the images but also because of our obligations to children as yet unharmed. The continued availability of child sex abuse material online encourages or helps sustain paedophile activity and this puts children at risk everywhere.
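For anyone unfamiliar with how hash-based searching works, here is a minimal sketch of the matching step. Everything in it is illustrative: the hash list is invented, and it uses plain SHA-256 digests, which only catch byte-identical copies, whereas production systems rely on perceptual hashes such as PhotoDNA so that resized or re-encoded versions of a known image still match.

```python
import hashlib
from pathlib import Path

# Invented, illustrative hash list standing in for the sets of confirmed
# child sex abuse image hashes distributed by hotlines and law enforcement.
KNOWN_HASHES = {
    "f2ca1bb6c7e907d06dafe4687e579fce76b37e4e93b7605022da52e6ccc26fd2",
}

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan(upload_dir: str) -> list[Path]:
    """Flag any file in a hypothetical upload directory whose hash matches."""
    return [p for p in Path(upload_dir).iterdir()
            if p.is_file() and sha256_of(p) in KNOWN_HASHES]

if __name__ == "__main__":
    for hit in scan("/tmp/uploads"):
        print(f"Match against known hash list, block and report: {hit}")
```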

Identifying the victims and getting them the necessary help

Earlier it was observed that every picture of a child being sexually abused is a picture of a child who needs help. But first we need to find the child. This is important both to safeguard them from the possibility of further abuse and to ensure they get the therapeutic or other support they need to start rebuilding their lives.

When it comes to identifying and locating victims law enforcement has to be in the lead.

Distributed and undistributed images

A great many child sex abuse images that find their way into the hands of law enforcement have never been distributed over the internet. In a report shortly to be published, NCMEC will say that only 11% of all the images in their database have been “actively traded” (which for them means found or reported on at least five occasions). Often, maybe typically, such images have been picked up in the course of another operation in which a suspect’s digital media are seized and interrogated.

Because we know a high proportion of child sex abusers are closely linked to the victim, in such circumstances it can be a relatively easy matter for both victim and perpetrator to be found in a single action.

Nevertheless, once a child sex abuse image has been created there is no way of knowing whether it is already out on the internet and simply has not yet been reported to a hotline or the police, or whether it might appear on the internet in the future. For those reasons the image has to be retained in police databases; otherwise, if it is eventually found somewhere online, police resources in several different countries could be wasted trying to find a child or a perpetrator in a case that was fully resolved and closed long ago.

Of the child sex abuse images that do find their way on to the internet, many can be and are copied and distributed on a gigantic, global scale. These are the images victims, police, parents, the public and governments think about and worry about most. They are also a major focus of child rights advocates who work in this area. Children in distributed images are likely to be in the greatest danger.

So how good are we at finding unidentified children?

Where an image has been found on the internet and is reported to, say, a hotline or directly to the police, in the first instance the child will normally be an unidentified victim. How efficiently can law enforcement convert an unidentified victim into an identified child? For these purposes “how efficiently” really means “how quickly”. This is an acid test.

There are other things we would like to know, for example, about the availability of resources to help identified children recover from the abuse they have suffered, but how quickly sexually abused children in images are being found by the police is an important point of departure.

Looking at the facts

INTERPOL owns and administers the International Child Sexual Exploitation image database (ICSE). INTERPOL’s leadership acknowledged the importance of analysing whatever data were in ICSE with a view to improving our collective understanding of how effective their work is in this area.

EU funded research project

With funding provided by the European Union, INTERPOL agreed to be party to a proposal put forward by ECPAT International. I was on the Technical Working Group which helped supervise the work. The report came out earlier this week.

Entitled “On Unidentified Victims in Child Sexual Exploitation Material – towards a global indicator”, it looked at the images within ICSE and the processes associated with it. Law enforcement agencies in 53 countries can connect to ICSE. Given that INTERPOL has 192 member countries, this is surprisingly and disappointingly low; however, ICSE contains images known to come from 88 jurisdictions.

INTERPOL was able to tell us they had identified 12,000 children who appear in images in ICSE but, as the report indicates, they were unable to say with a high enough degree of certainty what percentage this represented of all the children in the database.

Neither was it possible to work out what a “typical journey” from unidentified to identified looked like, including how long the journey would be likely to take. Some children might be identified within days or weeks of an image being entered into ICSE. Others will have been unidentified in the database for years.

As the title implies, the EU-funded project was trying to establish how, on an ongoing basis, the world could determine how successfully law enforcement was able to move children from the unidentified to the identified category.

At the same time it was hoped to gain an insight both into the total number of unique images in circulation or being produced and into the numbers of children being victimised in those images. Would this tell us anything about the nature or amount of child sex abuse taking place in society as a whole? Probably not. We are going to have to continue to rely on prevalence studies for that.

My hunch is that in the vast majority of cases where a child is sexually abused there will be no recording device of any kind in the room. The emergence of live streaming and the growing levels of apparently self-produced images may be changing that dynamic but, in terms of scale, that remains unproven for now. Also, most of what we know about abuse that results in an image being produced seems to draw on the experience of countries where European languages are widely spoken. Could it be significantly different elsewhere?

Having analysed the factors which determine what it takes to find a child when all there is at the beginning is the image itself and any available metadata, ECPAT International’s idea was to construct a baseline index of some sort. It would be set at, say, 100. If next year the index stood at 110 we would know that the numbers of children remaining in the “unidentified” category were going up. This would trigger further enquiries to see how we might at least get back to 100. However, the overarching aim would always be to reduce the index to nil or as close to it as we can get. We want to live in a world with zero child sex abuse of any kind and as we move towards that we definitely want to live in a world with zero child sex abuse material on the internet.
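The arithmetic behind such an index is straightforward. Here is a minimal sketch, using entirely invented figures, of how the number could be produced once a baseline count of unidentified children had been agreed.

```python
def unidentified_index(baseline_count: int, current_count: int) -> float:
    """Express the current number of unidentified children as an index,
    with the baseline year fixed at 100."""
    return 100.0 * current_count / baseline_count

# Invented figures, purely to show the mechanics of the calculation.
baseline_year_count = 50_000   # unidentified children at the baseline date
this_year_count = 55_000       # unidentified children a year later

print(round(unidentified_index(baseline_year_count, this_year_count)))  # prints 110
```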

Data from ICSE

ICSE yielded a great deal of interesting and extremely useful data about the subset of child sex abuse victims who appear in the distributed and undistributed images that find their way into the hands of the police. It is not necessary to rehearse all the findings here; they are well set out in the report. But as the study got underway a number of things quickly became apparent.

ICSE was established as a collaborative, international investigative tool. The designers did not have the needs of researchers in mind when they were building it. They were thinking entirely about police officers intent on finding a child as fast as possible. ICSE’s ability to generate the kind of analytical data that would be needed to construct a global indicator is therefore limited although the system is currently undergoing an upgrade and the new version will be better in that respect.

The second point, rather obvious in retrospect, is that while ICSE is uniquely important because of its singular role as a connecting point for police forces around the world, in terms of databases and the analytics they can yield it is not the only one, and it is not even the largest.

EUROPOL, the UK, the USA and others also have image databases, or are building them. Part of the broader mission has to be to ensure all of these databases can connect with each other so no child falls through the cracks. That implies a common set of criteria will need to be developed to ensure everyone is on the same page in terms of how and what data are entered and exchanged; alternatively, a method must be agreed to standardise the data. Not a small thing to do.
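To make the standardisation point concrete, a shared record format might look something like the sketch below. Every field name here is hypothetical, invented purely for illustration; any real standard would have to be negotiated between INTERPOL, EUROPOL, NCMEC and the national forces involved.

```python
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class ImageRecord:
    """Hypothetical common record for exchanging entries between image
    databases. Field names are illustrative, not drawn from any real system."""
    image_hash: str          # a perceptual or cryptographic hash value
    hash_algorithm: str      # tells the receiving system how to compare it
    classification: str      # severity category under an agreed scale
    victim_identified: bool  # the unidentified/identified status discussed above
    source_agency: str       # which force or hotline entered the record
    first_seen: date         # when the image first entered any database
    distributed: bool        # found circulating online, or only on seized media

record = ImageRecord(
    image_hash="f2ca1bb6c7e907d06dafe4687e579fce76b37e4e93b7605022da52e6ccc26fd2",
    hash_algorithm="sha256",
    classification="category-a",
    victim_identified=False,
    source_agency="EXAMPLE-NATIONAL-UNIT",
    first_seen=date(2018, 1, 15),
    distributed=True,
)
print(json.dumps(asdict(record), default=str, indent=2))
```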

What doesn’t get measured doesn’t get done

Even if it is not always literally true, it is substantially true that what doesn’t get measured doesn’t get done. It is an unfortunate axiom of our recessionary, cash-strapped times. If we, as child rights advocates, want to stand alongside colleagues in law enforcement, or independently agitate for them to be given more resources to find more children more quickly, we need to be able to do more than give voice to righteous rage. We need numbers. We need a global indicator. And we need it now.

Posted in Child abuse images, Internet governance, Microsoft, Regulation, Self-regulation

Landmark moment

Yesterday both Houses of the UK Parliament agreed the designation of the BBFC as the age verification regulator for commercial pornography online. The end of a long road. Beginning of a new journey.

Posted in E-commerce, Regulation, Self-regulation

Refreshing honesty – damnable ambition

Yesterday I blogged about s.230 of the Communications Decency Act, 1996 and how a group of academics and civil society organizations were trying to persuade the US government to insist upon Canada and Mexico agreeing to adopt a version of it in the negotiations currently taking place around the North American Free Trade Agreement.

Later that day someone sent me a link to a blog published by one of the signatories, the Electronic Frontier Foundation (EFF).

It ill-behoves a Brit to lecture anyone on the ugliness of imperial ambition, and the EFF at least had the good grace to confess to being a little uneasy about their tactics. However, as with zealots down the ages, the ends can easily be made to justify the means.

Take those troublesome Canadians, for example. Here is what the EFF say:

The difficulty with the inclusion of Section 230 style safe harbors in NAFTA is that it would either require Canada and Mexico to change their law, or it would require the provision to be watered down in order to become compatible with their existing law—which would make its inclusion pointless. Therefore, the first option is the better one. (emphasis added).  For Canada, in particular, strengthening legal protection for Internet platforms could help roll back the precedent set in the Google v. Equustek case, in which the Canadian Supreme Court required Google to globally de-index a website that purportedly infringed Canadian trade secret rights.

So we’re clear about that? For the EFF it’s the American way or it’s the highway. The quaint habits and jurisprudence of other democracies are to be tolerated only insofar as they are identical to ours. No room for doubt or deviation. Local context? Piffle. The whole world is seeking to modify the excesses which have been facilitated by s.230. The whole world is wrong and if they want to sell us washing machines…

I wish I could be as certain about anything as the EFF seem to be about everything.

Posted in Default settings, E-commerce, Internet governance, Regulation, Self-regulation

More loyal and royal than the Crown

s. 230 of the Communications Decency Act, 1996 is the bit of US Federal law that confers a near-blanket immunity on internet intermediaries who facilitate the publication of content by third parties.

You would think, in January 2018, anyone who chose to write about s.230 could not fail to mention its widely acknowledged shortcomings. Given, for example, the intense debates that have taken place on and around Capitol Hill about the part s.230 has played in facilitating child sex abuse, child sex trafficking and the distribution of child sex abuse images you might imagine it would be hard not to refer to that dimension, even if only in passing, even if only to explain why you take a different view from the still growing number of people advocating reform.

Equally, as people around the world gather at the funeral pyre of fact-checked journalism, at a time when everywhere there is a huge amount of angst about fake news, hate speech, copyright theft, and about foreign interference in, or other forms of manipulation of, democratic processes – I could go on, and on, but I am sure you get the point – you might have hoped that a discussion of a possible link between these phenomena and s.230 would at least flicker across your screen or appear on the page.

In which case prepare to be hugely disappointed by the lobbying letter published on 21st of this month by 39 academics and 16 organizations. I’ll call them the “Gang of 55” or maybe just the “Gang” for short.

The letter is addressed to representatives of the US, Canadian and Mexican governments, the people leading the negotiations on a possible new North American Free Trade Agreement (NAFTA). Casting aside forms of persuasion which generally are the hallmark of academic life, the Gang want the Trump Administration to insist that a lookalike version of s.230 be included in the deal and they want the Canadians and Mexicans to suck it up as the price of doing business with Uncle Sam.

Blinded by the light

For the Gang, it is evident the past 21 years might as well never have happened. The internet is simply one continuous, uninterrupted and unqualified success story and it’s all pretty much down to s.230. The cyber Summer of Love is still here. The Gang inhaled.  Virtual kaftans are showing. Trapped by nostalgia or an attachment to a once glorious idea? Who knows? It can be hard to move on. Tell me about it. I was once a VP of MySpace.

Now the addressees of the letter are not idiots. They will be perfectly well aware of s.230’s faults, so I can only imagine part of the point of the epistle is to rally others behind the pro-s.230 banner or to help sustain waverers. In that, I think it will fail, because the letter is so transparently inadequate. It is a rare example of silence amounting to misrepresentation.

While Facebook, Google et al seem almost daily to acknowledge that not everything has worked out brilliantly, the Gang speak only of their success. A case of the colonies being more loyal and royal than the Crown?

The Gang tell us

intermediary immunity lowers the barriers to launch new online services


This helps prevent the market from ossifying at a small number of incumbent giants.

Perhaps that’s how the internet works on Planet Tharg but, er, the one I am familiar with right here on Earth is already dominated by a

small number of incumbent giants

giants, moreover, not slow to snap up any new business that arrives on the scene and looks likely to pose a threat. Think Waze, Instagram, WhatsApp and Periscope, to name but four. The Gang look away from the hard-nosed, vulgar, commercial realities of near-monopoly power, the gigantic cash mountains such market dominance can generate and the power that comes with a practically bottomless pit of money.

Its time has passed

There may have been a case for s.230 back in 1996. Back then we did not know how this new-fangled technology was going to work out. Now we do. New entrants cannot come to market and plead ignorance. Being small and funky does not give you a licence or permission to misbehave.

The Gang say

Without immunity, new entrants face business-ending liability exposure from day one; and they must make expensive upfront investments to mitigate that risk. 

This is the very embodiment of the Zuckerbergesque doctrine of “move fast and break things”, a doctrine which he has now abandoned as he seeks to make amends for its manifest failings. No liability = zero incentive. A unique privilege. A form of subsidy enjoyed by no other type of business. Internet exceptionalism needs to be buried.

In its present form, the immunity has provided an alibi for inaction. It has become a refuge for scoundrels, an incitement to recklessness, a permanent “Get Out of Jail Free” card.

The immunity values innovation above all other things. The only arbiters are whether the market likes it and whether it makes money. What about consequences?

As Tom Lehrer put it in his ditty about Wernher von Braun’s work:

“Once the rockets go up, who cares where they come down? That’s not my department.”

We must narrow the scope of the immunity

It would clearly be unfair and unreasonable to create any form of liability for content a service provider could not possibly have known about.  In my version of the internet, there would be a rebuttable presumption of immunity. It can be set aside if a business cannot show it has made good faith efforts to anticipate potential breaches of its Ts&Cs and taken reasonable and proportionate steps to address them. It wouldn’t have to get it right 100% of the time but, being mindful of available technology, it would have to show it tried.

Posted in Child abuse images, E-commerce, Internet governance, Privacy, Regulation, Self-regulation

A link between hate speech postings online and on-street violence?

Interesting piece in The Economist. Suggesting there is a correlation between hate speech postings on Facebook and violent crimes against refugees in Germany. Correlation. Not causation. But interesting nevertheless.

Posted in Regulation, Self-regulation

More and better compensation for victims of child sex abuse

Things are moving on in the USA. People are campaigning to get more and better compensation for victims of child sex abuse that results in images of the abuse being made and distributed over the internet. Well done James Marsh, the brilliant lawyer who will not let this go. We need a similar scheme in the UK. And everywhere else. If guys who engage in this vile trade are no longer worried about being arrested and spending time in gaol, they might worry if they thought it could cost them their pension and other assets as they are compelled to provide monetary compensation to the people they have hurt. #marshlawfirm

Posted in Child abuse images, Regulation, Self-regulation