The GDPR – still huge uncertainties

Courtesy of European Schoolnet, Ghent University and KU Leuven, a great many leading experts from the privacy world and the children’s world gathered last Friday in Brussels for the first open and serious EU-wide discussion on the implications of the GDPR. Better late than never? Definitely. Several earlier requests for the Article 29 Working Party to organise something similar had fallen on deaf ears, so well done to all concerned for taking the initiative to put this event together. However, I regret to report that almost none of the many critical questions raised received clear or definitive answers. And in 11 months the GDPR will be law in every EU Member State, including the UK.

Too many important problems

It was good to hear from a Commission representative that the EU’s Data Protection Authorities (DPAs), the European Data Protection Supervisor (EDPS) and the Commission itself are now meeting monthly to try to agree on a variety of things. It was not so good to hear that children’s interests, as such, were unlikely to receive any special attention.

We have far too much on our plate. Children will be dealt with as necessary as we come to each heading…

That put us in our place. Given that the room was full of people who had only come to discuss the GDPR and children, I am not sure this got us off to the best possible start.

Moreover, although it is clear that, in future, DPAs are going to be major players in the online children’s space, the DPA world is not famously over-endowed with people with a track record of engagement with children and cyberspace. You might equally say the reverse is also true, i.e. the online children’s world does not have an abundance of privacy experts, although Jeff Chester and Kathryn Montgomery had come over from the USA and they were energetic participants in the debate.

What was refreshing, however, was that there seemed to be a genuine openness on the part of all the participants to try to work out both what the law meant and how it might best be applied in the interests of children. We all need to build on that.

Article 8 and the age limit – sharp intake of breath

There was one particular matter which caused a major and sharp intake of breath across the whole room. I will try to explain it.

At the moment there is a de facto lower age limit which applies to online services. In most, not all, EU Member States it is 13. That is the age at which a young person may decide for themselves whether or not to hand over personal data to a commercial online service without that service having to obtain verifiable consent from a parent of the young person.

Back in 2012, when the GDPR process kicked off, the Commission proposed to make 13 the legally defined lower age limit in every country. No exceptions would be allowed. Countries such as Holland, Spain and the UK would have had to change their laws to accommodate this but at least there would be EU-wide uniformity.

It all went wrong.

At the last minute (December 2015) politicians rebelled. Based on no advice or research of any kind, we ended up with a new default age limit of 16, but each Member State was given a choice to opt for a lower age limit as long as it was not below 13.

We moved from having a single age to having up to four. That alone will create all manner of complexities, but on Friday we learned the following:

1. If a service is based in a country which has decided its minimum age is 13 it may be the case that this will be the operative age for that service in every EU Member State. This is consistent with the “country of origin” principle which applies in other contexts. By implication I guess it would also apply to non-EU Member States but let’s stick with the EU for now.

Google and Facebook are both registered in Ireland. Thus, if this interpretation of the law holds and Ireland opts for 13, in effect nothing will change in relation to age limits anywhere within the EU for any of the services these companies provide.

It is likely that any service in a country that opts for 16 will at least consider moving to another country that chooses 13. Article 8 would become a laughable dead letter and people would be left wondering what all the fuss was about.

The COPPA law in the USA establishes 13 as the minimum age in every country in the world where a US company operates. Where an individual jurisdiction creates a higher threshold – e.g. Spain, which opted for 14 – the US companies will typically honour that rule. If the above reading of the law turns out to be right, the EU could be taking a different tack. It could be saying that local decisions count for nothing: all that matters is the age limit in the country of origin.

Remember – this is not definite. It could be that every service has to adapt to the age limit agreed upon in each Member State. Yet that is quite a gulf in interpretation. What is remarkable is that such a gulf exists at all – with 11 months left.

2. However, apparently, if a service sets up in Country A, where the age limit is 13, and targets people in Country B, where the age limit is, say, 16, some Member States are arguing that this could be construed as “cheating” and so won’t be allowed. Go figure.

3. In addition, there is seemingly a possibility that if, for example, a service was targeting French citizens, then irrespective of where they lived the age limit should be whatever France had decided upon. Again, go figure.

4. The final possibility is that there could be some permutation of these. Ditto.
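To make the gulf between these readings concrete, here is a small sketch in Python. Everything in it is illustrative: the country ages are hypothetical choices within the permitted 13–16 range, not actual Member State decisions, and neither function represents settled law.

```python
# Illustrative only: models two competing readings of GDPR Article 8.
# The ages below are hypothetical derogations, not real decisions.
ARTICLE_8_AGES = {
    "IE": 13,  # country of establishment for Google and Facebook
    "FR": 15,
    "DE": 16,
}
DEFAULT_AGE = 16  # the GDPR default where no derogation is made


def age_country_of_origin(service_country: str, user_country: str) -> int:
    """Reading 1: the limit of the country where the service is based applies everywhere."""
    return ARTICLE_8_AGES.get(service_country, DEFAULT_AGE)


def age_country_of_user(service_country: str, user_country: str) -> int:
    """Alternative reading: the limit of the user's own Member State applies."""
    return ARTICLE_8_AGES.get(user_country, DEFAULT_AGE)


# A service based in Ireland serving a user in Germany: the two readings
# give different answers for the very same child.
print(age_country_of_origin("IE", "DE"))  # 13 under reading 1
print(age_country_of_user("IE", "DE"))    # 16 under the alternative
```

The same user, the same service, and a three-year difference in the applicable age depending purely on which interpretation prevails – which is the gulf described above.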

To repeat my earlier point: the fact that nobody from a DPA, the EDPS or the Commission could say what the position was in respect of some of these questions is…


There is continuing uncertainty about the rules on “profiling” as they affect children. The GDPR says

Controllers must not carry out solely automated processing, including profiling, that produces legal or similar significant effects (as defined in Article 22(1)) in respect of a child.

Quite what is a “legal or similar significant effect”? Answer came there none, yet here is an issue of enormous significance to the business models of probably every online service of any size.

Age verification

Doubts and reservations were expressed about the potential for age verification to be part of the solution to any of the unresolved challenges. Questions were raised about its technical feasibility, particularly in a transnational environment; its potential to conflict with the principle of data minimisation or to restrict children’s rights unlawfully; and the possibility of it becoming a stalking horse for governments and the security services to exercise even greater control over their citizens.

These are all self-evidently important questions. For my own part, I believe it is both inevitable and desirable that we will make greater use of reliable, trustworthy, privacy-compliant, privacy-enhancing age verification solutions that work entirely to the benefit of children and young people.

Article 35

This was not discussed at any length but it was acknowledged that it has potential in respect of children. Here is the text:

Where a type of processing in particular using new technologies, and taking into account the nature, scope, context and purposes of the processing, is likely to result in a high risk to the rights and freedoms of natural persons, the controller shall, prior to the processing, carry out an assessment of the impact of the envisaged processing operations on the protection of personal data. 

At the very least my hope, or expectation, is that every business providing relevant services will need to consider whether what it is about to provide could constitute a high risk to some or all of its users. That implies that it must know who its users are, what the risks might be and what steps it ought to take to mitigate or eliminate those risks.

Definition of a child

The only point where there seemed to be little doubt – although even here there was some – was the definition of a child. The consensus seemed to be that a child is any person under the age of 18 but, unless and until someone – a DPA, the EDPS or, eventually, a court – rules on it, we cannot be completely certain. This is acutely important because of the prevalence of profiling.

Size isn’t everything

At least one voice was raised in defence of small start-ups. How can they be expected to do all this stuff? Or even know about it? Suffice to say there was not a whole lot of support for this position. 1995 is, er, 22 years ago. The internet has moved on. As one perceptive commentator put it:

If two youngsters set up to manufacture teddy bears with glass eyes which could easily be detached and swallowed by a young child, causing that child to choke to death, we wouldn’t feel a great deal of sympathy for them or excuse them because they didn’t know about the dangers or the applicable rules. Why should it be any different here?


Posted in Age verification, Default settings, Facebook, Google, Internet governance, Regulation, Self-regulation

The Queen’s Speech

As most readers will know, the British Parliamentary year begins with the “Queen’s Speech”, delivered immediately following a General Election and then usually annually until the next one. Her Majesty sits on the throne in the House of Lords and reads a speech which has been prepared for her by the Prime Minister.

The speech sets out the government’s legislative programme for the next twelve months or so and, immediately following an election, that programme usually reflects what was in the winning party’s manifesto. Things are a little bit different this time because no party won a majority and we have a minority government.

Politics are going to be very fluid and exciting in the period ahead. I suspect a lot of things are going to have to be negotiated both within and between parties before they take shape as a finished legislative proposal. The consumption of Valium among civil servants is bound to increase and, all in all, there is going to be a great deal of uncertainty.

Anyway, today Her Majesty delivered the speech and, unsurprisingly, mostly it is about Brexit. The word “Internet” appeared only once and that was in connection with terrorism. The word “children” did not appear at all. However, in the briefing notes which were issued with the text of the speech there is a major mention of a Digital Charter, and within the Charter issues of concern to children’s online safety feature prominently. The fact that the Charter is described as a “non-legislative measure” may raise eyebrows but let’s wait and see how it develops.

Here is the relevant text:

“Proposals for a new digital charter will be brought forward to ensure that the United Kingdom is the safest place to be online.”

  • We will develop a Digital Charter that will create a new framework which balances users’ and businesses’ freedom and security online.
  • The Charter will have two core objectives: making the UK the best place to start and run a digital business and the safest place in the world to be online.
  • We will work with technology companies, charities, communities and international partners to develop the Charter; and we will make sure it is underpinned by an effective regulatory framework.
  • We are optimistic about the opportunities on offer in the digital age, but we understand these opportunities come with new challenges and threats – to our security, privacy, emotional wellbeing, mental health and the safety of our children. We will respond to these challenges, assuring security and fairness in the new digital age and strengthening the UK’s position as one of the world’s leading digital economies.
  • We strongly support a free and open internet. But, as in the offline world, freedoms online must be balanced with protections to ensure citizens are protected from the potential harms of the digital world. We will not shy away from tackling harmful behaviours and harmful content online – be that extremist, abusive or harmful to children. And we will make sure that technology companies do more to protect their users and improve safety online.
  • Many of these challenges are of an international nature, so we will open discussions with other like-minded democracies and work with them to develop a shared approach. The Prime Minister has already started this process, securing an agreement with G7 countries to strengthen their work with tech companies on this vital agenda.
  • Britain’s future prosperity will be built on our technical capability and creative flair. Through our Modern Industrial Strategy and digital strategy, we will help digital companies at every stage of their growth, including by supporting access to the finance, talent and infrastructure needed for success and by making it easier for companies and consumers to do business online.

Key facts

  • Working with partners, we have supported robust action to tackle harmful material posted online:
    • The Internet Watch Foundation has shared information on approximately 35,000 indecent images of children with six major tech companies (Facebook, Microsoft, Twitter, Yahoo, Adobe and Google) so they can remove them from their services;
    • The Police Counter-Terrorism Internet Referral Unit has secured the removal of over 270,000 pieces of terrorist-related content since its creation;
    • The Digital Economy Act introduces protections for children from seeing adult material online.



Posted in Internet governance, Pornography, Regulation, Self-regulation

A trusted internet?

A joint blog from John Carr and Dr. Mike Short

The dominant technology

The Internet has become the dominant technology of the 21st century. There are now over 3 billion users worldwide and a vast array of services. The huge expansion in the number of smartphones and other portable devices, set alongside the growth in the associated infrastructure to allow ubiquitous and continuous access, has made it very difficult to imagine a world where the Internet does not play an absolutely central role. No online information or ordering? No messaging, social media, photo sharing or remote banking? We all have a stake in ensuring the internet’s continuing success.

With problems

Yet there can be little doubt that, because of a series of problems of different kinds, a great many governments and other social institutions are beginning to raise questions which undermine the wider legitimacy of the Internet. They say it appears to be above and beyond the reach of the law or any kind of control, and that this is facilitating serious harms without any real or believable sign of an end to it. To put that slightly differently: we may be reaching a point where people are beginning to wonder if the price we are paying for the internet as we now know it isn’t just a tad too high.

Far-reaching consequences

This could have far-reaching consequences. It will be hard for consumers, investors, parents, teachers and others to feel they can truly trust or get behind a medium that is being routinely trashed because of the way it seems to provide succour or comfort to terrorists or, for that matter, fraudsters, child sex abusers, hackers, copyright thieves or dark forces wishing to manipulate elections.

Marking your own homework?

The internet industry has not been able to come up with a convincing or reassuring narrative. Here is the first big problem: hitherto, companies have essentially been marking their own homework. Whatever they might say about how they plan to address any challenges, it will be discounted or disregarded as self-serving propaganda or marketing which is unlikely to tell the whole story.

Hybrid institutions require hybrid solutions

Internet businesses self-evidently are privately owned entities yet they are performing public functions which can have very public consequences. Internet businesses have therefore become a new kind of hybrid institution. That has inescapable consequences which call out for hybrid solutions. Many businesses are born abroad and may, therefore, have different approaches or attitudes, but none should be above the law and the law should not be seen to be impotent or ineffective.

The absence of transparency, and of an independent and trusted voice, risks leaving policy makers to listen and respond to the beat of different drums, where ultimate success may depend on no more than how good a player is at lobbying or orchestrating a media campaign. This must change.

Change starts here?

The UK is as good a place as any to map out or describe what that change might look like while recognising always that, ultimately, for any strategy to succeed it must receive substantial backing from and be adopted by a broad range of national governments and international institutions.

How might such a strategy emerge? Nine years ago Prime Minister Brown established the Byron Review to look at online child protection. David Anderson QC has looked at how terrorism is being dealt with. We have had a variety of welcome and highly focused studies, and there have been more industry roundtables and Select Committee Reports than you can shake a stick at.

Something new?

What is needed is something altogether different, more securely based and properly resourced, with acknowledged high-level experts drawn from Government, Opposition Parties, industry, academia and civil society circles, particularly from within the free speech and digital rights communities. A new institution, but not necessarily a permanent one. Perhaps an Interim Commission with legal powers to require information from any interested party?

One of the matters the Interim Commission might look at urgently is how best to manage these issues in the longer term. Should some new, permanent body be established which either involved or took powers from Ofcom, the ASA and the ICO, or would some other arrangement be likely to work better?

There is a sense that the current distribution and division of responsibilities for online affairs is sub-optimal and has grown up in a rather ad hoc fashion. It is time to take a fresh look and try to create a new institutional framework which enjoys widespread support.

This does not imply that everyone would necessarily have to agree with or sign up to any and every outcome of decisions made by such a new body but there needs to be a consensus about the reasonableness of the ways in which decisions are made.

Determine its own agenda

It would be wrong to try to require the sort of body we are suggesting to give priority to any particular issue. It would need to take a view on that itself, but a good way of getting started might be for it to look at some of the pressing matters facing law enforcement and their relationships with social media platforms. Therein may lie a microcosm of the bigger picture.

Perhaps, given the rapid pace of change and to emphasise the urgency of its mission, we should call the body we have in mind Internet Commission 2020. It would be a forum that can bring industrial and policing needs and solutions to the table for discussion and resolution, and not presume new laws are needed. It should be pro-safety and pro-innovation, founded on cooperation for a better internet. But the pace of change requires us to get moving on this strategy immediately. The future starts tomorrow.


Posted in Default settings, Facebook, Google, Internet governance, Regulation, Self-regulation

Looking for a new name

I have just read the draft budget produced by the Internet Corporation for Assigned Names and Numbers, otherwise known as ICANN. 

The new draft relates to 2018. Assuming it is approved, what it shows (page 11) is that in 2018 ICANN’s total operating budget will be around US$146 million.

How will the US$146 million be financed? That’s easy. On page 15 we see there are two principal sources.

Registries will contribute 88.1 million – 57.3 from transactions and 30.8 from fees.

Registrars will contribute 51 million – 36.9 from transactions and 14.1 from fees.

In last year’s accounts there was a set of numbers which showed that, of the total income, fully 40% was derived from one particular Registrar (GoDaddy) and one particular company that owns two major Registries: Verisign, proprietor of .com and .net.

As this is a draft budget a similar estimate has not been included although, presumably, it will be when the accounts for 2018 are eventually published. If anything it is likely to show an even greater concentration around those two entities because (page 15 again) the number of Registrars is projected to fall from 2,989 to 2,241.
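The quoted figures can be checked with a little arithmetic. This is a trivial sketch, using only the amounts (in US$ millions) and counts reported on the pages cited above:

```python
# Component sums from ICANN's draft 2018 budget, in US$ millions.
registries = round(57.3 + 30.8, 1)   # transactions + fees
registrars = round(36.9 + 14.1, 1)   # transactions + fees
fee_income = round(registries + registrars, 1)

print(registries)   # 88.1
print(registrars)   # 51.0
print(fee_income)   # 139.1, i.e. most of the ~146 operating budget

# Projected fall in the number of Registrars (page 15):
decline = round(100 * (2989 - 2241) / 2989, 1)
print(decline)      # 25.0 - a drop of about a quarter
```

So registry and registrar fees together account for roughly 139 of the 146 million, and the registrar base is projected to shrink by about a quarter, which is what drives the concentration point made above.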

Small margins in a cut-throat business favour larger businesses. In economics this is often referred to as capitalism’s tendency towards monopoly.  Monopolies always work in favour of the monopolist. Rarely do they work to the advantage of the consumer, which is why we have regulators to stop them from happening or break them up.  If the number of Registrars continues to decline at a similar rate who will call a halt here?

Anyway, this brings me back to my headline. Can we think of a new and better name for ICANN? One that more accurately reflects what it actually does?

It would need to be a name that recognises the importance and complexity of ICANN’s mission but also acknowledges that the whole shooting match floats atop a sea of money provided from a very narrow base.


Posted in ICANN, Internet governance, Regulation, Self-regulation

Uncertain times ahead

This blog is really a continuation of my last one but written in the wake of the terrible events in Manchester and the further revelations about the scale and nature of the terrorist material made available through various online networks.

Before getting into this, bear in mind – if it’s not too obvious – that terrorist material and terrorist-related activity are not the only types of online content and conduct that cause concern. In the debate that is bound to take place in the uncertain times ahead, several agendas will be in play.

Now let’s imagine a great many governments around the world finally become convinced that, despite their very best, extremely expensive and completely sincere endeavours, it is, in fact, impossible for social media platforms and other types of online services to ensure they are not exploitable by seriously malevolent forces.

A view forms that, under prevailing conditions, internet companies will never be able to employ enough human moderators or develop sufficiently smart AI systems to ensure the level or nature of the criminality taking place via their services is negligible or manageable either by them or by law enforcement.

In other words, a consensus develops that one part of the brave experiment that was the old internet has failed and it’s time to rethink. User-generated content would disappear.

Or rather it would be put into hibernation until new platforms emerged which (a) accepted they were publishers and therefore (b) would only allow content to appear on their public spaces when they were satisfied it did not break any laws or incite or facilitate the breaking of any laws and/or that the real world identities and contact details of the publishers had been reliably verified. Mere conduit status would be severely restricted although it need not disappear completely.

What might happen next? Can we picture the internet without user generated content? Such an internet would be very different from the old one – the one we know today – and be a very much poorer one. The transition would also be uneven and messy.

However, if Amazon were still there, along with Tesco online, Expedia, Ryanair, Trip Advisor, PayPal, eBay, Netflix, iTunes, Spotify, the British Museum, Google, Bing and our favourite artists’ and sports teams’ websites, and if businesses could still communicate with each other and sell us stuff, the vast majority would probably find a way to rub along. Free speech would not have died but it would operate within quite different parameters.

What might prompt such a calamitous chain of events? It’s not hard to come up with an answer to that question, even though we will all earnestly hope the day never arrives.


Posted in Facebook, Google, Internet governance, Regulation, Self-regulation, Uncategorized

The Facebook files

There are astonishing and detailed revelations in today’s Guardian about the scale of abuse on the Facebook platform. They are based on a major leak from someone who was obviously once on the inside track – and maybe still is.

Seemingly, in January of this year Facebook had to assess 54,000 reports of alleged revenge porn and “sextortion”. 14,000 accounts related to these complaints were disabled. Apparently, 33 cases involved children. If these numbers are not bad enough, just think how many abuse reports Facebook might have received in the same month which were categorised under other headings.

Let’s not leap to any conclusions. Leaks are not always the most reliable way of learning the truth, the whole truth and nothing but the truth. Little things like context also matter, but set against Facebook’s historic tradition of near total opacity about its inner workings, the Guardian piece could quickly become the received wisdom.

Less than three weeks ago Facebook “revealed” it was going to recruit a further 3,000 moderators to add to the 4,500 we are told were already in place. There will now be the faintest suspicion that the company knew about the leak and was trying to get its retaliation in first because, up to that moment, Facebook had steadfastly refused ever to disclose how many people it employed as moderators.

Of course, we do not know when Facebook will reach 7,500 moderators – recruiting and training them will take time – and as one of their senior staff commented “there is not necessarily a linear equation between the number of moderators employed and the effectiveness of the moderation function.” Quite so.

In other words without knowing a great deal more, 7,500 may turn out to be a wholly politically determined number that bears no relationship whatsoever to what is needed.

As soon as the UK elections are over I would say social media platforms are in for a tough time. And once that political ball starts rolling there is no knowing where it might end up.

My guess is there’s more to come from The Guardian. Watch this space.

Posted in Facebook, Google, Internet governance, Regulation, Self-regulation

GDPR – Chapter 724

The UK’s children’s organisations have submitted a detailed note to our data protection authority – the Office of the Information Commissioner. I won’t try to summarise it here. You can download it from the CHIS website. If you have any comments or suggestions please let me know.

In Brussels earlier this week, at the ICT Coalition, a few other matters arose that I had missed. One is important and obvious; the others were just important!

Obvious and important: we are all fixated on May 2018 but that is not a once-and-for-all date. In May 2018 the GDPR comes into effect. This means that, unless a Member State has already “derogated” from a provision where derogation is allowed, e.g. Article 8, all of the defaults will apply automatically. A country could, of course, still revisit a particular item afterwards and change its domestic law. I guess the only point is that, between now and May 2018, everybody with an interest will or ought to have their heads down as they try to get ready for the big bang. Thus I am not sure there will be any real appetite to reopen anything to do with the GDPR anytime soon after May 2018. This means May 2018 remains an incredibly important target date.

Other points

If a company sets itself up in a jurisdiction within the EU where the age of consent is, say, 13, can it apply the (traditional) “country of origin” principle and, in effect, make 13 the standard in every jurisdiction where it operates? I doubt it, otherwise what would have been the point of allowing Member States to choose in the first place?

Similarly, where the service is provided from outside the EU, what latitude will there be to ignore the rules, or to have terms and conditions or operational principles which would not be permitted if the business were within the EU? I think here the answer is “none” because somewhere there is a provision which says data can only be transferred across national boundaries if the third party has a data regime which is consistent with that prescribed by the EU in the GDPR. An age limit of 11 or 12, for example, would therefore not be allowed unless verifiable parental consent was obtained.

Within a classroom, if a teacher wishes to use a particular online resource in the course of a lesson, is it necessary for every child in the class to have given permission for that specific site to be displayed, or would a general consent be sufficient? What happens if such consent has not been given? Is the teacher allowed to proceed or must the child be excluded for the duration?

And if the children are below the age of data consent and their parents refuse to give permission?

The GDPR does not change or affect any rights children may have which are not linked to an “information society service”, and it has been suggested that, for example, the “right to be forgotten” encompasses both online and offline environments.

Posted in Age verification, Consent, Regulation, Self-regulation