Read this and weep

Hot news: on Thursday evening in Brussels officials working on the draft ePrivacy Regulation agreed they were at an impasse. They cannot reach a consensus on the text so the matter has been referred “upstairs” to see if the outstanding issues can be resolved at the political level, i.e. by national governments.

One of the outstanding issues concerns the use of PhotoDNA and similar tools which can detect child sex abuse material moving across a network or being stored on devices connected to a network. PhotoDNA is currently being used extensively and it works spectacularly well. It represents the single most important advance in eliminating child sex abuse material since the internet began. The draft makes it illegal.
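For readers unfamiliar with how such tools operate, the workflow can be sketched very simply. PhotoDNA's actual algorithm is proprietary and uses a perceptual hash designed to survive resizing and re-encoding; the sketch below substitutes a plain cryptographic hash-list comparison purely to illustrate the flow, and every name and value in it (the hash list, the function, the sample bytes) is hypothetical.

```python
import hashlib

# Toy sketch of hash-list matching. Real PhotoDNA computes a proprietary
# perceptual hash resilient to resizing and re-encoding; here an exact
# SHA-256 match against a hypothetical known-bad list illustrates the
# workflow: hash the upload, compare, flag for deletion and reporting.

# Hypothetical hash list of the kind supplied by a hotline or clearing house.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"known-bad-image-bytes").hexdigest(),
}

def should_block(upload: bytes) -> bool:
    """Return True if the uploaded content matches the known-bad list."""
    digest = hashlib.sha256(upload).hexdigest()
    return digest in KNOWN_BAD_HASHES

print(should_block(b"holiday-photo-bytes"))    # False
print(should_block(b"known-bad-image-bytes"))  # True
```

The point of the sketch is that the platform never needs to "read" anyone's communications in any ordinary sense: content is compared against a list of digests of material already confirmed to be illegal.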

The only way I can explain the otherwise inexplicable is that privacy people are taking an exceptionally narrow, blinkered view of what privacy means. This is limiting their ability to see the absurd results of their decisions, in this case profoundly damaging consequences for children’s safety and well-being across the 28 Member States.

Children’s advocates across the European Union need to get busy. We must lobby national Governments to include in an Article (not simply a Recital) a specific exemption to allow businesses to deploy or continue deploying tools which are designed to protect children.

At the moment only officials from Ireland, the UK, Portugal, the Czech Republic and Belgium are speaking out directly in a way which is favourable to children’s interests. The rest appear to be opposed or are silent, which amounts to the same thing. The Austrian Presidency is being singularly unhelpful. This matters precisely because they hold the Presidency. Member States look to them for a lead. If you know any Austrian Members of Parliament or Ministers, get on the phone straight away.

The draft allows Member States to opt out of the part of the Regulation that prohibits the use of PhotoDNA and similar tools. This would inevitably lead to a patchwork of laws. That could, effectively, cripple operations.  And, to repeat, what the Regulation is proposing is to disrupt systems that are already in place.

What is the EU saying anyway? Children in Britain, Ireland, Portugal, the Czech Republic and Belgium can be protected but kids in other jurisdictions just have to hope their national legislative bodies get their act together? How does that square with other lofty statements about the importance the EU attaches to protecting children?

How does it square with the 2011 Directive which requires every Member State (no discretion or variation allowed) to have machinery to remove discovered child sex abuse material? How does it square with the UNCRC and the Declaration of Fundamental Rights? States have obligations to protect children. They do not have a discretion to prevent companies from protecting children.

Meanwhile, in other hot news UK MEP Mary Honeyball is tabling a question in the European Parliament asking officials to document how and when experts on online child safety and officials within the Commission with a brief to look after children’s interests were consulted on the content of the draft Regulation. I think I already know the answer. They weren’t.

Is this how it was for Woodward and Bernstein?


Today practically every law enforcement agency in the world is saying they cannot cope with the volumes of child sex abuse materials circulating on the internet. They desperately want tech companies to help stem the tide. If we hear criticisms these are generally grumbles from governments, the police or civil society about the private sector not doing enough. It is therefore strange beyond words to hear objections to companies doing too much to try to make the internet safer for children.


Almost universally the task of making the internet a better place is seen as a joint endeavour. We even have a word for it: “multistakeholderism”.

The European Union in particular has been an extremely vocal and energetic proponent of multistakeholderism but it is becoming apparent that the message has not been filtering down or taking hold evenly in all parts of the Commission and its associated agencies.

Who gave you permission to do that?

The argument focuses on PhotoDNA but the logic extends way beyond that single product. No one is suggesting Microsoft did anything illegal when, unilaterally, it developed PhotoDNA and decided to give it away to companies or agencies with a legitimate interest in locating, deleting and reporting child sex abuse material. The fact I am even writing a sentence like that tells you something about how ridiculous this whole thing has become.

While nobody is accusing Microsoft of breaking the law, in effect what is being said is that because there is no EU or national law which specifically mandates or specifically allows PhotoDNA to be deployed, it really has to stop.

In some countries citizens can only do what the law expressly allows. In others, citizens can do anything the law does not prohibit. I know which approach I prefer. Up to now, deploying tools to protect children has never been illegal. Let’s keep it that way.

If an individual company can be shown to be using tools ostensibly designed to protect children in order to gain access to metadata, or anything else which, otherwise, they would not be able to access, and they then exploit the information thus obtained for commercial purposes, they should be brought before a court and severely punished. But absent that we should leave things as they are.

Bungling again

Signor Buttarelli has made clear he is worried that if an agreed text for the Regulation is not finalised swiftly then all will be lost when the current Parliament dissolves ahead of the next European elections. Having to start again with a new Parliament and perhaps a new Commission clearly holds little appeal.

Remember what happened when the GDPR was coming through? On the little matter of the age limit for children to give consent without online businesses having to engage with obtaining verifiable parental consent, the privacy priesthood inside and outside the Commission united behind a single proposition. They said there should be one age, 13, which should be applied without exception in every Member State.

They presented no evidence in support of it, did no research to justify it, simply believing they could steamroller it through, hoping that, after four years of discussions, politicians would say “enough already” because they wanted to move on to something new.

At the very last minute politicians bit back. We ended up with a patchwork of ages. And now, guess what? We are hearing across the EU that data protection authorities are astonished to learn how many companies are abandoning the idea of obtaining consent altogether, including for children, instead using legitimate interests or contracts as the basis of their engagement.

While constantly talking about the need to engage with parents, in effect, businesses have intentionally cut parents out of the loop. By a back door a minimum age of 13 has been created EU-wide. Politicians have been thwarted by a sleight of hand. Article 8 is becoming a dead letter.

Privacy leaders need to start engaging with the children’s agenda

What that story illustrates is that, notable and honourable exceptions aside, thinking through how to address children’s interests is not high on the list of priorities of the leadership of the privacy world.

That has to change.

Posted in Child abuse images, Default settings, E-commerce, Internet governance, Privacy, Regulation, Self-regulation

Think again Signor Buttarelli and friends

Most people with an interest in the field will now know the Article 29 Working Party no longer exists. It has been replaced by the European Data Protection Board (EDPB). The EDPB is an independent body made up of representatives of the national data protection authorities (DPAs), each of which, in turn, is an independent body (or should be) within its own jurisdiction.

The EDPB is meant to be a vehicle which facilitates co-operation and collaboration between DPAs with a view to promoting greater consistency between jurisdictions that are anyway supposed to be applying the same rules and regulations. At the moment the Chair of the EDPB is Andrea Jelinek of the Austrian DPA.

The Board’s secretariat is provided by the European Data Protection Supervisor (EDPS), Signor Giovanni Buttarelli. Buttarelli is also the EU’s principal official with responsibility for ensuring, inter alia, that EU institutions observe their own data privacy laws. In addition he is the principal source of advice to all EU institutions on privacy policy and can appear in the Court of Justice of the European Union to provide expert advice on how to interpret the EU’s privacy laws.

He’s a big shot

You get the picture? If the world of privacy in the EU was a pyramid Buttarelli would be sitting on the pointiest part. A prince among princes, or perhaps the High Priest of a newly emerging priestly class would be a more apt metaphor.

But he is child-blind

Eleven days ago Buttarelli posted a blog. It addresses the draft ePrivacy Regulation that is currently chugging its way through EU institutions.  Buttarelli is clearly losing patience with the political shenanigans that he thinks are delaying and therefore jeopardising his carefully crafted text. I will return to that blog in a moment but first let me list a few words that do not appear in the blog at any point: child, children, youth and young.

Déjà vu all over again

The way the GDPR evolved and was passed into law very visibly demonstrated how little the privacy community understood or had troubled to engage with the position of children as internet users or data subjects. Yet for all the tears that were shed and ink that was spilt before and since the GDPR was adopted, for all the howls of protest, not least coming out of the UK where, via the Kidron Amendment, the GDPR was expressly adapted to make it more child-oriented, Signor Buttarelli remains distant. In his expansive blog he does not even trouble to refer to minors.

But in his blog he did say this by way of a general comment:

… is unfair and economically unsustainable to expect controllers providing electronic communications services to be subject to a patchwork of (nationally based) data rules….

Hear hear

Why does any of this matter to children?

One of the things the draft ePrivacy Regulation appears to do is make it unlawful for companies to use or continue using tools such as PhotoDNA to locate child sex abuse material. This is a necessary precursor to securing its deletion, reporting it to the police and thereby commencing a search for the victim depicted.

How do I know this? Because I asked a lawyer who was involved in the drafting. She acknowledged, in effect, that the draft did outlaw PhotoDNA and similar tools: her counter was that it was open to any Member State to derogate from that part of the Regulation.

Derogation inevitably leads to a

patchwork of nationally based data rules

Further comment is almost superfluous but, for the avoidance of doubt, if it is

unfair and economically unsustainable

for companies to be made the subject of a patchwork I can tell Signor Buttarelli in this case it is definitely not good for children either.

Last Friday’s meeting

I understand that at a key meeting last Friday Commission officials defended their draft arguing that Recital 26 met any concerns people might have about the use of PhotoDNA or similar tools.

First of all, a Recital is not the law; secondly, see above. This takes us into patchwork territory. Cui bono? Not children.

Seemingly Commission officials also referred to an EU Directive of 2011.  This is the Directive on  combating the sexual abuse  and sexual exploitation of children and child pornography (sic). That is a mystery. There is nothing anywhere in that Directive which addresses the deployment of tools such as PhotoDNA, certainly there is nothing which mandates or permits their use.

There is, however, Article 25(1), which obliges every Member State to establish mechanisms to facilitate the removal of “web pages” (?) containing or disseminating child pornography (sic). Article 25(1) permits no variation. No patchwork. This is the Article which is the legal basis for hotlines. Every EU Member State has at least one.

In his own words

Buttarelli’s blog opens with this passage

A swarm of misinformation and misunderstanding surrounds the case for revising our rules on the confidentiality of electronic communications, otherwise known as ePrivacy. 

The answer, therefore, is clear. Signor Buttarelli should write another blog explaining how nothing in his proposed ePrivacy Regulation will prevent businesses from deploying tools such as PhotoDNA, neither will it permit any patchwork of laws governing their use. In short he should explain how it will make things better for children or at least not make them any worse.

Posted in Default settings, E-commerce, Internet governance, Privacy, Regulation, Self-regulation

Facebook and the power of technology to do good

If you read the blog I posted yesterday you will see that the European Union is discussing how to make it harder for companies to engage proactively in removing child sex abuse materials from their platforms. Today Facebook shows us just how powerful a tool technology can be in doing precisely that. I trust this will send a strong message.

Big numbers

On their global platform in Q3 2018, that’s the three month period ending on 30th September, Facebook detected or received reports on 8.7 million pieces of content which violated their policy on child nudity or child sexual exploitation and were removed. 99% of these were detected without having been reported. This does not necessarily mean the images were gone even before anyone other than the originator saw them, but that could be the case.

Not all of the images would be illegal but…

Because Facebook has such a strict policy on nudity it is likely not all of the images removed would be illegal under English or US law; consequently not all of these would have been reported to NCMEC, nor would they necessarily have been found to be illegal by the IWF.

A great many are likely to have been within the “grey areas” as  Jutta Croll and her  colleagues in Germany have been saying for some time, thus proving there are things that can be done to address a problem many thought was impossible to get at. Gut gemacht.  Even so it is likely a substantial proportion of the images Facebook removed were illegal in lots of jurisdictions.

It doesn’t stop there

Facebook also announced today they are developing software to help NCMEC prioritize  reports they pass on to law enforcement for investigation. The idea is to give the police the heads up on which are more likely to be linked to the most serious cases. And Facebook and Microsoft are teaming up to create tools which will be made available to smaller firms to help reduce bad or illegal behaviour.

With an echo of Google’s earlier announcement about its “image classifier” Facebook too is developing ways to help find images that have not previously been seen by human eyes and determined to be illegal or contrary to a platform’s policy.

Technology solving problems technology has created

Facebook tells us only 1% of the images they removed were the product of a report made by a human. Google told the same story when they released similar data. Technology is solving a problem technology has created.

When you set these sorts of numbers and percentages against what has been achieved by systems which rely on humans making reports you don’t have to be Einstein  or a weather forecaster to see which way the wind is blowing. Project Arachnid hints at what the world might look like soon.

And when you read about the terrible distress suffered by the poor souls who work as content moderators, you must hope companies will bring this automation on as quickly as possible.

Expecting people, too many likely living in far flung corners of the planet, probably employed on minimum wage rates perhaps with little local support, to have to sift out and remove the most gross, violent or revolting portrayals of the worst aspects of some human beings’ behaviour, is hard to defend.

Yes we need safeguards and clear parameters to govern machine driven processes.  AI is never likely to be 100% perfect so humans have to stay involved to resolve the marginal cases and maintain the system. Yes we need genuine transparency so we are all reassured things are working as intended. But today all I can say is well done Facebook.


Posted in Facebook, Regulation, Self-regulation, Uncategorized

The EU is on the edge of making another child safety blunder

On Friday of this week there will be a meeting, I assume in Brussels, to discuss progress on the draft ePrivacy Regulation currently making its way through the EU’s machinery. If, when you have read this blog, you feel so minded, please contact someone in your Government to let them know what you think they should say at that meeting. Remember what John Stuart Mill (and many others) said (paraphrasing):

For bad stuff to happen all it needs is for good people to do nothing

I was contacted by several good people who did not want to do nothing. They told me

The EU is about to pass a law which will make it illegal for online businesses to deploy  or continue deploying PhotoDNA and similar tools to detect child sex abuse material as part of a process which leads to its deletion and reporting to the police.

It is probably best if I do not repeat the precise words I used when I heard this. Suffice to say I expressed profound scepticism.

I mean so far this year, for example, NCMEC in the USA has received over 13 million notifications of csam and around 99% were generated thanks to PhotoDNA. By the end of this year the number is likely to be over 20 million, and these images will have been located and deleted before anyone reported them! This does not necessarily mean they were deleted before anyone other than the originator had actually seen them, but it may well be so.

Who could possibly want to stop something like this from carrying on?

Nevertheless, mindful of my own injunction, I contacted several people and in the end spoke to one of the lawyers who is involved in writing the Regulation.

In essence I asked two questions

Did the drafters of the Regulation intend to outlaw, reduce or limit the scope for companies to continue their pre-existing practice of deploying PhotoDNA or similar tools which are designed to identify child sex abuse material in the form of videos or stills?


Irrespective of the intentions of the drafters, is such an interpretation of the current wording possible and reasonable?

The answer to both questions should have been a simple “no”. It wasn’t.

Proactive scanning for illegal, unlawful or prohibited content or behaviour is today a standard feature of a great deal of security-oriented activity. It can be extremely important, for example, in terms of analysing metadata to detect suspicious patterns of behaviour such as grooming.

For these reasons I thought the current wording was just sloppy, written by someone who had not thought it through. Any confusion surrounding the use of PhotoDNA and similar tools could be rapidly and easily cleared up.

Imagine my surprise, therefore, when yesterday into my inbox dropped  the proposal of the Austrian Presidency. It will be discussed this Friday. If you read it you will notice two things: it at least recognizes there is an issue with csam but it also appears to acknowledge that the Regulation does make the use of PhotoDNA illegal because it draws attention to the fact that Member States can derogate from that part of the Regulation.

There might be a good many reasons why a national Parliament cannot get a derogation  measure through their local machinery within a given time frame so what is to be gained by allowing a patchwork of laws to emerge around an issue of this kind?

In Article 25(1) of the EU Directive of 2011 a rule is stated in crisp, clear language. It says every Member State must have machinery to allow people to report csam. No exceptions. No variations. Today they all comply. That is how it should be when dealing with csam.

I guess the larger question is how can it be, yet again, that the EU has bungled and blundered its way into a mess like this? What is missing in their machinery that allows stuff like this to happen? Repeatedly.

The EU has rested on its online child protection laurels for too long. Increasingly the Brussels rhetoric and the reality are diverging but I will return to this at another time.




Posted in Child abuse images, Internet governance, Self-regulation

They both got it wrong

Post-GDPR the principal focus of recent discussions between ICANN and the European Data Protection Board (EDPB) has been on how entities that previously had access to WHOIS, and used it a lot, will be able to do so in the future.  ICANN has issued what it calls a  “temporary specification” reflecting its self-interested reading of the law.

Under the temporary specification some parties may, in fact,  no longer be able to access WHOIS at all or will only be able to do so after a lot of potentially time consuming and possibly expensive faff and faddle.

ICANN is also proposing that a system of “layered access” is put together to allow certain designated and accredited interests e.g. law enforcement, to retrieve data they think they need for their investigations. Good luck with that.  I wonder how many law enforcement bodies there are in the world? These days we usually replace complex systems with simpler ones. Here ICANN is proposing the exact opposite. I doubt it will work.

These discussions about access rights are clearly important and I will return to them, but first I want to argue that just as much attention needs to be given to ensuring WHOIS is accurate.  “Accurate” includes being up to date!

In the beginning….

From the very beginning of the internet its developers and “the community” had agreed that all the registrations of what would become known as web sites or domains had to contain the name, address and contact details of the individual or entity who owned or was responsible for the management of the site or service. These details had to be accessible to every internet user.

Largely this was to ensure if any problems arose it would be easy for other people on the network speedily to alert whoever needed to take action to put things right.  Clearly it was therefore important that this information was accurate and ICANN’s contracts with Registrars and Registries always had an accuracy requirement built in.

The aforementioned bits of contact data had to be entered in a prescribed form in a database we now know as “WHOIS”. As late as 2009 ICANN gave an ostensibly solemn undertaking confirming it accepted WHOIS should continue to run on the original lines. The fact that WHOIS had by then long since ceased to be accurate was conveniently overlooked but nevertheless the principle was retained as an objective. Solemnly.
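The accuracy requirement built into those contracts at least implies a completeness check at registration time. The sketch below shows the kind of test a Registrar could run before accepting a record; the field names are hypothetical (real WHOIS output varies by Registry), and completeness is of course not the same thing as accuracy, which would need out-of-band verification.

```python
import re

# Hypothetical contact fields a Registrar might require in a WHOIS record.
REQUIRED_FIELDS = ["name", "street_address", "email", "phone"]

def missing_fields(record: dict) -> list:
    """Return the required contact fields that are absent or blank."""
    return [f for f in REQUIRED_FIELDS if not record.get(f, "").strip()]

def looks_complete(record: dict) -> bool:
    """Presence check plus a crude plausibility test on the email field."""
    if missing_fields(record):
        return False
    return re.match(r"[^@\s]+@[^@\s]+\.[^@\s]+$", record["email"]) is not None

record = {"name": "J. Example", "street_address": "1 High St",
          "email": "j@example.com", "phone": "+44 20 0000 0000"}
print(looks_complete(record))                   # True
print(missing_fields({"name": "J. Example"}))   # ['street_address', 'email', 'phone']
```

Even a check this crude would have rejected a large fraction of the empty and junk entries that came to dominate the database; verifying that the details given are *true* is the harder, costlier step the next sections return to.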

The decline in accuracy

If there is one thing privacy law has always been clear about it is about the importance of stored personal data being accurate. However, the last time anybody looked, in 2012, only 23% of all WHOIS entries were fully accurate and maintained in the way they were supposed to be. In other words accuracy was the exception rather than the rule.

Law enforcement and rights holders say that access even to an inaccurate WHOIS can often provide valuable investigative leads, which is why they are keen not to lose what they previously had. My point is if WHOIS was accurate there would likely be less need to investigate anything or such investigations as might still be needed would probably be a lot easier, quicker and cheaper to complete.

Getting back to or creating an accurate WHOIS may now seem like a gargantuan challenge, but that is no reason to give up on it. Domain names are too cheap anyway. Some are being given away free. Hats off to the genius who allowed that to happen.

If prices have to rise by a few dollars to cover the cost of the extra checking involved, to confirm that the ownership and management information rendered to Registries and Registrars is accurate, the internet will not collapse. Innovation will not come to an end. Yes it would probably hit sales or renewals of domains. Some CEOs of Registries and  Registrars may need to postpone the purchase of their second yachts, but that is not a calamity. ICANN could still continue in all its pomp but the world would benefit hugely because there would almost certainly be a lot less online crime. Read on.

So what’s the issue with access rights?

The first internet domain was registered in 1985.  By the end of 1994 there was a grand total of 2,700.  They were slow to catch on at first.  Apparently, there are nearly two billion today, since you asked.

In October 1995 the EU adopted the first Data Privacy Directive.  It was a response to the growth in the automated processing of personal data linked to the arrival of large, commercial computer systems. Here there was rarely any doubt about where or how data had originated and who managed it.

In the early to mid-1990s, as the Directive was being prepared and passed, the internet in general and domains/web sites in particular barely existed in the public consciousness or in the consciousness of the European policy making class. They did not feature in any of the discussions which led up to the adoption of the Directive. It would be another year before Nominet, the UK’s ccTLD Registry, was established and three years before ICANN was founded following the death of Jon Postel.

Looks like things started going wrong around 2002/3

The 1995 EU Directive prompted the UK Parliament to adopt our Data Protection Act, 1998, more or less as it came out of Brussels. We were well behaved in those days.

Nominet had a think about the Act’s meaning and impact on their business. Similar discussions were taking place among other EU-based ccTLDs.

It appears that, internally, Nominet’s geeks wanted to comply fully with what they believed were the extant WHOIS conventions, namely to display the name, street address, email address, telephone and fax numbers of every .UK registrant, businesses and natural persons alike, plus those of their first cousins and next door neighbours.

Against that and about the same time a number of .UK registrants had started complaining to Nominet about the amount of unsolicited items they were receiving,  as spam and snail mail, some of which, they suspected, had been generated by scam artists who had been raiding WHOIS. The same people also complained to the ICO who came sniffing around in 2002/3.

Cursed by the narrow myopia of the specialist and perhaps feeling hemmed in by the context-free letter of the law, the ICO advised Nominet to modify its position, so it did. In this way WHOIS was fatally diluted, “blown over by a side wind” as English lawyers like to say. A new version of WHOIS emerged because of a legal accident.

I gather that, elsewhere and earlier, there had been some marginal experiments at concealing WHOIS data. These were now given a major boost, seemingly backed or mandated by law rather than mere profit-seeking whimsy.

On the face of it, it is hard to argue that information about someone’s name, address and email are not “personal data” but the public interest in preserving WHOIS  as an accurate and accessible database should have been accorded a great deal more weight. I am reminded of the excesses of “health and safety” zealots who take imaginative leaps on the most slender of pretexts, sometimes to disguise an undeclared motive.

True enough,  spam  and unsolicited snail mail were a pain particularly then for private individuals but, at least in respect of the online component, today almost every hosting package and email service, both personal and corporate, includes increasingly sophisticated anti-spamming tools. Thus, avoiding spam alone cannot have been, or should not have been, a sufficient justification for the radical, long term, wide ranging step Nominet, the ICO  and their confrères ushered in. It was completely disproportionate.

Nowadays, with so many alternative ways of publishing that do not need an individual to own or manage a domain, there is even  less justification for allowing the current state of inaccurate play to continue. I have never been wholly convinced there is a significant free speech element attaching to the “right” to hide your contact details but accepting there could be a small or residual one I would not be against the idea of allowing certain classes of sites to shield their contact information, at any rate from unrestricted public view. How such a system would be managed could be tricky but not impossible.

The fundamental point, surely, is that if you choose to establish a web site, you are stepping into a public space and certain things unavoidably follow, a fortiori where it is known that hiding contact details is likely to harm the public interest.

A Cambridge study

In his ground-breaking study published in 2014, Cambridge computer scientist Richard Clayton showed, inter alia, what was happening even among registrants who went to the trouble of using privacy and proxy services (as opposed to just lying directly to the Registrar):

“A significant percentage of the domain names used to conduct illegal or harmful Internet activities are registered via privacy or proxy services to obscure the perpetrator’s identity.”

Clayton commented that even the identity and contact data given over to the privacy and proxy services were often inaccurate anyway. He also says it is usually possible to curtail unlawful behaviour on web sites without having to contact the web site owner or manager and who am I to argue with that? But equally there can be little doubt that verifying the accuracy of the contact data in the first place would reduce the volume of bad behaviour thus obviating the need to get in touch with anyone  at all.

An unexpected gift from the gods

So not only did WHOIS accuracy appear not to matter any more (ICANN was doing nothing to enforce its rules), now you did not even have to publish some of the data.

Here was an unexpected gift from the gods. By establishing privacy and proxy services Registries and Registrars could, for the first time, turn WHOIS into a revenue stream. Instead of it being a dead weight, costing them money and causing them grief to maintain,  now they could make some cash out of it.  The skids were under WHOIS. Big time.

ICANN, the Registries and Registrars created this scandalous state of affairs. Would it be corrected when the GDPR came along?

And so to the GDPR

In 2012, when the GDPR began its less than stately, extended progress through the European institutions nobody thought to raise or even mention WHOIS and the way Registries, Registrars and ICANN were by then behaving in relation to it.

Nowhere in the initial draft GDPR or in any of its later iterations, including the adopted final legal text,  do the words or acronyms  “ICANN”, “WHOIS”, “Registry”, “Registrar” or “registrant” appear. At no point in any  of the Committees of the European Parliament, or in any of the plenary sessions that were held in Brussels or Strasbourg to discuss the GDPR do any of those words  or acronyms appear. This is because they were never discussed. Never.

Neither  did any of those words pass the lips of anyone who attended any meetings of the Council of Ministers or the Trialogue (I asked people who were in the room). Zip. Nada. Niente. Wala. Nolla.  When the GDPR was adopted by the UK Parliament the story was repeated and I believe the same is true in every Parliament in all 28 Member States.

Where was law enforcement and where were the rights holders when all this was going on? Why weren’t they lying down in the roads outside the Berlaymont building? Where was the vast army of privacy and administrative lawyers? Why did they fail to ensure that, in reaching a decision, all relevant factors were being taken into account by those charged with the responsibility of making the law?

A question

If the ownership and contact details of everyone who owned or controlled a web site had been robustly verified and kept up to date in a database that was open to public inspection, how many web sites do you think there would be that engaged in (a) the distribution of child sex abuse materials, (b) the sale of fake pharmaceuticals or (c) you get the point?

Do you think the answer would be (a) about the same as now, (b) a great many more than now or (c) a lot closer to zero? I’m going to give you a clue. The answer is very unlikely to be (a) or (b).

Is it possible that if European Parliamentarians, national Governments or national Parliaments had had the matter put squarely before them they might have said they were happy with the new status quo? I don’t think so.

Was any publicly accountable policy maker asked to weigh in the balance preserving what was now being assumed to be the status quo against the reality of what secrecy and inaccuracy had produced? No, they were not. Policy makers could have said

“Enough already. For the avoidance of doubt we choose to insist that in future in the public interest openness and accuracy are required by law.”

Or they could have said

“We are fully aware of how the privacy laws are being interpreted and acted upon by ICANN, the Registries and Registrars and we are entirely content with the status quo.”

I’m guessing they would have opted for the first one but they didn’t do either because they were never given the chance. Shame on those officials who allowed that to happen.

We have been badly let down

Here is my summary: EU privacy interests, which in this instance include Commission staff, Article 29 and all its successors and associates, have seriously failed EU citizens and Member State Governments.

I acknowledge that the GDPR was an enormous and extremely complex legislative instrument, but that hardly excuses what happened. And once the deed was done ICANN and its cronies – Registrars and gTLD Registries – jumped in and did what they always do. They exploited the situation to their financial advantage with the public interest being relegated to second or third place. In this they have erred in law.

So both sets of key players got it wrong.

We must be able to do better than this.

Posted in Child abuse images, Default settings, E-commerce, Privacy, Regulation, Self-regulation, Uncategorized

The starter’s gun has been raised!

Yesterday in Parliament the British Government tabled the last pieces of the legal jigsaw that had to be assembled to allow the British Board of Film Classification to begin their work of requiring commercial pornography sites to operate robust age verification services, in order to ensure persons under the age of 18 cannot ordinarily access their content.

To coincide with this long-awaited moment the BBFC have also launched their web site to provide members of the public with a rundown of how the new system will work.

Both Houses of Parliament have to debate the orders that were laid. It is anticipated that this will take place in late November, with the actual commencement date likely to be set for two or three months after that. That’s when the serious business begins.

As D-Day approaches there will be an increased level of activity to ensure as many people as possible are aware of what the new regime will entail.

As we have repeatedly stated, all of the legal pornography that is on the internet today will still be there tomorrow. Any adult who wants to access it will still be able so to do.

The only change will be that adults will have to go through a privacy-preserving age verification mechanism, as they already have to with online gambling and a range of other legally age-restricted services and products that are available over the internet.

In fact the age verification solutions which have emerged to cope with the anticipated demand for age verification on commercial pornography sites are among the most privacy-respecting apps ever to have been developed. Mindful of the key legal principle of data minimization, the only thing an age verification provider needs to satisfy themselves about is your age. Nothing else.
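
The data minimisation point can be made concrete with a small sketch. The scheme below is purely illustrative (the function names, the token format and the shared signing key are my own inventions, not any real provider's API): the verification provider issues a signed assertion containing only an over-18 flag, so the site learns nothing about the visitor beyond the one fact it needs. A real deployment would use public-key signatures rather than a shared HMAC key, but the principle is the same.

```python
import base64
import hashlib
import hmac
import json

# Demo shared key; a real provider would sign with a private key instead.
SECRET = b"provider-signing-key"

def issue_token(over_18: bool) -> str:
    """The age verification provider issues a signed assertion that
    contains only the age flag - no name, no date of birth."""
    payload = json.dumps({"over_18": over_18}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.b64encode(payload).decode() + "." + sig

def site_accepts(token: str) -> bool:
    """The site checks the signature and the flag - it learns nothing else
    about the person presenting the token."""
    encoded, sig = token.split(".")
    payload = base64.b64decode(encoded)
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and json.loads(payload)["over_18"]

print(site_accepts(issue_token(True)))   # True: an adult passes
print(site_accepts(issue_token(False)))  # False: an under-18 assertion is rejected
```

Note that the token carries a single boolean: that is data minimisation in practice.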

This has been a long time coming.

Posted in Uncategorized

We need to keep up with the times

A while ago I had my DNA analysed by a well known family history company. I did it partly out of curiosity and partly because I have been doing quite a lot of research into, er, my family’s history. I thought it would be interesting to see if the science matched the folklore. By and large it did. Not entirely, and the bits where it didn’t were fascinating.

Because I agreed to it, the company was able to use my DNA profile to connect me with hundreds of people who appeared to be relations. Almost all of them were in the USA, deriving from both sides of my genetic inheritance, Polish and Irish. Most were quite distantly related: 4th, 5th and 6th cousins. I had never heard of any of them so the fascination was low key and nerdy. If I was planning to write a full family history going back as far as the Ark it would be useful, but I’m not so it isn’t.

Even so, I thought it semi-magical the way one simple, inexpensive little test, which I commissioned and paid for myself, could unlock so much information. Then I learned that DNA databases of the kind I had subscribed to were being used to solve cold cases – unsolved murders and rapes where the murderer or rapist had left some DNA at the scene but at the time either the technology did not exist to extract it or there was nothing in any of the then available DNA databases to provide a potential match.

Eventually someone had the bright idea of uploading information from the suspected murderer’s or rapist’s DNA sequence to one or more of these family history websites. They did not get a hit pointing directly to the murderer or rapist but they did find links to persons likely to be related. By identifying a family member the police finally zoomed in on someone who became a suspect. Following an investigation they were able to make an arrest and secure a conviction.

I think this is completely brilliant. What’s not to like? People voluntarily provide information about themselves which allows the police to do what they are supposed to do, i.e. identify suspects and investigate any potential connection with the victim. I’m guessing that DNA evidence alone would never be sufficient to convict anyone, particularly in ancient cold cases, but it gave the police the vital initial clue that allowed them to start putting together a larger dossier to present to the court.

But objections are arising. These were neatly summed up by a Professor Murphy, author of “Inside the Cell: The Dark Side of Forensic DNA.”

You shouldn’t have fewer civil rights because you’re related to someone who broke the law

That is a neat sound bite because it looks so eminently reasonable. But when you break it down there is a lot less to it than at first meets the eye. Does anyone seriously believe that any legal instrument was ever intended to make it harder for the police to catch murderers or rapists using published data that was freely given? How can that infringe anybody’s rights? Is Professor Murphy really saying

“If you are thinking of engaging in criminal behaviour, you would be well advised to notify all your known relatives of your intentions. Ask them not to delve into their family history or at least, if they do, tell them not to publish any DNA data that could be interrogated by any arm of the state.”

More succinctly

“Relax guys, we are not going to allow advances in technology to help law enforcement find you.”

Hashes and PhotoDNA

Here’s another example: I am hearing on several grapevines that on both sides of the Atlantic a fight is developing over the use of hashes. Hashes, you will know, are digital fingerprints of images which, in this case, have already been confirmed as being illegal child sex abuse images. The hashes cannot be reverse engineered to recreate the image.

Nobody should be in possession of a child sex abuse image. The emergence of Microsoft’s PhotoDNA technology has completely transformed the way in which the internet industry has been able to locate and delete them. So far in 2018 the US-based National Center for Missing and Exploited Children has received over 13 million reports and, overwhelmingly, these reports have come from companies like Facebook, Google and Twitter. The UK’s Internet Watch Foundation is also finding staggeringly large volumes using the same system. Proactive searching is impossible without hashes. Waiting for the public or sysadmins to report suspected images was, originally, the only mechanism we had available to us, but advances in technology, as represented by PhotoDNA, have rendered that redundant. We need to move on. Keep up with the times.

Yet, unbelievably, some privacy lawyers seemingly are suggesting that a hash can be thought of as personal information and that any attempt to find a hash – for example of a child being sexually abused – therefore requires a warrant or some prior judicial process and/or notification to the person thought to be in possession of the hash.

I am tempted to stop writing here, go lie down in a darkened room and pretend the world is not going mad.

Aren’t hashes the least privacy-invading way of doing any sort of investigation? Don’t they take you only to the profiles of individuals where there is a reasonable suspicion that, wittingly or unwittingly, they have become involved in distributing illegal material? Isn’t that the point where a search might be needed or warrants obtained? In the meantime, or if that is not practical for whatever reason, at least the images can be removed from public view. Again, what’s not to like?

Internet security today depends on proactivity. Companies scan for spam, viruses and all kinds of unacceptable or illegal behaviour or content. They even prohibit it in their Ts&Cs. Is it being suggested companies cannot take reasonable and proportionate steps to enforce their own contracts? I would rather be discussing the culpability of those businesses who hide behind platform immunity, the ones who are not using hashes when they know, or ought to know, their systems are being or are likely to be misused in these ways.

The internet has facilitated a shameful explosion in the availability of child sex abuse material. By common consent it is beyond the capacity of every law enforcement agency in the world to address the current volumes. While retaining a focus on trying to prevent child sex abuse in the first place, where we fail, technology provides the only way we can hope to get rid of these images. This is about respecting and honouring the dignity of the victims depicted and reducing the risk of new crimes being committed against children as yet unharmed.

Posted in Child abuse images, Location, Regulation, Self-regulation, Uncategorized