The ICANN saga – part 2

Earlier this week I blogged about an article that appeared in The Times concerning ICANN’s neglect of children’s issues. The story got a little promotional piece on the front page and was huge inside the paper.

Imagine my surprise when I was contacted by The Times again yesterday. ICANN had sent them a statement and the journalist wanted my reaction to it. I asked him to send me the statement. He did.

First point to make: this was ICANN responding to an article that was all about children and children’s welfare. In their 346-word reply the word “children” did not appear once. Neither did any derivative of it, e.g. “child”, nor anything that might be even loosely connected, e.g. “youth” or “young”.

The second point is that, to any but the wholly initiated, their prose is all but incomprehensible. Look at this:

“ICANN is a unique institution that is governed via a bottom up, consensus-driven multistakeholder model. As a result, ICANN staff cannot unilaterally impose guidelines or requirements on registries, registrars or other stakeholders in a top-down manner. Policy recommendations are developed and refined by the ICANN community through its Supporting Organizations and influenced by Advisory Committees – all comprised of volunteers from across the world – in a “bottom-up”, multistakeholder, open and transparent process. Each Supporting Organization has its own specific policy development process. The 2013 Registrar Accreditation Agreement (RAA) was the product of a long public consultation process that led to the consensus development of new gTLD policies and the incorporation of advice from law enforcement agencies into the agreement”

I have been to a few ICANN meetings so I think I can just about translate this, but the journalist on The Times put it very eloquently when, in a second article that appeared today, he said:

ICANN say they cannot and will not accept responsibility for the publication of illegal content online. We rely on courts and governmental regulatory activity to police illegal activity.

But isn’t the rather obvious point that by their own actions ICANN can either make it harder or easier for such regulatory activity to be effective? Regulators do not willfully ignore crimes but the way ICANN has set its rules means the volumes are now overwhelming them.

Top-down, bottom-up, multistakeholderism counts for nothing if the end product is mayhem. Cui bono? ICANN cannot hide behind a working method to excuse itself from a clear responsibility to act in the public interest. ICANN is a legal entity in its own right. It should not have to ask volunteers if it should act to protect children in whatever way it can.

Posted in Child abuse images, ICANN | 1 Comment

Well done America. Poor show Europe, er, I mean Holland

An ignominious bit of history was made the other day. For the first time since records began in the late 1990s, the USA has been knocked off the top spot for publishing child sex abuse images. Year after year, as reliably as clockwork, Uncle Sam was numero uno as the world’s largest apparent source. There was a brief period when it looked like Russia might overtake them, but then suddenly the Kremlin got busy and that threat to US supremacy faded away.

How do we know Holland is now top dog? On Monday of this week the IWF published its annual report. Here is a brief synopsis of the key facts:

92% of all child sexual abuse URLs identified globally in 2016 were hosted in five countries: Netherlands (37%), USA (22%), Canada (15%), France (11%), and Russia (7%). In contrast, the UK now hosts less than 0.1% of child sexual abuse imagery globally.

As I recall, in the past the US and Canadian numbers used to be combined into a single figure for North America, but even if we did that again Holland would still have the edge, if only by a tiny amount. Either way, the Dutch percentage is up 18% on last year whereas the US has gone down by 15% and the Canadians by 5%. Is there any other heading where a relatively small country like Holland outstrips a giant like the USA?

How do we explain the US and Canadian success? A colleague in the USA who is close to these things said the following:

This is a really telling story about the success of U.S. companies implementing PhotoDNA and scanning for CSAM based on hash values. 

So is the opposite true? I suspect it is. In other words, the failure by companies based in Holland to deploy tools such as PhotoDNA accounts for Holland’s current shameful position. And this has been coming down the track for several years. Nobody in the Netherlands can honestly claim they are shocked or surprised, unless they haven’t been paying attention.
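PhotoDNA itself is proprietary, and it uses a perceptual hash designed to survive resizing and re-encoding, but the basic mechanics of hash-based scanning – comparing a fingerprint of each uploaded file against a database of known fingerprints – can be sketched in a few lines. The sketch below is my own illustration, not Microsoft’s API: it substitutes an ordinary SHA-256 digest for the perceptual hash, and the “known hash” entry is a placeholder rather than a real database record.

```python
import hashlib

# Illustrative stand-in for a database of hashes of known illegal images.
# In reality such hash lists are supplied by bodies like the IWF or NCMEC,
# and PhotoDNA uses perceptual (not cryptographic) hashes.
KNOWN_HASHES = {
    # SHA-256 of the byte string b"test" - a placeholder entry only
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(data: bytes) -> str:
    """Return a hex digest acting as the file's fingerprint."""
    return hashlib.sha256(data).hexdigest()

def matches_known(data: bytes) -> bool:
    """True if an uploaded file's fingerprint is in the known-hash set."""
    return fingerprint(data) in KNOWN_HASHES
```

An exact digest like this only catches byte-identical copies; the point of a perceptual hash such as PhotoDNA is that it still matches after cropping, resizing or recompression, which is why the proactive scanning my source describes works at scale.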

By the way my US source added that he didn’t think there was necessarily less CSAM making its way on to US servers but

The difference now is that the U.S. companies are voluntarily taking proactive steps to scan their systems and are essentially finding and removing the content before anyone in the public does. They are able to stay a step ahead with technology.

And for that the rest of the world and a great many children are extremely grateful.

The irony of the fact that INHOPE and Europol have their headquarters in the country that is being least effective in dealing with CSAM will not be lost on anyone.

Posted in Child abuse images, Regulation, Self-regulation

Children’s groups call for action against ICANN

UK children’s groups call for action against ICANN’s negligence concerning children. Letters and briefing have been sent to the UK and US Governments. They are a bit long (2 pages and 4 pages) so download them from here

Posted in Child abuse images, Default settings, Google, Internet governance, Regulation, Self-regulation

Google takes the plunge

Word reaches me from the USA that Google is exploring how to admit under-13s to its services officially and legally. It has created a programme called “Family Link”. Seemingly it is only available at the moment if you live in America, and it only works via or with Android devices. I’m not sure they will be able to hold to that line in the long run but it’s great that Google is trying to find a way through.

It has always been possible for US companies to allow under-13s to use their services, but if they did so knowingly they first had to obtain parental consent. If they discovered anyone using their site or service who was in fact under 13, they simply closed their account and threw them off.

In other words rather than involve themselves in the potentially messy business of obtaining parental consent most of the big social media players simply said they wouldn’t allow anybody under 13 to join or use their service. This encouraged misrepresentation on a gigantic scale, with all the attendant difficulties that go with that.

It will be interesting to see how the Google experiment works out and absolutely riveting to see how and how soon the other big players will react. React they will.

Coming so close to the commencement of the GDPR in Europe, I can see several potentially important spin-offs, all of them welcome. I guess that the next time the Federal Trade Commission in the USA looks at the matter, its attitude towards age verification and age limits might start to shift.

A colleague of mine remarked the other day that there were usually two complaints one heard about most new initiatives in the online child protection space. If a big company did something good that threatened to set a new standard, this was unfair to small businesses because they couldn’t afford it. If a small business did something good that might set a new standard, it could be dismissed with “our volumes are too large to make this practical”. It’s called a double bind, and what it reminds us is that people in the child protection space have to focus on one simple thing: what is the right thing to do? We are not advocates for small businesses, or large ones, although common sense and an idea of proportionality will always shade our expectations.

However, what this latest Google initiative demonstrates, yet again, is that the big players can pretty much do anything they want to in the technology space. The key ingredient is wanting. Well done Google for setting the ball rolling. Another genie is out of the bottle.

Posted in Age verification, Default settings, E-commerce, Google, Regulation, Self-regulation

Towards a new internet strategy for the UK

On 27th February Karen Bradley MP, Secretary of State at DCMS, announced the Government’s intention to develop a new internet strategy. As a member of the Executive Board of UKCCIS I was asked for my views. I reproduce them below.

I pointed out that in the time available it had not been possible for me to consult the children’s organizations with whom I normally work, through CHIS. The note I sent in therefore expressed only my personal views.

End of the Bill

In his speech on the Digital Economy Bill, last Monday night in the House of Lords, Lord Ashton referred to the Secretary of State’s announcement in the context of there being a need for a wider discussion about the effects of pornography in society as a whole, not solely in respect of children. I would hope there will be an opportunity to contribute to that aspect of the review. I accept it was never envisaged that the Digital Economy Bill was to be a trigger for a wider debate about what sorts of pornography are more or less acceptable, whether being viewed by children or not. However, just because children cannot view certain types of material that have been put behind an age verification wall, it does not mean that its continued availability to adults does not constitute a threat to children. Such material might encourage, promote or appear to legitimize or condone harmful behaviours which either directly or indirectly put children at risk.

Of the issues mentioned in the Secretary of State’s statement of 27th February I will focus on:

  1. Future proofing and technology solutions for online safety
  2. The role of industry in online safety

Future proofing

“Future proofing” is a great idea but it sits ill with the reality of how Silicon Valley and the high-tech industries generally operate, and how they think they ought to operate.

Speed to market and first-mover advantage are often dominant or enormously important concerns in the product development cycle. Novelty, “newness” and built-in or planned obsolescence are major drivers. “Permissionless innovation” is held out as a defining principle. All this can sometimes get in the way of prudence. Nowhere is this better exemplified than in the case of Facebook. It was founded around the idea that the company would “Move fast and break things”. Apparently this has recently been changed to “The fast shall inherit the Earth”. It would be interesting for someone with the time and the inclination to count how many times Facebook has apologised for “getting it wrong”, said it is in “listening mode”, “learning” or something of that kind.

As long as companies believe it is easier and, from a business point of view, better and comparatively risk-free to apologise after the event if a problem is discovered than it is to get it right beforehand, we will forever be playing catch-up. To some degree there will always be an element of catch-up, but it feels as if there is currently a gaping chasm. It needs to be narrowed, even if it cannot be entirely and permanently bridged.

Duty of care

Under our general law of negligence everyone is under an obligation to take reasonable care in everything they do, but in the technology space a great many factors militate against individuals in the UK being able to bring enforcement actions, mount claims for damages or require companies to improve their safety performance. For this reason there may be some value in explicitly establishing that all businesses operating in UK markets have a duty of care to children. Prior to any product launch, or any revision of terms and conditions of service which will apply or be available in the UK, they would be obliged to consider the child safety and child welfare aspects of their proposed action and maintain documentary evidence that demonstrates this has happened. In appropriate circumstances, such evidence could be called upon by a new regulator (see below). A breach could be the subject of a substantial fine or other penalty.

A whole new world is approaching rapidly

Right now we stand on the edge of a major step change in the technological space. It builds on what has gone before but appears to promise a qualitative shift of substantial proportions. Advances in Artificial Intelligence (AI), the emergence of an algorithmic world, the growth of the “Internet of Things” (including toys) and, in particular, the development of Augmented Reality and Virtual Reality applications and associated hardware are taking us towards a tremendously exciting future. But it is completely beyond the scope of a body like UKCCIS even remotely to contemplate being able to maintain any sort of credible or useful public interest oversight. I am not saying I am against us trying, but we shouldn’t misrepresent what we could reasonably hope to achieve.

The question of age

In respect of a hugely important set of issues which arise in relation to the implementation of the GDPR, it is acknowledged that the Office of the Information Commissioner will be in the driving seat. There is nevertheless a key question for Parliament to resolve, presumably on a motion to be tabled by the Government: what is the lower age limit to be, i.e. at what age can a child decide for themselves whether or not to hand over personal data to commercial and other third parties providing Information Society Services? If such a motion is not decided by Parliament ahead of May 2018, we will default to age 16.

Thus, if the UK is to depart from the de facto current standard of 13, a great deal of re-writing and re-consideration of all sorts of things will need to happen. Clearly, the sooner everyone knows the longer-term position, the better it will be for all concerned.

A related question is how, and on what basis, the Government will decide what recommendation to place before Parliament about the age limit. What consultation will there be and with whom? Will children themselves be able to have a say? Is there any research evidence which strongly points in a particular direction in terms of the optimal lower age limit? I know of none. Back in the 20th century, when the Americans came up with 13, social media sites did not exist, or at least not in anything even remotely like the form they take today.

Has any consideration been given to the online safety and other implications of different countries having different lower age limits?

The role of industry – an obligation to act

If we are not already there, I think we are coming close to the end of the road with the way online child safety policy has been formulated, implemented and monitored hitherto in the UK. The major social media platforms play a pivotal role in the lives of our children. In that regard, they are in a monopolistic or near-monopoly position. Yet there is an almost complete lack of transparency in respect of the way they operate in relation to children’s safety. Quick to say sorry, slow to reveal. It is true we know they do some excellent work in relation to child abuse images and detecting paedophile behaviour, but there is a lot more to online child safety than that. Bullying and children’s concerns about inappropriate content also loom large.

How much do these businesses spend on child safety as opposed to, say, lobbying and corporate hospitality? How many people do they employ in carrying out moderation as opposed to government relations? Wouldn’t knowing that give us some idea of the real priority attached to child safety?

Late last year the Children’s Commissioner for England asked Facebook and Google a number of questions about children’s use of their services and they both declined to reply. This has happened before, at least in respect of Facebook. A few years ago, at the end of several months of discussion and negotiations with the UKCCIS Evidence Group, Facebook finally said (paraphrasing) “We will not give you any information we are not legally obliged to publish. It could be commercially sensitive and we have to take account of things like the rules of the New York Stock Exchange”.

A new regulator

We need a body with legal powers to compel online businesses to answer questions which are relevant to determining how they are addressing online child safety issues. In the face of evidence that things continue to go wrong we cannot forever be asked to take everything on trust.

Such a body could also play a role in developing and keeping under review the broader and emerging online space in respect of the online child safety agenda. It could perhaps replace some or all of the functions of the UKCCIS Executive Board and become a new, well-resourced, independent focal point for online child safety with the capacity and the obligation to keep tabs on new and emerging technologies specifically in the context of online child safety.

This regulator could issue codes of practice which would have legal force, establishing standards against which businesses are held accountable. In lots of ways the internet industry has matured. It has become central to the way modern societies work. It cannot expect to be left outside the same sort of oversight rules that apply in practically every other area of importance in our communal life. “Internet exceptionalism” is an idea whose time has not come.

When the last Government asked Ofcom to enquire into the operation of the “Big Four’s” filtering policies Ofcom merely asked the ISPs to provide them with information. Ofcom did not seek to verify the information it was given, neither did it try to determine if it told the whole story. Ofcom did not seek to explain why there were differences in outcomes as between the “Big Four”. This is not satisfactory but it was all Ofcom felt it could do given it had no legal powers to do more.

Preserve the immunity but narrow its scope

Microsoft recently confirmed that it was only aware of around 100 businesses and other organizations (including law enforcement bodies) worldwide availing themselves of its (free) PhotoDNA technology for fighting the distribution or storage of child abuse images on their networks or services. A depressingly low number.

To combat this degree of lethargy or apparent indifference we need to narrow the scope within which the principle of immunity under the eCommerce Directive can be maintained. In other words, while leaving the overriding principle of immunity intact – it would be wrong for anyone to be held liable for something of which they had no knowledge – we should nevertheless institute a new law or rule which says that, having regard to available technologies, online businesses must take all reasonable and proportionate steps to ensure their networks or services are not being used for unlawful purposes, and must also show that they are taking reasonable and proportionate steps to enforce their own Terms and Conditions. If Ts&Cs can be published but there is zero serious effort to ensure they are honoured, it is tantamount to a deceptive practice. Relying so heavily on users reporting breaches isn’t working well enough. I understand that AI is seen as a possible route to salvation. Let’s hope that works – and quickly.

The current principle of immunity was first introduced in the USA in 1996 and taken up in Europe in the eCommerce Directive of 2000 in order to protect fledgeling businesses from being sued while they innovated in good faith. It has, however, become an alibi for inaction, a refuge for scoundrels. So far from being fledgelings, some online businesses are now fully grown condors.

Even new start-ups are acting in an environment which is completely different from that which prevailed in the mid-to-late 1990s and the early part of the “noughties”, when several of today’s giants began their lives. That said, it should still be recognised that we expect more of larger companies than we do of smaller ones. Proportionality matters. Small businesses deserve more protection from liability than large ones, provided such latitude is not abused or interpreted as giving them a licence to be reckless or not to care at all.

The international dimension

The UK Government, rightly, is widely acknowledged to be a global leader in the online child protection space as evidenced, for example, by its role in establishing and funding the WePROTECT Global Alliance. I think there would be some value in having, say, an annual report on the activities undertaken by the Alliance, detailing progress.

HMG also participates in major internet governance institutions such as ICANN and the IGF where, again, an annual report may be both valuable and interesting. I think this is especially true in respect of ICANN whose performance in respect of online child safety has been somewhere between woeful and outright neglectful or positively dangerous.

Posted in Advertising, Age verification, Default settings, E-commerce, Facebook, Google, ICANN, Internet governance, Microsoft, Pornography, Privacy, Regulation, Self-regulation | 1 Comment

Pornography, age verification and related matters

At the end of February, the Government announced that sex education was to become a compulsory part of the national curriculum. Apparently, among other things, sexting, self-generated images and the harms associated with pornography are to be addressed within the new arrangements that schools will be making.

Yesterday evening the age verification part of the Digital Economy Bill received its final outing in the House of Lords so I think we can confidently say we have reached an important point of inflection. The government deserves to be congratulated for moving forward on both fronts in step in this way.

When, exactly, the age verification law will come into effect has still to be determined, and because the Lords changed the Bill it has in any case to go back to the Commons to see if they agree. I think it is likely they will, but unless and until that has happened we should all keep the champagne on ice.

In relation to the age verification provisions it will, of course, be vital to monitor the efficacy of the new arrangements. As the various clauses came under intense scrutiny in the Lords, a few rough edges were revealed in the definitions to be used for the sorts of material which the Regulator ought to disallow on a site that wants to be accepted as an age-verified site.

However, in responding to various points coming from the opposition parties the Minister made two telling comments:

  1. We absolutely do not intend to create a regime that unintentionally legitimises all types of sexually explicit content as long as age verification controls are in place. We are most definitely not saying that material not allowed under other legislation is allowed if age verification is in place.

    That is why (we make) it absolutely clear that content behind age verification controls can still be subject to criminal sanctions provided by existing legislation.

  2. But we concede that there is unfinished business here. Having protected children, we still need to examine other online safety issues ….. my department is leading cross-government work on an internet safety strategy that aims to make the UK the safest place in the world to go online. 

The Minister, not unfairly, made the further observation that the prime focus of Part 3 of the Bill had been to achieve a higher degree of online child safety by establishing an age verification regime for commercial pornography websites. This is a world first for a liberal democracy and goodness knows there is much that needs to be done to get it all working properly and in a privacy-respecting manner.

It was never envisaged that the Bill was to be a trigger for a wider debate about what sorts of pornography are more or less acceptable, whether being viewed by children or not. But just because children cannot view certain types of material, because they are behind an age verification wall, it does not mean that its continued availability to adults does not constitute a threat to children. Such material might encourage, promote or appear to legitimize or condone harmful behaviours which either directly or indirectly put children at risk. That was most certainly the view expressed by several Peers who tried, unsuccessfully, to get the House of Lords to adopt a more expansive remit. Nevertheless, in the cross-government exercise to which the Minister referred perhaps these matters can be looked at with a view to bringing in new legislation if necessary.

However, to return to the Minister’s remarks: “…We are most definitely not saying that material not allowed under other legislation is allowed if age verification is in place.”

Does this mean that if the Regulator finds any such material it must require it to be removed? It would look very odd indeed, would it not, if a website owner were ever charged with breaking the law for publishing material found on an age-verified website when that same site had been given the green light under a system ushered in by an Act of Parliament? Hey ho. It all adds to the gaiety of the nation.

Posted in Default settings, E-commerce, Pornography, Regulation, Self-regulation, Uncategorized

Stockholm Syndrome and the internet

Stockholm Syndrome describes how hostages can eventually come to adopt the values or point of view of their captors. Having just returned from ICANN 58 in Copenhagen, I am convinced there is a lot more of it about than I had suspected hitherto.

Where else could you hear someone say this? “Shame we couldn’t sort this out here in Copenhagen. The agenda just got super-squeezed. Let’s try and do it in Johannesburg.”

Yes, Johannesburg is where ICANN 59 will be held, but via matter-of-fact, deadpan statements of this kind it’s made to sound like we can all just hop on a bus tomorrow morning or meet round the corner in Costa Coffee. Easy-peasy. What’s the flap? This is routine, ordinary, everyday stuff.

But of course it isn’t. In fact “doing it in Johannesburg” means waiting three months, spending hundreds of pounds – for some it will be thousands – and probably giving up another week of your time. You pretty much have to be a civil or public servant of some sort, an academic with an interest in the field, work for a tech company, be a paid lobbyist or be independently wealthy and leisured to be able to be so casual about “doing it in Johannesburg”.

The process itself looks like it has become the justification for carrying on the process, and a great many of the captives, though not all, are now indistinguishable from their symbiotic masters. There is a network of working parties, review groups, committees and so on which makes Byzantium seem like a lean and mean racing machine. To any but those within the priesthood, ICANN is as transparent as Mississippi mud.

Stockholm Syndrome is also the only explanation I can come up with for the following (summarised) conversations I had with a couple of perfectly decent, in fact highly agreeable and  (otherwise) extremely smart ICANN stalwarts.

Me: Seems like unless you are actually in the room and shouting, your interest will just get overlooked. That’s not because everybody else is bad or against it, it’s just that they are there with their own agenda and there are limits to how far, out of the kindness of their hearts, they can deviate from that to help out on an issue not front and centre for them, or rather not front and centre for whoever is paying for them to be there.

One of the Other People (OTOP): That’s true. It’s just how things work around here.

Me: Writing and submitting papers and trying to connect remotely can, I guess, be better than nothing, but it’s still second best by a long chalk. You can’t beat being in the room. Then there are the benefits of networking and serendipity. Not possible if you’re not there.

OTOP: You got that right. It’s just how things work around here.

Me: There are clearly major child protection interests at stake in a number of the decisions ICANN’s Board takes. Yet I struggle to see much sign that ICANN’s leadership are in the least bit interested in any of them. Look what happened with the Russian version of .kids.

OTOP: I agree but you need to get people involved in the ICANN processes to make sure ICANN makes what you think are the right decisions.

Me: The children’s groups are working in all sorts of ways, helping all sorts of needy children and their families. So you are saying I need to get them to divert their attention from those matters to help ICANN do the right thing by children?

OTOP: Yes

Me: Many of the children’s organizations are struggling as it is. We’re not like the banks, insurance companies and pharmaceutical companies who have money coming out of their ears to hire lawyers, lobbyists, staffers and who knows who else to do the needful to make sure their interests are properly safeguarded within ICANN, e.g. by going to all the meetings, reading the papers and tracking all the things that are going on.

OTOP: Well it’s just how things work around here. Many people who participate in ICANN are volunteers.

Me: Really? I’m sure I saw at least one report showing that, in fact, a very high proportion of people who took part in the policy-making bits of ICANN were actually paid in one way or another by a business or organization with a tech interest.

But anyway, are you saying that bad decisions can be taken by ICANN simply because children’s advocates cannot afford to come to all these meetings or find the volunteers or other resources to participate in the extremely lengthy processes ICANN has established?

OTOP: That’s possible I suppose.

Me: Doesn’t that mean this is a bad system?

OTOP: Well that’s how it works around here.

Me: ICANN has a duty of care to children. It should not leave children’s safety to chance. It should be part of ICANN’s mission as an institution to ensure, at the very least, it does no harm to children and I might even hope for a more positive attitude.

Meanwhile ICANN’s paymasters continue to prosper.


Posted in Default settings, ICANN, Internet governance, Regulation, Self-regulation