A new internet safety strategy for the UK?

Earlier today the Government’s consultation on a new internet safety strategy for the UK finally came to a close.

Following the usual internal consultative processes, the children’s organisations in membership of the Children’s Charities’ Coalition on Internet Safety (CHIS), of which I am Secretary, had prepared and submitted their views. A copy of what we said, including the Appendix, can be downloaded here.

Documents of that kind are not the place to dwell on the many things that have worked well in Britain. I have written about these before.

Thus, I am very proud of the UK’s many notable world firsts, and to those listed in the earlier blog we can add, in 2013, the then Prime Minister pledging £50 million to kick off what is now called the WePROTECT Global Alliance. Earlier this year we tightened up the law on grooming, and the Digital Economy Act, 2017 completed its passage through Parliament, paving the way for age verification to keep children out of pornography sites.

However, in terms of many of the things which, domestically, continue to concern children and parents day to day, it is plain the UK has relied far too heavily on self-regulation and voluntarism. This has brought us to a point where parents are more worried about harms that might befall their children via technology than they are, for example, about their children’s potential premature engagement with alcohol and tobacco. Hard to credit.

We have had a plethora of codes and initiatives, but no real or systematic mechanisms have been put in place to monitor or measure their effect or impact. This is linked to, or is part of, an even larger problem. As we say in the submission:

“… no other industry of comparable size or importance operates with such a near total lack of transparency or accountability.”

At the moment much of public policy seems to be driven by crises which, in turn, are fuelled by media headlines. We need to move away from that and evolve a trusted, respected and more stable mechanism which is very broadly supported by civil society, industry and the public alike, and in particular by parents and children.

For this reason, CHIS is calling for the creation of an expert, broadly based, time-limited body with the necessary personnel, resources and legal powers to require businesses operating online in the UK to co-operate in providing answers to these questions:

  1. What are the optimal conditions for managing the internet in Britain so as to ensure the internet is as safe as it can be for children?
  2. What information and what mechanisms are needed to reassure parents, children, the public and policy-makers that everything that can reasonably be done to keep the internet safe for children is, in fact, being done?
  3. What minimum standards should be observed by different kinds of online businesses and how and by whom should they be overseen and enforced?

Based on its findings, the expert body would make recommendations to Parliament on how we should approach internet governance in the future.

The way we manage the internet and matters connected to it in the UK has, like the internet itself, evolved in an ad hoc and typically reactive manner. It’s time to take a fresh look. Moreover, there are no serious international mechanisms for doing this, so we have to start here and work outwards or else wait for fate to wash over us.

In the USA, in a briefing organised by the Congressional Internet Caucus on 8th September 2017, Julie Cohen, Professor of Law & Technology at Georgetown University Law School, observed in a similar vein that the administrative arrangements which presently exist in America had not been “optimised for the information economy we are starting to have”. Exactly right.

Posted in Internet governance, Regulation, Self-regulation

Questions about the GDPR

Thanks to a series of guidance notes issued by the Article 29 Working Party, we now know a lot more about the way several parts of the GDPR are likely to work when it comes into force in May next year.

However, as far as the impact of the GDPR on children is concerned, while there have been one or two references to children within some of the aforementioned documents, nothing has been produced which tries to pull it all together and present a complete picture. Given the almost universally acknowledged complexity of some of the challenges faced in respect of children, that is most regrettable.

Probably the first thing that will emerge into the public domain from anywhere in “official Europe” is a discussion paper from the British data protection authority, the Information Commissioner’s Office. It will be available before the end of December. Maybe that will kick off a wider debate across the EU. It should.

In the meantime, I reproduce below my list of questions or points I think could benefit from some clarification.

Apologies for the length but I hope it is helpful to some of you.

A question of age

  • In the original Commission proposal, the age of 13 was suggested as the single minimum age of consent for data processing purposes for the whole of the EU in relation to “Information Society Services (ISS) offered directly to a child.”
  • The only argument advanced in support of it was that 13 was already in widespread use because of COPPA, a US Federal law passed in 1998.
  • Since the emergence of Facebook, YouTube and other major social media platforms in the early part of the 21st century, does anyone know if any data protection authority, national government or other body has carried out research to establish the age at which a young person, typically, is likely to be capable of giving informed consent to joining a generic or non-specialised commercial ISS such as Facebook, Snapchat, Instagram or YouTube?
  • In its final form Article 8 of the GDPR allows Member States (MS) to choose from four age levels for consent for data processing purposes: 13, 14, 15 and 16.
  • Has each of these been reconciled with the provisions of the UNCRC where, it will be noted, no specific qualifying age is stipulated? The only consideration referred to there is the capacity of the child.
  • In the event of a legal challenge, won’t the absence of any evidence or reasoned argument to support a particular age within a given jurisdiction potentially weigh heavily with the court? Jurisdictions that opt for 16 or 15 are most likely to be at risk.
  • Is it right that an Article 8 age limit is not a once-and-for-all decision, i.e. that MS can change their minds and introduce a new level within the permitted range at any time?
  • Can we unequivocally confirm that wherever the word “child” or “children” appears in the GDPR it refers to persons below the age of 18?
  • What is the legal basis for that?

Information Society Services offered directly to a child

  • Article 8 speaks of “information society services offered directly to a child”. It is extremely important to be clear what these are.
  • One suggestion is that if a service solicits or allows persons below the age of 18 to be members then that makes it a service that is being offered directly to a child.
  • The fact that the same service may also solicit or allow persons above the age of 18 to be members does not change the fact that it also offers those services directly to a child.
  • The implications of this could be far-reaching.

Risk assessments, variable ages and one-off permissions

  • Presumably, every ISS will need to consider each discrete and particular data processing activity that is possible on their site or within their service? They will need to do this in order to ensure they have completed an impact assessment for all of them and have obtained the appropriate permissions.
  • A common assumption is that giving parental consent for a child below the Article 8 age is a one-off action. But is that really the case?
  • Might it not be that, once admitted to most of the services we know about today, sub-Article 8 age children could engage in a wide range of different kinds of activities, some of which may require a specific or additional form of parental consent before they can proceed? In other words, the original permission may not cover everything. Some services aimed at very young children already work like that, but now it will be a legal requirement.
  • Is there scope, in effect, to have more than one age level within a MS? Might it be possible to say that in order to join a generic ISS you must be at least 13 (or whatever) and no parental consent is required, but if you then wish to do x or y on or within the service you must be a higher age and, if you are not, parental consent must be obtained before you can proceed? The practicalities may be daunting but that is a different matter.
  • Has any thought been given to the implications for individual platforms of having among their users children from one country who can agree to certain things on their own, alongside children from another country who need parental consent for the same actions? (A sketch of this decision logic follows the list.)
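
To make those combinatorics concrete, here is a minimal sketch, in Python, of the kind of decision logic an ISS might end up needing. Everything in it is an assumption made for illustration: the country-to-age table, the per-activity minimum ages and the choice of which country’s age applies are all open questions raised above, not settled law.

```python
# Illustrative sketch only: the GDPR does not prescribe this logic.
# The age table, the per-activity floors and the choice of applicable
# country are all assumptions flagged in the surrounding text.

ARTICLE_8_AGE = {"UK": 13, "Ireland": 16}  # hypothetical Member State choices

# Hypothetical per-activity minimums layered on top of the joining age
# (the "do x or y within the service" scenario discussed above).
ACTIVITY_MIN_AGE = {"join": None, "public_profile": 16, "location_sharing": 16}

def consent_needed(child_age, country, activity="join"):
    """Return who must consent before the child may proceed."""
    threshold = ARTICLE_8_AGE[country]  # which country applies is itself unresolved
    floor = ACTIVITY_MIN_AGE.get(activity)
    if floor is not None:
        threshold = max(threshold, floor)
    return "child alone" if child_age >= threshold else "parent (verified)"

# The same 14-year-old gets different answers in different Member States,
# and different answers again for different activities on the same service.
for country in ("UK", "Ireland"):
    print(country, consent_needed(14, country))
print("UK", consent_needed(14, "UK", "public_profile"))
```

Note what even this toy version implies: a single consent at the point of joining is not enough, because each discrete activity needs its own check, which is precisely the per-activity impact assessment burden described above.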

Categories of persons

  • The GDPR in effect creates four categories of persons:
  • Fully competent adults, i.e. persons aged 18 or over.
  • Children who are below the age of 18 but above the Article 8 age.
  • Children who are below the Article 8 age.
  • Recital 75 speaks of “vulnerable persons”. Clearly, this will include adults, but might there also be such a thing as a “vulnerable child”, or is it the case that, for GDPR purposes, all children are considered to be equally vulnerable and there are therefore no varying degrees of vulnerability to which an ISS may need to pay attention?

Persons above a certain age

  • Will there be any sort of expectation for ISS to ensure persons above a specified or recommended age are NOT using services intended for persons below that age?

Persons below a certain age

  • It is anticipated that, as now, a great many online services will simply draw a line at the Article 8 age and declare that persons below it are not allowed to join or remain as members. This will enable them to avoid getting involved in the potentially messy and expensive business of obtaining verifiable parental consent.
  • However, absent any age verification requirement, what is likely to happen, again as now, is that enormous numbers of children will simply tick a box or make up a date of birth to declare themselves to be at or above the Article 8 age.
  • In the UK, for example, this has led us to a situation where over 75% of all 10-12-year-olds are members of services which specify 13 as the minimum. In other MS the proportion is even higher. Doubtless, many parents (although by no means all) will have colluded or acquiesced, but that raises different issues.
  • How will the GDPR address this problem? Will ISS be under any sort of obligation to engage proactively in curbing or reducing unauthorised usage involving underage persons?
  • In those countries which raise the Article 8 age above the current minimum specified by the ISS, typically 13, what will happen to children who fall below the new, higher age, e.g. 16, when the GDPR comes into force? Are they automatically kicked off? Do they lose all their photographs and posts?

Adult services

  • Is it the case that any site or service, or part thereof, which expressly states it is intended only for adults will be required to have robust age verification services in place at least to cover off those adult sections? How might this work with services such as Twitter and YouTube?

Counselling services

  • Recital 38 makes clear that counselling services would not be expected to obtain parental consent prior to offering an ISS to a child below the Article 8 age, but this is not repeated in an Article. Is there any doubt about the lawfulness of not obtaining parental consent in such circumstances?

Applicable jurisdiction and going on holiday

  • Which country’s law matters? The suggestion is that for the whole of the rest of the GDPR the jurisdiction that counts is that of the data controller. So if a service is based in Sweden, Swedish law and the Swedish data protection body have primacy.
  • However, when it comes to children, because of the Article 8 derogation, there are three possibilities and, to the best of my knowledge, no one in a position of authority has so far said which is the right one.
  • Thus, as above, it could be the country in which the ISS is domiciled, but maybe it is the country in which the child is domiciled or the country where the child is physically located at the time of using the service.
  • Then there’s the holiday problem. A child is properly signed up to a service in country A where she normally lives and the Article 8 age there is 13. She then goes on holiday to country B where the age is 16. Can she continue to use the service while in country B? What if the stay in country B is longer than a normal holiday of a few days or weeks, e.g. several months? (A sketch of the three readings follows.)
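
To see how much turns on this unresolved question, here is a small sketch in the same illustrative vein as the earlier one. The ages and country names are hypothetical, and which of the three rules is correct is exactly what no one in authority has yet said: the three candidate readings can give different answers for the same child on the same day.

```python
# Illustrative only: which of these three rules is correct is the open
# question. The ages are hypothetical Member State choices.
ARTICLE_8_AGE = {"A": 13, "B": 16}  # A: where the child lives; B: holiday destination

def can_consent_alone(child_age, controller_country, home_country, current_country, rule):
    """Apply one of the three candidate jurisdiction rules."""
    applicable = {
        "controller": controller_country,  # where the ISS is domiciled
        "domicile": home_country,          # where the child normally lives
        "location": current_country,       # where the child is right now
    }[rule]
    return child_age >= ARTICLE_8_AGE[applicable]

# A 13-year-old from country A, on holiday in country B, using a service
# whose data controller is also in country A:
for rule in ("controller", "domicile", "location"):
    print(rule, can_consent_alone(13, "A", "A", "B", rule))
# "controller" and "domicile" give True; "location" gives False: the holiday problem.
```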

Profiling

  • According to the Article 29 Working Party guidance, Article 22 of the GDPR does not prevent controllers from making profiling decisions about children, if the decision does not have a “legal or similarly significant effect” on the child.
  • By contrast, Recital 71 says profiling “should not concern a child”. Discuss.
  • The Article 29 Working Party guidance goes on to say that where profiling influences a child’s choices and behaviour it could potentially have a “legal or similarly significant effect”, depending upon the nature of the choices and behaviours in question.
  • Is it therefore the case that the results of different types of profiling e.g. advertising, are only definitely allowed if they do not produce a legal or similarly significant effect?
  • But if that means the only acceptable advertisements that may be directed at children are ones which do not prompt a purchasing decision why would any business want to place them anyway?
  • It would be useful to have more case studies illustrating the kinds of profiling activities which would be acceptable in respect of children, so as to get a better idea of how to decide whether or not something would be likely to have a “legal or similarly significant effect”.

Consent needed/not needed

  • In relation to joining or performing other acts on an ISS, where the child is below the Article 8 age, is it clear their consent is not needed or is irrelevant? Is the only consent that matters that of the parent?
  • By the same token, is it the case that a child under the Article 8 age cannot withdraw their consent because they have never given it?
  • Will this not produce some strange results, particularly in those countries which have opted for a higher minimum age, e.g. 16?
  • And how would this interface with the right of erasure?
  • What position would an ISS be in if they learned that a parent was coercing a child into using or being a member of a particular service and the child did not want to use it or be on it?
  • Presumably, even if a sub-Article 8 age child’s consent is strictly not needed, the ISS still has a legal obligation to explain the nature of the service and any associated data processing in language which is understandable to the child?

Differences in infrastructure between MS

  • Not every country has the same technical or other infrastructure that would allow parental consent to be verified by online means, or possibly even by non-online means. Some parents may therefore be “more verifiable” than others.
  • Yet the implication will be that “this child is on this site or service only because their parents have been verified”. That may carry with it a further implication that somehow the (apparent) child can be trusted to a higher degree. After all, the identity of the (apparent) child’s parent has been checked and confirmed, so there is some comeback if anything goes wrong.
  • Has any thought been given to the security or child safety implications of there being platforms where sub-Article 8 age children will be present where their parents have been checked according to potentially radically different standards?

Welcome to our world

  • It is clearly the case that henceforth the privacy community is going to be involved in a major way in online child protection and children’s rights issues. How do we imagine the privacy community will get itself up to speed and stay up to speed with research and the full range of issues that impact on children’s use of the internet? Too narrow a focus on purely data privacy issues could mean they miss the mark.

Posted in Consent, Default settings, E-commerce, Facebook, Google, Internet governance, Privacy, Regulation, Self-regulation, Uncategorized

Unequal weight of arms

Just as every child attending school in England learns about the fateful year of 1066 – the last time England was successfully invaded by a foreign power – so Chinese youngsters learn about the “unequal treaties”. In essence, in the 19th Century various European powers, chief among them the UK and France, took disproportionate advantage of China’s then weakened state after the so-called Opium Wars. They extracted unreasonable reparations, took land and also insisted on unfair trading rights. This was the period when Hong Kong became a British colony; the 99-year lease of the New Territories, signed later in the century, only ended in 1997.

The unequal treaties later became a potent political driver in developing Chinese nationalism, and they fuel Chinese attitudes towards the West even to this day. Perhaps it is going too far to say that the emergence of China as a global power is in any sense driven by a desire for payback, but I know some who do not think that idea too fanciful. A famous search engine tells me the origins of the phrase “revenge is a dish best served cold” are disputed, but whoever eventually gets the credit, they were obviously an acute observer of human nature.

I appreciate the difference in scale and scope but I am nevertheless often driven back to thinking about the analogous unequal weight of arms which exists right now as between big tech companies, individual consumers and civil society, not to mention governments of nation-states – those “annoying” but extremely tenacious, anachronistic entities which continue to exert such an extraordinary grip on people’s loyalties and sentiments.

The plutocratic demi-gods of Silicon Valley promised everyone they would make the world a better place. Full stop. And, yes, I can order my groceries online, get cheap flights, talk to my relations in San Diego. Yet there is all that other stuff. That was never part of the advertised deal.

The legal immunities online businesses were given at the optimistic birth of the internet experiment created the perfect conditions for them to try anything and everything to find new ways of making money (they call it “innovation”), while simultaneously removing any sense of urgency in respect of eliminating evil. The rising tide of massive enthusiasm for the new technology floated many ships that, under even a marginally less permissive regime, might never have made it out of the harbour. There is no point speculating about what would have happened if things had been ordered differently. They weren’t.

The ability and obvious willingness of online businesses to hop between jurisdictions, both to minimise tax liabilities and to throw up a smokescreen of alleged difficulties resting on a weak, non-existent or asynchronous international legal framework, stoke resentments that sooner or later, probably sooner, will find a way to express themselves. It may not be pretty. What I am not quite sure of is whether or not it is already too late. Is the die cast?

I fear many companies have concluded that it is, so they will not change their ways. They will just try to keep the merry-go-round turning for as long as they can.

Accountability to independent agencies and full, or at any rate much fuller, transparency are going to be the new watchwords. Big tech knows this, but sufficient unto the hour is the evil thereof. And if they light candles to the gods of uncertain fortune, who knows?

Posted in Default settings, Regulation, Self-regulation

A comment from Facebook

I got an email underlining that the author of the piece I blogged about earlier today ceased to work for Facebook a long time ago. I also received a link that isn’t displaying properly on my machine, so I reproduce the text here anyway (with the link below).

https://newsroom.fb.com/news/h/enforcing-our-policies-and-protecting-peoples-data/

Enforcing Our Policies and Protecting People’s Data

By Justin Osofsky, VP Global Operations

We’ve seen allegations that we don’t care how people’s data is used. While it’s fair to criticize how we enforced our developer policies more than five years ago, it’s untrue to suggest we didn’t or don’t care about privacy. The facts tell a different story.

We’ve listened to feedback from people who use Facebook, experts in privacy and security, and regulators on how we can do better. We’ve devoted hundreds of people and new technology to enforce our policies better and kick bad actors off our platform. To name just a few examples:

First, it’s always been against our policy for a developer to collect data it doesn’t need to operate its app. But in the past five years we’ve significantly improved our ability to detect and prevent these violations. All apps requesting detailed user information go through our App Review process where developers must explain how they are going to use what they collect – before they’re allowed to even ask for it.

Second, when developers are permitted to use our platform, we give people the tools to control their experience. Before you decide to use an app, you can review the permissions the developer is requesting and choose which information to share. You can manage or revoke those permissions at any time. We introduced this more than three years ago.

Third, we enforce our policies by banning developers from our platform, pursuing litigation to ensure any improperly collected data is deleted, and working with developers who want to make sure their apps follow the rules.

We’re not stopping here. Our privacy program, created in 2012, includes hundreds of people from a variety of teams across the company. This group works with product managers and engineers to protect people’s data, to give people information about how our features work, and to provide people control over how their data is used. This program is audited as part of a 20-year agreement we have with the US Federal Trade Commission. We’re held accountable for what we say and what we do.

Our approach reflects the laws and regulations we have to follow – both in the US and around the world – but it also reflects something more fundamental. Facebook may be successful today, but our future isn’t guaranteed without the trust of the people who choose to come here every day. It’s why we promote a culture inside Facebook that questions decisions and that is relentless in finding ways to improve. We’ll continue listening to feedback and finding ways we can do better.

Posted in Facebook, Internet governance, Privacy, Regulation, Self-regulation

A former Facebook employee speaks

This article by an ex-Facebook employee is definitely worth a read.

The headline says it all. Who knew?

Hitherto, every discussion about what to do to promote children’s rights online or make the internet better and safer for children started from the basis that multi-stakeholder voluntarism and industry self-regulation were the obvious and preferred way to go. This certainly made things easier for governments, but that experiment has run its course and been found wanting too many times in too many ways.

Trust in high tech is at an all-time low. It cannot be rebuilt against a background of near total ignorance of what is truly going on, particularly inside the bigger companies that dominate so much of the internet.

We have no context or means of judging what actual priority is being given to addressing children’s interests as internet users. This has to change and it will not be acceptable if we simply allow companies to tell us, unilaterally, how they propose to be more transparent in the future. There must be some external, independent validation.

However, I don’t think we should rule out any role for self-regulation in respect of any and every issue to do with children and the internet, but the case for it needs to be demonstrated case by case against a background of a much higher and wider level of certainty and transparency about what companies are doing. In November 2017 we are a very long way from that.

Posted in Internet governance, Regulation, Self-regulation

Legislation in the USA

Regular readers will be aware of the twists and turns of the debate surrounding the future of s. 230 of the Communications Decency Act, 1996. That is the thoroughly ill-fitting name for a law which has protected companies like Backpage from being prosecuted for facilitating child prostitution and trafficking.

Republican Senator Portman and Democratic Senator Blumenthal proposed an amendment to the law. Large swathes of the tech industry opposed it.

Not any more, it seems. Big tech has had a change of heart and, as a result, a slightly revised version of the amendment has just passed a crucial hurdle in the Senate, which apparently clears the way, in turn, for a final vote in both Houses.

No doubt people will speculate about the reasons why big tech has shifted its position, and they are likely to conclude that the companies are running scared because of the obvious and still mounting anger in Washington over “fake news” and related matters. In other words, political will and political circumstances, in the end, are what count.

I trust this moral is not lost on members of the UK Parliament and the powers that be in the European Union.

Not everybody is happy but there you go. Seemingly, bodies such as the Electronic Frontier Foundation still say it is an “awful Bill”. Why? Because they believe that, in order to avoid any risk of being liable for facilitating child prostitution or trafficking, internet businesses will “err on the side of removal”. Three cheers for that is all I can say.

Obviously, I am not in favour of innocent speech being wrongly deleted but if that is truly a concern perhaps it will act as a spur to find ever more ingenious ways of solving it. Erring on the side of protecting children is not a shortcoming as far as I am concerned. The wonder and the shame is that it has taken this long to get there.

Posted in Default settings, E-commerce, Internet governance, Regulation, Self-regulation, Uncategorized

Silicon Valley on the rack

This morning the Today programme carried a big piece about Twitter, Google and Facebook being up in front of a US Congressional Committee to explain how, during the last Presidential election, their platforms came to be exploited in highly undesirable ways, at least one of which appears also to have been illegal.

The ostensibly illegal part concerns the manner in which 126 million Americans were exposed to Russian-backed election content: 120 “fake Russian-backed pages” created 80,000 posts which were seen directly by 29 million Americans and probably reached many more through sharing, liking and following.

Now whether you were glad or sad that Donald Trump won is sort of beside the point. What is at issue here is the way something as fundamental as the system for electing the President was so easily manipulated. Did someone fall asleep at the wheel? And if “Big Tech” can get something like this wrong, what else might be amiss?

Remember, Hillary Clinton got more votes than Donald Trump but her votes were “maldistributed”. The Electoral College got Trump across the line, not a numerical majority. The implication is that micro-targeting specific demographics in swing States was to some degree or other responsible for the outcome, and micro-targeting is what the big platforms claim to be good at.

Before anyone in the UK starts getting uppity or superior about maldistribution, similar things have happened here, most notably in the 1951 General Election (no, I don’t remember it!). That year Labour got its largest ever number of votes and nearly 1% more of the vote share than the Tories, but the Tories got 321 seats to Labour’s 295. Winston Churchill was Prime Minister again. Enough already with the history lesson.

Now if there is one thing which is bound to get the attention of politicians it is someone else messing with the well-understood mechanisms and methods by which their elections are conducted.

I doubt anyone imagines, even for a moment, that any of the businesses currently on the rack positively wanted or intended to help Trump or harm Clinton. In a sense it is much worse than that. They were tricked or gamed by outsiders who obviously understood their technology better than they did and knew how to bend it for their own dark ends.

I guess if your motivating spirit is “move fast and break things” or you believe it is always better to apologise than seek permission, something like this was inevitable. But the gods will have their terrible revenge and we might be witnessing the beginnings of it on Capitol Hill right now.

So did Silicon Valley get Trump elected? That seems unlikely, but what is astonishing, even shocking, is that nobody can say they didn’t, or even that there is room for reasonable doubt. And this is all coming to light after the event, with each of the companies acknowledging they had no idea they were being played in the way they were.

Saying “oops, sorry” somehow doesn’t cut it.

Turning to the UK again, I wonder how much of the EU referendum vote can be explained by similar skullduggery.

The ability to micro-target specific demographics to sell corn chips or holidays on the Costa Brava is one thing. But when that sort of power starts to engage with electoral politics it is very worrying, particularly if, at the same time, it allows unambiguous lies and the grossest distortions to be peddled with impunity or to play into filter bubbles.

The truth is that in a first-past-the-post Parliamentary system such as ours the outcome of a General Election is determined in a relatively limited number of seats by a comparatively small number of voters. It is one of the reasons I became a convert to some form of proportional representation.

I trust the powers that be are on the case.

Posted in Regulation, Self-regulation