A big but solvable problem

The great majority of social media sites say that 13-year-olds may use their services, but because few, if any, carry out any kind of age verification, we know that 75% of all 10-12-year-olds in the UK are on them anyway. Moreover, on Twitter and Tumblr, for example, there is a phenomenal amount of hardcore porn published by individual account holders. For practical purposes, it can be viewed by anyone.

There is no justification for a site that specifies 13 as its minimum age to provide ready access to 18+ material, and if the same site also knows that large numbers of sub-13s are in fact customers, its position is completely indefensible.

We cannot let the Digital Economy Bill pass into law without addressing this. It begins its Committee stage in the House of Lords this week.

As it stands, larger commercial porn sites will be required to introduce age verification to limit access to over-18s – that’s brilliant – but everyone else, in particular social media sites such as Twitter and Tumblr, escapes any such requirement. Without more, we may therefore end up driving kids away from the humungous porn publishers who operate through their own sites straight into the virtual clutches of porn merchants who operate via other people’s platforms.

My suggestion is that where the proposed new Regulator identifies accounts or services that persistently publish pornography on a significant scale (proportionality is important) via any social media site, service or platform, the Regulator should have the power to require the owner of that site, service or platform either to delete the material or the account, or to put it behind a robust age verification gateway. The whole site or service would never be blocked or restricted.

Some social media sites or services have a facility to allow account holders to indicate that their content is “adult”. They may even provide a way to restrict access to it, but if these measures can be ignored or circumvented with a few mouse clicks, they don’t count. If the profile is attracting a large enough volume of hits, it qualifies for the Regulator’s attention.

Does my idea break with a core principle of the Digital Economy Bill, namely that the publisher must be commercial? Not necessarily, because if the publisher is operating at the required scale it is exceptionally likely there is a commercial component in there somewhere, even if it isn’t instantly apparent. But the key point has to be scale.

Posted in Age verification, Default settings, E-commerce, Pornography, Regulation, Self-regulation

A game changer?

It’s not every day of the week that Winnipeg can advance a claim to be the centre of world attention. If it wasn’t for the small matter of that Inauguration thing going on a little way south of them, Winnipeg would have been, or at any rate should have been, more in the spotlight of late.

Cybertip Canada is the hotline for reporting child abuse images in the land of the maple leaf. They are based in Winnipeg and have just launched Project Arachnid. Canadian law enforcement have been strongly supportive, as has NCMEC. Quite simply, Arachnid is a web crawler linked to a database of known child abuse images. The database is constructed using PhotoDNA. Cybertip Canada sends Arachnid out to see if it can find any matches.
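
For readers who want a concrete picture of the mechanics, here is a minimal sketch of the kind of crawler this describes. It is emphatically not Arachnid’s actual code, and PhotoDNA itself is proprietary, so a generic open-source perceptual hash (the ImageHash library) stands in for the fingerprinting step; the hash value and threshold below are invented for illustration.

```python
# A minimal, illustrative hash-matching crawler - NOT Arachnid or PhotoDNA.
# A generic perceptual hash (pHash) stands in for Microsoft's proprietary one.
import io
import re

import requests            # pip install requests
from PIL import Image      # pip install Pillow
import imagehash           # pip install ImageHash

# Stand-in database of fingerprints of known illegal images (value invented).
KNOWN_HASHES = {imagehash.hex_to_hash("d1d1d1d1d1d1d1d1")}
MATCH_THRESHOLD = 8        # max Hamming distance still counted as a match

def image_urls(page_url: str) -> list[str]:
    """Fetch a page and pull the src attribute out of every <img> tag."""
    html = requests.get(page_url, timeout=10).text
    return re.findall(r'<img[^>]+src="([^"]+)"', html)

def check_page(page_url: str) -> list[str]:
    """Return the image URLs on a page that match a known fingerprint."""
    matches = []
    for url in image_urls(page_url):
        try:
            img = Image.open(io.BytesIO(requests.get(url, timeout=10).content))
        except Exception:
            continue                    # unreachable or unreadable image: skip
        fingerprint = imagehash.phash(img)
        if any(fingerprint - known <= MATCH_THRESHOLD for known in KNOWN_HASHES):
            matches.append(url)         # real system: notify host, log for police
    return matches
```

The point of using a perceptual rather than a cryptographic hash is that a resized or re-encoded copy of a known image still registers as a match – the robustness PhotoDNA delivers at a far higher level of accuracy.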

The IWF was the first hotline in the world to operationalize proactive searching for child abuse images on the web, but with Arachnid Cybertip Canada have taken it to a whole new level. The early results are completely mind-blowing. There has never been anything like it before. Not ever, at least not that has reached the public domain or my ears. And note this – Arachnid can work with equal facility on the dark web and the open web.

In trials Cybertip Canada ran for six weeks prior to launch, Arachnid processed over 230 million web pages, identified 5.1 million unique web pages hosting child sex abuse material and detected over 40,000 unique images of child sex abuse. Now read that again. They did all that in only six weeks.

Part of the beauty of the way Arachnid works is that, having located a known image, it automatically identifies and notifies the hosting company, asking them to delete it. All relevant information is passed on to law enforcement agencies. Where appropriate, and where one exists, the hotline in the country concerned is also notified, and via this route, if through no other, I imagine the URLs will find their way into the INHOPE and other databases so they can be locally deployed.

Is there a police force anywhere in the world ready to deal with a potentially gigantic increase in information about web pages containing child abuse images? Almost certainly not, but if Arachnid leads to a higher number of images being removed more swiftly than before, that has to be a good thing. There is no doubt whatsoever that is what victims want. The police will have to catch up later.

Who knows, maybe what we learn from Arachnid about the volumes of child abuse images out there and the limitations imposed by current levels of police resources on law enforcement’s ability to address them will finally convince politicians who control the purse strings to cough up more spondulix?

Normally sober, steady individuals, not given to excessive exuberance, believe this Canadian export has the potential to be a game changer. It is likely to have profound implications for the way hotlines around the world work and relate to each other.

I’m told it is still relatively easy to locate child abuse images if you search in languages that do not use a Latin script and, if that’s true, it will raise a question about how well all the tools currently in use are being integrated into some of the otherwise ostensibly global systems. But if it’s not true, or we can find a way to fix it and integrate the results into Arachnid, then 2017 suddenly starts to look a great deal brighter than the early portents were suggesting.


Posted in Child abuse images, Regulation, Self-regulation

Answers, answers

2017 has set off at a blistering pace.

The first week is not over yet and already we have had two important announcements.

Searching for answers

The NSPCC got in first with its call for parents to make greater use of the safe search functions that are available in pretty much every search engine, or at any rate all the well-known ones. We are provided with some graphic examples of how things can go wrong if a child types in what most of us would think of as perfectly innocent words a young person might use a lot.

As part of its campaign the NSPCC has produced an extremely useful guide showing how to set up safe search and other parental controls on every type of device or internet access point a child is likely to use.
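
For those curious about what “setting up safe search” amounts to technically, the big search engines accept a documented query parameter that forces filtered results – safe=active on Google and, as far as I know, adlt=strict on Bing. The sketch below is mine, not the NSPCC’s, and is purely illustrative:

```python
# Illustrative only: forcing safe search by rewriting a search URL.
# Google documents the safe=active parameter; adlt=strict is Bing's equivalent.
from urllib.parse import urlsplit, urlunsplit, parse_qs, urlencode

FORCED_PARAMS = {
    "www.google.com": {"safe": "active"},
    "www.bing.com": {"adlt": "strict"},
}

def force_safe_search(url: str) -> str:
    """Return the same search URL with the engine's safe-search flag set."""
    parts = urlsplit(url)
    extra = FORCED_PARAMS.get(parts.netloc)
    if not extra:
        return url                     # unknown engine: leave the URL untouched
    query = parse_qs(parts.query)
    query.update({key: [value] for key, value in extra.items()})
    return urlunsplit(parts._replace(query=urlencode(query, doseq=True)))

print(force_safe_search("https://www.google.com/search?q=innocent+word"))
# -> https://www.google.com/search?q=innocent+word&safe=active
```

The same effect can be achieved for a whole household at the router rather than in each browser, which is why guides like the NSPCC’s can cover every access point: Google, for one, publishes a special DNS name (forcesafesearch.google.com) for exactly this purpose.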

But here’s the thing. We now know that one in three of all internet users in the world is a legal minor – below the age of 18 – and that this rises to nearly one in two in parts of the developing world. In the UK the proportion is roughly one in five. We also know there are nearly 19 million families in the UK. Of these, about 8 million have dependent children, and internet penetration in households with dependent children is well-nigh 100%.

Thus, whatever way you look at it, the internet is a medium in which children are a very substantial and persistent presence. The internet may, or may not, topple dictators but it definitely helps little Susie with her spelling. That being so, what is the argument against having safe search turned on by default? Anyone who doesn’t want to use it can turn it off, but it should be that way around. No parent should have to jump through hoops to do whatever can be done at a technical level to keep unwanted and unsuitable stuff away from their kids. I know the false-sense-of-security argument, but what about the haven’t-got-a-clue/can’t-read-English/too-stressed-right-now/technology-freaks-me-out-but-I-know-my-kids-need-it position? Do we just say tough and move on? I hope not.

Commissioning answers

Next, today saw the publication of Growing up Digital, an excellent report by the Children’s Commissioner for England. It was another call to action. There is a completely brilliant section – written by a lawyer from Schillings – where Instagram’s Ts&Cs are translated into plain language that a younger person might have a better chance of understanding. Try this for size. Here are several child-friendly new clauses…

  1. Officially you own any original pictures and videos you post, but we are allowed to use them, and we can let others use them as well, anywhere around the world. Other people might pay us to use them and we will not pay you for that.
  2. Although you are responsible for the information you put on Instagram, we may keep, use and share your personal information with companies connected with Instagram. This information includes your name, email address, school, where you live, pictures, phone number, your likes and dislikes, where you go, who your friends are, how often you use Instagram, and any other personal information we find such as your birthday or who you are chatting with, including in private messages (DMs). We are not responsible for what other companies might do with this information. We will not rent or sell your personal information to anyone else without your permission. When you delete your account, we keep this personal information about you, and your photos, for as long as is reasonable for our business purposes. You can read more about this in our “Privacy Policy”. This is available at: http://instagram.com/legal/privacy/
  3. Although Instagram is not responsible for what happens to you or your data while you use Instagram, we do have many powers:

– We might send you adverts connected to your interests which we are monitoring. You cannot stop us doing this and it will not always be obvious that it is an advert.

– We can change or end Instagram, or stop you accessing Instagram at any time, for any reason and without letting you know in advance.

– We can also delete posts and other content randomly, without telling you, for any reason. If we do this, we will not be responsible for paying out any money and you won’t have any right to complain.

– We can force you to give up your username for any reason.

– We can, but do not have to, remove, edit, block and/or monitor anything posted or any accounts that we think breaks any of these rules.

– We are not responsible if somebody breaks the law or breaks these rules; but if you break them, you are responsible.

  4. Although you do not own your data, we do own ours. You may not copy and paste Instagram logos or other stuff we create, or remove it or try to change it. You should use common sense and your best judgment when using Instagram.

Beyond this I will not go through those sections of the Children’s Commissioner’s report which repeat the all-too-familiar litany of shortcomings complained about by large numbers of children, who told the researchers they felt the social media platforms were too unresponsive. In the days of vinyl it was possible for a needle to get stuck in a groove and keep on playing the same soundtrack over and over again. That’s where we are right now with a lot of this.

Yes, it is true that Facebook, Google and others provide stunning services and do some brilliant child protection work, particularly around illegal content and illegal behaviour. Their wider philanthropy and educational initiatives are to be applauded, but they provide no alibi for inaction or obfuscation in other parts of the space. Read on.

Answers came there none

The Commissioner tells us

It is currently impossible to know how many children are reporting content, what they are reporting and how these reports are dealt with. When the Children’s Commissioner requested information from Facebook and Google about the numbers and types of requests it receives from minors to remove content neither was able to provide it.

So there we have it. These two big beasts must know what is going on within their businesses but they choose not to disclose it, even to a body which is dedicated solely to children’s interests. This cannot be allowed to continue.

Getting the answers

The denouement of the Report – its crowning glory – is its call for the creation of an e-Ombudsman for children. Australia has one. We need one. And it has to be a body with the legal power to compel online businesses to answer its questions and obey its directions. Otherwise they won’t if they fear it might harm their business interests.

Posted in Default settings, E-commerce, Facebook, Google, Privacy, Regulation, Self-regulation

Return to Sender – nothing new here

According to a well-known search engine there is a dispute about the provenance of what is probably the best-known definition of madness: the one where insanity is described as doing the same thing over and over yet expecting a different result. Well, whoever actually said it first might have been thinking quite specifically about a report I have just read.

One Internet is a blockbuster published by the Global Commission on Internet Governance and Chatham House. While it will doubtless constitute a valuable source of references for scholars, I have two major criticisms. First, there’s the perfunctory, almost casual way children’s and young people’s use of the internet is discussed, missing several important points not by inches but by miles. Second, and what leaps off the page, is the dated air of unreality. One Internet could have been written, and probably was, any time in the mid to late 1990s. As a testament to the old religion it has a certain charm but that’s it.

One Internet is a manifesto for an imagined status quo ante that may never actually have existed, or if it did, it was for the briefest of moments. What One Internet most definitely is not is a manifesto for the future. It is an egregious expression of hope and optimism delivered in the teeth of self-evident, almost overwhelming adversity and rapidly growing signs of failure.

Maybe the authors were simply unlucky in terms of timing. Perhaps 18 months or two years ago there was a window, but in the immediate aftermath of Brexit and Trump’s victory in the USA? In sight of growing support for nationalist and isolationist political parties all over Europe and elsewhere, at a time when Russia and China seem more and more determined to do whatever they like, when the glitz, glamour and promise of globalization are fading as things like fake news, online fraud and the Mirai botnet expose the vulnerabilities of the internet and all who depend on her, it seems, to say the least, unlucky to bring out a paper which so strongly argues for the same old, same old.

Never before in our lifetimes has the international order seemed more threatened and unstable. The report itself acknowledges this to some degree when it says the future of the internet does indeed hang in the balance. But that future is not an abstraction, a contained system co-existing in a parallel universe; it is rooted in what is happening in the world where we put our feet.

If ever there was a need for new thinking it is now. You will not find any in One Internet.

Three possible futures

One Internet describes three possible futures. The first is a “Dangerous and Broken Cyberspace”. Not an option favoured by the authors. It arises because, inter alia, the…

inadvertent effects of government regulation are so high that individuals and companies curtail their usage (of the internet). Governments impose sovereign-driven restrictions that further fragment the internet and violate basic human rights.

Wow. Those naughty Governments. I suspect the UK would be on this list of culprits.

The next is Uneven and Unequal Gains. Again this is frowned upon, but how might it come about? It comes about because

The economic value of the Internet is compromised by governments failing to respond appropriately to the challenges of the digital era, choosing instead to assert sovereign control through trade barriers, data localization and censorship and by adopting other techniques that fragment the network in ways that limit the free flow of goods, services, capital and data.

Governments to blame again. Note that industry is not criticized here. Governments are just getting in the way.

Then there’s outcome number three. This is the big one. The target.

Broad, Unprecedented Progress

In (this) scenario, the Internet is energetic, vigorous and healthy. A healthy Internet produces unprecedented opportunities for social justice, human rights, access to information and knowledge, growth, development and innovation.

And how are we to reach these sunny uplands? Easy.

We call on governments, private corporations, civil society, the technical community and individuals together to create a new social compact for the digital age. This social compact will require a very high level of agreement among governments, private corporations, civil society, the technical community and individuals. Governments can provide leadership, but cannot alone define the content of the social compact. Achieving agreement and acceptance will require the engagement of all stakeholders in the Internet ecosystem.

To scold national governments with pious platitudes seems close to insulting. One Internet is an argument for a particular business model, support for which is being ever more searchingly questioned by growing numbers of people on every continent, and it is therefore also being questioned by Governments who are put there by those same people.

Instead of One Internet’s starting point being a very obvious ideological commitment to preserving their singular vision of the internet, the project could have asked itself:

What is it about the way the internet is working at the moment that is causing so many problems for so many Governments, making them feel compelled to act? How can we address these, and what part might internet governance institutions play in that process?

Rather, the report seems to argue that the internet’s palpable imperfections and key parts of industry’s persistent shortcomings are the price we all have to pay, in perpetuity, in order to retain the new technologies’ undoubted benefits. We should just get used to it.

No, No, No, as a famous former British Prime Minister once said.

Children

One Internet acknowledges an earlier report from GCIG and Chatham House, of which I was a joint author (One in Three), but then it completely overlooks all of its principal recommendations. For example, One Internet expressly endorses the NETmundial statement when it says

NETmundial… mark(s) a major step by all stakeholder groups toward agreement on the basics of Internet governance, including agreement that Internet governance should be carried out through a distributed, decentralized and multi-stakeholder ecosystem.

In the NETmundial statement none of the following words appear, not even once: child, children, youth or young, despite the fact that, as One in Three shows, one in three of all internet users in the world is below the age of 18, and this rises to nearly one in two in parts of the developing world. Whatever else people might imagine the internet is or could become, right now it is a family medium, a children’s medium. The rules of the road need to be rewritten to reflect that. Yet you will look in vain in One Internet for even a hint that the report’s authors are aware of this dimension, much less do they embrace it.

One Internet makes the occasional reference to the issue of child abuse images and to wider issues of child welfare but its tone is hurried and curt.

Intermediary liability

On the vexed but hugely important question of intermediary liability One Internet simply says it endorses the Manila Principles. The agencies that seem to have taken the lead in preparing the Principles, and as far as I can see most of those who have subsequently endorsed them, are drawn from a very narrow spectrum of internet activists. Multistakeholder it is not.

I know of no person of standing who wants to abolish the principle of immunity for internet intermediaries. It is simply wrong – it would be unjust – to attempt to make anyone liable for something they could not have known anything about.

Yet there is no doubt that the principle of immunity for intermediaries has provided too many online businesses with an incentive to do nothing. It is a permanent alibi for inaction and evasion in connection with some of the most important threats to children, e.g. the continued spread of child abuse images and some types of cyberbullying.

We should take a leaf out of the law and practice of data protection. Here it is universally accepted that states not only have a right to impose requirements on businesses in terms of minimum security and other standards but they also have a right to establish independent agencies to carry out inspections to determine how those standards are being observed by any organization that collects, processes or stores personal data.

We know that networks are being abused by a variety of lawbreakers in ways which harm children: just look at the continuing scandal of advertising-supported piracy web sites and the ongoing large-scale distribution of child abuse images.

The case for imposing cyber hygiene obligations in respect of a broad range of internet businesses is clear.

In other words, while the principle of immunity from any substantive offences or civil wrongs should be maintained, companies ought to be required to take reasonable and proportionate steps to detect, eliminate or mitigate any and all unlawful activity taking place on their networks. At the very least companies should be expected to take reasonable and proportionate steps to enforce their own terms and conditions of service, otherwise these Ts&Cs are tantamount to a deceptive practice. An independent inspectorate could have a role here to reassure the public that the designated standards are being observed by everyone who sets up shop in cyberspace.

If these sorts of things were happening, public confidence in the internet might start to rise and Governments would feel under less pressure to intervene and regulate. This wouldn’t fix everything that’s wrong with the world today but in this particular niche it would most assuredly be a step in the right direction.

Posted in Default settings, Internet governance, Privacy, Regulation, Self-regulation

Toys can be tricky

Children’s toys that can connect to the internet featured in eNACSO’s recent publication (June 2016) When ‘free’ isn’t. They were also the principal topic of conversation at a workshop organized earlier this month at the IGF entitled The Internet of Toys and Things. It was standing-room only. This is clearly a hot-button item and it is likely to get hotter. It’s not hard to work out why.

First of all, the repeated stories about fake news and security failures have led one distinguished, independent commentator to question whether or not the internet as a whole is becoming the equivalent of a failed state, that is to say a place where we go at our peril because law, order and security can no longer be guaranteed. Who wants their children playing in or with a failed state?

A concrete illustration of this failed-state idea occurred in October with the Mirai-driven distributed denial of service attack which, inter alia, took Twitter, Netflix and CNN offline, even if only briefly. Here the culpable botnet was utilising household and other objects which form part of the internet of things. Toys might well have been among them.

Toys are a sub-set of the internet of things but they are quite distinct in a very obvious respect: they are close to our children. Extremely close. Thus, to the extent that we lose confidence in the security and stability of the internet of things, or its ability to respect our privacy, parents’ willingness to engage with connected toys is likely to diminish, possibly even vanish altogether. That would be a great pity because the potential for children and young people to benefit from greater connectedness and interactivity with smart systems is almost self-evident. But there are limits. Or ought to be.

Up to now in this blog I have been speaking only of the security and legal dimensions of privacy, in terms of what can happen or what can go wrong with connected toys. But there is a larger question which has nothing to do with privacy, security or current laws. It is to do with parenting.

It would be wrong to think that all connected toys are subject to identical risks or raise identical parenting concerns. They aren’t and they don’t. Moreover, interactive games and toys have been around for a long time with few, if any, ill effects. However, with the huge advances in AI, algorithms and processing power, never mind the connectedness of the internet, modern toys are surpassing anything we have ever seen before or could have imagined. Thus, on top of the privacy or legal concerns, it seems clear to me that a number of profound ethical issues are heaving into view. These need to be discussed and debated in a neutral environment.

The US-based Family Online Safety Institute has published Kids & the Connected Home. It is one of the first and best reports of its kind, comprehensively documenting the range and different types of connected toys currently on the market. Its account of the history of technology and toys is also excellent. But FOSI most decidedly is not neutral ground. Its starting point appears to be: ours not to reason why, here is another technological advance which, by definition, must be good, so let’s look for a pathway that will ensure it succeeds. This is perfectly honourable, if a million miles away from being the full story in a case like this.

Contrast FOSI’s approach with that of Professor Sherry Turkle speaking about one very connected toy: the Barbie doll.

Children naturally confide in their dolls and share their deepest feelings. At a tender age, they need to have their feelings genuinely heard and validated, and they should be sympathized with, uplifted, and supported. Children learn best from sincere dialogue with a real listener.


Some of the toys available today record entire conversations and send them not only to the parents but also to people and/or machines in remote locations, who doubtless can or will analyze them in many different ways. Am I the only one who feels a little unsettled by all this? As someone at the eNACSO workshop said:

It’s one thing occasionally to tiptoe up to your child’s bedroom door and listen in as they say their prayers but to get everything they ever say? That’s spooky.

Somebody somewhere needs to call a halt while we all take a breath and think this through: not just as a privacy issue, not just as a technical challenge confronting companies in terms of how to give parents enough confidence in the privacy dimensions of their products to buy them, but in terms of what this might be doing to parenting and to children’s development. We have been agonising over whether, and how, it is right to use robots to look after the elderly. We also need to agonise a lot more about putting robots into the complex path of our children’s emotional development.


Posted in Default settings, E-commerce, Regulation, Self-regulation, Uncategorized

Not so dark after all

At the recent IGF I attended several workshops where privacy was discussed. At a number of them two things struck me: everyone agreed privacy was desirable, but almost nobody thought it actually existed on the internet. The predominant view among online privacy activists seemed to be that whatever tools you thought you could use to keep your online stuff secret, somebody somewhere – in a business or in a government agency, probably both – either already knew what you were up to or, if they were sufficiently determined, they could work it out.

I remember one young woman from the Balkans who appeared to believe that political activists in totalitarian states who thought the internet was their friend or ally were foolishly or recklessly playing with their own lives and liberty, and probably the lives and liberty of others. No one argued with her.

It was not suggested that anyone could (yet) routinely break messages that had been strongly encrypted, but in terms of tracking and establishing patterns of relationships, that was easy peasy and, more importantly, it was enough for most police states to draw their own conclusions and act accordingly.

But what about the dark net, I hear you ask? We know the cops in a number of countries have had successes in cracking criminal operations that have used the dark net. Could these simply have been lucky breaks, or one-offs?

The other day I read a case that began to open up this obscure cyber corner. It gave me reason to believe that even on the dark net we can get the bad guys on the run, or at least unsettle them and make them think it is no longer a guaranteed safe haven.

It appears FBI agents took over and ran a machine that operated on the dark net to distribute child abuse images. Using what they called a “Network Investigative Tool”, which they put on the offending server, the FBI were able to collect identifying information about people who logged in. Over 200 have already been arrested and charged in the USA following this sting operation, but in the course of the action the Feds were able to gather information about 100,000 users in 120 countries. Bravo. Let’s hope the police forces in these countries were able to do something with the information the FBI handed over.

In some Federal courts in the US a number of the cases brought against individuals were dismissed because the FBI refused to disclose in open court exactly how the “Network Investigative Tool” operated, but evidently not every court made that a condition of proceeding.

In the case that caught my eye, in Washington State, the judge was highly critical of the FBI for, effectively, distributing child abuse images for two weeks while they ran their operation, but he declined to order the FBI to declassify their secret methods. Thank goodness. If that had happened it would only have helped the bad guys to construct a workaround.

The really encouraging aspect, however, is also its most obvious. The dark net is not so dark after all. I am sure heavy-duty, tech-savvy cyber criminals had worked that out some time ago, but for those of us who might otherwise and hitherto have been prone to lapse into bouts of hopeless cyber depression, this blog should act as a little ray of sunshine. Spread the word.

Posted in Child abuse images, Internet governance, Privacy, Regulation, Self-regulation

Only 100

I have written many times about the fantastic work Microsoft did when they developed PhotoDNA – the tool that allows law enforcement and other agencies to create a “digital fingerprint” of a child abuse image. This “fingerprint” can then be deployed on a network to detect any recurrence of the same image, thus either preventing it from being uploaded again or expediting its removal and investigation if it is already being stored there. It’s a great service to the victims depicted in the images and can save a huge amount of police time in several different ways.
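
The deployment pattern is simple even though the hash itself is sophisticated: fingerprint each new upload and compare it with the stored list before accepting the file. Here is a minimal sketch of that gate – PhotoDNA is proprietary and licensed, so a generic perceptual hash (dhash from the open-source ImageHash library) stands in for it, and the stored value and threshold are invented for illustration.

```python
# Sketch of upload-time matching against a database of known fingerprints.
# A generic difference hash stands in for the proprietary PhotoDNA algorithm.
from PIL import Image      # pip install Pillow
import imagehash           # pip install ImageHash

KNOWN = {imagehash.hex_to_hash("0f0f0f0f0f0f0f0f")}   # hypothetical entry
THRESHOLD = 6              # tolerated Hamming distance between fingerprints

def screen_upload(path: str) -> bool:
    """Return True if the file may be stored, False if it matches a known image."""
    fingerprint = imagehash.dhash(Image.open(path))
    if any(fingerprint - known <= THRESHOLD for known in KNOWN):
        # A real deployment would block the upload, preserve the evidence
        # and report to the relevant hotline (NCMEC, IWF, Cybertip etc.).
        return False
    return True
```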

Microsoft did not have to create PhotoDNA. There was no law or regulation obliging them to do so, much less was there a law or regulation saying they then had to give it away for nothing, which is what happens. Microsoft did it because they could and because they knew it would do good in the world. Three cheers, again, for Redmond.

Now switch to Mexico. The Internet Governance Forum is in session. I am in the audience. A senior Microsoft Executive discloses that 100 organizations are using PhotoDNA.

We know that Twitter, Facebook and Google are three of the 100 because they speak about it in public frequently. When I asked Microsoft for information about the other 97 – who are they, what types of businesses or organizations are in there? – the shutters came down. Confidentiality agreements prevented Microsoft from going into detail. All I learned was that within the 97 are law enforcement agencies and NGOs. In other words, the 97 are not all internet businesses.

Marshalling those super sleuthing skills and powers of deduction for which I am justly famous, I decided to check out whether Microsoft itself might be using PhotoDNA and, sure enough, it is. PhotoDNA appears to be integrated into its Cloud Service so, presumably, that means the Microsoft business is on board, as are the unknown or undisclosed number of Cloud Service customers.

But leaving aside Microsoft’s Cloud Service customers, who are covered, I am still deeply shocked at the seemingly very low rate of take-up of PhotoDNA.

Any and every business that provides members of the public with any kind of online storage facility or transmission mechanism must know that sooner rather than later their services will be used by those who are engaged in child abuse.

That being so, why would they NOT deploy a tool like PhotoDNA?

Every online business should be obliged to take all reasonable and proportionate steps to mitigate all forms of unlawful behaviour that might otherwise take place on their networks even though, without actual knowledge, they can never attract substantive liability for the unlawful conduct or content in question.

I am not suggesting we interfere with or change the rules concerning the liability of intermediaries but just as restaurants must always comply with food hygiene laws, online businesses should be required to do likewise in respect of cyber hygiene.

Posted in Child abuse images, Default settings, E-commerce, Facebook, Google, Internet governance, Microsoft, Regulation, Self-regulation