Answers, answers

2017 has set off at a blistering pace.

The first week is not over yet and already we have had two important announcements.

Searching for answers

The NSPCC got in first with its call for parents to make greater use of the safe search functions that are available in pretty much every search engine, or at any rate all the well-known ones. We are provided with some graphic examples of how things can go wrong if a child types in what most of us would think of as perfectly innocent words that a young person might use a lot.

As part of its campaign the NSPCC has produced an extremely useful guide showing how to set up safe search and other parental controls on every type of device or internet access point any child is likely to use.

But here’s the thing. We now know that one in three of all internet users in the world is a legal minor – below the age of 18 – and that this rises to nearly one in two in parts of the developing world. In the UK the proportion is roughly one in five. We also know there are nearly 19 million families in the UK. Of these about 8 million have dependent children, and internet penetration in households with dependent children is well-nigh 100%.

Thus, whichever way you look at it, the internet is a medium in which children are a very substantial and persistent presence. The internet may, or may not, topple dictators but it definitely helps little Susie with her spelling. That being so, what is the argument against having safe search turned on by default? Anyone who doesn’t want to use it can turn it off, but it should be that way around. No parent should have to jump through hoops to do whatever can be done at a technical level to keep unwanted and unsuitable stuff away from their kids. I know the false sense of security argument, but what about the haven’t-got-a-clue/can’t-read-English/too-stressed-right-now/technology-freaks-me-out-but-I-know-my-kids-need-it position? Do we just say tough and move on? I hope not.
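
None of this is technically exotic, by the way. Purely by way of illustration – the exact mechanisms are the search engines’ to document and change – Google supports a safe=active query parameter that forces SafeSearch for an individual request, and there is a network-level equivalent (pointing a home router’s DNS at forcesafesearch.google.com) that locks it on for a whole household. A minimal sketch in Python, just to show how little is involved:

    from urllib.parse import urlencode

    def safe_search_url(query: str) -> str:
        # Build a Google search URL with SafeSearch forced on for this request.
        # "safe=active" is the documented parameter; other engines have their own.
        return "https://www.google.com/search?" + urlencode({"q": query, "safe": "active"})

    print(safe_search_url("innocent words a child might type"))

If it can be done in a handful of lines per request, doing it by default is hardly an engineering challenge.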

Commissioning answers

Next, today saw the publication of Growing Up Digital, an excellent report by the Children’s Commissioner for England. It was another call to action. There is a completely brilliant section – written by a lawyer from Schillings – where Instagram’s Ts&Cs are translated into plain language that a younger person might have a better chance of understanding. Try this for size. Here are several child-friendly new clauses…

  1. Officially you own any original pictures and videos you post, but we are allowed to use them, and we can let others use them as well, anywhere around the world. Other people might pay us to use them and we will not pay you for that.
  2. Although you are responsible for the information you put on Instagram, we may keep, use and share your personal information with companies connected with Instagram. This information includes your name, email address, school, where you live, pictures, phone number, your likes and dislikes, where you go, who your friends are, how often you use Instagram, and any other personal information we find such as your birthday or who you are chatting with, including in private messages (DMs). We are not responsible for what other companies might do with this information. We will not rent or sell your personal information to anyone else without your permission. When you delete your account, we keep this personal information about you, and your photos, for as long as is reasonable for our business purposes. You can read more about this in our “Privacy Policy”. This is available at: http://instagram.com/legal/privacy/
  3. Although Instagram is not responsible for what happens to you or your data while you use Instagram, we do have many powers:

– We might send you adverts connected to your interests which we are monitoring. You cannot stop us doing this and it will not always be obvious that it is an advert.

– We can change or end Instagram, or stop you accessing Instagram at any time, for any reason and without letting you know in advance.

– We can also delete posts and other content randomly, without telling you, for any reason. If we do this, we will not be responsible for paying out any money and you won’t have any right to complain.

– We can force you to give up your username for any reason.

– We can, but do not have to, remove, edit, block and/or monitor anything posted or any accounts that we think breaks any of these rules.

– We are not responsible if somebody breaks the law or breaks these rules; but if you break them, you are responsible.

  4. Although you do not own your data, we do own ours. You may not copy and paste Instagram logos or other stuff we create, or remove it or try to change it. You should use common sense and your best judgment when using Instagram.

Beyond this I will not go through those sections of the Children’s Commissioner’s report which repeat the all-too-familiar litany of shortcomings complained about by large numbers of children, who told the researchers they felt the social media platforms were too unresponsive. In the days of vinyl it was possible for a needle to get stuck in a groove and keep on playing the same soundtrack over and over again. That’s where we are right now with a lot of this.

Yes it is true that Facebook, Google and others provide stunning services and do some brilliant child protection work, particularly around illegal content and illegal behaviour. Their wider philanthropy and educational initiatives are to be applauded but they provide no alibi for inaction or obfuscation in other parts of the space. Read on.

Answers came there none

The Commissioner tells us:

It is currently impossible to know how many children are reporting content, what they are reporting and how these reports are dealt with. When the Children’s Commissioner requested information from Facebook and Google about the numbers and types of requests it receives from minors to remove content neither was able to provide it.

So there we have it. These two big beasts must know what is going on within their businesses but they choose not to disclose it, even to a body which is dedicated solely to children’s interests. This cannot be allowed to continue.

Getting the answers

The denouement of the Report – its crowning glory – is its call for the creation of an e-Ombudsman for children. Australia has one. We need one. And it has to be a body with the legal power to compel online businesses to answer its questions and obey its directions. Otherwise they won’t if they fear it might harm their business interests.

Posted in Default settings, E-commerce, Facebook, Google, Privacy, Regulation, Self-regulation

Return to Sender – nothing new here

According to a well-known search engine, there is a dispute about the provenance of what is probably the best-known definition of madness – the one where insanity is described as doing the same thing over and over yet expecting a different result. Well, whoever actually said it first might have been thinking quite specifically about a report I have just read.

One Internet is a blockbuster published by the Global Commission on Internet Governance and Chatham House. While it will doubtless constitute a valuable source of references for scholars, I have two major criticisms. First there’s the perfunctory, almost casual way children’s and young people’s use of the internet is discussed, missing several important points not by inches but by miles. However, what leaps off the page is the dated air of unreality. One Internet could have been written, and probably was, any time in the mid-to-late 1990s. As a testament to the old religion it has a certain charm but that’s it.

One Internet is a manifesto for an imagined status quo ante that may never actually have existed or, if it did, existed only for the briefest of moments. What One Internet most definitely is not is a manifesto for the future. It is an egregious expression of hope and optimism delivered in the teeth of self-evident, almost overwhelming adversity and rapidly growing signs of failure.

Maybe the authors were simply unlucky in terms of timing. Perhaps 18 months or two years ago there was a window, but in the immediate aftermath of Brexit and Trump’s victory in the USA? In sight of growing support for nationalist and isolationist political parties all over Europe and elsewhere, at a time when Russia and China seem more and more determined to do whatever they like, when the glitz, glamour and promise of globalization are fading as things like fake news, online fraud and the Mirai botnet expose the vulnerabilities of the internet and all who depend on her, it seems, to say the least, unlucky to bring out a paper which so strongly argues for the same old, same old.

Never before in our lifetimes has the international order seemed more threatened and unstable. The report itself acknowledges this to some degree when it says the future of the internet does indeed hang in the balance. But that future is not an abstraction, a contained system co-existing in a parallel universe; it is rooted in what is happening in the world where we put our feet.

If ever there was a need for new thinking it is now. You will not find any in One Internet.

Three possible futures

One Internet describes three possible futures. The first is a “Dangerous and Broken Cyberspace”. Not an option favoured by the authors. It arises because, inter alia, the…

inadvertent effects of government regulation are so high that individuals and companies curtail their usage (of the internet). Governments impose sovereign-driven restrictions that further fragment the internet and violate basic human rights.

Wow. Those naughty Governments. I suspect the UK would be on this list of culprits.

The next is Uneven and Unequal Gains. Again this is frowned upon, but how might it come about? It happens because

The economic value of the Internet is compromised by governments failing to respond appropriately to the challenges of the digital era, choosing instead to assert sovereign control through trade barriers, data localization and censorship and by adopting other techniques that fragment the network in ways that limit the free flow of goods, services, capital and data.

Governments to blame again. Note that industry is not criticized here. Governments are just getting in the way.

Then there’s outcome number three. This is the big one. The target.

Broad, Unprecedented Progress

In (this) scenario, the Internet is energetic, vigorous and healthy. A healthy Internet produces unprecedented opportunities for social justice, human rights, access to information and knowledge, growth, development and innovation.

And how are we to reach these sunny uplands? Easy.

We call on governments, private corporations, civil society, the technical community and individuals together to create a new social compact for the digital age. This social compact will require a very high level of agreement among governments, private corporations, civil society, the technical community and individuals. Governments can provide leadership, but cannot alone define the content of the social compact. Achieving agreement and acceptance will require the engagement of all stakeholders in the Internet ecosystem.

To scold national governments with pious platitudes seems close to insulting. One Internet is an argument for a particular business model, support for which is being ever more searchingly questioned by growing numbers of people on every continent, and it is therefore also being questioned by Governments, who are put there by those same people.

Instead of One Internet’s starting point being a very obvious ideological commitment to preserving a singular vision of the internet, the project could have asked itself:

What is it about the way the internet is working at the moment that is causing so many problems for so many Governments, making them feel compelled to act? How can we address these, and what part might internet governance institutions play in that process?

Rather, the report seems to argue that the internet’s palpable imperfections, and key parts of the industry’s persistent shortcomings, are the price we all have to pay in perpetuity in order to retain the new technologies’ undoubted benefits. We should just get used to it.

No, No, No, as a famous former British Prime Minister once said.

Children

One Internet acknowledges an earlier report from the GCIG and Chatham House of which I was a joint author (One in Three), but then completely overlooks all of its principal recommendations. For example, One Internet expressly endorses the NETmundial statement when it says

NETmundial… mark(s) a major step by all stakeholder groups toward agreement on the basics of Internet governance, including agreement that Internet governance should be carried out through a distributed, decentralized and multi-stakeholder ecosystem.

In the NETmundial statement none of the following words appear, not even once: child, children, youth or young, despite the fact that, as One in Three shows, one in three of all internet users in the world is below the age of 18, and this rises to nearly one in two in parts of the developing world. Whatever else people might imagine the internet is or could become, right now it is a family medium, a children’s medium. The rules of the road need to be rewritten to reflect that. Yet you will look in vain in One Internet for even a hint that the report’s authors are aware of this dimension, much less that they embrace it.

One Internet makes the occasional reference to the issue of child abuse images and to wider issues of child welfare but its tone is hurried and curt.

Intermediary liability

On the vexed but hugely important question of intermediary liability, One Internet simply says it endorses the Manila Principles. The agencies that seem to have taken the lead in preparing the Principles, and as far as I can see most of those who have subsequently endorsed them, are drawn from a very narrow spectrum of internet activists. Multistakeholder it is not.

I know of no person of standing who wants to abolish the principle of immunity for internet intermediaries. It is simply wrong – it would be unjust – to attempt to make anyone liable for something they could not have known anything about.

Yet there is no doubt that the principle of immunity for intermediaries has provided too many online businesses with an incentive to do nothing. It is a permanent alibi for inaction and evasion in connection with some of the most important threats to children, e.g. the continued spread of child abuse images and some types of cyberbullying.

We should take a leaf out of the book of data protection law and practice. Here it is universally accepted that states not only have a right to impose requirements on businesses in terms of minimum security and other standards, they also have a right to establish independent agencies to carry out inspections to determine how those standards are being observed by any organization that collects, processes or stores personal data.

We know that networks are being abused by a variety of lawbreakers in ways which harm children: just look at the continuing scandal of advertising-supported piracy websites and the ongoing large-scale distribution of child abuse images.

The case for imposing cyber hygiene obligations in respect of a broad range of internet businesses is clear.

In other words, while the principle of immunity from any substantive offences or civil wrongs should be maintained, companies ought to be required to take reasonable and proportionate steps to detect, eliminate or mitigate any and all unlawful activity taking place on their network. At the very least companies should be expected to take reasonable and proportionate steps to enforce their own terms and conditions of service, otherwise these Ts&Cs are tantamount to a deceptive practice. An independent inspectorate could have a role here to reassure the public that the designated standards are being observed by everyone who sets up shop in cyberspace.

If these sorts of things were happening, public confidence in the internet might start to rise and Governments would feel under less pressure to intervene and regulate. This wouldn’t fix everything that’s wrong with the world today but in this particular niche it would most assuredly be a step in the right direction.

Posted in Default settings, Internet governance, Privacy, Regulation, Self-regulation

Toys can be tricky

Children’s toys that can connect to the internet featured in eNACSO’s recent publication (June 2016) When ‘free’ isn’t. They were also the principal topic of conversation at a workshop organized earlier this month at the IGF entitled The Internet of Toys and Things. It was standing-room only. This is clearly a hot-button item and it is likely to get hotter. It’s not hard to work out why.

First of all, the repeated stories about fake news and security failures have led one distinguished, independent commentator to question whether or not the internet as a whole is becoming the equivalent of a failed state, that is to say a place where we go at our peril because law, order and security can no longer be guaranteed. Who wants their children playing in or with a failed state?

A concrete illustration of this failed state idea occurred in October with the Mirai-driven distributed denial of service attack which, inter alia, took Twitter, Netflix and CNN offline, even if only briefly. Here the culpable botnet was utilising household and other objects which form part of the internet of things. Toys might well have been among them.

Toys are a sub-set of the internet of things but they are quite distinct in a very obvious respect: they are close to our children. Extremely close. Thus, to the extent that we lose confidence in the security and stability of the internet of things, or its ability to respect our privacy, parents’ willingness to engage with connected toys is likely to diminish, possibly even vanish altogether. That would be a great pity because the potential for children and young people to benefit from greater connectedness and interactivity with smart systems is almost self-evident. But there are limits. Or ought to be.

Up to now in this blog I have been speaking only of the security and legal dimensions of privacy, in terms of what can happen or what can go wrong with connected toys. But there is a larger question which has nothing to do with privacy, security or current laws. It is to do with parenting.

It would be wrong to think that all connected toys are subject to identical risks or raise identical parenting concerns. They aren’t and they don’t. Moreover, interactive games and toys have been around for a long time with few, if any, ill effects. However, with the huge advances in AI, algorithms and processing power, never mind the connectedness of the internet, modern toys are surpassing anything we have ever seen before or could have imagined. Thus, on top of the privacy or legal concerns, it seems clear to me that a number of profound ethical issues are heaving into view. These need to be discussed and debated in a neutral environment.

The US-based Family Online Safety Institute has published Kids & the Connected Home. It is one of the first and best reports of its kind, comprehensively documenting the range and different types of connected toys currently on the market. Its account of the history of technology and toys is also excellent. But FOSI most decidedly is not neutral ground. Its starting point appears to be: ours not to reason why, here is another technological advance which, by definition, must be good, so let’s look for a pathway that will ensure it succeeds. This is perfectly honourable, if a million miles away from being the full story in a case like this.

Contrast FOSI’s approach with that of Professor Sherry Turkle speaking about one very connected toy: the Barbie doll.

Children naturally confide in their dolls and share their deepest feelings. At a tender age, they need to have their feelings genuinely heard and validated, and they should be sympathized with, uplifted, and supported. Children learn best from sincere dialogue with a real listener.


Some of the toys available today record entire conversations and not only send them to the parents but also to people and/or machines in remote locations, who doubtless can or will analyze them in many different ways. Am I the only one who feels a little unsettled by all this? As someone at the eNACSO workshop said:

It’s one thing occasionally to tiptoe up to your child’s bedroom door and listen in as they say their prayers but to get everything they ever say? That’s spooky.

Somebody somewhere needs to call a halt while we all take a breath and think this through, not just as a privacy issue. Not just as a technical challenge confronting companies in terms of how to give parents confidence in the privacy dimensions of their products so they will buy them, but in terms of what this might be doing to parenting and to children’s development. We have been agonising over whether, and how, it is right to use robots to look after the elderly. We also need to agonise a lot more about putting robots into the complex path of our children’s emotional development.


Posted in Default settings, E-commerce, Regulation, Self-regulation, Uncategorized

Not so dark after all

At the recent IGF I attended several workshops where privacy was discussed. At a number of them two things struck me: everyone agreed privacy was desirable, but almost nobody thought it actually existed on the internet. The predominant view among online privacy activists seemed to be that whatever tools you thought you could use to keep your online stuff secret, somebody somewhere – in a business or in a government agency, probably both – either already knew what you were up to or, if they were sufficiently determined, they could work it out.

I remember one young woman from the Balkans who appeared to believe political activists in totalitarian states who thought the internet was their friend or ally were foolishly or recklessly playing with their own lives and liberty, and probably the lives and liberty of others. No one argued with her.

It was not suggested that anyone could (yet) routinely break messages that had been strongly encrypted, but in terms of tracking and establishing patterns of relationships, that was easy peasy and, more importantly, it was enough for most police states to draw their own conclusions and act accordingly.
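
To see why metadata alone is enough, consider a toy sketch – the names and records below are entirely invented. Given nothing more than who-contacted-whom logs, with no content and no broken encryption, a few lines of counting already reveal the shape of a network and its strongest links:

    from collections import Counter

    # Invented metadata: (sender, recipient) pairs only - no message content at all.
    records = [
        ("alia", "boris"), ("alia", "boris"), ("boris", "alia"),
        ("alia", "chen"), ("chen", "boris"), ("dmitri", "alia"),
    ]

    # Count how often each unordered pair of people communicates.
    pair_counts = Counter(frozenset(pair) for pair in records)

    # The most frequently communicating pairs are the strongest candidate links.
    for pair, count in pair_counts.most_common(3):
        print(sorted(pair), count)

Scale that up to millions of records, add timestamps and locations, and it is not hard to see why the activists were so pessimistic.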

But what about the dark net, I hear you ask? We know the cops in a number of countries have had successes in cracking criminal operations that have used the dark net. Could these simply have been lucky breaks, or one-offs?

The other day I read a case that began to open up this obscure cyber corner. It gave me reason to believe that even on the dark net we can get the bad guys on the run, or at least unsettle them and make them think it is no longer a guaranteed safe haven.

It appears FBI agents took over and ran a machine that operated on the dark net to distribute child abuse images. Using what they called a “Network Investigative Tool”, which they put on the offending server, the FBI were able to collect identifying information about people who logged in. Over 200 have already been arrested and charged in the USA following this sting operation, but in the course of the action the Feds were able to gather information about 100,000 users in 120 countries. Bravo. Let’s hope the police forces in these countries were able to do something with the information the FBI handed over.

In some Federal courts in the US a number of the cases that were brought against individuals were dismissed because the FBI refused to disclose in open court exactly how the “Network Investigative Tool” operated, but evidently not every court made that a condition of proceeding.

In the case that caught my eye, in Washington State, the judge was highly critical of the FBI for, effectively, distributing child abuse images for two weeks while they ran their operation, but he declined to order the FBI to declassify their secret methods. Thank goodness. If that had happened it would only have helped the bad guys to construct a workaround.

The really encouraging aspect, however, is also its most obvious. The dark net is not so dark after all. I am sure heavy-duty, tech-savvy cyber criminals had worked that out some time ago, but for those of us who might otherwise and hitherto have been prone to lapse into bouts of hopeless cyber depression this blog should act as a little ray of sunshine. Spread the word.

Posted in Child abuse images, Internet governance, Privacy, Regulation, Self-regulation

Only 100

I have written many times about the fantastic work Microsoft did when they developed PhotoDNA – the tool that allows law enforcement and other agencies to create a “digital fingerprint” of a child abuse image. This “fingerprint” can then be deployed on a network to detect any recurrence of the same image, thus either preventing it from being uploaded again or expediting its removal and investigation if it is already being stored there. It’s a great service to the victims depicted in the images and can save a huge amount of police time in several different ways.
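
For readers wondering what “deploying a fingerprint” involves in practice, here is a deliberately simplified sketch. It is not PhotoDNA itself – that is proprietary, and its strength is a robust perceptual hash that survives resizing and re-compression – and the hash list and function names below are invented; but the check that runs at upload time has essentially this shape:

    import hashlib

    # Invented example entry; a real deployment would hold PhotoDNA hashes of
    # known child abuse images supplied by bodies such as NCMEC or the IWF.
    KNOWN_HASHES = {
        "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
    }

    def fingerprint(image_bytes: bytes) -> str:
        # SHA-256 stands in here for a perceptual hash; unlike PhotoDNA it will
        # only ever match byte-identical copies of an image.
        return hashlib.sha256(image_bytes).hexdigest()

    def should_block_upload(image_bytes: bytes) -> bool:
        # Block, or flag for review, any upload whose fingerprint is already known.
        return fingerprint(image_bytes) in KNOWN_HASHES

The hard parts – building and sharing the hash lists and making the matching robust – are exactly what Microsoft gives away for nothing, which makes the low take-up described below all the more striking.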

Microsoft did not have to create PhotoDNA. There was no law or regulation obliging them to do so, much less was there a law or regulation saying they then had to give it away for nothing, which is what happens. Microsoft did it because they could and because they knew it would do good in the world. Three cheers, again, for Redmond.

Now switch to Mexico. The Internet Governance Forum is in session. I am in the audience. A senior Microsoft Executive discloses that 100 organizations are using PhotoDNA.

We know that Twitter, Facebook and Google are three of the 100 because they speak about it in public frequently. When I asked Microsoft for information about the other 97 – who are they, what types of businesses or organizations are in there? – the shutters came down. Confidentiality agreements prevented Microsoft from going into detail. All I learned was that within the 97 are law enforcement agencies and NGOs. In other words, the 97 are not all internet businesses.

Marshalling those super sleuthing skills and powers of deduction for which I am justly famous, I decided to check out whether Microsoft itself might be using PhotoDNA and, sure enough, it is. PhotoDNA appears to be integrated into its Cloud Service so, presumably, that means the Microsoft business is on board, as are the unknown or undisclosed number of Cloud Service customers.

But leaving aside Microsoft’s Cloud Service customers, who are covered, I am still deeply shocked at the seemingly very low rate of take-up of PhotoDNA.

Any and every business that provides members of the public with any kind of online storage facility or transmission mechanism must know that sooner rather than later their services will be used by those who are engaged in child abuse.

That being so, why would they NOT deploy a tool like PhotoDNA?

Every online business should be obliged to take all reasonable and proportionate steps to mitigate all forms of unlawful behaviour that might otherwise take place on their networks, even though, without actual knowledge, they can never attract substantive liability for the unlawful conduct or content in question.

I am not suggesting we interfere with or change the rules concerning the liability of intermediaries but just as restaurants must always comply with food hygiene laws, online businesses should be required to do likewise in respect of cyber hygiene.

Posted in Child abuse images, Default settings, E-commerce, Facebook, Google, Internet governance, Microsoft, Regulation, Self-regulation

More progress

About 10.00 p.m. last night the Digital Economy Bill completed its passage through the House of Commons. The key clauses on age verification had initially been tabled by Claire Perry MP with signatures from Members of Parliament from seven different political parties. The great news, previously relayed through my blogs, was that the Government, in essence, adopted Claire Perry’s amendments and made them their own. They went through without demur.

It is clear there are legitimate concerns around the privacy dimensions of how the policy will work in practice but, as to the main idea – of using age verification to restrict access to commercial pornographic websites – no one expressed any opposition at all.

The Bill now goes to our second Chamber, the House of Lords, where it is likely to occupy their Lordships until mid-February-ish. No doubt there will also be a “run-in” period, as there was when age verification for online gambling was introduced, so it could yet be a while before the new regime finally kicks in.

Almost certainly in the Lords there will be probing around some of the privacy angles, but the chances of any of the key parts of the Bill affecting children being materially altered are extremely close to zero. This was a Manifesto pledge and the elected House has spoken.

Well done Claire Perry and well done the Government and all the political parties that helped get this measure through.

The internet is meant to be all about innovation. This is certainly a major innovative initiative and I am very pleased the UK is taking it. The eyes of the democratic world will be upon us and when we have demonstrated that the approach works, you can be sure many other countries will follow suit.

As I have said before – the internet is a family medium, a children’s medium, just as much as it is anything else, and its rules of the road will have to reflect that. Goodbye Wild West. Hello civilization.

Posted in Default settings, E-commerce, Internet governance, Pornography, Regulation, Self-regulation

Community values trump dead Utopian vision

The debate taking place in the UK about the introduction of age verification for pornography sites is, so to speak, exposing a number of very strange arguments. Some are advanced by people such as the Open Rights Group (ORG) who are flailing around looking for any and every reason to be against the government’s proposals.

Entering into the spirit of the “post-truth age” in which we all now seem to be living, the ORG is not averse to rearranging or distorting the facts. For example, in their opening salvos they spoke about the government planning to ban “erotica”. Untrue. That word does not appear anywhere in the Digital Economy Bill. Moreover the BBFC is the body that will have the responsibility for implementing the legislation, and they know where to draw the line between erotica and pornography even if the ORG doesn’t.

We were told every type of pornography site is being hit. Incorrect. Only commercial sites operating on a significant scale will be subject to the legislation. Anyone who read the draft legislation would have seen that. The words are clear. So either ORG didn’t read the Bill or they did and chose to misrepresent the position.

Then in an interview in The Guardian we were informed that if Parliament passes the Bill with the age verification clauses in it we will be putting ourselves alongside Turkey and Saudi Arabia. Excuse me? In Saudi Arabia the intention is to deny access to pornography to everyone. That is not the case here. All the UK is trying to do is restrict access by children, in accordance with our existing laws. Trying to establish guilt by the smear of association is a desperate tactic at the best of times but when it is also based on a fiction it just seems, well, pathetic.

However, if only by accident, the ORG has hit on a couple of points where I think they are on to something. One I agree with, the other I most decidedly do not. Let’s go with the latter first.

There are four types of pornographic material circulating on the internet. One is straightforwardly and indisputably illegal: child abuse images, and we already have a good way of dealing with them. They are not at issue here. The second is material which can be viewed in public cinemas or bought over the internet. It has been classified as 18 by the BBFC. Sites displaying this type of material will be caught by the new law. That’s logical.

Next is material which has been rated R18. Under our current law this should only be sold on the premises of licensed sex shops to persons over the age of 18. Yet it is always available online on commercial pornography sites. It shouldn’t be, but given the practical difficulties – the very reasons why the Bill has been brought forward – there have never been any prosecutions. It is a law honoured more in the breach than in the observance.

In the case of R18 the UK government, in effect, is proposing to liberalise the law because in future such material will be lawfully available online on sites which have age verification. I’m OK with that.

Finally we have material which is so extreme or disgusting that the BBFC refuses it any kind of classification. I am not going to go into detail on a family show but if you want to know more you could do worse than look here. Yet this material is also viewable on porn sites currently easily accessible by children. That is wrong. It will have to go if the sites want to stay legal within the framework of the coming age verification regime.

Nobody is suggesting it is necessarily illegal for individuals to perform any of the acts listed, but the question here is this: should images of such things be available and viewable on a medium where, worldwide, 1 in 3 of all users is a child, rising to 1 in 2 in parts of the developing world? I don’t think so. By the way, in the UK and within the EU the proportion is around 1 in 5.

I know the internet utopians hate to hear this but the internet today is a family medium or a family service as much as it is anything else, and the rules of the road are going to have to reflect that. John Perry Barlow’s vision hasn’t worked. That shimmering image has evaporated. Get over it and don’t blame child protection advocates.

The argument is about the supremacy of community values over techno-determinism. The UK should be able to have the internet it wants for its children. I believe that up to now the UK has been failing in its legal obligations to protect children by not having an age verification law of the kind being proposed. But, hey… that will soon be behind us so let’s not dwell on it.

Where do I align with ORG and its friends?

The core of my argument is around protecting children from age-inappropriate material. On questions of privacy I think any and all solutions which are to be deployed to carry out age verification should be privacy-friendly, privacy-compliant and scam-proof. Although I am a member of the BBFC’s children’s viewing advisory panel I am not privy to their plans or thinking on this aspect, but I would very much like to see some sort of arrangement emerge which involves the Office of the Information Commissioner being given a role in deciding which age verification solutions are acceptable and which are not. I have grave reservations about simply using credit cards.

Posted in Age verification, Child abuse images, Default settings, E-commerce, Internet governance, Pornography, Privacy, Regulation, Self-regulation, Uncategorized