Value-free technology? I don’t think so

The idea that technology is, or can be, value-free has always struck me as absurd. Whoever invents a particular application, piece of equipment or platform has certain objectives in mind, and these, in turn, must have been shaped by their personal attitudes or beliefs or their business aims, often both.

A classic and very un-Olympian example of the latter variety was presented to me several years ago when I attended an IETF workshop. The participants were concerned with developing the protocols that would allow browsers to collect and transmit geo-location data from connected devices. I pointed out that there would be a number of social consequences attaching to such a development, both good and potentially bad, but almost to a man (repeat, man) the assembled technicians declared they had been sent to the workshop by their employers (mainly big technology companies) to reach an agreement, not to debate social policy. They didn’t quite say “we’re only following orders”, but it was close.

A recent article in New Scientist (“Digital Discrimination”, 30th July – behind a paywall) shows what can happen when supposedly neutral technologies are allowed to do their thing.

Take the case of Gregory Seldon, a 25-year-old black man living in the USA. He wanted to make a trip to Philadelphia. Using Airbnb he spotted a particular place and tried to book it, but was informed it had already gone. Seldon carried on looking and saw that the same location was, in fact, still being advertised for the same dates. Suspicious, he created several new profiles which indicated the applicant was a white man. Every one of them was told the apartment was available.

Seldon tweeted about his experience on #airbnbwhileblack. The floodgates opened, with more or less identical accounts streaming in from all across the country. It also emerged that three academics at Harvard (Edelman, Luca and Svirsky) had found that people with names primarily associated with African Americans, e.g. Tanisha and Tyrone, were 16% less likely to be accepted as guests on Airbnb than people with names like Brad and Kirsten.

The good news is Airbnb accepts it has a problem and is actively seeking a solution, but there seems little doubt this goes a lot wider and deeper than social media platforms.

Anupam Chander, a Professor of Law at UC Davis, believes discrimination can be “baked into” the data that form the basis of algorithms; thus technology could become a “mechanism for discrimination to amplify and spread like a virus”.

Stands to reason, I suppose. Typically, algorithms are based on observed patterns of pre-existing behaviour. If that behaviour has a racist (or other) bias then, absent any countervailing measures, the algorithm will simply replicate it and thereby, at the very least, sustain it in the future. That would be bad enough, but the network effect is likely to give new legs and a new scale to the phenomenon, making it worse. In such circumstances it is just not acceptable to say (something like) “it’s not our fault society is riddled with racism or sexism… all we are doing is devising systems which (hand-wringing) racists and sexists are unfortunately using”. The logic of this argument is that society needs to deal with racism and sexism while technologists are merely hapless, helpless victims of sad circumstance. Baloney is the least offensive word I can come up with to describe what I think of that argument.
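
To make the mechanism concrete, here is a deliberately tiny sketch, in Python, of how a system that “learns” from biased historical decisions goes on to reproduce them. The data, group labels and numbers are entirely invented for illustration; nothing here describes any real platform’s code.

```python
# A toy illustration of bias replication: "train" on biased historical
# host decisions, then watch the model reproduce the same pattern.
# All data is invented.
historical_decisions = [
    # (guest_group, host_accepted)
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def train(decisions):
    """Learn an acceptance rate per group from past behaviour."""
    rates = {}
    for group in {g for g, _ in decisions}:
        outcomes = [accepted for g, accepted in decisions if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

def predict(rates, group, threshold=0.5):
    """Recommend acceptance only if the historical rate clears a threshold."""
    return rates[group] >= threshold

model = train(historical_decisions)
print(model)                      # e.g. {'group_a': 0.75, 'group_b': 0.25}
print(predict(model, "group_a"))  # True  - the old pattern is preserved
print(predict(model, "group_b"))  # False - and so is the old bias
```

Absent a countervailing measure (re-weighting the data, auditing outcomes by group), the “neutral” threshold simply carries the historical bias forward.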

In this connection I was pleasantly surprised to discover that (our old friend) the GDPR has specific provisions requiring technology companies to take steps to prevent discrimination based on personal characteristics such as race or religious beliefs. It also creates a “right to explanation”, enabling citizens to question the logic of an algorithmic decision. How easy it will be to enforce this provision is debatable, but it’s not a bad start.


Posted in Default settings, Internet governance, Regulation, Self-regulation

ICANN gets it wrong – again

In previous blogs I have lamented ICANN’s apparent lack of interest in the position of children and young people, despite their being such a substantial constituency of internet users in all parts of the world. Neither does ICANN appear to be seized of the significance of minors having distinct and important legal rights not enjoyed by or shared with adults.

It is no comfort to know that, within the constellation of bodies forming what might loosely be called the firmament of internet governance, ICANN is not alone in this regard.

New Top Level Domains

In January 2012 ICANN announced they were going to encourage the domain name system to expand and break free from the “tyranny” of .com, .net and .org. A thousand flowers would be allowed to bloom. An open invitation was issued to anyone and everyone to put forward ideas for new generic top level domains.

The problem is ICANN did not even consider whether this bold step might have implications for children and young people. Yet they knew, or ought to have known, that applications were going to be made for domains which would eventually result in websites being created that were specifically intended to appeal directly to children and young people, or would be likely to attract them in substantial numbers.

Anyone who was minded to submit a proposal for a new domain name was pointed towards an Applicant Guidebook. There is nothing in it which even remotely suggests ICANN accepts it has a duty of care to children. On the contrary, ICANN confirmed to me that all applications for all new domains would be treated identically. Proposals that were obviously child-related would be appraised in exactly the same way as applications that were obviously related to such things as pharmacies, banks, insurance and so on.

In England if a general administrative rule is applied inflexibly in such a way as to produce an unjust result for particular groups or classes of individuals the rule itself can often be challenged. I think the present facts constitute just such a case but, sadly, English jurisprudence is not relevant.

Here come the .kids

In the end three entities applied to establish a new domain that would be called “.kids”: Google, Amazon and the .kids Foundation.

Under ICANN’s rules, if two or more applicants are still standing at the end of the appraisal process the matter is resolved by an auction. In other words, money decides the outcome. Google and Amazon have plenty. The .kids Foundation doesn’t.

However, there is a provision within ICANN’s rules to allow for community-based applications and, if successful, these “trump” (pardon my use of that word just now) regular commercial applications. The .kids Foundation was accepted as being eligible to make a community-based application, but in the end the application itself was turned down because it was decided the Foundation had not met the criteria. This means an auction will follow and the .kids Foundation will lose.

Having looked at them, I am firmly of the view that the criteria the .kids application was judged against showed no understanding whatsoever of the particularities of children’s and young people’s engagement with the internet, or of children’s rights, but maybe that is a subject for another blog.

What interested me, though, was this: given ICANN’s historic lack of interest in children and young people, on what basis could they make any kind of determination on the .kids Foundation’s application anyway?

Enter the Economist Intelligence Unit

The short answer is: they didn’t. ICANN sub-contracted the adjudication process to the Economist Intelligence Unit (EIU).

Knowing what a smart bunch you are, I’m guessing your next question is going to be: so what expertise does the EIU have to adjudicate on a matter of this type? I mean, when you think about children’s rights, children’s organizations, children online or children anything, the EIU is not going to be the first outfit to pop into your well-informed mind. They might be the last, but they definitely won’t be the first.

The basis on which ICANN contracted with the EIU to carry out the evaluation process is set out in commendable detail. Here is an extract from an EIU document.

The evaluation process respects the principles of fairness, transparency, avoidance of potential conflicts of interest…

Emboldened by this assurance I asked the EIU the following question:

Are you able to say if any of the EIU staff who assisted with the decision-making process (in the matter of .kids) had any background in matters connected with children’s and young people’s use of the internet or in working with children’s organizations?

The answer was

We generally don’t respond to this type of question. Please contact ICANN.

I also asked if Google or Amazon were customers of the EIU (see above).

Here the answer was

We are not allowed to discuss our clientele.

Answer came there none

I followed the EIU’s advice and contacted ICANN. After a bit of to-ing and fro-ing over a couple of weeks, answer came there none. Hence this blog.

EIU and ICANN are both at fault

So here’s where I’m at right now: ICANN behaved according to form – no excuse, but no surprise either – but the EIU should have declined to adjudicate in the matter of .kids, either because the criteria they were being asked to use were palpably inappropriate and needed changing, or because they (the EIU) lacked the expertise, or both.

Mind you, without the expertise in the first place, how could the EIU have known the criteria they were being asked to use were rubbish? In that case the EIU should simply have refused to enter the space at all, saying it was outwith their operational competence. Alternatively, ICANN should have satisfied itself that the EIU had the necessary expertise before entrusting it with the task, and if it was not satisfied, it should have insisted the EIU acquire it. My guess is both organizations are equally and grievously at fault.

We are not the only ones with a beef

Having read a recent article in The Register, I learn I am not alone in criticising the way both ICANN and the EIU have handled community applications. According to the recently departed ICANN Ombudsman, the application for .gay has been similarly banjaxed. Interestingly, the Ombudsman also says the ICANN Board is not obliged to follow the advice proffered by the EIU. I can imagine why the Board might be nervous about departing from the EIU’s advice, but if the whole process was messed up to begin with, that should be the least of their concerns. They ought to be courageous and focus on making the right decision.

A postscript: doubtless there are a great many already established websites in pre-existing domains which appeal directly to children and young people. Perhaps there are questions to be asked about them also, but that ought not to deflect or detract from raising concerns about the ownership, management and operation of a whole domain with such an exclusive, predominant or overwhelming focus on minors.


Posted in E-commerce, ICANN, Internet governance, Regulation, Self-regulation

More on the GDPR – a draft

This is my first stab at trying to identify all the key headings within the GDPR which are likely to have a significant impact on children. If I have missed anything important or got something wrong please let me know and I will publish a revised version. Think of this as my first draft.

Having hunted around online, I can tell you lots of lawyers have produced general briefings on the GDPR, but nobody seems to have prepared one on all the children’s and young people’s angles, much less brought them together in a single document.

The overarching principles

Here is the key part of Recital 38

Children merit specific protection with regard to their personal data, as they may be less aware of the risks, consequences and safeguards concerned and their rights in relation to the processing of personal data. Such specific protection should, in particular, apply to the use of personal data of children for the purposes of marketing or creating personality or user profiles and the collection of personal data with regard to children when using services offered directly to a child…

Related to this is Article 5(1)(a), which specifies that all data processing shall be done… transparently and fairly. If the data subject is a child, this is a highly relevant consideration in determining whether a particular course of action is transparent or fair, and if it is not both of those things it is not lawful.

Article 6 gives more detail about the conditions which need to be met for data processing to be considered lawful. 6(1)(f) reads as follows

… (the processing must be necessary for business reasons… except where such reasons…) are overridden by the interests or fundamental rights and freedoms of the data subject which require protection of personal data, in particular where the data subject is a child. (emphasis added)

Article 35 refers specifically to new technologies and mandates risk assessments. Under this heading, age verification may become compulsory in certain circumstances.

Article 36 is interesting. Section 1 says

The (business) shall consult the supervisory authority prior to processing where a data protection impact assessment under Article 35 indicates that the processing would result in a high risk in the absence of measures taken by the controller to mitigate the risk.

I imagine here is where the supervisory authority might suggest or require that age verification be adopted.

Definition of a child

There is no definition of a child within the GDPR. However, since every Member State is a signatory to the UN Convention on the Rights of the Child, 18 must be the relevant standard. This has a number of potentially interesting consequences: everywhere in the GDPR where the words “child” or “children” appear (25 and 8 times respectively) we must assume the reference is to all persons aged 17 and below. However, because the GDPR is also peppered with references to risk assessments and proportionality – context is everything – this clearly does not mean the GDPR anticipates that everyone under the age of 18 needs to be treated identically.
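
Incidentally, counts like these are easy to check for yourself. A few lines of Python will do it, assuming the official English text of the GDPR has been saved locally (the filename below is illustrative):

```python
import re

# Count whole-word occurrences of "child" and "children" in the GDPR,
# assuming the English text has been saved as "gdpr.txt" (illustrative name).
text = open("gdpr.txt", encoding="utf-8").read().lower()

for word in ("child", "children"):
    # \b word boundaries stop "child" from also matching "children"
    print(word, len(re.findall(rf"\b{word}\b", text)))
```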

The age of consent for data purposes

Article 8 allows Member States to make an explicit decision about the lowest age at which a child may decide for themselves whether or not to hand over personal data to an online service provider, without the provider having to obtain the consent of the child’s parents. The lowest age a Member State may choose is 13 (the current de facto status quo in most EU Member States) and the highest available option is 16. If a Member State makes no decision, the age automatically defaults to 16 in May 2018.
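
For clarity’s sake, the rule can be expressed as code. This is only a sketch of the logic just described, not anything official:

```python
from typing import Optional

def data_consent_age(member_state_election: Optional[int]) -> int:
    """The age of consent for data purposes under Article 8, as described
    above: a Member State may choose any age from 13 to 16; if it makes
    no election, the default of 16 applies automatically from May 2018."""
    if member_state_election is None:
        return 16  # the automatic default
    if not 13 <= member_state_election <= 16:
        raise ValueError("Article 8 permits only ages between 13 and 16")
    return member_state_election

def parental_consent_required(user_age: int,
                              member_state_election: Optional[int]) -> bool:
    # Below the applicable age, the provider must obtain parental consent.
    return user_age < data_consent_age(member_state_election)

# e.g. a 14-year-old in a country that made no election:
print(parental_consent_required(14, None))  # True
```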

Article 8 does not qualify, modify or change the definition of a child. It only sets a parameter which affects an online provider’s obligation to obtain parental consent. Thus, if a country opts for 15 it is not saying that, for privacy or any other purposes, 15 is the age at which someone may be judged to be an adult or to have the same capacities as an adult. That said, it would be odd if designating, say, under-16s, 15s, 14s or 13s for special treatment within a jurisdiction did not imply that they should also receive extra, special or at any rate different attention. I can see that here there could be important tensions with a young person’s legal rights.

Thus, in effect, we have two classes of children: those below the age of consent for data purposes, and those above it who are nevertheless still below the age of majority (18). I guess that has always been the case, but introducing a spread of ages, including one as high as 16, does rather highlight the matter.

Recital 38 makes an exception with regard to online preventative or counselling services offered directly to a child. In such cases the provider is not obliged to seek the consent of the young person’s parents.

Accessibility and transparency

Recital 58 reads as follows:

The principle of transparency requires that any information addressed to the public or to the data subject be concise, easily accessible and easy to understand, and that clear and plain language and, additionally, where appropriate, visualisation be used…

and…

Given that children merit specific protection, any information and communication, where processing is addressed to a child, should be in such a clear and plain language that the child can easily understand.

Article 12 gives effect to Recital 58

The controller shall take appropriate measures to provide any information… to the data subject in a concise, transparent, intelligible and easily accessible form, using clear and plain language, in particular for any information addressed specifically to a child. (emphasis added)

Expect to see icons and pictograms being deployed on a much larger scale, especially but not solely in areas known to be frequented by children.

A new kid on the block

The Article 29 Working Party and the national data protection authorities have not, as a group, been conspicuously involved in the campaign to make the online space a safer and better environment for children. That is about to change.

Article 57 reads as follows

  1. Without prejudice to other tasks set out under this Regulation, each supervisory authority shall on its territory:

(a)    monitor and enforce the application of this Regulation;

(b)    promote public awareness and understanding of the risks, rules, safeguards and rights in relation to processing. Activities addressed specifically to children shall receive specific attention. (emphasis added)

Will national Data Protection Authorities start to attend meetings of the Safer Internet Centres? How will that link work? In the UK will the Information Commissioner’s Office join the Board of UKCCIS? How will it relate to Internet Matters?

Codes of conduct

This is going to be an extremely important area of activity.

Article 40 reads as follows

  1. The Member States, the supervisory authorities, the (European Data Protection) Board and the Commission shall encourage the drawing up of codes of conduct intended to contribute to the proper application of this Regulation, taking account of the specific features of the various processing sectors and the specific needs of micro, small and medium-sized enterprises.
  2. Associations and other bodies representing categories of controllers or processors may prepare codes of conduct, or amend or extend such codes, for the purpose of specifying the application of this Regulation, such as with regard to:…

(g)   the information provided to, and the protection of, children, and the manner in which the consent of the holders of parental responsibility over children is to be obtained… (emphasis added)

Consistency and implementation

Article 57(1)(g) imposes an obligation on national data protection authorities to

cooperate with, including sharing information, and provide mutual assistance to, other supervisory authorities with a view to ensuring the consistency of application and enforcement of this Regulation.

Article 41 refers to arrangements for monitoring approved codes of conduct and Article 97 makes clear the Commission has an obligation to keep the operation of the Regulation under review and to report accordingly to the European Parliament and the Council of Ministers.

The right to be forgotten

This makes its appearance in Recital 65 and in Article 17 (where it is also described as the “right to erasure”).

In the Recital it says

(the right of erasure)… is relevant in particular where the data subject has given his or her consent as a child and is not fully aware of the risks involved by the processing, and later wants to remove such personal data, especially on the internet. The data subject should be able to exercise that right notwithstanding the fact that he or she is no longer a child. (emphasis added)

Profiling

In Recital 24 profiling is alluded to as a form of data processing which allows the data controller (typically an online business) …to take decisions concerning… or for analysing or predicting… personal preferences, behaviours and attitudes.

I guess that includes making decisions about what sort of advertisements to serve up to someone, but it could go a lot further than that. However, as we saw in the earlier reference to Recital 38

... specific protection should, in particular, apply to the use of personal data of children for the purposes of marketing or creating personality or user profiles and the collection of personal data with regard to children when using services offered directly to a child. (emphasis added)

And Recital 71 makes it clear children should not be the subject of profiling. How that will work out in practice remains to be seen.

Posted in Age verification, Default settings, Internet governance, Regulation, Self-regulation

Life is full of surprises

This blog could be a modern parable or warning that if you get fixated on one big thing you can easily miss other important stuff that is happening at the same time. It is also a timely reminder of how massively under-resourced the children’s lobby is when trying to keep on top of some of the major issues coming down the online track.

I speak, of course, about the GDPR. It is very obviously going to preoccupy many of us over the coming period – even those living in Brexitland.

Think about it. If the EU can compel the USA to agree to the new Privacy Shield – and the USA went along with it because it wants its online businesses to continue being able to access European markets – does anyone seriously think Britain is going to develop its own, significantly different set of privacy rules for the internet? It’s absurd. One way or another the UK is going to have to work with the GDPR. And you will hear no complaints from me on that score. At least not yet.

A conversation in Brussels

Earlier this week I was in Brussels for two days, surrounded by privacy experts from the Commission, national Data Protection Authorities, the new European Data Protection Supervisor, the Article 29 Working Party, civil society and industry. This was a hard-core gathering.

I will now relay the gist of a key conversation I had with one of the most senior people. It mirrored another discussion I had with a different official the previous week, so I am now confident this is not mere speculation or wishful thinking, either on my part or on the part of the officials.

No express obligation to carry out age verification

I referred to the fact the GDPR does not expressly require any online service provider to verify the age of anyone joining or using their service (although there are rules about obtaining parental consent where the service is aimed at children).

As we know, each Member State can set its own minimum age at which a young person is considered competent to hand over personal data to providers of online services such as Facebook without the provider having to obtain parental consent. Member States can choose between 13 and 16, but if they do nothing, in May 2018 the default age will automatically become 16.

The widespread assumption is that, as with the present arrangements, the big internet companies will just say that, in any given jurisdiction, no-one below the minimum age can be a member or use their site or service. That simplifies everything.

However, absent any obligation to verify anyone or anything, particularly in countries where 16 is the lower age limit, the fear is that all we will see on social media platforms is even larger numbers of young people misrepresenting their age so they can hang out with the older, cool kids.

A very unwelcome scenario

I then went on to say that if, for example, in a given country the age limit for data was the same as or lower than the age of consent for sex, on the face of it anybody using a relevant service would have a case to argue they had reasonable grounds for believing the person they were engaging with on or through that service was old enough to talk about sex or meet up for sex. The crime of grooming would essentially wither on the vine, even though, as previously noted, there could still be millions of young people on the site who were in fact below the legal age.

At this point the official raised an eyebrow and said

That’s a bit of a leap. Look at what it says about impact assessments.

Here is the text of Recital 84

…where processing operations are likely to result in a high risk to the rights and freedoms of natural persons, the controller should be responsible for the carrying-out of a data protection impact assessment to evaluate, in particular, the origin, nature, particularity and severity of that risk. The outcome of the assessment should be taken into account when determining the appropriate measures to be taken in order to demonstrate that the processing of personal data complies with this Regulation. Where a data-protection impact assessment indicates that processing operations involve a high risk which the controller cannot mitigate by appropriate measures in terms of available technology and costs of implementation, a consultation of the supervisory authority should take place prior to the processing.

Article 35 gives effect to Recital 84.

To be perfectly honest, I can see scope for arguing about how extensive the obligations are under Article 35. The Article speaks of the assessment being made in relation to the

impact… on the protection of personal data.

In other words, not in relation to matters that are unconnected with the protection of personal data, e.g. the possibility of a minor being sexually abused or otherwise exposed to an age-inappropriate environment.

Could such a literal reading lead to such a ridiculous outcome? Perhaps. Does that mean it definitely won’t happen? No.

The officials were clear

Even so, the officials certainly saw the Recital and the Article as laying down a foundation for mandatory age verification in respect of certain classes of online services. We did not discuss any particular sites or services, but the major social media platforms were not far from my contemplation.

We shall see. I am not sure the officials fully grasped the revolutionary nature of what they were saying, and therefore they might be unaware of the potential scale of the push-back which, even now, is likely to be gathering force.


Posted in Age verification, Default settings, E-commerce, Facebook, Google, Internet governance, Regulation, Self-regulation

Evidence to the House of Lords Communications Committee

On 19th July I appeared before the House of Lords Communications Committee, which is holding an Enquiry into Children and the Internet. On the day I was largely responding to oral questions.

Will Gardner from Childnet was alongside me. Will tended to lead on the (many) questions about what was happening in schools, particularly around bullying. I majored on other issues.

There is little point in having a Parliamentary Enquiry simply to recite or celebrate how well the UK is doing in the online child protection space. Make no mistake, I remain firmly of the view that the UK is a world leader in this area. This was therefore a (rare) opportunity to highlight where improvements or changes are still needed.

What follows is a summary of what I said, plus one new point (compensation for victims of child abuse images) which I added when I later sent in a written version. That’s allowed!

Digital Economy Bill 2016-17

The Digital Economy Bill is directed at larger commercial pornography sites, almost all of which are domiciled outside the UK and therefore, for practical purposes, beyond the reach of UK courts and law enforcement. While nominally these sites are “free”, in that they do not charge to look at the bulk of their wares, they are nevertheless highly commercial in nature, collecting their income in other ways, e.g. through direct sales and advertising.

The Bill is most welcome but it has a fatal flaw. It requires the sites to introduce age verification to prevent persons under the age of 18 from being able to look at their content. The assumption is that the credit card companies will threaten to withdraw payments facilities, and the advertisers will threaten to withdraw advertising, from non-compliant sites (which would be operating illegally), and that this will be a sufficient incentive for most porn publishers either to comply or to cease publishing into the UK. This is a reasonable assumption. The Bill will also create a Regulator with the power to compile a list of non-compliant sites. This list will be circulated to interested parties, e.g. credit card companies and advertising agencies, but neither is obliged to act, although, as already noted, it is anticipated most will. However, if a commercial pornography site uses no UK-based payments facilities and receives no advertising from UK sources, or changes its business model to arrange things that way, it could continue to operate with impunity.

Thus, for persistently non-compliant sites, the Bill should give the Regulator a residual power to require access to be blocked, in a manner similar to that which, de facto, already exists for child abuse images.

Big social media platforms are like public utilities

We need to start thinking about the major social media platforms in the same way as we do public utilities. Certainly in respect of children and young people, the platforms’ dominance in some areas means children and young people may feel they have little choice but to join and be part of the social milieu to which all, or the great majority, of their friends belong. It is unacceptable that there is no way for the public or parents to be reassured about the efficacy and appropriateness of these businesses’ internal systems for dealing with complaints from, or issues raised by, children. An independent regulator (perhaps Ofcom) should have the legal power to compel at least the larger platforms to open their books and allow independent inspection and verification of their public-facing processes, to ensure they are working satisfactorily.

Filtering in the UK

The UK’s system for providing filters to customers of the UK’s “Big Four” domestic broadband providers is excellent, but there appear to be significant variations in the levels of take-up between the different ISPs. At first sight this seems strange, because the demographics of their customer bases do not look wildly different. In any event, the claims the ISPs make about levels of take-up have not been independently verified. When the last (and so far only) checking exercise was carried out, Ofcom merely asked the ISPs to report their take-up levels. Ofcom sought neither to verify the claims the ISPs made nor to explain the reasons for any differences. This is not satisfactory. Moreover, the current voluntary system for providing filters only extends to the customer base of the “Big Four”, which seems to reach only 90% of households. Children in the other 10% deserve the same level of protection.

The system of filtering for mobile networks appears to be working satisfactorily but it has never been thoroughly inspected and verified by an independent agency.

Ditto in relation to “Friendly WiFi”, i.e. the system whereby providers of internet access via WiFi in public spaces take steps to limit access to adult content and illegal materials. A key question here would be to determine how extensively it is operating, and perhaps also to identify any major enterprises or concerns that have not adopted “Friendly WiFi”.

Age limits

There has never been a proper, independent evaluation of the optimal age limits for using social media platforms. The single lower age limit of 13 is the product of a US Federal law which was passed in the 20th century, before social media platforms existed. With one or two exceptions, e.g. Spain, the rest of the world acquiesced rather than sought to examine critically the appropriateness of that age standard. Perhaps we need more than one age level, depending on the nature of the platform and the type of activity in question. In addition, the absence of any obligation to verify the age of customers is leading to a huge level of non-compliance. This is not satisfactory.

Compensation for victims portrayed in child abuse images

A new law is required to allow victims of child sex abuse to claim compensation from persons found in possession of images of that abuse. The USA already has a law specifically designed for this purpose. Aside from assisting with victim recovery, it could also act as a major deterrent to a certain class of person who collects these images. The MoJ is currently considering this idea.

An unambiguous duty of care

We ought to establish that the providers or suppliers of digital services have an unambiguous legal duty of care to consider the online child safety aspects of any and every service before it is released. One of Facebook’s founding ideas was “Move fast and break things”, otherwise expressed as “it is easier to apologise after the event than to seek permission before it”. It is understood that Facebook has now formally renounced this, yet it remains a dominant idea across the whole of the internet industry.

Internet governance

There are several notable weaknesses in internet governance institutions and processes: one is their failure to take proper account of the fact that children and young people are a very substantial constituency of users and that they have rights under international law which are routinely ignored. ICANN in particular has been woeful in several key regards. HMG has an important leadership role in this area.

Helping parents

Finding ways to help parents to help their children get the most out of the internet while remaining safe is a major and urgent societal challenge. We cannot blithely assume it is a problem which will solve itself with the passage of time. In this context schools have an important role to play, but if we see them as the sole or principal route to parents we will fail, because too many schools continue to be seen by too many parents as unwelcoming places. A public health approach may therefore be worth considering as an additional or complementary strategy. What we are talking about, in essence, are the skills needed for 21st century parenting. That repertoire of skills must now include a knowledge of how the internet fits into young people’s lives and how best to support children and young people in their use of the technology.


Posted in Age verification, Child abuse images, Consent, Default settings, E-commerce, Facebook, Internet governance, Pornography, Regulation, Self-regulation

It seems like only yesterday

It seems like only yesterday… hang on a minute, it was only yesterday (well, two days ago, but I claim poetic licence) that I was predicting that once McDonald’s had agreed to introduce filtering (of adult pornography and child abuse images) into their system for providing free WiFi access on their premises in the USA, Starbucks would soon follow.

It wasn’t a fix – at least not on my part – but late last night another press release hit my Inbox telling me that Starbucks are looking into doing just that. Well done to Enough is Enough and their energetic CEO, Donna Rice Hughes.

Again, Donna was kind enough to acknowledge that she took inspiration for her campaign from what had first been done in the UK. However, negotiating the shoals of US politics on an issue which many (misguidedly) see as being connected to free speech rights requires substantial political skills, and clearly Donna has them.

In David Cameron’s valedictory speech in the House of Commons the other day, he said

In politics you can achieve a lot of things… nothing is impossible if you put your mind to it.

He might have added

But first you have to believe in something and want to do it.

I can hardly think of a single worthwhile reform I have been involved with in the internet space where, at the beginning, powerful forces did not argue it couldn’t or shouldn’t be done. It was either impossible, too expensive or it threatened others’ irreducible vital interests.

Well there you go. Cheers to Donna Rice Hughes.

Posted in Default settings, Pornography, Regulation, Self-regulation

Good news from across the pond

From little acorns… the march of a thousand miles starts with a single step… and so on.

In what is otherwise a fairly miserable period in terms of public policy developments, at least if you live in the UK, word reaches me that, following a campaign started in the USA in the autumn of 2014, McDonald’s have agreed to introduce filters to restrict access to legal pornography and child abuse images in their outlets in the USA.

The campaign was headed by Enough is Enough, whose CEO was good enough to acknowledge in her announcement that she had been emboldened to begin it by the success we had had in the UK.

Their next target in the US is Starbucks. I have a hunch they are going to win.


Posted in Child abuse images, Pornography, Regulation, Self-regulation