My research agenda

I expect we all have ideas buzzing around our heads about unanswered questions which, if we had the time and the money, we would like to see resolved. Below is my current list. Some are probably not research proposals as such but are more by way of suggestions about questions needing scholarly attention and which politicians need to consider.

If you want to add anything under either heading let me know. I might try to keep this particular blog as a sort of running commentary on our (or do I mean “my”?) lack of key bits of knowledge or understanding.

If some topics seem strangely absent this could either be because I have overlooked them (inexcusable) or because I believe significant work is already going on in that area. In the latter category I would put obtaining a better insight into how the possession of different types of child abuse material could act as an accurate predictor of someone’s likely future behaviour in terms of causing further harm to children.

The operation of social networks

  • In Europe and perhaps in other parts of the world many apparently successful social network sites or other online services operate either with no minimum age requirement or with minimum ages which differ from the widely used “Rule of 13” which US-based companies are obliged to follow.
  • Across a range of headings what is the concrete experience of children and legal minors generally within these systems? Are there any significant differences? Or aren’t there? What do these differences or the lack of them tell us about age based rules?
  • Why is it that Facebook can sweep to the No. 1 slot in some if not most countries it operates in, and stay there, but in other countries it doesn’t achieve the same prominence? Is each case unique or are there common factors?
  • Is there any good reason for supposing, or better still any evidence to support the view that a young person of 13 has exactly the same intellectual, emotional and experiential capacity to decide a broad range of privacy related issues as a young person of 17 e.g. in terms of what information they publish online about themselves, or publish about their friends or their family and in terms of their ability to decide what pictures or videos to post of same, who to accept as friends, how to use location based services and so on?
  • Isn’t it more likely that between the ages of 13 and 17 a young person does a lot of growing up and therefore during this period their ability to make well-informed decisions about a wide range of matters changes and improves? Doesn’t this argue for more granularity in the way privacy settings and permissions are established and the ages at which they kick in? Alternatively if the answer from the earlier question suggests age rules make no difference to anything why preserve them at all?
  • Let’s assume a site operates a rule which says “When you reach the age of 13 you don’t need your parents’ consent to have an account with us”. How exactly does that help stimulate a healthy and constructive discussion between parent and child about the child’s use of the site? In how many other parts of 13 year olds’ lives does a similar situation arise?
  • Isn’t the site, in the end, saying to parents “That’s your problem not ours. Yes, we know we created the problem in the first place by providing the service but that’s the business we’re in”? Doesn’t the site thereby acknowledge that it is setting up the potential for conflict from the get-go? Or is this tension wholly illusory, something which either never happens at all or happens only very rarely?
  • Let’s stick with that same site and say no one at it will talk to a parent about their child’s use of their account, even where the child gets into difficulties. Neither will this site allow parents to act as an interlocutor for the child, much less will they take an instruction from the parent e.g. to close down or limit the operation of the account in the child’s best interests. How does that help foster constructive family engagement? Or does the site say once more “Not our problem. Nothing to do with us.”

E-commerce, advertising and privacy

  • Starting with EU countries what are the rules about the age at which a commercial company or other entity operating over the internet may, without first obtaining parental consent, canvass or accept personally identifiable information from kids? Are the rules different for real world environments?
  • Starting with EU countries what are the rules about advertising products and services over the internet on web sites directed at children or likely to attract large numbers of legal minors? Are the rules different for real world environments?
  • Starting with EU countries what are the rules about the age at which legal minors might buy different products or services? Are the rules the same for the internet and the offline world?
  • Is it not reasonable to impose limits of some kind on the amount that can be spent on in-app purchases where the app is plainly directed at children, and especially where the app itself is “free”?

Filtering

  • If filtering is so rubbish and ineffective why does every University in the UK, every Government Department, local authority, public utility and larger employer spend so much money on installing and maintaining it? Is this another gigantic con, comparable to the Y2K farce? Or is it that some people only dislike filtering software selectively?
  • Computer x has good filtering software on it and computer y has none at all. Two children of broadly similar levels of maturity and knowledge of the internet use these different computers. How can it be that the likelihood of either child coming into contact with material or situations which cause them upset or harm is apparently identical? If both children were equally averse to risk-taking or equally disinclined to look for dodgy stuff I guess that would explain it, but otherwise what seems to be suggested is that filtering software never does anything at all. That’s not true.
  • We know what filtering companies tell us about the sales of their products. We know what ISPs tell us about the number of times their filtering software has been downloaded. We know what parents tell us they do with filtering software in their home. We also know what children tell us their parents do with filtering software. There is often a significant difference between the two accounts. We know what parents tell us they do in terms of mediating and setting rules governing the use of the internet generally in the home. Again we know what children tell us their parents do in terms of setting such rules and once more there is often a significant difference between the two accounts rendered.
  • The only thing we don’t know is what actually happens on a day to day basis. To discover the unvarnished truth would require a substantial sum of money to be put aside to finance a monitoring project, and there would have to be a good few homes included to allow for an adequate sample. Doubtless there would also be some knotty ethical hurdles which would need to be carefully negotiated.
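On the question of how many homes such a monitoring project would need, a standard back-of-the-envelope check is the sample-size formula for estimating a proportion (e.g. the proportion of households where filtering is actually switched on). This is purely an illustrative sketch, not part of any actual study design; the default figures (95% confidence, a ±5 percentage-point margin, worst-case proportion of 0.5) are conventional assumptions of mine, not numbers from the text.

```python
import math

def sample_size(p: float = 0.5, z: float = 1.96, margin: float = 0.05) -> int:
    """Minimum simple-random-sample size needed to estimate a population
    proportion p to within +/- margin, at the confidence level implied by
    the z-score (1.96 corresponds to 95% confidence)."""
    n = (z ** 2) * p * (1 - p) / margin ** 2
    return math.ceil(n)

# Worst-case proportion (p = 0.5), 95% confidence, +/-5 percentage points:
print(sample_size())             # 385 households
# Tightening the margin to +/-3 points roughly triples the requirement:
print(sample_size(margin=0.03))  # 1068 households
```

In practice the number would need adjusting upwards for clustering, attrition and the "knotty ethical hurdles" mentioned above, but even this crude calculation shows "a good few homes" means hundreds, not dozens.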

Age verification

  • All of the UK mobile phone companies do age verification but they each use a different method. How effective are these methods?
  • In the real world there is almost universal acceptance of the notion that we, even when we are adults, may be required to prove our age before we can go into certain places or buy particular products and services. Can we reach an agreement about the extent to which it is reasonable to try to replicate or accommodate in the online world similar child protection policies and practices?
  • To put that slightly differently, whereas it is accepted there will be some classes of web sites or online services where the necessity to prove your age or identity should never arise, can we not agree that there are some where it ought to be standard practice?
  • How can it be that so many household name companies know it is illegal to sell different products and services to persons under the age of 18 yet they nonetheless continue to sell them online without having in place any robust systems for determining whether or not the person they are selling them to meets that age standard?

The need for a new jurisprudence

  • Laws and concepts which were developed in an analogue world may have little or no relevance in the online world or they may cause disproportionate difficulties. How can we develop a jurisprudence and legal processes which can keep up with the times? Alternatively would someone be willing to say how many extra courtrooms we should build, how many extra judges, legal clerks, lawyers and paralegals we need to recruit and how many extra prisons we need to build and provide with staff in order to stick with the old ways, developed when the world was a very different place?
  • Can we develop a consensus around the notion that collectively we have a responsibility to do whatever we can to minimize the risks to vulnerable children or children from families at the margins of society? Or do we resolutely say “Sorry Chief. Not my problem. This is somebody else’s. I insist there should be no fetters of any kind on my freedom to do whatever I please when I go online, nothing at all to prevent me being able to do it just as speedily as my internet connection will allow. Not even a nano-second of avoidable delay is acceptable under any and all circumstances.”
  • Can we systematically go through the laws of contract, tort, privacy, data protection, consumer protection, the sale of goods, advertising, jurisdiction and the operation of company law to identify points of friction between real world activities and online activities in so far as they affect minors? Which are the parts of one which cause problems with the other or, by contrast, are there aspects of pre-internet or traditional laws which might have been overlooked or not thought about in the context of the new realities of cyberspace and children’s and young people’s use of it?
  • Around the world just how many civil cases have been started against internet companies where the principal subject matter being disputed was liability for something that happened to a legal minor when online? How were these cases settled or what was the judgement?

Regulation

  • In a rapidly converging world what is the justification for maintaining separate regulatory regimes for mobile phone companies and businesses which operate solely or very largely over the internet yet provide many services which are identical or nearly identical to those provided by the mobile networks?
  • Is there a case for seeking to establish an EU-wide equivalent of the USA’s Federal Trade Commission? At the moment the EU’s powers to intervene vis-à-vis internet companies seem to be limited to cases where there is an alleged abuse of monopoly power. That is important but it is also too narrow a focus.
  • The sort of body I have in mind would be able to take on the internet giants on more equal terms and bring together several disciplines which are currently managed by different institutions or by weaker national enforcement agencies.
  • As an ever-increasing amount of money is spent on buying goods and services online, do we not need a strong national, or perhaps international, system of trading standards enforcement and consumer protection which has a specific focus on the internet? Could it be embraced by the same body referred to above? It goes without saying such an organization should have specific expertise in relation to matters affecting the online sale of age restricted goods and services.
  • In relation to sites directed towards or likely to attract large numbers of children a new institution of the kind being suggested could establish a framework of rules, or a set of minimum standards which perhaps in themselves need not be legally binding, governing data protection, privacy, advertising and other commercial practices for companies wishing to operate across more than one jurisdiction within the EU.
  • It could, for example, in the event of a dispute give a determination as to what constituted a children’s web site or rule that a particular site or service did indeed attract very large numbers of children and therefore had to adjust its operational policies accordingly to conform with standards which apply to a children’s web site or service. If a site wished to submit a proposal for a child safety package, or for an age verification system or a child-friendly content classification scheme could this body examine it, approve it or ask for it to be amended before awarding an official seal of approval of some kind?
  • Or is it the case that calling for institutional reform or for a major new institution to be created is simply a recipe for countless and probably interminable bureaucratic turf wars and, instead, we ought to focus on getting existing outfits to co-ordinate more closely or be more co-operative?
  • Is there a case for addressing online child safety through the route of hardware standards? These could be prescribed under the EU’s R&TTE Type Approval processes and draw on pre-existing standards work to which several agencies have contributed.

Vulnerability

  • There seems to be a consensus that children who are vulnerable for one reason or another in the offline world are also likely to be vulnerable in the online world. However, there is no consensus as to whether or to what extent the online environment itself acts as a magnifier or cause of different types of problematic behaviours.
  • At the moment the weight of academic opinion seems to point to the internet having a neutral effect in that regard but my suspicion is that, unless one takes a very wide definition of what constitutes prior “vulnerability”, for some children the internet will bring them to places or situations which do put them in peril in ways which have no real world equivalent or which could or would never materialise in an offline space.

  • As David Finkelhor has written:

There may be people who conclude…that all talk about Internet danger is an exaggeration. But this is…wrong. There are dangers on the Internet. We need to understand them, prevent them and eliminate them. We need active police presence online, hotlines, prevention programs, and pressure on ISPs and social networking sites to minimize risks. We only need to know that there are dangers in order to warrant this. We do not have to argue that the Internet is especially dangerous, any more than we have to argue that our local town is especially dangerous in order to justify law enforcement and crime prevention activities there.

  • Finkelhor goes on:

We need good epidemiology and other research to be able to assess claims about the impact…on children. Not only would better epidemiology allow us to quell anxieties more quickly and convincingly, good research would also point out what is almost certainly the inevitable conclusion. Change has different effects on different subgroups. So some may benefit and some may suffer (Willoughby, 2008). Assessing these complex effects should be the ultimate goal.

  • I have no argument with that conclusion. Someone needs to get on with the assessing.

About John Carr

John Carr is a member of the Executive Board of the UK Council on Child Internet Safety, the British Government's principal advisory body for online safety and security for children and young people. In the summer of 2013 he was appointed as an adviser to Bangkok-based ECPAT International. Amongst other things John is or has been a Senior Expert Adviser to the United Nations, ITU, the European Union, a member of the Executive Board of the European NGO Alliance for Child Safety Online, Secretary of the UK's Children's Charities' Coalition on Internet Safety. John has advised many of the world's largest internet companies on online child safety. In June, 2012, John was appointed a Visiting Senior Fellow at the London School of Economics and Political Science. More: http://johncarrcv.blogspot.com