  • Publication
    Internet Censorship in the United Kingdom: National Schemes and European Norms
    (Hart, 2018-11-29)
    The United Kingdom (UK) has been at the vanguard of online censorship in democracies from the beginning of the modern internet. Since the mid-1990s, the government has developed distinctive patterns of regulation – targeting intermediaries, using the bully pulpit to promote ‘voluntary’ self-regulation, and promoting automated censorship tools such as web blocking – which have been influential internationally but raise significant issues of legitimacy, transparency and accountability. This chapter examines this UK experience in light of the European Convention on Human Rights (ECHR) and EU law, arguing that in key regards current censorship practices fail to meet European standards. The chapter builds on the existing literature in two main ways. First, it assesses emerging censorship practices in the areas of terrorist material and extreme pornography. Second, it considers how recent EU legislation and ECtHR case law might constrain the freedom of the UK government and force a move towards different models of censorship. The chapter starts by outlining the regulatory context. It then takes three case studies – Child Abuse Material (CAM), terrorist material, and pornography/extreme pornography under the Digital Economy Act 2017 – and traces how censorship has evolved from one context to the next. These systems are then evaluated against the standards set by European law, in particular Articles 6 and 10 ECHR, the Open Internet Regulation, and the Directives on Sexual Abuse of Children and on Combating Terrorism. The chapter concludes by considering what lessons we can learn from the UK experience.
  • Publication
    Internet Filtering: Rhetoric, Legitimacy, Accountability and Responsibility
    (Hart Publishing, 2008-10)
    This paper argues that the automatic and opaque nature of internet filtering, together with the fact that it is generally implemented by intermediaries, raises new problems for the law and in particular may tend to undermine aspects of freedom of expression. The paper starts by challenging the rhetoric underlying the use of the term “filtering” and suggests that the use of other terms such as "blocking" or "censorware" may be more appropriate. It then considers where filtering fits into the modalities of governance and the resulting issues of legitimacy and accountability. As regards legitimacy it argues that the use of technology to exert control over internet speech frequently undermines aspects of the rule of law concerning both the process for and content of norms governing behaviour. In relation to accountability, the paper argues that where it is not clear what is being blocked, why, or by whom, the operation of mechanisms of accountability - whether by way of judicial review, media scrutiny, or otherwise - is greatly reduced. Finally the paper suggests that, as compared with control through legal instruments, filtering may rob users of moral agency or responsibility in their use of the internet, with the implication that they may freely do whatever it is technically possible to do, with no necessity of moral engagement in their activities.
  • Publication
    Child Abuse Images and Cleanfeeds: Assessing Internet Blocking Systems
    (Edward Elgar, 2013)
    One of the most important trends in internet governance in recent years has been the growth of internet blocking as a policy tool, to the point where it is increasingly becoming a global norm. This is most obvious in states such as China where blocking is used to suppress political speech; however, in the last decade blocking has also become more common in democracies, usually as part of attempts to limit the availability of child abuse images. Numerous governments have therefore settled on blocking as their 'primary solution' towards preventing such images from being distributed (Villeneuve 2010). Child abuse image blocking has, however, been extremely controversial within the academic, civil liberties and technical communities, and this debate has recently taken on a wider public dimension. At the time of writing, for example, public pressure has forced the German Federal Government to abandon legislation which would have introduced a police-run system, while the European Parliament has also rejected Commission proposals for mandatory blocking (Baker 2011; Zuvela 2011). Why have these systems been so controversial? Two lines of criticism can be identified, which might be termed the practical and the principled. The practical argument claims that blocking is ineffective, with ill-defined goals, and easily evaded by widely available circumvention technologies (see e.g. Callanan et al. 2009). The principled argument, on the other hand, is that blocking systems undermine the norms associated with freedom of expression in democratic societies (Brown 2008). This latter argument stems from the fact that blocking sits at the intersection of three different regulatory trends – the use of technological solutions ('code as law'), a focus on intermediaries, and the use of self-regulation in preference to legislation – which individually, and all the more so collectively, create a risk of invisible and unaccountable 'censorship by proxy' (Kreimer 2006; McIntyre & Scott 2008).
This chapter introduces and evaluates these claims by examining three prominent examples of child abuse image blocking – the United Kingdom Internet Watch Foundation ('IWF') Child Abuse Image Content ('CAIC') list, the European Union sponsored CIRCAMP system, and United States hash value systems. It discusses the operation of each system and the extent to which the critics' concerns are borne out. It concludes by considering the lessons which might be learned for proposals to extend blocking to other types of content.