
A number of laws protect freedom of expression in England and Wales. It is, however, a qualified right that may be overridden in certain circumstances.

A significant number of pieces of criminal legislation can be applied to harassing or abusive online communications, ranging from the Communications Act 2003 to the Public Order Act 1986 and the Protection from Harassment Act 1997. The Law Commission has conducted an in-depth “scoping report” and found that the laws, with some limitations, cover online communications that are abusive, but that the various overlapping laws have led to uncertainty. This uncertainty, together with technological limitations within the police force, has led to underreporting and difficulties in successfully prosecuting offenders. The Law Commission has recommended that the laws be reviewed and consolidated to provide greater clarity and certainty.

The defamation law of England and Wales has recently been overhauled and now provides a specific process for individuals to request the removal of material that they believe is defamatory. The process uses website operators as intermediaries to facilitate the removal of this type of material.

I. Introduction

Intimidation and harassment of those in public life, including journalists, have increased significantly over the past decade, particularly for political and female journalists.[1] A government report notes that

[t]he rise of the internet and social media in recent decades has fundamentally reshaped the way we engage with each other and as a society. This radical shift has brought many benefits, but there are also associated risks and harms, and it has proved challenging for the law to keep pace with this rapidly changing environment.[2]

The widespread use of social media has been the most significant factor accelerating and enabling intimidatory behavior in recent years. Although social media helps to promote widespread access to ideas and engagement in debate, it also creates an intensely hostile online environment.[3]

The use of communications by people to abuse others is not a new phenomenon and, over the past 130 years, the laws have evolved to address abuse carried out through new means of communication.[4] The Post Office Protection Act 1884 made it an offense to send grossly offensive materials through the mail, and the Post Office Amendment Act 1935 prohibited the use of telephones to communicate indecent, obscene, or abusive messages. Half a century later, the Malicious Communications Act 1988 was enacted to address anonymous “poison pen letters.” Most laws that could be relevant to the misuse and abuse of online communications were drafted before the rapid growth in communications technology.[5] As technology continues to evolve, there has been significant debate over whether the law should be reformed.

While a significant number of journalists have faced online abuse and harassment, female journalists have been disproportionately affected.[6] An international survey of female journalists found that almost two-thirds had experienced abuse online, that half of these did not report the abuse, and that two-fifths admitted they had censored their work as a result of the abuse.[7] Concerns have been raised that this “represents a broader threat to the freedom of the press,”[8] and the government has said that the issue of intimidation must be addressed, stating: “[t]his abuse is unacceptable – it goes beyond free speech and free debate, dissuades good people from going into public life, and corrodes the values on which our democracy rests.”[9] The United Nations Human Rights Office of the High Commissioner has stated that abusive online communications directed at journalists, particularly female journalists, could have a widespread and long-term impact:

[F]ailure to legislate effectively against abusive online communications has a disproportionate economic impact on women, who feel unsafe on the internet and may disengage with the many opportunities it offers; . . . large-scale online abuse suffered by high-profile women may further erode the willingness of women to stand for elected public office, or to take up senior positions, reducing diversity in the workforce and public life for the next generation.[10] 


II. Freedom of Speech

The European Convention on Human Rights was incorporated into the national law of the United Kingdom by the Human Rights Act 1998.[11] Article 10 of the European Convention on Human Rights provides for freedom of expression and grants individuals the right to hold opinions and to receive and share ideas, without state interference. It specifically includes politics and matters of public interest:

Everyone has the right to freedom of expression. This right shall include freedom to hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers.[12]

Freedom of expression is a qualified right, which means that it may be restricted in certain circumstances provided it is prescribed by law and necessary in a democratic society to protect a legitimate aim. Article 10(2) specifies that

[t]he exercise of these freedoms, since it carries with it duties and responsibilities, may be subject to such formalities, conditions, restrictions or penalties as are prescribed by law and are necessary in a democratic society, in the interests of national security, territorial integrity or public safety, for the prevention of disorder or crime, for the protection of health or morals, for the protection of the reputation or rights of others, for preventing the disclosure of information received in confidence, and for maintaining the authority and impartiality of the judiciary.[13]

The European Court of Human Rights has noted that the right does not extend only to information that is “favorably received” but also covers “those that offend, shock or disturb the State or any sector of the population. Such are the demands of that pluralism, tolerance and broadmindedness without which there is no democratic society.”[14] The court has further determined that whether a restriction on freedom of expression is necessary “requires the existence of a pressing social need, and that the restrictions should be no more than is proportionate”[15] and that a legitimate aim extends to “the protection of the reputation or rights of others.”[16]

The Law Commission notes that it is “difficult to anticipate where the boundaries lie”[17] between protected and unprotected speech. The Crown Prosecution Service (CPS), the public body responsible for criminal prosecutions in England and Wales, has issued guidance that prosecutions should only be undertaken for communications offenses if “interference with the freedom of expression is unquestionably prescribed by law, is necessary and is proportionate.”[18] Despite this guidance, which aims to achieve consistency in prosecutions across the country, the Law Commission has expressed concern that the lack of clarity in the law of communications offenses means

. . .  there is a risk of overcriminalisation of online communication for gross offensiveness. This can tip the balance of parity between offline and online communication; with online communication being subject to a greater risk of prosecution for “gross offensiveness” than offline communication.[19]

The Law Commission has further stated that the communications offenses

are remarkably broad, both in terms of the different forms of communications now captured, and the proscribed behaviour and speech caught by the section 127 provision. When combined, the result is a criminalisation of some forms of communication that many may find surprising . . . [and] criminalises many forms of speech that would not be an offence in the “offline” world, even if spoken with the intention described in section 127 . . . . were it not for prosecution guidance, and human rights protections, they could conceivably be used to police a huge array of low level speech.[20]


III. Protection from Online Harassment

There do not appear to be any laws that specifically apply to the online harassment of journalists. However, England and Wales have a significant number of statutes that aim to protect individuals from harassment and abuse, that can be applied to any person regardless of occupation, and that cover abuse and harassment through online communications.[21] Given the number of statutes that can apply, this report provides a summary of the laws most commonly used to prosecute those who harass others online.

A. Communications Offenses

The primary statutes addressing abusive online communications are the Malicious Communications Act 1988 and the Communications Act 2003. 

Section 1 of the Malicious Communications Act 1988 provides that it is an offense to send electronically any communication that conveys a message that is indecent or grossly offensive, a threat, or false information if the purpose of the communication is to cause distress or anxiety to either the recipient or another person.[22] The maximum penalty is imprisonment for up to two years, a fine, or both.

Section 127(1) of the Communications Act provides that it is a criminal offense to send, or cause to be sent, a message through a public electronic communications network “that is grossly offensive or of an indecent, obscene or menacing character.”[23] When considering whether a message is menacing, the means by which the message was sent and the context of the communication should be considered, and the message must “create fear or apprehension in those to whom it is communicated, or who may reasonably be expected to see it.”[24]

Whether the message is grossly offensive is a question of fact and

in making this determination the Justices must apply the standards of an open and just multi-racial society, and that the words must be judged taking account of their context and all relevant circumstances.[25]

The term “grossly offensive” has been criticized by the Law Commission as being ambiguous and subjective, “leading to inconsistent outcomes.”[26]

Section 127(2) of the Communications Act provides that it is an offense to send, or cause to be sent, a message through a public electronic communications network that the sender knows is false. This offense has a relatively low threshold for fault, as the offender only needs to send the false message “for the purpose of causing annoyance, inconvenience or needless anxiety to another.”[27] The offense is committed as soon as the message is sent for this purpose. It does not matter if the message was later retracted or deleted, or whether the intended recipient opened the message.[28] Thus, liability is not dependent on receipt of the message or any evidence of harm caused by it; rather, the purposeful sending of such a message is itself an offense under this section. The Law Commission has criticized the application of this section, noting that the absence of any requirement that the recipient be offended or feel menaced

. . .  suggests that the offence is not exclusively concerned with protecting other people from receipt of unsolicited messages of the proscribed character.[29]  

Proceedings for this offense must start within three years from the date of the offense,[30] and the maximum penalty for offenses under section 127 of the Communications Act 2003 is up to six months’ imprisonment, a fine, or both.

This section has been applied to messages sent through social media services, including Facebook, Facebook Messenger, and Twitter,[31] which were not in existence at the time the act was written.[32] There remains uncertainty over whether the offense applies to communications posted on social media in public forums.[33] As the offense requires the use of a public electronic communications network, communications sent over private networks, which the Law Commission notes include Bluetooth connections, are not covered.[34]

The Law Commission has stated there is a mismatch between how the Communications Act is written and the practice of the CPS in deciding whether a prosecution is legally justified: the CPS must take into account the right to freedom of expression and whether the prosecution is in the public interest, but the Law Commission has noted that freedom of expression is not adequately protected in the act itself. The effect of this has been to render the law unclear and uncertain.[35] The Law Commission is also of the opinion that the offenses contained in the Malicious Communications Act and the Communications Act overlap significantly, which can lead to confusion, and that the offenses should be reviewed and consideration given to their “amalgamation into one coherent set of offences.”[36]

B. Harassment

The Protection from Harassment Act 1997 was introduced to protect individuals from harassment and stalking.[37] The act provides that harassment is both a criminal offense and grounds for a civil action. Section 1 of the act prohibits individuals from acting in a manner that amounts to harassment of another person, where the perpetrator knows, or ought to know, that the action amounts to harassment. Aiding, abetting, counseling, or procuring one or more people to harass a person is also an offense.[38] Unlike the communications offenses, there must be at least two incidents by the same person, or group of people, in order for their actions to constitute harassment.[39] The offense is punishable with up to six months of imprisonment, a fine, or both.[40]

Civil action can be started, even if the alleged harasser has not been convicted of a criminal offense, and the court may issue an injunction to restrain individuals from engaging in conduct that amounts to harassment.[41] It is a criminal offense for the person named in the injunction to do any acts prohibited by the injunction, and breaching the terms of any injunction is punishable with up to five years’ imprisonment, a fine, or both.[42] In cases where the harassment has caused financial loss or emotional issues, such as anxiety, the court may also award compensation.[43] 

Stalking is an offense under the Protection from Harassment Act and occurs where a person engages in behavior he or she knows, or ought to know, amounts to the harassment of another person, and the behavior involves acts associated with stalking, such as following a person, contacting a person by any means, publishing a statement or other material about another person, or monitoring a person's use of the internet or other forms of electronic communication.[44] This offense is punishable by up to six months’ imprisonment, a fine, or both.

The Protection from Harassment Act also contains the offense of putting a person “in fear of violence.”[45] This offense arises when a person, on at least two occasions, engages in a course of conduct that causes another person to fear that violence will be used against them. The offense may also occur when the course of conduct amounts to stalking and, on at least two occasions, causes a person to fear that violence will be used against them, or causes them “serious alarm or distress which has a substantial adverse effect on [their] usual day-to-day activities.”[46] In contrast to the offense of harassment, this offense carries a significantly higher maximum penalty and is punishable with up to ten years’ imprisonment, a fine, or both.

The Law Commission notes that, while the offenses contained in the Protection from Harassment Act can apply to harassment conducted online through “pile on” abuse, where a significant number of people collectively harass a single person, the provisions “are complex and . . . [not] well understood or widely used,”[47] and thus such abuse is not adequately addressed by the current legislation.[48] The result has been “that the criminal law is having little effect in punishing or deterring forms of ‘group abuse.’”[49]

C. Other Offenses

In addition to the communications offenses and the harassment and abuse offenses, a number of other acts may also constitute offenses under the laws of England and Wales, including, but not limited to

  • stirring up hatred on the grounds of race, religion, or sexual orientation;[50]
  • intentionally causing harassment, alarm, or distress;[51]
  • using threatening, abusive, or insulting actions to cause fear or provoke violence;[52]
  • publishing an obscene article;[53]
  • publicly displaying indecent matter;[54]
  • possessing extreme pornography;[55]
  • disclosing private sexual photographs and films with intent to cause distress;[56] and
  • inchoate offenses, such as conspiracy, assisting, or encouraging another to commit a crime.[57]  

D. Prosecutorial Guidance

Prosecutors must consider a number of other factors when deciding whether or not to prosecute a case, including whether there is sufficient evidence, whether it is in the public interest to prosecute the offense, and whether the prosecution is proportionate and justified, giving particular regard to the right to freedom of expression contained in Article 10 of the European Convention on Human Rights.[58]

CPS has published guidance that is specific to offenses involving communications sent via social media.[59] This guidance requires prosecutors to consider whether another substantive offense, such as stalking or harassment, has been committed, and to pursue these, rather than communications offenses.[60] If communications offenses are prosecuted, section 127 of the Communications Act should be the starting point, unless a higher sentence is required due to the facts and circumstances of the case.[61] Despite this guidance, the Law Commission has noted that “the majority of online hate speech is pursued as one of the communications offences.”[62]

The CPS has also issued guidance to prosecutors for cases that involve journalists. While the guidance states that it also applies to cases in which journalists are victims, it is geared more toward cases that involve the prosecution of journalists, ensuring prosecutors have regard to the protection of freedom of the press.[63]

E. Extraterritorial Application

Given the ease of international communications the internet provides, the courts have recently adopted the substantial measure test, which means that offenses with a foreign aspect may be tried in the courts of England and Wales if a substantial measure of the activities that constitute the crime occurs within the jurisdiction. The Law Commission has stated that “uncertainty exists in relation to this approach, as it applies to Internet activities.”[64]


IV. Defamation

The law relating to defamatory material—that is, published material that causes, or is likely to cause, serious harm to a person’s reputation—is contained in the Defamation Act 2013,[65] which was enacted, in part, to provide a fairer system for addressing materials published online. The update

. . . reflects the Government’s view that disputes should be resolved directly between the complainant and the poster [of the information] where possible. It aims to support freedom of expression by giving the poster an opportunity to express his or her views. It also aims to enable complainants to protect their reputation by resolving matters with the person who is responsible for the defamatory posting where they can be identified, while ensuring that material is removed where the poster cannot be identified or is unwilling to engage in the process.[66]

Prior to the enactment of the Defamation Act 2013, website operators generally removed content automatically upon the receipt of a complaint in order to avoid becoming a party to a lawsuit, as they were considered at common law to be the publisher of such statements and could be held liable for their content.[67] Concerns were raised that this cautious approach was limiting free speech, as it meant that some non-defamatory content was being removed and, in cases where content was not removed, individuals were pursuing legal actions against the website operator rather than the individual who authored and posted the content.[68] Given the vast increase in online users, the government determined that failing to take action in this area of law would negatively impact free speech.[69]

The Defamation Act positions the website operator as a liaison point between the aggrieved party and the author of the content.[70] The regulatory process is contained in the Defamation (Operators of Websites) Regulations 2013. Website operators are not under a duty to follow this procedure; they may instead decide for themselves whether or not to remove any disputed material, or whether they wish to rely on other defenses to the defamation action.[71]

Section 5 of the Defamation Act 2013 provides a defense against claims of defamation to website operators that host third-party content. In order to use the defense, a website operator must “show that it was not the operator who posted the statement on the website.”[72] The defense may be defeated if the claimant can show that

  • he or she could not identify the person who posted the allegedly defamatory statement;
  • he or she notified the operator of the complaint relating to the statement; and
  • the website operator failed to respond to the complaint in accordance with the process contained in the Defamation (Operators of Websites) Regulations 2013.[73]

Section 5(6) of the Act provides that the complainant must include the following information in the complaint: his or her name, the statement as it appears on the website in question, and the reasons why the statement is believed to be defamatory. Regulation 2 of the Defamation (Operators of Websites) Regulations 2013 provides that the complainant must also include the following information when contacting the website operator:

(a)  specify the electronic mail address at which the complainant can be contacted;

(b)  set out the meaning which the complainant attributes to the statement referred to in the notice;

(c)   set out the aspects of the statement which the complainant believes are—

(i) factually inaccurate; or

(ii) opinions not supported by fact;

(d)  confirm that the complainant does not have sufficient information about the poster to bring proceedings against that person; and

(e)   confirm whether the complainant consents to the operator providing the poster with—

(i) the complainant’s name; and

(ii) the complainant’s electronic mail address.[74]

Even if the notice provided to the website operator does not contain all the information required by both the Act and the Regulations, the Regulations provide that it must be treated as a complaint for the purposes of the Defamation Act 2013.[75]

Within forty-eight hours of receiving a complaint, the website operator must send the poster of the content complained of

  • a copy of the complaint, with the complainant’s information concealed if he or she has not consented to the sharing of this information; and
  • written notice that the content complained of will be removed unless the poster provides a written response by midnight no later than the fifth day after the notification was sent.[76] 

The poster must then notify the operator whether he or she wants the content to be removed from the website specified in the notice. If the poster does not want the content to be removed, the poster must provide his or her full name and postal address, and indicate whether the website operator may provide this personal information to the complainant. If the poster fails to respond to a notice from the website operator, or responds but fails to include all the required information, the website operator must, within forty-eight hours after the deadline provided to the poster, remove the statement identified in the notice of complaint from the website and notify the complainant that it has done so. If the poster responds to the website operator that he or she wants the content removed, the website operator has forty-eight hours after notification to remove the information, and must then notify the complainant that the content has been removed.

If the website operator does not have a means of contacting the poster, it must remove the statement complained of within forty-eight hours of receiving a written notice from the complainant. The website operator also has forty-eight hours after receiving the complaint to send an acknowledgement to the complainant stating either that the poster has been notified or that the post has been removed.[77]

The law also provides an expedited process in cases where an alleged defamatory statement is posted repeatedly. If the same complainant has requested the removal of the same material from the same website operator more than two times, and the information has been removed in accordance with the provisions of the Regulations, the complainant must specify this in the complaint and the website operator must remove the statement within forty-eight hours of receiving the complaint.[78]

If the website operator fails to follow the procedure specified in the Regulations and meet the time limits, the operator can potentially be held liable for the content.[79]
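
The branching procedure and forty-eight-hour deadlines described above can be easier to follow when laid out as explicit decision logic. The following Python sketch models a simplified version of the operator's choices under the Regulations; the class, field, and function names are hypothetical illustrations, several statutory details (such as the required content of the notices and the handling of the complainant's consent) are omitted or simplified, and the sketch is intended only to restate the steps summarized in this section, not as a statement of the law.

```python
# Illustrative sketch only: a simplified model of the notice-and-takedown timeline
# described above (Defamation (Operators of Websites) Regulations 2013). All names
# and the data model are hypothetical; this is not legal advice.

from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional


@dataclass
class Complaint:
    received_at: datetime                  # when the operator received the notice of complaint
    poster_contactable: bool               # whether the operator has a means of contacting the poster
    prior_removals_of_same_material: int   # prior removals of the same material for this complainant


@dataclass
class PosterResponse:
    wants_removal: bool                    # poster agrees the content should come down
    full_name: Optional[str] = None        # required if the poster wants the content kept up
    postal_address: Optional[str] = None


def operator_next_step(complaint: Complaint,
                       response: Optional[PosterResponse],
                       response_deadline_passed: bool) -> str:
    """Return the operator's next step under this simplified model of the Regulations."""
    forty_eight_hours = timedelta(hours=48)

    # Expedited route: the same complainant has already had the same material removed
    # from the same website more than twice.
    if complaint.prior_removals_of_same_material > 2:
        return f"Remove the statement by {complaint.received_at + forty_eight_hours}."

    # No means of contacting the poster: remove within 48 hours of the complaint.
    if not complaint.poster_contactable:
        return f"Remove the statement by {complaint.received_at + forty_eight_hours}."

    # Otherwise, forward the complaint to the poster within 48 hours and warn that the
    # content will be removed unless the poster replies by the deadline.
    if response is None and not response_deadline_passed:
        return (f"Send the complaint to the poster by "
                f"{complaint.received_at + forty_eight_hours} and await a reply.")

    # Poster failed to reply, or objected without supplying the required name and
    # address: remove and notify the complainant.
    incomplete = (response is not None and not response.wants_removal
                  and not (response.full_name and response.postal_address))
    if response is None or incomplete:
        return "Remove the statement within 48 hours of the deadline and notify the complainant."

    # Poster agreed to removal: remove within 48 hours of the reply and notify the complainant.
    if response.wants_removal:
        return "Remove the statement within 48 hours of the reply and notify the complainant."

    # Poster objected and supplied full contact details: the content stays up, and the
    # poster's details are passed on only with the poster's consent.
    return "Leave the statement in place; share the poster's details only with consent."


# Example: a complaint about a poster the operator cannot contact.
print(operator_next_step(
    Complaint(received_at=datetime(2019, 9, 1, 9, 0),
              poster_contactable=False,
              prior_removals_of_same_material=0),
    response=None,
    response_deadline_passed=False,
))
```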


V. Implementation

A. General Concerns

While a significant number of laws have been interpreted to cover online harassment and abuse, statistics of recorded incidents of alleged crimes do not indicate the laws are being robustly implemented. Malicious communications accounted for 11% of all recorded violence against the person offenses, and the charge rate was 3%.[80] Reasons cited by the Home Office for the low prosecution rate were that, in 46% of cases, the victims did not support police action,[81] and that the perpetrator was unidentifiable in 22% of these cases, due to what the Law Commission has described as a constant “arms race” occurring between criminals and law enforcement over the traceability of communications.[82] The low charge rate has also been attributed in part to the volume of complaints. The Law Commission has noted that, with an estimated 44 million social media users across the United Kingdom (UK), any kind of coordinated and comprehensive response will face significant challenges.[83]

The Law Commission has noted that the current legislation has resulted in the police response frequently being “confused and minimal,”[84] with the result that cases of abuse are often underreported. A chief constable has stated that the number of different statutes that can be involved in malicious communications cases is not “helping investigators, the Crown Prosecution Service or victims to bring these people to justice.”[85] Inconsistency and variations in knowledge across the different police forces have also played a role,[86] along with overlapping offenses causing confusion.[87] This has led to a “mismatch here between the forms of harm that are occurring online and the response of the criminal justice system.”[88] Concerns have also been raised that interactions between law enforcement and service providers have been too informal, without adequate processes to ensure the right to freedom of expression is protected.[89]

There currently appear to be no judgments in the Law Reports involving journalists who have been harassed. This may be because any prosecutions occurred in the Magistrates’ Court, the judgments of which are not included in the Law Reports. A number of newspaper articles describe the harassment of journalists, but none indicate that police action has been taken. One incident was recently reported in the news in which a member of a far-right group live-streamed himself knocking and shouting at a journalist’s door at 11:00 p.m. and again at 5:00 a.m., revealing the journalist’s home address as his followers bombarded the journalist with messages through social media. The police were called to both incidents, but there are no reports that the individual, or anyone else, was charged with any offense.[90] There have also been reports that the BBC hired a bodyguard to protect one of its political reporters during the 2017 elections.[91]

B. Evidentiary Issues

Evidence held in another country can take a significant period of time to obtain under the current mutual legal assistance procedures, with reports that the police wait up to eighteen months for social media companies to provide evidence to them.[92] In addition to the lack of timeliness of the mutual legal assistance procedures, the cost and human resources required to obtain this evidence “will sometimes simply be prohibitive for law enforcement to pursue in the context of abusive and offensive communication offences.”[93] In response to these concerns, the Crime (Overseas Production Orders) Act was enacted in 2019. This act enables UK law enforcement agencies to apply to the court for an order to obtain electronically stored data directly from a person or company located outside of the UK, if the purpose of obtaining the communication is to assist with domestic investigations and the prosecution of serious crime.[94] The orders are effective only in jurisdictions that are subject to an international cooperation arrangement permitting the orders to be recognized and that are designated countries under the act.


VI. Proposals for Reform

The Law Commission has noted that, in the majority of cases, the criminal laws of England and Wales cover harassing and abusive online communications, in some cases criminalizing online behavior to a greater degree than offline offenses. It has acknowledged that there are some ambiguities and technical issues with the law, which, along with considerable overlap between some offenses, have led to uncertainty, and that there is “considerable scope for reform.”[95] The Law Commission recommended that communications offenses be reformed and consolidated to provide clarity and ensure proportionality, and that the criminal law be reviewed to see how it can “more effectively address the specific harm caused to an individual who is subjected to a campaign of online harassment,”[96] along with “a review of how effectively the criminal law protects personal privacy online.”[97]

While the Law Commission has made these recommendations, opinion on the reform of the criminal laws relating to abusive online communications has been divided in recent years. A 2014 Select Committee of the House of Lords determined that “the criminal law in this area, almost entirely enacted before the invention of social media, is generally appropriate for the prosecution of offenses committed using social media.”[98] A 2017 report by the Select Committee on Home Affairs recommended that the entire legislative framework be revised to ensure that, among other offenses, online hate speech and harassment laws are up to date, as most criminal provisions predate the use of social media and, in some cases, the internet.[99] Conversely, the Committee on Standards in Public Life determined that the current legislative framework addressing online abuse is sufficient and that any new legislation specific to social media would be unnecessary and “could be rendered out of date quickly.”[100] This committee did recommend that liability should be extended to social media companies if they fail to remove abusive content from their platforms.[101]

Any changes to the law will face challenges, including the need to ensure that the qualified right to freedom of expression is adequately balanced against any criminal provisions, investigative and evidentiary difficulties, and the jurisdictional issues that frequently arise when the victim and the offender are in different countries or the content is hosted in a separate jurisdiction.[102] The enforcement of any laws will also require that the technical capabilities and resources of the police be kept up to date and fully funded.[103]


VII. Further Action

A. National Committee for the Safety of Journalists

In April 2019, the Organization for Security and Co-operation in Europe (OSCE) proposed that its member states should establish

a national committee for safety of journalists which would gather representatives of the prosecutor’s office, the police and journalist associations to verify that all attacks and threats are properly investigated, improve procedures if needed; propose protection measures when necessary and implement preventive action to reinforce the security of journalists.[104]

On July 11, 2019, the Secretary of State for Digital, Culture, Media and Sport acted in response to this proposal and announced the establishment of the National Committee for the Safety of Journalists.[105] The Committee is responsible for setting out a National Action Plan on the Safety of Journalists to examine the current protections journalists have and to make sure that mechanisms are in place to hold anyone who threatens journalists accountable:

With rising disinformation and threats against the media, the UK’s strong and independent press is a beacon of freedom that this Government is committed to supporting and preserving.

The Committee will champion journalists’ ability to safely carry out their important roles in society and to continue to hold the powerful to account. This is part of our broader commitment to ensuring the future sustainability of high-quality, public interest news.[106]

B. Engaging Law Enforcement

The government is also working with law enforcement to review whether its “current powers are sufficient to tackle anonymous abuse online.”[107] The police are also receiving training to improve digital capability and are working to make it easier for people to report online crimes. The Digital Public Contact program enables the public to contact the police digitally in order to facilitate the reporting of crime.[108]

C. Regulation of Online Platforms

In 2017, the government criticized Google, Facebook, and Twitter over a lack of transparency regarding both their collection of data and their performance of reporting and takedown procedures. The government expressed concern that no targets were set for the time it took these platforms to remove reported content.[109]

The Digital, Culture, Media and Sport Committee released a report in late February 2019. The committee’s chair, Damian Collins, stated,

[w]e need a radical shift in the balance of power between the platforms and the people. The age of inadequate self regulation must come to an end. The rights of the citizen need to be established in statute, by requiring the tech companies to adhere to a code of conduct written into law by Parliament, and overseen by an independent regulator.[110]

The report recommended that laws be introduced to establish a legal duty of care for companies that host online content and to provide

. . .  for clear legal liabilities to be established for tech companies to act against harmful or illegal content on their sites, and calls for a compulsory Code of Ethics defining what constitutes harmful content.

An independent regulator should be responsible for monitoring tech companies, backed by statutory powers to launch legal action against companies in breach of the code. Companies failing obligations on harmful or illegal content would face hefty fines.[111]

The committee recommended that any new regulator be funded through a levy on tech companies operating in the UK.[112] The committee further recommended that a new category be created for social media companies that would tighten the liabilities of tech companies that are “not necessarily either a ‘platform’ or a ‘publisher.’ This approach would see the tech companies assume legal liability for content identified as harmful after it has been posted by users.”[113] The government is currently preparing to consult on legislation to implement these recommendations.[114]


Clare Feikert-Ahalt
Senior Foreign Law Specialist
September 2019


[1] HM Government, Online Harms, CP 57 (2019), https://perma.cc/6D3L-X72F.

[2] Abusive and Offensive Online Communications, Law Commission, https://perma.cc/MKS4-8237.

[3] Committee on Standards in Public Life, Intimidation in Public Life, Cm. 9543 (2017), https://perma.cc/FZC3-YXES.

[4] Law Commission, Abusive and Offensive Online Communications: A Scoping Report ¶ 1.36 (2018), https://perma.cc/VZ3M-FPZH.

[5] House of Lords, Select Committee on Communications, Social Media & Criminal Offences, First Report, 2014-15, HL 37, https://perma.cc/5P4J-VGGL.

[6] Erika Fraser & Laura Martineau-Searle, VAWG Helpdesk Research Report No. 211: Nature and Prevalence of Cyber Violence against Women and Girls, UK Aid (Oct. 8, 2018), https://perma.cc/8NTW-WNKF.

[7] Press Release, Int’l Fed’n of Journalists, IFJ Global Survey Shows Massive Impact of Online Abuse on Women Journalists (Nov. 23, 2018), https://perma.cc/M5XR-W2AX. See also Law Commission, Abusive and Offensive Online Communications: A Scoping Report, supra note 4, ¶ 1.49; Fraser & Martineau-Searle, supra note 6, at 10.

[8] Committee on Standards in Public Life, Intimidation in Public Life, Cm. 9543 (Dec. 2017) at 79-80, https://perma.cc/XLK8-4AKW. See also Graham Ruddick, BBC Chair Calls for End to Abuse of Journalists – Especially Women, Guardian (London) (Sept. 13, 2017), https://perma.cc/84DG-C8HX.

[9] HM Government, Online Harms, supra note 1, Box 14.

[10] Law Commission, Abusive and Offensive Online Communications: A Scoping Report, supra note 4, ¶ 1.49 (citing United Nations Human Rights: Office of the High Commissioner, UN Experts Urge States and Companies to Address Online Gender-Based Abuse but Warn against Censorship (Mar. 8, 2017), https://perma.cc/HTV7-TK3B).

[11] Human Rights Act 1998, c. 42, https://perma.cc/ZKN8-XVNC.

[12] Id. sched. 1, art. 10(1).

[13] Id. sched. 1, art. 10(2).

[14] Handyside v. UK (1976) 1 EHRR 737 at 49. See also Muller v. Switzerland (1988) 13 EHRR 212 at 33.

[15] Ursula Smartt, Media & Entertainment Law 64 (3d ed. 2017).

[16] Law Commission, Abusive and Offensive Online Communications: A Scoping Report, supra note 4, ¶ 2.68.

[17] Id. ¶ 2.72.

[18] Crown Prosecution Service, Guidelines on Prosecuting Cases Involving Communications Sent via Social Media (Aug. 21, 2018) ¶ 27, https://perma.cc/6G5G-EHKX.

[19] Law Commission, Abusive and Offensive Online Communications: A Scoping Report, supra note 4, ¶ 5.84.

[20] Id. ¶¶ 4.63 & 13.47.

[21] Communications Act 2003, c. 21, https://perma.cc/5JKX-3CFX; Criminal Attempts Act 1981, c. 47, https://perma.cc/UHC5-XVZE; Crime and Disorder Act 1998, c. 37, https://perma.cc/K5RQ-UGGJ; Criminal Justice Act 2003, c. 44, https://perma.cc/LUG8-PVAP; Criminal Justice and Courts Act 2015, c. 2, https://perma.cc/RB3Y-8DTB; Criminal Justice and Immigration Act 2008, c. 4, https://perma.cc/UDY4-YEPU; Data Protection Act 2018, c. 12, https://perma.cc/W57W-HMZ9; Indecent Displays (Control) Act 1981, c. 42, https://perma.cc/F52D-2KUR; Malicious Communications Act 1988, c. 27, https://perma.cc/G844-4UT6; Obscene Publications Act 1959, 7 & 8 Eliz. II, c. 66, https://perma.cc/6AU2-8BCZ; Protection from Harassment Act 1997, c. 40, https://perma.cc/EPC9-YHHQ; Public Order Act 1986, c. 64, https://perma.cc/8A8B-QRNW; Serious Crime Act 2007, c. 27, https://perma.cc/HWN4-DF2R; Sexual Offences Act 2003, c. 42, https://perma.cc/2XM8-AYMW.

[22] Malicious Communications Act 1988, c. 27, § 1.

[23] Communications Act 2003, c. 21, § 127(1).

[24] Chambers v. DPP [2012] EWHC 2157 (Admin); [2013] 1 WLR 1833 ¶ 30, https://perma.cc/TC26-6PQA.

[25] DPP v. Collins [2006] UKHL 40; [2006] 1 WLR 2223 ¶ 9, https://perma.cc/N4HQ-VBVJ.

[26] Law Commission, Abusive and Offensive Online Communications: A Scoping Report, supra note 4, ¶ 13.18.

[27] Communications Act 2003, c. 21, § 127(2).

[28] Law Commission, Abusive and Offensive Online Communications: A Scoping Report, supra note 4, ¶ 4.65.

[29] Law Commission, Abusive and Offensive Online Communications: A Scoping Report, supra note 4, ¶ 4.77.

[30] Communications Act 2003, c. 21, § 125(5).

[31] Chambers v. DPP [2012] EWHC 2157 (Admin); [2013] 1 WLR 1833 ¶¶ 23-24.

[32] Law Commission, Abusive and Offensive Online Communications: A Scoping Report, supra note 4, ¶ 4.82.

[33] Law Commission, Abusive and Offensive Online Communications: Summary of Scoping Report (undated), at 6, https://perma.cc/FV6U-SR5G.

[34] Law Commission, Abusive and Offensive Online Communications: A Scoping Report, supra note 4, ¶ 4.92.

[35] Law Commission, Abusive and Offensive Online Communications: Summary of Scoping Report, supra note 33, at 10. 

[36] Law Commission, Abusive and Offensive Online Communications: A Scoping Report, supra note 4, ¶ 4.153.

[37] Protection from Harassment Act 1997, c. 40. See also Steve Foster, Human Rights and Civil Liberties 545 (3d ed. 2011).

[38] Protection from Harassment Act 1997, c. 40 § 7(3A).

[39] Taking Action About Harassment, Citizens Advice, https://perma.cc/8B4E-55PL.

[40] Protection from Harassment Act 1997, c. 40 §§ 1-2.

[41] Id. § 3.

[42] Id. § 3(9).

[43] Id. § 3.

[44] Id. § 2A.

[45] Id. §§ 4–4A.

[46] Id. § 4A(b)(ii).

[47] Law Commission, Abusive and Offensive Online Communications: Summary of Scoping Report, supra note 33, at 6. See also Law Commission, Abusive and Offensive Online Communications: A Scoping Report, supra note 4, ¶ 8.162.

[48] Law Commission, Abusive and Offensive Online Communications: A Scoping Report, supra note 4, ¶ 8.207.

[49] Id.

[50] Public Order Act 1986, c. 64, Parts III & 3A, https://perma.cc/L7YQ-74J2. The number of prosecutions for these offenses is very low. See also Crown Prosecution Service, Hate Crime Report 2017-18 (Oct. 2018), at 13, https://perma.cc/W2LH-9YBL.

[51] Public Order Act 1986, c. 64 § 4A.

[52] Id. § 4.

[53] Obscene Publications Act 1959, c. 66 § 2.

[54] Indecent Displays (Control) Act 1981, c. 42 § 1, https://perma.cc/36WP-8XSR.

[55] Criminal Justice and Immigration Act 2008, c. 4 § 63, https://perma.cc/FAR6-S5EF.

[56]  Criminal Justice and Courts Act 2015, c. 2 § 33, https://perma.cc/95MS-ERVG.

[57] Serious Crime Act 2007, c. 27 §§ 44-46, https://perma.cc/WZ4E-TPXH.

[58] Social Media – Guidelines on Prosecuting Cases Involving Communications Sent via Social Media, Crown Prosecution Serv. (Revised Aug. 21, 2018), ¶ 31, https://perma.cc/MH38-TUCA.

[59] Id. Part A.

[60] Id.

[61] Crown Prosecution Serv., supra note 18, ¶ 13.

[62] Law Commission, Abusive and Offensive Online Communications: Summary of Scoping Report, supra note 33, at 8.

[63] Media: Guidance for Prosecutors on Assessing the Public Interest in Cases Affecting the Media, Dir. for Pub. Prosecutions (Sept. 13, 2012), https://perma.cc/9E9J-YDTX.

[64] Law Commission, Abusive and Offensive Online Communications: A Scoping Report, supra note 4, ¶ 4.138.

[65] Defamation Act 2013, c. 26, § 1(1), https://perma.cc/2X3V-3SGC.  

[66] Explanatory Memorandum to the Defamation (Operators of Websites) Regulations 2013, SI 2013/3028, ¶ 7.6, https://perma.cc/5DT8-HNC3.

[67] Godfrey v. Demon Internet Ltd. [1999] EWHC QB 244, https://perma.cc/YBV8-7JLY.

[68] Explanatory Memorandum to the Defamation (Operators of Websites) Regulations 2013, supra note 66, ¶ 7.2. 

[69] Id. 

[70] Defamation Act 2013, c. 26 § 5, https://perma.cc/2X3V-3SGC. The regulatory process is contained in the Defamation (Operators of Websites) Regulations 2013, SI 2013/3028, https://perma.cc/W7Y9-49BG.  

[71] Explanatory Memorandum to the Defamation (Operators of Websites) Regulations 2013, supra note 66, ¶ 7.4. 

[72] Defamation Act 2013, c. 26, § 5(2).  

[73] Id. § 5.

[74] Defamation (Operators of Websites) Regulations 2013, SI 2013/3028, ¶ 2.

[75] Id. ¶ 4.

[76] Id. Sched. ¶ 2.

[77] Id. Sched. ¶¶ 2–4.

[78] Id. Sched. ¶ 9.

[79] House of Commons Library, The Defamation Act 2013, Jan. 2014, SN/HA/6801, at 6, https://perma.cc/2H2Y-JBCF.

[80] Law Commission, Abusive and Offensive Online Communications: A Scoping Report, supra note 4, ¶ 2.116.

[81] Id. ¶ 2.117.

[82] Id. ¶ 2.127.

[83] Id. ¶ 2.145.

[84] Id.

[85] Id.¶ 2.133; Matthew Weaver, Police Are Inconsistent in Tackling Online Abuse, Admits Chief Constable, Guardian (London) (Apr. 14, 2016), https://perma.cc/925T-R7AZ.

[86] Id. ¶ 2.132.

[87] Law Commission, Abusive and Offensive Online Communications: Summary of Scoping Report, supra note 33, at 7.

[88] Law Commission, Abusive and Offensive Online Communications: A Scoping Report, supra note 4, ¶ 8.208.

[89] Id.  See also Article 19, Self-Regulation and ‘Hate Speech’ on Social Media Platforms (2018), https://perma.cc/G6BL-HDGL.

[90] Tom Embury-Dennis, Tommy Robinson: Police Called After Ranting Anti-Islam Activist Bangs on Door of Historian Who Helped Fund Lawsuit Against Him, Independent (London) (Mar. 5, 2019), https://perma.cc/26YY-88WM; Mike Stuchbery, Tommy Robinson Hammered on My Door at 5am and Brought a Torrent of Abuse in His Wake – But He Won’t Shut Me Up, Independent (London) (Mar. 5, 2019), https://perma.cc/PB5T-64MN.

[91] Lizzy Buchan, Laura Kuenssberg: BBC Political Editor ‘Given Bodyguards’ at Labour Party Conference After Online Abuse, Independent (London) (Sept. 25, 2017), https://perma.cc/4PFY-5KV9.

[92] Law Commission, Abusive and Offensive Online Communications: A Scoping Report, supra note 4, ¶ 2.111.

[93] Id. ¶ 2.111.

[94] Crime (Overseas Production Orders) Act 2019, c. 5, https://perma.cc/WD95-PKFK.

[95] Reform of the Criminal Law Needed to Protect Victims from Online Abuse Says Law Commission, Law Commission (Nov. 1, 2018), https://perma.cc/PZ2D-M5T5.

[96] Law Commission, Abusive and Offensive Online Communications: A Scoping Report, supra note 4, ¶ 1.31.

[97] Id.

[98] House of Lords Select Committee on Communications, Social Media and Criminal Offences (July 2014) HL 37, https://perma.cc/AY46-86SV.

[99] Select Committee on Home Affairs, Hate Crime: Abuse, Hate and Extremism Online (May 2017) HC 609 ¶ 56, https://perma.cc/Y49S-QM2H.

[100] Committee on Standards in Public Life, Intimidation in Public Life (Dec. 2017) Cm. 9543 at 16, https://perma.cc/2S42-8VTM.

[101] Id. at 14.

[102] R v. Smith (No. 4) [2004] EWCA Crim 631; [2004] QB 1418, https://perma.cc/5EYW-PXFE. See further Legal Guidance: Jurisdiction, Crown Prosecution Service, https://perma.cc/Z28A-2RKM.

[103] Law Commission, Abusive and Offensive Online Communications: A Scoping Report, supra note 4, ¶¶ 2.65 & 2.128.

[104] OSCE Representative on Freedom of the Media, Opening Speech, International Conference, Vienna, Austria, Journalists Under Attack: A Threat to Media Freedom (Apr. 12, 2019), https://perma.cc/27FV-89BF.

[105] Press Release, Department for Digital, Culture, Media & Sport and The Rt. Hon Jeremy Wright MP, UK to Establish National Committee for the Safety of Journalists (July 11, 2019), https://perma.cc/5U3X-WSL3.

[106] Id.

[107] HM Government, Online Harms, supra note 1, Box 6.

[108] Id.

[109] Committee on Standards in Public Life, supra note 3, at 41.

[110] Ofcom, Addressing Harmful Online Content 2 (Sept. 18, 2018), https://perma.cc/ZPN9-BWK7.

[111] Democracy Is at Risk from the Relentless Targeting of Citizens with Disinformation, House of Commons, https://perma.cc/LM8H-R2XV.

[112] House of Commons Digital, Culture, Media and Sport Committee, Disinformation and ‘Fake News’: Final Report, 2019, H.C. 1791, https://perma.cc/5H2X-G2WU.

[113] Id. ¶ 14.

[114] Cabinet Office, Protecting the Debate: Intimidation, Influence and Information 12 (May 2019), https://perma.cc/43DZ-A22H.


Last Updated: 12/30/2020