Abstract and Keywords
This chapter explores the legal framework regarding cybercrime, with a focus on Europe. In the case of cybercrime, competent authorities face a moving target, as technological developments, both on the side of perpetrators and on the side of policing and forensics, often outwit prevalent and tested strategies against traditional crime. This chapter first raises the question of what makes cybercrime ‘cyber’, and then introduces the international and supranational legal frameworks that are meant to cope with cybercrime, with a focus on the Cybercrime Convention. Finally, the chapter offers a reflection on the image of the weighing scale when it comes to balancing safety and security against rights and freedoms.
The more we become dependent upon data- and code-driven environments, the more serious the impact of cybercrime. Whereas individual damage or harm may be remedied by way of private law compensation, substantial damage to critical infrastructure, societal trust, and economic welfare requires a complementary approach that re-establishes and confirms the normative foundations of societal intercourse. To some extent this is the task of administrative law, imposing sanctions for violating legal norms that aim to protect what political theory and legal philosophy call ‘public goods’. In economics, the term ‘public goods’ refers to goods that are non-exclusionary because they cannot be monopolized (such as the air we breathe) and non-rivalrous because usage by the one does not imply less use by another (such as information). In political theory and legal philosophy, the term refers to something that benefits society in general, whether or not that something is exclusionary or rivalrous in the economic sense. We can think of welfare, public health, freedom of expression, universal access to electricity, education, or—at a higher level of abstraction—we can think of a fair distribution of income and access to other goods. In this—non-economic—sense, public goods are closely related to shared values, though the term ‘good’ refers to more than an aspiration or mental preference, as it denotes the actual availability of the good. In legal philosophy, human rights are considered public goods. The GDPR is an example of an administrative law that protects public goods such as privacy, non-discrimination, and freedom of expression. The administrative law approach, however, easily conflates sanctions with paying a fee to exempt oneself from following the law. ‘Speeding on a public road is prohibited, and whoever speeds will be fined’ may turn into ‘speeding on a public road is allowed if one is willing to pay the fine’.
Criminal law is about punishing those who negate or ignore the shared normativity that societies thrive on. It is far more than a utilitarian calculus meant to deter a homo economicus (the calculating human agent) from violating the law, by imposing costs that hopefully overrule the benefits. Neither is criminal law a way to shame vulnerable agents into ‘behaving’ themselves. Criminal law is about censure, about addressing fellow citizens as responsible agents instead of manipulable pawns.
The monopoly of violence prohibits private punishment or taking the law into one’s own hands. The internal and external sovereignty of states implies that a government that does not protect its citizens against crime or against other states will lose its footing. Criminal law therefore does not merely provide competences, it also constitutes a task. A government that systematically forsakes punitive interventions when criminal offences are committed may raise fear about further breaches of the societal contract. This places a heavy burden on governments, as they need to provide safety and trust without, however, themselves violating safety and trust in the process of defending it. This goes for all criminal law interventions, whether investigatory or punitive.
6.1 The Problem of Cybercrime
In the Internet Security Threat Report of 2018, Symantec reports:1
From the sudden spread of WannaCry and Petya/NotPetya, to the swift growth in coinminers, 2017 provided us with another reminder that digital security threats can come from new and unexpected sources. With each passing year, not only has the sheer volume of threats increased, but the threat landscape has become more diverse, with attackers working harder to discover new avenues of attack and cover their tracks while doing so.
According to Symantec, one in thirteen web requests leads to malware; an average of 24,000 malicious mobile apps were blocked each day; and 5.4 billion WannaCry attacks were blocked. Compared to 2016, Symantec reports an 80 per cent increase in malware attacks on Macs, a 46 per cent increase in new ransomware variants, a 600 per cent increase in attacks against internet of things (IoT) devices, a 13 per cent overall increase in reported vulnerabilities, a 29 per cent increase in industrial control system (ICS) related vulnerabilities, and, finally, an 8,500 per cent increase in coinminer detections.
Though these numbers raise many questions—e.g. as to the distribution of high-impact effects compared to mere nuisance—cybercrime is a major threat to consumers, businesses, law enforcement, national security, and critical infrastructure.
6.1.1 Computer crime
In the time when computing systems were stand-alone devices, what we now call cybercrime was framed as ‘computer crime’.
The computer ‘as an instrument’ concerned offences such as spam or phishing; the computer ‘as a target’ concerned the use of malware or distributed denial of service (DDoS) attacks; traditional crimes ‘in the context of computers’ would be digital identity fraud, online copyright infringements, or online child pornography.
Whether old or new, the question remains what is ‘the difference that makes a difference’ between existing criminal offences and more recently added computer or cyber offences.
The rise of the internet and the world wide web, the interconnection between computing systems (the resilient routing of packets across a network of nodes), and the hyperlinking of information across the network (resulting in an unprecedented explosion of content, communication, and metadata), signified the shift from computer crime to cybercrime. We can safely say that we now live in a different world than two decades ago. This is related to the affordances of an unprecedented rise of computing power on the one hand (with the implied miniaturization of the carriers of digital data), and hyperconnectivity on the other (with the ensuing network effects).
This makes cybercrime different across six dimensions of human intercourse, in ways that are highly relevant for the criminal law: distance, scale, speed, distribution, invisibility, and visibility.
1. distance is implied in the ability to exercise all kinds of ‘remote control’, ignoring traditional, territorial borders, and thereby, for example, causing major issues for the force of law across different jurisdictions;
2. scale is implied in the ability to automate scripts that can affect an enormous number of other automated systems, that can in turn easily multiply the reach of a message or malware, thus, for example, enabling massive spam and attacks;
3. speed is implied in the combination of an exponential increase in computing power and hyperconnectivity, which, for example, enables the immediate or timed destruction of evidence (even at a distance) and an easy way out of criminal accountability;
4. distribution is implied in the networked nature of both the various stacks of the internet, the web, and various application layers, across remote servers (cloud computing) and amalgamated in hardware that combines operating systems, firmware, different software, and applications that have been developed by different teams and companies, while default settings may be changed by the seller, by the buyer (e.g. a service provider), and/or by the end-user, presenting all those involved with seemingly unsurmountable problems in the attribution of responsibility when things go wrong, for example, in the case of self-driving cars or data-driven energy grids;
5. invisibility is implied in the differentiation between backend systems that call the shots and frontend systems where end-users are presented with a choice architecture that hides the choices made in the backend, presenting huge issues for the foreseeability of one’s actions, for forensics, and for the attribution of causality in the case of harm or other types of damage, as, for example, in the case of manipulative micro-targeting of individual political opinion;
6. visibility is implied where the collection, linkability, and inferencing of ‘big data’ and the wonders of machine learning enable the ‘legibility’ of end-users in ways that render them vulnerable to, for example, identity theft, invisible nudging or manipulation, blackmailing, extortion, and—in the case of children—grooming.
We can continue the listing and move into corporate espionage, cyberwar, and concerted attacks on critical infrastructure, for example, that of energy supply or democratic institutions. Clearly, states, with their ‘traditional’ monopoly of violence and their ‘traditional’ ius puniendi (the right to impose public punishment), have been struggling to redefine the borderless and initially lawless realm of ‘cyberspace’, to combat cybercrime by way of policing, forensics, and judicial cooperation. Because cybercrime does not stop at national borders, states are collaborating at the international and supranational level to come to terms with the transnational nature of cybercrime.
As discussed above, public law consists of constitutional law, international public law, and administrative law. Constitutional law is relevant for cybercrime to the extent that it determines the right to a fair trial, the criminal law legality principle, and the right to privacy (that is often at stake when states create and apply investigatory competences to combat cybercrime). International public law is relevant for cybercrime because the need to act across territorial borders has resulted in concerted efforts to conclude international treaties on cybercrime. Administrative law is relevant for cybercrime to the extent that supranational legislation on cybersecurity (notably EU directives), imposes duties on Member States (MSs) to align their approach across national borders.
6.2.1 The Cybercrime Convention
The Cybercrime Convention (CC) was initiated by the Council of Europe (CoE), though from the beginning some states outside the CoE were involved, notably the United States, Canada, Japan, and South Africa. It is the most global treaty on cybercrime concluded thus far. The treaty was signed on 23 November 2001 and entered into force on 1 July 2004, after five states had ratified, including at least three MSs of the CoE (in line with Article 36 CC). In the Netherlands, the treaty entered into force on 1 March 2007, in Japan on 1 November 2012, in the United States on 1 January 2007 (treaties are in force in a contracting state once the treaty itself is in force and after the relevant state has ratified, see section 4.2.1 above). The status of accession and ratification on 10 January 2020 was: three signatory states that had not yet ratified and sixty-four ratifications.
The idea of the CC is (1) to agree on new competences to investigate cybercrime, adapted to the intricacies of cyber- as opposed to traditional crimes, and (2) on joint definitions of criminal behaviour in cyberspace to make sure that offenders cannot avoid charges by escaping to more lenient jurisdictions, while thus (3) ensuring that legal certainty is safeguarded across territorial borders, both with regard to investigatory competences and with regard to what qualifies as criminal conduct, while always (4) preserving a proper level of legal protection regarding related human rights and freedoms.
The fact that the CC is international and not supranational law means that whether it has direct effect in MSs depends on whether an MS has a monist or dualist system of recognizing the applicability of international law. In the Netherlands, as discussed above, Article 93 of the Netherlands Constitution makes this dependent on the way international law is formulated (see section 4.2.2 above). Direct effect is only at stake when the content of a treaty addresses citizens by way of granting them rights. The CC, however, does not address citizens. It addresses the MSs, requiring them to implement the content of the CC. This means that the CC lacks direct effect and must first be implemented in national law.
The structure of the CC thus highlights the goal of achieving a similar level of protection against cybercrime at the substantive and the procedural level across national borders.
The fact that the CC lacks direct effect raises the following questions:
1. Can Dutch police base their investigations into cybercrime on the CC?
2. Can a victim of online credit card fraud sue the perpetrator based on the CC?
3. Can a Dutch court convict on the basis of the CC?
The answer should be clear by now: the police cannot base their investigations on legal powers attributed by the CC (only on competences attributed by national law that implements the CC); a victim of online credit card fraud cannot sue the perpetrator based on the CC (the CC does not concern private law, the police and/or the public prosecutor hold the monopoly to initiate a criminal charge); a court cannot convict a perpetrator based on the CC (only based on a criminalization enacted in national law that implements the CC).
6.2.1.1 Substantive law
Note that the CC assumes that the criminal law legality principle is in force (see above section 3.1.3): no punishment without prior and precise criminalization, as, for example, defined in Article 7 ECHR, which is binding for the MSs of the CoE. By imposing legal obligations on contracting parties to criminalize specified conduct under the heading of cybercrime, the CC reasserts that criminalization in the legal sense is a prerequisite of fighting cybercrime in constitutional democracies.
The first set of criminal offences concerns so-called CIA-related crimes, that is, offences against the confidentiality, integrity, and availability of computer systems and data: hacking or computer trespass, illegal interception, data interference, and system interference. I will discuss them more extensively, as they are highly relevant for computer scientists.
Article 2—Illegal access
Each Party shall adopt such legislative and other measures as may be necessary to establish as criminal offences under its domestic law, when committed intentionally, the access to the whole or any part of a computer system without right. A Party may require that the offence be committed by infringing security measures, with the intent of obtaining computer data or other dishonest intent, or in relation to a computer system that is connected to another computer system.
The legal effect of this provision is that contracting parties are obligated to enact the relevant criminalization, under the legal conditions specified.5 This means that, structured in terms of legal effect and legal conditions, parties should enact that: IF a person intentionally accesses the whole or any part of a computer system without right, THEN that conduct qualifies as a criminal offence under domestic law (a Party may add the conditions that security measures were infringed, that the person acted with dishonest intent, or that the targeted system was connected to another computer system).
What does this mean for ‘ethical hacking’ (penetration testing to detect security problems)? From an ethical perspective, one can distinguish between a black hat hacker (malicious intent), a white hat hacker (good intent and permission), and a grey hat hacker (good intent but no permission). However, from a legal perspective, if a system is hacked intentionally without permission of the user or owner, the act qualifies as a criminal offence, irrespective of good or bad intent, unless there is another ‘right’ to access, such as a legal competence (e.g. for the police, provided the relevant conditions for the exercise of that competence apply). One could think of three ways to prevent punishment for ethical hacking (that is, for grey hat hacking).
First, the public prosecutor may decide not to prosecute if they find there is no general interest in prosecuting,6 for instance because the hacker followed guidelines of responsible disclosure. Note that penetration testing will fall within the scope of this criminal offence unless one has permission or an assignment to conduct such testing. In some countries, the public prosecutor has no discretion to abstain from prosecution, due to a strict interpretation of the procedural criminal law legality principle (discussed above).
This brings us to the second way that punishment can be prevented, which would entail that a legal justification can be invoked, despite the fact that the hacker had no right to access the system.7 Such a defence concerns the requirement that a criminal offence implies ‘wrongfulness’ as one of the elements that constitute a criminal offence (discussed above). One could, for instance, claim that a higher—legally relevant—duty overruled the duty to refrain from intentionally accessing the system without right. It will be up to the court to decide whether such a higher duty justified unlawful access. Note that once a court acknowledges such a higher duty, this would justify all similar cases of unlawful access, unless the decision is overruled by a higher court. As the reader may guess, courts will be cautious in accepting such grounds of justification, due to the consequences of such acceptance.
The third way to prevent punishment could be conviction without punishment,8 which would be a clear sign that the court does not accept the lawfulness of the access, but nevertheless finds good reason in the circumstances of the offence that was committed to abstain from punishment. Note that in jurisdictions that impose minimum sentences for such an offence, without enabling courts to convict without punishment, this is not an option.
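For readers who think in code, the condition/effect structure of Article 2 can be approximated as a predicate over the relevant facts. This is a toy model of my own; the names and simplifications are mine, not the Convention's or any national law's:

```python
from dataclasses import dataclass

@dataclass
class Access:
    # Toy model of the facts relevant under Article 2 CC; the field names
    # are my own shorthand, not the Convention's terminology.
    intentional: bool                 # committed intentionally
    without_right: bool               # no permission and no legal competence
    infringed_security: bool = False  # optional condition a Party may add
    dishonest_intent: bool = False    # optional condition a Party may add

def offence_art2(a: Access, requires_security_breach: bool = False) -> bool:
    # Baseline of Article 2: intentional access without right.
    base = a.intentional and a.without_right
    # A Party may narrow the offence, e.g. to access that infringes
    # security measures (the other optional conditions work the same way).
    if requires_security_breach:
        return base and a.infringed_security
    return base

# A grey hat hacker acts with good intent but without permission:
grey_hat = Access(intentional=True, without_right=True)
print(offence_art2(grey_hat))        # True: still an offence

# A pen tester with an assignment acts 'with right':
pen_tester = Access(intentional=True, without_right=False)
print(offence_art2(pen_tester))      # False
```

Note that good intent does not appear in the predicate at all: under Article 2 it can only play a role via prosecutorial discretion, a justification defence, or sentencing, as discussed above.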
After Article 2 on illegal access, we have another CIA-related offence in Article 3 on interception:
Article 3—Illegal interception
Each Party shall adopt such legislative and other measures as may be necessary to establish as criminal offences under its domestic law, when committed intentionally, the interception without right, made by technical means, of non-public transmissions of computer data to, from or within a computer system, including electromagnetic emissions from a computer system carrying such computer data. A Party may require that the offence be committed with dishonest intent, or in relation to a computer system that is connected to another computer system.
In terms of legal effect and legal conditions, this provision requires the following:9 IF a person, intentionally, by technical means, and without right, intercepts non-public transmissions of computer data to, from, or within a computer system (including electromagnetic emissions from a computer system carrying such computer data), THEN that conduct qualifies as a criminal offence under domestic law (a Party may add the conditions of dishonest intent or of a connection to another computer system).
Article 4 CC stipulates the criminalization of another CIA-related offence, notably that of data interference:10
1 Each Party shall adopt such legislative and other measures as may be necessary to establish as criminal offences under its domestic law, when committed intentionally, the damaging, deletion, deterioration, alteration or suppression of computer data without right.
2 A Party may reserve the right to require that the conduct described in paragraph 1 result in serious harm.
In terms of legal effect and legal conditions, this implies that parties enact that: IF a person intentionally and without right damages, deletes, deteriorates, alters, or suppresses computer data, THEN that conduct qualifies as a criminal offence under domestic law (a Party may reserve the right to require that the conduct results in serious harm).
Article 5 then stipulates the criminalization of the final CIA-related offence, that of system interference:11
Each Party shall adopt such legislative and other measures as may be necessary to establish as criminal offences under its domestic law, when committed intentionally, the serious hindering without right of the functioning of a computer system by inputting, transmitting, damaging, deleting, deteriorating, altering or suppressing computer data.
In terms of legal effect and legal conditions, this entails that parties must legislate that: IF a person intentionally and without right seriously hinders the functioning of a computer system by inputting, transmitting, damaging, deleting, deteriorating, altering, or suppressing computer data, THEN that conduct qualifies as a criminal offence under domestic law.
As indicated above, the CIA-related offences are followed by ‘traditional’ crimes such as computer-related forgery and fraud in Articles 7–8, by content crime, notably child pornography in Article 9, and by copyright violations in Article 10. These can be analysed similarly to Articles 2–5.
6.2.1.2 Procedural law
The second part of the CC concerns procedural law, effectively stipulating that specified investigatory powers are attributed to the police and judicial authorities: expedited preservation of computer data (traffic and content), production orders, search and seizure, and interception (metadata and content data). I will provide an analysis of the production order and the legal power to conduct search and seizure and leave it to the reader to study the legal conditions for lawful interception.
As explained in section 2.2.1, legal norms can be distinguished as either primary or secondary rules. Primary rules regulate human intercourse by way of prohibitions and obligations. Secondary rules constitute competences to legislate, govern, or adjudicate; more generally, they constitute the competence to act. Substantive criminal law, discussed in the previous section, can be understood as a set of secondary rules that impose punitive sanctions when specified primary norms have been violated. The first part of the CC stipulates which primary norms must be protected by way of criminalization. The second part of the CC, regarding procedural criminal law, can be understood as a set of secondary rules that defines under what conditions ‘competent authorities’ are allowed to exercise a set of legal powers that should enable them to combat cybercrime. The second part of the CC thus stipulates what secondary norms must be instituted by the contracting parties in the realm of cybercrime investigation.
Article 18 requires contracting parties to enact legal powers for its competent authorities to request computer data and subscriber information.
1. Each Party shall adopt such legislative and other measures as may be necessary to empower its competent authorities to order:
a) a person in its territory to submit specified computer data in that person’s possession or control, which is stored in a computer system or a computer-data storage medium; and
b) a service provider offering its services in the territory of the Party to submit subscriber information relating to such services in that service provider’s possession or control.
2. The powers and procedures referred to in this Article shall be subject to Articles 14 and 15.
3. For the purpose of this Article the term ‘subscriber information’ means any information contained in the form of computer data or any other form that is held by a service provider, relating to subscribers of its services, other than traffic or content data and by which can be established:
a) the type of communication service used, the technical provisions taken thereto and the period of service;
b) the subscriber’s identity, postal or geographic address, telephone and other access number, billing and payment information, available on the basis of the service agreement or arrangement;
Here we see the legality principle at work (see the discussion of the legality principle above), as this stipulates that government authorities can only act if there is a legal basis, while in the case of invasive actions such as criminal investigations these actions require a more detailed legal basis. In other words, whenever government authorities act, they must be ‘competent authorities’, meaning they have been attributed the legal powers for their actions. As discussed above, legal competences have a double function: they both constitute and limit the power they attribute. Article 18 requires parties to attribute specified legal powers, based on the assumption that competent authorities can only act within the confines of the specification that constitutes the power. The second paragraph further asserts this, by referring to Articles 14 and 15, which limit the scope of the investigatory competences and stipulate that relevant safeguards must be in place. We will return to this in section 6.2.2 below.
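For computer scientists, the definition in paragraph 3 can be read as a data schema: subscriber information covers contract-level records, to the explicit exclusion of traffic data and content data. A minimal Python sketch of my own (the field and function names are illustrations, not the Convention's terms):

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical record mirroring Article 18(3) CC: data a service provider
# holds about a subscriber, excluding traffic and content data.
@dataclass
class SubscriberInformation:
    service_type: str                   # type of communication service used
    technical_provisions: str           # technical provisions taken thereto
    period_of_service: str              # period of service
    identity: str                       # the subscriber's identity
    postal_address: Optional[str] = None
    telephone_number: Optional[str] = None
    billing_info: Optional[str] = None  # available on the basis of the contract

# Article 18(3) carves traffic data and content data out of the definition;
# obtaining those requires other, more intrusive powers (e.g. interception).
EXCLUDED_FROM_SUBSCRIBER_INFO = {"traffic data", "content data"}

def may_be_ordered_as_subscriber_info(category: str) -> bool:
    return category not in EXCLUDED_FROM_SUBSCRIBER_INFO

print(may_be_ordered_as_subscriber_info("billing information"))  # True
print(may_be_ordered_as_subscriber_info("content data"))         # False
```

The design point is the carve-out: a production order under Article 18(1)(b) reaches the contract-level fields above, not what the subscriber communicated or with whom.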
Let me explain the relevance of a production order in the realm of cybercrime by extensively quoting case law of the European Court of Human Rights (ECtHR), which speaks for itself, nicely demonstrating that even without recourse to the CC, the ECHR implies positive obligations for the contracting parties of the CoE to enact legal competences for the police to give a production order. The case is that of K.U. v. Finland.12 To enable easy reading, I have used some bullet points, without, however, changing the text:
7. On 15 March 1999 an unidentified person or persons placed an advertisement on an Internet dating site
• in the name of the applicant,
• who was 12 years old at the time,
• without his knowledge.
The advertisement mentioned his age and year of birth,
• gave a detailed description of his physical characteristics,
• a link to the web page he had at the time,
• which showed his picture, as well as his telephone number, which was accurate save for one digit.
In the advertisement, it was claimed that he was looking for an intimate relationship with a boy of his age or older
• ‘to show him the way’.
9. The applicant’s father requested the police
• to identify the person who had placed the advertisement in order to bring charges against that person.
The service provider, however,
• refused to divulge the identity of the holder of the so-called dynamic Internet Protocol (IP) address in question,
• regarding itself bound by the confidentiality of telecommunications as defined by law.
10. The police then asked the Helsinki District Court (käräjäoikeus, tingsrätten)
• to oblige the service provider to divulge the said information pursuant to section 28 of the Criminal Investigations Act (esitutkintalaki, förundersökningslagen; Act no. 449/1987, as amended by Act no. 692/1997).
11. In a decision issued on 19 January 2001, the District Court refused
• since there was no explicit legal provision authorising it to order the service provider to disclose telecommunications identification data in breach of professional secrecy.
The court noted that by virtue of Chapter 5a, section 3, of the Coercive Measures Act ( … ) and section 18 of the Protection of Privacy and Data Security in Telecommunications Act ( … )
• the police had the right to obtain telecommunications identification data in cases concerning certain offences, notwithstanding the obligation to observe secrecy.
However, malicious misrepresentation was not such an offence.
35. The applicant complained under Article 8 of the Convention that
• an invasion of his private life had taken place and that
• no effective remedy existed to reveal the identity of the person who had put a defamatory advertisement on the Internet in his name, contrary to Article 13 of the Convention.
Article 8 provides:
‘1. Everyone has the right to respect for his private and family life, his home and his correspondence.
‘2. There shall be no interference by a public authority with the exercise of this right except such as is in accordance with the law and is necessary in a democratic society in the interests of national security, public safety or the economic well-being of the country, for the prevention of disorder or crime, for the protection of health or morals, or for the protection of the rights and freedoms of others.’
Article 13 provides:
‘Everyone whose rights and freedoms as set forth in [the] Convention are violated shall have an effective remedy before a national authority notwithstanding that the violation has been committed by persons acting in an official capacity.’
41. There is no dispute as to the applicability of Article 8:
• the facts underlying the application concern a matter of ‘private life’,
• a concept which covers the physical and moral integrity of the person (see X and Y v. the Netherlands, cited above, § 22).
Although this case is seen in domestic law terms as one of malicious misrepresentation,
• the Court would prefer to highlight these particular aspects of the notion of private life,
• having regard to the potential threat to the applicant’s physical and mental welfare brought about by the impugned situation and to his vulnerability in view of his young age.
42. The Court reiterates that,
• although the object of Article 8 is essentially to protect the individual against arbitrary interference by the public authorities,
• it does not merely compel the State to abstain from such interference:
• in addition to this primarily negative undertaking,
• there may be positive obligations inherent in an effective respect for private or family life (see Airey v. Ireland, 9 October 1979, § 32, Series A no. 32).
43. These obligations may involve
• the adoption of measures designed to secure respect for private life even in the sphere of the relations of individuals between themselves.
There are different ways of ensuring respect for private life
• and the nature of the State’s obligation will depend on the particular aspect of private life that is at issue.
While the choice of the means to secure compliance with Article 8 in the sphere of protection against acts of individuals
• is, in principle, within the State’s margin of appreciation,
• effective deterrence against grave acts, where fundamental values and essential aspects of private life are at stake, requires efficient criminal-law provisions (see X and Y v. the Netherlands, cited above, §§ 23–24 and 27; August v. the United Kingdom (dec.), no. 36505/02, 21 January 2003; and M.C. v. Bulgaria, no. 39272/98, § 150, ECHR 2003-XII).
49. The Court considers that practical and effective protection of the applicant required that
• effective steps be taken to identify and prosecute the perpetrator, that is, the person who placed the advertisement.
In the instant case, such protection was not afforded.
• An effective investigation could never be launched because of an overriding requirement of confidentiality.
• Although freedom of expression and confidentiality of communications are primary considerations and users of telecommunications and Internet services must have a guarantee that their own privacy and freedom of expression will be respected, such guarantee cannot be absolute and must yield on occasion to other legitimate imperatives, such as the prevention of disorder or crime or the protection of the rights and freedoms of others.
• Without prejudice to the question whether the conduct of the person who placed the offending advertisement on the Internet can attract the protection of Articles 8 and 10, having regard to its reprehensible nature, it is nonetheless the task of the legislator to provide the framework for reconciling the various claims which compete for protection in this context.
• Such framework was not, however, in place at the material time, with the result that Finland’s positive obligation with respect to the applicant could not be discharged.
• This deficiency was later addressed. However, the mechanisms introduced by the Exercise of Freedom of Expression in Mass Media Act (see paragraph 21 above) came too late for the applicant.
Let us now move to Article 19 CC, which requires contracting parties to enact a power to conduct a search and seizure of stored computer data.13
1. Each Party shall adopt such legislative and other measures as may be necessary to empower its competent authorities to search or similarly access:
a) a computer system or part of it and computer data stored therein; and
b) a computer-data storage medium in which computer data may be stored in its territory.
2. Each Party shall adopt such legislative and other measures as may be necessary to ensure that
• where its authorities search or similarly access a specific computer system or part of it, pursuant to paragraph 1.a,
• and such data is lawfully accessible from or available to the initial system,
• the authorities shall be able to expeditiously extend the search or similar accessing to the other system.
3. Each Party shall adopt such legislative and other measures as may be necessary to empower its competent authorities to seize or similarly secure computer data accessed according to paragraphs 1 or 2. These measures shall include the power to:
a) seize or similarly secure a computer system or part of it or a computer-data storage medium;
b) make and retain a copy of those computer data;
c) maintain the integrity of the relevant stored computer data;
d) render inaccessible or remove those computer data in the accessed computer system.
4. Each Party shall adopt such legislative and other measures as may be necessary to empower its competent authorities
• to order any person who has knowledge about the functioning of the computer system or measures applied to protect the computer data therein
• to provide, as is reasonable, the necessary information,
• to enable the undertaking of the measures referred to in paragraphs 1 and 2.
5. The powers and procedures referred to in this article shall be subject to Articles 14 and 15.
As in Article 18, this competence is restricted by the safeguards required in Articles 14 and 15 (see paragraph 5), thus asserting the legality principle.
Paragraph 4 includes the competence to request a password to access a computing system, or to decrypt content. The reference to the safeguards of the Rule of Law in paragraph 5 may imply that such a request cannot be directed to a suspect, as this could violate the privilege of nemo tenetur, that is, the privilege against self-incrimination. The full privilege reads nemo tenetur se ipsum accusare, or no one is bound to incriminate themselves. The ECtHR reads this privilege into Article 6 ECHR, even though it is not explicitly articulated.14 It mainly guards against unwarranted compulsion, but it does not provide an absolute right; depending on the severity of the public interest that is at stake, the effectiveness of procedural safeguards and (p.181) how the information obtained is to be used, infringements can be justified. Whether the ECtHR would consider a categorical competence to order a suspect to provide a password as a violation of Article 6 ECHR is not clear, but this will probably depend on whether effective legal safeguards and proportionality requirements are in place.
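To see why the order of paragraph 4 matters in practice, consider a minimal sketch in Python. It uses a toy one-time pad for illustration only (not a production cipher), with invented data: a seized copy of encrypted data is unreadable unless the key or password can be obtained.

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with the corresponding key byte (one-time pad)."""
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meeting at 22:00"
key = secrets.token_bytes(len(message))   # random key, as long as the message

ciphertext = xor_bytes(message, key)          # encrypt
assert ciphertext != message                  # a seized copy reveals nothing by itself
assert xor_bytes(ciphertext, key) == message  # with the key, content is recovered
```

The sketch illustrates why a competent authority that has lawfully seized data under paragraphs 1 to 3 may still need the information order of paragraph 4, and why directing such an order at a suspect raises the nemo tenetur question discussed above.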
Note that the CC does not impose an obligation on contracting parties to enact a legal power for the police to remotely hack into computing systems. Though such a power is not prohibited, its enactment is not an implementation of the CC, and there is no obligation to enable remote access, except via an already accessed system on the basis of paragraph 2.
6.2.1.4 Extraterritorial jurisdiction to enforce or investigate
Another caveat concerns the limitation of access to the territory of the investigating state: first in paragraph 1.b, which limits the search to computer-data storage media on the territory of the relevant contracting party. Second, a search in a remote system via an already accessed system, based on paragraph 2, is restricted to cases where the competent authorities ‘have grounds to believe that the data sought is stored in another computer system or part of it in its territory’. This restriction is based on a fundamental principle of international law, which prohibits extraterritorial jurisdiction to enforce. As discussed in sections 4.1 and 4.4, the combination of internal and external sovereignty, which constitutes both international and national law, implies that states respect one another’s territorial integrity as part of their sovereignty. Conducting criminal law investigations on the territory of another state has therefore been outlawed since the famous Lotus case, decided by the Permanent Court of International Justice (PCIJ) in The Hague in 1927.15 In that case it was decided that extraterritorial enforcement jurisdiction is only permitted if permission has been granted by the state on whose territory the investigations take place. Such permission can be ad hoc, but it can also be based on mutual legal assistance treaties (MLATs). Article 32 CC confirms the prohibition of extraterritorial enforcement jurisdiction:
Article 32—Trans-border access to stored computer data with consent or where publicly available
A Party may, without the authorisation of another Party:
a) access publicly available (open source) stored computer data, regardless of where the data is located geographically; or
(p.182) b) access or receive, through a computer system in its territory, stored computer data located in another Party, if the Party obtains the lawful and voluntary consent of the person who has the lawful authority to disclose the data to the Party through that computer system.
Article 32 CC has given rise to heated discussions, as some contracting parties have enacted competences to remotely hack into computing systems that may be on foreign territory. The reader can imagine that this particular issue has major implications for the force of law, for the practice of international law, and for the defining features of internal and external sovereignty. The jury is still out on where this will end.
Articles 20 and 21 CC require contracting parties to provide legal powers that enable interception by the competent authorities of traffic data (Article 20) and content data (Article 21). Note that without such powers, the police would themselves commit a criminal offence, and become punishable, when intercepting such data. The reader is invited to study both articles in detail, dissecting the cumulative and alternative legal conditions that must be fulfilled for interception to be lawful.
6.2.2 Limitations on investigative powers
As indicated above, the legality principle requires that governments act in a way that is not arbitrary, that is sufficiently foreseeable and proportional, and that is embedded in adequate safeguards. This includes respect for human rights and a proactive approach to potential risks for democracy and the Rule of Law. As mentioned in the previous section, Article 15 explicitly requires the contracting parties to implement all relevant provisions of the CC in line with the demands of constitutional democracy.
Article 15—Conditions and safeguards
1. Each Party shall ensure that the establishment, implementation and application of the powers and procedures provided for in this Section are
– subject to conditions and safeguards provided for under its domestic law,
– which shall provide for the adequate protection of human rights and liberties,
– including rights arising pursuant to obligations it has undertaken under the 1950 Council of Europe Convention for the Protection of Human Rights and Fundamental Freedoms,
– the 1966 United Nations International Covenant on Civil and Political Rights, and
– other applicable international human rights instruments, and
– which shall incorporate the principle of proportionality.
2. Such conditions and safeguards shall, as appropriate in view of the nature of the procedure or power concerned, inter alia, include
– judicial or other independent supervision,
– grounds justifying application, and
– limitation of the scope and the duration of such power or procedure.
3. To the extent that it is consistent with the public interest, in particular the sound administration of justice, each Party shall consider the impact of the powers and procedures in this section upon the rights, responsibilities and legitimate interests of third parties.
Article 15 basically integrates the case law of the ECtHR (the highest court of the CoE, which initiated the CC) into the CC. Above, in section 5.3.5, we discussed the legal conditions that must be fulfilled to justify infringing measures of secret surveillance, notably in Weber & Saravia.16 The safeguards stipulated in such case law are highly relevant for the competences that must be attributed by the contracting parties; similarly, the proportionality test of Article 8 ECHR (see also Article 15, paragraph 1) must be built into the procedures that condition the application of these legal powers.
6.2.2.1 Proportionality test for police access to personal data
An interesting example of a proportionality test regarding police access to personal data retained by internet service providers (ISPs) was conducted by the CJEU in its judgment of October 2018.17 The case concerned a police request to obtain identifying information on those who interacted with a stolen smartphone during a twelve-day period after the phone was stolen. The question was whether this constituted a ‘serious’ interference with the fundamental rights and freedoms of those persons. The CJEU finds in paragraph 60:
It is therefore apparent that the data concerned by the request for access at issue in the main proceedings only enables the SIM card or cards activated with the stolen mobile telephone to be linked, during a specific period, with the identity of the owners of those SIM cards. Without those data being cross-referenced with the data pertaining to the communications with those SIM cards and the location data, those data do not make it possible to ascertain the date, time, duration and recipients of the communications made with the SIM card or cards in question, nor the locations where those communications took place or the frequency of those (p.184) communications with specific people during a given period. Those data do not therefore allow precise conclusions to be drawn concerning the private lives of the persons whose data is concerned.
This was the first step in the proportionality test, which weighs the infringement against the purpose it aims to achieve. Paragraph 61 concludes that the request does not constitute a ‘serious’ infringement. As to the second step, the CJEU concludes in paragraph 62:
As stated in paragraphs 53 to 57 of this judgment, the interference that access to such data entails is therefore capable of being justified by the objective, to which the first sentence of Article 15(1) of Directive 2002/58 refers, of preventing, investigating, detecting and prosecuting ‘criminal offences’ generally, without it being necessary that those offences be defined as ‘serious’.
Directive 2002/58 is the ePrivacy Directive, which aims to protect the confidentiality of online communication. Article 15 of that directive allows for legislative measures that restrict the protection granted by the ePrivacy Directive, for purposes such as the prevention and investigation of criminal offences. The CJEU basically states that Article 15(1) does not limit such restrictions to the prevention and investigation of ‘serious’ criminal offences. Together with the finding that the request for identifying information at stake in this case is not a serious infringement, the CJEU concludes that criminal offences that are not considered serious may be investigated by way of investigative measures that do not constitute a serious infringement. Though the CJEU is the highest court of the EU, not the highest court of the CoE, its proportionality test is relevant, as it follows that of the ECtHR due to Article 52(3) of the CFREU:
3. In so far as this Charter contains rights which correspond to rights guaranteed by the Convention for the Protection of Human Rights and Fundamental Freedoms, the meaning and scope of those rights shall be the same as those laid down by the said Convention. This provision shall not prevent Union law providing more extensive protection.
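The CJEU’s point in paragraph 60 can be made concrete with a toy sketch in Python, using hypothetical, invented data: identifying data alone links SIM cards to owners, but without the separate traffic and location records no communication profile can be derived.

```python
# Hypothetical, simplified data for illustration only.
sim_owners = {            # identifying data: what the police requested
    "sim-001": "A. Jansen",
    "sim-002": "B. de Vries",
}
traffic_log: list = []    # traffic/location data: NOT part of the request

# The request only links SIM cards activated in the stolen phone to owners.
activated = ["sim-002"]
suspects = [sim_owners[s] for s in activated]
assert suspects == ["B. de Vries"]

# Without traffic data, no dates, durations, recipients, or locations
# can be reconstructed: there is simply nothing to query.
assert traffic_log == []
```

The separation of the two data sets mirrors the CJEU’s reasoning: because the request could not be ‘cross-referenced’ with traffic and location data, no precise conclusions about private life could be drawn, and the interference was not ‘serious’.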
6.2.2.2 Proportionality test, balancing tests, and the image of the scale
As a final point, I want to briefly discuss the image of the scale that is so often invoked when proportionality comes into focus. In much of the literature we encounter the idea that security and liberty are mutually exclusive, suggesting (p.185) that we cannot eat our cake and have it too. This suggests that a trade-off between security measures and liberty rights is a given: more of the one supposedly results in less of the other. This is not correct as far as digital security is concerned. Security measures such as encryption will often enable or reinforce a user’s capability to make freely chosen and well-informed decisions about sharing personal data. Nevertheless, the opposite position is equally incorrect. Some security measures will require disclosure, penetration testing, or even deep packet inspection to facilitate attack monitoring, and this will necessarily infringe individual rights and freedoms, especially where such measures are invisible or even secret.
The idea that security measures and liberty rights must be framed in terms of a trade-off is not restricted to the domain of cybersecurity. It also pervades the broader domain of policy science, where it refers to national and public security, the fight against transnational terrorism, and foreign intelligence targeting critical infrastructure and democratic processes. Here, security concerns threats to a person’s autonomy and bodily integrity, an organization’s resilience, or a state’s existence or economic welfare, based on targeted attacks. In that sense security is a subdomain of safety, which also refers to threats, though not necessarily threats based on deliberate targeting.
In the context of cybercrime law, the broader discussion of a trade-off between security and liberties plays out whenever investigatory measures infringe human rights such as privacy, freedom of expression, or, for example, the privilege against self-incrimination. The CC, as we have seen in the preceding subsections, requires proportionality between the infringing measures and the objective that is meant to be protected. It is crucial to acknowledge that such proportionality is not equivalent to the trade-off that is often suggested when the image of the scale is invoked (more protection of security requires less human rights protection), though we should also not take the opposite position that such a trade-off never occurs.
Sometimes increased security requires the infringement of, for example, privacy, but this is not necessarily the case. Some digital security measures may in fact increase privacy protection, for instance when end-to-end encryption is deployed as a security measure. In the domain of police investigations into cybercrime, however, whereas the police may use such measures for their internal communication, consumers who employ them may be seen as obstructing police investigations. Within the context of cybercrime, security measures concern police access to computing systems, production orders, and interception. The first point made by Waldron, in his seminal article on the image of balance, highlights that the mere fact that such measures infringe privacy does not imply that they increase public security: if a then b does not imply that if b then a.
This also relates to his sixth point: security measures often promise more than they can effectively achieve. In itself this is to be expected, but when a balancing test is done, we must accept that measures that are ineffective cannot be necessary and can thus not be proportional.
Thinking in terms of a trade-off, using the image of the scale, suggests that the trade-off between liberty and security is a matter of calculation: some amount of liberty is traded against some amount of security. Waldron’s second point is that this is clearly not the case. Though a security measure may, metaphorically, be understood in terms of costs (liberties) and benefits (security), there is no generally valid way of counting either the costs or the benefits. Security is a different ‘thing’ than liberty, while both can be understood as public goods as well as private interests. Though one could ‘rank’ costs and benefits, this does not imply that they can be added up or subtracted on one and the same scale, which is exactly what the image of the scale lures us into assuming.
This again links to the sixth point; we should not mistake a security measure for the effect it aims to achieve.
The idea of a trade-off also wrongly assumes that liberty and security are independent variables, whereas in a constitutional democracy there are many (p.187) dependencies between them. As discussed in sections 2.2 and 3.3, in a constitutional democracy a government must not only ‘(1) act with an eye to the public interest’, but also ‘(2) act within the confines of the legality principle and (3) treat citizens with equal respect and concern’. This entails that liberty is not something to trade at will against other public goods, but something that, like security, is constitutive of a legitimate government. As citizens, we cannot be secure in our life and limb if liberties can be flouted by the state in its struggle to provide us with security. This connects with the fact that diminishing liberties will often increase insecurity in relation to the state.
Convincing people to give up some liberty to gain some security misrepresents a reality in which the liberty of one group of people may be diminished to ensure the increased security of another group. The security of some is then traded against the liberty of others. Depending on what kind of security measures are at stake, liberty may be redistributed, for instance when those dependent on welfare benefits are exposed to automated decision systems that disregard their privacy, whereas others can afford to protect themselves by buying an expensive but well-protected smartphone. Data protection law may protect legal residents of the EU, whereas illegal aliens may find themselves ‘naked’ in the eye of the immigration machine, seeing their privacy ‘traded’ for the security of already securely settled lawful residents.
6.3 The EU Cybercrime and Cybersecurity Directives
In the strict sense, the Directive on attacks against information systems (2013/40/EU) is ‘the’ EU Cybercrime Directive. As to its aims and instrumental value, it overlaps with the CC, requiring EU MSs to criminalize illegal access, attacks against information systems and computer data, and illegal interception (substantive criminal law). Unlike the CC, it does not concern the criminalization of fraud, child pornography, or copyright violations, clearly focusing on CIA-related offences. Also unlike the CC, it does not impose obligations regarding criminal procedure and criminal investigations. The goal of the directive is minimum harmonization, meaning that MSs can go beyond what is required, but not below it, as Article 1 states under the heading of ‘Subject matter’:
This Directive establishes minimum rules concerning the definition of criminal offences and sanctions in the area of attacks against information systems. It also (p.188) aims to facilitate the prevention of such offences and to improve cooperation between judicial and other competent authorities.
Interestingly, this directive obligates MSs to impose ‘minimum maximum’ penalties for specific cybercrimes, for example, in Article 9, paragraph 2:
Member States shall take the necessary measures to ensure that the offences referred to in Articles 3 to 7 are punishable by a maximum term of imprisonment of at least two years, at least for cases which are not minor.
Criminal law is often considered core to internal sovereignty, meaning that states resist supranational interference with their criminal law policy. By stipulating minimum maximum punishment, EU law reserves discretion for MSs that reject minimum punishment or that allow conviction without punishment (as in the Netherlands under Article 9a NCC, see above section 4.1.2).
The directive pays special attention to the criminal law liability of legal persons and to issues of jurisdiction, and includes various types of cross-national collaboration within the Union (e.g. information exchange via national points of contact, and the collection of relevant statistics).
Next to the ‘real’ EU Cybercrime Directive, the EU has also enacted a cybersecurity directive, the Directive on Network and Information Security (NIS) (EU) 2016/1148. The subject matter here is defined in Article 1 as:
1. This Directive lays down measures with a view to achieving a high common level of security of network and information systems within the Union so as to improve the functioning of the internal market.
2. To that end, this Directive:
(a) lays down obligations for all Member States to adopt a national strategy on the security of network and information systems;
(b) creates a Cooperation Group in order to support and facilitate strategic cooperation and the exchange of information among Member States and to develop trust and confidence amongst them;
(c) creates a computer security incident response teams network (‘CSIRTs network’) in order to contribute to the development of trust and confidence between Member States and to promote swift and effective operational cooperation;
(d) establishes security and notification requirements for operators of essential services and for digital service providers;
(p.189) (e) lays down obligations for Member States to designate national competent authorities, single points of contact and CSIRTs with tasks related to the security of network and information systems.
( … )
6. This Directive is without prejudice to the actions taken by Member States to safeguard their essential State functions, in particular to safeguard national security, including actions protecting information the disclosure of which Member States consider contrary to the essential interests of their security, and to maintain law and order, in particular to allow for the investigation, detection and prosecution of criminal offences.
Note that Article 2 of the NIS Directive states that personal data processed pursuant to this directive falls within the scope of the Data Protection Directive (now the GDPR), meaning it does not fall within the scope of the Police Data Protection Directive (which is focused on processing of personal data ‘by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data’). This, again, clarifies the difference between cybersecurity and cybercrime. The concept of ‘security of network and information systems’ is actually defined in Article 4(2) of the NIS Directive:
‘security of network and information systems’ means the ability of network and information systems to resist, at a given level of confidence, any action that compromises the availability, authenticity, integrity or confidentiality of stored or transmitted or processed data or the related services offered by, or accessible via, those network and information systems;
By defining security in terms of resilience against ‘any action that compromises the availability, authenticity, integrity or confidentiality’, the NIS Directive anchors itself in the CIA concerns that define cybersecurity.
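The integrity and authenticity concerns named in Article 4(2) can be made concrete with a minimal sketch in Python, using the standard library’s `hmac` module. The key and message are invented for illustration; the point is that an authentication tag lets a recipient detect any action that compromises stored or transmitted data.

```python
import hashlib
import hmac

def tag(message: bytes, key: bytes) -> bytes:
    """Compute an authentication tag over the message (integrity + authenticity)."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, key: bytes, received_tag: bytes) -> bool:
    """Constant-time check that the message was not altered in transit."""
    return hmac.compare_digest(tag(message, key), received_tag)

key = b"shared-secret"
msg = b"stored or transmitted data"
t = tag(msg, key)

assert verify(msg, key, t)             # unmodified data passes the check
assert not verify(msg + b"!", key, t)  # any alteration is detected
```

Confidentiality and availability require other measures (encryption, redundancy), but the sketch shows how the CIA concerns that define cybersecurity translate into concrete technical controls.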
On cybercrime law
Brenner, Susan W. 2012. Cybercrime and the Law. Boston: Northeastern University Press.
Schwarzenegger, Christian, Finlay Young, Gian Ege, and Sarah J. Summers. 2014. The Emergence of EU Criminal Law: Cyber Crime and the Regulation of the Information Society. Oxford and Portland: Hart Publishing.
Tosoni, Luca. 2018. ‘Rethinking Privacy in the Council of Europe’s Convention on Cybercrime’. Computer Law & Security Review, September. https://doi.org/10.1016/j.clsr.2018.08.004.
On production orders
Hert, Paul de, Cihan Parlar, and Juraj Sajfert. 2018. ‘The Cybercrime Convention Committee’s 2017 Guidance Note on Production Orders: Unilateralist Transborder Access to Electronic Evidence Promoted via Soft Law’. Computer Law & Security Review 34 (2): 327–36. https://doi.org/10.1016/j.clsr.2018.01.003.
On extraterritorial jurisdiction to enforce in cyberspace
Hildebrandt, Mireille. 2013. ‘Extraterritorial Jurisdiction to Enforce in Cyberspace? Bodin, Schmitt, Grotius in Cyberspace’. University of Toronto Law Journal 63 (2): 196–224. https://doi.org/10.3138/utlj.1119.
On the image of the scale
Hildebrandt, M. 2013. ‘Balance or Trade-off? Online Security Technologies and Fundamental Rights’. Philosophy & Technology 26 (4): 357–79.
Waldron, Jeremy. 2003. ‘Security and Liberty: The Image of Balance’. Journal of Political Philosophy 11 (2): 191–210. https://doi.org/10.1111/1467-9760.00174.
(1) Symantec, Internet Security Threat Report 2018, 5–6, available at: http://images.mktgassets.symantec.com/Web/Symantec/%7B3a70beb8-c55d-4516-98ed-1d0818a42661%7D_ISTR23_Main-FINAL-APR10.pdf?.
(2) Convention on Cybercrime 2001, Treaty ETS No.185 of the Council of Europe.
(3) Directive on Attacks against Information Systems 2013/40/EU, available at: http://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:32013L0040&from=EN.
(4) Directive (EU) 2016/1148 on Network and Information Security (NIS), available at: http://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:32016L1148&from=EN.
(5) In the Netherlands this has been implemented in Article 138ab of the Netherlands Criminal Code (NCC).
(6) In the Netherlands this could be based on Article 349 in relation to Article 167(2) of the Netherlands Code of Criminal Procedure (NCCP).
(7) In the Netherlands this could be based on Article 352 NCCP and Articles 40, 41(1), 42, 43(1) NCC.
(8) In the Netherlands this can be based on Article 9a NCC.
(9) In the Netherlands this has been implemented in Article 139c NCC.
(10) Implemented in the Netherlands in Article 350a NCC.
(11) Implemented in the Netherlands in Article 138b NCC, and in Article 161 sexies NCC if such interference hinders or obstructs data ‘in the general interest’ or causes a ‘disruption in a public telecommunications network or in the execution of a public telecommunication service’.
(12) ECtHR, 2 December 2008, Application no. 2872/02 (K.U. v. Finland).
(13) E.g. implemented in the Netherlands Code of Criminal Procedure in Articles 125i (search), 125j (extended search), 125k (an order to provide access, by way of an encryption key or password; not to be directed to the suspect), 125l (legal privilege), 125m (notification), 125n (duty to delete data), 125o (competence to block data).
(14) ECtHR, 25 February 1993, Application no. 10828/84 (Funke v. France), paragraph 44.
(15) (France v. Turkey) (1927) PCIJ, Ser. A, No. 10.
(16) ECtHR, 29 June 2006, Application no. 54934/00 (Weber & Saravia v. Germany).
(17) CJEU, 2 October 2018, C‑207/16 (Ministerio Fiscal v. Juzgado de Instrucción No. 3 de Tarragona).