With its judgment in SRB, the ECJ has created a paradox in EU data protection law: data may be both personal and non-personal at the same time. This may be reminiscent of the thought experiment widely known as Schrödinger’s cat. However, unlike the hypothetical cat, the data in the pseudonymisation box are both personal and non-personal depending on who is looking at the box. For the controller, i.e. the entity determining the purposes and means of the processing, who holds the additional information that allows them to open the box, the data are personal. The same is true for anyone who is reasonably likely to obtain this information. For those who do not have this additional information, the ECJ argues, the data are non-personal.
The GDPR only applies to personal data, so this case law has major implications for the protection of such data and the applicable regulation. In this post I take the ECJ’s case law on the definition of personal data, pseudonymisation and the scope of data protection law as a starting point to consider the wider implications of this line of jurisprudence. I also look ahead to the Commission’s recent Digital Omnibus proposal to change the definition of personal data in the GDPR, purportedly in response to this case law. I find that the relative approach to the definition of personal data adopted by the Court leads to a multitude of problems in the course of processing operations, and that the changes proposed by the Commission would further aggravate the situation.
1. Pseudonymisation and the SRB judgment
Pseudonymisation of data is a technical and organisational measure, which means that ‘the personal data can no longer be attributed to a specific data subject without the use of additional information provided that such additional information is kept separately and is subject to [further] technical and organisational measures that ensure that the data are not attributed to an identified or identifiable natural person’ according to Article 4(5) GDPR. Meanwhile, the GDPR itself, according to Article 2(1), only applies to personal data, which are defined in turn as ‘any information relating to an identified or identifiable natural person’ in Article 4(1) GDPR. If pseudonymisation thus aims to hinder the attribution of data to persons, this technical and organisational measure could also be seen as affecting the application of the GDPR under Article 2(1).
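To make the mechanics concrete, the Article 4(5) scheme can be sketched in a few lines of Python. This is a minimal illustration, not an implementation endorsed by the judgment or the GDPR: the keyed-hash technique, the field names and the sample record are all assumptions. The point is the separation it demonstrates: the recipient sees only pseudonyms, while the key, the ‘additional information’, stays with the controller under its own safeguards.

```python
import hmac
import hashlib
import secrets

def pseudonymise(records, key):
    """Replace the direct identifier ('name') with a keyed-hash pseudonym.

    Hypothetical sketch: without the key, the pseudonym cannot be
    reversed or recomputed, so the recipient holds no direct identifier.
    """
    out = []
    for rec in records:
        pseudonym = hmac.new(key, rec["name"].encode(), hashlib.sha256).hexdigest()[:16]
        out.append({"id": pseudonym, "comment": rec["comment"]})
    return out

# The key is the 'additional information' of Article 4(5) GDPR; it must be
# kept separately by the controller, subject to its own measures.
key = secrets.token_bytes(32)

records = [{"name": "Alice", "comment": "objection to the valuation"}]
shared = pseudonymise(records, key)  # what a recipient such as a processor would see

print("name" in shared[0])  # False: no direct identifier is passed on
```

Whether such data are ‘personal’ for the recipient is, on the Court’s relative approach discussed below, precisely the contested question: the recipient cannot open the box, but the key still exists on the controller’s side.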
This question was one of the issues in the ECJ’s judgment in SRB, where the eponymous Single Resolution Board had pseudonymised comments that were submitted to it before passing them on to Deloitte, which was tasked with assessing them and thereafter, presumably, returned them to the SRB, which continued its processing operation after de-pseudonymising the data (paras 21-28). The Court found that pseudonymisation was not part of the definition of personal data, but a technical and organisational measure (paras 71-72). Yet, where the pseudonymisation of personal data prevented the identification of a data subject, this might, if implemented in accordance with the legal requirements, ‘have an impact on whether or not those data are personal within the meaning of [Art. 4(1) GDPR]’ (paras 74-75). In the judgment the ECJ referred to the applicable EU Data Protection Regulation (EU) 2018/1725; for simplicity, I refer to the identical provisions of the GDPR.
Referring back to its judgments in OC v Commission (para. 51) and, ultimately, Breyer (para. 46), the Court stated that for risk mitigation through pseudonymisation it was sufficient if the risk of identification appeared insignificant, for instance due to a legal prohibition of disproportionate effort (SRB, para. 82). Having thus set the scene, the Court found that the existence of additional information enabling identification did not imply that pseudonymised data were personal data for every person in all cases (para. 82). Relying on its judgments in Breyer (paras 44, 47-48) and IAB Europe (paras 43 and 48), it held that ‘data that are inherently impersonal […] were nevertheless connected to an identifiable person, since the controller had legal means of obtaining additional information from another person making it possible to identify the data subject’ (SRB, para. 83). While the Court did not further elucidate the somewhat opaque meaning of ‘inherently impersonal data’, it continued that in such circumstances the fact that the information enabling identification was held by third parties did not actually prevent the controller from identifying the data subjects.
The Court further invoked its judgment in Gesamtverband Autoteile-Handel (paras 46 and 49), where it had held that impersonal data may become personal when the controller puts them at the disposal of persons who have means reasonably likely to be used to identify data subjects (SRB, para. 84). Thus, the ECJ concluded that whether data were personal had to be determined in each context. For instance, when pseudonymised data were transferred to a third party and it could not be ruled out that the recipient had means reasonably likely to be used to identify data subjects, for instance by cross-checking with other data available to them, the pseudonymised data had to be considered personal data (para. 85). However, as the Court had previously stressed that the recipient in the case at hand, Deloitte, had at no time had access to the additional information for identification (para. 28), it concluded that pseudonymised data did not constitute personal data in all cases for every person, when the pseudonymisation effectively prevented them from identifying the data subject (para. 87).
2. The relative definition and the level of protection
The ECJ thus further developed the relative definition of personal data that it has been pursuing since Breyer in 2016. Taking SRB and Breyer together, we can see two major issues that, combined, substantially lower protections for data subjects when dealing with pseudonymised data: those handling pseudonymised data may be exempt from the GDPR altogether, and a mere legal prohibition, rather than other, more robust technical and organisational measures, may be sufficient to trigger this exemption.
The first major issue I want to discuss is that under the current case law, the GDPR may not apply to some pseudonymised data. Under the relative definition, whether data are regarded as personal or non-personal depends on the specific situation of the person processing them. Where a controller pseudonymises personal data, a co-controller, processor or other third party may claim that they do not have access to the additional information, that the data are thus not personal for them, and that they operate outside the scope of the GDPR.
The relative approach thus gives controllers considerable leeway, especially considering that there are many different forms of pseudonymisation that differ widely in terms of effectiveness of protection for the personal data at issue. Importantly, elevating the perspective of controllers, processors or other third parties to a decisive criterion is a choice and it is not without alternative. Indeed, the perspective of controllers, processors or other third parties may not be the sole relevant perspective. Data protection, according to Article 1(2) GDPR aims to protect the fundamental rights of individuals with regard to the processing of personal data, i.e. it is supposed to protect individuals from the inherent risks of data processing (in detail Bieker, pp. 195-202). To counter such risks, controllers have to take technical and organisational measures according to Article 24 GDPR. As the Court rightly points out, pseudonymisation is one such measure.
Yet, the Court’s line of reasoning does not account for the practical risks of data processing. Under Article 24(1) GDPR, the technical and organisational measures have to be appropriate to the risks to the rights of individuals. Surely, if the additional information held by the controller is not disclosed to the processor – as was the case in SRB from all we know – there is, in the end, no damage to the individual. However, the mere existence of the additional information creates a risk that it may be disclosed to the processor or third parties. And this warrants protection. So rather than exclude this operation from the scope, it has to be ensured that the risks are appropriately mitigated.
Of course, the obligation to mitigate risks is not unlimited. Article 32 GDPR scales the obligations of controllers and processors according to the risk of the processing. Pseudonymisation may reduce the risk to individuals. Ensuring that the additional information is stored safely with the controller further reduces the risk. And yet, it does not completely eliminate the risk. As we see in practice, data breaches happen. They occur at a very large scale, just taking into consideration those that are reported in the media. Thus, it does not seem warranted to release the co-controller or processor from their limited obligation to implement appropriate technical and organisational measures to protect the pseudonymised data they receive.
The second major problem when applying the current case law is that the Court finds a legal prohibition, i.e. a provision that bans a certain practice, to be sufficient to protect against the risk of de-pseudonymisation. Technical and organisational measures as required inter alia by Articles 24 and 32 GDPR can take various forms and may be of varying usefulness for a specific risk. Pseudonymisation itself is one such measure. However, as it comes with its own risks, especially de-pseudonymisation, it requires further technical and organisational measures to mitigate this follow-up risk. In SRB (para. 82) the Court reiterates its finding (originally from Breyer, para. 46) that a legal prohibition is an appropriate measure to prevent risks of re-identification of pseudonymous data. Following this argument, it would be sufficient if the legislator banned a given third party from accessing information, for instance through a legal provision that protected the confidentiality of the data in question. This argument originated in Advocate General Campos Sánchez-Bordona’s 2016 Opinion (para. 68) in that case. The DPD, the applicable legislation at the time, did not contain any further details on how to account for pseudonymised data. However, in recital 26 GDPR, the legislator found that to ascertain whether means were reasonably likely to be used to identify individuals, the available technology at the time and the technological developments should be taken into account. The recital does not mention legal measures.
So while the ECJ relies on legal guarantees, the legislator only considered technical ways to reverse pseudonymisation. Considering the risk for individuals, there is a substantial difference between a situation in which it is technically feasible to break the pseudonymisation and one in which a controller, processor or other third party would have to violate a prohibition.
3. A lower standard and other consequences
Taking the ECJ’s standards together, co-controllers and even processors may be released from their limited obligations under the GDPR by a simple legal rule stating that they and third parties are not allowed to reverse the pseudonymisation of data, even when this may be a simple technical process. This does not take into account the above-mentioned data breaches and the reality of current data practices. There are many ways data flow between controllers, processors and third parties in rather complex processing operations that face issues of scope and legal compliance (in detail Cobbe, pp. 17-30 and Balayn/Gürses). The Court’s jurisprudence has not accounted for this and thus, with its relative approach, exposes individuals to considerable risks.
On a practical level, the Court’s case law also creates issues for controllers, as they have to conclude a co-controller or processing agreement when they transfer personal data to another entity for processing. However, if those data are not personal to the recipient, the GDPR would not apply to them and they would not need to conclude such an agreement. As the two parties will have to conclude some form of contract, it may be in the best interest of controllers to ensure that this contract includes the provisions of a co-controller or processing agreement, if not in name then in substance. This, in turn, would mean that co-controllers and processors are free of their GDPR obligations only formally, as the contract would have to set equivalent standards. Further, they would have to constantly evaluate whether the data have become personal.
In cases of data breaches or data subject complaints, the data protection authorities would first have to establish jurisdiction (also see noyb, p. 5) and, where the data have not become personal for the processor, would not be able to intervene on data subjects’ behalf. This would only leave individuals with civil actions against processors, which would require costly court proceedings.
4. The Commission’s Digital Omnibus proposal
Interestingly, the Commission, in its purported attempt to introduce simplifications to the GDPR, chose this particular line of jurisprudence to include in its Digital Omnibus proposal. Article 3(1)(a) of the proposal amends the definition of personal data in Article 4(1) GDPR by three sentences. The first two sentences resemble para. 86 of SRB, stating that ‘[i]nformation relating to a natural person is not necessarily personal data for every other person or entity, merely because another entity can identify that natural person […] where that entity cannot identify the natural person to whom the information relates, taking into account the means reasonably likely to be used by that entity.’ The third proposed sentence adds that ‘[s]uch information does not become personal for that entity merely because a potential subsequent recipient has means reasonably likely to be used to identify the natural person to whom the information relates.’ Thus, the Commission not only failed to include the Court’s clarification in SRB (para. 84) that ‘impersonal data may become “personal” in nature where the controller puts them at the disposal of other persons who have means reasonably likely to enable data subjects to be identified’, referring to its previous judgment in Gesamtverband Autoteile-Handel (paras 46 and 49), but even contradicts this case law (also see Korff, p. 6) with the third sentence of Article 3(1)(a) of the proposal.
According to the accompanying Staff Working Document (p. 38), the proposal implements recent case law of the ECJ and brings clarity to this key notion, thus increasing legal certainty. While the Commission is pushing a questionable narrative that these are only targeted amendments (in detail Alemanno) that clarify the GDPR and keep its core intact (also see Ruschemeier), the proposed change to Article 4(1) GDPR would considerably restrict its application. Given the substantive jurisprudence of the Court in other judgments, such as Gesamtverband Autoteile-Handel, Breyer or OC v Commission, it is unclear why the Commission would implement only what amounts to one paragraph of a judgment without other relevant context from the very same and further judgments. While the legislator is free to reform rules even after the Court has interpreted them, the inherent contradiction with the Court’s other case law betrays the Commission’s narrative. Passing Article 3(1)(a) of the Digital Omnibus would be a disservice to the consistency of the data protection framework (see also Stalla-Bourdillon, p. 10) and to legal certainty, as it would require further jurisprudence to clarify its scope (also see EDRi). At the same time, from the perspective of individuals affected by data processing, the Court is unlikely to fully resolve these issues, as it has itself caused considerable legal uncertainty with its case law (in detail Lodie/Lauradoux, pp. 11-13).
5. Conclusion and outlook
EU data protection law has already passed a critical juncture – and taken the wrong path. The ECJ’s relative approach to the definition of personal data lowers the standard of protection, complicates enforcement by data protection authorities and will also impose a considerable workload on the Court itself. Data subjects, data protection authorities and, ultimately, courts will have to establish who has or had reasonably likely access to the additional information to even know whether the GDPR applies. At the same time, the Commission’s proposal would erode the definition of personal data even further, cause even more legal uncertainty and, in many instances, leave data subjects without redress outside of court proceedings.
Meanwhile, in practice, we are already seeing Big Tech companies co-opting privacy enhancing technologies to obfuscate and expand their processing operations. Giving them further avenues to reduce protection, for instance by adopting the Commission’s proposal, will backfire and cause harm to individuals. The ECJ could adapt its jurisprudence when facing such practices. There would still be potential for better differentiating between anonymous data, which are outside the scope of the GDPR, and pseudonymous data, which fall within its scope. As it currently stands, the Court’s case law is not well-equipped to deal with such future developments. Perhaps in data protection, as in physics, the simultaneous state of being and not being is not very stable.
**Dr. Felix Bieker is a senior researcher at ULD, the data protection authority of Schleswig-Holstein, and works on platforms, infrastructures, and critical approaches to EU data law.**
This work was funded by the Federal Ministry of Research, Technology and Space within the project ‘Neue Datenschutzgovernance – Technik, Regulierung und Transformation’ (DatenTRAFO), https://plattform-privatheit.de.