
What is Data Laundering?

Personal Data Laundering is the process through which personal information that has been collected, processed, or derived (all "processing" in GDPR terminology) in breach of the applicable data protection rules is reintroduced into the data economy while masking its illegitimate origin.[1]

This concept mirrors money laundering in the financial sector, where the proceeds of criminal activity are disguised. However, while money laundering requires that the laundered assets be the proceeds of criminal offences, the GDPR provides no criminal sanctions for data processed in breach of its rules, only administrative ones. Some member states have nonetheless implemented criminal offences of varying degrees for infringements of data protection rules at the national level.[2] In Switzerland, for example, article 143 of the Swiss Criminal Code (unauthorised obtaining of data) covers the theft of any data. These sanctions and the conduct they cover are, however, not uniform, so it makes sense to examine the issue at the EU level.

If we look only at the GDPR, the infringements of the regulation that may give rise to a form of Personal Data Laundering would in particular encompass those covered by art. 83 para. 5 of the GDPR, namely infringements of:

1. Basic principles for processing, including conditions for consent (art. 5, 6, 7 and 9)

2. Right to be forgotten (art. 17)

3. Right to restriction of processing (art. 18)

4. Right to object (art. 21)

What these provisions have in common is that data is taken out of the data subject's sphere of influence without appropriate consent or legal justification.

The laundering act would then aggravate an infringement of data protection regulation as described above by magnifying the illegitimate access to personal data and by making it practically impossible for the data subject to enforce their rights. This is because new parties acquiring access to the data, in the mistaken belief that it was obtained legitimately, will, for example, not fulfil their information duties under art. 14 GDPR.

Once the data is laundered, i.e. presented in a way that gives it the appearance of having been lawfully obtained, it can be circulated and acquired by third parties seemingly lawfully under art. 14 GDPR.

It must be noted that in the data economy, personal data is rarely processed on a one-by-one scale, so that an instance of Personal Data Laundering would involve the data of many data subjects, and potentially a large amount of personal information for each of them.

That said, I can imagine various concrete scenarios where such acts are particularly harmful to the data subjects:

A. Personal data is sold to multiple commercial organisations that, unaware of the illegitimacy of the received data, contact the data subject against their will. This forces the data subject to deal with each contacting organisation in turn (remember spam before effective AI filters?).

B. Personal data is sold as training data for machine-learning algorithms, which provide a more granular level of understanding of, or information on, the population represented by the data subjects. Because of the scale of illegitimate data that may be involved, the data subjects concerned suffer greater unwanted exposure of personal information: the model and the insights it provides can be used by the developer, or shared or sold, to infer further information about them. In extreme cases, personal information about specific individuals might be maliciously extracted from the model, or the insights obtained might be so granular that individuals become almost completely transparent. It is easy to see that a more granular understanding of a population obtained by illegitimate means is undesirable.[3]

C. Personal data is sold to commercial organisations that further process it to obtain (and possibly sell) insights about people, including the data subjects. Similarly to the machine-learning example, the harm to specific data subjects is the enhanced insight into their lives that can be obtained through statistical deduction, facilitated by the availability of further (legitimately held) information.
In this case too, the information obtained might be distilled (i.e., combined with other available personal information) to the point where otherwise inaccessible personal information can be derived.

Under the current legal approach, anonymised data is no longer covered by the GDPR, so its protection rules do not apply.[4] However, distinguishing between personal and non-personal data is difficult and must take many different aspects into consideration. The market for personal data often involves one or more intermediaries, inside and outside the EU, who may have little incentive to follow, or little understanding of, the rules protecting individuals' data and the requirements of anonymisation, making instances of Personal Data Laundering easier.

Why now? (or not yet?)

Why is this relevant? Data protection is recognised in Europe as a fundamental right.[5] It derives from the right to privacy, which protects dignity, private life, autonomy, control over information about oneself, and the right to be let alone.[6] Despite some technical distinctions between data protection and the right to privacy,[7] it can be said that data protection is instrumental in ensuring the right to privacy.

More and more data is collected from data subjects and their environment, and more and more insight can be gained from it thanks to network effects, which allow, for example, the creation of "systems to provide recommendations, feedback, suggestions, and nudge in a personalized and adaptive way."[8] Even without any Personal Data Laundering, the insight that can be obtained from the collection and combination of data, together with the accumulated knowledge about human behaviour, is already very advanced.

This development is facilitated by ever-advancing technology, in particular artificial intelligence, which makes it possible to handle this mass of information and employ it in a personalised way. It is also what enables the creation of personalised services at an affordable price or even free of charge, the management of the infrastructure that supports our way of life, and much of the security we enjoy.


The development of data collection, data processing, and modelling capabilities reveals an increasing tension between progress and the current philosophy based on individual independence and privacy. A profound societal and cultural transformation is underway. In my opinion, there is currently no critical need to address Personal Data Laundering through legislation, not least because it is generally easy to collect abundant personal data in a compliant manner. The topic is nevertheless an important aspect to keep in mind when developing best practices.

[1] Compare a definition on Wikipedia

[2] For an overview, see

[3] Admittedly, the current general practice is to share personal information quite liberally.

[4] I.e., the data does not concern an identified or identifiable natural person as defined in article 4(1) GDPR, which determines the scope of the regulation.

[5] Article 8(1) of the Charter of Fundamental Rights of the European Union and Article 16(1) of the Treaty on the Functioning of the European Union provide that everyone has the right to the protection of personal data concerning him or her.


[7] See, among others, "The distinction between privacy and data protection in the jurisprudence of the CJEU and the ECtHR", International Data Privacy Law, 2013, Vol. 3, No. 4.

[8] Boratto, L., Vargiu, E. Data-driven user behavioral modeling: from real-world behavior to knowledge, algorithms, and systems. J Intell Inf Syst 54, 1–4 (2020).