Deepfakes, Artificial Intelligence, and the Reconfiguration of Privacy: Regulatory Lessons from the Collien Fernandes Case
- Dr Nina Mohadjer
This piece is written by Dr Nina Mohadjer, a legal technology expert with over 17 years of cross-border experience, recognised for her leadership in multilingual document review, e-discovery, and global diversity-focused thought leadership.
To read the piece with all formatted citations, please refer to this document: https://docs.google.com/document/d/1vRQqwjjx3_1nzpDTGPhmidl3ceGxEmNx/edit?usp=sharing&ouid=111700836984990015084&rtpof=true&sd=true

1. Introduction
Artificial intelligence has fundamentally altered the architecture of information production. Among its most disruptive manifestations is the emergence of deepfakes—synthetic media capable of replicating human likeness with a degree of realism that challenges traditional evidentiary assumptions. While such technologies hold significant innovative potential, their misuse has exposed profound weaknesses in legal systems structured around analogue conceptions of truth, identity, and privacy.
The recent German case involving Collien Fernandes has crystallised these concerns. Allegations that AI-generated pornographic content and fabricated online identities were created and disseminated without her consent have triggered both public outrage and legislative momentum. More importantly, the case reveals a structural shift in the nature of privacy harm: the law is no longer confronted solely with the disclosure of private information, but with its synthetic reconstruction.
This article argues that deepfakes necessitate a doctrinal and regulatory recalibration. Drawing on recent developments in German law, it demonstrates that existing frameworks are ill-equipped to address AI-enabled harms and that responsibility must be redistributed across the technological ecosystem.
2. The Synthetic Transformation of Privacy
Deepfakes represent a departure from traditional privacy violations. Historically, legal protections have focused on the unauthorised dissemination of truthful information. By contrast, deepfakes fabricate false yet credible representations, thereby transforming the nature of harm.
This transformation is particularly significant when viewed through the lens of informational self-determination, as articulated by the German Federal Constitutional Court. The principle presupposes that individuals retain control over the use and disclosure of their personal data. Deepfakes undermine this premise by extracting biometric identifiers—most notably facial features—and reconstituting them in entirely fictitious contexts.
Recent German legal scholarship and parliamentary analysis confirm that this form of synthetic manipulation creates a distinct category of harm. A 2026 report prepared for the Bundestag emphasises that deepfakes “falsely give the impression that certain events have taken place” and can be used to destroy reputations or facilitate identity theft. The harm is therefore not merely informational but ontological: it affects the very construction of personal identity in the digital sphere.
2.1 Informational to Representational Self-Determination
Deepfakes expose a limit in classical privacy doctrine. The decisive injury is no longer exhausted by the disclosure of true but private facts; rather, it lies in the unauthorised conversion of identifiable features into synthetic representations that can circulate as persuasive evidence of conduct. In that sense, the relevant protected interest is better understood, at least analytically, as representational self-determination: the individual’s authority over whether her likeness, voice, and embodied identity may be re-authored into apparently authentic media. This reading is consistent with contemporary Union law. The GDPR treats facial images used for unique identification as biometric data, while the AI Act defines a ‘deep fake’ by reference to its capacity to resemble existing persons or events and to appear authentic or truthful. Read together, these instruments suggest that deepfake harm is not merely informational. It is representational, because it concerns the legal and social consequences of fabricated identity; and it is evidentiary, because synthetic media can function as false proof in digital environments.
3. The Collien Fernandes Case as a Catalyst for Reform
The allegations in the Fernandes case—namely the creation of AI-generated pornographic material and the use of fake online profiles—have been widely interpreted as a paradigmatic instance of AI-enabled abuse. The case is particularly significant because it exposes a core deficiency in German criminal law: the absence of a clear prohibition on the creation of harmful deepfake content.
At present, German law primarily addresses downstream harms such as distribution. This creates a conceptual gap, as the injury to personality rights occurs at the moment of synthetic fabrication. The inadequacy of this framework has been acknowledged at the highest political levels. In response to the Fernandes case, the German Ministry of Justice announced plans to criminalise the production of pornographic deepfakes, signalling a shift toward recognising the act of creation as legally significant.
The case has also intensified broader debates about digital violence, particularly against women. Public demonstrations and political discourse have framed deepfake pornography as a form of gender-based harm, further reinforcing the need for legislative intervention.
4. German Criminal Law: Between Adaptation and Reform
4.1 Existing Legal Framework
In the absence of specific legislation, deepfake-related conduct is currently addressed through a patchwork of criminal provisions. These include defamation (§§ 186–187 StGB), coercion (§ 240 StGB), and violations of intimate privacy (§ 201a StGB). While these provisions may apply in certain circumstances, they are not designed to capture the specificity of AI-generated falsification.
Civil law remedies, grounded in the Allgemeines Persönlichkeitsrecht, offer broader protection. Victims may seek injunctions, damages, and removal of content. However, enforcement remains difficult, particularly where perpetrators act anonymously or operate across jurisdictions.
4.2 Emerging Legislative Developments
Recognising these limitations, German lawmakers have initiated significant reforms. In 2024, the Bundesrat introduced a draft bill proposing the insertion of a new § 201b into the Criminal Code. This provision would criminalise the production and dissemination of AI-generated or digitally manipulated media that falsely appear authentic and violate personality rights.
The proposed offence—often described as “violation of personality rights by digital falsification”—represents a doctrinal innovation. It shifts the focus from the content of the communication to the method of its creation, thereby acknowledging the unique harms posed by synthetic media.
Further developments in 2025 and 2026 indicate growing political consensus around the need for reform. Legislative initiatives aim not only to criminalise deepfake pornography but also to strengthen victim rights and improve the identification of perpetrators. Notably, recent proposals envisage penalties including imprisonment and enhanced investigative powers, reflecting the seriousness with which these offences are increasingly regarded.
Despite this progress, it remains the case that, as of early 2026, Germany lacks a fully enacted, comprehensive legal framework specifically targeting deepfakes. Parliamentary analyses continue to highlight enforcement difficulties and doctrinal gaps, particularly in relation to sexualised deepfakes involving adults.
The German difficulty, however, is not simply the absence of legal material, but the absence of a criminal prohibition calibrated to synthetic falsification as such. The Bundestag’s 2026 analysis states that Germany still lacks explicit deepfake regulation, that adult non-consensual sexualised deepfakes are only partially covered by existing pornography law, and that enforcement is especially difficult where content is produced or published abroad or where evidentiary transparency is weak. This reveals a structural mismatch. Personality injury begins at fabrication, whereas much of the current criminal framework intervenes only at the stages of distribution, coercion, stalking, insult, fraud, or other downstream manifestations. German law therefore captures many consequences of deepfakes, but remains less precise in addressing the underlying act of synthetic appropriation that makes those consequences possible.
5. European Regulatory Context
German developments must be situated within a broader European regulatory landscape. The EU has begun to address deepfake risks through a combination of data protection, platform regulation, and AI-specific legislation.
The General Data Protection Regulation (GDPR) is particularly relevant insofar as deepfake generation involves the processing of biometric data. However, its effectiveness is limited by enforcement challenges and its focus on data processing rather than synthetic representation.
More directly applicable are the Digital Services Act (DSA) and the Artificial Intelligence Act (AI Act). Article 35 DSA imposes obligations on very large online platforms to address risks associated with manipulated content, while the AI Act introduces transparency requirements for AI-generated media. In particular, Article 50 AI Act requires that deepfakes be clearly disclosed as such, reflecting a regulatory shift toward mitigating deception.
Nevertheless, these instruments remain incomplete solutions. They focus primarily on transparency and platform responsibility, leaving unresolved the question of how to address the underlying act of synthetic identity manipulation.
The European framework is therefore layered but fragmented. Article 50(4) AI Act imposes a disclosure obligation for deepfake content, and the Commission’s current code-of-practice process expressly links that obligation to risks of deception and manipulation in the information ecosystem. The DSA, by contrast, addresses systemic risks of dissemination, requiring very large online platforms and search engines to assess and mitigate risks linked to algorithmic design, recommender systems, content moderation, intentional manipulation of the service, gender-based violence, and rapid amplification of illegal content. Yet neither regime directly answers the anterior normative question whether the non-consensual synthetic sexualisation of an identifiable person should be unlawful as a primary wrong. That question is taken up more directly by Directive (EU) 2024/1385, which requires Member States to criminalise the production, manipulation or alteration, followed by public dissemination via ICT, of material making it appear as though a person is engaged in sexually explicit activity without consent where serious harm is likely. Even here, however, Union law stops short of criminalising mere private creation in the absence of subsequent public accessibility.
6. Responsibility in the AI Ecosystem
The Fernandes case underscores the inadequacy of a purely individualistic model of liability. In an environment shaped by complex technological infrastructures, responsibility must be distributed across multiple actors.
Users who create or disseminate harmful deepfakes remain central to any accountability framework. Their conduct constitutes the immediate cause of harm, and existing principles of criminal liability continue to apply. However, the scalability and anonymity afforded by AI tools complicate enforcement and necessitate supplementary mechanisms.
Developers, by contrast, occupy a structurally upstream position. The design of AI systems determines the extent to which they can be misused. This gives rise to a form of anticipatory responsibility, requiring the implementation of safeguards such as content filters, watermarking, and usage restrictions. European regulatory developments increasingly reflect this perspective, imposing obligations on developers to assess and mitigate risks.
Platforms represent a third locus of responsibility. As intermediaries, they facilitate the dissemination of deepfake content and are therefore critical to its containment. The DSA imposes due diligence obligations, including content moderation and transparency requirements. However, the effectiveness of these measures depends on technological capacity and institutional willingness to enforce them.
A more precise allocation of responsibility should distinguish between design-stage, deployment-stage, and circulation-stage duties.
At the design stage, providers can be expected to incorporate provenance tools, watermarking, misuse testing, access controls, and friction against obvious abuse.
At the deployment stage, the law increasingly imposes duties of disclosure and lawful processing, especially where synthetic content relies on biometric identifiers.
At the circulation stage, platforms bear obligations relating to notice, moderation, risk assessment, and mitigation.
This lifecycle model is analytically preferable to an undifferentiated account of responsibility because it aligns legal obligations with the point in the synthetic-content chain at which a given actor is best placed to reduce harm. It also clarifies why a purely user-centred liability model is insufficient: by the time individual wrongdoing becomes visible, the conditions of scale, speed, anonymity and amplification may already have been engineered into the system.
7. Systemic Implications
Beyond individual cases, deepfakes pose systemic risks to legal and social institutions. The Bundestag has noted that the proliferation of synthetic media may erode trust in digital information to the point where authenticity can no longer be assumed. This “epistemic destabilisation” has significant implications for evidentiary standards and judicial processes.
Moreover, the disproportionate targeting of women in deepfake pornography highlights the intersection of technology and structural inequality. Empirical studies indicate that the vast majority of such content involves female subjects, underscoring the gendered nature of AI-enabled harm.
8. Conclusion
Deepfake technology represents a paradigmatic challenge for contemporary legal systems. By enabling the fabrication of realistic yet false representations, it disrupts traditional conceptions of privacy, identity, and truth.
The Collien Fernandes case has brought these issues into sharp focus, exposing the inadequacy of existing legal frameworks and catalysing legislative reform in Germany. Recent developments, including the proposed introduction of § 201b StGB, signal an emerging recognition that synthetic identity manipulation requires distinct legal treatment.
However, doctrinal innovation alone is insufficient. Addressing the risks posed by deepfakes requires a comprehensive approach that integrates criminal law, data protection, platform regulation, and AI governance. It also necessitates a reallocation of responsibility across the technological ecosystem.
Only through such a multifaceted response can the law preserve its protective function in an era increasingly defined by artificial intelligence.


