Highlight 50/2025: Deepfakes and Human Rights: Why the EU AI Act is Becoming the Global Standard for Ethical AI Regulation?
Karmelita Deonarine, 23 December 2025

The use of artificial intelligence (AI) to generate synthetic content involving people’s likeness, commonly known as deepfakes, has become a major source of public concern and human rights debate. Deepfakes—derived from “deep learning” and “fake”—are hyper-realistic videos, images, or audio recordings that depict individuals saying or doing things they never did. Built on Generative Adversarial Networks (GANs), a deep learning architecture in which two neural networks compete to produce increasingly realistic outputs, these tools can now replicate human faces, voices, and expressions with striking accuracy.
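The adversarial training idea behind GANs can be illustrated with a deliberately minimal sketch. This is not how production deepfake models are built (those use deep convolutional or diffusion architectures); here both "networks" are single-parameter linear models on a toy one-dimensional data distribution, and all names are illustrative. The generator learns to produce samples the discriminator classifies as real, while the discriminator learns to tell them apart:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy "real" data: samples clustered around 4.0
def real_sample():
    return 4.0 + random.gauss(0, 0.1)

# Generator: maps noise z to a sample, g(z) = wg*z + bg
wg, bg = random.gauss(0, 0.1), 0.0
# Discriminator: d(x) = sigmoid(wd*x + bd), probability that x is real
wd, bd = random.gauss(0, 0.1), 0.0

lr = 0.01
for step in range(2000):
    z = random.gauss(0, 1)
    x_real, x_fake = real_sample(), wg * z + bg

    # Discriminator step: ascend log d(real) + log(1 - d(fake))
    d_real = sigmoid(wd * x_real + bd)
    d_fake = sigmoid(wd * x_fake + bd)
    wd += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    bd += lr * ((1 - d_real) - d_fake)

    # Generator step: ascend log d(fake), back-propagating
    # through the discriminator into the generator's parameters
    x_fake = wg * z + bg
    d_fake = sigmoid(wd * x_fake + bd)
    wg += lr * (1 - d_fake) * wd * z
    bg += lr * (1 - d_fake) * wd
```

The same competitive loop, scaled up to deep networks trained on millions of face images, is what lets GAN-based deepfake tools produce photorealistic output.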
While originally developed for creative purposes, deepfake technologies are increasingly weaponized for harmful ends. They are used to produce non-consensual sexual imagery, perpetrate fraud and identity theft, conduct marketing scams, and drive political disinformation campaigns. Such misuse raises questions about human rights protections and the obligations of states and private actors in the digital age.
Deepfakes implicate several core international human rights. The right to privacy and protection of personal data, enshrined in Article 17 of the ICCPR and Articles 7 and 8 of the EU Charter of Fundamental Rights, is violated when biometric or voice data are used without consent. The right to human dignity and reputation, protected under Articles 17 and 19 of the ICCPR and Article 1 of the EU Charter, is threatened when AI-generated content falsely associates individuals with harmful or humiliating conduct.
Amid these challenges, the European Union (EU) has emerged as a global leader in regulating synthetic media. The EU Artificial Intelligence Act, adopted in 2024 and in force since August 2024, with most of its provisions applying from August 2026, offers the first comprehensive, rights-centred regulatory framework for AI systems.
At the heart of the Act is a risk-based approach.
• Unacceptable risk: AI systems that pose clear threats to human safety, livelihoods, or fundamental rights are prohibited, e.g. social scoring systems and the use of real-time remote biometric identification systems for public surveillance.
• High risk: AI systems that could significantly affect health, safety, or fundamental rights must undergo rigorous risk assessment, documentation, human oversight, and ongoing monitoring, e.g. systems for biometric identification and emotion recognition, critical infrastructure, and certain digital devices.
• Transparency risk: Generative AI outputs, including deepfakes, must be clearly and visibly labelled, and users must be informed when interacting with AI systems such as chatbots.
• Minimal or no risk: Low-impact AI systems, such as AI-enabled video games or spam filters, are not subject to additional obligations.
Disclosure of deepfakes is required under Article 50(2), supported by Recital 134, which encourages providers to implement technical solutions such as watermarking, metadata-based identifiers, and provenance tools to ensure synthetic content is labelled and detectable.
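To make the watermarking idea concrete, the sketch below hides a short provenance mark in the least-significant bits of raw image bytes. This is a toy illustration only: deployed systems rely on robust, imperceptible watermarks and standardised provenance metadata (e.g. the C2PA framework), and all function names here are hypothetical.

```python
# Minimal least-significant-bit (LSB) watermarking sketch.
# Each bit of the mark replaces the lowest bit of one carrier byte,
# leaving the visible content essentially unchanged.

MARK = b"AI-GENERATED"

def embed_watermark(pixels: bytes, mark: bytes = MARK) -> bytes:
    """Hide `mark` in the least-significant bits of a byte sequence."""
    bits = [(byte >> i) & 1 for byte in mark for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("carrier too small for watermark")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the lowest bit
    return bytes(out)

def extract_watermark(pixels: bytes, length: int = len(MARK)) -> bytes:
    """Recover `length` bytes hidden by embed_watermark."""
    mark = bytearray()
    for i in range(length):
        byte = 0
        for bit_index in range(8):
            byte = (byte << 1) | (pixels[i * 8 + bit_index] & 1)
        mark.append(byte)
    return bytes(mark)

# Usage: stamp a dummy 200-byte "image" and read the mark back
carrier = bytes(200)
stamped = embed_watermark(carrier)
recovered = extract_watermark(stamped)
```

Naive LSB marks are easy to strip, which is precisely why Recital 134 points providers toward more robust, state-of-the-art detection and provenance techniques rather than any single method.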
Penalties are substantial. Violations of prohibited AI practices can result in fines of up to €35 million or 7% of global annual turnover, whichever is higher, while other major infringements can incur penalties of up to €15 million or 3% of turnover.
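Under Article 99 the fixed amount and the turnover percentage operate on a "whichever is higher" basis, so the effective ceiling scales with company size. A one-line illustration (function name hypothetical):

```python
def max_fine(turnover_eur: float, cap_eur: float = 35_000_000,
             pct: float = 0.07) -> float:
    """Upper bound of a fine under the 'whichever is higher' rule."""
    return max(cap_eur, pct * turnover_eur)

# A firm with €1 billion turnover faces a ceiling of €70 million,
# while a smaller firm is still exposed to the €35 million floor.
large = max_fine(1_000_000_000)
small = max_fine(100_000_000)
```

For a large provider, the percentage-based ceiling can thus dwarf the fixed cap, which is what gives the Act real deterrent force against global platforms.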
Unlike fragmented regulatory frameworks that rely on existing laws on fraud, defamation, or data protection—or jurisdictions where regulation is constrained by broad interpretations of free expression, such as the United States—the EU model stands out for its harmonization, coherence, preventative focus, and explicit grounding in fundamental rights. This regulatory clarity is already exerting a "Brussels effect," shaping global expectations for ethical AI conduct. Brazil and Canada illustrate this dynamic clearly: their emerging AI legislation closely mirrors the EU AI Act's risk-based, fundamental-rights-centred framework. Japan, meanwhile, has pursued alignment with EU principles to maintain regulatory interoperability and market access.
Karmelita Deonarine, Highlight 50/2025: Deepfakes and Human Rights: Why the EU AI Act is Becoming the Global Standard for Ethical AI Regulation?, 23 December 2025, available at www.meig.ch
The views expressed in the MEIG Highlights are personal to the authors and neither reflect the positions of the MEIG Programme nor those of the University of Geneva.