Angelica Fernandez, University of Luxembourg

Since 2018, deep fake technology has been one of the areas in which artificial intelligence has evolved dramatically, and governments increasingly see deep fakes as an emerging threat. In particular, regulators are concerned by the development and application of this technology in two main areas: image-based sexual abuse and disinformation. Despite its increasing popularity, there are challenges in defining what deep fakes are and what ought to be regulated when it comes to the deep fake phenomenon. For this reason, it is surprising that the EU Artificial Intelligence Act proposal (hereafter: AI Act) refers to deep fakes and includes them within its scope in a first attempt to regulate the phenomenon at the EU level. There are concerns about the proposed approach, and the enforcement of a new transparency obligation is one of them.

A new transparency obligation for deep fakes

Deep fake systems are subject to specific transparency obligations under Article 52(3) of the proposed AI Act. Under the AI Act's risk-based approach, deep fakes are considered by the European Commission to be limited-risk AI applications. This provision aims to protect natural persons from the risks of impersonation or deception when an AI system generates or manipulates image, audio, or video content that appreciably resembles existing persons, places, or events and would falsely appear to a user of the system to be authentic or truthful.

Against the background of divergent interpretations of what constitutes an AI system and, in particular, of what deep fakes are, this attempt at regulation is surprising. At first glance, there seems to be a consensus among scholars, companies working with deep fakes, and media outlets on two elements that define deep fakes: (i) the use of AI-based technology and (ii) the intent to deceive. However, there are practical challenges to this seemingly settled definition, particularly when drawing boundaries between deep fakes and lower-grade audio-visual manipulation (i.e., cheap fakes). A patchwork of applicable provisions, ranging from privacy law to copyright, is currently used by lawyers trying to litigate cases involving deep fake technology, without much success. While many factors contribute to this situation, the lack of clarity on what constitutes a deep fake is one of them. Getting the scope of the definition right is essential to appropriately address the distinct harm profile of deep fake technology, specifically concerning image-based sexual abuse and disinformation.

Overcoming these challenges is particularly important because, first, the use of such technologies has a gendered dimension: it disproportionately affects women, who are the targets of approximately 90% of deep fakes in the form of non-consensual fake pornography. Equally important, taking account of these practical challenges in determining the scope will enhance the enforceability of this provision.