EXAMINING THE INTERSECTION OF DEEPFAKES AND COPYRIGHT LAW

By Ria Dennis
I Year B. A. LL. B (Hons.), National University of Advanced Legal Studies, India. Email: riadennis2141@nuals.ac.in.

ABSTRACT

Deepfake technology, a product of artificial intelligence, represents a transformative yet controversial innovation with profound implications for intellectual property rights (IPR). This paper explores the intersection of deepfakes and copyright law, focusing on the challenges this technology poses to the copyright regime in India. Using a comparative approach, it examines international regulatory frameworks and proposes legislative and policy recommendations to address the inadequacies in India’s current legal system. The discussion highlights the need for a balanced approach that fosters innovation while safeguarding intellectual property rights.

Keywords: Deepfake, Copyright, International Regulatory Approaches, Technology, Liability.

I. INTRODUCTION

Deepfake technology, a form of artificial intelligence, has garnered global attention due to its dual potential for innovation and misuse. It relies on advanced algorithms that produce hyper-realistic videos using a person’s face, voice or likeness through machine learning techniques such as generative adversarial networks (GANs). The resulting fake videos or audio recordings are manipulated to make it appear as if the person is saying or doing certain things.[1] The integration of deepfake technology with Intellectual Property Rights (IPR) raises pressing questions about copyright and authorship in the digital era. While deepfake technology offers opportunities in fields like entertainment and education[2], its misuse for creating deceptive or harmful content has significant implications for intellectual property laws. India’s legal framework for IPR, centred on the Copyright Act, 1957, lacks specific provisions to address the complexities introduced by deepfake technology. This paper investigates these challenges, evaluates the adequacy of existing legal provisions, and proposes reforms to ensure a robust legal framework that addresses the multifaceted implications of deepfake technology. It also incorporates a comparative analysis of regulatory measures in jurisdictions such as the United States, the UK and the EU, which have introduced specific deepfake legislation.
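The adversarial dynamic behind GANs, in which a generator and a discriminator are trained against each other, can be illustrated with a deliberately simplified numerical sketch. This is not how production deepfake systems are built (those use deep neural networks over images and audio); it is a one-parameter toy in which the "generator" learns a single number so that its samples mimic "real" data, solely to make the adversarial mechanism concrete.

```python
import math
import random

random.seed(7)

def sigmoid(t):
    t = max(-60.0, min(60.0, t))  # clamp for numerical safety
    return 1.0 / (1.0 + math.exp(-t))

# "Real" data are samples from N(4, 1); the generator produces
# mu + noise and learns mu so its output mimics the real data.
REAL_MEAN = 4.0
mu = 0.0            # generator parameter
w, b = 0.0, 0.0     # discriminator parameters (a logistic classifier)
lr = 0.05

for step in range(3000):
    x_real = random.gauss(REAL_MEAN, 1.0)
    x_fake = mu + random.gauss(0.0, 1.0)

    # Discriminator step: push D(x_real) toward 1 and D(x_fake) toward 0.
    s_real = sigmoid(w * x_real + b)
    s_fake = sigmoid(w * x_fake + b)
    w += lr * ((1 - s_real) * x_real - s_fake * x_fake)
    b += lr * ((1 - s_real) - s_fake)

    # Generator step: adjust mu so the discriminator labels fakes as real.
    x_fake = mu + random.gauss(0.0, 1.0)
    s_fake = sigmoid(w * x_fake + b)
    mu += lr * (1 - s_fake) * w

print(round(mu, 2))  # mu drifts toward the real mean of 4
```

The same tug-of-war, scaled up to millions of parameters and trained on footage of a real person, is what yields the hyper-realistic forgeries the law must now contend with.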

II. STATEMENT OF THE PROBLEM

The rapid evolution of deepfake technology challenges the foundational principles of IPR, such as originality and authorship. Deepfakes can infringe copyright by replicating the likeness or creative output of individuals without consent, often leading to legal ambiguities regarding ownership and liability.[3] Moreover, the anonymity of deepfake creators and the transnational nature of the internet exacerbate the difficulty of enforcing IPR.[4] This research investigates these issues, aiming to provide a roadmap for addressing the gaps in current legal provisions and safeguarding intellectual property.

III. BACKGROUND

Intellectual Property Rights encompass laws designed to protect creations of the mind, such as literary works, music, and visual art. Copyright law safeguards original works of authorship fixed in a tangible medium. However, deepfake technology disrupts this paradigm by generating content that blurs the lines between originality and replication.

The history of intellectual property law reflects its adaptation to technological advancements, from the printing press to digital media. The emergence of deepfake technology marks the latest challenge in this evolution. As studies have noted, early deepfake applications ranged from harmless entertainment to harmful uses like revenge pornography and political disinformation. The capacity to generate deepfakes will diffuse rapidly no matter what efforts are made to safeguard against it.[5]

IV. LITERATURE REVIEW

Madhura Thombre’s article highlights the socio-legal implications of deepfake technology, emphasizing the inadequacy of current laws in addressing privacy violations and identity theft.[6] Similarly, Lindsey Joost’s analysis underscores the limitations of traditional tort and copyright laws in regulating deepfake content, particularly when creators remain anonymous.[7] The primary debate centres on balancing innovation with regulation. Advocates for stricter laws argue that deepfakes undermine trust and violate IPR. Conversely, critics caution against overregulation, which may stifle creative uses of the technology. Existing studies primarily focus on the ethical and technical aspects of deepfakes, leaving significant gaps in the legal discourse, particularly in the Indian context. There is also very limited understanding of how existing IPR doctrines apply to AI-generated content.

V. OBJECTIVES

1. To analyse the implications of deepfake technology on copyright laws.

2. To evaluate the adequacy of existing legal frameworks in addressing deepfake-related challenges.

3. To propose legislative and policy recommendations for regulating deepfakes without stifling innovation.

VI. RESEARCH QUESTIONS

1. How do deepfakes challenge the principles of originality and authorship under copyright laws?

2. How can India’s copyright law adapt to address the challenges posed by deepfake technology to IPR?

VII. ANALYSIS

India’s Legal Framework Addressing Copyright Infringement Under Deepfakes

India’s Copyright Act of 1957 and the Information Technology Act of 2000 provide the foundational framework for addressing violations of intellectual property. However, these laws, developed in an era before the emergence of artificial intelligence, lack specific provisions for regulating deepfakes. As deepfakes become increasingly pervasive, their intersection with IPR has exposed significant gaps in existing laws, especially concerning authorship, attribution, and enforcement.

Under Indian copyright law, creating a deepfake by manipulating a copyrighted photograph or video of a person without authorization can constitute copyright infringement. The Copyright Act, 1957 (hereinafter referred to as ‘ICA’) serves as India’s primary legislation for protecting creative works, including original literary, dramatic, musical, and artistic works, as well as cinematograph films and sound recordings.[8] According to Section 17 of the ICA, the first ownership of a work belongs to the author or creator, meaning that the photographer holds the copyright in a photograph and the producer holds the copyright in a film. Therefore, a person who creates a deepfake by manipulating these works infringes the rights of the original copyright holders. Additionally, Section 14 grants exclusive rights to copyright owners, including the rights to reproduce, distribute, or create derivative works based on their creations.[9] A deepfake that uses or alters a copyrighted work therefore requires the permission of the original copyright holder.

However, the concept of originality under the ICA does not account for AI-generated content such as deepfakes. Courts in India have yet to establish a clear precedent on whether AI-generated works qualify for copyright protection or how to attribute authorship when AI algorithms are used to create such works. This leaves a significant gap in the legal framework, especially in relation to deepfakes created through AI technologies. Moreover, Section 52 of the ICA, which provides a defence of "fair dealing" in certain cases, does not protect deepfakes, particularly those created with malicious intent, such as for impersonation or spreading misinformation.[10] Sections 55 and 63 of the ICA impose civil and criminal liability, providing for damages, injunctive relief, imprisonment, and fines against infringers. These provisions offer adequate deterrence against deepfakes created for malicious purposes but give little guidance on deepfakes created for lawful purposes.[11]

Deepfakes also raise concerns about moral rights under Section 57 of the ICA, which grants authors the right to claim authorship and to object to any distortion or derogatory treatment of their work. If a deepfake replicates a person's creative output, such as a performance, speech, or image, without authorization, and misrepresents the author or damages their reputation, it violates their moral rights, particularly the right to integrity.[12] However, the protection offered by moral rights is limited to authors of copyrightable works, such as literary, artistic, or cinematographic creations. This limitation means that if a deepfake involves someone's likeness or image (elements that are not inherently copyrightable), the affected individual may not be able to claim protection under Section 57. This highlights a significant gap in legal safeguards for individuals whose identities or personas are exploited through deepfakes, underscoring the need for broader legal measures to address this issue effectively. Consequently, deepfakes created without consent from the copyright holder and with harmful intent are an infringement under Indian copyright law, but the current legal framework does not fully address the challenges posed by AI-generated content and the exploitation of personal likenesses.

Deepfakes, often spread via social media and other online platforms, can reach a vast audience, raising significant concerns about copyright infringement. The Information Technology Act, 2000, primarily governs electronic commerce and cybersecurity. While it provides mechanisms for addressing digital content misuse, it does not specifically address AI-generated content or deepfakes. It grants "safe harbour" protection to intermediaries, provided they act expeditiously upon receiving notice of unlawful content.[13] Intermediaries such as social media platforms are typically not held liable for user-generated content, but they must remove unlawful content when they receive actual knowledge or a court order directing them to do so. However, the absence of guidelines for identifying and categorizing deepfakes makes enforcement challenging.

In the case of Myspace Inc. v. Super Cassettes Industries Ltd., the Court expanded on this principle, holding that intermediaries must take down infringing content upon receiving a notification from private parties, even without a court order.[14] This sets a precedent for how intermediaries should respond to copyright violations without waiting for legal proceedings to conclude.

Additionally, the Information Technology [Intermediary Guidelines (Amendment) Rules], 2018, which are still under review and have not yet come into effect, introduce stricter requirements for intermediaries. These draft rules mandate that intermediaries proactively monitor and remove unlawful content, including deepfakes, within 24 hours of receiving a court order or a notification.[15] To comply, intermediaries would need to employ automated tools like algorithms to detect and take down infringing content. However, a significant challenge arises because current technology used to detect deepfakes has a relatively low accuracy rate of 65.18%, making it difficult for intermediaries to effectively identify and moderate deepfake content in real time.[16] As a result, intermediaries may face difficulties in complying with these new rules when it comes to identifying and removing deepfakes in a manner that aligns with copyright law, potentially leading to issues of false positives or content removal failures.
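To see why a detection accuracy of 65.18% is weak at platform scale, consider a back-of-the-envelope calculation. The volumes below are illustrative assumptions, not figures from the source; for simplicity the sketch treats 65.18% as both the true-positive and true-negative rate.

```python
# Illustrative arithmetic: the cost of ~65% detector accuracy at scale.
# Assumed inputs: a platform screening 1,000,000 uploads per day,
# of which 1% are actually deepfakes.
uploads = 1_000_000
deepfake_rate = 0.01
accuracy = 0.6518

deepfakes = uploads * deepfake_rate      # actual deepfakes uploaded
genuine = uploads - deepfakes            # genuine uploads

missed = deepfakes * (1 - accuracy)      # deepfakes that slip through
false_flags = genuine * (1 - accuracy)   # genuine content wrongly flagged

# Precision: of everything the detector flags, how much is truly a deepfake?
flagged = deepfakes * accuracy + false_flags
precision = deepfakes * accuracy / flagged

print(round(missed))       # deepfakes evading detection daily
print(round(false_flags))  # legitimate uploads wrongly flagged daily
print(round(precision, 3)) # only a small fraction of flags are correct
```

Under these assumed numbers, thousands of deepfakes evade detection each day while hundreds of thousands of legitimate uploads are wrongly flagged, which is precisely the false-positive and removal-failure problem described above.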

Emerging Legal Trends and Challenges Posed by Deepfakes to IPR

As AI technologies advance, India faces several challenges in adapting its legal framework to address deepfake-related copyright violations. Deepfakes challenge the interpretation of originality, a cornerstone of copyright law. Traditional works involve human creativity, while AI-generated deepfakes are the result of algorithms processing existing data. The absence of clear guidelines for attributing authorship to AI-generated works creates ambiguity in enforcing copyright laws.[17] For instance, if an AI creates a deepfake mimicking a famous artist’s style, determining the rightful owner of the work becomes contentious: it is difficult to say whether the credit belongs to the AI’s developer, the dataset’s creator, or the individual whose likeness is used.

The global nature of the internet further complicates enforcement efforts, as deepfake content often originates in jurisdictions with lax regulations, making cross-border enforcement difficult.[18] The anonymity of creators compounds this problem, and platforms hosting deepfake content often operate across jurisdictions, creating enforcement bottlenecks. Further, technological limitations arise because current tools for detecting deepfakes are not foolproof, making it difficult for courts and enforcement agencies to establish the authenticity of content.[19]

Comparative Analysis of International Regulatory Approaches

United States

The United States has adopted a proactive stance on deepfake regulation, focusing on transparency, accountability, and adaptability. A cornerstone of this approach is the proposed Deep Fakes Accountability Act, which would mandate the use of watermarks and disclaimers on AI-generated content.[20] Such measures are intended to ensure that platforms disclose whether content has been synthetically generated or manipulated, thereby safeguarding against misuse while preserving creative freedom. State-level initiatives complement federal efforts: California, for instance, has enacted laws prohibiting the distribution of certain deepfake content, such as manipulated videos intended to deceive voters during elections.[21] This sector-specific approach targets high-risk areas like electoral integrity and reputational harm, demonstrating the U.S. commitment to addressing specific threats posed by deepfake technology.

A unique feature of the U.S. regulatory framework is its focus on technology neutrality.[22] Rather than targeting specific AI tools, the regulations address the outcomes of misuse. This adaptability allows the framework to remain relevant as new AI technologies emerge, striking a balance between regulation and innovation. However, critics have highlighted inconsistent enforcement mechanisms, with limited guidance on how requirements are to be implemented effectively. This inconsistency has raised concerns about the framework’s efficacy in addressing the challenges of deepfake technology.

One of the critical challenges in the U.S. is the broad interpretation of fair use, which potentially extends protection to deepfakes created with malicious intent. Under Section 107 of the U.S. Copyright Act, fair use is evaluated through a four-factor test considering the purpose and character of the use, the nature of the copyrighted work, the amount and substantiality of the portion used, and the effect on the potential market.[23] Courts have often interpreted this doctrine liberally, particularly under the concept of "transformative use" established in Campbell v. Acuff-Rose Music, Inc.[24] Transformative use protects works that alter the purpose and character of the original work to create new expression or meaning. Deepfakes that significantly change the original’s purpose or nature can qualify as transformative, even if substantial copying occurs.

This liberal interpretation allows fair use to cover a majority of deepfake content, regardless of whether it is created with bona fide or mala fide intent. Consequently, safeguards like notice-and-takedown procedures and intermediary liability under Section 512 of the DMCA and Section 230 of the Communications Decency Act[25] become ineffective against malicious deepfakes. For example, deepfakes created with the intent to harm reputations or deceive could still be protected under the fair use doctrine if presented as parody or critique.

Furthermore, the U.S. faces a gap in protecting moral rights, such as the right of attribution and the right to reputation. Article 6bis of the Berne Convention, 1886, advocates for protecting authors' rights, enabling them to control how their works are used.[26] However, in the U.S., these rights are limited to works of visual art under the Visual Artists Rights Act of 1990 and do not extend to other categories of copyrighted works.[27] This exclusion leaves authors of non-visual works vulnerable, particularly when their creations are manipulated by deepfake technology. The absence of robust moral rights protections exacerbates the challenges for creators, who may find it difficult to safeguard their work and reputation from malicious misuse.[28]

European Union

The European Union (EU) has adopted a multi-faceted approach to regulating deepfakes, emphasizing privacy, accountability, and the modernization of copyright laws. A central pillar of the EU’s framework is the General Data Protection Regulation (GDPR), which indirectly addresses deepfakes by protecting individuals' personal data. The GDPR mandates that personal data, such as an individual’s likeness or voice, cannot be processed without explicit consent.[29] This provision is particularly relevant to deepfake technology, as it often replicates a person’s identity without authorization, raising serious privacy concerns. Violations of the GDPR carry significant penalties, ensuring a strong deterrent against the unauthorized use of personal data in deepfake creation.

The EU also addresses intellectual property concerns through the Copyright Directive, which aims to adapt copyright laws to the digital age.[30] While it does not explicitly regulate AI-generated content, the directive ensures that creators are fairly compensated for their works, even when used in transformative digital processes like deepfake generation. By harmonizing copyright law with broader digital policy objectives, the EU underscores the importance of balancing creative innovation with the protection of intellectual property rights.

A key element of the EU’s copyright framework is its reliance on a closed list of narrowly defined exceptions, rather than an open-ended fair use doctrine: copyrighted works can be used without the owner's permission only in limited scenarios, such as non-commercial research, private study, criticism, review, and reporting current events.[31] Unlike the U.S.'s broad fair use doctrine, these exceptions are narrowly construed, with clear boundaries to prevent misuse. The EU has also prioritized transparency and accountability in AI applications. Though non-binding, the Ethics Guidelines for Trustworthy AI emphasize the need for traceability, accountability, and transparency in AI-generated content.[32] These guidelines encourage member states to integrate ethical principles into their legal frameworks, fostering a culture of responsible AI use.

United Kingdom

The United Kingdom's approach to copyright law and AI-generated works is built on the Copyright, Designs and Patents Act 1988 (CDPA), which outlines specific provisions for computer-generated works and the broader concept of fair dealing. Section 178 of the CDPA defines a computer-generated work as one generated by a computer in circumstances such that there is no human author of the work.[33] Section 9(3) assigns authorship of such works to the individual who made the necessary arrangements for their creation. This provision ensures that intellectual property rights for AI-generated content remain with a human party, avoiding ambiguity in ownership. For instance, a person using AI to create art or music would be considered the author, provided they orchestrated the creative process.

Fair dealing, a pivotal doctrine within the CDPA, allows for the use of copyrighted material without permission in specific contexts, as outlined in Sections 29 and 30. These exceptions include usage for non-commercial research and private study, criticism or review, and reporting current events. The concept of "fairness" in fair dealing is not explicitly defined in the statute, but case law, such as Hubbard v. Vosper, has provided interpretative guidance.[34] Lord Denning in this case emphasized that fairness must be assessed on a case-by-case basis, taking into account factors such as the nature and purpose of the use, the quantity of work used, its impact on the market value of the original work, and the motives of the user. For example, the use of copyrighted material for a parody or critique could qualify as fair dealing, while its use for commercial exploitation without permission would not.

The doctrine of fair dealing also includes specific provisions under Section 30A of the CDPA for works used as parody, caricature, or pastiche. This extension is particularly relevant in addressing deepfake technology, where AI-generated content might serve purposes like satire or commentary. For instance, a deepfake created for legitimate artistic purposes might qualify for protection under fair dealing, while one created with malicious intent would not. The judgment in Hyde Park Residence Ltd v. Yelland & Ors further supports this distinction by highlighting the importance of the user’s motive in determining fairness.[35] In this case, the court stressed that malicious intent in creating or using content would weigh heavily against a claim of fair dealing.

The UK’s framework, while offering protections for legitimate uses of copyrighted material, provides a flexible yet structured approach to dealing with AI-generated works and deepfakes.[36] By focusing on human oversight in authorship and allowing for exceptions through fair dealing, it seeks to balance innovation with accountability. This framework allows creators to explore new technologies like AI while safeguarding intellectual property and mitigating misuse.

Comparison

The challenges posed by AI-generated content, particularly deepfakes, manifest uniquely across jurisdictions, reflecting differing legal, cultural, and technological landscapes. The United States, European Union (EU), United Kingdom (UK), and India each face distinct obstacles in regulating deepfake technology, yet share common concerns around transparency, accountability, and the prevention of harm. A comparative analysis reveals insights into best practices and how these approaches can inform a balanced regulatory framework in India.

The U.S. faces a primary challenge in its broad doctrine of fair use, which may inadvertently protect deepfakes created with malicious intent. Under the U.S. fair use doctrine, transformative uses are often protected, even if they involve significant copying, as long as the new work adds new meaning or expression. This creates a legal gray area for deepfakes, allowing some to qualify as parodies or transformative works despite their potential to harm reputations or mislead audiences. Enforcement mechanisms also remain inconsistent, with federal and state-level regulations lacking uniformity. For example, while California prohibits certain election-related deepfakes, no national standard exists, leading to fragmented enforcement and compliance issues.

The EU’s regulatory focus on privacy and consent under the General Data Protection Regulation (GDPR) provides strong safeguards but faces challenges in enforcement. Deepfake creators often operate anonymously or in jurisdictions outside the EU, complicating accountability. The narrow scope of the copyright exceptions codified in the Copyright Directive restricts the use of copyrighted material for creative purposes, potentially stifling innovation. However, this rigidity also ensures better protection against malicious or unauthorized uses. Balancing innovation and intellectual property protection remains a persistent challenge. Additionally, the non-binding nature of the Ethics Guidelines for Trustworthy AI leaves gaps in harmonizing ethical principles with enforceable laws.

The UK encounters similar issues with fair dealing, as outlined in the Copyright, Designs and Patents Act (CDPA), 1988, which limits exceptions to specific purposes such as criticism, review, or reporting current events. This restrictiveness makes it difficult for creators to utilize copyrighted material in transformative ways without explicit permission. Moreover, deepfakes often fall outside these narrowly defined exceptions, leaving victims of malicious deepfakes with limited recourse. The lack of a clear definition of “fairness” adds to the ambiguity, as courts must rely on case law, such as Hubbard v. Vosper, to interpret fairness. Furthermore, the absence of sector-specific regulations targeting deepfakes creates gaps in addressing high-risk areas like electoral manipulation and cyber harassment.

India’s legal framework for addressing deepfakes is nascent and fragmented. While the Copyright Act, 1957 includes provisions for fair dealing, these are limited to uses such as criticism, review, and research, similar to the UK. India also lacks specific legislation addressing deepfakes or their misuse. Privacy protections under the Personal Data Protection Bill, 2019 (now the Digital Personal Data Protection Act, 2023) are limited in scope and enforcement, leaving significant gaps in regulating the unauthorized use of personal data in deepfake creation. The lack of clear intermediary obligations for deepfake content under the Information Technology Act, 2000 further complicates platform accountability, as tech companies often escape liability for hosting such content.

A comparison of these jurisdictions highlights both commonalities and divergences in addressing deepfake challenges. The U.S.’s sector-specific approach is effective in targeting high-risk areas such as electoral manipulation, but its broad fair use doctrine poses risks of misuse. The EU’s GDPR framework provides robust privacy safeguards but struggles with cross-border enforcement. The UK’s fair dealing doctrine offers narrow but clear boundaries for copyrighted material usage, while India’s framework remains underdeveloped, lacking the specificity and enforcement mechanisms of its counterparts.

Best practices from these jurisdictions suggest a need for a multi-layered regulatory approach. For instance, the U.S. practice of requiring watermarks and disclaimers for AI-generated content enhances transparency and traceability, offering a model for India to adopt. Similarly, the EU’s emphasis on consent and ethical AI can guide India in developing a rights-based approach to deepfake regulation. The UK’s judicial interpretations of fairness, such as considering the motive and impact of the use, provide valuable insights for addressing malicious intent in deepfakes.

VIII. Recommendations for India

To address the challenges posed by deepfakes, India could adopt a hybrid regulatory framework that combines elements from various global jurisdictions, ensuring a balanced and effective response to the complexities of AI-generated content. This approach would integrate transparency, privacy, accountability, and sector-specific regulations to foster both innovation and protection.

Transparency and Traceability

Mandating the use of watermarks and disclaimers for AI-generated content, as seen in the U.S., would enhance transparency and accountability. Watermarking technology can help to clearly distinguish AI-generated works from authentic content, ensuring that viewers are aware when content is synthetic. This would reduce the risk of deepfake technology being used to deceive or manipulate public perception, especially in sensitive contexts like elections or media.
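One lightweight way to implement such disclosure is a machine-readable provenance manifest attached to the content. The sketch below is a hypothetical illustration (the field names and tool name are invented; real content-credential systems such as the C2PA standard are far more elaborate and use digital signatures): it hashes the media bytes and records a "synthetic" flag, so the disclosure travels with the file and tampering is detectable.

```python
import hashlib
import json

def make_manifest(media_bytes: bytes, synthetic: bool, tool: str) -> str:
    """Build a provenance manifest declaring whether content is AI-generated."""
    manifest = {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "synthetic": synthetic,                 # the mandated disclosure flag
        "generator": tool if synthetic else None,
    }
    return json.dumps(manifest, sort_keys=True)

def verify_manifest(media_bytes: bytes, manifest_json: str) -> bool:
    """Check that the manifest still matches the media (detects alteration)."""
    manifest = json.loads(manifest_json)
    return manifest["content_sha256"] == hashlib.sha256(media_bytes).hexdigest()

video = b"...raw media bytes..."
m = make_manifest(video, synthetic=True, tool="example-generator")
print(verify_manifest(video, m))         # True: manifest matches the content
print(verify_manifest(video + b"x", m))  # False: the content was altered
```

A regulator could require platforms to surface the "synthetic" flag to viewers, giving legal effect to the transparency obligation discussed above.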

Privacy and Consent

India could strengthen privacy protections by drawing from the EU’s General Data Protection Regulation (GDPR), which ensures the responsible use of personal data. The unauthorized use of personal data in creating deepfakes, particularly for impersonation or defamation, can cause significant harm. By implementing robust data privacy laws, India could regulate the use of personal data in deepfake creation, making it mandatory for consent to be obtained before an individual’s likeness or data is used.

Narrow Fair Dealing Exceptions

India could refine its fair dealing provisions along the lines of the UK’s approach, where exceptions are narrowly defined yet accommodate certain transformative uses such as parody and pastiche. This would encourage legitimate uses of deepfake technology for purposes such as satire, education, or commentary, without compromising intellectual property rights. However, it is important to ensure that such uses do not infringe on individual rights or lead to harm.

Sector-Specific Regulations

India could introduce targeted laws to address high-risk areas where deepfakes can have serious consequences, such as electoral integrity, cyber harassment, and defamation. By establishing clear legal frameworks for these sectors, India can mitigate the risks posed by deepfakes in sensitive contexts, ensuring that malicious actors face appropriate penalties for misuse. These regulations could include specific provisions on deepfake creation, distribution, and its impact on public trust, especially in electoral processes.

Platform Accountability

Strengthening intermediary liability under Section 79 of the Information Technology Act (IT Act) is crucial to holding online platforms accountable for hosting or disseminating malicious deepfakes. Platforms should be required to proactively identify and remove deepfake content using AI-based detection tools. However, this responsibility must be balanced with the need to protect freedom of expression, ensuring that content moderation does not become overly restrictive or biased.

Legislative Reforms

A comprehensive amendment to the Copyright Act, 1957 is essential to address the challenges of AI-generated content. The amendment could include provisions that explicitly recognize AI-generated works and define the framework for attributing authorship. One possible approach is to assign joint authorship between the AI developer and the entity that provided the dataset or training input. Additionally, introducing a licensing system for the commercial use of AI-generated content would ensure that creators whose data was used are adequately compensated. Furthermore, the Copyright Act should introduce moral rights protections for individuals whose likeness or performance is used without consent, even if the content does not qualify as a traditional copyrightable work.

Similarly, the Information Technology Act should be updated to incorporate specific provisions for deepfakes, defining their creation and distribution as separate offenses based on intent and impact. Strengthening penalties for misuse in sensitive sectors, such as electoral manipulation or defamation, would ensure that those who create harmful deepfakes are held accountable. Guidelines for intermediaries should also be developed to ensure proactive identification and removal of deepfake content.

Regulatory Mechanisms

Developing effective regulatory mechanisms is crucial for managing the risks of deepfakes. Transparency guidelines, such as mandatory digital watermarks and disclaimers on AI-generated content, should be implemented to ensure that viewers are aware when content is synthetic. Additionally, establishing an independent regulatory body tasked with verifying the authenticity of high-risk content, especially when public figures or sensitive topics are involved, would enhance public trust in online content.

Judicial Training and Awareness

To ensure that the legal system is equipped to address the challenges of deepfake technology, specialized training programs for judges and enforcement agencies are essential. These programs should focus on the technical aspects of deepfake technology and its implications for copyright, privacy, and defamation. Establishing dedicated cybercrime units equipped with advanced tools for detecting and analyzing deepfakes will also enhance law enforcement’s ability to effectively combat this issue.

Public-Private Partnerships

Collaboration with technology companies is critical in developing advanced detection tools and sharing best practices for mitigating deepfake-related risks. India should also encourage academic research on the legal and ethical implications of AI-generated content to inform policymaking and ensure that laws keep pace with technological developments.

Public Awareness Campaigns

Public education campaigns should be launched to raise awareness about deepfake misuse and how to identify and report malicious content. These campaigns should target various demographics, including students, professionals, and senior citizens, and use engaging content such as videos and infographics to explain the ethical and legal implications of deepfake technology. Establishing a national helpline or online portal for reporting deepfake-related concerns would provide immediate support to victims and help address the problem more effectively.

International Collaboration

Given the cross-border nature of deepfake misuse, India must actively participate in international efforts to regulate AI-generated content. Collaborating with organizations such as the United Nations and the World Intellectual Property Organization (WIPO) to establish uniform standards for AI-generated works is essential. Bilateral agreements with major tech-exporting countries would enable India to share expertise and resources in combating deepfake threats. Joint research initiatives with international universities and research institutions would also help develop advanced detection tools and ethical frameworks for AI governance.

Ethical AI Development

To ensure the responsible development of AI technologies, India should introduce mandatory ethical AI training for developers, emphasizing the societal impacts of their work. Incentivizing the development of AI tools designed to combat deepfake misuse, such as detection software or content verification systems, will also encourage responsible innovation. Establishing an AI ethics council comprising experts from diverse fields will guide policymaking and ensure that AI technologies align with societal values and ethical principles.

By addressing these gaps and combining the strengths of various regulatory models, India can create a comprehensive legal framework that not only addresses the risks of deepfake technology but also fosters innovation and creativity in the digital age. This approach would balance the benefits of AI-driven innovation with the need to protect intellectual property, privacy, and individual rights, ensuring that India remains at the forefront of AI governance.

IX. CONCLUSION

Deepfake technology represents a double-edged sword in the realm of intellectual property. While it offers opportunities for creative and educational innovation, its misuse poses significant challenges to copyright, privacy, and authorship. India's existing legal framework must evolve to address these challenges proactively. The misuse of deepfake technology has far-reaching consequences beyond individual IPR violations. The economic harm to industries reliant on intellectual property, such as entertainment and advertising, is significant. Unauthorized use of a celebrity's likeness in advertising campaigns, for example, not only undermines contractual agreements but also devalues the celebrity's brand. Furthermore, the erosion of trust caused by deepfake political content has societal implications, underscoring the urgent need for comprehensive regulation. By adopting legislative reforms, enhancing regulatory mechanisms, and fostering international collaboration, India can effectively mitigate the risks associated with deepfakes while harnessing their potential for innovation.

X. REFERENCES

1. “Deepfake Global Crisis,” European Union Intellectual Property Office, 2024, https://intellectual-property-helpdesk.ec.europa.eu.

2. Are Copyright Laws adequate to deal with Deepfakes?: A comparative analysis of positions in the United States, India and United Kingdom – KSLR Commercial & Financial Law Blog, https://blogs.kcl.ac.uk/kslrcommerciallawblog/2020/12/17/are-copyright-laws-adequate-to-deal-with-deepfakes-a-comparative-analysis-of-positions-in-the-united-states-india-and-united-kingdom/ (last visited Dec 24, 2024).

3. Bobby Chesney & Danielle Citron, Deep Fakes, 107 Calif. L. Rev. 1753, 1753–1820 (2019), https://www.jstor.org/stable/10.2307/26891938.

4. California Civil Code § 1798.91.20 (2020) (U.S.).

5. CHALLENGES OF DEEPFAKE TECHNOLOGY, UNDER THE INDIAN LEGAL SYSTEM. - THE LAWWAY WITH LAWYERS JOURNAL, (Apr. 15, 2024), https://www.thelawwaywithlawyers.com/challenges-of-deepfake-technology-under-the-indian-legal-system/ (last visited Dec 24, 2024).

6. Copyright (Amendment) Act, 2012, No. 27 of 2012 (India).

7. Copyright Act, 1957, No. 14 of 1957 (India).

8. Copyright, Designs and Patents Act 1988, c. 48 (Eng.).

9. Deep Fakes Accountability Act, H.R. 3230, 116th Cong. (2019) (U.S.).

10. DEEPFAKES AND THE COPYRIGHT LAW IN INDIA, (Aug. 13, 2021), https://baskaranslegal.com/blog/2021/08/13/deepfakes-and-the-copyright-law-in-india/ (last visited Dec 24, 2024).

11. Dr. Nameeta Rana Minhas & Dheeraj Sonkhla, Exploring Legal and Technical Challenges of Deep Fakes in India, 6 Int’l J. for Multidisciplinary Rsch. 1 (2024).

12. Ethics Guidelines for Trustworthy AI, European Commission, (Apr. 8, 2019), https://ec.europa.eu/digital-strategy/our-policies/ethics-guidelines-trustworthy-ai.

13. General Data Protection Regulation, Regulation (EU) 2016/679, 2016 O.J. (L 119) 1 (EU).

14. Indian Copyright Act, 1957.

15. Information Technology Act, 2000.

16. IPLEADERS (May 5, 2024), https://blog.ipleaders.in/understanding-copyright-issues-entailing-deepfakes-in-india/ (last visited Dec 24, 2024).

17. James Vincent, Facebook contest reveals deepfake detection is still an ‘unsolved problem’, THE VERGE (June 12, 2020), https://www.theverge.com/21289164/facebook-deepfake-detection-challenge-unsolved-problem-ai (last visited Dec 24, 2024).

18. Lindsey Joost, “The Place for Illusions: Deepfake Technology and the Challenges of Regulating Unreality,” University of Florida Journal of Law and Public Policy, vol. 33, no. 2, 2023, pp. 309-iv.

19. Madhura Thombre, “Deconstructing Deepfake: Tracking Legal Implications and Challenges,” International Journal of Law Management & Humanities, vol. 4, 2021, pp. 2267-2274.

20. Michael D. Murray, Deceptive Exploitation: Deepfakes, the Rights of Publicity and Privacy, and Trademark Law, 65 IDEA: L. Rev. Franklin Pierce Ctr. for Intell. Prop. 1 (2024).

21. Mohamed Hassan Mekkawi, The Challenges of Digital Evidence Usage in Deepfake Crimes Era, 3 J. L. & Emerging Tech. 176 (2023).

22. Sindhu A, Interventions on the Issue of Deepfakes in Copyright, World Intell. Prop. Org., https://www.wipo.int/export/sites/www/aboutip/en/artificial_intelligence/conversation_ip_ai/pdf/ind_a.pdf (last visited Dec. 24, 2024).

23. The Digital Millennium Copyright Act 17 U.S.C. §107 (1998) (USA).

24. Vanshika Kapoor, Understanding Copyright Issues Entailing Deepfakes in India, IPLEADERS (May 5, 2024), https://blog.ipleaders.in/understanding-copyright-issues-entailing-deepfakes-in-india/ (last visited Dec 24, 2024).

25. Yinuo Geng, Comparing "Deepfake" Regulatory Regimes in the United States, the European Union, and China, 7 GEO. L. TECH. REV. 157 (2023).

*******

[1] Henry Ajder, Giorgio Patrini, Francesco Cavalli, and Laurence Cullen, “The State of Deepfakes: Landscape, Threats and Impact,” Deeptrace, https://regmedia.co.uk/2019/10/08/deepfake_report.pdf.

[2] Geraint Rees, “Here's How Deepfake Technology Can Actually Be a Good Thing,” World Economic Forum Agenda (2019).

[3] Sindhu A, Interventions on the Issue of Deepfakes in Copyright, World Intell. Prop. Org., https://www.wipo.int/export/sites/www/aboutip/en/artificial_intelligence/conversation_ip_ai/pdf/ind_a.pdf (last visited Dec. 24, 2024).

[4] Dr. Nameeta Rana Minhas & Dheeraj Sonkhla, Exploring Legal and Technical Challenges of Deep Fakes in India, 6 Int’l J. for Multidisciplinary Rsch. 1 (2024).

[5] Chesney, Bobby, and Danielle Citron. "Deep Fakes." California Law Review, vol. 107, no. 6, 2019, pp. 1753– 1820. JSTOR, https://doi.org/10.2307/26891938.

[6] Madhura Thombre, “Deconstructing Deepfake: Tracking Legal Implications and Challenges,” International Journal of Law Management & Humanities, vol. 4, 2021, pp. 2267-2274.

[7] Lindsey Joost, “The Place for Illusions: Deepfake Technology and the Challenges of Regulating Unreality,” University of Florida Journal of Law and Public Policy, vol. 33, no. 2, 2023, pp. 309-iv.

[8] Indian Copyright Act, § 13, Act No. 14 of 1957, India Code (1957).

[9] Indian Copyright Act, § 14, Act No. 14 of 1957, India Code (1957).

[10] Indian Copyright Act, § 52, Act No. 14 of 1957, India Code (1957); ‘Using Copyright and Licensed Content: Copyright & Fair Use’ (Indian Institute of Management) <https://library.iimb.ac.in/copyrightguidelines/cf> accessed 24 December 2020.

[11] CHALLENGES OF DEEPFAKE TECHNOLOGY, UNDER THE INDIAN LEGAL SYSTEM. - THE LAWWAY WITH LAWYERS JOURNAL, (Apr. 15, 2024), https://www.thelawwaywithlawyers.com/challenges-of-deepfake-technology-under-the-indian-legal-system/ (last visited Dec 24, 2024).

[12] Indian Copyright Act, § 57, Act No. 14 of 1957, India Code (1957).

[13] Information Technology Act, § 79, No. 21 of 2000, India Code (2000).

[14] MySpace Inc. v. Super Cassettes Indus. Ltd., (2017) 236 DLT 478 (India).

[15] Information Technology (Intermediary Guidelines) Rules, 2018, Gazette of India, G.S.R. 138(E) (Mar. 27, 2018).

[16] James Vincent, Facebook contest reveals deepfake detection is still an ‘unsolved problem’, THE VERGE (June 12, 2020), https://www.theverge.com/21289164/facebook-deepfake-detection-challenge-unsolved-problem-ai (last visited Dec 24, 2024).

[17] Dr. Nameeta Rana Minhas & Dheeraj Sonkhla, Exploring Legal and Technical Challenges of Deep Fakes in India, 6 Int’l J. for Multidisciplinary Rsch. 1 (2024).

[18] Sindhu A, Interventions on the Issue of Deepfakes in Copyright, World Intell. Prop. Org., https://www.wipo.int/export/sites/www/aboutip/en/artificial_intelligence/conversation_ip_ai/pdf/ind_a.pdf (last visited Dec. 24, 2024).

[19] Bobby Chesney & Danielle Citron, Deep Fakes, 107 Calif. L. Rev. 1753, 1753–1820 (2019), https://www.jstor.org/stable/10.2307/26891938.

[20] Deep Fakes Accountability Act, H.R. 3230, 116th Cong. (2019) (U.S.).

[21] California Civil Code § 1798.91.20 (2020) (U.S.).

[22] Yinuo Geng, Comparing "Deepfake" Regulatory Regimes in the United States, the European Union, and China, 7 GEO. L. TECH. REV. 157 (2023).

[23] The Digital Millennium Copyright Act 17 U.S.C. §107 (1998) (USA).

[24] Campbell v Acuff Rose, 510 US 569 (1994).

[25] Communication Decency Act 47 U.S.C. § 230 (1996) (USA).

[26] Berne Convention for the Protection of Literary and Artistic Works, Sept. 9, 1886, as revised at Paris on July 24, 1971, and amended in 1979, S. Treaty Doc. No. 99-27 (1986).

[27] Visual Artists Rights Act 17 U.S.C. §106A (1990) (USA).

[28] Are Copyright Laws adequate to deal with Deepfakes?: A comparative analysis of positions in the United States, India and United Kingdom – KSLR Commercial & Financial Law Blog, https://blogs.kcl.ac.uk/kslrcommerciallawblog/2020/12/17/are-copyright-laws-adequate-to-deal-with-deepfakes-a-comparative-analysis-of-positions-in-the-united-states-india-and-united-kingdom/ (last visited Dec 24, 2024).

[29] General Data Protection Regulation, Regulation (EU) 2016/679, 2016 O.J. (L 119) 1 (EU).

[30] Directive 2001/29/EC of the European Parliament and of the Council of 22 May 2001 on the Harmonisation of Certain Aspects of Copyright and Related Rights in the Information Society, 2001 O.J. (L 167) 10, consolidated version available at http://data.europa.eu/eli/dir/2001/29/oj.

[31] Supra note 22.

[32] Ethics Guidelines for Trustworthy AI, European Commission, (Apr. 8, 2019), https://ec.europa.eu/digital-strategy/our-policies/ethics-guidelines-trustworthy-ai.

[33] Copyright, Designs and Patents Act 1988, c. 48 (Eng.).

[34] Hubbard v Vosper [1972] 2 QB 84.

[35] Hyde Park Residence Ltd v Yelland & Ors [2000] EWCA Civ 37.

[36] Supra note 22.