SECURING WOMEN’S DIGITAL SANCTITY: ADDRESSING AI-DRIVEN THREATS HEAD-ON
By Anwesh Ghosh, LL.M., SKB University, Purulia, West Bengal
Advocate & Consultant, District Court, Barasat.
Email: anweshghosh.1006@gmail.com
ABSTRACT
This research article examines the pressing issue of violations of women's privacy through the malicious use of artificial intelligence (AI), particularly deepfake technology. The study highlights the alarming trend of AI-driven deepfakes being employed to abuse women's privacy, encompassing scenarios of cyberbullying, harassment, and even blackmail. The article underscores the urgency of addressing these challenges by advocating stringent regulations governing the deployment of AI and emphasizing the need to fortify existing data-usage policies. It emphasizes the critical role that regulatory frameworks play in curbing the unethical use of AI technologies, especially where women's privacy is concerned. Furthermore, the research stresses the imperative of enhancing security measures on social media platforms through automatic encryption of media files during upload, to thwart the abuse of facial and physical data in the creation of deepfakes; in doing so, the article proposes a proactive approach to mitigating the risks associated with AI-driven privacy violations. The study also explores the potential of encryption and blockchain technology in safeguarding online data, arguing for the incorporation of robust encryption mechanisms and blockchain solutions to protect sensitive information and provide a multi-faceted defense against privacy infringements.
Objectives of Study:
· Investigate the extent of AI-driven threats to women's privacy, with a specific focus on the proliferation of deepfake technology in perpetrating cyberbullying and harassment, along with examining the existing regulatory landscape surrounding AI usage and data protection policies.
· Propose effective regulatory measures and policy enhancements to curb the unethical use of AI and mitigate risks associated with the abuse of women's privacy.
· Explore the viability and efficacy of integrating encryption and blockchain technologies as proactive measures to enhance online data protection and safeguard women against privacy infringements.
· Provide a comprehensive framework that combines regulatory, policy, and technological interventions to establish a resilient defense against AI-driven threats, promoting a safer online environment for women.
Keywords: Artificial Intelligence (AI), Deepfake, Cyberbullying, Blackmail, Data Protection, Pornography, Blockchain, Sensitive Data.
I. INTRODUCTION
Background and Context
Recent years have seen rapid AI advances transforming industries but raising privacy concerns, especially for women. This chapter delves into AI-driven threats, spotlighting the misuse of deepfake technology. Despite AI's transformative impact, deepfakes, manipulating media convincingly, pose a growing risk to women's privacy. This deliberate abuse intertwines with gender dynamics, leading to alarming cyberbullying. Urgency lies in addressing increasing AI-driven privacy violations, emphasizing the critical need for strategic interventions.
Evolution of AI and its threats
The development of artificial intelligence (AI) reflects humanity's quest for innovation, moving from rule-based systems to advanced machine learning models[1] and driving a new industrial revolution. But this development reveals a darker side, as seen in the rise of deepfakes and AI-driven crimes against women. AI was originally built on symbolic systems governed by explicit rules; advances now allow systems to learn from data, leading to deep learning and the development of deep neural networks that mirror the structure of the human brain.
Advances in machine learning have laid the groundwork for deepfakes, an advanced form of AI-based manipulation. Coined from "deep learning"[2] and "fake", deepfakes analyze and synthesize facial and body data to create convincing but entirely fabricated audio and video content. Deepfake technology began as a novelty for swapping faces in videos, but it has since become a malicious tool. The multifaceted threat it poses includes harassment and cyberbullying, particularly crimes against women. Criminals exploit the anonymity of the Internet to inflict emotional distress, creating and publishing content without consent and causing serious defamation. Addressing these challenges requires a rapid and comprehensive response, including adapted regulatory frameworks and increased public awareness. The following chapters explore these steps to build resilient defenses against new AI threats[3] and create safer digital spaces for women.
Mechanism of Deepfakes:
Deepfake technology, rooted in deep learning principles, intricately analyzes and synthesizes detailed facial and physical features using extensive datasets.
As the deepfake algorithm refines its understanding, it gains the capability to seamlessly superimpose or replace facial features in existing audio or video content. This synthesis involves a complex interplay of mathematical transformations, leveraging the neural network's learned representations to manipulate pixels with remarkable precision. The result is multimedia content that appears authentic to the human senses, blurring the line between reality and artificial creation.[4] Deepfake technology leverages advanced machine learning algorithms, particularly deep neural networks, to create realistic-looking but entirely fabricated content, such as images, videos, or audio recordings. The term "deepfake" is derived from the combination of "deep learning" and "fake." The mechanics involve training these neural networks on vast datasets, often consisting of thousands of images or videos of the target person.[5] This training allows the algorithm to learn and mimic the facial expressions, movements, and speech patterns of the individual.[6]
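The learning mechanism described above can be illustrated with a deliberately simplified example. The Python toy below is not a deepfake generator; it trains a tiny linear autoencoder (a one-unit "bottleneck") on synthetic 2-D points, showing only the core principle that a network learns a compressed internal representation and reconstructs the input from it, with reconstruction error falling as training proceeds. All values are illustrative.

```python
import random

random.seed(0)

# Synthetic dataset: 2-D points that actually lie on a 1-D line (y = 2x),
# so a single-unit "bottleneck" is enough to reconstruct them.
data = [(x, 2.0 * x) for x in [i / 10.0 for i in range(-10, 11)]]

# Linear encoder z = a*x + b*y and linear decoder (x', y') = (c*z, d*z)
a, b, c, d = (random.uniform(-0.5, 0.5) for _ in range(4))

def loss():
    """Mean squared reconstruction error over the dataset."""
    total = 0.0
    for x, y in data:
        z = a * x + b * y          # encode: compress 2 numbers into 1
        xr, yr = c * z, d * z      # decode: reconstruct from the code
        total += (xr - x) ** 2 + (yr - y) ** 2
    return total / len(data)

initial = loss()
lr = 0.05
for _ in range(3000):              # plain gradient descent
    ga = gb = gc = gd = 0.0
    for x, y in data:
        z = a * x + b * y
        ex, ey = c * z - x, d * z - y      # reconstruction errors
        gc += 2 * ex * z
        gd += 2 * ey * z
        gz = 2 * ex * c + 2 * ey * d       # gradient through the bottleneck
        ga += gz * x
        gb += gz * y
    n = len(data)
    a -= lr * ga / n; b -= lr * gb / n
    c -= lr * gc / n; d -= lr * gd / n

final = loss()
print(f"reconstruction loss: {initial:.4f} -> {final:.4f}")
```

Real deepfake systems apply the same encode/decode principle at vastly larger scale: deep convolutional networks trained on thousands of face images, where the decoder of one identity is paired with the encoder of another to produce the swap.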
For women, deepfake technology poses a significant threat as it can be employed to manipulate and misuse their images and videos in several ways. One common misuse is in the creation of non-consensual explicit content. Malicious actors can superimpose the faces of women onto explicit material, creating fake videos that appear genuine. This not only violates the privacy of the individuals involved but also exposes them to reputational harm and emotional distress.[7]
Cyberbullying is another perilous application of deepfake technology. Attackers can use these algorithms to generate content that defames, humiliates, or misrepresents women, exacerbating the emotional toll and psychological distress experienced by the victims. The manipulative prowess of deepfake algorithms allows for the creation of content that appears authentic, making it challenging to discern real from fake.
Moreover, the technology opens the door to potential blackmail scenarios. Malicious actors armed with convincingly fabricated content can exploit victims by threatening to expose the deepfakes unless certain demands are met. This form of extortion preys on the vulnerability of individuals and emphasizes the urgent need for robust regulatory measures to deter and penalize such criminal activities.
The mechanics of deepfake technology, therefore, involve a complex interplay of machine learning, image processing, and neural network training.[8] As these technologies continue to evolve, it becomes increasingly crucial to address the risks and potential harms, especially concerning the privacy and well-being of women in the digital realm.
Scenarios of Abuse
Within the intricate web of deepfake technology, malicious scenarios unfold with alarming precision, extending beyond mere technological intrigue. The manipulative prowess of deepfake algorithms is exploited to craft content that humiliates, defames, or misrepresents victims, intensifying emotional tolls and psychological distress. Taking a more insidious form, deepfake technology becomes a tool for generating non-consensual explicit content, as perpetrators superimpose faces onto explicit material. This egregious violation not only inflicts reputational harm but also perpetuates a culture of online exploitation.[9] Victims grapple with profound emotional impacts as they navigate the repercussions of false portrayals in a digital space often lacking protective measures. The specter of blackmail looms large, with malicious actors armed with seemingly genuine fabricated content exploiting victims by threatening exposure unless demands are met.[10] This form of extortion capitalizes on vulnerability, emphasizing the necessity for robust regulatory measures to deter and penalize such criminal activities.
Intersection of Technology and Privacy:
The convergence of technology, privacy, and gender issues is a pivotal focus, particularly in the context of deepfake technology's emergence from advanced AI. Deepfakes operate precisely at the intersection of these realms, and the article underscores their unique impact on women, introducing a gendered dimension to privacy breaches. This intersectionality accentuates the imperative for nuanced solutions that extend beyond addressing broader technological challenges. It necessitates a comprehensive understanding of how technological advancements intersect with gender dynamics, acknowledging the specific vulnerabilities and implications[11] for women. As we navigate an increasingly digital landscape, these insights emphasize the urgency of developing strategies that not only mitigate technological risks but also safeguard the privacy and well-being of women in the face of evolving AI threats.
II. AI-DRIVEN THREATS TO PRIVACY OF WOMEN
Following are the various types of threats driven by Artificial Intelligence in the current setting:
● Deepfake Pornography: Deepfake technology is extensively abused to create non-consensual explicit content, superimposing women's faces onto explicit material. This form of abuse not only violates privacy but also inflicts reputational harm, emotional distress, and potential career consequences for the victims[12].
● Cyberbullying and Defamation: Deepfake algorithms are employed to generate content that defames, humiliates, or misrepresents women. Cyberbullies use manipulated videos or images to intensify emotional tolls, causing psychological distress and tarnishing the victims' online and offline reputation.[13]
● Identity Theft and Fraud: Deepfakes can be utilized for identity theft, creating convincing replicas of women to engage in fraudulent activities. Malicious actors may use manipulated content to deceive individuals, organizations, or financial institutions, causing financial harm and tarnishing the victim's identity. This type of abuse extends the impact of deepfake technology beyond personal and emotional consequences to include tangible financial repercussions for the women targeted. Addressing this form of abuse requires a focus on both privacy protection and cybersecurity measures to prevent the exploitation of manipulated identities.
● Extortion and Blackmail: Perpetrators exploit deepfake content to engage in extortion, threatening to expose manipulated material unless victims comply with demands. This form of abuse preys on vulnerability, leveraging the fear of reputational harm to coerce victims into compliance.
● Misrepresentation in Job Settings: Deepfake technology may be misused to create fabricated content portraying women engaging in inappropriate or unprofessional behavior in work-related settings. This form of abuse poses a direct threat to women's careers by damaging their professional reputation and potentially leading to discrimination or job loss.
It undermines the trust employers place in their employees and can hinder career advancement opportunities. Combatting this type of abuse requires increased vigilance in verifying the authenticity of online content and establishing measures to mitigate the impact of false portrayals on professional lives.
● Manipulation in Intimate Relationships: Deepfakes may be used within personal relationships to manipulate or deceive partners. This abuse involves creating fabricated content to mislead or exploit trust within intimate settings, causing emotional distress and fractures in personal connections.
These forms of deepfake technology abuse underscore the diverse ways in which these malicious applications can compromise the privacy, dignity, and well-being of women. Addressing these issues requires a multifaceted approach involving technological countermeasures, legal frameworks, and increased awareness.
Concerns:
Deepfakes, employing sophisticated AI algorithms, have versatile applications, including non-consensual pornography, fraud, and disinformation dissemination. These manipulative tools pose significant challenges across various domains, jeopardizing privacy through the malicious use of fabricated explicit content and compromising personal and financial security through fraudulent activities. The broad spectrum of deepfake applications highlights the urgent need for robust countermeasures to safeguard individuals from privacy violations, maintain transparency, and ensure the security of social media, electoral, and financial systems. Addressing these challenges requires comprehensive strategies encompassing technological innovation, legal frameworks, and heightened awareness to mitigate the multifaceted risks associated with deepfake manipulation.
Real Life Scenarios:
A poet and broadcaster in Sheffield, UK, was the victim of a fake pornography campaign. What was most shocking was that the images were based on photos that had been taken from her private social media accounts, including a Facebook profile she had deleted.
The perpetrator had uploaded these non-intimate images—holiday and pregnancy photos and even pictures of her as a teenager—and encouraged other users to edit her face into violent pornographic photos. While some were shoddily Photoshopped, others were chillingly realistic.[14]
A man with the pseudonym Luke was interviewed who said “I don’t know the actual identity of the individual, but they came across as a woman online…Without my knowledge or consent they were able to screen record what was being sent to me and what I sent to them [sending was consensual]. Pretty much within minutes I got a message on Instagram that had all my followers on it talking about nudes or something like that. They tried to extort me on it, essentially saying I could pay them X amount of money to either take them down and delete the messages or not…I didn’t report it because mainly I didn’t really think it was a crime? That it would be something they [the police] would pursue.”[15]
Another example is of a Twitch streamer who bought and watched deepfake porn depicting his colleagues.[16] Targets are no longer just celebrities and influencers, but now can be and often are “private individuals,” according to Giorgio Patrini, CEO and chief scientist of Sensity AI.
Three notorious examples of exploitative code and technologies, persistently available on GitHub as of February 2023, defy the platform's promises to remove and ban such content. DeepFaceLab (DFL), a leading deepfake software, allows the creation of realistic manipulated videos. DeepNude, the pioneering "AI-leveraged" nudifying website, was quickly replicated as the first AI-leveraged nudifying chatbot. Unstable Diffusion, a donation-based sexual deepfake bot, is derived from the Stable Diffusion code on GitHub.
In October 2020, Sensity AI uncovered a deepfake ecosystem on Telegram featuring an AI-powered bot enabling users to 'strip' women of their clothing. Comprising seven affiliated channels with over 100,000 members, the central hub had 45,000 unique members from 25 countries. By July 2020, over 104,800 women had been publicly 'nudified', a 198% increase from April 2020. Notably, the bot shifted from targeting female celebrities to private individuals, with 70% of targets being non-public figures. The bot, trained solely on female imagery, highlighted a concerning trend, as no 'nudifying' bots for other genders existed. Despite being free, users could pay for enhanced content and watermark removal. Built on an open-source DeepNude version, the bot operated within Telegram, accessible through a chat interface by uploading an image and awaiting the 'stripped' response.
III. PSYCHOLOGICAL EFFECTS ON VICTIMS
The rise of deepfake technology and AI-induced threats has plunged women into a perilous era, at the intersection of privacy breach and sexual abuse. This chapter explores the profound psychological toll on female victims ensnared in the web of technological exploitation. As deepfake algorithms evolve, their growing precision aggravates these vulnerabilities, subjecting women to privacy violations and the manipulation of intimate content. The staggering psychological impact includes emotional distress, humiliation, and identity crises, distorting personal narratives. Cyberbullying amplifies trauma, fostering fear and isolation. Beyond digital spaces, strained personal relationships, self-esteem issues, and constant vigilance prevail. Stigmatizing legal battles add stress, demanding urgent interventions to comprehend, address, and mitigate the nuanced psychological consequences inflicted upon women by relentless AI-induced threats. Following are some of the common effects seen in victims of the abuse of deepfake technology and other AI threats:
1. Emotional Distress and Humiliation
Women victimized by deepfake pornography undergo severe emotional distress and humiliation. The breach of privacy, with explicit content disseminated without consent, induces profound shame and embarrassment. This violation extends beyond the digital realm, affecting their self-esteem, relationships, and mental well-being. The emotional toll is heightened by the constant fear of further exploitation, contributing to a pervasive sense of vulnerability and anxiety[17]. Seeking legal recourse often adds to the psychological burden, subjecting victims to stigmatization. The psychological aftermath of deepfake abuse encompasses a complex interplay of emotions, impacting victims on personal, social, and professional fronts.
2. Identity Crisis and Self-Esteem Issues
The manipulation of intimate images in deepfake abuse can trigger an identity crisis for victims. Confronted with distorted perceptions of themselves, individuals may grapple with an internal struggle to reconcile the false public representation with their authentic selves. This discrepancy can bring about profound self-esteem issues, as victims cope with the impact of the manipulated content on their self-image and struggle to maintain a positive sense of identity and self-worth.
3. Social Stigma and Isolation
The social consequences of deepfake abuse create a climate of fear and isolation for victims. Fearing judgment and stigma, individuals often isolate themselves to cope with the awareness that manipulated explicit content circulates online. This heightened self-imposed isolation becomes a defense mechanism, making it difficult for women to participate in social activities and maintain meaningful relationships. The constant threat of social judgment and the knowledge that false representations exist online contribute to a challenging environment that impedes normal social interactions and deepens the emotional impact on the victims.
4. Cyberbullying and Online Harassment
The ramifications of deepfake abuse extend to the digital realm, causing a heightened psychological toll on victims through cyberbullying and online harassment. Perpetrators exploit the manipulated explicit content as a potent weapon to amplify their harassment tactics. The fabricated nature of deepfakes, often realistic and convincing, provides malicious actors with a tool to intensify their attacks, increasing the emotional distress experienced by victims. The distorted images and videos can be weaponized to humiliate and intimidate, perpetuating a cycle of cruelty in the online environment. This form of harassment not only invades the victim's personal space but also infiltrates their digital presence, making it challenging to escape the relentless torment.[18] The intersection of deepfake technology and cyberbullying creates a particularly insidious threat, where the digital space becomes a battleground for the emotional well-being of individuals. It underscores the urgency for comprehensive measures to address and counteract the malicious use of deepfakes, emphasizing the need for a holistic approach that combines legal, technological, and educational strategies to protect individuals from the multifaceted harms inflicted by this form of abuse.
5. Impact on Relationships
Deepfake abuse strains personal relationships, inducing fear of judgment and rejection from friends, family, and partners. The breach of trust stemming from the manipulation of intimate content creates significant hurdles in rebuilding relationships. Victims grapple with the aftermath of distorted representations, making it challenging to regain a sense of security and openness in their connections. Trust issues become pervasive, impacting the dynamics of personal relationships as individuals navigate the delicate process of disclosing and addressing the emotional toll inflicted by deepfake exploitation.
6. Post-Traumatic Stress Disorder (PTSD)
Deepfake-induced PTSD manifests as flashbacks, anxiety, and nightmares, inflicting extreme mental distress on victims.[19] The trauma stems from the violation of consent and the enduring impact of explicit content dissemination. Flashbacks revisit the traumatic event, fostering a continuous cycle of distress. Persistent anxiety and nightmares amplify the emotional toll, impacting the overall mental well-being of those subjected to non-consensual deepfake creation. The lasting psychological effects highlight the urgent need for support systems and interventions to address the mental health consequences endured by victims of deepfake abuse.
7. Loss of Control and Empowerment
The loss of control and empowerment, a consequence of deepfake abuse, leaves victims grappling with profound helplessness. The violation of privacy not only disrupts personal narratives but also undermines agency over one's own body. The manipulated content strips individuals of the autonomy to shape their digital identity, fostering a disempowered state. This breach erodes the fundamental right to control personal information, intensifying the emotional toll as victims navigate the complex aftermath of deepfake exploitation. Addressing this loss of control necessitates holistic support systems and legal measures to empower victims and reclaim agency over their narratives and bodies.
8. Constant Vigilance and Anxiety
The fear of ongoing exploitation compels victims of deepfake abuse to live in a perpetual state of vigilance, severely impacting their mental well-being. The relentless anxiety stems from the constant threat of additional manipulated content surfacing, creating a persistent source of stress. The anticipation of continued violations intensifies emotional distress, making it challenging for victims to find solace or a sense of security. This perpetual state of alertness underscores the enduring psychological consequences of deepfake exploitation[20], emphasizing the crucial need for comprehensive support mechanisms to ease the anxiety and restore a fragment of normalcy to the lives of those affected.
9. Legal Battles and Stigmatization
Seeking legal recourse after falling victim to deepfake abuse becomes a protracted and stigmatizing ordeal. The complex legal battles, coupled with societal judgment, compound the psychological toll on victims. Navigating through intricate legal processes adds stress, fostering a sense of vulnerability and frustration. The stigmatization endured during these proceedings further deters individuals from seeking justice, perpetuating a cycle of suffering. The intersection of legal complexities and social stigma underscores the urgent need for streamlined legal frameworks and empathetic support systems to alleviate the burdens faced by victims and encourage them to pursue legal remedies without fear of additional harm.
IV. OBSERVATIONS & SUGGESTIVE OPINION
● Role of Social Media - Social media platforms have become popular ways for people to express themselves and connect with others. They allow people to post pictures and videos, share their interests, and communicate with strangers through text, voice, or video messages. However, this also means that people are giving away their facial data, voice data, and other personal information, often without realizing the potential risks. These data can be used to create deepfakes, which are realistic but fake videos or images that can manipulate or impersonate someone. Deepfakes can be used for malicious purposes, such as spreading misinformation, blackmailing, or harassing people. Moreover, AI-driven threats can also exploit these data to target people with phishing, identity theft, or cyberattacks. Therefore, people should be more aware of the dangers of giving away their data on social media platforms and take precautions to protect their privacy and security. They should also be critical of the content they see online and verify the sources before trusting them.
● Technological Advent - Technology in the form of artificial intelligence and advanced data collection and analysis methods has developed rapidly in recent years, bringing many benefits to society. However, these advances also pose serious threats to people's data privacy, as they enable the collection, processing, and sharing of large amounts of personal and sensitive data. These data can reveal information about people's identities, preferences, behaviors, locations, health, finances, and more. They can also be used to influence people's decisions, manipulate their emotions, or discriminate against them. Moreover, these data can be accessed, stolen, or misused by unauthorized parties, such as hackers, corporations, or persons with malicious intent. People's data privacy is therefore at risk from these advancements.[21] People should be aware of their data rights and responsibilities, demand more transparency and accountability from the entities that collect and use their data, and take measures to protect their data and privacy online.
● Easy Money Psyche - In the contemporary digital landscape, an increasing number of individuals, including women, are turning to social media and online platforms that offer sexual services. These platforms, which promise quick financial gains, often require users to share personal data, making them vulnerable to data misuse.
The allure of easy money prompts individuals to engage in high-risk ventures or share private data on digital platforms, and the pursuit of quick gains may overshadow the associated risks, leaving users susceptible to data exploitation. Malicious entities, armed with advanced technologies such as AI, can turn that data into deepfakes, causing reputational damage and emotional distress. It is imperative to prioritize safety and privacy over the temptation of easy money in the digital landscape.
● SMP (Social Media Platform) Framework - To enhance user privacy and protect against the misuse of personal data, especially for women, social media and tech entities like Facebook, WhatsApp, Instagram, Google, Snapchat, etc., should consider implementing the following measures:
➢ Robust Privacy Settings: Provide clear and easily accessible privacy settings that allow users to control who can access their personal information, photos, and videos. Enable granular controls over sharing preferences, allowing users to select specific audiences for different types of content.
➢ Advanced Facial Recognition Controls: Implement user-friendly features that allow users to opt-out of facial recognition technologies.[22] Provide options for users to manage and control how their facial features are used within the platform.
➢ Encryption and Secure Communication: Enhance end-to-end encryption for messages and media shared within the platforms to ensure that even the service providers cannot access the content.
➢ Two-Factor Authentication: Encourage and facilitate the use of two-factor authentication to add an extra layer of security for user accounts.[23]
➢ Educational Campaigns: Conduct awareness campaigns to educate users, especially women, about the potential risks associated with sharing sensitive content online. Provide guidelines on adjusting privacy settings and recognizing potential privacy threats.
➢ Stringent Data Handling Policies: Enforce strict policies on how user data is collected, stored, and shared. Minimize data retention periods, ensuring that data is not stored longer than necessary. Ensure that users have clear and informed consent mechanisms for data collection and usage. Provide users with the ability to revoke consent and have their data deleted if they choose to do so.
➢ Collaboration with Regulators: Collaborate with regulatory bodies to stay compliant with evolving privacy regulations. Proactively engage in discussions on best practices and standards for user data protection.
➢ Employ Blockchain Technology: Social media platforms should employ blockchain technology to protect their users' data. The mechanism of blockchain technology is discussed in detail hereinafter.
By prioritizing user privacy and implementing these measures, these platforms can contribute to creating a safer online environment and protect the personal data of all users, especially women, from potential exploitation and abuse.
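To make the two-factor authentication measure above concrete, the sketch below implements the standard time-based one-time password algorithm (TOTP, RFC 6238, built on HOTP, RFC 4226) that most authenticator apps use, relying only on Python's standard library. It is a minimal illustration of the mechanism, not a production authentication system.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """RFC 6238 time-based one-time password (HMAC-SHA1, 30-second window)."""
    key = base64.b32decode(secret_b32, casefold=True)
    t = time.time() if for_time is None else for_time
    counter = int(t // step)                       # moving factor: time window index
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: the ASCII secret "12345678901234567890",
# at Unix time 59, yields the 8-digit SHA-1 code 94287082.
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, for_time=59, digits=8))  # -> 94287082
```

The platform and the user's device share the secret once (usually via a QR code) and thereafter derive matching codes independently, so an intercepted code becomes useless once its 30-second window expires.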
● Technological Framework - To protect women from abuse by means of artificial intelligence, data, especially personal and sensitive data, must be protected before everything else, and that requires technological frameworks and stringent policies to be employed by various entities.
One form of data protection may be employed by the use of Blockchain Technology. Blockchain is a decentralized and distributed ledger technology that enables secure and transparent record-keeping of transactions across a network of computers. The basic idea behind blockchain is to create a tamper-resistant and verifiable record of transactions that is maintained by a network of participants, often referred to as nodes or peers. It has applications beyond cryptocurrencies, including supply chain management, voting systems, healthcare records, and more. Different blockchain platforms may have variations in their structures and consensus mechanisms, but the fundamental principles remain similar. Blockchain technology has several features that can enhance the security and protection of data.
However, it is important to note that while blockchain can enhance certain aspects of data security, it is not a complete solution; its use needs to be part of a broader, well-thought-out strategy that addresses the various aspects of data protection. Key elements of this broader strategy may include:
➢ Risk Assessment: Conduct a thorough assessment of the risks associated with the specific use case. Identify potential threats to data security and privacy[24], considering factors such as the nature of the data, potential vulnerabilities, and regulatory requirements.
➢ Encryption and Secure Access Controls: Implement strong encryption mechanisms to protect sensitive data, both on the blockchain and in associated storage systems.[25] Employ robust access controls to restrict data access to authorized individuals or entities.
➢ Multi-Factor Authentication: Enhance access security by implementing multi-factor authentication mechanisms. This adds an extra layer of protection to ensure that only authorized individuals can access sensitive data.
➢ Incident Response Plan: Develop a comprehensive incident response plan to address potential security breaches. This plan should include protocols for identifying, containing, and mitigating security incidents, as well as communication strategies.
➢ Regular Audits and Monitoring: Conduct regular audits to assess the security of the blockchain implementation and associated systems. Implement continuous monitoring to detect and respond to any suspicious activities promptly.
➢ Data Minimization and Purpose Limitation: Adhere to the principles of data minimization and purpose limitation. Only collect and retain the data necessary for the intended purpose, and ensure that data is not used for purposes beyond what users have consented to.
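As an illustration of the multi-factor authentication element above, the time-based one-time password (TOTP) scheme of RFC 6238, which underlies most authenticator apps, can be implemented with Python's standard library alone. This is a simplified sketch; the Base32 secret shown is the well-known RFC test key, not a real credential:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, timestep=30, digits=6, at_time=None):
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32)
    now = time.time() if at_time is None else at_time
    counter = struct.pack(">Q", int(now // timestep))   # 8-byte big-endian counter
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret ("12345678901234567890" in Base32); at T = 59 seconds
# the published test vector yields the 6-digit code 287082.
SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(SECRET, at_time=59))   # -> 287082
```

Because the code is derived from a shared secret and the current time window, a stolen password alone is not enough to gain access.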
Blockchain technology can contribute to the protection of facial-feature data and other personal data derived from photos and videos posted on social media platforms in several ways:
➢ Decentralized Identity Management: Blockchain can be used for decentralized identity management, where individuals have control over their own identity information. Users can store their facial features data, personal information, and permissions on a blockchain, allowing them to control who has access to their data.
➢ Attribution and Watermarking: Blockchain can be used to establish the ownership and authenticity of photos and videos by embedding cryptographic signatures or watermarks in the media files.[26] This helps prevent unauthorized use and provides a verifiable record of content ownership.
➢ Secure Storage and Encryption: Blockchain can be combined with secure storage solutions to protect the actual content of photos and videos. Encryption techniques can be applied to ensure that even if data is accessed, it remains unreadable without the appropriate decryption keys.
➢ Data Monetization Control: Blockchain can empower individuals to control the monetization of their own data. Through smart contracts, users can receive compensation or benefits in exchange for allowing specific uses of their facial features data, creating a more transparent and fair data economy.
➢ Auditable Data Trail: A blockchain's immutable and transparent nature enables the creation of an auditable data trail. Individuals can track who accessed their data, when, and for what purpose, enhancing accountability and making it easier to identify potential breaches.
➢ Interoperability Standards: Establishing interoperability standards for identity and data on the blockchain can facilitate seamless and secure sharing of data between platforms, with user consent and control at the core of these interactions.
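The attribution and watermarking idea above can be sketched with a keyed hash: the content hash identifies the exact media file, and a signature over that hash proves who vouched for it, with the resulting record being the kind of entry one might anchor on a blockchain. In practice an asymmetric signature (e.g., Ed25519) would be used so anyone can verify without the secret key; this simplified Python sketch substitutes HMAC-SHA256, and the key and image bytes are hypothetical:

```python
import hashlib
import hmac
import os

def sign_media(media_bytes: bytes, creator_key: bytes) -> dict:
    """Produce an ownership record: the content hash identifies the file,
    the signature tag proves the key holder endorsed it."""
    content_hash = hashlib.sha256(media_bytes).hexdigest()
    tag = hmac.new(creator_key, content_hash.encode(), hashlib.sha256).hexdigest()
    return {"content_hash": content_hash, "signature": tag}

def verify_media(media_bytes: bytes, record: dict, creator_key: bytes) -> bool:
    """Re-derive the hash and tag; any edit to the file changes both."""
    expected = sign_media(media_bytes, creator_key)
    return hmac.compare_digest(expected["signature"], record["signature"])

key = os.urandom(32)                       # creator's secret key (hypothetical)
photo = b"\x89PNG...original image bytes"  # placeholder media content
record = sign_media(photo, key)

assert verify_media(photo, record, key)             # authentic copy passes
assert not verify_media(photo + b"!", record, key)  # altered copy fails
```

Storing only the record, rather than the media itself, on a ledger also respects the data-minimization principle discussed earlier.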
It is important to highlight that while blockchain can enhance data security, it cannot address all privacy concerns on its own. Other measures, such as secure access controls, encryption, and user education, are essential components of a comprehensive privacy strategy.
● Legal Framework - India’s legal framework for data protection and privacy is evolving. Historically, India did not have comprehensive data protection legislation. The right to privacy was recognized as a fundamental right under Article 21 of the Constitution of India, but it could only be enforced against the State.[27] The Information Technology Act, 2000 and the Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules, 2011, provided some legal framework on privacy: they enabled the electronic storage of documents and provided for the retention of personal information or data. In August 2023, India passed its first comprehensive data protection law, the Digital Personal Data Protection Act, 2023 (DPDPA), which aims to foster the growth of the digital economy while keeping citizens’ personal data secure and protected. While these laws apply to all citizens, specific laws have also been enacted for the protection of women in India. The DPDPA introduces a broad definition of “personal information”, creates transparent disclosure requirements for data controllers (referred to as “Data Fiduciaries” in the DPDPA) with an emphasis on notice and consent, establishes fairly strong data subject rights, provides for possible limitations on cross-border data transfers, and places various obligations on data controllers to safeguard personal data.[28]
Existing legislations fall short in addressing the pressing issues of deepfakes and AI-related threats, especially concerning women's safety. India lacks specific laws to combat the surge in deepfake and AI-driven digital crimes. Collaborations with experts, advocacy groups, and international entities are crucial to draft comprehensive legislation that defines offenses, imposes penalties, and establishes protective measures for women against evolving digital threats.
Legislation should mandate disclosure of AI-generated content and establish swift removal mechanisms for deepfakes. Critical provisions for victim support, expedited legal procedures, and rehabilitation services must be included. Addressing cross-border challenges through international cooperation is vital. Creating a nuanced legal framework tailored to combat deepfake abuse proactively safeguards women from evolving digital threats, upholding privacy and security.
V. Conclusion
Framing stringent legal policies so that criminally minded persons cannot exploit their loopholes is of utmost importance and a matter of utter urgency. Such policies must be framed alongside technological experts with genuine mastery of the subject. Completely preventing AI-driven threats, including the abuse of deepfake technology, cannot be dreamt of in this age of ever-growing technology, but meaningful checks can be implemented by the makers of the technology themselves. Although such technologies have become easy to come by, they are not easy to create; they demand substantial knowledge and finances, and those who fund them must be made aware of the consequences. Some may not care about the morality of it, but they will most certainly care if strict authoritative action is taken against those who violate the laws related thereto. This article urges policymakers and legislators to come forward with a roadmap for curbing AI-driven threats, through the ways suggested throughout this article and many others not explored herein.
In conclusion, this article illuminates the escalating threat to women's privacy posed by the malicious application of artificial intelligence, particularly through the insidious use of deepfake technology. The study underscores the gravity of the issue, emphasizing instances of cyberbullying, harassment, and blackmail fueled by AI-driven deepfakes. It calls for stringent regulations governing AI deployment and reinforces the importance of robust data usage policies. The article advocates for proactive measures on social media platforms, proposing automatic encryption of media files during upload to impede the misuse of facial and physical data for deepfake creation.
Moreover, the study explores the protective potential of encryption and blockchain technology, asserting their role in fortifying online data security. The integration of advanced encryption mechanisms and blockchain solutions is posited as a comprehensive defense against privacy infringements. Ultimately, the research advocates for a holistic strategy encompassing regulatory interventions, fortified data usage policies, secure social media practices, and cutting-edge technologies like encryption and blockchain. By addressing these dimensions, the proposed strategy seeks to navigate the evolving landscape of AI-driven threats and safeguard women's privacy and sanctity in the digital era.
*******
[1] A.M. Turing, “Computing Machinery and Intelligence,” 49 Mind 433-460 (1950) https://redirect.cs.umbc.edu/courses/471/papers/turing.pdf. [last visited November 25, 2023]
[2] IBM Data and AI Team, “AI vs. Machine Learning vs. Deep Learning vs. Neural Networks: What’s the difference?” https://www.ibm.com/blog/ai-vs-machine-learning-vs-deep-learning-vs-neural-networks/ [last visited November 25, 2023]
[3] Arend Hintze, “Understanding the four types of Artificial Intelligence,” GovTech https://www.govtech.com/computing/understanding-the-four-types-of-artificial-intelligence.html [last visited Nov 27, 2023]
[4] Liwei Wang, Lunjia Hu, Jiayuan Gu, Yue Wu, Zhiqiang Hu, Kun He, John Hopcroft; “Towards Understanding Learning Representations: To What Extent Do Different Neural Networks Learn the Same Representation” https://arxiv.org/abs/1810.11750 [last visited November 27, 2023]
[5] M.T. Jafar, M. Ababneh, Mohammad Al-Zoube and Ammar Elhassan, “Digital Forensics and Analysis of Deepfake Videos,” 11th International Conference on Information and Communication Systems (ICICS) (2020)
[6] T. Nguyen, C. Nguyen, D. Nguyen, D. Nguyen, S. Nahavandi, “Deep Learning for Deepfakes Creation and Detection” (2019)
[7] P. Zhou, X. Han, V.I. Morariu, and L.S. Davis; “Two Stream Neural Networks for Tampered Face Detection,” 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) (2017)
[8] F. Schroff, D. Kalenichenko, and J. Philbin, “Facenet: A unified embedding for face recognition and clustering”, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2015)
[9] C. Calvert and J. Brown, “Video Voyeurism, privacy and the internet: Exposing peeping toms in cyberspace” 18 Cardozo Arts & Ent. Law Journal 469 (2000)
[10] Marco Viola and Christina Voto, “Designed to Abuse? Deepfakes and Non-Consensual Diffusion of Intimate Images,” 201 Synthese 30 (2023) https://doi.org/10.1007/s11229-022-04012-2 [last visited Nov 25, 2023]
[11] Michael Friedewald & Ronald J. Pohoryles, “Technology and Privacy,” 26:1-2 Innovation: The European Journal of Social Science Research 1-6 (2013) http://dx.doi.org/10.1080/13511610.2013.768011 [last visited Dec 11, 2023]
[12] “What Are Deepfakes and Why the Future of Porn is Terrifying,” Highsnobiety (2018) https://www.highsnobiety.com/p/what-are-deepfakes-ai-porn/ [last visited Dec 17, 2023]
[13] Yvette D. Clarke, “Defending Each and Every Person from False Appearances by Keeping Exploitation Subject To Accountability Act of 2019,” H.R.3230, 116th Congress (2019-2020) https://www.congress.gov/bill/116th-congress/house-bill/3230 [last visited Dec 17, 2023]
[14] https://www.technologyreview.com/2021/02/12/1018222/deepfake-revenge-porn-coming-ban/ [last visited Dec 17, 2023]
[15] Henry and Flynn, “Image-Based Sexual Abuse,” Butler, “A Critical Race Feminist Perspective on Prostitution & Sex Trafficking in America”; Butler, “Performative Acts and Gender Constitution: An Essay in Phenomenology and Feminist Theory”; Dodge, “Digitizing Rape Culture.” (2019)
[16] Karen Hao, “Deepfake Porn is Ruining Women’s Lives. Now the Law May Finally Ban It,” MIT Technology Review (2021) [https://web.archive.org/web/20230312190036/https://www.technologyreview.com/2021/02/12/1018222/deepfake-revenge-porn-coming-ban].
[17] Joanna Shapland and Matthew Hall “What Do We Know About the Effects of Crime on Victims,” 14 International Review of Victimology, 175-217 (2007).
[18] Obed Sindy, “Mapping Cyberbullying Cases and Its Effects on Victims in Haiti,” Lacnic (2021)
[19] Vasileia Karasavva and Aalia Noorbhai, “The Real Threat of Deepfake Pornography: A Review of Canadian Policy,” 24 Cyberpsychology, Behavior and Social Networking 3
https://doi.org/10.1089/cyber.2020.0272 [last visited Dec 5, 2023]
[20] R.A. Delfino, “Pornographic Deepfakes: The Case for Federal Criminalization of Revenge Porn’s Next Tragic Act”, 88 Fordham Law Review 887, 897 (2019)
[21] Francesca Pratesi; Anna Monreale; Roberto Trasarti; Fosca Giannotti; Dino Pedreschi and Tadashi Yanagihara; “PRUDEnce: A System for Assessing Privacy Risk vs. Utility in Data Sharing Ecosystems;” 11 Transactions on Data Privacy 139-167 (2018).
[22] Al-Kawaz, Hiba, et al., “Advanced facial recognition for digital forensics,” Proceedings of the 17th European Conference on Information Warfare and Security (ECCWS) 11-19 (2018).
[23] AOL Mail and Dropbox, “Two Factor Authentication” (2017)
[24] Landoll, Douglas “The security risk assessment handbook: A complete guide for performing security risk assessments” CRC Press, (2021).
[25] Mahmood, Ghassan Sabeeh, Dong Jun Huang, and Baidaa Abdulrahman Jaleel. "A secure cloud computing system by using encryption and access control model," 15:3 Journal of Information Processing Systems 538-549 (2019).
[26] Mikkilineni, Aravind K., et al., “Signature-embedding in printed documents for security and forensic applications,” Security, Steganography, and Watermarking of Multimedia Contents VI, vol. 5306, SPIE (2004).
[27] Dr. Harman Preet Singh, “Data Protection and Privacy Legal-Policy Framework in India: A Comparative Study vis-à-vis China and Australia,” 2:2 Amity Journal of Computational Sciences (AJCS) (2020).
[28] Probir Roy Chowdhury, “Digital Personal Data Protection Act: India’s New Data Protection Framework” Clifford Chance & JSA (2023).