AI, Digital Arrests and Deepfakes: The Dark Side of the Digital Age

Shaleen Mahajan

mahajanshaleen5@gmail.com

India’s rapid digital growth has transformed governance, banking, and communication, making services faster, more accessible, and better connected. But this progress has also opened the door to sophisticated cybercrimes that exploit fear, trust, and gaps in digital literacy. Among the most serious threats are “digital arrest” scams and AI-powered deepfakes, which endanger basic rights such as personal liberty, dignity, privacy, and trust in institutions.

Imagine getting a phone call from someone claiming to be a police officer, warning that you are about to be arrested and demanding immediate payment to avoid it. This is not fiction; it is the reality of digital arrest scams in India. Recent high-profile cases show how serious the problem has become. Bollywood actor Suniel Shetty obtained interim protection from the Bombay High Court after his image was misused in deepfake content, while industrialist S.P. Oswal lost ₹7 crore to scammers who impersonated the Chief Justice of India and staged a fake Supreme Court hearing. These examples show that digital crimes can affect anyone, regardless of social status or profession.

A “digital arrest” is a fraudulent scheme in which cybercriminals impersonate police officers and coerce victims into making immediate digital payments. Victims are typically accused of serious crimes such as money laundering or drug trafficking and pressured to pay “verification fees” or “bail money” under threats of arrest, frozen bank accounts, or public embarrassment. Under the Constitution, Article 21 protects personal liberty, while Article 22 guards against illegal arrest. Laws such as the Bharatiya Nagarik Suraksha Sanhita (BNSS), 2023 require proper legal procedure and judicial oversight for every arrest. Because a “digital arrest” has no legal existence, any such demand is itself a crime, combining fraud, impersonation, and the violation of constitutional rights.

In addition to these scams, AI-powered deepfakes create another serious problem by attacking credibility itself. Generative AI makes non-consensual intimate imagery (NCII), sometimes called revenge porn, even more harmful by producing hyper-realistic images and videos in which a person’s face is placed on sexual content, even though no real recording exists. The line between real and fake becomes blurred, making it very hard to prove the content is false. Global studies indicate that nearly 90% of deepfake content is pornographic, mostly targeting women. In India, the true scale of the problem is likely higher than official statistics suggest because of social stigma and under-reporting, highlighting the urgent need for both legal and technical solutions.

The term “deepfake” combines “deep learning” and “fake,” and refers to content created using advanced AI techniques such as Generative Adversarial Networks (GANs). A GAN pits two neural networks against each other: a generator that produces synthetic media and a discriminator that tries to tell it apart from real samples; trained on large sets of a person’s images or videos, the generator learns to produce highly realistic imitations. Deepfakes can lead to identity theft and fraud. For example, a video falsely showing the MD and CEO of the National Stock Exchange promoting stock services was circulated on social media. They also invade privacy and can cause social, economic, and political harm, even threatening democracy, as in the case of Irish presidential candidate Catherine Connolly, who filed a complaint about a fake video claiming she had withdrawn from the election.
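For readers unfamiliar with the mechanics, the sketch below illustrates the adversarial training loop at the heart of a GAN. It is a deliberately minimal PyTorch example that learns a toy one-dimensional distribution rather than faces; real deepfake systems apply the same generator-versus-discriminator idea to images and video at vastly larger scale.

```python
# Minimal sketch of the adversarial training loop behind GANs.
# Illustrative only: deepfake pipelines operate on face images/video,
# not the toy 1-D distribution used here.
import torch
import torch.nn as nn

# Generator: maps random noise to a synthetic sample.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: scores how "real" a sample looks (logit output).
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data: N(3, 0.5)
    fake = G(torch.randn(64, 8))

    # Discriminator learns to separate real from generated samples.
    opt_d.zero_grad()
    d_loss = loss(D(real), torch.ones(64, 1)) + \
             loss(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Generator learns to fool the discriminator.
    opt_g.zero_grad()
    g_loss = loss(D(G(torch.randn(64, 8))), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())  # should approach 3.0
```

As the two networks improve in tandem, the generator's outputs become progressively harder to distinguish from real data, which is precisely why deepfake detection is so difficult.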

In India, the Ministry of Electronics and Information Technology (MeitY) has proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The proposed changes would require digital platforms to remove prohibited content once they become aware of it, strengthen grievance procedures, and label AI-generated or altered content. Platforms must also exercise due diligence to prevent unlawful content from being hosted or shared.

The Information Technology Act, 2000 addresses cybercrimes through multiple provisions. Section 66C punishes electronic identity theft with up to three years imprisonment and a monetary fine. Section 66D penalizes cheating by personation using a computer resource, also with up to three years imprisonment. Section 66E handles privacy violations, while Sections 67 and 67A regulate the circulation of obscene or sexually explicit content. Section 69A empowers the government to block public access to unlawful online content.

The Digital Personal Data Protection Act, 2023 (DPDP Act) requires data fiduciaries to obtain consent from data principals and implement technical safeguards, imposing penalties for non-compliance. The Bharatiya Nyaya Sanhita (BNS), 2023, criminalizes spreading misinformation causing public mischief (Section 353) and allows prosecution of organized cybercrimes, including deepfake-related offences (Section 111). Constitutionally, Article 21 guarantees life and personal liberty, including privacy and dignity, while Article 19(1)(a) protects freedom of speech, subject to reasonable restrictions for public order, decency, morality, and individual dignity.

The concept of safe harbor, provided under Section 79 of the Information Technology Act, 2000, protects intermediaries from liability for the actions of third parties, so long as they observe due diligence and lack actual knowledge of illegal content. Courts have repeatedly emphasized the need to balance regulation against freedom of speech. In Shreya Singhal v. Union of India (2015), the Supreme Court struck down Section 66A of the IT Act as vague and overbroad, while upholding the blocking power under Section 69A subject to procedural safeguards. Similarly, in Kunal Kamra v. Union of India (2024), concerns were raised over empowering a government Fact Check Unit under the Press Information Bureau to identify false information, as this could lead to excessive government control and weaken safe harbor protections.

India has also developed several reporting and enforcement mechanisms. The National Cyber Crime Reporting Portal enables anonymous reporting of offences, especially crimes against women and children. The Indian Cyber Crime Coordination Centre (I4C) monitors trends and issues notices, while the Sahyog Portal streamlines notice-sharing with intermediaries. CERT-In issues cybersecurity advisories, and the Grievance Appellate Committee hears appeals against decisions of social media grievance officers. In addition, Standard Operating Procedures under the IT Rules, 2021 guide intermediaries and law enforcement agencies in curbing the spread of non-consensual intimate images and deepfakes.

AI-generated content increasingly violates personality rights, as seen in cases such as Arijit Singh v. Codible Ventures LLP, Ankur Warikoo v. John Doe, and incidents involving actor Akshay Kumar. These cases highlight how AI tools are misused to replicate voices, faces, and identities without consent. Recognising these risks, several countries have introduced AI-specific legal safeguards. Denmark has amended its copyright law to protect an individual’s body, facial features, and voice for up to 50 years after death. In the United States, the TAKE IT DOWN Act, 2025 criminalises the publication of intimate images without consent, including AI-generated deepfakes. China requires both explicit and implicit labelling of AI-generated content, while the EU AI Act (Article 50) mandates that providers and users of AI systems clearly mark synthetic outputs in a machine-readable and detectable form.
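By way of illustration, the sketch below shows one simple way a machine-readable "AI-generated" label could be attached to an image, in the spirit of Article 50 of the EU AI Act and China's labelling rules. The metadata key names (ai_generated, generator) are hypothetical, and plain PNG text chunks are easily stripped; production systems rely on tamper-evident provenance standards such as C2PA Content Credentials.

```python
# Hedged sketch: embedding a machine-readable synthetic-content label
# in PNG metadata. Key names are hypothetical, chosen for illustration;
# real compliance regimes use standardized, tamper-evident formats.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_synthetic(src_path: str, dst_path: str) -> None:
    """Copy an image, attaching an 'AI-generated' text chunk."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")           # hypothetical key
    meta.add_text("generator", "example-model-v1")  # hypothetical key
    img.save(dst_path, pnginfo=meta)                # dst must be a .png

def read_label(path: str) -> dict:
    """Return the PNG text chunks (label included, if present)."""
    return Image.open(path).text

# Usage:
#   label_as_synthetic("output.png", "output_labeled.png")
#   print(read_label("output_labeled.png"))
```

The design point is that the label travels with the file and can be checked automatically by platforms, which is what "machine-readable and detectable" requires in practice.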

In contrast, India’s legal framework dealing with cybercrime and artificial intelligence remains fragmented and insufficient. While existing laws punish offences such as obscenity, voyeurism, and cyberstalking, they do not clearly recognise deepfake creation, AI-driven impersonation, or purely digital harms as distinct offences. This gap makes investigation difficult, weakens deterrence, and delays justice for victims. The technical complexity of AI systems, especially Generative Adversarial Networks, further complicates accountability by spreading intent across developers, deployers, and users. As a result, establishing mens rea, verifying digital evidence, and fixing legal liability becomes challenging. Although preventive measures such as MeitY’s 24-hour takedown SOP for NCII and deepfakes, on-device deepfake detection by companies like Gen (Norton) and Intel, and biometric liveness systems used in corporate environments show progress, these efforts remain scattered and fall short of a comprehensive legal framework.

Globally, intermediaries are no longer granted absolute immunity. Indonesia’s action against Grok AI demonstrates that liability can arise where AI-related harm is foreseeable. Similarly, the EU AI Act follows a risk-based regulatory model with enforceable penalties, in sharp contrast to India’s advisory-driven approach. While the Digital Personal Data Protection Act focuses on consent-based data processing, it does not address synthetic or AI-generated representations, leaving victims without effective remedies. Moreover, AI-driven harms such as predictive policing and automated forensic tools challenge traditional ideas of human intent and responsibility under the IT Act, the Bharatiya Nyaya Sanhita, and the DPDP Act, raising serious constitutional concerns under Articles 14 and 21 related to fairness, dignity, and personal autonomy.

AI-generated evidence also presents new challenges for criminal justice. The use of opaque, “black-box” AI tools for risk assessment or forensic analysis limits transparency and threatens the right to a fair trial. Algorithmic bias can reproduce existing social inequalities, increasing the risk of wrongful prosecutions. Additionally, the continuous alteration of digital evidence by AI systems weakens chain-of-custody standards, creates privacy risks, and may unfairly shift the burden of proof onto the accused.

India has taken important steps through the DPDP Act, the Bharatiya Nyaya Sanhita (BNS), the Bharatiya Sakshya Adhiniyam (BSA), and the Bharatiya Nagarik Suraksha Sanhita (BNSS). However, these laws remain largely technology-neutral rather than AI-specific. As a result, probabilistic outputs generated by AI systems are often treated as factual evidence, which weakens the adversarial justice process. Further, procedural safeguards under the BNSS do not adequately prevent excessive surveillance or the misuse of digital metadata, increasing the risk of abuse.

The proposed Deepfake Prevention and Criminalization Bill, 2023 attempts to address these gaps by explicitly criminalising non-consensual sexual deepfakes, deepfakes created to incite violence or disrupt official proceedings, and those used for fraud or identity theft. The Bill also proposes the creation of a National Deepfake Mitigation and Digital Authenticity Task Force. This body would track the spread of deepfakes, recommend penalties, advise on technological safeguards such as digital watermarking and blockchain-based verification, and suggest privacy-protective measures. Although the Bill has not yet been enacted, it marks an important move toward AI-specific regulation and is intended to work alongside existing cyber and technology laws.
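To make the verification idea concrete, the sketch below shows hash-based media fingerprinting, one minimal building block behind the watermarking and blockchain-based verification the Bill contemplates. The in-memory dictionary is a stand-in for a tamper-resistant ledger; an actual deployment would anchor the fingerprint on a blockchain or in a signed registry at the moment of publication.

```python
# Hedged sketch of hash-based media authentication. A plain dict stands
# in for the tamper-resistant ledger a real system would use.
import hashlib

registry: dict[str, str] = {}  # fingerprint -> provenance record

def fingerprint(path: str) -> str:
    """Compute a SHA-256 digest of the media file's bytes."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def register(path: str, source: str) -> None:
    """Anchor the file's fingerprint at publication time."""
    registry[fingerprint(path)] = source

def verify(path: str) -> str | None:
    """Return the recorded source, or None if unknown or altered."""
    return registry.get(fingerprint(path))
```

Because even a one-pixel edit changes the hash, a match proves the file is the one originally registered, while a miss flags content that is either unregistered or has been altered since publication.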

In conclusion, India’s legal and technological systems must develop together to effectively respond to autonomous AI-generated harms, ensure accountability, and protect constitutional rights. Strong legislation, effective oversight, and responsible use of AI are essential to protect personal liberty, dignity, and justice while maintaining public trust in digital systems. By strengthening preventive governance, institutionalising victim-focused remedies, and updating laws to reflect the unique challenges posed by AI, India can better address threats such as digital arrest scams, deepfakes, and AI-enabled crimes, and move toward a safer and more just digital future.