Tech

DeepFake And Democracy: The Rising Threat Of Political Deception In India

Tulika Bhattacharya and Mahima Mathur

Jul 17, 2025, 12:22 PM | Updated 10:10 AM IST


Prime Minister Narendra Modi himself has highlighted deepfakes as "one of the greatest threats facing the nation".
  • AI-generated deepfakes are no longer a fringe concern. From faking political speeches to reviving the dead for propaganda, they now threaten voter trust, national security, and democratic integrity, a danger flagged by PM Modi himself.
  • The very foundation of democratic discourse is now being challenged by an unprecedented force. Generative AI is not merely influencing our political narratives; it is actively reconstructing reality itself. What began with seemingly innocuous AI-generated images of politicians in quirky avatars has quickly escalated into something far more concerning.

    Just imagine the confusion and doubt sown in the minds of Indian voters when they encounter deepfakes like the one of Duwaraka, the Tamil Tiger chief's daughter, digitally resurrected to rally support despite having died years earlier. Another case involved a doctored video of Union Home Minister Amit Shah that went viral, falsely showing him advocating the scrapping of reservations for SCs and STs.

    This is not just about entertainment; it is about the serious challenge of discerning truth from fiction when sophisticated AI tools can make a lie look undeniably real, impacting public perception and, ultimately, the democratic process.

    Among perceived risks from AI, online abuse tops the list at 76 per cent, followed by deepfakes at 74 per cent, scams at 73 per cent, and AI hallucinations at 70 per cent.

    Election propaganda in India has evolved beyond door-to-door campaigns and wall posters to AI-generated fake videos. There are two main categories of deepfake videos that election campaigners deploy. One is aimed at creating a positive sentiment around the candidate being endorsed, and the other is aimed at spreading misinformation about the opposition.

    Groups specifically created for the purpose of disseminating political propaganda, including deepfake-enabled misinformation, are known as scratch groups within electoral campaigning circles. The scratch groups are meticulously segregated into four types based on the degree of gullibility of group members, with the age group between 18 and 25 being most susceptible to misinformation. This systematic targeting highlights the insidious nature of this new form of propaganda.

    But here is the catch. WhatsApp is an end-to-end encrypted messaging platform, so the content-moderation guardrails it can apply are not the same as those followed by X and Instagram. This makes the spread of deepfakes even harder to control.

    In the United States, the Federal Election Commission (FEC) has already initiated a process to potentially restrict all AI-generated deepfakes in political advertisements. The US Senate has introduced a bipartisan bill to regulate AI-generated deepfake political ads. In the United Kingdom, the National Cyber Security Centre has warned about the threat posed by deepfakes and other AI tools towards their next general elections.

    For messaging platforms like WhatsApp, which is the primary conduit for political deepfakes, there is an added challenge of encryption. A common proposal to preserve end-to-end encryption and restrict prohibited content is a method known as client-side scanning. Broadly, this involves pre-screening images before they are sent. The app checks the hash of the image or video against a database of hashes of prohibited content.
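    The hash-matching step described above can be sketched as follows. This is an illustrative simplification, not WhatsApp's actual implementation: the function names and database contents are hypothetical, and real client-side scanning proposals typically rely on perceptual hashes (which survive re-encoding) rather than the exact cryptographic hash used here for clarity.

```python
import hashlib

# Hypothetical database of hashes of known prohibited content.
# In a real deployment, the platform would distribute this list to clients.
PROHIBITED_HASHES = {
    # SHA-256 of the bytes b"test", standing in for a flagged media file
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_hex(data: bytes) -> str:
    """Return the hex-encoded SHA-256 digest of the raw media bytes."""
    return hashlib.sha256(data).hexdigest()

def allowed_to_send(media: bytes) -> bool:
    """Client-side check run *before* the media is encrypted and sent:
    block the upload if its hash matches a known prohibited item."""
    return sha256_hex(media) not in PROHIBITED_HASHES

print(allowed_to_send(b"test"))         # matches the database -> False
print(allowed_to_send(b"holiday.jpg"))  # unknown content -> True
```

    Note the limitation this makes visible: an exact hash match only catches previously catalogued files, which is one reason the approach translates poorly to freshly generated deepfakes.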

    It is unclear how this approach could work for deepfakes, and critics argue it would amount to a breach of privacy with a chilling effect on free speech. Under Rule 4(2) of the IT Rules, 2021, commonly known as the traceability provision, the government can ask messaging platforms like WhatsApp to disclose the original sender of a message. WhatsApp has challenged this provision in the Delhi High Court, arguing that tracing a single message would require monitoring all messages and would therefore break the end-to-end encryption it provides. For WhatsApp, the dilemma is how to comply with Indian legislation on deepfakes without undermining its credentials as a secure messaging platform.

    The Election Commission of India (ECI) has also taken a firm stance. It warned political parties against using AI to spread misinformation and cited seven provisions of the IT Act and other laws that carry jail terms of up to three years for offences such as forgery, promoting rumours, and inciting enmity.

    In a proactive measure to combat misinformation, the ECI launched a "Myth vs Reality Register" in April 2024, a comprehensive online repository designed to debunk fake news related to elections and promote transparency.

    In May 2024, the ECI issued specific directives to political parties on responsible social media use during campaigning. Parties were mandated to remove deepfake or manipulated content within three hours of it coming to their notice, warn the party members responsible, report unlawful information to the platforms, and escalate persistent issues to the Grievance Appellate Committee.

    The Ministry of Electronics and Information Technology (MeitY) has also issued significant advisories. It urged all intermediaries to strictly comply with the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, specifically targeting misleading information from AI-generated deepfakes.

    It emphasised Rule 3(1)(b), which requires intermediaries to make "reasonable efforts" to prevent users from hosting prohibited content, including misinformation and impersonation. The advisory explicitly stated that purposeful AI use for deepfakes is "construed as an attempt at impersonation" under Rule 3(1)(b)(vi). It also mandated informing users about potential legal consequences under the IPC and IT Act for violations, and crucially warned that non-compliance could lead to intermediaries losing their Section 79 "safe harbour" immunity from liability for user-generated content.

    The Uphill Battle: Why India Struggles to Contain Deepfakes

    Despite the existing legal provisions and regulatory directives, the enforcement of laws against deepfakes in India faces substantial challenges, revealing critical gaps in the current framework.

    One of the foremost hurdles is the inherent difficulty in detecting deepfakes. Many AI-generated deepfakes are so realistic that they pass as genuine even under scrutiny. Indian police and judicial officers often lack the advanced tools and expertise necessary to accurately identify and prove that a video is fake. This technological deficit significantly impedes investigations.

    Compounding the detection problem is the anonymity of deepfake creators. Perpetrators frequently operate anonymously, utilising VPNs or fake accounts to obscure their identities. Tracing them requires extensive coordination with internet service providers and social media platforms. Furthermore, establishing criminal intent, or mens rea, becomes exceptionally difficult, particularly when deepfakes are anonymously created or widely shared, as the sheer volume and speed of dissemination complicate traditional investigative methods.

    Classifying a given set of data as personal or non-personal is not, by itself, enough to secure digital data on the internet. A website can easily track user activity; on Facebook, the moment a user creates a profile, a shadow profile is built containing personal information such as interests and behavioural patterns. Third parties track such data and use it against consumers. The crux is that non-personal data can be converted into personal data at any time if the data controller combines it in a particular way.

    Another identified gap is the absence of a specific law that explicitly recognises or defines "deepfakes" in India. While existing provisions within the IT Act and BNS can be extended to cover deepfake-related offences, they are widely regarded as fragmented and lacking the specificity required to effectively tackle AI-generated content. This legal vacuum leaves victims vulnerable and perpetrators largely unaccountable.

    Prime Minister Narendra Modi himself has highlighted deepfakes as "one of the greatest threats facing the nation," underscoring the high-level recognition of this significant legislative gap and the urgent need for a dedicated legal framework.

    A Six-Pillar Strategy for India's Digital Security

    India's digital landscape faces a mounting threat from AI-morphed videos. The authors propose a six-pillar model to enhance digital security and regulate the risks emanating from generative AI.

    First, we need a comprehensive legal and regulatory framework. This means a dedicated Deepfake and AI Misinformation Act (under the Digital India Act) that defines and criminalises malicious deepfakes, broadens intermediary accountability to include AI tools, and revokes safe-harbour immunity for platforms that spread misinformation. We also need to mandate clear AI content labelling and bring deepfakes within the personal-data protections of the DPDP Act, while plugging legal loopholes that malicious actors could exploit.

    Second, we need cutting-edge technological countermeasures. This means investing heavily in indigenous deepfake detection tools for real-time, multimedia analysis and implementing digital watermarking standards for content provenance, as well as deploying AI for real-time misinformation monitoring.

    Third, we must integrate 'Privacy by Design' principles into all AI system development, ensuring privacy is built in from the start. This includes strict adherence to informed consent for data used in AI, applying data minimisation principles, mandating Data Protection Impact Assessments for high-risk AI, and strengthening user rights like the "right to be forgotten" for deepfake victims.

    Fourth, we must implement robust platform accountability mechanisms, ensuring platforms respond quickly and transparently to deepfake threats.

    Fifth, capacity building and public awareness are crucial. This involves specialised training for law enforcement and the judiciary in digital forensics and AI, launching national digital literacy campaigns to educate citizens on identifying deepfakes, and promoting ethical AI development.

    Finally, international collaboration and standard setting are indispensable. India must actively participate in global dialogues on AI and quantum governance, fostering bilateral and multilateral partnerships to share best practices, detection tools, and research findings.

    This integrated model is about more than just stopping lies. It is about safeguarding the truth and the very integrity of our democratic process.

    Tulika Bhattacharya and Mahima Mathur are Policy Consultants.

