India’s Unregulated AI Surveillance Landscape

Abstract

India’s growing reliance on surveillance technologies has created a multifaceted dilemma. While these technologies can improve governance and security, they also pose significant threats to civil liberties and individual privacy.

The rapid deployment of surveillance for national security, law enforcement, and urban governance raises concerns about data misuse and the targeting of vulnerable populations. Given a legal framework that is both inadequate and incomplete, surveillance in India risks encroaching on fundamental rights and diminishing trust in institutions.

India must find a suitable balance to address the myriad challenges presented by surveillance, adopting processes that privilege transparency, accountability, and oversight. By prioritizing individuals’ rights and protecting fundamental freedoms, India can harness the positive impacts of surveillance technology without compromising its constitutional integrity and values. This will require the collective effort of policymakers, civil society organizations, and private sector stakeholders to ensure that surveillance serves the needs of a modern democracy.

Introduction

Artificial intelligence (AI) is having an unparalleled impact on public governance, reshaping how states interact with citizens and provide for their security within legal parameters. This transformation is increasingly visible in India through the widespread use of AI-based surveillance tools. From facial recognition software deployed in airports and public spaces to predictive policing algorithms utilized by police departments, AI surveillance is gaining traction.

The appeal of AI surveillance is easy to understand. The prospect of faster data processing, increased public safety, and more efficient resource allocation is apparent and compelling. In a nation as populous and diverse as India, these benefits have tremendous implications. Agencies can track crime more effectively, monitor crowds during large public gatherings, and identify people who may pose a threat to national security. These developments also further the Indian government’s ambition to emerge as a leading global technology power and its objective of modernizing public administration.

The rapidly growing role of AI surveillance is also generating a great deal of debate. Questions about privacy, transparency, and accountability are at the centre of concerns about this proliferation. India’s early forays into digital governance, such as the Aadhaar biometric identification system, have revealed the tensions between evolving technology, on the one hand, and basic rights and freedoms, on the other.

Section 1: The Current State of AI Surveillance in India

The use of AI surveillance in India is predominantly framed through facial recognition technology (FRT). The National Automated Facial Recognition System (NAFRS), a centralized project run by the National Crime Records Bureau (NCRB), aims to build a searchable repository of facial records drawn from existing sources such as CCTV footage for use by law enforcement across the nation. It sits alongside growing state and municipal use of FRT, such as the Integrated People Information Hub (IPIH) operated by the Telangana State Police in Hyderabad and the Delhi Police’s AI-based crime prediction system. Together, these local and national initiatives illustrate a deepening institutional reliance on algorithmic surveillance. Urban governance likewise draws on AI: “smart” cities under the Smart Cities Mission are equipped with AI-enabled CCTV networks for traffic monitoring, crowd control, and crime prevention, and these surveillance systems are combined with existing data analytics platforms to provide real-time monitoring.

In the national security context, AI is used in border surveillance, aerial monitoring by drones, and even the analysis of social media sentiment to predict unrest or extremism. These technologies can process huge amounts of data and spot patterns that suggest a threat.

However, transparency around AI surveillance systems in India is minimal. Very little public information is available about how these systems work, what individual data is collected, how long data is retained, and who has access to these systems. Transparency and standard operating procedures are critical for evaluating effectiveness and fairness.

In addition, civil society organizations have raised alarm regarding data bias, potential misuse by data-holding authorities, and the lack of consent in the sharing of that data. Studies have demonstrated that facial recognition systems perform worse for women and marginalised communities, which increases the likelihood of discriminatory outcomes.

AI surveillance systems provide obvious benefits for policing and for managing ever-growing urban spaces. But the current ecosystem is marked by uncertainty and a lack of accountability. Without public guidance from the Government and accountability frameworks, AI surveillance systems are as likely to become tools of the state as instruments of the public good.

Section 2: Legal Gaps in India’s AI Surveillance Framework

With AI technology increasingly integrated into surveillance, India does not have a specific regulatory framework to address the use of AI for surveilling citizens and residents. The laws that currently (if imperfectly) apply are either out of date, fragmented, or fail to account for the complexity of the issues presented by AI technologies.

In India, the Information Technology Act, 2000 serves as the primary legislation for addressing digital data and cybersecurity issues. However, this act lacks specific provisions for AI and surveillance technologies. Existing regulatory mechanisms for monitoring government surveillance are limited to telecommunications systems, governed by the Indian Telegraph Act, 1885, and the Indian Wireless Telegraphy Act, 1933.

Recent legislation, such as the Digital Personal Data Protection Act (DPDP), 2023, is a progressive step towards stronger data protection. However, the DPDP Act has drawn criticism for vesting excessive power in the central government, including the authority to exempt any government agency from audit or compliance, which raises concerns that state surveillance will persist unmonitored.

Indian law does not yet contain specific provisions for ethical AI. Policy documents such as NITI Aayog’s ‘National Strategy for Artificial Intelligence’ point to the importance of responsible AI deployment, but they do not carry the legal weight of legislation. To date, India has not adopted legislation establishing a comprehensive regulatory body and framework for ethical AI, nor substantive privacy law equivalent to the European Union’s AI Act or the General Data Protection Regulation (GDPR), with stringent standards governing AI implementation, especially in the context of surveillance.

Equally concerning, there is little oversight from either the courts or Parliament. Most surveillance programs are sanctioned by executive orders that leave little room for accountability or transparency. No independent body scrutinizes the legality, necessity, or proportionality of AI surveillance measures, which opens the door to misuse and abuses of power.

The vagueness of legal standards around AI and surveillance also extends to data retention, algorithmic opacity, and mechanisms for pursuing remedy or redress. Ordinary citizens have very little understanding of the ways in which their data is collected, processed, and shared. There are currently no legally enforceable rights to appeal intrusive AI-based actions or to demand reasonable explainability, which creates further barriers to seeking justice for wrongful surveillance.

Section 3: Risks and Consequences of Unregulated AI Surveillance

The absence of adequate legal protections in India’s AI surveillance environment has created a multitude of threats to individual rights, including privacy breaches, algorithmic bias, authoritarianism, and violations of civil liberties. The most immediate harm is the infringement of the right to privacy.

In the landmark case of Justice K.S. Puttaswamy v. Union of India (2017), the Supreme Court of India established privacy as a fundamental constitutional right. However, in the absence of an enforceable privacy law (an absence called out by the Court itself) and an accompanying enforcement framework, AI-enabled surveillance practices continue to operate unfettered by judicial review, undercutting privacy rights that are legally recognized under the Indian Constitution.

AI-enabled systems also suffer from bias, further compounding the problem. Facial recognition technologies have shown disconcerting error rates, substantially higher for darker-skinned people and women. In India, where caste, religion, and social class have long influenced policing and surveillance, biased AI systems can reinforce social inequalities and injustices under the guise of neutral, pre-programmed algorithms. Those at greatest risk are marginalized communities, who are most likely to be wrongfully classified, over-surveilled, or unfairly targeted by authorities.

The opacity of AI systems makes accountability even more difficult. Unlike human decision-making, algorithmic judgment tends to work as a ‘black box,’ making it nearly impossible to follow the logic behind a decision, and even harder to dispute errors. This is not only a grave injustice to individuals; it also undermines due process and the right to a fair trial.

A major fear is that state actors will use surveillance systems to silence dissent and target political rivals. A surveillance system that lacks adequate checks can be used to intimidate journalists, activists, and minority groups. The Pegasus spyware incident exemplifies the dangers posed by unchecked surveillance, and points to the absence of processes that could ensure accountability and oversight.

The constant feeling of being surveilled can also be psychologically damaging, deterring free expression and civic engagement. Fear of surveillance leads to self-censorship and a chilling effect on democratic participation. This is particularly dangerous in a diverse, pluralistic society like India, where free association and free speech are vital to public discourse.

In conclusion, unregulated AI surveillance poses clear and present dangers to rights and liberties, democracy, and social equity. Without appropriate regulation, these technologies risk being turned against the very people they are intended to protect.

Section 4: Potential Solutions and Recommendations

Addressing the problem of AI surveillance in India requires action on many fronts, including legal reform, institutional architecture, technology, and civil society engagement.

The first step is to pass legislation that clearly governs the use of AI in surveillance. This legislation should define when public and private actors may deploy AI technology, and ensure that any such use is reasonable, proportionate, and legal. It should also mandate transparency and openness across the AI lifecycle, including algorithmic impact assessments and data protection processes.

Independent oversight is needed from a dedicated regulatory body that is as insulated from executive influence as possible. This body would be mandated to oversee AI surveillance initiatives, audit AI systems, enforce ethical standards, and investigate public complaints.

In addition, data protection laws must be in place if privacy rights are to be protected. These laws should limit data collection for specific lawful purposes, compel informed consent for data collection, stipulate data retention times, and require safe data storage and auditing for third-party access.

Algorithmic accountability represents another essential principle. Developers and deployers of AI should be obliged to undertake bias testing and make the results publicly available, and they must also provide remedy for individuals affected by algorithmic harm through a right to explanation and appeal.

Moreover, stakeholder engagement is essential to guarantee an inclusive policy-making process. Policymakers must engage technologists, civil society organizations, lawyers, and structurally marginalized groups in the drafting of law, so that it reflects the domestic context. Likewise, awareness campaigns can help citizens understand their rights and responsibilities, as well as the implications of AI surveillance.

Lastly, international cooperation can facilitate this process. India may draw on global best practices such as the EU GDPR and the EU AI Act, and cooperate with fellow democracies to establish global norms for responsible AI.

By implementing such measures, India can introduce a surveillance regime that balances security with individual rights, promotes trust in technology, and protects democratic principles.

Conclusion

India is at a pivotal moment in its technological development. The application of AI in surveillance presents significant possibilities for public safety, governance, and national security, but these must not come at the price of individual rights and democratic values.

This piece has provided a context for understanding the rapid growth of AI surveillance in India and its recognised benefits. However, it has emphasised the dangers of operating with few, if any, regulatory frameworks in place. The consequences are serious, for individuals and for the emergence of a surveillance state: breaches of privacy, algorithmic discrimination, and potential authoritarianism.

For India’s use of AI surveillance to be meaningful and positive, it must respond to that risk. Comprehensive law that includes independent monitoring, the development of AI in a responsible and ethical manner, and the involvement of civil society will help lay the groundwork for a fair and accountable surveillance ecosystem.

India will benefit from a whole-of-society approach. Collaboration among policymakers, private entities, civil society, and citizens is valuable, but transparent, accountable, rights-based governance is a necessity if India is to get the most out of AI-empowered technology. There is little benefit in a programmed, autonomous, silent observer that serves some notion of the public good while undermining the values of democratic governance.

****

Name: Aakanksha Agarwal
Co-authors: Atridev Pandey, Alok Krishnanshoo
Institution: Shri Ramswaroop Memorial University
Bio: Law students passionate about legal reforms, digital innovation, and public policy.