The two discussion papers on artificial intelligence published in February 2021 and August 2021, together forming a two-part approach document, were an update to the document published in June 2018 by NITI Aayog, a public policy think tank tasked with highlighting the benefits of responsible AI. The 2018 document gave a detailed overview of artificial intelligence, including its definition, prospective uses, implementation issues in India, strategies for integrating AI into the economy, efficiency goals, and recommendations for governmental action.
The 2021 publications cover two main areas: the Principles of Responsible AI, presented in Part 1, and the Operationalizing Principles for Responsible AI, discussed in Part 2. Drawing on these documents, this piece examines whether current AI-based technologies are being deployed correctly. Part 1 of this Article concentrates on the definition of Artificial Intelligence from a worldwide standpoint; Part 2 describes how AI is employed as an integrated component of CCTV surveillance; and Part 3 examines the flaws in the integration of AI into facial recognition.
Understanding the definition of artificial intelligence
In his paper published by the International Association of Defense Counsel, Professor Gary E. Marchant defined AI in its most basic form as “the development and use of computer programs that perform tasks that normally require human intelligence.”
Section 3(3) of the National Artificial Intelligence Initiative Act [NIAA], 2020, defines Artificial Intelligence as a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to:
- perceive real and virtual environments;
- abstract such perceptions into models through analysis in an automated manner; and
- use model inference to formulate options for information or action.
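As a rough illustration (and not part of the Act itself), the three steps in this statutory definition can be sketched in code. Every name, value, and threshold below is hypothetical and chosen only to make the perceive–abstract–infer structure concrete:

```python
# Illustrative sketch of the statutory three-step definition of an AI system:
# (1) perceive an environment, (2) abstract perceptions into a model in an
# automated manner, (3) use model inference to formulate options for action.
# All names and numbers are invented for illustration.

def perceive(environment: list[float]) -> list[float]:
    """Step 1: collect raw observations from a real or virtual environment."""
    return environment

def abstract_to_model(observations: list[float]) -> float:
    """Step 2: abstract perceptions into a model automatically.
    Here the 'model' is simply the mean of the observations."""
    return sum(observations) / len(observations)

def infer_options(model: float, threshold: float = 0.5) -> str:
    """Step 3: use model inference to formulate an option for action."""
    return "recommend action" if model > threshold else "recommend no action"

observations = perceive([0.2, 0.9, 0.7])
model = abstract_to_model(observations)
print(infer_options(model))  # the mean (0.6) exceeds the 0.5 threshold
```

The point of the sketch is that the statute describes a pipeline, not a technology: each stage could be a camera feed, a neural network, or a spreadsheet formula and still fall within the definition.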
According to Article 3(1) of the proposed Artificial Intelligence Act of 2021, an “artificial intelligence system” is software that is developed with one or more of the techniques and approaches listed in Annex I and can generate outputs such as content, predictions, recommendations, or decisions influencing the environments with which they interact for a given set of human-defined objectives.
According to Appendix I of the discussion paper published in June 2018, from an Indian perspective, “AI has been achieved when we apply machine learning to large data sets. Rather than receiving explicit programming instructions, machine learning algorithms recognize patterns and learn how to make predictions and recommendations by analyzing data and experiences. The algorithms also adjust in response to fresh data and experiences in order to enhance efficacy over time.”
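The paper's distinction between explicit programming and learning from data can be illustrated with a minimal, hypothetical example: a "model" that is never told what value to predict, but derives it from the data it sees and adjusts as fresh data arrives.

```python
# Minimal illustration of the discussion paper's point: the model learns a
# pattern from data rather than from explicit instructions, and it adjusts
# as fresh data arrives. The predictor itself is invented for illustration.

class RunningMeanPredictor:
    """Predicts the next value as the mean of everything seen so far."""

    def __init__(self) -> None:
        self.count = 0
        self.mean = 0.0

    def learn(self, value: float) -> None:
        # Incremental mean update: each new example shifts the model,
        # which is exactly the 'adjusting to fresh data' the paper describes.
        self.count += 1
        self.mean += (value - self.mean) / self.count

    def predict(self) -> float:
        return self.mean

model = RunningMeanPredictor()
for v in [10.0, 12.0, 14.0]:
    model.learn(v)
print(model.predict())  # 12.0 — learned from the data, not hard-coded
```

Real machine learning systems are vastly more complex, but the legal consequence is the same: the rule the system applies is a product of its training data, not of any instruction a programmer can point to.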
2019 Kumbh Mela: AI usage in CCTV surveillance
Kumbh Mela is one of the world’s largest religious gatherings, and artificial intelligence has proven valuable at every stage of managing it, particularly in the use of cameras as a legal instrument. The clearest example is the 2019 Kumbh Mela, where over 1,000 cameras were deployed across 3,200 hectares.
The entire system was capable of CCTV security monitoring, facial identification, automatic number plate recognition, and red-light violation detection, as detailed in the 2021 article authored by Biru Rajak, Sharabani Mallick, and Kumar Gaurav. The use of AI-enabled cameras to identify suspected “trouble-makers” was best illustrated at the 2019 Kumbh Mela, demonstrating that the utility of AI affects both individuals and society as a whole.
The AI utilized here first identifies a person using a face detection algorithm. It then produces a list of behavioral patterns and matches them against the profile of a possible offender, and finally attempts to prevent a crime based on that assessment. In effect, it serves as a risk assessment tool.
In a detailed article, Emaneulla Halfeld thoroughly explains the need for ethics in assessing incarcerated individuals, criticizing the risk assessment tool COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) for the machine bias in its assignment of danger scores. Her analysis, illustrated with detailed photos, demonstrated AI’s discriminatory inclination and rendered the tool untrustworthy.
The concern is that if AI regulation is not enforced, the same pattern will prevail in India.
Consequences of AI-driven CCTV on Individuals
Let’s examine the problem using a few examples and questions, beginning with the personal impact of AI at the individual level, particularly in circumstances where the technology misidentifies an individual or falsely implicates an innocent person.
For example, if the AI determines that a person ‘A’ is a ‘trouble-maker,’ the question that would be incredibly difficult to settle in court is how the AI program arrived at the judgment that the behavior was suspicious.
What elements did the AI cameras consider in making that claim? How easily could a court grasp the procedure the AI used to reach its decision? And is a claim of suspicion made by an AI even admissible in a court of law? The Black Box Problem refers to a situation in which the decisions made by an AI, and the machine learning behind them, cannot be understood. In such cases, it is unclear who bears the blame for the AI’s blunder, and the government appears to have unwittingly acquired an excessive ability to intrude into citizens’ personal lives.
The judgment cannot be attributed to the police or administrative system because it was made by an artificial intelligence system, leaving a gap in where to place accountability. A person who has been wrongfully accused may sue for damages in a defamation claim, but the AI itself cannot be held responsible: lacking legal personality, it falls outside the ambit of a penal system that applies only to human entities. Such a person would be powerless in this situation.
Furthermore, the question of who would recompense the wrongfully accused for the loss caused to individuals on a personal basis emerges. Should we hold the creators of that AI system accountable? Should we instead hold the government agency accountable? Or the private commercial company, entity, or think tank that the government has hired as a consultant? Or the government employee who decided to summon the suspect for questioning? These questions will arise and must be addressed before implementing AI in the Indian economy.
Consequences of AI-driven CCTV on Society
On a societal level, implementing AI as a solution has significant ramifications. Businesses investing in AI technologies will have the chance to shape AI to meet their own needs, which may influence how it is applied on the ground. Adam Schwartz argued in an article that the government’s use of surveillance cameras and equipment in Chicago’s surveillance system was a disturbing step toward the society described in the dystopian novel 1984.
His recommendation was to include a privacy safeguard to ensure the protection of fundamental rights. It is worth keeping in mind that the COMPAS tool described earlier was created by a commercial firm, Equivant.
Because AI is based on probability and accuracy, there is a high possibility that AI surveillance cameras will tag the same person ‘A’ as a ‘trouble-maker’ again. This is machine bias: because person ‘A’ was flagged previously, the machine learning system unconsciously assumes the same behavioral pattern will recur and tags him or her again, which is a grave violation of several fundamental rights.
On the other hand, there is the possibility of deliberate bias in AI, also known as Discriminating Artificial Intelligence, in which the individuals, entities, companies, think tanks, or research centers that design AI algorithms build a bias into them to serve their own needs: a developer might design an AI that favors his or her own community, or worse, build it so that it favors people from his or her hometown. This would be a structural violation of Article 15.
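The feedback loop behind the first kind of bias can be demonstrated with a toy model. The weights and threshold below are invented for illustration; the point is only that once a prior flag feeds back into the score, person ‘A’ keeps being re-flagged even when their observed behavior is identical to everyone else’s:

```python
# Toy demonstration of the machine-bias feedback loop described above.
# All weights and thresholds are invented for illustration.

THRESHOLD = 0.5

def score(behavior_signal: float, previously_flagged: bool) -> float:
    # The prior flag contributes to the new score — this is the feedback
    # loop: past output becomes future input.
    return behavior_signal + (0.4 if previously_flagged else 0.0)

history = {"A": True, "B": False}  # 'A' was flagged once in the past
for person, was_flagged in history.items():
    s = score(behavior_signal=0.3, previously_flagged=was_flagged)
    print(person, "flagged" if s >= THRESHOLD else "not flagged")
# 'A' is flagged again despite behaving identically to 'B'.
```

Removing the feedback term would make the two outcomes identical, which is why auditability of such scoring logic matters: the discrimination is invisible in the output unless the scoring function itself can be inspected.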
The implementation of artificial intelligence in cameras occurred without the passage of a proper regulatory law, such as the European Union’s proposed Artificial Intelligence Act or the National Artificial Intelligence Initiative Act [NIAA], 2020. Deploying artificial intelligence before establishing regulatory rules can cause major disruption, as the experiences of both the United States and the European Union show: in both, AI deployment predated the formation of regulatory laws. We must fill these gaps by answering the questions raised here, enacting legislation first, and deploying these technologies afterward.
For example, according to the June 2018 discussion paper and the summary article written by Professor V. P. Gupta, facial recognition cameras fall under Artificial Narrow Intelligence. On that basis, we would need to move from Artificial Narrow Intelligence toward Artificial General Intelligence before an inquiry into the justification for labeling a person a “Trouble-Maker” becomes meaningful. The suggested policy is expected not only to have legal validity but also to show a stronger proclivity for providing rational justification.
Article 5(1) of the Artificial Intelligence Act of 2021 describes in detail the practices that are prohibited when using artificial intelligence, precisely to prevent such difficulties from occurring. It should not be too difficult for our government to draft new laws if it looks to existing laws in other countries for inspiration and uses them as a starting point. The legislative body should enact legislation, or at the very least a strategy, to police AI so that it can be managed within defined bounds.
Article 9 of the same Act lays down specific requirements for developing Risk Assessment Tools. It is critical to recognize that employees tasked with managing AI Risk Assessment Tools may violate privacy for personal gain. A mechanism should therefore be implemented within the government infrastructure, similar to the Right to Information Act, 2005, that allows anyone to scrutinize or request an audit of the CCTV monitoring system in the event of potential exploitation. The government must provide complete transparency in determining any potential misuse of the Closed-Circuit Television (CCTV) system.
In addition to the currently enforced CCTV system, it is recommended that the government establish a mandatory insurance scheme under which, in the event of an error by the AI, the insurance company compensates the affected party on the AI’s behalf, ensuring a form of accountability for the AI.
It may be deduced that, in light of the government’s implementation of AI tools for welfare, equivalent safety measures must be adopted to protect citizens from potential misuse of the technology, especially in the absence of legislative frameworks to control its operation.