Command and Ctrl: India’s Place in the Lethal Autonomous Weapons Regime


Technological advancement in artificial intelligence has made the deployment of Lethal Autonomous Weapons practically, if not legally, possible within a few years. As the international community struggles to arrive at a definition of ‘autonomous weapons’, the need to regulate their use has become paramount. Beyond the legal and ethical considerations, there are also concerns that the military-technological divide between developed and developing nations may trigger a new arms race. For countries like India, which enjoy conventional military superiority in their region, it is important not to allow autonomous weapons to offset that superiority. This can be achieved by proactively investing in the research and development of these weapons rather than by attempting to maintain the status quo.

I. Introduction

War is as old as politics. Carl von Clausewitz, the preeminent military theorist, considers war a continuation of politics by other means.1 Indeed, history is replete with examples of how war has guided and defined the politics of its age. The advent of nuclear weapons 70 years ago brought with it a political regime coloured not only by the moral imperative of preventing catastrophic conflicts but also the strategic goal of export controls. Technology, after all, has an overbearing influence on politics when it ceases being neutral and begins to carry the potential to change lives.

In the 21st century, the development of sentient technologies has elicited heated debate among experts and futurists around the world: do these technologies represent that paradigmatic shift in technology and in people’s lives? Artificial Intelligence (AI) has become a ubiquitous presence in people’s lives in the form of personal assistants on smartphones, spam filters in email inboxes and curated playlists on music-streaming services. Yet, useful as these are, they represent only rudimentary AI.

Today, scientists across the world are engaged in cutting-edge research and development into technologies that may eventually replicate the decision-making prowess of the human brain.2 Ray Kurzweil, a renowned futurist and inventor from the US, once described the process of development when a new paradigm is about to emerge: a slow growth phase; an exponential growth phase; and a plateau phase, when the paradigm finally matures.3 Some argue that the world is on the cusp of the exponential growth phase, where even a marginal breakthrough in the existing state of the art could usher in an era where AI will not only match human intelligence but surpass it.4

An important fulcrum for this debate has been the development of artificial-intelligence weapons, called Lethal Autonomous Weapons Systems (LAWS), that can literally make life-or-death decisions. Even as the international community grapples with the potential uses of these weapons, no consensus definition of LAWS has emerged. The US Department of Defense, for example, defines autonomous weapons as “A weapon system that, once activated, can select and engage targets without further intervention by a human operator. This includes human-supervised autonomous weapon systems that are designed to allow human operators to override operation of the weapon system, but can select and engage targets without further human input after activation.”5 Other organisations, like the International Committee of the Red Cross (ICRC), use a more expansive definition: “Any weapon system with autonomy in its critical functions. That is, a weapon system that can select (i.e. search for or detect, identify, track, select) and attack (i.e. use force against, neutralize, damage or destroy) targets without human intervention.”6

Absent a consensus on what may be defined as ‘autonomous weapons’, the international community is already taking note of their growing threat, and various deliberations on the use and legality of such weapons have been held recently. The United Nations Institute for Disarmament Research (UNIDIR) and the Convention on Certain Conventional Weapons (CCW) are two such bodies and instruments attempting to organise a dialogue among stakeholders and lay down norms around the development and use of lethal autonomous weapons. Given that most global powers are already in advanced stages of developing lethal autonomous weapons, it is important to analyse how their proliferation will impact geopolitics. This paper examines current trends in the international discourse around LAWS and situates their use within the provisions of international humanitarian law. The objective is to look ahead and identify key areas of concern for India’s foreign policy and national security, as well as issues that merit consideration while institutionalising research and development into LAWS in India. The paper argues that while the development of LAWS cannot be constrained by policymaking, their use can be. Countries like India should, therefore, actively participate both in scripting the norms around their use and in pursuing the domestic production of these weapons.

II. Autonomy and Lethality

Autonomy is not always a clear identifier of a particular weapon. Sometimes it is the primary attribute of a weapon, as with the systems described in the US Department of Defense Directive that can select and engage targets without human intervention. At other times, autonomy is only one of many non-critical attributes of a weapon. The latter, dubbed ‘automated weapons’, represent one end of the spectrum; fully autonomous weapons represent the other. In automated systems like remote-controlled aerial vehicles and ordnance-disarming devices, substantial human control is retained, and responsibility is therefore easily attributable. It is for this reason that the CCW’s focus has been on weapons that are either fully or primarily autonomous.

As these are futuristic weapons, the exact nature and extent of their capabilities is often murky, and delineating which particular technologies are permissible is no easy task. To begin with, the very existence of these technologies is often shrouded in secrecy. Policy discussions have attempted to circumvent this by creating an inclusive definition of autonomous weapons. While such a definition is considered central to formulating regulatory norms, it has proved counter-productive. First, autonomous weapons rely on machine-learning technologies that are developed for both military and civilian purposes; restricting the development, use and testing of these technologies would also constrain their deployment for civilian purposes, where they can, presumably, serve the public good. Second, as these technologies develop incrementally over time, it is hard to identify which of them have the potential for military use.

The other difficulty with regulating autonomous weapons lies in the slow pace of policy development. Any new regulatory mechanism would take at least a few years to materialise, accounting for the time required for consultations and consensus building. Technology, on the other hand, moves at a rapid pace, which is only likely to accelerate in the coming years. The widely accepted Moore’s law, which holds that the “number of transistors in a dense integrated circuit” (a proxy for the overall processing power of computers) will double every two years, is expected to hold true for two more decades. This is buttressed by the slightly less renowned “Law of Accelerating Returns”, which states that the rate of humans’ technological advancement is not linear with the passage of time, but exponential.7 The AI revolution that will lead to the proliferation of autonomous weapons is thus unlikely to unfold at a rate that policymaking can keep up with. Policymakers should therefore seek not to regulate the development of autonomous weapons, and instead focus on regulating their use.
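The mismatch between these two timescales can be shown with simple arithmetic. The sketch below is this author's illustration of the doubling logic described above, not a claim from the cited sources:

```python
# Illustrative sketch: Moore's-law doubling (every two years) versus a
# multi-year regulatory cycle. The point is that capability compounds
# while policymaking advances roughly linearly.

def capability_growth(years, doubling_period=2):
    """Relative processing power after `years`, doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# Over one ten-year cycle of consultations and consensus building,
# capability grows 32-fold; over two decades, more than a thousandfold.
print(capability_growth(10))  # 32.0
print(capability_growth(20))  # 1024.0
```

Even if the doubling period stretches, the qualitative conclusion in the text holds: any regulation pegged to specific technologies will be outrun well within a single treaty-negotiation cycle.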

When it comes to the use of lethal autonomous weapons, the central point for consideration should therefore be lethality and not autonomy. Various functionally autonomous systems have been deployed by militaries for years. The US Navy, for instance, has been using the Phalanx Close-In Weapons System on its surface combat ships since 1978. The Phalanx can sense incoming anti-ship missiles and “autonomously [perform] its own search, detect, evaluation, track, engage and kill assessment functions.”8 Another example is the NBS Mantis, used by the German Army for forward base protection. The Mantis, which consists of six 35mm guns, is capable of automatically detecting, tracking and shooting down projectiles within close range of the base.9 These weapons, which primarily discharge a defensive function against inanimate threats, perhaps do not merit the attention of the international community.10 The matter becomes considerably more complex when imagining a near future where these weapons are not only deployed in defensive roles but also routinely assigned assault functions.11 This year, for instance, the Defense Advanced Research Projects Agency of the US DoD began sea trials of its Anti-Submarine Warfare Continuous Trail Unmanned Vessel (ACTUV).12 The ACTUV, which can operate autonomously for 60-90 days, will be primarily tasked with tracking enemy submarines in shallow waters.

Strategic policymakers must train the spotlight on autonomous systems that are capable of, and designed for, evaluating “lethality”. This approach would not focus on the technical nature of these weapons or their development, which in any case are not fully determined by policy. Rather, an ends-based approach would lend clarity to the use of these weapons in battle. Existing laws of war that prescribe how combatants must conduct themselves on the battlefield can serve as a good starting point for this analysis.

III. LAWS and International Humanitarian Law

In international law, the conduct of warfare is governed by International Humanitarian Law (IHL), or jus in bello, which broadly regulates two matters: weapons or methods of warfare whose use is illegal, and conduct by soldiers on the battlefield that violates the Geneva Conventions. Article 35 of Additional Protocol I to the Geneva Conventions (hereinafter Additional Protocol I) prohibits the use of weapons that “are of a nature to cause superfluous injury or unnecessary suffering.”13 It also prohibits weapons which are intended, or may be expected, to cause widespread, long-term and severe damage to the natural environment. This rule bans only those weapons that by their very nature would cause indiscriminate harm, or whose ‘intended’ use would have a long-term fallout. Weapons banned per se under this rule would include chemical and biological weapons.

Lethal Autonomous Weapons that are currently being developed as kinetic weapons may not be proscribed under Article 35. Even without human intervention, LAWS function within certain parameters determined by their algorithms. If these parameters are carefully set to operate under certain thresholds, then autonomous weapons can be considered to qualify as legal weapons. Moreover, a per se ban on lethal autonomous weapons in warfare is not only difficult but may be ill-advised, considering the tactical and strategic advantages that the automation of weapons brings.

The law regulating the conduct of soldiers in the battlefield is found under Article 48 of the Additional Protocol I. Article 48 ensures that parties to a conflict “shall at all times distinguish between the civilian population and combatants and between civilian objects and military objectives and accordingly shall direct their operations only against military objectives.”14 These rules are further operationalised under the Protocol. Article 51(4)(a) prohibits indiscriminate attacks that are not against a ‘specific’ military objective. This can prove to be a contentious issue in the use of autonomous weapons as these are programmed to respond automatically to what they perceive as threats. Once the payload from these weapons has been released automatically, it will be difficult to intervene even if the weapon has mistakenly engaged civilian targets. This puts fully autonomous weapons in contention with Article 57(2)(b) of the Protocol which states that if it becomes apparent that the object of the attack is non-military then the attack shall be cancelled or suspended.15 Article 58(a) also puts an additional responsibility on the attacker to take necessary steps to ensure the removal of civilian population and objects from the area before engaging the enemy.16

The distinction rule, therefore, ensures that a weapon used on the battlefield discriminates between enemy combatants and non-combatants. The rule of precaution ensures that the weapon and its wielder correctly assess that its use does not unnecessarily put civilians in harm’s way. The disengagement principle ensures that an attack is cancelled or suspended once its target proves to be non-military, so that civilians are not needlessly harmed. Lethal autonomous weapons deployed on the frontlines of battle may not necessarily be able to make these distinctions.
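As a purely illustrative exercise, the three rules above can be read as sequential gates that any lawful engagement decision, human or machine, must pass. The names and structure below are this author's hypothetical sketch of that reading, not an actual targeting system or a statement of how any LAWS is programmed:

```python
# Hypothetical sketch: the distinction, precaution and disengagement rules
# (Arts. 48/51(4)(a), 58(a) and 57(2)(b) of Additional Protocol I, as
# discussed in the text) expressed as sequential checks. All names here
# are illustrative inventions.

from dataclasses import dataclass

@dataclass
class EngagementContext:
    is_military_objective: bool        # distinction: target is a specific military objective
    civilians_cleared: bool            # precaution: civilians removed from the area
    still_military_on_reassessment: bool  # disengagement: target has not proved non-military

def may_engage(ctx: EngagementContext) -> bool:
    """Return True only if all three IHL gates are satisfied."""
    if not ctx.is_military_objective:          # Art. 51(4)(a): no indiscriminate attacks
        return False
    if not ctx.civilians_cleared:              # Art. 58(a): precautions before engaging
        return False
    if not ctx.still_military_on_reassessment: # Art. 57(2)(b): cancel or suspend
        return False
    return True
```

The sketch makes the paper's point concrete: once a payload is released automatically, the third gate can no longer be applied, which is precisely where fully autonomous weapons come into tension with Article 57(2)(b).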

It has been argued that compliance with IHL for autonomous weapons will vary considerably, depending on the rules of engagement, the threat faced and, primarily, the nature of the battlefield.17 In naval and aerial warfare, where the presence of civilian populations and objects is negligible, the use of LAWS can be unrestrained. In fact, as technology progresses, their use may even become essential, given the rapid response times that future threats may demand. This does not change the fact that in the majority of wars, which are fought on land and in urban environs, the use of autonomous weapons may fall afoul of existing IHL. Indeed, under international law, nation states are required to analyse whether the study, development, acquisition or adoption of new weapons is in “some or all circumstances”18 prohibited by Additional Protocol I. Further, it can be argued that once a country has developed these technologies, it will be difficult to restrict their use to only certain branches of the military or certain battlefields. The solution must therefore come from policy that regulates the conduct of war itself. Given that existing IHL does not fully encompass the range of difficulties posed by LAWS, it might be prudent to develop specialised norms under international law that specifically regulate the use of autonomous weapons and attempt to prevent their misuse. Such a policy must also ensure that a certain level of human control is maintained over these weapons, not only to prevent misuse but also to preserve a degree of responsibility on the part of state actors. A comprehensive and future-proof definition of Meaningful Human Control over LAWS must therefore be developed through multilateral consultations.

IV. India’s Political and Strategic Considerations

The proliferation of LAWS now appears inevitable with the advancement of technology, and is likely to further entrench the military superiority of the world’s Great Powers. In response to the increased focus by Russia and China on military advancement, the United States is funnelling an unprecedented amount of money into research and development of futuristic technologies. As part of its Third Offset Strategy, the Pentagon is reportedly dedicating $18 billion of its Future Years Defense Program to this effort.19 A substantial portion of this amount has been allocated to human-machine collaboration and cyber and electronic warfare, including programmes not just for advanced decision-making but also for the development of exoskeleton suits and unmanned platforms.20 This expresses a willingness and intent on the part of the world’s largest military to develop and deploy autonomous weapons. France, similarly, has sought to defend the legality of the development and use of autonomous weapons within IHL.

These political developments put countries like India at a distinct disadvantage, since their research and development of advanced weapons systems is hindered by export control regimes like the Wassenaar Arrangement. India’s response in international fora has been to hedge against the future and, until such weapons are developed, attempt to retain the balance of conventional power that it currently enjoys in the subcontinent.21 At the Informal Meeting of Experts on Lethal Autonomous Weapons held in Geneva in April 2016 under the aegis of the CCW, India reiterated this strategy. India’s permanent representative to the Conference on Disarmament, Ambassador DB Venkatesh Verma, called on the CCW to ensure that the technology gap among states is not widened in the coming years, and that the use of autonomous weapons on battlefields is not encouraged on the presumption that it will lead to fewer casualties.22 Verma also acknowledged that clear definitions of key terms like Meaningful Human Control are difficult to arrive at, and that significant deliberation is therefore necessary before the international community reaches definitive conclusions. While this may seem a sound strategy for India in the short term, it is not sustainable. As discussed earlier, international deliberations around the legality of Lethal Autonomous Weapons are unlikely to slow the rate of their development. Moreover, many nations have already expressed a willingness to use these weapons in war. The urgency of the matter is heightened by the fact that India’s regional rivals, such as Pakistan, which would be interested in these weapons to close the conventional-superiority gap with India, are likely to obtain them from China, which already has a significant head start in their development.
New Delhi must thus pursue two goals: one, foster the indigenous development of sentient technologies in collaboration with the public and private sectors, and two, help create an enabling international regime that permits the transfer of key LAWS technologies.

India has long needed a domestic military-industrial complex that can keep up with evolving 21st-century threats. Prime Minister Narendra Modi’s “Make in India” campaign offers an opportunity to close that gap. India’s technology giants, like Infosys and Tata Technologies, have already made inroads into AI and robotics. The challenge for the Indian political leadership is to imagine a cooperative framework in which civilian organisations can collaborate with bodies like the Defence Research and Development Organisation to develop and weaponise autonomous systems. The other challenge relates to the gap between India’s technological development and that of its international counterparts. To make up for this gap, India must also position itself as an importer of these technologies. The United States, the clear frontrunner in AI technologies, is India’s natural strategic ally in autonomous technologies. The US-India Defence Technology and Trade Initiative, concluded in 2012, has served as the starting point for cooperation between the two nations on defence technologies. In fact, the draft US-India Defense Technology and Partnership Act, under consideration in the US Congress, seeks to formally recognise India as a major partner of the United States and relax export control restrictions to ease the transfer of technology to India.23 Recent talks between Manohar Parrikar, India’s Minister of Defence, and Ash Carter, the US Secretary of Defense, also concluded with assurances of cooperation on “cutting-edge” projects.24 It is imperative that India keep up this momentum and leverage its partnership with the US to emerge as a future leader in autonomous weapons.

The exact manner in which autonomous weapons will be used in the future remains unclear. India is therefore correct to demand further deliberation on their use and their place within IHL. It is, however, certain that many of these technologies exist today and will likely be used on the battlefields of tomorrow, with or without human intervention. One of the major proffered advantages of AI is that it will exponentially quicken decision-making during war. There is at least one school of thought which holds that in an attack kill chain (find, fix, track, target, engage, assess), the engagement link is the least time-intensive; it is therefore unlikely that LAWS will replace human beings in this link in the immediate future, because the time saved would be insignificant.25 If this is indeed true, the legal complexity around LAWS will be significantly reduced. The usefulness of autonomous systems in the other functions, such as targeting, surveillance and damage assessment, will nonetheless remain. Automation of these functions will provide a significant advantage to any party that adopts it, and it is essential that countries like India aggressively pursue research and development into autonomous systems.
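The kill-chain argument can be made concrete with a toy calculation. The time shares below are invented placeholders by this author, not measured data; they simply encode the claim that the ‘engage’ link is the least time-intensive, so automating the other links captures most of the speed advantage without raising the legal questions that autonomous engagement does:

```python
# Illustrative sketch of the find-fix-track-target-engage-assess kill chain.
# The fractions are hypothetical shares of total cycle time, chosen only to
# reflect the text's claim that 'engage' is the least time-intensive link.

KILL_CHAIN = {
    "find":   0.30,
    "fix":    0.15,
    "track":  0.20,
    "target": 0.20,
    "engage": 0.05,  # least time-intensive link, per the argument in the text
    "assess": 0.10,
}

def cycle_time_saved(automated_stages):
    """Fraction of the decision cycle saved if the given stages are fully automated."""
    return sum(KILL_CHAIN[s] for s in automated_stages)

# Automating every link except 'engage' still captures ~95% of the cycle,
# while keeping a human on the lethal decision itself.
non_lethal_links = [s for s in KILL_CHAIN if s != "engage"]
print(round(cycle_time_saved(non_lethal_links), 2))  # 0.95
```

Whatever the true shares turn out to be, the structure of the argument is the same: if the lethal link is a small fraction of the cycle, states can reap most of AI’s speed advantage while retaining Meaningful Human Control over engagement.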


  1. Carl von Clausewitz, On War: Indexed Edition (Princeton University Press, 1989).
  2. Emily Reynolds, “Harvard Awarded £19m to Build Brain-Inspired Artificial Intelligence,” Wired UK, accessed 10 May 2016.
  3. Ray Kurzweil, The Singularity Is Near: When Humans Transcend Biology (Penguin USA, 2006), 84.
  4. Nick Bostrom, “How Long Before Superintelligence?,” International Journal of Futures Studies 2 (1998).
  5. US Department of Defense, “Directive Number 3000.09,” accessed 11 May 2016.
  6. Views of the International Committee of the Red Cross (ICRC) on autonomous weapon systems, CCW Meeting of Experts on Lethal Autonomous Weapons Systems (LAWS), Geneva, 11-15 April 2016.
  7. Ray Kurzweil, “The Law of Accelerating Returns,” Kurzweil Accelerating Intelligence (blog), 7 March 2001: “We won’t experience 100 years of progress in the 21st century — it will be more like 20,000 years of progress (at today’s rate). The ‘returns,’ such as chip speed and cost-effectiveness, also increase exponentially. There’s even exponential growth in the rate of exponential growth. Within a few decades, machine intelligence will surpass human intelligence, leading to The Singularity — technological change so rapid and profound it represents a rupture in the fabric of human history.”
  8. “The US Navy Fact File: MK 15 – Phalanx Close-In Weapons System (CIWS),” accessed 11 May 2016.
  9. “NBS MANTIS Air Defence Protection System,” Army Technology, accessed 11 May 2016.
  10. United Nations Institute for Disarmament Research, “Framing Discussions on the Weaponization of Increasingly Autonomous Technologies,” accessed 11 May 2016.
  11. “Marines Are Testing a Robot Dog for War,” Popular Science, accessed 11 May 2016.
  12. Franz-Stefan Gady, “US Navy Is Speed Testing Sub-Hunting Robot Ship,” The Diplomat, accessed 19 May 2016.
  13. Article 35, “Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts (Protocol I),” 8 June 1977.
  14. Article 48, supra note 13.
  15. Article 57(2)(b), ibid.
  16. Article 58(a), ibid.
  17. Kenneth Anderson, Daniel Reisner and Matthew Waxman, “Adapting the Law of Armed Conflict to Autonomous Weapon Systems,” 90 International Law Studies 386 (2014).
  18. Article 36, supra note 13.
  19. Franz-Stefan Gady, “New US Defense Budget: $18 Billion for Third Offset Strategy,” The Diplomat, accessed 11 May 2016.
  20. Ibid.
  21. Arun Mohan Sukumar, “Weaponised Robotics Set to Shape Great Power Rivalry as India Seeks Rules of the Game,” The Wire, accessed 11 May 2016.
  22. Statement by Ambassador DB Venkatesh Verma, Permanent Mission of India to the Conference on Disarmament, CCW Informal Meeting of Experts on Lethal Autonomous Weapons, Geneva, 17 April 2016, accessed 10 May 2016.
  23. George Holding, “Text – H.R.4825 – 114th Congress (2015-2016): U.S.-India Defense Technology and Partnership Act,” legislation, 22 March 2016.
  24. “India-United States Joint Statement on the Visit of Secretary of Defense,” US Department of Defense, accessed 11 May 2016.
  25. Werner J.A. Dahm, “Killer Drones Are Science Fiction,” The Wall Street Journal, accessed 11 May 2016.


Bedavyasa Mohanty

Bedavyasa Mohanty is a Junior Fellow with ORF's Cyber Initiative and a lawyer by training.
