Friday, February 21, 2025

Development and Deployment of Autonomous Weapons in Defence and Security

By: Munira Qaiser, Research Analyst, GSDN

Autonomous weapons: source Internet

Autonomous weapons are weapons that use Artificial Intelligence (AI) to attack their targets; they can make decisions and act on their own, without human intervention, deciding when to fire and in which direction to move. These weapons are pre-programmed to kill a particular “target profile” using sensor data such as movement patterns or facial recognition: once deployed into an environment, the moment the algorithm registers a match, the weapon fires and kills the target. AI development and innovation in this field have aimed to create programmes such as human enhancement and lethal autonomous weapon systems in order to reduce the overall risk to soldiers. According to the US Congress, “artificial intelligence” refers to a machine-based system capable of making predictions, recommendations, or decisions that affect the real world, based on a set of objectives defined by humans.

The main benefit of autonomous weapons is their ability to perform their task with great precision, with minimal collateral damage and human casualties, and without requiring human support: once activated by a human operator, the rest of the task is carried out on its own. Considering the potential of Artificial Intelligence (AI) and Machine Learning (ML) to make predictions and informed decisions, along with advances in robotics and autonomy, the future of battlefield tactics for soldiers is poised for significant change.

Yet these advantages also present significant challenges and raise serious concerns. On closer consideration, weapons that rely on algorithms to make lethal decisions without human oversight are not only immoral but also pose a serious threat to national and global security.

IMMORAL: Algorithms cannot understand the worth of a human life; therefore, they should never be entrusted with the authority to determine who lives or dies. United Nations Secretary-General António Guterres concurs that machines possessing the capability and autonomy to end lives without human intervention are politically unacceptable and morally abhorrent, and ought to be prohibited under international law.

THREAT TO SECURITY: Algorithmic decision-making enables weapons to operate with the speed, cost-effectiveness, and scalability of software. This could have highly destabilizing effects on national and international security, as it introduces risks such as proliferation, rapid escalation, unpredictability, and the potential development of weapons of mass destruction.

LACK OF ACCOUNTABILITY: The deployment of autonomous weapons always raises the question “Who is responsible?” If, given their unpredictable nature, something goes against ethics or the law, who is to be held accountable for the use of force and its consequences?

To address these concerns and risks, legal and ethical frameworks are needed to govern the use of autonomous weapons. As part of such efforts, the International Committee of the Red Cross (ICRC) suggests that states should adopt certain rules and legal constraints to avert great loss.

Autonomous weapons that are unpredictable, meaning their designs and effects are too complex to understand, predict, and explain, should be banned. This encompasses weapons that acquire knowledge about their targets during operation, potentially extending to all autonomous weapons controlled by machine-learning algorithms.

Autonomous weapons that are intended to attack people directly should be prohibited, both in their use and design.

Lastly, there is a strong need for strict restrictions on the design and use of autonomous weapons that do not comply with the law and other ethical requirements, in order to mitigate the risks mentioned above.

The overall nature of warfare is evolving, with machines now able to make decisions on the battlefield that were previously solely within the purview of humans. However, this shift does not diminish the importance of human control over the use of autonomous weapons. International humanitarian law (IHL) mandates that humans must be able to directly intervene in the operation of weapons systems during an attack. This requires ensuring accountability, oversight, and the ability to make nuanced decisions in complex and unpredictable combat situations, thereby safeguarding against potential violations of IHL and ethical considerations.
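The human-control requirement can be illustrated with a purely conceptual sketch, not modelled on any real weapon system: the software may only recommend an engagement, while an explicit human decision gates whether any action is taken, and an operator abort always prevails. All names, thresholds, and values here are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    ABORT = "abort"
    AUTHORISE = "authorise"

@dataclass
class SensorReading:
    """Abstract sensor output; match_score is a hypothetical confidence value."""
    match_score: float

def machine_recommendation(reading: SensorReading, threshold: float = 0.9) -> bool:
    """The algorithm may only *recommend* engagement, never execute it."""
    return reading.match_score >= threshold

def engage(reading: SensorReading, human_decision: Decision) -> str:
    """Human-in-the-loop gate: no engagement without explicit human authorisation."""
    if not machine_recommendation(reading):
        return "no-recommendation"
    if human_decision is not Decision.AUTHORISE:
        return "aborted-by-operator"  # the human override always wins
    return "engagement-authorised"

# The operator retains the ability to intervene at the moment of attack:
print(engage(SensorReading(match_score=0.95), Decision.ABORT))      # aborted-by-operator
print(engage(SensorReading(match_score=0.95), Decision.AUTHORISE))  # engagement-authorised
```

The design point is that the authorisation check sits after the algorithmic recommendation, so no code path reaches an engagement without a human decision, which is one way of reading the IHL requirement for direct intervention described above.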

This perspective is grounded in the idea that humans are fundamentally responsible for the legal and moral obligations regarding the conduct of warfare. The Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems (GGE) expressed this stance in guiding principle (c) of its 2019 Guiding Principles:


“Human-machine interaction, which can vary in its forms and implementation throughout a weapon’s life cycle, should ensure that the potential use of autonomous weapons systems (AWS) complies with relevant international law, particularly international humanitarian law. When determining the quality and extent of human-machine interaction, several factors should be taken into account, including the operational context and the characteristics and capabilities of the weapons system as a whole.” (Report of the 2019 Session of the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems, September 25, 2019)

Human intervention would involve a variety of actions by an operator traditionally linked with operating a weapon. These actions could include manually selecting a target, firing a weapon, stopping an attack, or other legally significant activities. To take a present-day example, Israel Aerospace Industries’ ‘Harpy’ loitering munition automates several tasks that would otherwise require direct human intervention: assessing potential enemy radiation signatures, aiming the munition at the radiation source, and aborting an attack if circumstances change. These tasks are still carried out as part of the attack process by the operating state through Harpy’s control-system software, but not directly by human operators.

The ethical implications of autonomous weapons underscore the urgent need for international cooperation and regulations. The United Nations (UN) emphasizes the necessity of clear restrictions on all forms of autonomous weapons to ensure compliance with international law and ethical standards. These restrictions should encompass limitations on where, when, and for how long autonomous weapons can be deployed, the types of targets they can engage, the level of force they can exert, and the requirements for effective human oversight, intervention, and deactivation.

Despite increasing reports of testing and deployment of various autonomous weapon systems, it is not too late to take action. Over more than a decade, discussions within the United Nations, including in the Human Rights Council, the Convention on Certain Conventional Weapons, and the General Assembly, have laid the groundwork for explicit prohibition and restrictions. Now, states must build upon this foundation and engage in constructive negotiations to establish new rules that effectively address the real threats posed by these weapon technologies.

In conclusion, the development and deployment of autonomous weapons present profound ethical and security dilemmas that require immediate global attention and regulations. While these weapons offer potential advantages in terms of precision and reduced risk to military personnel, their capacity to make life-and-death decisions independently raises significant risks.

International collaboration is indispensable in establishing clear regulations and prohibitions on autonomous weapons to ensure adherence to international law and ethical standards. Initiatives such as those proposed by the United Nations and the International Committee of the Red Cross, aimed at prohibiting certain types of autonomous weapons and imposing strict restrictions on others, are crucial steps forward.

States must take decisive action to tackle these challenges and avert the destabilizing impacts that autonomous weapons could have on global security. Upholding principles of human dignity and accountability in warfare is paramount. It is essential to ensure that technological advancements serve to enhance, rather than endanger, the safety and well-being of all individuals, both on and off the battlefield.

