Research Spotlight: The Human Cost of Giving AI the Right to Kill Through Autonomous Weapons
Autonomous weapon systems, enabled by AI, are changing how wars are fought around the world.
Automation is disrupting the battlefield, and AI-enabled autonomous weapons are edging closer to making life-and-death decisions on their own. This newsletter unpacks what that means for international law, human responsibility, and the future of war, breaking down the key takeaways from a recent peer-reviewed journal article on autonomous weapon systems and explaining why keeping humans in control is not just a technical issue, but a moral and legal necessity.
What Are Autonomous Weapons and Who’s in Control?
Autonomous Weapon Systems (AWS) are machines that can select and attack targets without real-time human input. They are becoming part of modern military strategy due to their speed, efficiency, and potential to reduce human risk on the battlefield.
However, as these systems grow in complexity and decision-making power, concerns are mounting over the loss of human control in life-and-death scenarios.
The journal article highlights that there is no single definition of what counts as “meaningful human control.” While many countries and experts agree that some level of control is necessary, they differ on what that control should look like.
Some believe it means having a human approve each action, others think it is enough for a person to monitor the system, and some are satisfied with having a human design the rules in advance. Each position carries different implications for safety, accountability, and legality.
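To make these three positions concrete, here is a minimal sketch in Python. It is our own illustration, not something taken from the article or any real weapon system, and every name in it is hypothetical; it simply shows how the degree of human control changes what a system may do on its own.

```python
from enum import Enum

class ControlMode(Enum):
    """Three commonly debated levels of human control over an AWS (illustrative only)."""
    IN_THE_LOOP = "a human approves each engagement"
    ON_THE_LOOP = "a human monitors and can veto"
    OUT_OF_THE_LOOP = "humans only set the rules in advance"

def engagement_allowed(mode: ControlMode, human_approved: bool, human_vetoed: bool) -> bool:
    """Shows how much each mode depends on a live human decision at the moment of attack."""
    if mode is ControlMode.IN_THE_LOOP:
        return human_approved       # nothing happens without explicit approval
    if mode is ControlMode.ON_THE_LOOP:
        return not human_vetoed     # the system proceeds unless a human steps in
    return True                     # OUT_OF_THE_LOOP: pre-set rules are the only check
```

The point is simply that, as the mode loosens, fewer uses of force depend on a contemporaneous human judgement.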
International discussions, including those at the United Nations, have not settled on a clear standard. But the global consensus leans toward ensuring humans are still involved in decisions to use force. The challenge is to define what “meaningful” means in operational, ethical, and legal terms.
The author points to growing interest in creating a legal and ethical framework around AWS. Groups like Human Rights Watch and Article 36 advocate for strict limits or outright bans unless human control is ensured. Their worry is that without such control, machines may make mistakes or cause harm that cannot be properly reviewed or held accountable under current laws.
The article underlines the need for “informed human conduct,” a concept that places responsibility on trained humans to supervise, override, or halt AWS actions when needed. This conduct should not be superficial or tokenistic. Real control must include the ability to understand the system, intervene when necessary, and make decisions based on legal and ethical standards.
AWS can support military operations, but the technology must not be allowed to replace human judgement. As states debate future regulations, the push for meaningful and informed human involvement is not just a legal or ethical issue; it is about keeping humans responsible for human outcomes.
What “Meaningful Human Control” Really Means
The phrase “meaningful human control” is often used in discussions about autonomous weapon systems, but it is rarely explained in concrete terms. In this article, Sehrawat outlines how the term has become a focal point in debates around law, ethics, and military policy.
While “human control” is clear, the qualifier “meaningful” is not. States and experts interpret it differently.
For instance, some governments define it as requiring a person to approve every action an AWS takes. Others are comfortable with a system that operates independently once activated, as long as it was originally programmed by a human. These differences have created uncertainty over what kind of control is enough to ensure that the use of force remains lawful and accountable.
According to the article, meaningful control involves more than just pressing a button. It includes having the ability to understand the situation, apply judgement, and intervene when needed. The operator must be able to stop or change the weapon’s behaviour in real time, especially if legal conditions change or the target no longer meets military criteria. This capacity for informed oversight is called “informed human conduct.”
The article presents a working definition: meaningful control is the kind of control that allows a human to preserve agency and take moral responsibility when deciding to use force. This means knowing the target, understanding the legal and ethical context, and acting within clearly defined operational limits like time and location.
Systems must be predictable and transparent, with tools that let users make accurate decisions based on timely information.
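What this kind of informed oversight might look like in practice can be sketched very roughly in code. The snippet below is our own illustration, assuming hypothetical hooks (target_still_lawful, operator_decision, fire); it shows the idea of an informed human decision gating the use of force within pre-set limits, not how any real system is built.

```python
import time

def supervised_engagement(target_still_lawful, operator_decision, fire, timeout_s=30.0):
    """Minimal sketch: keep re-checking the legal picture and wait for an informed
    human decision before any use of force."""
    deadline = time.time() + timeout_s      # operational time limit set by humans in advance
    while time.time() < deadline:
        if not target_still_lawful():       # e.g. civilians now present, or the target has changed
            return "held: target no longer meets criteria"
        decision = operator_decision()      # returns "approve", "abort", or None (still assessing)
        if decision == "abort":
            return "aborted by operator"
        if decision == "approve":
            return fire()
        time.sleep(0.1)                     # keep polling within the operational window
    return "expired: no informed human decision in time"
```

The design choice worth noting is that the default outcome is inaction: if the legal picture changes, or no informed human decision arrives in time, nothing happens.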
Sehrawat argues that true human control cannot be symbolic or reactive. It must be active, informed, and designed into the system from the beginning. It is about setting rules, monitoring actions, and taking responsibility. Without these features, the use of AWS risks becoming both legally weak and ethically unacceptable.
This makes meaningful human control not just a policy preference, but a practical requirement to keep humans in charge of weapons that can end lives.
⚖️ Human vs Machine: Can AI Be Trusted With Life-or-Death Decisions?
Autonomous Weapon Systems raise deep ethical concerns about the role of humans in warfare. These systems are designed to detect, select, and engage targets without direct human involvement. The problem is not only technical but fundamentally moral. At its core is a simple fact: machines lack human conscience.
The article explains that ethics plays a central role in international discussions about AWS. Ethical questions surface when machines are allowed to make decisions that affect life and death. The human brain draws on instinct, emotion, experience, and moral awareness.
These traits determine how people judge when and how to use force. AWS cannot replicate that. They follow rules, but they do not understand them. They can be programmed to avoid civilian targets, but they do not grasp why civilian life should be protected in the first place.
Autonomous weapon systems also lack agency and intent. These are essential for taking responsibility for actions in armed conflict. Human responsibility is both a legal and moral safeguard.
Without a human in control, it becomes difficult to assign blame if the system makes a mistake. This is not just a technical risk but a threat to the idea of human dignity. The principle of human dignity holds that only humans should decide when to take a life.
The article makes clear that meaningful human control is not just a safeguard; it is a moral obligation.
Delegating decisions of force to a machine removes a critical layer of accountability.
This is why many countries and civil society groups insist that humans must always remain responsible for the use of lethal force.
The author notes that ethics are often shaped by emotional sensitivity and human instincts. AWS cannot feel hesitation or compassion. They cannot pause and reconsider based on new information or changing circumstances. Their lack of emotional reasoning means they cannot substitute for human decision-making.
For these reasons, the article supports strong international rules requiring human involvement at every stage of using AWS.
This includes setting boundaries, overseeing decisions, and taking responsibility when things go wrong. Without that, trust in these systems will always be in doubt.
Military Advantages: Why Some Are Betting on Killer Bots
Autonomous Weapon Systems are gaining support in military circles because of their technical capabilities and strategic value. The article by Vivek Sehrawat explains that autonomy allows machines to carry out tasks without constant human involvement.
AWS combine this ability with advanced sensors, targeting technology, and onboard programming to perform specific missions under human-set conditions.
The main military advantages lie in speed, endurance, precision, and risk reduction. Autonomous weapon systems can operate in dangerous or inaccessible areas, monitor targets for extended periods, and act with consistency. They do not suffer fatigue, fear, or emotional stress. Unlike human soldiers, they do not make decisions based on emotion, which may improve adherence to rules during combat situations.
Some examples discussed in the article include Turkey’s Kargu-2 drone and Israel’s Harpy loitering munition. These systems are designed to operate with minimal oversight during specific missions.
For instance, the Harpy can patrol an area, identify threats, and attack radar targets. Even though it operates independently once launched, the mission parameters are set by humans.
The systems are designed to work within constraints. Autonomy is always limited by rules of engagement, geography, time, and specific mission goals. Ukraine’s systems reportedly include targeting data for 64 types of threats, showing how detailed human input can guide AWS performance. This control helps align their actions with international humanitarian law.
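To picture what human-set conditions could mean in software, here is a hypothetical sketch. The names and fields are invented for illustration; it does not describe the Harpy, the Kargu-2, or any Ukrainian system. It shows a mission envelope that confines an AWS to a place, a time window, and a vetted list of target types.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class MissionEnvelope:
    """Human-defined limits an AWS may not exceed (illustrative only)."""
    area: tuple                                          # (min_lat, min_lon, max_lat, max_lon)
    start: datetime                                      # mission time window set before launch
    end: datetime
    allowed_targets: set = field(default_factory=set)    # e.g. a vetted list of threat types

    def permits(self, lat: float, lon: float, when: datetime, target_type: str) -> bool:
        min_lat, min_lon, max_lat, max_lon = self.area
        inside_area = min_lat <= lat <= max_lat and min_lon <= lon <= max_lon
        inside_time = self.start <= when <= self.end
        return inside_area and inside_time and target_type in self.allowed_targets
```

Every check in that sketch encodes a decision a human made before launch; the system’s autonomy operates only inside that envelope.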
Autonomous weapon systems also bring economic and tactical benefits. They reduce the number of soldiers required in the field, cut costs of long missions, and improve battlefield coordination. Their ability to act faster than human reflexes means they can respond quickly during threats or emergencies.
However, the article warns that autonomous weapon systems without meaningful human control may pose risks. If these systems become self-learning, their actions may become unpredictable and harder to review.
This unpredictability makes it difficult to ensure compliance with legal norms and prevent unlawful attacks or accidental harm. The author argues that such systems should not be deployed unless their behaviour can be controlled and understood.
⚖️ Drawing the Legal Line: Can War Machines Follow Rules?
International Humanitarian Law (IHL) sets the rules of war, aiming to limit harm and protect civilians. As autonomous weapon systems become more capable, the question is how to ensure these technologies comply with those rules.
Vivek Sehrawat explores whether artificial intelligence systems can truly respect the principles of distinction, proportionality, military necessity, and precaution, all core to International Humanitarian Law.
The article explains that meaningful human control is not just a helpful feature. It is a legal and ethical necessity. According to the author, meaningful human control ensures that real people remain responsible for decisions that affect human lives. Without it, machines could act unpredictably and violate IHL principles without anyone to blame or hold accountable.
Distinction requires identifying combatants and separating them from civilians. Proportionality demands that attacks avoid excessive harm relative to the expected military advantage. These judgements require context, experience, and moral reasoning, capacities that autonomous machines lack. Sehrawat shows that delegating such decisions to machines without real-time human oversight threatens these principles.
The article draws attention to a growing consensus: IHL must be respected by design. In practice, this means programming strict limitations into the weapons' software, training human operators, and building legal reviews into each stage of development and deployment. If a system cannot be designed or operated in a way that respects IHL, it should not be used.
The paper also proposes that meaningful human control should be treated as a baseline legal standard under IHL. Without it, the use of autonomous weapons could be considered illegal, regardless of whether specific harm occurs. This perspective reinforces the need for clear boundaries in designing and using these systems.
In essence, Sehrawat calls for a legal architecture where war technologies remain tools, not independent agents. Keeping people in control is not just about ethics or policy. It is about ensuring that the laws of war apply, even when machines are doing the fighting.
AI and Autonomous Weapons: A Tense Relationship
Artificial intelligence plays a critical role in determining the future of warfare. This article also focuses on how AI is embedded in weapon systems and why its involvement raises legal, ethical, and operational concerns.
The core concern is human control: when an AI-powered machine can decide when to use force, it raises serious questions about accountability, legality, and safety.
AI is what enables autonomous weapon systems to process data, identify patterns, and act without human instruction in real time. These systems are being developed to operate faster than any human can, in environments too risky for human soldiers. That makes AI not only a technical tool but also a decision-making force.
In Sehrawat’s view, the integration of AI changes the very structure of how decisions are made during conflict. The article stresses that meaningful human control must remain central, even as AI takes on more tasks.
The author explains that there are varying degrees of control: having a human approve every action, having a human monitor the system, or having humans only design the rules in advance. AI challenges all three by making systems more independent and less predictable.
The concern grows as AI systems begin to “learn” from their environments. When machines adapt without direct human oversight, the ability to trace and explain their decisions becomes limited. This makes it harder to apply the rules of International Humanitarian Law, which demand that actions in war be justifiable and proportionate.
Beyond the article, several global actors have taken firm positions on this issue:
The United Nations is pushing for international consensus on limiting fully autonomous weapons.
Countries like Austria and Brazil have proposed treaties to prohibit AI from making targeting decisions without human input.
Scholars also highlight that AI cannot make moral judgements. It can calculate and simulate, but it lacks awareness, empathy, or a sense of right and wrong. War involves more than technical operations; it involves values and choices. AI cannot fulfil that role.
The concern is not just about whether AI can follow instructions. It is about whether those instructions can keep up with real-life combat scenarios. Situations change rapidly in warfare.
Civilian presence, the identity of targets, or the necessity of action can shift in seconds. Human operators use judgement to pause, change course, or abort a mission. AI, by contrast, may lack this flexibility.
Sehrawat proposes that meaningful human control must include informed human conduct. This means operators must understand how AI systems work, what they are likely to do, and how to step in when needed. He argues this should be built into the design and training of every autonomous system.
Autonomous weapons are being trained to make decisions in seconds. But wars are messy. Civilians and combatants move; environments change. Autonomous weapons cannot always keep up. Without a human in control, these systems might attack the wrong target or fail to stop when circumstances change mid-conflict. Unlike humans, they do not understand context or feel the need to double-check before they strike. That is what makes keeping humans in the decision process so important.