Monday, March 9, 2015

Morality of Military UAS

Introduction

The question of whether UAS, or drones in the true sense of the word, are acceptable to use in warfare is a settled issue.  UAS have been in military use since the end of World War I, beginning with the “Kettering Bug,” and have been continuously adapted for different missions ever since.  Weaponized UAS are nothing new either; there were even nuclear-armed UAS in the 1950s and 1960s, when the U.S. Navy used the QH-50 DASH (Drone Anti-Submarine Helicopter) to hunt Soviet submarines with nuclear depth charges and homing torpedoes.  (Gyrodyne Helicopter Historical Foundation, 2013)  These systems have served in various reconnaissance roles from Vietnam through the recent wars in Iraq and Afghanistan.  The question is not whether it is ethical to have UAS for military purposes, but what the ethical boundaries and constraints for using them are.

Military UAS Morality

This question is now becoming more relevant not because of the proliferation of military UAS but because of the rapid advance of technology, specifically technology allowing greater levels of autonomy.  The central question is how much autonomy we are willing to give UAS, and whether we dare give them the ability to engage targets without direct human involvement.  Autonomy seems like a simple, easily understood term, but in the context of UAS it is complex and often confusing.  Many UAS today have high levels of autonomy when it comes to flying but require direct human control for any weapons engagement.  Is that type of autonomy bad?  Some UAS currently under development can make their own flight plans, evade enemy air defenses and aircraft, and locate their assigned targets without direct human input.  Is that level of autonomy too much?  Is the breaking point the moment a UAS decides to kill a target, even a human being, without any input from a human operator?  “Autonomy has also been defined as the ability to pull the trigger, without a human initiation or confirmation.” (Johansson, 2011, p. 280)

Allowing UAS to make the “kill” decision without human input is where the line needs to be drawn.  Human logic, reasoning, empathy, and judgment are needed for a decision of this kind.  Even a future artificial intelligence (AI) that might be said to possess these capabilities would not make the idea acceptable.  An AI would not be human; its core beliefs, morality, and ethics would most likely differ from ours.  It would not be killing a member of its own species, so it would not apply the same decision criteria, making it unsuitable for this kind of decision.  Machines of any kind should never be allowed to target and kill without direct input and authorization from a human being.
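To make the “in-the-loop” idea concrete, here is a minimal sketch, in Python, of what a human-in-the-loop weapons-release gate might look like.  This is purely illustrative; the names and the operator-console stand-in are invented for this post and do not describe any real system.

# Minimal sketch of a human-in-the-loop weapons-release gate.
# Hypothetical illustration only; not based on any fielded system.
from dataclasses import dataclass

@dataclass
class Target:
    track_id: str
    description: str

def operator_approves(target: Target) -> bool:
    # Stand-in for a secure operator console: a qualified human
    # reviews the track and must explicitly approve the engagement.
    answer = input(f"Engage {target.track_id} ({target.description})? [y/N] ")
    return answer.strip().lower() == "y"

def engage(target: Target) -> None:
    # The aircraft may fly, navigate, and track autonomously,
    # but weapons release is impossible without human authorization.
    if operator_approves(target):
        print(f"{target.track_id}: engagement authorized by operator.")
    else:
        print(f"{target.track_id}: engagement denied; continuing surveillance.")

engage(Target("T-042", "example radar track"))

The point of the structure is that the autonomous parts of the system never reach the release step on their own; only an explicit human decision does.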

Conclusion

The use of unmanned systems of any type, whether air, sea, or land, is not morally wrong in and of itself and has been common military practice for almost a century.  The real questions are how they will be used and what level of autonomy they will be given.  UAS are extremely useful tools of war, and in most cases there is nothing morally wrong in their use, even in killing targets, as long as a human is “in-the-loop” making those kill decisions.  It becomes morally wrong when no human is part of the kill decision.  No unmanned system should ever be allowed to fire at targets without direct human involvement.

References



 
