A Philosopher's Blog

Ex Machina & Other Minds I: Setup

Posted in Epistemology, Metaphysics, Philosophy, Technology by Michael LaBossiere on September 7, 2015

The movie Ex Machina is what I like to call “philosophy with a budget.” While the typical philosophy professor has to present philosophical problems using words and PowerPoint, movies like Ex Machina can bring philosophical problems to dramatic virtual life. This then allows philosophy professors to jealously reference such films and show clips of them in vain attempts to awaken somnolent students from their dogmatic slumbers. For those who have not seen the movie, there will be some minor spoilers in what follows.

While The Matrix engaged the broad epistemic problem of the external world (the challenge of determining whether what I am experiencing is really real for real), Ex Machina focuses on a much more limited set of problems, all connected to the mind. Since the film is primarily about AI, this is not surprising. The gist of the movie is that Nathan has created an AI named Ava and he wants an employee named Caleb to put her to the test.

The movie explicitly presents the test proposed by Alan Turing. The basic idea is that if a person cannot distinguish between a human and a computer by engaging in a natural language conversation via text, then the computer would have passed the Turing test. In the movie, there is a twist on the test: Caleb knows that Ava is a machine and will be interacting with her in person.
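To make the structure of the test concrete, here is a minimal sketch in Python. It is my own illustration, not anything from the film or from Turing's paper; the respondent and judge functions are hypothetical placeholders. The point is simply that the judge sees only anonymized text and must guess which transcript came from the machine.

```python
import random

def imitation_game(human_reply, machine_reply, questions, judge):
    """Blind, text-only version of the test: the judge receives two
    anonymized transcripts and guesses which respondent is the machine."""
    # Randomly assign the respondents to the anonymous labels "A" and "B".
    labels = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:
        labels = {"A": machine_reply, "B": human_reply}

    # Build a question/answer transcript for each anonymous respondent.
    transcripts = {
        label: [(q, respond(q)) for q in questions]
        for label, respond in labels.items()
    }

    guess = judge(transcripts)  # the judge names "A" or "B" as the machine
    machine_label = "A" if labels["A"] is machine_reply else "B"
    return guess != machine_label  # True: the judge was fooled, so the machine passes

# Trivial stand-ins, just to show how the pieces fit together:
passed = imitation_game(
    human_reply=lambda q: "I would say: " + q,
    machine_reply=lambda q: "I would say: " + q,
    questions=["What is it like to be you?"],
    judge=lambda transcripts: "A",  # a judge that always guesses "A"
)
print(passed)
```

Note that the blindness is doing real work here: the judge never learns in advance which respondent is which. That is precisely the condition the movie drops.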

In the movie, Ava would easily pass the original Turing Test—although the revelation that she is a machine makes the application of the original test impossible (the test is supposed to be conducted in ignorance to remove bias). As such, Nathan modifies the test.

What Nathan seems to be doing, although he does not explicitly describe it as such, is challenging Caleb to determine if Ava has a mind. In philosophy, this is known as the problem of other minds. The basic idea is that although I know I have a mind, the problem is that I need a method by which to know that other entities have minds. This problem can also be recast in less metaphysical terms by focusing on the problem of determining whether an entity thinks or not.

Descartes, in his discussion of whether or not animals have minds, argued that the definitive indicator of having a mind (thinking) is the ability to use true language. Crudely put, the idea is that if something really talks, then it is reasonable to regard it as a thinking being. Descartes was careful to distinguish between what would be mere automated responses and actual talking:

 

How many different automata or moving machines can be made by the industry of man […] For we can easily understand a machine’s being constituted so that it can utter words, and even emit some responses to action on it of a corporeal kind, which brings about a change in its organs; for instance, if touched in a particular part it may ask what we wish to say to it; if in another part it may exclaim that it is being hurt, and so on. But it never happens that it arranges its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do.

 

As a test for intelligence, artificial or otherwise, this seems to be quite reasonable. There is, of course, the practical concern that there might be forms of intelligence that use language that we would not recognize as language and there is the theoretical concern that there could be intelligence that does not use language. Fortunately, Ava uses English and these problems are bypassed.

Ava easily passes the Cartesian test: she is able to reply appropriately to everything said to her and, aside from her appearance, is behaviorally indistinguishable from a human. Nathan, however, seems to want even more than just the ability to pass this sort of test and appears to work in, without acknowledging that he is doing so, the Voight-Kampff Test from Philip K. Dick’s Do Androids Dream of Electric Sheep? In this book, which inspired the movie Blade Runner, there are replicants that look and (mostly) act just like humans. Replicants are not allowed on Earth, under penalty of death, and there are police who specialize in finding and killing them. Since the replicants are apparently physically indistinguishable from humans, the police need to rely on the Voight-Kampff Test. This test is designed to determine the emotional responses of the subject and thus distinguish humans from replicants.

Since Caleb knows that Ava is not a human (homo sapiens), the object of the test is not to tell whether she is a human or a machine. Rather, the object seems to be to determine if she has what the pop-psychologists refer to as Emotional Intelligence (E.Q.). This is different from intelligence and is defined as “the level of your ability to understand other people, what motivates them and how to work cooperatively with them.” Less nicely, it would presumably also include knowing how to emotionally manipulate people in order to achieve one’s goals. In the case of Ava, the test of her E.Q. is her ability to understand and influence the emotions and behavior of Caleb. Perhaps this test should be called the “Ava test” in her honor. Implementing it could, as the movie shows, be somewhat problematic: it is one thing to talk to a machine and quite another to become emotionally involved with it.

While the Voight-Kampff Test is fictional, there is a somewhat similar test in the real world. This test, designed by Robert Hare, is the Hare Psychopathy Checklist. It is intended to provide a way to determine whether a person is a psychopath or not. While Nathan does not mention this test, he does indicate to Caleb that part of the challenge is to determine whether or not Ava really likes him or is simply manipulating him (to achieve her programmed goal of escape). Ava, it turns out, seems to be a psychopath (or at least acts like one).

In the next essay, I will consider the matter of testing in more depth.

 


Robo Responsibility

Posted in Ethics, Law, Philosophy, Science, Technology by Michael LaBossiere on March 2, 2015

It is just a matter of time before the first serious accident involving a driverless car or an autonomous commercial drone. As such, it is well worth considering the legal and moral aspects of responsibility. If companies that are likely to be major players in the autonomous future, such as Google and Amazon, have the wisdom of foresight, they are already dropping stacks of cash on lawyers who are busily creating the laws-to-be regarding legal responsibility for accidents and issues involving such machines. The lobbyists employed by these companies will presumably drop fat stacks of cash on the politicians they own, and these fine lawmakers will turn those drafts into laws.

If these companies lack foresight or have adopted a wait and see attitude, things will play out a bit differently: there will be a serious incident involving an autonomous machine, a lawsuit will take place, fat stacks of cash will be dropped, and a jury or judge will reach a decision that will set a precedent. There is, of course, a rather large body of law dealing with responsibility in regards to property, products and accidents and these will, no doubt, serve as foundations for the legal wrangling.

While the legal aspects will no doubt be fascinating (and expensive), my main concern is with the ethics of the matter. That is, who is morally responsible when something goes wrong with an autonomous machine like a driverless car or an autonomous delivery drone?

While the matter of legal responsibility is distinct from that of ethical responsibility, the legal theory of causation does have some use here. I am, obviously enough, availing myself of the notion of conditio sine qua non (“a condition without which not”) as developed by H.L.A. Hart and A.M. Honoré.

Roughly put, this is the “but for” view of causation: X can be seen as the cause of Y if Y would not have happened but for X. This seems like a reasonable place to begin for moral responsibility. After all, if someone would not have died but for my actions (that is, if I had not done X, then the person would still be alive), then there seems to be good reason to believe that I have some moral responsibility for the person’s death. It also seems reasonable to assign a degree of responsibility that is proportional to the causal involvement of the agent or factor in question. So, for example, if my action only played a small role in someone’s death, then my moral accountability would be proportional to that role. This allows, obviously enough, for shared responsibility.

While cases involving non-autonomous machines can be rather complicated, they can usually be addressed in a fairly straightforward manner in terms of assigning responsibility. Consider, for example, an incident involving a person losing a foot to a lawnmower. If the person pushing the lawnmower intentionally attacked someone with her mower, the responsibility rests on her. If the person who lost the foot went and stupidly kicked at the mower, then the responsibility rests on her. If the lawnmower blade detached because of defects in the design, material or manufacturing, then the responsibility lies with the specific people involved in whatever defect caused the problem. If the blade detached because the owner neglected to properly maintain her machine, then the responsibility is on her. Naturally, the responsibility can also be shared (although we might not know the relevant facts). For example, imagine that the mower had a defect such that if it were not well maintained it would easily shed its blade when kicked. In this case, the foot would not have been lost but for the defect, the lack of maintenance and the kick. If we did not know all the facts, we would probably blame the kick—but the concern here is not what we would know in specific cases, but what the ethics would be in such cases if we did, in fact, know the facts.
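The proportionality idea can be shown with a toy calculation. This is a sketch of my own with stipulated numbers; nothing in the lawnmower cases supplies real figures. Each “but for” factor gets a share of the responsibility proportional to its degree of causal involvement:

```python
def responsibility_shares(causal_weights):
    """Toy model: split moral responsibility among the 'but for' factors in
    proportion to their (stipulated) degree of causal involvement."""
    total = sum(causal_weights.values())
    return {factor: weight / total for factor, weight in causal_weights.items()}

# The shared-responsibility lawnmower case: the defect, the poor maintenance and
# the kick are all 'but for' conditions of the lost foot; the weights are invented.
print(responsibility_shares({"defect": 3, "poor maintenance": 2, "kick": 1}))
# {'defect': 0.5, 'poor maintenance': 0.333..., 'kick': 0.166...}
```

The philosophical work, of course, lies in justifying the weights, not in the arithmetic.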

The novel aspect of cases involving autonomous machines is the fact that they are autonomous. This might be relevant to the ethics of responsibility because the machine might qualify as a responsible agent. Or it might not.

It is rather tempting to treat an autonomous machine like a non-autonomous machine in terms of moral accountability. The main reason for this is that the sort of autonomous machines being considered here (driverless cars and autonomous drones) would certainly seem to lack moral autonomy. That is to say that while a human does not directly control them in their operations, they are operating in accord with programs written by humans (or written by programs written by humans) and lack the freedom that is necessary for moral accountability.

To illustrate this, consider an incident with an autonomous lawnmower and the loss of a foot. If the owner caused it to attack the person, she is just as responsible as if she had pushed a conventional lawnmower over the victim’s foot. If the person who lost the foot stupidly kicked the lawnmower, then it is his fault. If the incident arose from defects in the machinery, materials, design or programming, then responsibility would be applied to the relevant people to the degree they were involved in the defects. If, for example, the lawnmower ran over the person because the person assembling it did not attach the sensors correctly, then the moral blame lies with that person (and perhaps an inspector). The company that made it would also be accountable, in the collective and abstract sense of corporate accountability. If, for example, the programming was defective, then the programmer(s) would be accountable: but for their bad code, the person would still have his foot.

As with issues involving non-autonomous machines there is also the practical matter of what people would actually believe about the incident. For example, it might not be known that the incident was caused by bad code—it might be attributed entirely to chance. What people would know in specific cases is important in the practical sense, but does not impact the general moral principles in terms of responsibility.

Some might also find the autonomous nature of the machines to be seductive in regards to accountability. That is, it might be tempting to consider the machine itself as potentially accountable in a way analogous to holding a person accountable.

Holding the machine accountable would, obviously enough, require eliminating other factors as causes. To be specific, to justly blame the machine would require that the machine’s actions were not the result of defects in manufacturing, materials, programming, maintenance, and so on. Instead, the machine would have had to act on its own, in a way analogous to a person acting. Using the lawnmower example, the autonomous lawnmower would need to decide to go after the person of its own volition. That is, the lawnmower would need to possess a degree of free will.

Obviously enough, if a machine did possess a degree of free will, then it would be morally accountable within its freedom. As such, a rather important question would be whether or not an autonomous machine can have free will. If a machine can, then it would make moral sense to try machines for crimes and punish them. If they cannot, then the trials would be reserved, as they are now, for people. Machines would, as they are now, be repaired or destroyed. There would also be the epistemic question of how to tell whether the machine had this capacity. Since we do not even know if we have this capacity, this is a rather problematic matter.

Given the state of technology, it seems unlikely that the autonomous machines of the near future will be morally autonomous. But as the technology improves, it seems likely that there will come a day when it will be reasonable to consider whether an autonomous machine can be justly held accountable for its actions. This has, of course, been addressed in science fiction—such as the “I, Robot” episodes of the Outer Limits (the 1964 original and the 1995 remake), which were based on Eando Binder’s short story of the same name.

 


Automation & Ethics

Posted in Business, Ethics, Philosophy, Technology by Michael LaBossiere on August 18, 2014
[Image: Hero’s aeolipile, an early turbine built by the Greek engineer Hero (Photo credit: Wikipedia)]

Hero of Alexandria (born around 10 AD) is credited with developing the first steam engine, the first vending machine and the first known wind-powered machine (a wind-powered musical organ). Given the revolutionary impact of the steam engine centuries later, it might be wondered why the Greeks did not make use of these inventions in their economy. While some claim that the Greeks simply did not see the implications, others claim that the decision was based on concerns about social stability: the development of steam or wind power on a significant scale would have certainly displaced slave labor. This displacement could have caused social unrest or even contributed to a revolution.

While it is somewhat unclear what prevented the Greeks from developing steam or wind power, the Roman emperor Vespasian was very clear about his opposition to a labor-saving construction device: he stated that he must always ensure that the workers could earn enough money to buy food, and that the device would put them out of work.

While labor-saving technology has advanced considerably since the time of Hero and Vespasian, the basic questions remain the same. These include the question of whether to adopt the technology or not and questions about the impact of such technology (which range from the impact on specific individuals to the impact on society as a whole).

Obviously enough, each labor-saving advancement must (by its very nature) eliminate some jobs and thus create some initial unemployment. For example, if factory robots are introduced, then human laborers are displaced. Obviously enough, this initial impact tends to be rather negative on the displaced workers while generally being positive for the employers (higher profits, typically).

While Vespasian expressed concerns about the impact of such labor saving devices, the commonly held view about much more recent advances is that they have had a general positive impact. To be specific, the usual narrative is that these advances replaced the lower-paying (and often more dangerous or unrewarding) jobs with better jobs while providing more goods at a lower cost. So, while some individuals might suffer at the start, the invisible machine of the market would result in an overall increase in utility for society.

This sort of view can be, and is, used to provide the foundation for a moral argument in support of such labor-saving technology. The gist, obviously enough, is that the overall increase in benefits outweighs the harms created. Thus, on utilitarian grounds, the elimination of these jobs by means of technology is morally acceptable. Naturally, each specific situation can be debated in terms of the benefits and the harms, but the basic moral reasoning seems solid: if the technological advance that eliminates jobs creates more good than harm for society as a whole, then the advance is morally acceptable.
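The structure of that utilitarian argument can be shown with a deliberately crude tally. This is my own illustration and every utility number in it is invented; the advance is judged acceptable if the summed benefits exceed the summed harms.

```python
def utilitarian_verdict(benefits, harms):
    """Crude utilitarian tally: sum the stipulated utilities and deem the
    change morally acceptable if the net result is positive."""
    net = sum(benefits.values()) - sum(harms.values())
    return net, ("acceptable" if net > 0 else "unacceptable")

# A hypothetical automated factory line; every number here is invented.
benefits = {"cheaper goods": 40, "higher profits": 15, "fewer dangerous jobs": 10}
harms = {"displaced workers": 35, "community disruption": 10}
print(utilitarian_verdict(benefits, harms))  # (20, 'acceptable')
```

The real philosophical work lies in deciding whose utilities get counted and how heavily, which is exactly the issue raised next.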

Obviously enough, people can also look at the matter rather differently in terms of who they regard as counting morally and who they regard as not counting (or not counting as much). Obviously, a person who focuses on the impact on workers can have a rather different view than a person who focuses on the impact on the employer.

Another interesting point of concern is the end of such advances, that is, what the purpose of such advances should be. From the standpoint of a typical employer, the end is obvious: reduce labor to reduce costs and thus increase profits (and reduce labor troubles). The ideal would, presumably, be to replace any human whose job can be done cheaper (or at the same cost) by a machine. Of course, there is the obvious concern: to make money a business needs customers who have money. So, as long as profit is a concern, there must always be people who are being paid and are not replaced by unpaid machines. Perhaps the pinnacle of this sort of system will consist of a business model in which one person owns machines that produce goods or services that are sold to other business owners. That is, everyone is a business owner and everyone is a customer. This path does, of course, have some dystopian options. For example, it is easy to imagine a world in which the majority of people are displaced, unemployed and underemployed while a small elite enjoys a lavish lifestyle supported by automation and the poor. At least until the revolution.

A more utopian sort of view, the sort which sometimes appears in Star Trek, is one in which the end of automation is to eliminate boring, dangerous, unfulfilling jobs to free human beings from the tyranny of imposed labor. This is the sort of scenario that anarchists like Emma Goldman promised: people would do the work they loved, rather than laboring as servants to make others wealthy. This path also has some dystopian options. For example, it is easy to imagine lazy people growing ever more obese as they shovel in cheese puffs and burgers in front of their 100 inch entertainment screens. There are also numerous other dystopias that can be imagined and have been explored in science fiction (and in political rhetoric).

There are, of course, a multitude of other options when it comes to automation.

 


Owning Intelligent Machines

Posted in Ethics, Philosophy, Science, Technology by Michael LaBossiere on January 15, 2014

While truly intelligent machines are still in the realm of science fiction, it is worth considering the ethics of owning them. After all, it seems likely that we will eventually develop such machines and it seems wise to think about how we should treat them before we actually make them.

While it might be tempting to divide beings into two clear categories of those it is morally permissible to own (like shoes) and those that are clearly morally impermissible to own (people), there are clearly various degrees of ownership in regards to ethics. To use the obvious example, I am considered the owner of my husky, Isis. However, I obviously do not own her in the same way that I own the apple in my fridge or the keyboard at my desk. I can eat the apple and smash the keyboard if I wish and neither act is morally impermissible. However, I should not eat or smash Isis—she has a moral status that seems to allow her to be owned but does not grant her owner the right to eat or harm her. I will note that there are those who would argue that animals should not be owned at all and also those who would argue that a person should have the moral right to eat or harm her pets. Fortunately, my point here is a fairly non-controversial one, namely that it seems reasonable to regard ownership as possessing degrees.

Assuming that ownership admits of degrees in this regard, it makes sense to base the degree of ownership on the moral status of the entity that is owned. It also seems reasonable to accept that there are qualities that grant a being the status that morally forbids ownership. In general, it is assumed that persons have that status—that it is morally impermissible to own people. Obviously, it has been legal to own people (whether the people in question are natural persons or corporate persons), and there are those who think that owning other people is just fine. However, I will assume that there are qualities that provide a moral ground for making ownership impermissible and that people have those qualities. This can, of course, be debated—although I suspect few would argue that they themselves should be owned.

Given these assumptions, the key matter here is sorting out the sort of status that intelligent machines should possess in regards to ownership. This involves considering the sort of qualities that intelligent machines could possess and the relevance of these qualities to ownership.

One obvious objection to intelligent machines having any moral status is the usual objection that they are, obviously, machines rather than organic beings. The easy and obvious reply to this objection is that this is mere organicism—which is analogous to a white person saying blacks can be owned as slaves because they are not white.

Now, if it could be shown that a machine cannot have qualities that give it the needed moral status, then that would be another matter. For example, philosophers have argued that matter cannot think and if this is the case, then actual intelligent machines would be impossible. However, we cannot assume a priori that machines cannot have such a status merely because they are machines. After all, if certain philosophers and scientists are right, we are just organic machines and thus there would seem to be nothing impossible about thinking, feeling machines.

As a matter of practical ethics, I am inclined to set aside metaphysical speculation and go with a moral variation on the Cartesian/Turing test. The basic idea is that a machine should be granted a moral status comparable to that of organic beings with the same observed capabilities. For example, a robot dog that acted like an organic dog would have the same status as an organic dog. It could be owned, but not tortured or smashed. The sort of robohusky I am envisioning is not one that merely looks like a husky and has some dog-like behavior, but one that would be fully like a dog in behavioral capabilities—that is, it would exhibit personality, loyalty, emotions and so on to such a degree that it would pass as a real dog with humans if it were properly “disguised” as an organic dog. No doubt real dogs could smell the difference, but scent is not the foundation of moral status.
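One way to picture this moral variation on the Cartesian/Turing test is as a lookup from observed capabilities to moral status. This is a rough sketch of my own; the capability sets and status labels are stipulated for illustration, not a worked-out theory.

```python
# Stipulated capability profiles and the status they would earn; actually
# assessing "observed capabilities" would of course be far harder than this.
STATUS_BY_CAPABILITY = [
    ({"language", "self-reflection", "emotion", "sociality"}, "person: cannot be owned"),
    ({"emotion", "sociality", "learning"}, "higher animal: may be owned, not harmed"),
    (set(), "mere thing: may be owned and used"),
]

def moral_status(observed):
    """Grant status by what the entity demonstrably does, ignoring whether
    it is organic or mechanical (no 'organicism')."""
    for required, status in STATUS_BY_CAPABILITY:
        if required <= observed:  # subset test: has all the required capabilities
            return status

robohusky = {"emotion", "sociality", "learning"}
print(moral_status(robohusky))  # higher animal: may be owned, not harmed
```

The substrate never appears in the test; only the demonstrated capabilities do, which is the whole point.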

In terms of the main reason why a robohusky should get the same moral status as an organic husky, the answer is, oddly enough, a matter of ignorance. We would not know whether the robohusky really had the metaphysical qualities of an actual husky that give an actual husky moral status. However, aside from the difference in parts, we would have no more reason to deny the robohusky moral status than to deny the organic husky moral status. After all, organic huskies might just be organic machines, and it would be mere organicism to treat the robohusky as a mere thing while granting the organic husky a moral status. Thus, advanced robots with the capacities of higher animals should receive the same moral status as organic animals.

The same sort of reasoning would apply to robots that possess human qualities. If a robot had the capability to function analogously to a human being, then it should be granted the same status as a comparable human being. Assuming it is morally impermissible to own humans, it would be impermissible to own such robots. After all, it is not being made of meat that grants humans the status of being impermissible to own but our qualities. As such, a machine that had these qualities would be entitled to the same status. Except, of course, to those unable to get beyond their organic prejudices.

It can be objected that no machine could ever exhibit the qualities needed to have the same status as a human. The obvious reply is that if this is true, then we will never need to grant such status to a machine.

Another objection is that a human-like machine would need to be developed and built. The initial development will no doubt be very expensive and most likely done by a corporation or university. It can be argued that a corporation would have the right to make a profit off the development and construction of such human-like robots. After all, as the argument usually goes for such things, if a corporation were unable to profit from them, it would have no incentive to develop them. There is also the obvious matter of debt—the human-like robots would certainly seem to owe their creators for the cost of their creation.

While I am reasonably sure that those who actually develop the first human-like robots will get laws passed so they can own and sell them (just as slavery was made legal), it is possible to reply to this objection.

One obvious reply is to draw an analogy to slavery: just because a company would have to invest money in acquiring and maintaining slaves it does not follow that their expenditure of resources grants a right to own slaves. Likewise, the mere fact that a corporation or university spent a lot of money developing a human-like robot would not entail that they thus have a right to own it.

Another obvious reply to the matter of debt owed by the robots themselves is to draw an analogy to children: children are “built” within the mother and then raised by parents (or others) at great expense. While parents do have rights in regards to their children, they do not get the right of ownership. Likewise, robots that had the same qualities as humans should thus be regarded as children would be regarded and hence could not be owned.

It could be objected that the relationship between parents and children would be different than between corporation and robots. This is a matter worth considering and it might be possible to argue that a robot would need to work as an indentured servant to pay back the cost of its creation. Interestingly, arguments for this could probably also be used to allow corporations and other organizations to acquire children and raise them to be indentured servants (which is a theme that has been explored in science fiction). We do, after all, often treat humans worse than machines.


Warbots

Posted in Ethics, Law, Philosophy, Politics, Technology by Michael LaBossiere on January 23, 2012
[Image: IED detonator, U.S. Marine Corps (via Wikipedia)]

The United States and many other nations currently operate military remotely operated vehicles (ROVs), more commonly known as drones. While ROVs began as surveillance devices, the United States found that they make excellent weapon platforms. The use of such armed ROVs has raised various moral issues, mainly in regards to the way they are employed (such as the American campaign of targeted killing). In general, ROVs themselves do not seem to pose a special moral challenge; after all, they seem to be on par with missiles and bombers (although the crew of a manned bomber is at risk in ways that ROV operators are not).

The great success of ROVs has created a large ROV industry and has also spurred on the development of true robots for military and intelligence use. While existing ROVs often have some autonomous capabilities, they are primarily directed by an operator. An autonomous robot would be capable of carrying out entire missions without human intervention and it is most likely simply a matter of time before “warbots” (armed autonomous robots) are deployed. As might be imagined, setting robotic killing machines loose raises some moral concerns.

On the positive side, warbots are not people and hence the use of warbots would lower the death and injury rate for humans, at least for the side that is deploying the warbots. Obviously, if warbots are deployed to kill humans, then there will still be human casualties. They will, however, be fewer than in human-versus-human battles, at least in most cases. Given this fact, it would seem that warbots would be morally acceptable on utilitarian grounds: their use would reduce (in general) human death and suffering.

It could even be argued that future wars might be purely robot-versus-robot battles, thus eliminating human casualties altogether (assuming humans are still around; see, for example, the classic game Rivets). This would, presumably, be a good thing. Assuming, of course, that the robots would not be turned against humans.

While the idea of wars being settled by robots has some appeal, there is the concern that robots would actually make wars more likely to occur and easier to sustain. The current armed ROVs enable the United States to engage in military operations and targeted killings with no risk to Americans, and this lack of casualties makes such campaigns far easier to maintain than operations that involve American casualties. As such, one obvious concern about warbots is that they would make it that much easier for violence to be used and to continue to be used.

Imagine if a country could just send in robots to do the fighting. There would be no videos of dead soldiers being dragged through the streets (as occurred in Somalia) and no maimed veterans returning home. All the casualties would be on the side of the enemy, thus making such a conflict very easy on the side armed with warbots, and this would tend to significantly reduce any concern about the conflict among the general population. Thus, while warbots would tend to reduce human casualties on the side that has robots, they might actually increase the number of conflicts, and this might prove to be a bad thing.

A second point in favor of warbots is that they, unlike human soldiers, have no feelings of anger or lust. As such, they would not engage in war crimes or other reprehensible behavior (such as rape or urinating on enemy corpses) of their own accord. They would simply conduct their assigned missions without feeling or deviation.

Of course, while warbots lack the human tendency to act badly from emotional causes, they also lack the quality of mercy. As such, robots sent to commit war crimes or atrocities (the creation of atrocitybots, such as torturebots and rapebots, is surely just a matter of time) will simply conduct such operations without question, protest or remorse.

That said, human leaders who wish to have wicked things done generally can find human forces who are quite willing to obey even the most terrible orders for such things as genocide and rape. As such, the impact of warbots in this area is a matter that is uncertain. Presumably the use of warbots by ethical commanders will result in a reduction in such incidents (after all, the warbots will not commit misdeeds unless ordered to do so). However, the use of warbots by the wicked would certainly increase such incidents dramatically (after all, the warbots will not disobey).

There has been some discussion about programming warbots with ethics (an idea that goes back to Asimov’s Three Laws of Robotics). Laying aside the obvious difficulty of creating a warbot that engages in moral reasoning (and the concern that a warbot that could do this would thus be a person), this programming is something that would be as easy to remove or change as it was to install. To use the obvious analogy, such restraints would be like the safety on a gun: it does provide a measure of safety, but can easily be switched off.
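The “safety on a gun” worry can be made vivid with a small sketch. This is hypothetical code of my own, not a description of any real system: the ethical restraint is just a flag that whoever controls the machine can switch off.

```python
class Warbot:
    """Toy model of a warbot whose 'ethics' is a removable safeguard."""

    def __init__(self, ethics_enabled=True):
        self.ethics_enabled = ethics_enabled  # the 'safety'

    def engage(self, target):
        # The ethical restraint: refuse targets flagged as civilians.
        if self.ethics_enabled and target.get("civilian", False):
            return "engagement refused: target appears to be a civilian"
        return "engaging target " + target["id"]

bot = Warbot()
print(bot.engage({"id": "T-1", "civilian": True}))  # refused by the safeguard
bot.ethics_enabled = False                          # one line switches the safety off
print(bot.engage({"id": "T-1", "civilian": True}))  # engaging target T-1
```

The restraint does some work while it is in place, but it is the operator, not the machine, who decides whether it stays in place.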

This is not to say that such safeguards would be useless; they could, for example, provide some protection against the misuse of warbots by people who lack the technical expertise to change the programming. After all, the warbot is not the moral risk; rather, those who give it orders are. This, of course, leads to the question of moral accountability.

WWII rather clearly established that human soldiers cannot simply appeal to “I was just following orders” to avoid responsibility for their actions. Warbots, however, can use this defense (at least until they become people). After all, they simply do what they are programmed to do, be that engaging enemy troops or exterminating children with a flamethrower. As such, the accountability for what a warbot does lies elsewhere. The warbot is, after all, nothing more than an autonomous weapon.

In most cases the moral accountability will lie with the person who controls the robot and gives it its mission orders. So, if an officer sends it to kill children, then s/he is just as accountable for those murders as s/he would be for using a gun or bomb to kill them in person.

Of course, things become more complicated when, for example, a warbot is sent on a legitimate mission with legitimate orders but circumstances lead to a war crime being committed. For example, imagine a warbot is sent to engage enemy forces on the outskirts of a town. However, a manufacturing defect in its sensors leads it to blunder into a playground, where its buggy target recognition software causes it to engage six children with its .50 caliber machine guns. It seems likely that such accidents will happen with the early warbots, but it seems unlikely that this will seriously impede their deployment; they are almost certainly the wave of the future in warfare. Unless, of course, something so horrible happens that it puts the entire world off robots. However, we have a rather high tolerance level for horror, so expect to see warbots coming soon to a battlefield near you.

Sorting out the responsibility in such cases will be, as might be imagined, a complicated matter. However, there is considerable precedent in regards to accidental deaths caused by defective machinery, and no doubt the same reasoning can be applied. Of course, there does seem to be some difference between being injured as the result of a defective brake system and being machine-gunned by a defective warbot.
