A Philosopher's Blog

The Trump Ban

Posted in Philosophy, Politics by Michael LaBossiere on January 18, 2016

While the United Kingdom is quite welcoming to its American cousins, many of its citizens have petitioned for a ban on Donald Trump, now the leading Republican presidential candidate. The issue was debated in mid-January by Parliament, although no vote was taken to ban the Donald.

The petition to ban Trump was signed by 575,000 people and was created in response to his call to ban all Muslims from entering the United States. While the matter is mostly political theater, it does raise some points of philosophical interest.

One interesting point is that the proposal to ban Trump appears to be consistent with the principles that seem to lurk behind the obscuring fog of Trump’s various proposals and assertions. One obvious concern is that attributing principles to Trump is challenging—he is a master of being vague and is not much for providing foundations for his proposed policies. Trump has, however, focused a great deal on the borders of the United States. He has made the comically absurd proposal to build a wall between the United States and Mexico and, as noted above, proposed a ban on all Muslims entering the United States. This suggests that Trump accepts the principle that a nation has the right to control its borders and to keep out anyone who is deemed a threat or undesirable by the state. This principle, which might be one that Trump accepts, is certainly a reasonable one in general terms. While thinkers disagree about the proper functions of the state, there is general consensus that a state must, at a minimum, provide basic defense and police functions, and these include maintaining borders. This principle would certainly warrant the UK in banning Trump.

Even if this specific general principle is not one Trump accepts, he certainly seems to accept that a state can ban people from entering that state. As such, consistency would require that Trump accept that the UK has every right to ban him. Trump, if he were inclined to argue rationally, could contend that there are relevant differences between himself and those he proposes to ban. He could, for example, argue that the proposed wall between the United States and Mexico is meant to keep out illegal immigrants and point out that he would enter the UK legally rather than sneaking across the border. In regards to the proposed ban on all Muslims, Trump could point out that he is for banning Muslims but not for banning non-Muslims. As such, his principle of banning Muslims could not be applied to him.

A way to counter this is to focus again on the general principle that might be behind Trump’s proposals, namely the principle of excluding people who are regarded as a threat or at least undesirable. While Trump is not likely to engage in acts of terror in the UK, his behavior in the United States does raise concerns about his ideology and he could justly be regarded as a threat to the UK. He could, perhaps, radicalize some of the population. As such, Trump could be justly banned on the basis of a possible principle he is employing to justify his proposed bans (assuming that there are some principles lurking back there somewhere).

Trump could, of course, simply call the UK a bunch of losers and insist that they have no right to ban him. While that sort of thing is fine for political speeches, he would need a justification for his assertion. Then again, Trump might simply call them losers and say he does not want to go there anyway.

The criticism of Trump in the UK seems to be, at least in part, aimed at reducing his chances of becoming the President of the United States. Or perhaps there is some hope that the criticism will change his behavior. While a normal candidate might be influenced by such criticism from a close ally and decide to change, Trump is not a normal candidate. As has been noted many times, behavior that would have been politically damaging or fatal for other candidates has only served to keep Trump leading among the Republicans. As such, the petition against him and even the debate about the issue in Parliament will have no negative impact on his campaign. In fact, this sort of criticism will probably improve his poll numbers. Trump is, in effect, the orange Hulk of politics (not to be confused with Orange Hulk). The green Hulk gets stronger the angrier he gets, so attacking him just enables him to fight harder. The political orange Hulk, Trump, gets stronger the more he is rationally criticized and the more absurd and awful he gets. Like the green Hulk, Trump might be almost unbeatable. So, while Hulk might smash, Trump might win. And then smash.

 


Robo Responsibility

Posted in Ethics, Law, Philosophy, Science, Technology by Michael LaBossiere on March 2, 2015

It is just a matter of time before the first serious accident involving a driverless car or an autonomous commercial drone. As such, it is well worth considering the legal and moral aspects of responsibility. If companies that are likely to be major players in the autonomous future, such as Google and Amazon, have the wisdom of foresight, they are already dropping stacks of cash on lawyers who are busily creating the laws-to-be regarding legal responsibility for accidents and issues involving such machines. The lobbyists employed by these companies will presumably drop fat stacks of cash on the politicians they own and these fine lawmakers will make them into laws.

If these companies lack foresight or have adopted a wait-and-see attitude, things will play out a bit differently: there will be a serious incident involving an autonomous machine, a lawsuit will take place, fat stacks of cash will be dropped, and a jury or judge will reach a decision that will set a precedent. There is, of course, a rather large body of law dealing with responsibility in regards to property, products, and accidents, and these will, no doubt, serve as foundations for the legal wrangling.

While the legal aspects will no doubt be fascinating (and expensive), my main concern is with the ethics of the matter. That is, who is morally responsible when something goes wrong with an autonomous machine like a driverless car or an autonomous delivery drone?

While the matter of legal responsibility is distinct from that of ethical responsibility, the legal theory of causation does have some use here. I am, obviously enough, availing myself of the notion of conditio sine qua non (“a condition without which nothing”) as developed by H.L.A. Hart and A.M. Honoré.

Roughly put, this is the “but for” view of causation. X can be seen as the cause of Y if Y would not have happened but for X. This seems like a reasonable place to begin for moral responsibility. After all, if someone would not have died but for my actions (that is, if I had not done X, then the person would still be alive), then there seems to be good reason to believe that I have some moral responsibility for the person’s death. It also seems reasonable to assign a degree of responsibility that is proportional to the causal involvement of the agent or factor in question. So, for example, if my action only played a small role in someone’s death, then my moral accountability would be proportional to that role. This allows, obviously enough, for shared responsibility.
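
Put a bit more formally (a rough sketch in my own notation, not symbols Hart and Honoré themselves use), the two ideas are:

\[
X \text{ is a but-for cause of } Y \iff \text{had } X \text{ not occurred, } Y \text{ would not have occurred}
\]

\[
R_i \propto c_i
\]

where \(R_i\) is the moral responsibility of agent (or factor) \(i\) and \(c_i\) is its degree of causal involvement in the outcome \(Y\).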

While cases involving non-autonomous machines can be rather complicated, they can usually be addressed in a fairly straightforward manner in terms of assigning responsibility. Consider, for example, an incident involving a person losing a foot to a lawnmower. If the person pushing the lawnmower intentionally attacked someone with her mower, the responsibility rests on her. If the person who lost the foot went and stupidly kicked at the mower, then the responsibility rests on her. If the lawnmower blade detached because of defects in the design, material or manufacturing, then the responsibility lies with the specific people involved in whatever defect caused the problem. If the blade detached because the owner neglected to properly maintain her machine, then the responsibility is on her. Naturally, the responsibility can also be shared (although we might not know the relevant facts). For example, imagine that the mower had a defect such that if it were not well maintained it would easily shed its blade when kicked. In this case, the foot would not have been lost but for the defect, the lack of maintenance and the kick. If we did not know all the facts, we would probably blame the kick—but the concern here is not what we would know in specific cases, but what the ethics would be in such cases if we did, in fact, know the facts.
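
To make the proportionality idea concrete, here is a minimal sketch in Python. The factors and the numeric weights are my own invented illustration for the defective, ill-maintained, kicked mower; nothing in the case itself fixes these numbers.

```python
# Toy model: divide moral responsibility in proportion to causal involvement.
# The weights below are hypothetical, chosen only for illustration.

def share_responsibility(contributions):
    """Allocate responsibility in proportion to each factor's causal weight."""
    total = sum(contributions.values())
    return {factor: weight / total for factor, weight in contributions.items()}

# Hypothetical weights: the foot would not have been lost but for the defect,
# the lack of maintenance, and the kick, so all three share the blame.
factors = {"design defect": 2.0, "poor maintenance": 1.0, "kick": 1.0}

for factor, share in share_responsibility(factors).items():
    print(f"{factor}: {share:.0%}")
# design defect: 50%
# poor maintenance: 25%
# kick: 25%
```

On this toy model, blame always sums to 100 percent and shifts as the weights shift, which mirrors the point that shared responsibility is proportional rather than all-or-nothing.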

The novel aspect of cases involving autonomous machines is the fact that they are autonomous. This might be relevant to the ethics of responsibility because the machine might qualify as a responsible agent. Or it might not.

It is rather tempting to treat an autonomous machine like a non-autonomous machine in terms of moral accountability. The main reason for this is that the sort of autonomous machines being considered here (driverless cars and autonomous drones) would certainly seem to lack moral autonomy. That is to say that while a human does not directly control them in their operations, they are operating in accord with programs written by humans (or written by programs written by humans) and lack the freedom that is necessary for moral accountability.

To illustrate this, consider an incident with an autonomous lawnmower and the loss of a foot. If the owner caused it to attack the person, she is just as responsible as if she had pushed a conventional lawnmower over the victim’s foot. If the person stupidly kicked the lawnmower and lost a foot, then it is his fault. If the incident arose from defects in the machinery, materials, design or programming, then responsibility would be applied to the relevant people to the degree they were involved in the defects. If, for example, the lawnmower ran over the person because the person assembling it did not attach the sensors correctly, then the moral blame lies with that person (and perhaps an inspector). The company that made it would also be accountable, in the collective and abstract sense of corporate accountability. If, for example, the programming was defective, then the programmer(s) would be accountable: but for the bad code, the person would still have his foot.

As with issues involving non-autonomous machines, there is also the practical matter of what people would actually believe about the incident. For example, it might not be known that the incident was caused by bad code—it might be attributed entirely to chance. What people would know in specific cases is important in the practical sense, but does not impact the general moral principles in terms of responsibility.

Some might also find the autonomous nature of the machines to be seductive in regards to accountability. That is, it might be tempting to consider the machine itself as potentially accountable in a way analogous to holding a person accountable.

Holding the machine accountable would, obviously enough, require eliminating other factors as causes. To be specific, to justly blame the machine would require that the machine’s actions were not the result of defects in manufacturing, materials, programming, maintenance, and so on. Instead, the machine would have had to act on its own, in a way analogous to a person acting. Using the lawnmower example, the autonomous lawnmower would need to decide to go after the person of its own volition. That is, the lawnmower would need to possess a degree of free will.

Obviously enough, if a machine did possess a degree of free will, then it would be morally accountable within the scope of its freedom. As such, a rather important question would be whether or not an autonomous machine can have free will. If a machine can, then it would make moral sense to try machines for crimes and punish them. If they cannot, then the trials would be reserved, as they are now, for people. Machines would, as they are now, be repaired or destroyed. There would also be the epistemic question of how to tell whether the machine had this capacity. Since we do not even know if we have this capacity, this is a rather problematic matter.

Given the state of technology, it seems unlikely that the autonomous machines of the near future will be morally autonomous. But as the technology improves, it seems likely that there will come a day when it will be reasonable to consider whether an autonomous machine can be justly held accountable for its actions. This has, of course, been addressed in science fiction—such as the “I, Robot” episodes of The Outer Limits (the 1964 original and the 1995 remake), which were based on Eando Binder’s short story of the same name.

 
