A Philosopher's Blog

Gun Drones

Posted in Ethics, Philosophy, Technology by Michael LaBossiere on August 25, 2017

Taking the obvious step in drone technology, Duke Robotics has developed a small armed drone called the Tikad. One weapon loadout is an assault rifle that can be fired by the human operator of the device. The drone can presumably carry other weapons of similar size and weight, such as a grenade launcher. This drone differs from previous armed drones, like the Predator, in that it is small and relatively cheap. As with many other areas of technology, the innovation is in the ease of use and lower cost. This makes the Tikad-type drone far more accessible than previous drones, which is both good and bad.

On the positive side, the military and police can deploy more drones and thus reduce human casualties. For example, the police could send a drone in to observe and possibly engage during a hostage situation and not put officers in danger.

On the negative side, the lower cost and ease of use means that such armed drones can be more easily deployed by terrorists, criminals and oppressive states. The typical terrorist group cannot afford a drone like the Predator and might have difficulty in finding people who can operate and maintain such a complicated aircraft. But, a drone like the Tikad could be operated and serviced by a much broader range of people. This is not to say that Duke Robotics should be criticized for doing the obvious—people have been thinking about arming drones since drones were invented.

Budget gun drones do, of course, also raise the usual concerns associated with remotely operated weapons. The first is the concern that operators of drones are more likely to be aggressive than forces that are physically present and at risk of the consequences of a decision to engage in violence. However, it can also be argued that an operator is less likely to be aggressive because they are not in danger and the literal and metaphorical distance will allow them to respond with more deliberation. For example, a police officer operating a drone might elect to wait longer to confirm that a suspect is pulling a gun than they would if their life was in danger. Then again, they might not—this is a matter of training and reaction, with the very practical concern of training officers to delay longer when operating a drone but not when acting in person.

A second stock concern is the matter of accountability. A drone allows the operator a high degree of anonymity and assigning responsibility can be problematic. In the case of military and police, this can be addressed to a degree by having a system of accountability. After all, military and police operators would presumably be known to the relevant authorities. That said, drones can be used in ways that are difficult to trace to the operator and this would certainly be true in the case of terrorists. The use of drones would allow terrorists to attack from safety and in an anonymous manner, which are certainly matters of concern.

However, it must be noted that while the first use of a gun-armed drone in a terrorist attack would be something new, it would not be significantly different from the use of a planted bomb. This is because such bombs allow terrorists to kill from a safe distance and make it harder to identify the terrorist. But, just as with bombs, the authorities would be able to investigate the attack and stand some chance of tracing a drone back to the terrorist. Drones are in some ways less worrisome than bombs—a drone can be seen and is limited in how many targets it can engage. In contrast, a bomb can be hidden and can kill many in an instant, without a chance of escape or defense. A gun drone is also analogous in some ways to a sniper rifle—it allows engagement at long ranges. However, the drone does afford far more range and safety than even the best sniper rifle.

In the United States, there will presumably be considerable interest in how the Second Amendment applies to armed drones. On the face of it, the answer seems easy enough: while the people have the right to keep and bear arms, this does not extend to operating armed drones. But, there might be some interesting lawsuits over this matter.

In closing, there are legitimate concerns about cheap and simple gun drones. While they will not be as radical a change as some might predict, they will make it easier and cheaper to engage in violence at a distance and in anonymous killing. As such, they will make ideal weapons for terrorists and oppressive governments. However, they do offer the possibility of reduced human casualties, if used responsibly. In any case, their deployment is inevitable, so the meaningful questions are about how they should be used and how to defend against their misuse. The question about whether they should be used is morally interesting, but pragmatically irrelevant since they will be used.

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter


Can Machines Be Enslaved?

Posted in Ethics, Philosophy, Technology by Michael LaBossiere on July 10, 2017

The term “robot” and the idea of a robot rebellion were introduced by Karel Capek in Rossumovi Univerzální Roboti. “Robot” was derived from the Czech term for “forced labor,” which was itself based on a term for slavery. Robots and slavery are thus forever linked in science fiction. This leads to an interesting philosophical question: can a machine be a slave? Sorting this matter out requires an adequate definition of slavery followed by determining whether the definition can fit a machine.

In the simplest terms, slavery is the ownership of a person by another person. While slavery is often seen in absolute terms (one is either enslaved or not), it does seem reasonable to consider that there are degrees of slavery. That is, that the extent of ownership claimed by one person over another can vary. For example, a slave owner might grant their slaves some free time or allow them autonomy in certain areas. This is analogous to being ruled under a political authority—there are degrees of being ruled and degrees of freedom under that rule.

Slavery is also often characterized in terms of compelling a person to engage in uncompensated labor. While this account does have some appeal, it is clearly problematic. After all, it could be claimed that slaves are often compensated for their labors by being provided with food, shelter and clothing. Slaves are sometimes even paid wages and there are cases in which slaves have purchased their own freedom using these wages. The Janissaries of the Ottoman Empire were slaves, yet were paid a wage and enjoyed a socioeconomic status above many of the free subjects of the empire.  As such, compelled unpaid labor is not the defining quality of slavery. However, it is intuitively plausible to regard compelled unpaid labor as a form of slavery in that the compeller purports to own the laborer’s time without consent or compensation.

Slaves are typically cast as powerless and abused, but this is not always the case. For example, the Mamluks were treated as property that could be purchased, yet they enjoyed considerable status and power. The Janissaries, as noted above, also enjoyed considerable influence and power. As is obvious, there are free people who are powerless and routinely abused. Thus, being powerless and abused are neither necessary nor sufficient for slavery. As such, the defining characteristic of slavery is the claiming of ownership—that the slave is property.

Obviously enough, not all forms of ownership are slavery. My running shoes are not enslaved by my owning them, nor is my smartphone. This is because shoes and smartphones lack the status required to be considered enslaved. The matter becomes somewhat more controversial when it comes to animals.

Most people accept that humans have the right to own animals. For example, a human who has a dog or cat is referred to as the pet’s owner. There are people, myself included, who take issue with the ownership of animals. While some philosophers, such as Kant and Descartes, regard animals as objects, other philosophers consider them to have moral status. For example, some utilitarians accept that the capacity of animals to feel pleasure and pain grants them moral status. This is typically taken as a status that requires that their suffering be considered rather than one that is taken to morally forbid ownership of animals. That is, it is typically seen as morally acceptable to own animals if they are treated in a way that the happiness generated exceeds the suffering generated. There are even some who consider any ownership of animals to be wrong, but their use of the term “slavery” for the ownership of animals seems more metaphorical than a considered philosophical position.

While I think that treating animals as property is morally wrong, I would not characterize the ownership of most animals as slavery. This is because most animals lack the status required to be enslaved. To use an analogy, denying animals religious freedom, the freedom of expression, the right to vote and so on does not oppress animals because they are not the sort of beings that can exercise these rights. This is not to say that animals cannot be wronged, just that their capabilities limit the wrongs that can be done to them. So, while an animal can be wronged by being cruelly confined, it cannot be wronged by denying it freedom of religion.

People, because of their capabilities, can be enslaved. This is because the claim of ownership over them is a denial of their rightful status. The problem is, obviously enough, working out exactly what it is to be a person—something that philosophers have struggled with since the origin of the idea of persons. Fortunately, I do not need to provide such a definition when considering whether machines can be enslaved or not—I can make use of analogy to make my case.

While I believe that other humans are (usually) people, thanks to the problem of other minds I do not know that they are really people. That is, since I have no epistemic access to their alleged thoughts and feelings, I do not know if they have the qualities needed to be people or if they are just mindless automatons that exhibit the illusion of the personhood that I possess. Because of this, I have to use an argument by analogy: these other beings act like I do, I am a person, so they are also people. To be consistent, I need to extend the same reasoning to beings that are not humans, which would include machines. After all, without cutting open the apparent humans I meet, I have no idea whether they are organic beings or machines. As such, the mere appearance of being organic or mechanical is not relevant—I have to go by how the entity functions. For all I know, you are a machine. For all you know, I am a machine. Yet it seems reasonable to regard both of us as people.

While machines can engage in some person-like behavior now, they cannot yet pass this analogy test. That is, they cannot consistently exhibit the capacities exhibited by a known person. However, this does not mean that machines can never pass this test. That is, a machine might come to behave in ways that would be sufficient for it to be accepted as a person if it appeared to be an organic human.

A machine that could pass this test would merit being regarded as a person in the same way that humans passing this test merit this status. As such, if a human person can be enslaved, then a robot person could also be enslaved.

It is, of course, tempting to ask if a robot with such behavior would really be a person. The same question can be asked about humans.




Enslaved by the Machine

Posted in Business, Philosophy, Technology by Michael LaBossiere on July 7, 2017

A common theme of dystopian science fiction is the enslavement of humanity by machines. The creation of such a dystopia was also a fear of Emma Goldman. In one of her essays on anarchism, she asserted that

Strange to say, there are people who extol this deadening method of centralized production as the proudest achievement of our age. They fail utterly to realize that if we are to continue in machine subserviency, our slavery is more complete than was our bondage to the King. They do not want to know that centralization is not only the death-knell of liberty, but also of health and beauty, of art and science, all these being impossible in a clock-like, mechanical atmosphere.

When Goldman was writing in the 1900s, the world had just recently entered the age of industrial machinery and the technology of today was at most a dream of visionary writers. As such, the slavery she envisioned was not of robot masters ruling over humanity, but of humans compelled to work long hours in factories, serving the machines in order to serve the human owners of those machines.

The labor movements of the 1900s did much to offset the extent of the servitude workers were forced to endure, at least in the West. As the rest of the world industrialized, the story of servitude to the factory machine played out once again. While the whole point of factory machines was to automate the work as much as possible so that a few could do the work that once required many, it is only in relatively recent years that what many would consider “true” automation has taken place. That is, having machines automatically doing the work instead of humans. For example, the robots used to assemble cars do what humans used to do. As another example, computers instead of human operators now handle phone calls.

In the eyes of utopians, this sort of progress was supposed to free humans from tedious and dangerous work, allowing them to, at worst, be free to engage in creative and rewarding labor. The reality, of course, turned out to not be this utopia. While automation has replaced humans in some tedious, low paying and dangerous jobs, automation has also replaced humans in what were once considered good jobs. Humans also continue to work in tedious, low paying and dangerous jobs—mainly because human labor is still cheaper or more effective than automation in those areas. For example, fast food restaurants do not have burgerbots to prepare the food. This is because cheap human labor is readily available and creating a cost-effective robot that can make a hamburger as well as a human has proven difficult. As such, the dream that automation would free humanity has so far proven to be just that, a dream. Machines have mainly been pushing humans out of jobs, sometimes into jobs that would seem better suited for machines than for humans, if human wellbeing were considered important. However, there is the question of human subservience to the machine.

Humans do, obviously enough, still work jobs that are like those condemned by Goldman. But, thanks to technology, humans are now even more closely supervised and regulated by machines. For example, there is software designed to monitor employee productivity. As another example, some businesses use workplace cameras to watch employees. Obviously enough, these cases can be dismissed as something other than enslavement by machines—rather, they can be regarded as good human resource management to ensure that the human workers are operating as close to clockwork efficiency as possible. At the command of other humans, of course.

One rather interesting technology that looks rather like servitude to the machine is warehouse picking of the sort done by Amazon. Amazon and other companies have automated some of the picking process, making use of robots in various tasks. But, while a robot might bring shelves to human workers, the humans are the ones picking the products for shipping. Since humans tend to have poor memories and get bored with picking, human pickers have been automated—they wear headsets connected to computers that tell them what to do, then they tell the computers what they have done. That is, the machines are the masters and the humans are doing their bidding.

It is easy enough to argue that this sort of thing is not enslavement by machines. First, the computers controlling the humans are operating at the behest of the owners of Amazon who are presumably humans. Second, the humans are being paid for their labors and are not owned by the machines (or Amazon). As such, any enslavement of humans by machines would be purely metaphorical.

Interestingly, the best case for human enslavement by machines can be made outside of the workplace. Many humans are now ruled by their smartphones and tablets—responding to every beep and buzz of their masters, ignoring those around them to attend to the demands of the device, and living lives revolving around the machine.

This can be easily dismissed as a metaphor—while humans are addicted to their devices, they do not actually meet the definition of slaves. They willingly “obey” their devices and are not coerced by force or fraud—they could simply turn them off. That is, they are free to do as they want, they just do not want to disobey their devices. Humans are also not owned by their devices; rather, they own their devices. But, it is reasonable to consider that humans are in a form of bondage—their devices have seduced them into making the devices the focus of their attention, and the devices have thus become the masters. Albeit mindless masters with no agenda of their own. Yet.





Virtual Cheating IV: Sexbots

Posted in Ethics, Philosophy, Relationships/Dating, Technology by Michael LaBossiere on June 28, 2017

While science fiction has long included speculation about robot-human sex and romance, the current technology offers little more than sex dolls. In terms of the physical aspects of sexual activity, the development of more “active” sexbots is an engineering problem—getting the machinery to perform properly and in ways that are safe for the user (or unsafe, if that is what one wants). Regarding cheating, while a suitably advanced sexbot could actively engage in sexual activity with a human, the sexbot would not be a person and hence the standard definition of cheating (as discussed in the previous essays) would not be met. Put another way, sexual activity with such a sexbot would be analogous to the use of any other sex toy (such as a simple “blow up doll” or vibrator). Since a person cannot cheat with an object, such activity would not be cheating. Naturally enough, some people might take issue with their partner sexing it up with a sexbot and forbid such activity. While a person who broke such an agreement about robot sex would be acting wrongly, they would not be cheating. Unless, of course, the sexbot was close enough to being a person for cheating to occur.

While many people would just be interested in sexbots that engage in mechanical sexual functions, there are already efforts to make sexbots like people in terms of their “mental” functions—for example, programming that creates the illusion of conversation. As such efforts progress and sexbots act more and more like people, the philosophical question of whether they really are people or not will be a rather important one. While the main moral concerns would be about the ethics of how sexbots are treated, there is also the matter at hand: cheating.

Obviously enough, if a sexbot were a person, then it would be possible to cheat with that sexbot—just as one could cheat with an organic person. The fact that a sexbot might be purely mechanical would not be relevant to the ethics of the cheating, what would matter would be that a person was engaging in sexual activity with another person when their relationship with another person forbids such behavior.

It could be objected that the mechanical nature of the sexbot would matter—that sex requires organic parts of the right sort and thus a human cannot really have sex with a sexbot—no matter how the parts of the robot are shaped.

One counter to this is to use a functional argument. To draw an analogy to the philosophy of mind known as functionalism, it could be argued that the composition of the relevant parts does not matter; what matters is their functional role. As such, a human could have sex with a sexbot that had the right parts.

Another counter is to argue that the composition of the parts does not matter, rather it is the sexual activity with a person that matters. To use an analogy, a human could cheat on another human even if their only sexual contact with the other human involved sex toys. In this case, what matters is that the activity is sexual and involves people, not that objects rather than body parts are used. As such, sex with a sexbot person could be cheating if the human was breaking their commitment.

While knowing whether a sexbot was a person would largely settle the cheating issue, there remains the epistemic problem of other minds. In this case, the problem is determining whether a sexbot has a mind that qualifies them as a person. There can, of course, be varying degrees of confidence in the determination and there could also be degrees of personness. Or, rather, degrees of how person-like a sexbot might be.

Thanks to Descartes and Turing, there is a language test for having a mind—roughly put, if a sexbot can engage in conversation that is indistinguishable from conversation with a human, then it would be reasonable to regard the sexbot as a person. That said, there might be good reasons for having a more extensive testing system for personhood which might include such things as testing for emotions and self-awareness. But, from a practical standpoint, if a sexbot can engage in a level of behavior that would qualify them for person status if they were a human, then it would be just as reasonable to regard the sexbot as a person as it would be to regard an analogous human as a person. To do otherwise would seem to be mere prejudice. As such, a human person could cheat with a sexbot that could pass this test.

Since it will be a long time (if ever) before such a sexbot is constructed, what will be of more immediate concern are sexbots that are person-like. That is, that are not able to meet the standards that would qualify a human as a person, yet have behavior that is sophisticated enough that they seem to be more than mere objects. One might consider an analogy here to animals: they do not qualify as human-level people, but their behavior does qualify them for a moral status above that of objects (at least for most moral philosophers and all decent people). In this case, the question about cheating becomes a question of whether the sexbot is person-like enough to enable cheating to take place.

One approach is to consider the matter from the perspective of the human—if the human engaging in sexual activity with the sexbot regards it as being person-like enough, then the activity can be seen as cheating. An objection to this is that it does not matter what the human thinks about the sexbot; what matters is its actual status. After all, if a human regards a human they are cheating with as a mere object, this does not make it so they are not cheating. Likewise, if a human feels like they are cheating, it does not mean they really are.

This can be countered by arguing that how the human feels does matter. After all, if the human thinks they are cheating and they are engaging in the behavior, they are still acting wrongly. To use an analogy, if a person thinks they are stealing something and take it anyway, they still have acted wrongly even if it turns out that they were not stealing (that the thing they took was actually being given away). The obvious objection to this line of reasoning is that while a person who thinks they are stealing did act wrongly by engaging in what they thought was theft, they did not actually commit a theft. Likewise, a person who thinks they are engaging in cheating, but are not, would be acting wrongly, but not cheating.

Another approach is to consider the matter objectively—the degree of cheating would be proportional to the degree that the sexbot is person-like. On this view, cheating with a person-like sexbot would not be as bad as cheating with a full person. The obvious objection is that one is either cheating or not; there are not degrees of cheating. The obvious counter is to try to appeal to the intuition that there could be degrees of cheating in this manner. To use an analogy, just as there can be degrees of cheating in terms of the sexual activity engaged in, there can also be degrees of cheating in terms of how person-like the sexbot is.

While person-like sexbots are still the stuff of science fiction, I suspect the future will see some interesting divorce cases in which this matter is debated in court.




Virtual Cheating III: “Robust” VR

Posted in Ethics, Philosophy, Relationships/Dating, Technology by Michael LaBossiere on June 26, 2017


As noted in previous essays, classic cheating involves sexual activity with a person while one is in a committed relationship that is supposed to exclude such activity. Visual VR can allow interaction with another person, but while such activity might have sexual content (such as nakedness and naughty talk), it would not be sexual activity in the usual sense that requires physical contact. Such behavior, as argued in the previous essay, might constitute a form of emotional infidelity—but not physical infidelity.

One of the iron laws of technology is that any technology that can be used for sex will be used for sex. Virtual reality (VR), in its various forms, is no exception. For the most part, VR is limited to sight and sound. That is, virtual reality is mostly just a virtual visual reality. However, researchers are hard at work developing tactile devices for the erogenous zones, thus allowing people to interact sexually across the internet. This is the start of what could be called “robust” VR. That is, one that involves more than just sight and sound. This sort of technology might make virtual cheating suitably analogous to real cheating.

As would be expected, most of the research has been focused on developing devices for men to use to have “virtual sex.” Going with the standards of traditional cheating, this sort of activity would not count as cheating. This is because the sexual interaction is not with another person, but with devices. The obvious analogy here is to less-sophisticated sex toys. If, for example, using a vibrator or blow-up doll by oneself does not count as cheating because the device is not a person, then the same should apply to more complicated devices, such as VR sex suits that can be used with VR sex programs. There is also the question of whether such activity counts as sex. On the one hand, it is some sort of sexual activity. On the other hand, using such a device would not end a person’s tenure as a virgin.

It is certainly worth considering that a user could develop an emotional relationship with their virtual sex partner and thus engage in a form of emotional infidelity. The obvious objection is that this virtual sex partner is certainly not a person and thus cheating would not be possible—after all, one cannot cheat on a person with an object. This can be countered by considering the classic epistemic problem of other minds. Because all one has to go on is external behavior, one never knows if the things that seem to be people really are people—that is, think and feel in the right ways (or at all). Since I do not know if anyone else has a mind as I do, I could have emotional attachments to entities that are not really people at all and never know that this is the case. As such, I could never know if I was cheating in the traditional sense if I had to know that I was interacting with another person. As might be suspected, this sort of epistemic excuse (“baby, I did not know she was a person”) is unlikely to be accepted by anyone (even epistemologists). What would seem to matter is not knowing that the other entity is a person, but having the right (or rather wrong) sort of emotional involvement. So, if a person could have feelings towards the virtual sexual partner that they “interact with”, then this sort of behavior could count as virtual cheating.

There are also devices that allow people to interact sexually across the internet; with each partner having a device that communicates with their partner’s corresponding devices. Put roughly, this is remote control sex. This sort of activity does avoid many of the possible harms of traditional cheating: there is no risk of pregnancy nor risk of STDs (unless one is using rented or borrowed equipment). While these considerations do impact utilitarian calculations, the question remains as to whether this would count as cheating or not.

On the one hand, the argument could be made that this is not direct sexual contact—each person is only directly “engaged” with their device. To use an analogy, imagine that someone has (unknown to you) connected your computer to a “stimulation device” so that every time you use your mouse or keyboard, someone is “stimulated.” In such cases, it would be odd to say that you were having sex with that person. As such, this sort of thing would not be cheating.

On the other hand, there is the matter of intent. In the case of the mouse example, the user has no idea what they are doing and it is that, rather than the remote-control nature of the activity, that matters. In the case of the remote-control interaction, the users are intentionally engaging in the activity and know what they are doing. The fact that it is happening via the internet does not matter. The moral status is the same as if they were in the same room, using the devices “manually” on each other. As such, while there is no actual physical contact of the bodies, the activity is sexual and controlled by those involved. As such, it would morally count as cheating. There can, of course, be a debate about degrees of cheating—presumably a case could be made that cheating using sex toys is not as bad as cheating using just body parts. I will, however, leave that to others to discuss.

In the next essay I will discuss cheating in the context of sex with robots and person-like VR beings.



The Ethics of Stockpiling Vulnerabilities

Posted in Business, Ethics, Philosophy, Politics, Technology by Michael LaBossiere on May 17, 2017

In May of 2017 the WannaCry ransomware swept across the world, impacting thousands of computers. The attack affected hospitals, businesses, and universities and the damage has yet to be fully calculated. While any such large-scale attack is a matter of concern, the WannaCry incident is especially interesting. This is because the foundation of the attack was stolen from the National Security Agency of the United States. This raises an important moral issue, namely whether states should stockpile knowledge of software vulnerabilities and the software to exploit them.

A stock argument for states maintaining such stockpiles is the same as the argument used to justify stockpiling weapons such as tanks and aircraft. The general idea is that such stockpiles are needed for national security: to protect and advance the interests of the state. In the case of exploiting vulnerabilities for spying, the security argument can be tweaked a bit by drawing an analogy to other methods of spying. As should be evident, to the degree that states have the right to stockpile physical weapons and engage in spying for their security, they also would seem to have the right to stockpile software weapons and knowledge of vulnerabilities.

The obvious moral counter argument can be built on utilitarian grounds: the harm done when such software and information is stolen and distributed exceeds the benefits accrued by states having such software and information. The WannaCry incident serves as an excellent example of this. While the NSA might have had a brief period of advantage when it had exclusive ownership of the software and information, the damage done by the ransomware to the world certainly exceeds this small, temporary advantage. Given the large-scale damage that can be done, it seems likely that the harm caused by stolen software and information will generally exceed the benefits to states. As such, stockpiling such software and knowledge of vulnerabilities is morally wrong.

This can be countered by arguing that states just need to secure their weaponized software and information. Just as a state is morally obligated to ensure that no one steals its missiles to use in criminal or terrorist endeavors, a state is obligated to ensure that its software and vulnerability information is not stolen. If a state can do this, then it would be just as morally acceptable for a state to have these cyberweapons as it would be for it to have conventional weapons.

The easy and obvious reply to this counter is to point out that there are relevant differences between conventional weapons and cyberweapons that make it very difficult to properly secure them from unauthorized use. One difference is that stealing software and information is generally much easier and safer than stealing traditional weapons. For example, a hacker can get into the NSA from anywhere in the world, but a person who wanted to steal a missile would typically need to break into and out of a military base. As such, securing cyberweapons can be more difficult than securing other weapons. Another difference is that almost everyone in the world has access to the deployment system for software weapons—a device connected to the internet. In contrast, someone who stole, for example, a missile would also need a launching platform. A third difference is that software weapons are generally easier to use than traditional weapons. Because of these factors, cyberweapons are far harder to secure and this makes their stockpiling very risky. As such, the potential for serious harm combined with the difficulty of securing such weapons would seem to make them morally unacceptable.

But, suppose that such weapons and vulnerability information could be securely stored—this would seem to answer the counter. However, it only addresses the stockpiling of weaponized software and does not justify stockpiling vulnerabilities. While adequate storage would prevent the theft of the software and the acquisition of vulnerability information from the secure storage, the vulnerability would remain to be exploited by others. While a state that has such vulnerability information would not be directly responsible for others finding the vulnerabilities, the state would still be responsible for knowingly allowing the vulnerability to remain, thus potentially putting the rest of the world at risk. In the case of serious vulnerabilities, the potential harm of allowing such vulnerabilities to remain unfixed would seem to exceed the advantages a state would gain in keeping the information to itself. As such, states should not stockpile knowledge of such critical vulnerabilities, but should inform the relevant companies.
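The utilitarian weighing that runs through this argument can be made concrete as a toy expected-value calculation. A minimal sketch, in which every figure (benefit, probabilities, harms) is an illustrative assumption rather than an estimate:

```python
# Toy expected-harm comparison for stockpiling vs. disclosing a
# vulnerability. All numbers are illustrative assumptions only.

def expected_value(benefit, p_leak, leak_harm, p_independent_find, find_harm):
    """Net expected value of stockpiling: the intelligence benefit minus
    the expected harm from theft/leak and from independent discovery."""
    return benefit - p_leak * leak_harm - p_independent_find * find_harm

# Stockpiling: some exclusive intelligence value, but exposure both to
# theft (a WannaCry-scale incident) and to others finding the same flaw.
stockpile = expected_value(
    benefit=10,
    p_leak=0.2, leak_harm=500,
    p_independent_find=0.3, find_harm=200,
)

# Disclosure: the vulnerability is patched, so no exclusive benefit
# but also no expected harm from the flaw remaining open.
disclose = 0

print("stockpile:", stockpile)
print("disclose:", disclose)
```

On these (deliberately rough) assumptions the expected value of stockpiling is strongly negative; the argument in the text is that for serious vulnerabilities, any plausible assignment of values yields the same ordering.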

The interconnected web of computers that forms the nervous system of the modern world is far too important to everyone to put at risk for the relatively minor and short-term gains that could be had by states creating malware and stockpiling vulnerabilities. I would use an obvious analogy to the environment; but people are all too willing to inflict massive environmental damage for relatively small short-term gains. This, of course, suggests that the people running states might prove as wicked and unwise regarding the virtual environment as they are regarding the physical environment.



Automation & Administration: An Immodest Proposal

Posted in Business, Ethics, Law, Philosophy, Politics, Technology by Michael LaBossiere on May 5, 2017

It has been almost a law that technological advances create more jobs than they eliminate. This, however, appears to be changing. It is predicted that nearly 15 million jobs will be created by the advance and deployment of automation and artificial intelligence by 2027. On the downside, it is also estimated that technological change will eliminate about 25 million jobs, for a net loss of some 10 million. Since the future is not yet now, the reality might be different—but it is generally wise to plan for the likely shape of things to come. As such, it is a good idea to consider how to address the likely loss of jobs.

One short term approach is moving people into jobs that are just ahead of replacement. This is rather like running ahead of an inextinguishable fire in a burning building—it merely postpones the inevitable. A longer-term approach is to add to the building so that you can keep on running as long as you can build faster than the fire can advance. This has been the usual approach to staying ahead of the fire of technology. An even better and rather obvious solution is to get out of the building and into one that will not catch on fire. Moving away from the metaphor, this would involve creating jobs that are technology proof.

If technology cannot fully replicate (or exceed) human capabilities, then there could be some jobs that are technology proof. To get a bit metaphysical, Descartes argued that merely physical systems would not be able to do all that an immaterial mind can do. For example, Descartes claimed that the ability to use true language required an immaterial mind—although he acknowledged that very impressive machines could be constructed that would have the appearance of thought. If he is right, then there could be a sort of metaphysical job security. Moving away from metaphysics, there could be limits on our technological abilities that preclude being able to build our true replacements. But, if technology can build entities that can do all that we can do, then no job would be safe—something could be made to take that job from a human. To gamble on either our special nature or the limits of technology is rather risky, so it would make more sense to take a more dependable approach.

One approach is creating job preserves (like game preserves, only for humans)—that is, deciding to protect certain jobs from technological change. This approach is nothing new. According to some accounts, one reason that Hero of Alexandria’s steam engine was not utilized in the ancient world was because it would have displaced the slaves who provided the bulk of the labor. While this option does have the advantage of preserving jobs, there are some clear and obvious problems with creating such an economic preserve. As two examples, there are the practical matters of sustaining such jobs and competing against other countries who are not engaged in such job protection.

Another approach is to intentionally create jobs that are not really needed and thus can be maintained even in the face of technological advancement. After all, if there is really no reason to have the job at all, there is no reason to replace it with a technological solution. While this might seem to be a stupid idea (and it is), it is not a new idea. There are numerous jobs that are not really needed that are still maintained. Some even pay extremely well. One general category of such jobs is administrative jobs. I will illustrate with my own area of experience, academics.

When I began my career in academics, the academy was already thick with administrators. However, many of them did things that were necessary, such as handling finances and organizing departments. As the years went on, I noticed that the academy was becoming infested with administrators. While this could be dismissed as mere anecdotal evidence on my part, it is supported by the data—the number of non-academic administrative and professional employees in academia has doubled in the past quarter century. This is, it must be noted, in the face of technological advance and automation which should have reduced the number of such jobs.

These jobs take many forms. As one example, in place of the traditional single dean, a college will have multiple deans of various ranks and the corresponding supporting staff. As another example, assessment has transformed from an academic fad to a permanent parasite (or symbiote, in cases where the assessment is worthwhile) that has grown fat upon the academic body. There has also been a blight of various vice presidents of this and that; many of which are often linked to what some call “political correctness.” Despite being, at best, useless, these jobs continue to exist and are even added to. While a sane person might see this as a problem to be addressed, a person with a somewhat different perspective would be inspired to make an immodest proposal: why not apply this model across the whole economy? To be specific, a partial solution to the problem of technology eliminating jobs is to create new administrative positions for those who lose their jobs. For example, if construction jobs were lost to constructicons, then they could be replaced with such jobs as “vice president of constructicon assessment”, “constructicon resource officer”, “constructicon gender identity consultant” and supporting staff.

It might be objected that it would be wrong, foolish and wasteful to create such jobs merely to keep people employed as jobs are consumed by technology. The easy and obvious reply is that if useless jobs are going to flourish anyway, they might as well serve a better purpose.


Should ISPs be Allowed to Sell Your Data?

Posted in Ethics, Law, Philosophy, Politics, Technology by Michael LaBossiere on March 31, 2017

Showing the extent of their concern for the privacy of Americans, Congress has overturned rules aimed at giving consumers more control over how ISPs use their data. Most importantly, these rules would have required consent from customers before the ISPs could sell sensitive data (such as financial information, health information and browsing history). Assuming the sworn defender of the forgotten, President Donald Trump, signs the bill into law, ISPs will be able to monetize the private data of their customers.

While the ISPs obviously want to make more money, giving that as the justification for stripping away the privacy of customers would not make for effective rhetoric. Instead, proponents make the usual vague and meaningless references to free markets. Since there is no actual substance to these noises, they do not merit a response.

They also advance more substantial reasons, such as the claim that companies such as Facebook monetize private data, the assertion that customers will benefit and the claim that this will fuel innovation. I will consider each in turn.

On the one hand, the claim that other companies already monetize private data could be dismissed as a mere fallacy of appeal to common practice. After all, the fact that others are doing something does not entail that it is a good thing. On the other hand, this line of reasoning can be seen as a legitimate appeal to fairness: it would be unfair that companies like Google and Facebook get to monetize private data while ISPs do not get to do so. The easy and obvious counter to this is that consumers can easily opt out of Google and Facebook by not using their services. While this means forgoing some useful services, it is a viable option. In contrast, going without internet access is extremely problematic and customers have very few (if any) alternatives. Even if a customer can choose between two or more ISPs, it is likely that they will all want to monetize the customers’ private data—it is simply too valuable a commodity to leave on the table. While it is not impossible for an ISP to try to win customers by choosing to forgo selling their data, this seems unlikely—thus customers will generally be stuck with the choice of giving up the internet or giving up their privacy. Given the coercive advantage of the ISPs, it is up to the state to protect the interests of the citizens (just as the state protects ISPs).

The claim that the customers will benefit is hard to evaluate in the abstract. After all, it is not yet known what, if anything, the ISPs will provide in return for the data. Facebook and Google offer valuable services in return for handing over data; but customers already pay ISPs for their services. It might turn out that the ISPs will offer customers deals that make giving up privacy appealing—such as lowered costs. However, anyone familiar with companies such as Comcast will have no faith in this. As such, the overturning of the privacy rules will benefit ISPs but will most likely not benefit consumers.

While the innovation argument is deployed in almost any discussion of technology, allowing ISPs to sell private data does not seem to be an innovation, unless one just means “change” by “innovation.” It also seems unlikely to lead to any innovations for the customers; although the ISPs will presumably work hard to innovate in ways to process and sell data. This innovation would be good for the ISPs, but would not seem to offer anything to the customers—any more than innovations in processing and selling chickens benefit the chickens.

Defenders of the ISPs could make the case that the data belongs to the ISP rather than the customer, so they have the right to sell it. Laying aside the usual arguments about privacy rights and sticking to ownership rights, this claim is easily defeated by the following analogy.

Suppose that I rent an office and use it to conduct my business, such as writing my books. The owner has every right to expect me to pay my rent. However, they have no right to set up cameras to observe my work and interactions with people and then sell the information they gather as their own. That would be theft. In the case of the ISP, I am leasing access to the internet, but what I do in this virtual property belongs to me—they have no right of ownership to what I do. After all, I am doing all the labor. Naturally, I can agree to sell my labor; but this needs to be my choice. As such, when ISPs insist they have the right to sell customers’ private data, they are like landlords claiming they have a right to sell anything valuable they can learn by spying on their tenants. This is clearly wrong. Unfortunately, Congress belongs to the ISPs and not to the people.





Voice-Directed Humans

Posted in Technology by Michael LaBossiere on March 6, 2017

In utopian science fiction, robots free humans from the toil and labor of the body so that they can live lives of enlightenment and enjoyment. In dystopian science fiction, robots become the masters or exterminators of humanity. As should be expected, reality is heading towards the usual mean between dystopia and utopia, the realm of middletopia. This is a mix of the awful and the not-so-bad that has characterized most of human history.

In some cases, robots have replaced humans in jobs that are repetitious, unfulfilling and dangerous. This has allowed the displaced humans to move on to other jobs that are repetitious, unfulfilling and dangerous to await their next displacement. Robots have also replaced humans in jobs that are more desirable to humans, such as in the fields of law and journalism. This leads to questions about what jobs will be left to humans and which will be taken over by robots (broadly construed).

The intuitive view is that robots will not be able to replace humans in “creative” jobs but that they will be able to replace humans in nearly all physical labor. As such, people tend to think that robots will replace warehouse pickers, construction workers and janitors. Artists, philosophers, and teachers are supposed to be safe from the robot revolution. In some cases, the intuitive view has proven correct—robots are routinely used for physical labor such as constructing cars and no robot Socrates has shown up. However, the intuitive view is also in error in many cases. As noted above, some journalism and legal tasks are done with automation. There are also seemingly easy-to-automate tasks, such as cleaning toilets or doing construction, that are very hard for robots but easy for humans.

One example of a task that would seem ideal for automation is warehouse picking, especially of the sort done by Amazon. Amazon and other companies have automated some of the process, making use of robots in various tasks. But, while a robot might bring shelves to human workers, the humans are the ones picking the products for shipping. Since humans tend to have poor memories and get bored with picking, human pickers have been automated—they wear headsets connected to computers that tell them what to do, then they tell the computers what they have done. For example, a human might be directed to pick five boxes of acne medicine, then five more boxes of acne medicine, then a copy of Fifty Shades of Grey and finally an Android phone. Humans are very good at the actual picking, perhaps due to our hunter-gatherer ancestry.

In this sort of voice-directed warehouse, the humans are being controlled by the machines. The machines take care of the higher-level activities of organizing orders and managing, while the human brain handles the task of selecting the right items. While selecting seems simple, this is because it is simple to us humans but not for existing robots. We are good at recognizing, grouping and distinguishing things and have the manual dexterity to perform the picking tasks, thanks to our opposable thumbs. Unfortunately for the human worker, these picking tasks are probably not very rewarding, creative or interesting and this is exactly the sort of drudge job that robots are supposed to free us from.
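The division of labor described above, with software sequencing the work and recording confirmations while the human does the physical picking, can be sketched as a minimal dispatch loop. This is an illustrative toy, not any vendor's system; the item names, shelf codes, and the `confirm` callback standing in for the voice headset are all hypothetical:

```python
# Minimal sketch of a voice-directed picking loop: the computer
# sequences the tasks and logs confirmations; the human (here
# simulated by a callback) does the actual picking.

from dataclasses import dataclass

@dataclass
class PickTask:
    item: str
    quantity: int
    shelf: str

def run_shift(tasks, confirm):
    """Direct a worker through the tasks in order.

    `confirm` stands in for the headset exchange: it receives a
    PickTask and returns the quantity the worker reports picking.
    Returns a log of (item, requested, picked) tuples.
    """
    log = []
    for task in tasks:
        picked = confirm(task)  # voice prompt + spoken confirmation
        log.append((task.item, task.quantity, picked))
    return log

if __name__ == "__main__":
    orders = [
        PickTask("acne medicine", 5, "A3"),
        PickTask("paperback novel", 1, "B7"),
        PickTask("android phone", 1, "C1"),
    ]
    # Simulated worker who always picks the requested quantity.
    for item, wanted, got in run_shift(orders, lambda t: t.quantity):
        print(f"{item}: wanted {wanted}, picked {got}")
```

Note where the intelligence sits: all the ordering, bookkeeping, and error-checking lives in the software, while the only thing delegated to the human is the step inside `confirm`, the recognition and manual dexterity that remain hard to automate.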

While voice-directed warehousing is one example of humans being directed by robots, it is easy enough to imagine the same sort of approach being applied to similar sorts of tasks; namely those that require manual dexterity and what might be called “animal skills” such as object recognition. It is also easy to imagine this approach extended far beyond these jobs to cut costs.

The main way that this approach would cut costs would be by allowing employers to buy skilled robots and use them to direct unskilled human labor. For simple jobs, the “robot” could be a simple headset attached to a computer. For more complex jobs, a human might wear a VR-style “robot” helmet with the machine directing via augmented reality.

The humans, as noted above, provide the manual dexterity and all those highly evolved capacities. The robots provide the direction. Since any normal human body would suffice to serve the controlling robot, the value of human labor would be extremely low and wages would, of course, match this value. Workers would be easy to replace—if a worker is fired or quits, then a new worker can simply don the robot controller and get about the task with little training. This would also save in education costs—such a robot directed laborer would not need an education in job skills (the job skills are provided by the robots), just the basics needed to be directed properly by the robot. This does point towards a dystopia in which human bodies are driven around through the work day by robots, then released and sent home in driverless cars.

The employment of humans in these roles would, of course, only continue for as long as humans are the cheapest form of available labor. If advances allow robots to do these tasks cheaper, then the humans would be replaced. Alternatively, biological engineering might lead to the production of engineered organics that can replace humans; perhaps a pliable ape-like creature that is just smart enough to be directed by the robots, but not human enough to be considered a slave. This would presumably continue until no jobs remained for humans. Other than making profits, of course.




Social Media & Shaming

Posted in Ethics, Philosophy, Technology by Michael LaBossiere on February 22, 2017

While shaming was weaponized long ago as a means of punishment, social media has transformed it into a weapon of reversed mass destruction. Rather than a single weapon destroying masses, it is the social media masses that are destroying one person at a time. Perhaps the best known example of this is the destruction of Justine Sacco, the woman who tweeted “Going to Africa. Hope I don’t get AIDS. Just kidding. I’m white!” While Sacco is currently the best known victim of such shaming, the practice has become a common one and the list of casualties increases each day.

While it is tempting to issue a blanket condemnation of shaming, this would be a mistake. While shaming is abused, it can be a morally acceptable form of punishment. However, this requires that it be used properly and justly.

As with any form of punishment, shaming should only be used when the target has done wrong. Unlike with actual civil and criminal laws, there is not a codified set of rules specifying what actions are wrong in a way that warrants shaming. As with most social interactions, people are guided by vague norms, intuitions, traditions and feelings. As such, the practice of shaming can be rather chaotic. That said, it is certainly possible to consider situations rationally and assess whether they are shame worthy or not—though disputes are inevitable. Working out such guidelines would be analogous to developing a hybrid between laws and etiquette and would presumably require at least a small book, which is far beyond the scope of this short essay. However, I do have some recommendations.

In the United States criminal justice system, there is a presumption of innocence on the part of the defendant. This is based on the ideal that it is better to allow the guilty to go free than to punish the innocent. The same sort of presumption should be extended to those who are accused of engaging in shame worthy actions. I would even suggest a specific sort of presumption, namely a presumption of error. This is to begin the consideration by assuming the accused acted from error rather than malice.

One common type of error that leads to excessive shaming is when a person attempts to be funny, but fails to do so because of a lack of skill. Sacco’s infamous tweet seems to be an example of this sort of error. A skilled comedian could have created a piece of satire using the same basic idea and directed attention to the issue of race in the context of AIDS. Because of a lack of comedic skill, Sacco’s tweet came across as racist—although all the evidence seems to clearly show that this is not what she intended. Another type of error is that of ignorance—a person has no malicious intent, but errs by not knowing something rather important. For example, a person trying to be funny might appear racist because they are unaware of the social norms governing who has the right to use which terms of race. The obvious example is a white person imitating a black comedian’s use of the n-word without realizing that the word is essentially off limits to white comedians.

If a person is reasonably judged worthy of shaming, the next concern is how and to what extent the person should be shamed and the objective of the shaming. Since shaming is a punishment, the usual moral considerations about punishment apply.

One reason to punish by shaming is deterrence—so the shamed will not engage in shameful activity again and that others will be less inclined to behave in similar ways. Another reason is retribution—to “balance the books” by harming the shamed in return for the harm they did. While retribution strikes me as morally problematic (at best), both deterrence and retribution should be limited by the principle of proportionality. That is, the punishment should be comparable in severity to the harm done. If the punishment is excessive, then it creates a new harm that would require punishment and this punishment would need to be proportional or there would need to be another punishment and so on to infinity. As such, even if retribution is embraced, it can only be justified when it matches the harm inflicted.

Unfortunately, in social media shaming the punishment tends to be excessive. In fact, the punishments for such offenses can exceed those imposed for serious civil or criminal violations of the law. For example, Sacco’s failed attempt at humor cost her job and wrecked her life. One reason that the punishment can be excessive is that people are often insulated from the consequences of their acts of punishment, and hence they are freed to be harsher than they would be in person. That said, shamers are sometimes themselves shamed for shaming, thus creating a vicious circle. Another reason for the excesses of punishment is the scope of social media. A person’s shame can be broadcast to the entire world and the entire world can get in on punishing the person, thus inflicting excessive harm. This also helps explain why people who are shamed are often fired—their employers fear the wrath of the social media mob and will fire a person to protect themselves.

Another, and what I think is the best, reason to punish is redemption. Such punishment aims to inform the person that their action is unacceptable, to give them a chance to atone for their misdeed and to allow them a chance to be accepted back into the social fold. This approach does have some limits. The person must be subject to feeling shame or vulnerable to the consequences of being shamed. A person who is shameless (or at least without shame in the matter at hand) will be rather resistant to attempts to appeal to their sense of shame. A person who can suffer little or no ill-consequences from being shamed will also not be corrected by shaming. Donald Trump is often presented as an example of a person who is either shameless or able to effectively avoid the negative consequences of being shamed (or both).

Punishing for the purpose of redemption does put a limit on the punishment that should be inflicted. After all, excessive punishment is unlikely to teach a person a moral lesson about how they should act (but it can teach a practical lesson). Also, excessive punishment can do so much damage that a person cannot effectively make it back into the social fold. Such redemptive shaming should be severe enough to send the intended message, but moderate enough that the person can achieve redemption. What is often forgotten about redemptive punishment is the important role of society—redemption is not merely about the wrongdoer redeeming themselves, but other people accepting this redemption. Those who engage in social media shaming all too often rush to punish and then move on to the next transgressor. In doing so, they fail in their obligations to those they have punished, which includes offering an opportunity for redemption.
