A Philosopher's Blog

Should ISPs be Allowed to Sell Your Data?

Posted in Ethics, Law, Philosophy, Politics, Technology by Michael LaBossiere on March 31, 2017

Showing the extent of their concern for the privacy of Americans, Congress has overturned rules aimed at giving consumers more control over how ISPs use their data. Most importantly, these rules would have required consent from customers before the ISPs could sell sensitive data (such as financial information, health information and browsing history). Assuming the sworn defender of the forgotten, President Donald Trump, signs the bill into law, ISPs will be able to monetize the private data of their customers.

While the ISPs obviously want to make more money, giving that as the justification for stripping away the privacy of customers would not make for effective rhetoric. Instead, proponents make the usual vague and meaningless references to free markets. Since there is no actual substance to these noises, they do not merit a response.

They also advance more substantial reasons, such as the claim that companies such as Facebook monetize private data, the assertion that customers will benefit and the claim that this will fuel innovation. I will consider each in turn.

On the one hand, the claim that other companies already monetize private data could be dismissed as a mere fallacy of appeal to common practice. After all, the fact that others are doing something does not entail that it is a good thing. On the other hand, this line of reasoning can be seen as a legitimate appeal to fairness: it would be unfair that companies like Google and Facebook get to monetize private data while ISPs do not get to do so. The easy and obvious counter to this is that consumers can easily opt out of Google and Facebook by not using their services. While this means forgoing some useful services, it is a viable option. In contrast, going without internet access is extremely problematic and customers have very few (if any) alternatives. Even if a customer can choose between two or more ISPs, it is likely that they will all want to monetize the customers’ private data—it is simply too valuable a commodity to leave on the table. While it is not impossible for an ISP to try to win customers by choosing to forgo selling their data, this seems unlikely—thus customers will generally be stuck with the choice of giving up the internet or giving up their privacy. Given the coercive advantage of the ISPs, it is up to the state to protect the interests of the citizens (just as the state protects ISPs).

The claim that the customers will benefit is hard to evaluate in the abstract. After all, it is not yet known what, if anything, the ISPs will provide in return for the data. Facebook and Google offer valuable services in return for handing over data; but customers already pay ISPs for their services. It might turn out that the ISPs will offer customers deals that make giving up privacy appealing—such as lowered costs. However, anyone familiar with companies such as Comcast will have no faith in this. As such, the overturning of the privacy rules will benefit ISPs but will most likely not benefit consumers.

While the innovation argument is deployed in almost any discussion of technology, allowing ISPs to sell private data does not seem to be an innovation, unless one just means “change” by “innovation.” It also seems unlikely to lead to any innovations for the customers; although the ISPs will presumably work hard to innovate in ways to process and sell data. This innovation would be good for the ISPs, but would not seem to offer anything to the customers—any more than innovations in processing and selling chickens benefit the chickens.

Defenders of the ISPs could make the case that the data belongs to the ISP rather than the customer, so they have the right to sell it. Laying aside the usual arguments about privacy rights and sticking to ownership rights, this claim is easily defeated by the following analogy.

Suppose that I rent an office and use it to conduct my business, such as writing my books. The owner has every right to expect me to pay my rent. However, they have no right to set up cameras to observe my work and interactions with people and then sell the information they gather as their own. That would be theft. In the case of the ISP, I am leasing access to the internet, but what I do in this virtual property belongs to me—they have no right of ownership to what I do. After all, I am doing all the labor. Naturally, I can agree to sell my labor; but this needs to be my choice. As such, when ISPs insist they have the right to sell customers’ private data, they are like landlords claiming they have a right to sell anything valuable they can learn by spying on their tenants. This is clearly wrong. Unfortunately, Congress belongs to the ISPs and not to the people.

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter


Voice-Directed Humans

Posted in Technology by Michael LaBossiere on March 6, 2017

In utopian science fiction, robots free humans from the toil and labor of the body so that they can live lives of enlightenment and enjoyment. In dystopian science fiction, robots become the masters or exterminators of humanity. As should be expected, reality is heading towards the usual mean between dystopia and utopia, the realm of middletopia. This is a mix of the awful and the not-so-bad that has characterized most of human history.

In some cases, robots have replaced humans in jobs that are repetitious, unfulfilling and dangerous. This has allowed the displaced humans to move on to other jobs that are repetitious, unfulfilling and dangerous, to await their next displacement. Robots have also replaced humans in jobs that are more desirable to humans, such as in the fields of law and journalism. This leads to questions about what jobs will be left to humans and which will be taken over by robots (broadly construed).

The intuitive view is that robots will not be able to replace humans in “creative” jobs but that they will be able to replace humans in nearly all physical labor. As such, people tend to think that robots will replace warehouse pickers, construction workers and janitors. Artists, philosophers, and teachers are supposed to be safe from the robot revolution. In some cases, the intuitive view has proven correct—robots are routinely used for physical labor such as constructing cars and no robot Socrates has shown up. However, the intuitive view is also in error in many cases. As noted above, some journalism and legal tasks are done with automation. There are also seemingly easy-to-automate tasks, such as cleaning toilets or doing construction, that are very hard for robots, but easy for humans.

One example of a task that would seem ideal for automation is warehouse picking, especially of the sort done by Amazon. Amazon and other companies have automated some of the process, making use of robots in various tasks. But, while a robot might bring shelves to human workers, the humans are the ones picking the products for shipping. Since humans tend to have poor memories and get bored with picking, human pickers have been automated—they wear headsets connected to computers that tell them what to do, then they tell the computers what they have done. For example, a human might be directed to pick five boxes of acne medicine, then five more boxes of acne medicine, then a copy of Fifty Shades of Grey and finally an Android phone. Humans are very good at the actual picking, perhaps due to our hunter-gatherer ancestry.

In this sort of voice-directed warehouse, the humans are being controlled by the machines. The machines take care of the higher-level activities of organizing orders and managing, while the human brain handles the task of selecting the right items. While selecting seems simple, this is because it is simple to us humans but not for existing robots. We are good at recognizing, grouping and distinguishing things and have the manual dexterity to perform the picking tasks, thanks to our opposable thumbs. Unfortunately for the human worker, these picking tasks are probably not very rewarding, creative or interesting and this is exactly the sort of drudge job that robots are supposed to free us from.

While voice-directed warehousing is one example of humans being directed by robots, it is easy enough to imagine the same sort of approach being applied to similar sorts of tasks; namely those that require manual dexterity and what might be called “animal skills” such as object recognition. It is also easy to imagine this approach extended far beyond these jobs to cut costs.

The main way that this approach would cut costs would be by allowing employers to buy skilled robots and use them to direct unskilled human labor. For simple jobs, the “robot” could be a simple headset attached to a computer. For more complex jobs, a human might wear a VR-style “robot” helmet, with the machine directing via augmented reality.

The humans, as noted above, provide the manual dexterity and all those highly evolved capacities. The robots provide the direction. Since any normal human body would suffice to serve the controlling robot, the value of human labor would be extremely low and wages would, of course, match this value. Workers would be easy to replace—if a worker is fired or quits, then a new worker can simply don the robot controller and get about the task with little training. This would also save in education costs—such a robot directed laborer would not need an education in job skills (the job skills are provided by the robots), just the basics needed to be directed properly by the robot. This does point towards a dystopia in which human bodies are driven around through the work day by robots, then released and sent home in driverless cars.

The employment of humans in these roles would, of course, only continue for as long as humans are the cheapest form of available labor. If advances allow robots to do these tasks cheaper, then the humans would be replaced. Alternatively, biological engineering might lead to the production of engineered organics that can replace humans; perhaps a pliable ape-like creature that is just smart enough to be directed by the robots, but not human enough to be considered a slave. This would presumably continue until no jobs remained for humans. Other than making profits, of course.


Social Media & Shaming

Posted in Ethics, Philosophy, Technology by Michael LaBossiere on February 22, 2017


While shaming was weaponized long ago as a means of punishment, social media has transformed it into a weapon of reversed mass destruction. Rather than a single weapon destroying masses, it is the social media masses that are destroying one person at a time. Perhaps the best known example of this is the destruction of Justine Sacco, the woman who tweeted “Going to Africa. Hope I don’t get AIDS. Just kidding. I’m white!” While Sacco is currently the best known victim of such shaming, the practice has become a common one and the list of casualties increases each day.

While it is tempting to issue a blanket condemnation of shaming, this would be a mistake. While shaming is abused, it can be a morally acceptable form of punishment. However, this requires that it be used properly and justly.

As with any form of punishment, shaming should only be used when the target has done wrong. Unlike with actual civil and criminal laws, there is not a codified set of rules specifying what actions are wrong in a way that warrants shaming. As with most social interactions, people are guided by vague norms, intuitions, traditions and feelings. As such, the practice of shaming can be rather chaotic. That said, it is certainly possible to consider situations rationally and assess whether they are shame worthy or not—though disputes are inevitable. Working out such guidelines would be analogous to developing a hybrid between laws and etiquette and would presumably require at least a small book, which is far beyond the scope of this short essay. However, I do have some recommendations.

In the United States criminal justice system, there is a presumption of innocence on the part of the defendant. This is based on the ideal that it is better to allow the guilty to go free than to punish the innocent. The same sort of presumption should be extended to those who are accused of engaging in shame worthy actions. I would even suggest a specific sort of presumption, namely a presumption of error. This is to begin the consideration by assuming the accused acted from error rather than malice.

One common type of error that leads to excessive shaming is when a person attempts to be funny, but fails to do so because of a lack of skill. Sacco’s infamous tweet seems to be an example of this sort of error. A skilled comedian could have created a piece of satire using the same basic idea and directed attention to the issue of race in the context of AIDS. Because of a lack of comedic skill, Sacco’s tweet came across as racist—although all the evidence seems to clearly show that this is not what she intended. Another type of error is that of ignorance—a person has no malicious intent, but errs by not knowing something rather important. For example, a person trying to be funny might appear racist because they are unaware of the social norms governing who has the right to use which terms of race. The obvious example is a white person imitating a black comedian’s use of the n-word without realizing that the word is essentially off limits to white comedians.

If a person is reasonably judged worthy of shaming, the next concern is how and to what extent the person should be shamed and the objective of the shaming. Since shaming is a punishment, the usual moral considerations about punishment apply.

One reason to punish by shaming is deterrence—so the shamed will not engage in shameful activity again and that others will be less inclined to behave in similar ways. Another reason is retribution—to “balance the books” by harming the shamed in return for the harm they did. While retribution strikes me as morally problematic (at best), both deterrence and retribution should be limited by the principle of proportionality. That is, the punishment should be comparable in severity to the harm done. If the punishment is excessive, then it creates a new harm that would require punishment and this punishment would need to be proportional or there would need to be another punishment and so on to infinity. As such, even if retribution is embraced, it can only be justified when it matches the harm inflicted.

Unfortunately, in social media shaming the punishment tends to be excessive. In fact, the punishments for such offenses can exceed those imposed for serious civil or criminal violations of the law. For example, Sacco’s failed attempt at humor cost her job and wrecked her life. One reason that the punishment can be excessive is that people are often insulated from consequences of their acts of punishment, and hence they are freed to be harsher than they would be in person. That said, shamers are sometimes themselves shamed for shaming, thus creating a vicious circle. Another reason for the excesses of punishment is the scope of social media. A person’s shame can be broadcast to the entire world and the entire world can get in on punishing the person, thus inflicting excessive harm. This also helps explain why people who are shamed are often fired—their employers fear the wrath of the social media mob and will fire a person to protect themselves.

Another, and what I think is the best, reason to punish is redemption. Such punishment aims to inform the person that their action is unacceptable, to give them a chance to atone for their misdeed and to allow them a chance to be accepted back into the social fold. This approach does have some limits. The person must be subject to feeling shame or vulnerable to the consequences of being shamed. A person who is shameless (or at least without shame in the matter at hand) will be rather resistant to attempts to appeal to their sense of shame. A person who can suffer little or no ill-consequences from being shamed will also not be corrected by shaming. Donald Trump is often presented as an example of a person who is either shameless or able to effectively avoid the negative consequences of being shamed (or both).

Punishing for the purpose of redemption does put a limit on the punishment that should be inflicted. After all, excessive punishment is unlikely to teach a person a moral lesson about how they should act (but it can teach a practical lesson). Also, excessive punishment can do so much damage that a person cannot effectively make it back into the social fold. Such redemptive shaming should be severe enough to send the intended message, but moderate enough that the person can achieve redemption. What is often forgotten about redemptive punishment is the important role of society—redemption is not merely about the wrongdoer redeeming themselves, but other people accepting this redemption. Those who engage in social media shaming all too often rush to punish and then move on to the next transgressor. In doing so, they fail in their obligations to those they have punished, which includes offering an opportunity for redemption.


Autonomous Vehicles: Solving an Unnecessary Problem?

Posted in Ethics, Philosophy, Technology by Michael LaBossiere on February 15, 2017

While motor vehicle fatalities do not get the attention of terrorist attacks (unless a celebrity is involved), the roads of the United States are no stranger to blood.  From 2000 to 2015, the motor vehicle deaths per year ranged from a high of 43,005 in 2005 to a low of 32,675 in 2014. In 2015 there were 35,092 motor vehicle deaths and last year the number went back up to around 40,000. Given the high death toll, there is clearly a problem that needs to be solved.

One of the main reasons being advanced for the deployment of autonomous vehicles is that they will make the roads safer and thus reduce the carnage. While predictions of the imminent arrival of autonomous vehicles are overly optimistic, the idea that they would reduce motor vehicle deaths is certainly plausible. After all, autonomous vehicles will not be subject to road rage, exhaustion, intoxication, poor judgment, distraction and the other maladies that afflict human drivers and contribute to the high death tolls. Motor vehicle deaths will certainly not be eliminated even if all vehicles were autonomous, but the likely reduction in the death toll does present a very strong moral and practical reason to deploy such vehicles. That said, it is still worth considering whether the autonomous vehicle is aimed at solving an unnecessary problem. Considering this matter requires going back in time, to the rise of the automobile in the United States.

As the number of cars increased in the United States, so did the number of deaths. One contributing factor to the high number of deaths was that American cars were rather unsafe and this led Ralph Nader to write his classic work, Unsafe at Any Speed. Thanks to Nader and others, the American automobile became much safer and motor vehicle fatalities decreased. While making cars safer was certainly a good thing, it can be argued that this approach was fundamentally flawed. I will use an analogy to make my point.

Imagine, if you will, that people insist on swinging hammers around as they go about their day.  As would be suspected, the hammer swinging would often result in injuries and property damage. Confronted by these harms, solutions are proposed and implemented. People wear ever better helmets and body armor to protect them from wild swings. Hammers are also continuously redesigned so that they inflict less damage when hitting, for example, a face.  Eventually Google and other companies start work on autonomous swinging hammers that will be much better than humans at avoiding hitting other people and things. While all these safety improvements would be better than the original situation of unprotected people swinging very dangerous hammers around, this approach seems to be fundamentally flawed. After all, if people stopped swinging hammers around, then the problem would be solved.

An easy and obvious reply to my analogy is that using motor vehicles, unlike random hammer swinging, is rather important. For one thing, a significant percentage of the economy is built around the motor vehicle. This includes the obvious things like vehicle sales, vehicle maintenance, gasoline sales, road maintenance and so on. It also includes less obvious aspects of the economy that involve the motor vehicle, such as how they contribute to the success of stores like Walmart. The economic value of the motor vehicle, it can be argued, provides a justification for accepting the thousands of deaths per year. While it is certainly desirable to reduce these deaths, getting rid of motor vehicles is not a viable economic option—thus autonomous vehicles are a good potential partial solution to the death problem. Or are they?

One obvious problem with the autonomous vehicle solution is that they are trying to solve the death problem within a system created around human drivers and their wants. This system of lights, signs, turn lanes, crosswalks and such is extremely complicated—thus creating difficult engineering and programming problems. It would seem to make more sense to use the resources being poured into autonomous vehicles to develop a better and safer transportation system that does not center around a bad idea: the individual motor vehicle operating within a complicated road system. On this view, autonomous vehicles are solving an unnecessary problem: they are merely better hammers.

This line of argumentation can be countered in a couple of ways. One way is to present the economic argument again: autonomous vehicles preserve the individual motor vehicle that is economically critical while being likely to reduce the death fee paid for this economy. Another way is to argue that the cost of creating a new transportation system would be far more than the cost of developing autonomous vehicles that can operate within the existing system. A third way is to make the plausible case that autonomous vehicles are a step towards developing a new transportation system. People tend to need a slow adjustment period to major changes and the autonomous vehicles will allow a gradual transition from distracted human drivers, to autonomous vehicles operating alongside the distracted humans, to a transportation infrastructure rebuilt entirely around autonomous vehicles (perhaps with a completely distinct system for walkers, bikers and runners). Going back to the hammer analogy, the self-swinging hammer would reduce hammer injuries and could allow a transition to be made away from hammer swinging altogether.


Swarms

Posted in Ethics, Philosophy, Technology by Michael LaBossiere on December 16, 2016

Anyone who has played RTS games such as Blizzard’s Starcraft knows the basics of swarm warfare: you build a vast swarm of cheap units and hurl them against the enemy’s smaller force of more expensive units. The plan is that although the swarm will be decimated, the enemy will be exterminated. The same tactic was the basis of the classic tabletop game Ogre—it pitted a lone intelligent super tank against a large force of human infantry and armor. And, of course, the real world features numerous examples of swarm warfare—some successful for those using the swarm tactic (ants taking out a larger foe), some disastrous (massed infantry attacks on machine guns).

The latest approach to swarm tactics is to build a swarm of drones and deploy them against the enemy. While such drones will tend to be airborne units, they could also be ground or sea machines. In terms of their attacks, there are many options. The drones could be large enough to be equipped with weapons, such as small caliber guns, that would allow them to engage and return to reload for future battles. Some might be equipped with melee weapons, poisons, or biological weapons. The drones could also be suicide machines—small missiles intended to damage the enemy by destroying themselves.

While the development of military drone swarms will no doubt fall within the usual high cost of developing new weapon technology, the drones themselves can be relatively cheap. After all, they will tend to be much smaller and simpler than existing weapons such as aircraft, ships and ground vehicles. The main cost will most likely be in developing the software to make the drones operate effectively in a swarm; but after that it will be just a matter of mass producing the hardware.

If effective software and cost-effective hardware can be developed, one of the main advantages of the battle swarm will be its low cost. While such low-cost warfare might be problematic for defense contractors who have grown accustomed to massive contracts for big ticket items, it would certainly be appealing to those who are concerned about costs and reducing government spending. After all, if low cost drones could replace expensive units, defense expenses could be significantly reduced. The savings could be used for other programs or allow for tax cuts. Or perhaps they will just build billions of dollars of drones.

Low cost units, if effective, can also confer a significant attrition advantage. If, for example, thousands of dollars of drones can take down millions of dollars of aircraft, then the side with the drones stands a decent chance of winning. If hundreds of dollars of drones can take down millions of dollars of aircraft, then the situation is even better for the side with the drones.

The low cost does raise some concerns, though. Once the drone controlling software makes its way out into the world (via the inevitable hack, theft, or sale), then everyone will be using swarms. This will recreate the IED and suicide bomber situation, only on a vastly greater scale. Instead of IEDs in the road, they will be flying around cities, looking for targets. Instead of a few suicide bombers with vests, there will be swarms of drones loaded with explosives. Since Uber comparisons are now mandatory, the swarm will be the Uber of death.

This does raise moral concerns about the development of the drone software and technology; but the easy and obvious reply is that there is nothing new about this situation: every weapon ever developed eventually makes the rounds. As such, the usual ethics of weapon development applies here, with due emphasis on the possibility of providing another cheap and effective way to destroy and kill.

One short term advantage of the first swarms is that they will be facing weapons designed primarily to engage small numbers of high value targets. For example, air defense systems now consist mainly of expensive missiles designed to destroy very expensive aircraft. Firing a standard anti-aircraft missile into a swarm will destroy some of the drones (assuming the missile detonates), but enough of the swarm will probably survive the attack for it to remain effective. It is also likely that the weapons used to defend against the drones will cost far more than the drones, which ties back into the cost advantage.

This advantage of the drones would be quickly lost if effective anti-swarm weapons are developed. Not surprisingly, gamers have already worked out effective responses to swarms. In D&D/Pathfinder players generally loathe swarms for the same reason that ill-prepared militaries will loathe drone swarms: while the individual swarm members are easy to kill, it is all but impossible to kill enough of them with standard weapons. In the game, players respond to swarms with area-of-effect attacks, such as fireballs (or running away). These sorts of attacks can consume the entire swarm and either eliminate it or reduce its numbers so it is no longer a threat. While the real world has an unfortunate lack of wizards, the same basic idea will work against drone swarms: cheap weapons that do moderate damage over a large area. One likely weapon is a battery of large, automatic shotguns that would fill the sky with pellets or flechettes. Missiles could also be designed that act like claymore mines in the sky, spraying ball bearings in almost all directions. And, obviously enough, swarms will be countered by swarms.

The drones would also be subject to electronic warfare—if they are being remotely controlled, this connection could be disrupted. Autonomous drones would be far less vulnerable, but they would still need to coordinate with each other to remain a swarm and this coordination could be targeted.

The practical challenge would be to make the defenses cheap enough to make them cost effective. Then again, countries that are happy to burn money for expensive weapon systems, such as the United States, would not need to worry about the costs. In fact, defense contractors will be lobbying hard for expensive swarm and anti-swarm systems.

The swarms also inherit the existing moral concerns about non-swarm drones, be they controlled directly by humans or deployed as autonomous killing machines. The ethical problems of swarms controlled by a human operator would be the same as the ethical problems of a single drone controlled by a human; the difference in numbers would not seem to make a moral difference. For example, if drone assassination with a single drone is wrong (or right), then drone assassination with a swarm would also be wrong (or right).

Likewise, an autonomous swarm is not morally different from a single autonomous unit in terms of the ethics of the situation. For example, if deploying a single autonomous killbot is wrong (or right), then deploying an autonomous killbot swarm is wrong (or right). That said, perhaps there is a greater chance that an autonomous killbot swarm will develop a rogue hive mind and turn against us. Or perhaps not. In any case, Will Rogers will be proven right once again: “You can’t say that civilization don’t advance, however, for in every war they kill you in a new way.”


Gaming & Groping II: Obligations

Posted in Ethics, Philosophy, Technology, Video Games by Michael LaBossiere on November 2, 2016

In my previous essay, I discussed some possible motivations for groping in VR games, which is now a thing. The focus of what follows is on the matter of protecting gamers from such harassment on the new frontiers of gaming.

Since virtual groping is a paradigm of a first world problem, it might be objected that addressing it is a waste of time. After all, the objection can be made that resources that might be expended on combating virtual groping should be spent on addressing real groping. After all, a real grope is far worse than a virtual grope—and virtual gropes can be avoided by simply remaining outside of the virtual worlds.

This sort of objection does have some merit. After all, it is sensible to address problems in order of their seriousness. To use an analogy, if a car is skidding out of control at the same time an awful song comes on the radio, then the driver should focus on getting the car back under control and not waste time on the radio.  Unless, of course, it is “The Most Unwanted Song.”

The reasonable reply to this objection is that this is not a situation where it is one or the other, but not both. While time spent addressing virtual groping is time not spent on addressing real groping, addressing virtual groping does not preclude addressing real groping. Also, pushing this sort of objection can easily lead into absurdity: for anything a person is doing, there is almost certainly something else they could be doing that would have better moral consequences. For example, a person who spends time and money watching a movie could use that time and money to address a real problem, such as crime or drug addiction. But, as has so often been argued, this would impose unreasonable expectations on people and would ultimately create more harm than good. As such, while I accept that real groping is worse than virtual groping, I am not failing morally by taking time to address the virtual rather than the real in this essay.

It could also be objected that there is no legitimate reason to be worried about virtual groping on the obvious grounds that it is virtual rather than real. After all, when people play video games, they routinely engage in virtual violence against each other—yet this is not seen as a special problem (although virtual violence does have its critics). Put roughly, if it is fine to shoot another player in a game (virtual killing), it should be equally fine to grope another player in a game. Neither the killing nor the groping is real and hence neither should be taken seriously.

This objection does have some merit, but can be countered by considering an analogy to sports. When people are competing in boxing or martial arts, they hit each other and this is accepted because it is the purpose of the sport. However, it is not acceptable for a competitor to start pawing away at their opponent’s groin in a sexual manner (and not just because of the no hitting below the belt rules of boxing). Punching is part of the sport, groping is not. The same holds for video games. If a person is playing a combat video game that pits players against each other, the expectation is that they will be subject to virtual violence. They know this and consent to it by playing, just as boxers know they will be punched and consent to it. But, unless the players know and consent to playing a groping game, using the game mechanics to virtually grope other players would not be acceptable—they did not agree to that game.

Another counter is that while the virtual groping is not as bad as real groping, it can still harm the target of the groping. To use an analogy, being verbally abused over game chat is not as bad as having a person physically present engaging in such abuse, but it is still unpleasant for the target. Virtual groping is a form of non-verbal harassment, intended to get a negative reaction from the target and to make the gaming experience unpleasant. There is also the fact that being the victim of such harassment can rob a player of the enjoyment of the game—which is the point of playing. While it is not as bad as groping a player in a real-world game (which would be sexual assault), it has an analogous effect on the player’s experience.

It could be replied that a player should just be tough and put up with the abuse. This reply lacks merit and is analogous to saying that people should just put up with being assaulted, robbed, or spat on. It is the reply of an abuser who wants to continue the abuse while shifting blame onto the target.

While players are in the wrong when they engage in virtual groping, there is the question of what gaming companies should do to protect their customers from such harassment. They do have a practical reason to address this concern—players will tend to avoid games where they are subject to harassment and abuse, thus costing the gaming company money. They also have a moral obligation, analogous to the obligation of those in the real world who host an event. For example, a casino that allowed players to grope others with impunity would be failing in its obligation to its customers; the same would seem to hold for a gaming company operating a VR game.

Companies do already operate various forms of reporting, although their enforcement tends to vary. Blizzard, for example, has policies about how players should treat each other in World of Warcraft. This same approach can and certainly will be applied to VR games that allow a broader range of harassment, such as virtual groping.

Because of factors such as controller limitations, most video games do not have the mechanics that would allow much in the way of groping—although some players do work very hard trying to make that happen. While non-VR video games could certainly support things like glove style controllers that would allow groping, VR games are far more likely to support controllers that would allow players to engage in virtual groping behavior (something that has, as noted above, already occurred).

Eliminating such controller options would help prevent VR groping, but at the cost of taking away a rather interesting and useful aspect of VR controller systems. As such, this is not a very viable option. A better approach would be to build limits into the software on how players can interact with the virtual bodies of other players. While some might suggest a punitive system for when one player’s virtual hands (or groin) contact another player’s virtual naughty bits, the obvious problem is that wily gamers would exploit this. For example, if a virtual hand contacting a virtual groin caused the character damage or filed an automatic report, then some players would try their best to get their virtual groins in contact with other players’ virtual hands. As such, this would be a bad idea.

A better, though less than ideal, system would be to have a personal space zone around each player’s VR body to keep other players at a distance. The challenge would be working this effectively into the game mechanics, especially for such things as hand-to-hand combat. It might also be possible to have the software recognize and prevent harassing behavior. So, for example, a player could virtually punch another player, but not make grabbing motions at the target’s groin.
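To make the personal-space idea concrete, here is a minimal sketch of how such a zone might work. This is purely illustrative Python with invented names and an arbitrary radius, not code from any actual VR engine: when another player’s virtual hand would enter the protected zone, it is pushed back to the zone’s boundary.

```python
import math

PERSONAL_SPACE_RADIUS = 0.5  # meters; an arbitrary, tunable assumption

def distance(a, b):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def clamp_hand_position(hand_pos, avatar_pos, radius=PERSONAL_SPACE_RADIUS):
    """If a hand would enter another avatar's personal space,
    push it back to the boundary of that space instead."""
    d = distance(hand_pos, avatar_pos)
    if d >= radius or d == 0:
        return hand_pos  # outside the zone (or degenerate case): no change
    # Scale the offset vector so the hand sits on the zone's boundary.
    scale = radius / d
    return tuple(av + (h - av) * scale
                 for h, av in zip(hand_pos, avatar_pos))
```

The exploit-resistance point above still applies: clamping positions silently, rather than punishing contact, gives wily players nothing to farm.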

It should be noted that these concerns are about contexts in which players do not want to be groped; I have no moral objection to VR applications that allow consensual groping—which, I infer, will be very popular.

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter

Gaming & Groping I: Motivations

Posted in Ethics, Philosophy, Technology by Michael LaBossiere on October 31, 2016

On the positive side, online gaming allows interaction with gamers all over the world. On the negative side, some gamers are horrible. While I have been a gamer since the days of Pong, one of my early introductions to “the horrible” was on Xbox Live. In a moment of deranged optimism, I hoped that chat would allow me to plan strategy with my team members and perhaps make new gamer friends. While this did sometimes happen, the dominant experience was an unrelenting spew of insults and threats between gamers. I solved this problem by clipping the wire on a damaged Xbox headset and sticking the audio plug into my controller—the spew continued, but had nowhere to go.

There is an iron law of technology: any technology that can be misused will be misused. There are also specific laws that fall under this general law. One is the iron law of gaming harassment: any gaming medium that allows harassment will be used to harass. While there have been many failed attempts at virtual reality gaming, it seems that VR might become the new gaming medium. In any case, harassment in online VR games is already a thing. Just as VR is supposed to add a new level to gaming, it also adds a new level to harassment—such as virtual groping. This is an escalation over the harassment options available in most games. Non-VR games are typically limited to verbal harassment and some action harassment, such as the classic tea bagging. For those not familiar with this practice, it is when one player causes their character to repeatedly crouch on top of a dead character. The idea is that the player is repeatedly slapping their virtual testicles against the virtual corpse of a foe. This presumably demonstrates contempt for the opponent and dominance on the part of the bagger. As might be imagined, this act speaks clearly about a player’s mental and moral status.

Being a gamer and a philosopher, I do wonder a bit about the motivations of those that engage in harassment and how their motivation impacts the ethics of their behavior. While I will not offer a detailed definition of harassment, the basic idea is that it requires sustained abuse. This is to distinguish it from a quick expression of anger.

In some cases, harassment seems to be motivated primarily by the enjoyment the harasser gets from getting a response from their target. The harasser is not operating from a specific value system that leads them to attack certain people; they are equal opportunity in their attacks. Back when I listened to what other gamers said, it was easy to spot this sort of person—they would go after everyone and tailor their spew based on what they seemed to believe about the target’s identity. As an example, if the harasser thought their target was African-American, they would spew racist comments. As another example, if the target was the then exceedingly rare female gamer, they would spew sexist remarks. As a third example, if the target was believed to be a white guy, the attack would usually involve comments about the guy’s mother or assertions that the target is homosexual.

While the above focuses on what a person says, the discussion also applies to the virtual actions in the game. As noted above, some gamers engage in tea-bagging because that is the worst gesture they can make in the game. In games that allow more elaborate interaction, the behavior will tend to be analogous to groping in the real world. This is because such behavior is the most offensive behavior possible in the game and thus will create the strongest reaction.

While a person who enjoys inflicting this sort of abuse does have some moral problems, they are probably selecting their approach based on what they think will most hurt the target rather than based on a commitment to sexism, racism or other such value systems. To use an obvious analogy, think of a politician who is not particularly racist but is willing to use racist language in order to sway a target audience.

There are also those who engage in such harassment as a matter of ideology and values. While their behavior is often indistinguishable from those who engage in attacks of opportunity, their motivation is based on a hatred of specific types of people. While they might enjoy the reaction of their target, that is not their main objective. Rather, the objectives are to express their views and attack the target of their hate because of that hate. Put another way, they are sincere racists or sexists in that it matters to them who they attack. To use the analogy to a politician, they are like a demagogue who truly believes in their own hate speech.

In terms of virtual behavior, such as groping, these people are not just using groping as a tool to get a reaction. It is an attack to express their views about their target based on their hatred and contempt. The groping might also not merely be a means to an end, but a goal in itself—the groping has its own value to them.

While both sorts of harassers are morally wrong, it is an interesting question as to which is worse. It could be argued that the commitment to evil of the sincere harasser (the true racist or sexist) makes them worse than the opportunist. After all, the opportunist is not committed to evil views; they just use their tools for their amusement. In contrast, the sincere harasser not only uses the tools, but believes in their actions and truly hates their target. That is, they are evil for real.

While this is very appealing, it is worth considering that the sincere harasser has the virtue of honesty; their expression of hatred is not a deceit.  To go back to the politician analogy, they are like the politician who truly believes in their professed ideology—their evil does have the tiny sparkle of the virtue of honesty.

In contrast, the opportunist is dishonest in their attacks and thus compounds their other vices with that of dishonesty. To use the politician analogy, they are like the Machiavellian manipulator who has no qualms about using hate to achieve their ends.

While the moral distinctions between the types of harassers are important, they generally do not matter to their targets. After all, what matters to (for example) a female gamer who is being virtually groped while trying to enjoy a VR game is not the true motivation of the groper, but the groping. Thus, from the perspective of the target, the harasser of opportunity and the sincere harasser are on equally bad moral footing—they are both morally wrong. In the next essay, the discussion will turn to the obligations of gaming companies with regard to protecting gamers from harassment.

 



The Simulation I: The Problem of the External World

Posted in Epistemology, Metaphysics, Philosophy, Technology by Michael LaBossiere on October 24, 2016

Elon Musk and others have advanced the idea that we exist within a simulation. The latest twist on this is that he and others are allegedly funding efforts to escape this simulation. This is, of course, the most recent chapter in the ancient philosophical problem of the external world. Put briefly, this problem is the challenge of proving that what seems to be a real external world is, in fact, a real external world. As such, it is a problem in epistemology (the study of knowledge).

The problem is often presented in the context of metaphysical dualism. This is the view that reality is composed of two fundamental categories of stuff: mental stuff and physical stuff. The mental stuff is supposed to be what the soul or mind is composed of, while things like tables and kiwis (the fruit and the bird) are supposed to be composed of physical stuff. Using the example of a fire that I seem to be experiencing, the problem would be trying to prove that the idea of the fire in my mind is being caused by a physical fire in the external world.

René Descartes has probably the best known version of this problem—he proposes that he is being deceived by an evil demon that creates, in his mind, an entire fictional world. His solution to this problem was to doubt until he reached something he could not doubt: his own existence. From this, he inferred the existence of God and then, over the rest of his Meditations on First Philosophy, he established that God was not a deceiver. Since a non-deceiving God would not allow systematic error about the world, it follows that, going back to the fire example, if I seem to see a fire, then there probably is an external, physical fire causing that idea. Descartes did not, obviously, decisively solve the problem: otherwise Musk and his fellows could be easily refuted by using Descartes’ argument.

One often overlooked contribution Descartes made to the problem of the external world is consideration of why the deception is taking place. Descartes attributes the deception of the demon to malice—it is an evil demon (or evil genius). In contrast, God’s goodness entails he is not a deceiver. In the case of Musk’s simulation, there is the obvious question of the motivation behind it—is it malicious (like Descartes’ demon) or more benign? On the face of it, such deceit does seem morally problematic—but perhaps the simulators have excellent moral reasons for this deceit. Descartes’s evil demon does provide the best classic version of Musk’s simulation idea since it involves an imposed deception. More on this later.

John Locke took a rather more pragmatic approach to the problem. He rejected the possibility of certainty and instead argued that what matters is understanding the world well enough to avoid pain and achieve pleasure. Going back to the fire, Locke would say that he could not be sure that the fire was really an external, physical entity. But, he has found that being in what appears to be fire has consistently resulted in pain and hence he understands enough to want to avoid standing in fire (whether it is real or not). This invites an obvious comparison to video games: when playing a game like World of Warcraft or Destiny, the fire is clearly not real. But, because having your character fake die in fake fire results in real annoyance, it does not really matter that the fire is not real. The game is, in terms of enjoyment, best played as if it is.

Locke does provide the basis of a response to worries about being in a simulation, namely that it would not matter if we were or were not—from the standpoint of our happiness and misery, it would make no difference if the causes of pain and pleasure were real or simulated. Locke, however, does not consider that we might be within a simulation run by others. If it were determined that we are victims of a deceit, then this would presumably matter—especially if the deceit were malicious.

George Berkeley, unlike Locke and Descartes, explicitly and passionately rejected the existence of matter—he considered it a gateway drug to atheism. Instead, he embraced what is called “idealism”, “immaterialism” and “phenomenalism.” His view was that reality is composed of metaphysical immaterial minds and these minds have ideas. As such, for him there is no external physical reality because there is nothing physical. He does, however, need to distinguish between real things and hallucinations or dreams. His approach was to claim that real things are more vivid than hallucinations and dreams. Going back to the example of fire, a real fire for him would not be a physical fire composed of matter and energy. Rather, I would have a vivid idea of fire. For Berkeley, the classic problem of the external world is sidestepped by his rejection of the external world. However, it is interesting to speculate how a simulation would be handled by Berkeley’s view.

Since Berkeley does not accept the existence of matter, the real world outside the simulation would not be a material world—it would be a world composed of minds. A possible basis for the difference is that the simulated world is less vivid than the real world (to use his distinction between hallucinations and reality). On this view, we would be minds trapped in a forced dream or hallucination. We would be denied the more vivid experiences of minds “outside” the simulation, but we would not be denied an external world in the metaphysical sense. To use an analogy, we would be watching VHS, while the minds “outside” the simulation would be watching Blu-Ray.

While Musk does not seem to have laid out a complete philosophical theory on the matter, his discussion indicates that he thinks we could be in a virtual reality style simulation. On this view, the external world would presumably be a physical world of some sort. This distinction is not a metaphysical one—presumably the simulation is being run on physical hardware and we are some sort of virtual entities in the program. Our error, then, would be to think that our experiences correspond to material entities when they, in fact, merely correspond to virtual entities. Or perhaps we are in a Matrix style situation—we do have material bodies, but receive virtual sensory input that does not correspond to the physical world.

Musk’s discussion seems to indicate that he thinks there is a purpose behind the simulation—that it has been constructed by others. He does not envision a Cartesian demon, but presumably envisions beings like what we think we are.  If they are supposed to be like us (or we like them, since we are supposed to be their creation), then speculation about their motives would be based on why we might do such a thing.

There are, of course, many reasons why we would create such a simulation. One reason would be scientific research: we already create simulations to help us understand and predict what we think is the real world. Perhaps we are in a simulation used for this purpose. Another reason would be for entertainment. We created games and simulated worlds to play in and watch; perhaps we are non-player characters in a game world or unwitting actors in a long running virtual reality show (or, more likely, shows).

One idea, which was explored in Frederik Pohl’s short story “The Tunnel under the World”, is that our virtual world exists to test advertising and marketing techniques for the real world. In Pohl’s story, the inhabitants of Tylerton are killed in the explosion of the town’s chemical plant and they are duplicated as tiny robots inhabiting a miniature reconstruction of the town. Each day for the inhabitants is June 15th and they wake up with their memories erased, ready to be subject to the advertising techniques to be tested that day.  The results of the methods are analyzed, the inhabitants are wiped, and it all starts up again the next day.

While this tale is science fiction, Google and Facebook are working very hard to collect as much data as they can about us, with the aim of monetizing all this information. While the technology does not yet exist to duplicate us within a computer simulation, that would seem to be a logical goal of this data collection—just imagine the monetary value of being able to simulate and predict people’s behavior at the individual level. To be effective, a simulation owned by one company would need to model the influences of its competitors—so we could be in a Google World or a Facebook World right now, allowing these companies to monetize us and exploit the real versions of us in the external world.

Given that a simulated world is likely to exist to exploit the inhabitants, it certainly makes sense to not only want to know if we are in such a world, but also to try to undertake an escape. This will be the subject of the next essay.

 


Automated Trucking

Posted in Business, Ethics, Philosophy, Science, Technology by Michael LaBossiere on September 23, 2016

Having grown up in the golden age of the CB radio, I have many fond memories of movies about truck driving heroes played by the likes of Kurt Russell and Clint Eastwood. While such movies seem to have been a passing phase, real truck drivers are heroes of the American economy. In addition to moving stuff across this great nation, they also earn solid wages and thus also contribute as taxpayers and consumers.

While most of the media attention is on self-driving cars, there are also plans underway to develop self-driving trucks. The steps towards automation will initially be a boon to truck drivers as these technological advances manifest as safety features. This progress will most likely lead to a truck with a human riding in the cab as a backup (more for the psychological need of the public than any actual safety increase) and eventually to a fully automated truck.

Looked at in terms of the consequences of full automation, there will be many positive impacts. While the automated trucks will probably be more expensive than manned vehicles initially, not needing to pay drivers will result in considerable savings for the companies. Some of this might even be passed on to consumers, resulting in a tiny decrease in some prices. There is also the fact that automated trucks, unlike human drivers, would not get tired, bored or distracted. While there will still be accidents involving these trucks, it would be reasonable to expect a very significant decrease. Such trucks would also be able to operate around the clock, stopping only to load/unload cargo, to refuel and for maintenance. This could increase the speed of deliveries. One can even imagine an automated truck with its own drones that fly away from the truck as it cruises the highway, making deliveries for companies like Amazon. While these will be good things, there will also be negative consequences.

The most obvious negative consequence of full automation is the elimination of trucker jobs. Currently, there are about 3.5 million drivers in the United States. There are also about 8.7 million other people employed in the trucking industry who do not drive. One must also remember all the people indirectly associated with trucking, ranging from people cooking meals for truckers to folks manufacturing or selling products for truckers. Finally, there are also the other economic impacts from the loss of these jobs, ranging from the loss of tax revenues to lost business. After all, truckers do not just buy truck related goods and services.

While the loss of jobs will be a negative impact, it should be noted that the transition from manned trucks to robot rigs will not occur overnight. There will be a slow transition as the technology is adopted and it is certain that there will be several years in which human truckers and robotruckers share the roads. This can allow for a planned transition that will mitigate the economic shock. That said, there will presumably come a day when drivers are given their pink slips in large numbers and lose their jobs to the rolling robots. Since economic transitions resulting from technological changes are nothing new, it could be hoped that this transition would be managed in a way that mitigated the harm to those impacted.

It is also worth considering that the switch to automated trucking will, as technological changes almost always do, create new jobs and modify old ones. The trucks will still need to be manufactured, managed and maintained. As such, new economic opportunities will be created. That said, it is easy to imagine these jobs becoming automated as well: fleets of robotic trucks cruising America, loaded, unloaded, managed and maintained by robots. To close, I will engage in a bit of sci-fi style speculation.

Oversimplifying things, the automation of jobs could lead to a utopian future in which humans are finally freed from the jobs that are fraught with danger and drudgery. The massive automated productivity could mean plenty for all; thus bringing about the bright future of optimistic fiction. That said, this path could also lead into a dystopia: a world in which everything is done for humans and they settle into a vacuous idleness they attempt to fill with empty calories and frivolous amusements.

There are, of course, many dystopian paths leading away from automation. Laying aside the usual machine takeover in which Google kills us all, it is easy to imagine a new “robo-plantation” style economy in which a few elite owners control their robot slaves, while the masses have little or no employment. A rather more radical thought is to imagine a world in which humans are almost completely replaced—the automated economy hums along, generating numbers that are duly noted by the money machines and the few remaining money masters. The ultimate end might be a single computer that contains a virtual economy; clicking away to itself in electronic joy over its amassing of digital dollars while around it the ruins of human civilization decay and the world awaits the evolution of the next intelligent species to start the game anew.

 


Engineering Astronauts

Posted in Ethics, Technology by Michael LaBossiere on September 2, 2016

If humanity remains a single planet species, our extinction is all but assured—there are so many ways the world could end. The mundane self-inflicted apocalypses include such things as war and environmental devastation. There are also more exotic dooms suitable for speculative science fiction, such as a robot apocalypse or a bioengineered plague. And, of course, there is the classic big rock from space scenario. While we will certainly bring our problems with us into space, getting off world would dramatically increase our chances of survival as a species.

While species do endeavor to survive, there is the moral question of whether or not we should do so. While I can easily imagine humanity reaching a state where it would be best if we did not continue, I think that our existence generates more positive value than negative value—thus providing the foundation for a utilitarian argument for our continued existence and endeavors to survive. This approach can also be countered on utilitarian grounds by contending that the evil we do outweighs the good, thus showing that the universe would be morally better without us. But, for the sake of the discussion that follows, I will assume that we should (or at least will) endeavor to survive.

Since getting off world is an excellent way of improving our survival odds, it is somewhat ironic that we are poorly suited for survival in space and on other worlds such as Mars. Obviously enough, naked exposure to the void would prove fatal very quickly; but even with technological protection our species copes poorly with the challenges of space travel—even those presented by the very short trip to our own moon. We would do somewhat better on other planets or on moons; but these also present significant survival challenges.

While there are many challenges, there are some of special concern. These include the danger presented by radiation, the health impact of living in gravity significantly different from earth, the resource (food, water and air) challenge, and (for space travel) the time problem. Any and all of these can prove to be fatal and must be addressed if humanity is to expand beyond earth.

Our current approach is to use our technology to recreate as closely as possible our home environment. For example, our manned space vessels are designed to provide some degree of radiation shielding, they are filled with air and are stocked with food and water. One advantage of this approach is that it does not require any modification to humans; we simply recreate our home in space or on another planet. There are, of course, many problems with this approach. One is that our technology is still very limited and cannot properly address some challenges. For example, while artificial gravity is standard in science fiction, we currently rely on rather ineffective means of addressing the gravity problem. As another example, while we know how to block radiation, there is the challenge of being able to do this effectively on the journey from earth to Mars. A second problem is that recreating our home environment can be difficult and costly. But, it can be worth the cost to allow unmodified humans to survive in space or on other worlds. This approach points towards a Star Trek style future: normal humans operating within a bubble of technology. There are, however, alternatives.

Another approach is also based in technology, but aims at either modifying humans or replacing them entirely. There are two main paths here. One is that of machine technology in which humans are augmented in order to endure conditions that differ radically from that of earth. The scanners of Cordwainer Smith’s “Scanners Live in Vain” are one example of this—they are modified and have implants to enable them to survive the challenges of operating interstellar vessels. Another example is Man Plus, Frederik Pohl’s novel about a human transformed into a cyborg in order to survive on Mars. The ultimate end of this path is the complete replacement of humans by intelligent machines, machines designed to match their environments and free of human vulnerabilities and short life spans.

The other is the path of biological technology. On this path, humans are modified biologically in order to better cope with non-earth environments. These modifications would presumably start fairly modestly, such as genetic modifications to make humans more resistant to radiation damage and better adapted to lower gravity. As science progressed, the modifications could become far more radical, with a complete re-engineering of humans to make them ideally match their new environments. This path, unnaturally enough, would lead to the complete replacement of humans with new species.

These approaches do have advantages. While there would be an initial cost in modifying humans to better fit their new environments, the better the adaptations, the less need there would be to recreate earth-like conditions. This could presumably result in considerable cost-savings and there is also the fact that the efficiency and comfort of the modified humans would be greater the better they matched their new environments. There are, however, the usual ethical concerns about such modifications.

Replacing homo sapiens with intelligent machines or customized organisms would also have a high initial startup cost, but these beings would presumably be far more effective than humans in the new environments. For example, an intelligent machine would be more resistant to radiation, could sustain itself with solar power, and could be effectively immortal as long as it is repaired. Such a being would be ideal to crew (or be) a deep space mission vessel. As another example, custom created organisms or fully converted humans could ideally match an environment, living and working in radical conditions as easily as standard humans work on earth. Clifford D. Simak’s “Desertion” discusses such an approach; albeit one that has unexpected results on Jupiter.

In addition to the usual moral concerns about such things, there is also the concern that such creations would not preserve the human race. On the one hand, it is obvious that such beings would not be homo sapiens. If the entire species was converted or gradually phased out in favor of the new beings, that would be the end of the species—the biological human race would be no more. The voice of humanity would fall silent. On the other hand, it could be argued that the transition could suffice to preserve the identity of the species—a likely way to argue this would be to re-purpose the arguments commonly used to argue for the persistence of personal identity across time. It could also be argued that while the biological species homo sapiens could cease to be, the identity of humanity is not set by biology but by things such as values and culture. As such, if our replacements retained the relevant connection to human culture and values (they sing human songs and remember the old, old places where once we walked), they would still be human—although not homo-sapiens.
