A Philosopher's Blog

Autonomous Vehicles: Solving an Unnecessary Problem?

Posted in Ethics, Philosophy, Technology by Michael LaBossiere on February 15, 2017

While motor vehicle fatalities do not get the attention of terrorist attacks (unless a celebrity is involved), the roads of the United States are no stranger to blood.  From 2000 to 2015, the motor vehicle deaths per year ranged from a high of 43,005 in 2005 to a low of 32,675 in 2014. In 2015 there were 35,092 motor vehicle deaths and last year the number went back up to around 40,000. Given the high death toll, there is clearly a problem that needs to be solved.

One of the main reasons being advanced for the deployment of autonomous vehicles is that they will make the roads safer and thus reduce the carnage. While predictions of the imminent arrival of autonomous vehicles are overly optimistic, the idea that they would reduce motor vehicle deaths is certainly plausible. After all, autonomous vehicles will not be subject to road rage, exhaustion, intoxication, poor judgment, distraction and the other maladies that afflict human drivers and contribute to the high death tolls. Motor vehicle deaths would certainly not be eliminated even if all vehicles were autonomous, but the likely reduction in the death toll does present a very strong moral and practical reason to deploy such vehicles. That said, it is still worth considering whether the autonomous vehicle is aimed at solving an unnecessary problem. Considering this matter requires going back in time, to the rise of the automobile in the United States.

As the number of cars increased in the United States, so did the number of deaths. One contributing factor to the high number of deaths was that American cars were rather unsafe and this led Ralph Nader to write his classic work, Unsafe at Any Speed. Thanks to Nader and others, the American automobile became much safer and motor vehicle fatalities decreased. While making cars safer was certainly a good thing, it can be argued that this approach was fundamentally flawed. I will use an analogy to make my point.

Imagine, if you will, that people insist on swinging hammers around as they go about their day. As would be expected, the hammer swinging would often result in injuries and property damage. Confronted by these harms, solutions are proposed and implemented. People wear ever better helmets and body armor to protect them from wild swings. Hammers are also continuously redesigned so that they inflict less damage when hitting, for example, a face. Eventually Google and other companies start work on autonomous swinging hammers that will be much better than humans at avoiding hitting other people and things. While all these safety improvements would be better than the original situation of unprotected people swinging very dangerous hammers around, this approach seems to be fundamentally flawed. After all, if people stopped swinging hammers around, then the problem would be solved.

An easy and obvious reply to my analogy is that using motor vehicles, unlike random hammer swinging, is rather important. For one thing, a significant percentage of the economy is built around the motor vehicle. This includes the obvious things like vehicle sales, vehicle maintenance, gasoline sales, road maintenance and so on. It also includes less obvious aspects of the economy that involve the motor vehicle, such as how they contribute to the success of stores like Wal-Mart. The economic value of the motor vehicle, it can be argued, provides a justification for accepting the thousands of deaths per year. While it is certainly desirable to reduce these deaths, getting rid of motor vehicles is not a viable economic option—thus autonomous vehicles are a good potential partial solution to the death problem. Or are they?

One obvious problem with the autonomous vehicle solution is that it tries to solve the death problem within a system created around human drivers and their wants. This system of lights, signs, turn lanes, crosswalks and such is extremely complicated—thus creating difficult engineering and programming problems. It would seem to make more sense to use the resources being poured into autonomous vehicles to develop a better and safer transportation system that does not center around a bad idea: the individual motor vehicle operating within a complicated road system. On this view, autonomous vehicles are solving an unnecessary problem: they are merely better hammers.

This line of argumentation can be countered in a couple of ways. One way is to present the economic argument again: autonomous vehicles preserve the individual motor vehicle that is economically critical while being likely to reduce the death fee paid for this economy. Another way is to argue that the cost of creating a new transportation system would be far more than the cost of developing autonomous vehicles that can operate within the existing system. A third way is to make the plausible case that autonomous vehicles are a step towards developing a new transportation system. People tend to need a slow adjustment period for major changes, and autonomous vehicles would allow a gradual transition: from distracted human drivers, to autonomous vehicles operating among the distracted humans, to a transportation infrastructure rebuilt entirely around autonomous vehicles (perhaps with a completely distinct system for walkers, bikers and runners). Going back to the hammer analogy, the self-swinging hammer would reduce hammer injuries and could allow a transition to be made away from hammer swinging altogether.

Swarms

Posted in Ethics, Philosophy, Technology by Michael LaBossiere on December 16, 2016

Anyone who has played RTS games such as Blizzard’s Starcraft knows the basics of swarm warfare: you build a vast swarm of cheap units and hurl them against the enemy’s smaller force of more expensive units. The plan is that although the swarm will be decimated, the enemy will be exterminated. The same tactic was the basis of the classic tabletop game Ogre—it pitted a lone intelligent super tank against a large force of human infantry and armor. And, of course, the real world features numerous examples of swarm warfare—some successful for those using the swarm tactic (ants taking out a larger foe), some disastrous (massed infantry attacks on machine guns).

The latest approach to swarm tactics is to build a swarm of drones and deploy them against the enemy. While such drones will tend to be airborne units, they could also be ground or sea machines. In terms of their attacks, there are many options. The drones could be large enough to be equipped with weapons, such as small caliber guns, that would allow them to engage and return to reload for future battles. Some might be equipped with melee weapons, poisons, or biological weapons. The drones could also be suicide machines—small missiles intended to damage the enemy by destroying themselves.

While the development of military drone swarms will no doubt fall within the usual high cost of developing new weapon technology, the drones themselves can be relatively cheap. After all, they will tend to be much smaller and simpler than existing weapons such as aircraft, ships and ground vehicles. The main cost will most likely be in developing the software to make the drones operate effectively in a swarm; but after that it will be just a matter of mass producing the hardware.

If effective software and cost-effective hardware can be developed, one of the main advantages of the battle swarm will be its low cost. While such low-cost warfare might be problematic for defense contractors who have grown accustomed to massive contracts for big ticket items, it would certainly be appealing to those who are concerned about costs and reducing government spending. After all, if low-cost drones could replace expensive units, defense expenses could be significantly reduced. The savings could be used for other programs or allow for tax cuts. Or perhaps they will just build billions of dollars of drones.

Low-cost units, if effective, can also confer a significant attrition advantage. If, for example, thousands of dollars of drones can take down millions of dollars of aircraft, then the side with the drones stands a decent chance of winning. If hundreds of dollars of drones can take down millions of dollars of aircraft, then the situation is even better for the side with the drones.
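
To make the attrition arithmetic concrete, here is a minimal sketch in Python. The prices and loss rates are purely illustrative assumptions on my part, not real procurement figures.

# Hypothetical cost-exchange arithmetic for swarm attrition.
# All prices and loss rates here are illustrative assumptions.

DRONE_COST = 2_000          # dollars per expendable drone (assumed)
AIRCRAFT_COST = 80_000_000  # dollars per combat aircraft (assumed)

def cost_advantage(drones_lost_per_kill: int) -> float:
    """Dollars of aircraft destroyed per dollar of drones expended."""
    return AIRCRAFT_COST / (drones_lost_per_kill * DRONE_COST)

# How the exchange looks at various loss rates.
for losses in (10, 100, 1_000, 10_000):
    print(f"{losses:>6} drones lost per aircraft -> "
          f"{cost_advantage(losses):,.0f}:1 cost advantage")

On these assumed numbers, even trading a thousand drones for a single aircraft leaves the swarm side ahead by 40 to 1 in dollar terms; the advantage only vanishes at truly extreme loss rates.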

The low cost does raise some concerns, though. Once the drone controlling software makes its way out into the world (via the inevitable hack, theft, or sale), then everyone will be using swarms. This will recreate the IED and suicide bomber situation, only on a vastly greater scale. Instead of IEDs in the road, they will be flying around cities, looking for targets. Instead of a few suicide bombers with vests, there will be swarms of drones loaded with explosives. Since Uber comparisons are now mandatory, the swarm will be the Uber of death.

This does raise moral concerns about the development of the drone software and technology; but the easy and obvious reply is that there is nothing new about this situation: every weapon ever developed eventually makes the rounds. As such, the usual ethics of weapon development applies here, with due emphasis on the possibility of providing another cheap and effective way to destroy and kill.

One short term advantage of the first swarms is that they will be facing weapons designed primarily to engage small numbers of high value targets. For example, air defense systems now consist mainly of expensive missiles designed to destroy very expensive aircraft. Firing a standard anti-aircraft missile into a swarm will destroy some of the drones (assuming the missile detonates), but enough of the swarm will probably survive the attack for it to remain effective. It is also likely that the weapons used to defend against the drones will cost far more than the drones, which ties back into the cost advantage.

This advantage of the drones would be quickly lost if effective anti-swarm weapons are developed. Not surprisingly, gamers have already worked out effective responses to swarms. In D&D/Pathfinder players generally loathe swarms for the same reason that ill-prepared militaries will loathe drone swarms: while the individual swarm members are easy to kill, it is all but impossible to kill enough of them with standard weapons. In the game, players respond to swarms with area of effect attacks, such as fireballs (or running away). These sorts of attacks can consume the entire swarm and either eliminate it or reduce its numbers so it is no longer a threat. While the real world has an unfortunate lack of wizards, the same basic idea will work against drone swarms: cheap weapons that do moderate damage over a large area. One likely weapon is a battery of large, automatic shotguns that would fill the sky with pellets or flechettes. Missiles could also be designed that act like claymore mines in the sky, spraying ball bearings in almost all directions. And, obviously enough, swarms will be countered by swarms.

The drones would also be subject to electronic warfare—if they are being remotely controlled, this connection could be disrupted. Autonomous drones would be far less vulnerable, but they would still need to coordinate with each other to remain a swarm and this coordination could be targeted.

The practical challenge would be to make the defenses cheap enough to make them cost effective. Then again, countries that are happy to burn money for expensive weapon systems, such as the United States, would not need to worry about the costs. In fact, defense contractors will be lobbying hard for expensive swarm and anti-swarm systems.

The swarms also inherit the existing moral concerns about non-swarm drones, be they controlled directly by humans or deployed as autonomous killing machines. The ethical problems of swarms controlled by a human operator would be the same as the ethical problems of a single drone controlled by a human; the difference in numbers would not seem to make a moral difference. For example, if drone assassination with a single drone is wrong (or right), then drone assassination with a swarm would also be wrong (or right).

Likewise, an autonomous swarm is not morally different from a single autonomous unit in terms of the ethics of the situation. For example, if deploying a single autonomous killbot is wrong (or right), then deploying an autonomous killbot swarm is wrong (or right). That said, perhaps there is a greater chance that an autonomous killbot swarm will develop a rogue hive mind and turn against us. Or perhaps not. In any case, Will Rogers will be proven right once again: “You can’t say that civilization don’t advance, however, for in every war they kill you in a new way.”

 

Gaming & Groping II: Obligations

Posted in Ethics, Philosophy, Technology, Video Games by Michael LaBossiere on November 2, 2016

In my previous essay, I discussed some possible motivations for groping in VR games, which is now a thing. The focus of what follows is on the matter of protecting gamers from such harassment on the new frontiers of gaming.

Since virtual groping is a paradigm of a first world problem, it might be objected that addressing it is a waste of time. The objection can be made that resources that might be expended on combating virtual groping should be spent on addressing real groping. After all, a real grope is far worse than a virtual grope—and virtual gropes can be avoided by simply remaining outside of the virtual worlds.

This sort of objection does have some merit. After all, it is sensible to address problems in order of their seriousness. To use an analogy, if a car is skidding out of control at the same time an awful song comes on the radio, then the driver should focus on getting the car back under control and not waste time on the radio.  Unless, of course, it is “The Most Unwanted Song.”

The reasonable reply to this objection is that this is not a situation in which it must be one or the other, but not both. While time spent addressing virtual groping is time not spent addressing real groping, addressing virtual groping does not preclude addressing real groping. Also, pushing this sort of objection can easily lead into absurdity: for anything a person is doing, there is almost certainly something else they could be doing that would have better moral consequences. For example, a person who spends time and money watching a movie could use that time and money to address a real problem, such as crime or drug addiction. But, as has so often been argued, this would impose unreasonable expectations on people and would ultimately create more harm than good. As such, while I accept that real groping is worse than virtual groping, I am not failing morally by taking time to address the virtual rather than the real in this essay.

It could also be objected that there is no legitimate reason to be worried about virtual groping on the obvious grounds that it is virtual rather than real. After all, when people play video games, they routinely engage in virtual violence against each other—yet this is not seen as a special problem (although virtual violence does have its critics). Put roughly, if it is fine to shoot another player in a game (virtual killing), it should be equally fine to grope another player in a game. Neither the killing nor the groping is real and hence neither should be taken seriously.

This objection does have some merit, but can be countered by considering an analogy to sports. When people are competing in boxing or martial arts, they hit each other and this is accepted because it is the purpose of the sport. However, it is not acceptable for a competitor to start pawing away at their opponent’s groin in a sexual manner (and not just because of boxing’s rules against hitting below the belt). Punching is part of the sport; groping is not. The same holds for video games. If a person is playing a combat video game that pits players against each other, the expectation is that they will be subject to virtual violence. They know this and consent to it by playing, just as boxers know they will be punched and consent to it. But, unless the players know and consent to playing a groping game, using the game mechanics to virtually grope other players would not be acceptable—they did not agree to that game.

Another counter is that while the virtual groping is not as bad as real groping, it can still harm the target of the groping. To use an analogy, being verbally abused over game chat is not as bad as having a person physically present engaging in such abuse, but it is still unpleasant for the target. Virtual groping is a form of non-verbal harassment, intended to get a negative reaction from the target and to make the gaming experience unpleasant. There is also the fact that being the victim of such harassment can rob a player of the enjoyment of the game—which is the point of playing. While it is not as bad as groping a player in a real-world game (which would be sexual assault), it has an analogous effect on the player’s experience.

It could be replied that a player should just be tough and put up with the abuse. This reply lacks merit and is analogous to saying that people should just put up with being assaulted, robbed or spat on. It is the reply of an abuser who wants to continue the abuse while shifting blame onto the target.

While players are in the wrong when they engage in virtual groping, there is the question of what gaming companies should do to protect their customers from such harassment. They do have a practical reason to address this concern—players will tend to avoid games where they are subject to harassment and abuse, thus costing the gaming company money. They also have a moral obligation, analogous to the obligation of those in the real world who host an event. For example, a casino that allowed players to grope others with impunity would be failing in its obligation to its customers; the same would seem to hold for a gaming company operating a VR game.

Companies already operate various reporting systems, although their enforcement tends to vary. Blizzard, for example, has policies about how players should treat each other in World of Warcraft. This same approach can and certainly will be applied to VR games that allow a broader range of harassment, such as virtual groping.

Because of factors such as controller limitations, most video games do not have the mechanics that would allow much in the way of groping—although some players do work very hard trying to make that happen. While non-VR video games could certainly support things like glove style controllers that would allow groping, VR games are far more likely to support controllers that would allow players to engage in virtual groping behavior (something that has, as noted above, already occurred).

Eliminating such controller options would help prevent VR groping, but at the cost of taking away a rather interesting and useful aspect of VR controller systems. As such, this is not a very viable option. A better approach would be to build limits into the software on how players can interact with the virtual bodies of other players. While some might suggest a punitive system for when one player’s virtual hands (or groin) contact another player’s virtual naughty bits, the obvious problem is that wily gamers would exploit this. For example, if a virtual hand contacting a virtual groin caused the character damage or filed an automatic report, then some players would try their best to get their virtual groins in contact with other players’ virtual hands. As such, this would be a bad idea.

A better, but less than ideal, system would be to have a personal space zone around each player’s VR body to keep other players at a distance. The challenge would be working this effectively into the game mechanics, especially for such things as hand-to-hand combat. It might also be possible to have the software recognize and prevent harassing behavior: a player could, for example, virtually punch another player, but not make grabbing motions at the target’s groin.
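
To make the personal-space idea concrete, here is a minimal sketch in Python of how such a filter might work. The radius, the gesture names and the Avatar type are all hypothetical illustrations, not drawn from any actual VR platform.

import math
from dataclasses import dataclass

# Hypothetical personal-space filter for a VR game; the radius and
# gesture names are illustrative assumptions, not a real API.

PERSONAL_SPACE_RADIUS = 0.75  # metres around each avatar (assumed)
COMBAT_GESTURES = {"punch", "block", "parry"}  # moves permitted up close

@dataclass
class Avatar:
    x: float
    y: float
    z: float

def interaction_allowed(actor: Avatar, target: Avatar, gesture: str) -> bool:
    """Permit combat gestures inside the zone; silently filter out
    everything else, such as grabbing motions aimed at another player."""
    separation = math.dist((actor.x, actor.y, actor.z),
                           (target.x, target.y, target.z))
    if separation > PERSONAL_SPACE_RADIUS:
        return True  # outside personal space: no restriction
    return gesture in COMBAT_GESTURES

# A punch at close range is allowed; a grab at the same range is not.
print(interaction_allowed(Avatar(0, 0, 0), Avatar(0.5, 0, 0), "punch"))  # True
print(interaction_allowed(Avatar(0, 0, 0), Avatar(0.5, 0, 0), "grab"))   # False

Note that the filter simply ignores a disallowed gesture rather than punishing it; as argued above, attaching damage or automatic reports to such contact would invite players to game the system.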

It should be noted that these concerns are about contexts in which players do not want to be groped; I have no moral objection to VR applications that allow consensual groping—which, I infer, will be very popular.

 

Gaming & Groping I: Motivations

Posted in Ethics, Philosophy, Technology by Michael LaBossiere on October 31, 2016

On the positive side, online gaming allows interaction with gamers all over the world. On the negative side, some gamers are horrible. While I have been a gamer since the days of Pong, one of my early introductions to “the horrible” was on Xbox Live. In a moment of deranged optimism, I hoped that chat would allow me to plan strategy with my team members and perhaps make new gamer friends. While this did sometimes happen, the dominant experience was an unrelenting spew of insults and threats between gamers. I solved this problem by clipping the wire on a damaged Xbox headset and sticking the audio plug into my controller—the spew continued, but had nowhere to go.

There is an iron law of technology that any technology that can be misused will be misused. There are also specific laws that fall under this general law. One is the iron law of gaming harassment: any gaming medium that allows harassment will be used to harass. While there have been many failed attempts at virtual reality gaming, it seems that it might become the new gaming medium. In any case, harassment in online VR games is already a thing. Just as VR is supposed to add a new level to gaming, it also adds a new level to harassment—such as virtual groping. This is an escalation over the harassment options available in most games. Non-VR games are typically limited to verbal harassment and some action harassment, such as the classic tea-bagging. For those not familiar with this practice, it is when one player causes their character to repeatedly crouch on top of a dead character. The idea is that the player is repeatedly slapping their virtual testicles against the virtual corpse of a foe. This presumably demonstrates contempt for the opponent and dominance on the part of the bagger. As might be imagined, this act speaks clearly about a player’s mental and moral status.

Being a gamer and a philosopher, I do wonder a bit about the motivations of those that engage in harassment and how their motivation impacts the ethics of their behavior. While I will not offer a detailed definition of harassment, the basic idea is that it requires sustained abuse. This is to distinguish it from a quick expression of anger.

In some cases, harassment seems to be motivated primarily by the enjoyment the harasser gets from getting a response from their target. The harasser is not operating from a specific value system that leads them to attack certain people; they are equal opportunity in their attacks. Back when I listened to what other gamers said, it was easy to spot this sort of person—they would go after everyone and tailor their spew based on what they seemed to believe about the target’s identity. As an example, if the harasser thought their target was African-American, they would spew racist comments. As another example, if the target was the then exceedingly rare female gamer, they would spew sexist remarks. As a third example, if the target was believed to be a white guy, the attack would usually involve comments about the guy’s mother or assertions that the target is homosexual.

While the above focuses on what a person says, the discussion also applies to the virtual actions in the game. As noted above, some gamers engage in tea-bagging because that is the worst gesture they can make in the game. In games that allow more elaborate interaction, the behavior will tend to be analogous to groping in the real world. This is because such behavior is the most offensive behavior possible in the game and thus will create the strongest reaction.

While a person who enjoys inflicting this sort of abuse does have some moral problems, they are probably selecting their approach based on what they think will most hurt the target rather than based on a commitment to sexism, racism or other such value systems. To use an obvious analogy, think of a politician who is not particularly racist but is willing to use racist language in order to sway a target audience.

There are also those who engage in such harassment as a matter of ideology and values. While their behavior is often indistinguishable from those who engage in attacks of opportunity, their motivation is based on a hatred of specific types of people. While they might enjoy the reaction of their target, that is not their main objective. Rather, the objectives are to express their views and attack the target of their hate because of that hate. Put another way, they are sincere racists or sexists in that it matters to them who they attack. To use the analogy to a politician, they are like a demagogue who truly believes in their own hate speech.

In terms of virtual behavior, such as groping, these people are not just using groping as a tool to get a reaction. It is an attack to express their views about their target based on their hatred and contempt. The groping might also not merely be a means to an end, but a goal in itself—the groping has its own value to them.

While both sorts of harassers are morally wrong, it is an interesting question as to which is worse. It could be argued that the commitment to evil of the sincere harasser (the true racist or sexist) makes them worse than the opportunist. After all, the opportunist is not committed to evil views; they just use their tools for their amusement. In contrast, the sincere harasser not only uses the tools, but believes in their actions and truly hates their target. That is, they are evil for real.

While this is very appealing, it is worth considering that the sincere harasser has the virtue of honesty; their expression of hatred is not a deceit.  To go back to the politician analogy, they are like the politician who truly believes in their professed ideology—their evil does have the tiny sparkle of the virtue of honesty.

In contrast, the opportunist is dishonest in their attacks and thus compounds their other vices with that of dishonesty. To use the politician analogy, they are like the Machiavellian manipulator who has no qualms about using hate to achieve their ends.

While the moral distinctions between the types of harassers are important, they generally do not matter to their targets. After all, what matters to (for example) a female gamer who is being virtually groped while trying to enjoy a VR game is not the true motivation of the groper, but the groping. Thus, from the perspective of the target, the harasser of opportunity and the sincere harasser are on equally bad moral footing—they are both morally wrong. In the next essay, the discussion will turn to the obligations of gaming companies in regard to protecting gamers from harassment.

 

The Simulation I: The Problem of the External World

Posted in Epistemology, Metaphysics, Philosophy, Technology by Michael LaBossiere on October 24, 2016

Elon Musk and others have advanced the idea that we exist within a simulation. The latest twist on this is that he and others are allegedly funding efforts to escape this simulation. This is, of course, the most recent chapter in the ancient philosophical problem of the external world. Put briefly, this problem is the challenge of proving that what seems to be a real external world is, in fact, a real external world. As such, it is a problem in epistemology (the study of knowledge).

The problem is often presented in the context of metaphysical dualism. This is the view that reality is composed of two fundamental categories of stuff: mental stuff and physical stuff. The mental stuff is supposed to be what the soul or mind is composed of, while things like tables and kiwis (the fruit and the bird) are supposed to be composed of physical stuff. Using the example of a fire that I seem to be experiencing, the problem would be trying to prove that the idea of the fire in my mind is being caused by a physical fire in the external world.

René Descartes has probably the best known version of this problem—he proposes that he is being deceived by an evil demon that creates, in his mind, an entire fictional world. His solution to this problem was to doubt until he reached something he could not doubt: his own existence. From this, he inferred the existence of God and then, over the rest of his Meditations on First Philosophy, he established that God was not a deceiver. Going back to the fire example, if I seem to see a fire, then there probably is an external, physical fire causing that idea. Descartes did not, obviously, decisively solve the problem: otherwise Musk and his fellows would be easily refuted by using Descartes’ argument.

One often overlooked contribution Descartes made to the problem of the external world is consideration of why the deception is taking place. Descartes attributes the deception of the demon to malice—it is an evil demon (or evil genius). In contrast, God’s goodness entails he is not a deceiver. In the case of Musk’s simulation, there is the obvious question of the motivation behind it—is it malicious (like Descartes’ demon) or more benign? On the face of it, such deceit does seem morally problematic—but perhaps the simulators have excellent moral reasons for this deceit. Descartes’s evil demon does provide the best classic version of Musk’s simulation idea since it involves an imposed deception. More on this later.

John Locke took a rather more pragmatic approach to the problem. He rejected the possibility of certainty and instead argued that what matters is understanding things well enough to avoid pain and achieve pleasure. Going back to the fire, Locke would say that he could not be sure that the fire was really an external, physical entity. But, he has found that being in what appears to be fire has consistently resulted in pain and hence he understands enough to want to avoid standing in fire (whether it is real or not). This invites an obvious comparison to video games: when playing a game like World of Warcraft or Destiny, the fire is clearly not real. But, because having your character fake die in fake fire results in real annoyance, it does not really matter that the fire is not real. The game is, in terms of enjoyment, best played as if it is.

Locke does provide the basis of a response to worries about being in a simulation, namely that it would not matter if we were or were not—from the standpoint of our happiness and misery, it would make no difference if the causes of pain and pleasure were real or simulated. Locke, however, does not consider that we might be within a simulation run by others. If it were determined that we are victims of a deceit, then this would presumably matter—especially if the deceit were malicious.

George Berkeley, unlike Locke and Descartes, explicitly and passionately rejected the existence of matter—he considered it a gateway drug to atheism. Instead, he embraced what is called “idealism”, “immaterialism” and “phenomenalism.” His view was that reality is composed of metaphysical immaterial minds and these minds have ideas. As such, for him there is no external physical reality because there is nothing physical. He does, however, need to distinguish between real things and hallucinations or dreams. His approach was to claim that real things are more vivid than hallucinations and dreams. Going back to the example of fire, a real fire for him would not be a physical fire composed of matter and energy. Rather, I would have a vivid idea of fire. For Berkeley, the classic problem of the external world is sidestepped by his rejection of the external world. However, it is interesting to speculate how a simulation would be handled by Berkeley’s view.

Since Berkeley does not accept the existence of matter, the real world outside the simulation would not be a material world—it would be a world composed of minds. A possible basis for the difference is that the simulated world is less vivid than the real world (to use his distinction between hallucinations and reality). On this view, we would be minds trapped in a forced dream or hallucination. We would be denied the more vivid experiences of minds “outside” the simulation, but we would not be denied an external world in the metaphysical sense. To use an analogy, we would be watching VHS, while the minds “outside” the simulation would be watching Blu-Ray.

While Musk does not seem to have laid out a complete philosophical theory on the matter, his discussion indicates that he thinks we could be in a virtual reality style simulation. On this view, the external world would presumably be a physical world of some sort. This distinction is not a metaphysical one—presumably the simulation is being run on physical hardware and we are some sort of virtual entities in the program. Our error, then, would be to think that our experiences correspond to material entities when they, in fact, merely correspond to virtual entities. Or perhaps we are in a Matrix style situation—we do have material bodies, but receive virtual sensory input that does not correspond to the physical world.

Musk’s discussion seems to indicate that he thinks there is a purpose behind the simulation—that it has been constructed by others. He does not envision a Cartesian demon, but presumably envisions beings like what we think we are.  If they are supposed to be like us (or we like them, since we are supposed to be their creation), then speculation about their motives would be based on why we might do such a thing.

There are, of course, many reasons why we would create such a simulation. One reason would be scientific research: we already create simulations to help us understand and predict what we think is the real world. Perhaps we are in a simulation used for this purpose. Another reason would be entertainment. We create games and simulated worlds to play in and watch; perhaps we are non-player characters in a game world or unwitting actors in a long running virtual reality show (or, more likely, shows).

One idea, which was explored in Frederik Pohl’s short story “The Tunnel under the World”, is that our virtual world exists to test advertising and marketing techniques for the real world. In Pohl’s story, the inhabitants of Tylerton are killed in the explosion of the town’s chemical plant and they are duplicated as tiny robots inhabiting a miniature reconstruction of the town. Each day for the inhabitants is June 15th and they wake up with their memories erased, ready to be subject to the advertising techniques to be tested that day.  The results of the methods are analyzed, the inhabitants are wiped, and it all starts up again the next day.

While this tale is science fiction, Google and Facebook are working very hard to collect as much data as they can about us with the aim of monetizing all this information. While the technology does not yet exist to duplicate us within a computer simulation, that would seem to be a logical goal of this data collection—just imagine the monetary value of being able to simulate and predict people’s behavior at the individual level. To be effective, a simulation owned by one company would need to model the influences of its competitors—so we could be in a Google World or a Facebook World now so that these companies can monetize us to exploit the real versions of us in the external world.

Given that a simulated world is likely to exist to exploit the inhabitants, it certainly makes sense to not only want to know if we are in such a world, but also to try to undertake an escape. This will be the subject of the next essay.

 

Automated Trucking

Posted in Business, Ethics, Philosophy, Science, Technology by Michael LaBossiere on September 23, 2016

Having grown up in the golden age of the CB radio, I have many fond memories of movies about truck driving heroes played by the likes of Kurt Russell and Clint Eastwood. While such movies seem to have been a passing phase, real truck drivers are heroes of the American economy. In addition to moving stuff across this great nation, they also earn solid wages and thus also contribute as taxpayers and consumers.

While most of the media attention is on self-driving cars, there are also plans underway to develop self-driving trucks. The steps towards automation will initially be a boon to truck drivers as these technological advances manifest as safety features. This progress will most likely lead to a truck with a human riding in the cab as a backup (more for the psychological need of the public than any actual safety increase) and eventually to a fully automated truck.

Looked at in terms of the consequences of full automation, there will be many positive impacts. While the automated trucks will probably be more expensive than manned vehicles initially, not needing to pay drivers will result in considerable savings for the companies. Some of this might even be passed on to consumers, resulting in a tiny decrease in some prices. There is also the fact that automated trucks, unlike human drivers, would not get tired, bored or distracted. While there will still be accidents involving these trucks, it would be reasonable to expect a very significant decrease. Such trucks would also be able to operate around the clock, stopping only to load/unload cargo, to refuel and for maintenance. This could increase the speed of deliveries. One can even imagine an automated truck with its own drones that fly away from the truck as it cruises the highway, making deliveries for companies like Amazon. While these will be good things, there will also be negative consequences.

The most obvious negative consequence of full automation is the elimination of trucker jobs. Currently, there are about 3.5 million drivers in the United States. There are also about 8.7 million other people employed in the trucking industry who do not drive. One must also remember all the people indirectly associated with trucking, ranging from people cooking meals for truckers to folks manufacturing or selling products for truckers. Finally, there are also the other economic impacts from the loss of these jobs, ranging from the loss of tax revenues to lost business. After all, truckers do not just buy truck related goods and services.

While the loss of jobs will be a negative impact, it should be noted that the transition from manned trucks to robot rigs will not occur overnight. There will be a slow transition as the technology is adopted and it is certain that there will be several years in which human truckers and robotruckers share the roads. This can allow for a planned transition that will mitigate the economic shock. That said, there will presumably come a day when drivers are given their pink slips in large numbers and lose their jobs to the rolling robots. Since economic transitions resulting from technological changes are nothing new, it could be hoped that this transition would be managed in a way that mitigated the harm to those impacted.

It is also worth considering that the switch to automated trucking will, as technological changes almost always do, create new jobs and modify old ones. The trucks will still need to be manufactured, managed and maintained. As such, new economic opportunities will be created. That said, it is easy to imagine these jobs becoming automated as well: fleets of robotic trucks cruising America, loaded, unloaded, managed and maintained by robots. To close, I will engage in a bit of sci-fi style speculation.

Oversimplifying things, the automation of jobs could lead to a utopian future in which humans are finally freed from the jobs that are fraught with danger and drudgery. The massive automated productivity could mean plenty for all, thus bringing about the bright future of optimistic fiction. That said, this path could also lead into a dystopia: a world in which everything is done for humans and they settle into a vacuous idleness they attempt to fill with empty calories and frivolous amusements.

There are, of course, many dystopian paths leading away from automation. Laying aside the usual machine takeover in which Google kills us all, it is easy to imagine a new “robo-plantation” style economy in which a few elite owners control their robot slaves, while the masses have little or no employment. A rather more radical thought is to imagine a world in which humans are almost completely replaced—the automated economy hums along, generating numbers that are duly noted by the money machines and the few remaining money masters. The ultimate end might be a single computer that contains a virtual economy, clicking away to itself in electronic joy over its amassing of digital dollars while around it the ruins of human civilization decay and the world awaits the evolution of the next intelligent species to start the game anew.

 

Engineering Astronauts

Posted in Ethics, Technology by Michael LaBossiere on September 2, 2016

If humanity remains a single planet species, our extinction is all but assured—there are so many ways the world could end. The mundane self-inflicted apocalypses include such things as war and environmental devastation. There are also more exotic dooms suitable for speculative science fiction, such as a robot apocalypse or a bioengineered plague. And, of course, there is the classic big rock from space scenario. While we will certainly bring our problems with us into space, getting off world would dramatically increase our chances of survival as a species.

While species do endeavor to survive, there is the moral question of whether or not we should do so. While I can easily imagine humanity reaching a state where it would be best if we did not continue, I think that our existence generates more positive value than negative value—thus providing the foundation for a utilitarian argument for our continued existence and endeavors to survive. This approach can also be countered on utilitarian grounds by contending that the evil we do outweighs the good, thus showing that the universe would be morally better without us. But, for the sake of the discussion that follows, I will assume that we should (or at least will) endeavor to survive.

Since getting off world is an excellent way of improving our survival odds, it is somewhat ironic that we are poorly suited for survival in space and on other worlds such as Mars. Obviously enough, naked exposure to the void would prove fatal very quickly; but even with technological protection our species copes poorly with the challenges of space travel—even those presented by the very short trip to our own moon. We would do somewhat better on other planets or on moons; but these also present significant survival challenges.

While there are many challenges, there are some of special concern. These include the danger presented by radiation, the health impact of living in gravity significantly different from earth, the resource (food, water and air) challenge, and (for space travel) the time problem. Any and all of these can prove to be fatal and must be addressed if humanity is to expand beyond earth.

Our current approach is to use our technology to recreate as closely as possible our home environment. For example, our manned space vessels are designed to provide some degree of radiation shielding, they are filled with air and are stocked with food and water. One advantage of this approach is that it does not require any modification to humans; we simply recreate our home in space or on another planet. There are, of course, many problems with this approach. One is that our technology is still very limited and cannot properly address some challenges. For example, while artificial gravity is standard in science fiction, we currently rely on rather ineffective means of addressing the gravity problem. As another example, while we know how to block radiation, there is the challenge of being able to do this effectively on the journey from earth to Mars. A second problem is that recreating our home environment can be difficult and costly. But, it can be worth the cost to allow unmodified humans to survive in space or on other worlds. This approach points towards a Star Trek style future: normal humans operating within a bubble of technology. There are, however, alternatives.

Another approach is also based in technology, but aims at either modifying humans or replacing them entirely. There are two main paths here. One is that of machine technology in which humans are augmented in order to endure conditions that differ radically from that of earth. The scanners of Cordwainer Smith’s “Scanners Live in Vain” are one example of this—they are modified and have implants to enable them to survive the challenges of operating interstellar vessels. Another example is Man Plus, Frederik Pohl’s novel about a human transformed into a cyborg in order to survive on Mars. The ultimate end of this path is the complete replacement of humans by intelligent machines, machines designed to match their environments and free of human vulnerabilities and short life spans.

The other is the path of biological technology. On this path, humans are modified biologically in order to better cope with non-earth environments. These modifications would presumably start fairly modestly, such as genetic modifications to make humans more resistant to radiation damage and better adapted to lower gravity. As science progressed, the modifications could become far more radical, with a complete re-engineering of humans to make them ideally match their new environments. This path, unnaturally enough, would lead to the complete replacement of humans with new species.

These approaches do have advantages. While there would be an initial cost in modifying humans to better fit their new environments, the better the adaptations, the less need there would be to recreate earth-like conditions. This could presumably result in considerable cost-savings and there is also the fact that the efficiency and comfort of the modified humans would be greater the better they matched their new environments. There are, however, the usual ethical concerns about such modifications.

Replacing homo sapiens with intelligent machines or customized organisms would also have a high initial startup cost, but these beings would presumably be far more effective than humans in the new environments. For example, an intelligent machine would be more resistant to radiation, could sustain itself with solar power, and could be effectively immortal as long as it is repaired. Such a being would be ideal to crew (or be) a deep space mission vessel. As another example, custom created organisms or fully converted humans could ideally match an environment, living and working in radical conditions as easily as standard humans work on earth. Clifford D. Simak’s “Desertion” discusses such an approach; albeit one that has unexpected results on Jupiter.

In addition to the usual moral concerns about such things, there is also the concern that such creations would not preserve the human race. On the one hand, it is obvious that such beings would not be homo sapiens. If the entire species was converted or gradually phased out in favor of the new beings, that would be the end of the species—the biological human race would be no more. The voice of humanity would fall silent. On the other hand, it could be argued that the transition could suffice to preserve the identity of the species—a likely way to argue this would be to re-purpose the arguments commonly used for the persistence of personal identity across time. It could also be argued that while the biological species homo sapiens could cease to be, the identity of humanity is not set by biology but by things such as values and culture. As such, if our replacements retained the relevant connection to human culture and values (they sing human songs and remember the old, old places where once we walked), they would still be human—although not homo sapiens.

Drug Prices

Posted in Ethics, Philosophy, Science, Technology by Michael LaBossiere on July 20, 2016

Martin Shkreli became the villain of drug pricing when he increased the price of a $13.50 pill to $750. While the practice of buying up smaller drug companies and increasing the prices of their products is a standard profit-making venture, the scale of the increase and Shkreli’s attitude drew attention to this incident. Unfortunately, while the Shkreli episode is the best known case, drug pricing is a sweeping problem. The August 2016 issue of Consumer Reports features an article on high drug prices in the United States and provides an excellent analysis of the matter—I am using it as the basis for the numbers I mention.

From the standpoint of consumers, the main problem is that drugs are priced extremely high—sometimes at a level that literally bankrupts patients. Faced with social pushback, drug companies do offer some justifications for the high prices. One standard reason is that the high prices are needed to pay the R&D costs of the drugs. While a company does have the right to pass on the cost of drug development, consideration of the facts tells another story about the pricing of drugs.

First, about 38% of the basic research is actually funded by taxpayer money—so the public is paying twice: once in taxes and once again for the drugs resulting from the research. This, of course, leaves a significant legitimate area of expenses for companies, but hardly enough to warrant absurdly high prices.

Second, most large drug companies spend almost twice as much on promotion and marketing as they do on R&D. While these are legitimate business expenses, this fact does undercut using R&D expenses to justify excessive drug prices. Obviously, telling the public that pills are pricy because of the cost of marketing pills so people will buy them would not be an effective strategy. There is also the issue of the ethics of advertising drugs, which is another matter entirely.

Third, many “new” drugs are actually slightly tweaked old drugs. Common examples include combining two older drugs to create a “new” drug, changing the delivery method (from an injectable to a pill, for example) or altering the release time. In many cases, the government will grant a new patent for these minor tweaks and this will grant the company up to a 20-year monopoly on the product, preventing competition. This practice, though obviously legal, is certainly sketchy. To use an analogy, imagine a company held the patent on a wheel and an axle. Then, when those patents expired, they patented wheel + axle as a “new” invention. That would obviously be absurd.

Companies also try other approaches to justify the high cost, such as arguing that the drugs treat serious conditions or can save money by avoiding a more expensive treatment. While these arguments do have some appeal, it seems morally problematic to argue that the price of a drug can be legitimately based on the seriousness of the condition it treats. This smells of a protection scheme or coercion: “pay what we want…or you die.” The money saving argument is less odious, but is still problematic. By this logic, car companies should be able to charge vast sums for safety features since they protect people from very expensive injuries. It is, of course, reasonable to make a profit on products that provide significant benefits—but there need to be moral limits to the profits.

The obvious counter to my approach is to argue that drug prices should be set by the free-market: if people are willing to pay large sums for drugs, then the drug companies should be free to charge those prices. After all, companies like Apple and Porsche sell expensive products without (generally) being demonized for making profits.

The easy response is that luxury cars and iWatches are optional luxuries that a person can easily do without and there are many cheaper (and better) alternatives. However, drug companies sell drugs that are necessary for a person’s health and even survival—they are generally not optional products. There is also the fact that drug companies enjoy patent protection that precludes effective competition. While Apple does hold patents on its devices, there are many competitors. For example, since I would rather not shell out $350 for an iWatch, I use a Pebble Watch. I could also have opted to go with a $10 watch. But, if I had hepatitis C and wanted to be cured, I would be stuck with only one drug option.

While defenders of drug prices laud the free market and decry “government interference”, their ability to charge high prices depends on the interference of the state. As noted above, the United States and other governments issue patents to drug companies that grant them exclusive ownership. Without this protection, a company that wanted to charge $750 for a $13.50 pill would find competitors rushing to sell the pill for far less. After all, it would be easy enough for a competing drug company to analyze a drug and produce it. By accepting the patent system, the drug companies accept that the state has a right to engage in legal regulation of the drug industry—that is, to replace the invisible hand with a very visible hand of the state. Once this is accepted, the door is opened to allowing additional regulation on the grounds that the state will provide protection for the company’s property using taxpayer money in return for the company agreeing not to engage in harmful pricing of drugs. Roughly put, if the drug companies expect people to obey the social contract with the state, they also need to operate within the social contract. Companies could, of course, push for a truly free market: they would be free to charge whatever they want for drugs without state interference, but there would be no state interference into the free market activities of their competitors when they duplicate the high price drugs and start undercutting the prices.

In closing, if the drug companies want to keep the patent protection they need for high drug prices, they must be willing to operate within the social contract. After all, citizens should not be imposed upon to fund the protection of the people who are, some might claim, robbing them.

 

Body Hacking III: Better than Human

Posted in Ethics, Philosophy, Technology by Michael LaBossiere on March 25, 2016

While most of the current body hacking technology is merely gimmicky and theatrical, it does have potential. It is, for example, easy enough to imagine that the currently very dangerous night-vision eye drops could be made into a safe product, allowing people to hack their eyes for good or nefarious reasons. There is also the model of the cyberpunk future envisioned by such writers as William Gibson and games like Cyberpunk and Shadowrun. In such a future, people might body hack their way to being full cyborgs. In the nearer future, there might be such augmentations as memory backups for the brain, implanted phones, and even subdermal weapons. Such augmenting hacks do raise various moral issues that go beyond the basic ethics of self-modification. Fortunately, these ethical matters can be effectively addressed by the application of existing moral theories and principles.

Since the basic ethics of self-modification were addressed in the previous essay, this essay will focus solely on the ethical issue of augmentation through body hacking. This issue does, of course, stack with the other moral concerns.

In general, there seems to be nothing inherently wrong with the augmentation of the body through technology. The easy way to argue for this is to draw the obvious analogy to external augmentation: starting with sticks and rocks, humans augmented their natural capacities. If this is acceptable, then moving the augmentation under the skin should not open up a new moral world.

The easy and obvious objection is to contend that under the skin is a new moral world—that, for example, a smart phone carried in the pocket is one thing, while a smartphone embedded in the skull is quite another.

This objection does have merit: implanting the technology is morally significant. At the very least, there are moral concerns about the potential health risks. However, this concern is about the medical aspects rather than the augmentation itself, and the augmentation is the focus of the moral discussion at hand. This is not to say that the health issues are unimportant—they are actually very important; they simply fall under another moral issue.

If it is accepted that augmentation is, in general, morally acceptable, there are still legitimate concerns about specific types of augmentation and the context in which they are employed. Fortunately, there is already considerable moral discussion about these categories of augmentation.

One area in which augmentation is of considerable concern is in sports and games. Athletes have long engaged in body hacking—if the use of drugs can be considered body hacking. While those playing games like poker generally do not use enhancing drugs, they have attempted to make use of technology to cheat. While future body hacks might be more dramatic, they would seem to fall under the same principles that govern the use of augmenting substances and equipment in current sports. For example, an implanted device that stores extra blood to be added during the competition would be analogous to existing methods of blood doping. As another example, a poker or chess player might implant a computer that she can use to cheat at the game.

While specific body hacks will need to be addressed by the appropriate governing bodies of sports and games, the basic principle that cheating is morally unacceptable still applies. As such, the ethics of body hacking in sports and games is easy enough to handle in general—the real challenge will be sorting out which hacks are cheating and which are acceptable. In any case, some interesting scandals can be expected.

The field of academics is also an area of concern. Since students are quite adept at using technology to cheat in school and on standardized tests, it must be expected that there will be efforts to cheat through body hacking. As with cheating in sports and games, the basic ethical framework is well established: cheating is morally unacceptable in such contexts. As with sports and games, the challenge will be sorting out which hacks are considered cheating and which are not. If body hacking becomes mainstream, it can be expected that education and testing will need to change, as will what counts as cheating. To use an analogy, calculators are often allowed on tests, and thus the future might see implanted computers being allowed for certain tests. Testing of memory might also become pointless—if most people have implanted devices that can store data and link to the internet, memorizing things might cease to be a skill worth testing. This does, however, segue into the usual moral concerns about people losing abilities or becoming weaker due to technology. Since these are general concerns that have applied to everything from the abacus to the automobile, I will not address them here.

There is also the broad realm composed of all the other areas of life that do not generally have specific moral rules about cheating through augmentation. These include such areas as business and dating. While there are moral rules about certain forms of cheating, the likely forms of body hacking would not seem to be considered cheating in such areas, though they might be regarded as providing an unfair advantage—especially in cases in which the wealthy classes are able to gain even more advantages over the less well-off classes.

As an example, a company with considerable resources might use body hacking to upgrade its employees so they can be more effective, thus providing a competitive edge over lesser companies. While it seems likely that certain augmentations will be regarded as unfair enough to require restriction, body hacking would merely change the means and not the underlying game. That is, the well-off always have considerable advantages over the less well-off. Body hacking would just be a new tool to be used in the competition. Hence, existing ethical principles would apply here as well. Or not be applied—as is so often the case when vast sums of money are on the line.

So, while body hacking for augmentation will require some new applications of existing moral theories and principles, it does not significantly change the moral landscape. Like almost all changes in technology, it will merely provide new ways of doing old things. Like cheating in school or sports. Or life.


Body Hacking II: Restoration & Replacement

Posted in Ethics, Philosophy, Technology by Michael LaBossiere on March 23, 2016

While body hacking is sometimes presented as being new and radical, humans have been engaged in the practice (under other names) for quite some time. One of the earliest forms of true body hacking was probably the use of prosthetic parts to replace lost pieces, such as a leg or hand. These hacks were aimed at restoring a degree of functionality, so they were practical hacks.

While most contemporary body hacking seems aimed at gimmickry or rather limited attempts at augmentation, there are some serious applications that involve replacement and restoration. One example of this is the color-blind person who uses a skull-mounted camera to provide audio cues regarding colors. This hack serves as a replacement for missing components of the eye, albeit in a somewhat odd way.

Medicine is, obviously enough, replete with body hacks ranging from contact lenses to highly functional prosthetic limbs. These technologies and devices provide people with some degree of replacement and restoration for capabilities they lost or never had. While these sorts of hacks are typically handled by medical professionals, advances in existing technology and the rise of new technologies will certainly result in more practical hacks aimed not at gimmickry but at restoration and replacement. There will also certainly be considerable efforts aimed at augmentation, but that matter will be addressed in another essay.

Since humans have been body hacking for replacement and restoration for thousands of years, the ethics of this matter are rather well settled. In general, the use of technology for the medical purposes of replacement or restoration is morally unproblematic. After all, this process simply fulfills the main purpose of medicine: to get a person as close to their normal healthy state as possible. To use a specific example, there really is no moral controversy over the use of prosthetic limbs that are designed to restore functionality. In the case of body hacks, the same general principle would apply: hacks that aim at restoration or replacement are generally morally unproblematic. That said, there are some potential areas of concern.

One area of both moral and practical concern is the risk of body hacking done by non-professionals, that is, amateur or DIY body hacking. The concern is that such hacking could have negative consequences—that is, the hack could turn out to do more harm than good. This might be due to bad design, poor implementation or other causes. For example, a person might attempt a hack to replace a missing leg and have it fail catastrophically, resulting in a serious injury. This is, of course, not unique to body hacking—it is a general matter of good decision making.

As with health and medicine in general, it is generally preferable to go with a professional rather than an amateur or a DIY endeavor. Also, the possibility of harm makes this a matter of moral concern. That said, there are many people who cannot afford professional care, and technology will afford people an ever-growing opportunity to body hack for medical reasons. This sort of self-help can be justified on the grounds that some restoration or replacement is better than none. This assumes that the self-help efforts do not result in worse harm than doing nothing. As such, body hackers and society will need to consider the ethics of the risks of amateur and DIY body hacking. Guidance can be found in existing medical ethics, such as the moral guides for people attempting to practice medicine on themselves and others without proper medical training.

A second area of moral concern is that some people will engage in replacing fully functional parts with body hacks that are equal or inferior to the original (augmentation will be addressed in the next essay). For example, a person might want to remove a finger to replace it with a mechanical finger with a built-in USB drive. As another example, a person might want to replace her eye with a camera comparable or inferior to her natural eye.

One clear moral concern is the potential danger of such hacks—removing a body part can be rather dangerous. One approach would be to weigh the harms and benefits of such hacking. On the face of it, such replacement hacks would seem to be at best neutral—that is, the person will end up with the same capabilities as before. It is also possible, perhaps likely, that the replacement attempt will result in diminished capabilities, thus making the hack wrong because of the harm inflicted. Some body hackers might argue that such hacks have a value beyond functionality, such as the value of self-expression or of achieving a state of existence that matches one’s conception or vision of self. In such cases, the moral question would be whether or not these factors are worth considering and, if they are, how much moral weight they should be given.

There is also the worry that such hacks would be a form of unnecessary self-mutilation and thus at best morally dubious. A counter to this is to argue, as John Stuart Mill did, that people have a right to self-harm, provided that they do not harm others. That said, arguing that people do not have a right to interfere with self-harm (provided the person is acting freely and rationally) does not entail that self-harm is morally acceptable. It is certainly possible to argue against self-harm on utilitarian grounds and also on the basis of moral obligations to oneself. Arguments from the context of virtue theory would also apply—self-harm is certainly contrary to developing one’s excellence as a person.

These approaches could be countered. Utilitarian arguments can be met with utilitarian arguments that offer a different evaluation of the harms and benefits. Arguments based on obligations to oneself can be countered by arguing that there are no such obligations or that the obligations one does have allow for this sort of modification. Arguments from virtue theory could be countered by attacking the theory itself or by showing how such modifications are consistent with moral excellence.

My own view, which I consistently apply to other areas such as drug use, diet, and exercise, is that people have a moral right to the freedom of self-abuse/harm. This requires that the person is capable of making an informed decision and is not coerced or misled. As such, I hold that a person has every right to engage in DIY body hacking. Since I also accept the principle of harm, I hold that society has a moral right to regulate the body hacking of others, just as other similar practices (such as dentistry) are regulated. This is to prevent harm being inflicted on others. Being fond of virtue theory, I do hold that people should not engage in self-harm, even though they have every right to do so without having their liberty restricted. To use a concrete example, if someone wants to spoon out her eyeball and replace it with an LED light, then she has every right to do so. However, if an untrained person wants to set up shop and scoop out eyeballs for replacement with lights, then society has every right to prevent that. I do think that scooping out an eye would be both foolish and morally wrong, which is also how I look at heroin use and smoking tobacco.
