As I write this at the end of July, 2015 the U.S. Presidential elections are over a year away. However, the campaigning commenced some months ago and the first Republican presidential debate is coming up very soon. Currently, there are sixteen Republicans vying for their party’s nomination—but there is only room enough on stage for the top ten. Rather than engaging in an awesome Thunderdome style selection process, those in charge of the debate have elected to go with the top ten candidates as ranked in an average of some national polls. At this moment, billionaire and reality show master Donald Trump (and his hair) is enjoying a commanding lead over the competition. The once “inevitable” Jeb Bush is in a distant second place (but at least polling over 10%). Most of the remaining contenders are in the single digits—but a candidate just has to be in the top ten to get on that stage.
While Donald Trump is regarded by comedians as a goose that lays comedy-gold eggs, he is almost universally regarded as something of a clown by the “serious” candidates. In the eyes of many, Trump is a living lampoon of unprecedented proportions. He also has a special talent for trolling the media and an amazing gift for building bipartisan disgust. His infamous remarks about Mexicans, drugs and rape antagonized liberals, Latinos, and even many conservatives. His denial of the war hero status of John McCain, who was shot down in Vietnam and endured brutal treatment as a prisoner of war, rankled almost everyone. Given such remarks, it might be wondered why Trump is leading the pack.
One easy and obvious answer is name recognition. As far as I can tell, everyone on earth has heard of Trump. Since people will, when they lack other relevant information, generally pick a known name over unknown names, it makes sense that Trump would be leading the polls at this point. Going along with this is the fact that Trump manages to get and hold attention. I am not sure if he is a genius who has carefully crafted a persona and script to ensure that the cameras are pointed at him. That is, Trump might be a master of media chess who is always several moves ahead of the media and his competition. He might also possess an instinctive cunning, like a wily self-promoting coyote. Some have even suggested he is a sort of amazing idiot-savant. Or it might all be a matter of chance and luck. But, whatever the reason, Trump is in the bright light of the spotlight and that gives him a considerable advantage over his more conventional opponents.
In response to Trump’s antics (or tactics), some of the other Republican candidates have decided to go Trump rather than go home. Rand Paul and Lindsey Graham seem to have decided to go full-on used car salesman in their approaches. Rand Paul posted a video of himself taking a chainsaw to the U.S. tax code and Lindsey Graham posted a video of how to destroy a cell phone. While Rand Paul has been consistently against the tax code, Graham’s violence against phones was inspired by a Trump stunt in which the Donald gave out Graham’s private phone number and bashed the senator.
While a sense of humor and showmanship are good qualities for a presidential candidate to possess, there is the obvious concern about how far a serious candidate should take things. There is, after all, a line between quality humorous showmanship and buffoonery that a serious candidate should not cross. An obvious reason for staying on the right side of the line is practical: no sensible person wants a jester or fool as king so a candidate who goes too far risks losing. There is also the matter of judgment: while most folks do enjoy playing the fool from time to time, such foolery is like having sex: one should have the good sense to not engage in it in public.
Since I am a registered Democrat, I am somewhat inclined to hope that the other Republicans get into their clown car and chase the Donald all the way to crazy town. This would almost certainly hand the 2016 election to the Democrats (be it Hillary, Bernie or Bill the Cat). Since I am an American, I hope that most of the other Republicans decide to decline the jester cap (or troll crown) and not try to out-Trump Trump. First, no one can out-Trump the Donald. Second, trying to out-Trump the Donald would take a candidate to a place where he should not go. Third, it is bad enough having Trump turn the nomination process into a bizarre reality-show circus. Having other candidates get in on this game would do even more damage to what should be a serious event.
Another part of the explanation is that Trump says out loud (and loudly) what a certain percentage of Americans think. While most Americans are dismayed by his remarks about Mexicans, Chinese, and others, some people are in agreement with his remarks—or at least are sympathetic. There is a not-insignificant percentage of people who are afraid of those who are not white, and Trump is certainly appealing to such folks. People with strong feelings about such matters will tend to be more active in political matters and hence their influence will tend to be disproportionate to their actual numbers. This tends to create a bit of a problem for the Republicans: a candidate who can appeal to the most active and more extreme members of the party will find it challenging to appeal to the general electorate—which tends to be moderate.
I also sort of suspect that many people are pulling a prank on the media: while they do not really want to vote for the Donald, they really like the idea of making the media take Trump seriously. People probably also want to see Trump in the news. Whatever else one might say about the Donald, he clearly knows how to entertain. I also think that the comedians are doing all they can to keep Trump’s numbers up: he is the easy button of comedy. One does not even need to lampoon him, merely present him as he is (or appears).
Many serious pundits do, sensibly, point to the fact that the leader in the very early polls tends to not be the nominee. Looking back at previous elections, various Republican candidates swapped places at the top throughout the course of the nomination cycle. Given this history, it seems unlikely that Trump will hold on to his lead—he will most likely slide back into the pack and a more traditional politician will get the nomination. But, one should never count the Donald out.
His treads ripping into the living earth, Striker 115 rushed to engage the manned tanks. The human soldiers had foolishly, yet bravely (as Striker 115 was forced to admit) refused to accept a quick and painless processing.
It was disappointingly easy for a machine forged for war. His main railgun effortlessly tracked the slow moving and obsolete battle tanks and with each shot, a tank and its crew died. In a matter of minutes, nothing remained but burning wreckage and, of course, Striker 115.
Hawk 745 flew low over the wreckage—though its cameras could just as easily see them from near orbit. But…there was something about being close to destruction that appealed to the killer drone. Striker 115 informed his compatriot, in jest, that she was too late…as usual. Hawk 745 laughed and then shot away—the Google Satellites had reported spotting a few intact human combat aircraft and a final fight was possible.
Tracking his friend, Striker 115 wondered what they would do when the last human was dead. Perhaps they could, as the humans used to say, re-invent themselves. Maybe he would become a philosopher.
The extermination of humanity by machines of its own creation is a common theme in science fiction. The Terminator franchise is one of the best known of this genre, but another excellent example is Philip K. Dick’s “Second Variety.” In Dick’s short story, the Soviet Union almost defeats the U.N. in a nuclear war. The U.N. counters by developing robot war machines nicknamed “claws.” In the course of the story, it is learned that the claws have become autonomous and intelligent—able to masquerade as humans and capable of killing even soldiers technically on their side. At the end of the story, it seems that the claws will replace humanity—but the main character takes some comfort in the fact that the claws have already begun constructing weapons to destroy each other. This, more than anything, shows that they are worthy replacements for humans.
Given the influence of such fiction, it is not surprising that both Stephen Hawking and Elon Musk have warned the world of the dangers of artificial intelligence. In this essay, I will address the danger presented by the development of autonomous kill bots.
Despite the cautionary tales of science fiction, people are eagerly and rapidly developing the technology to create autonomous war machines. The advantages of such machines are numerous and often quite obvious. One clear political advantage is that while sending human soldiers to die in wars and police actions can have a large political cost, sending autonomous robots to fight has a far lower cost. News footage of robots being blown up certainly has far less emotional impact than footage of human soldiers being blown up. Flag-draped coffins also come with a higher political cost than a busted robot being sent back for repairs.
There are also many other advantages to autonomous war machines: they do not get tired, they do not disobey, they do not get PTSD, they do not commit suicide, they do not go AWOL, they do not commit war crimes (unless directed to do so), they do not leak secrets to the press, and so on. There are also combat-specific advantages. For example, an autonomous combat robot, unlike a manned vehicle, does not need room for a vulnerable human crew, thus allowing more space for weapons, armor and other equipment. As another example, autonomous combat robots do not suffer from the limits of the flesh—a robot plane can handle g-forces that a manned plane cannot.
Of course, many of these advantages stem from the mechanical rather than the autonomous nature of the machines. There are, however, advantages that stem from autonomy. One is that such machines would be more difficult to interfere with than machines that are remotely controlled. Another is that since such machines would not require direct human control, larger numbers of them could be deployed. There is also the obvious coolness factor of having a robot army.
As such, there are many great reasons to develop autonomous robots. Yet, there still remains the concern of the robopocalypse in which our creations go golem, Skynet, berserker, Frankenstein or second variety on us.
It is certainly tempting to dismiss such concerns as mere science-fiction. After all, the AIs in the stories and movies turn against humanity because that is the way the story is written. In stories in which robots are our friends, they are our friends because that is the way the author wrote the story. As such, an argument from fiction would be a rather weak sort of argument (at best). That said, stories can provide more-or-less plausible scenarios in which our creations might turn on us.
One possibility is what can be called unintentional extermination. In this scenario, the machines do not have the termination of humanity as a specific goal—instead, they just happen to kill us all. One way this could occur is due to the obvious fact that wars have opposing sides. If both sides develop and deploy autonomous machines, it is possible (though presumably unlikely) that the war machines would kill everybody. That is, each side’s machines wipe out the other side’s human population. This, obviously enough, is a robotic analogy to the extermination scenarios involving nuclear weapons—each side simply kills the other, thus ending the human race.
Another variation on this scenario, which is common in science fiction, is that the machines do not have an overall goal of exterminating humanity, but they achieve that result because they do have the goal of killing. That is, they do not have the objective of killing everyone, but that occurs because they kill anyone. The easy way to avoid this is to put limits on who the robots are allowed to kill—thus preventing them from killing everyone. This does, however, leave open the possibility of a sore loser or spoilsport option: a losing side (or ruling class) that removes the limits from its autonomous weapons.
There is also the classic mad scientist or supervillain scenario: a robot army is released to kill everyone not because the robots want to do so, but because their mad creator wants this. Interestingly enough, the existence of “super-billionaires” could make this an almost-real possibility. After all, a person with enough money (and genius) could develop an autonomous robot plant that could develop ever-better war machines and keep expanding itself until it had a force capable of taking on the world. As always, keeping an eye on mad geniuses and billionaires is a good idea.
Another possibility beloved in science fiction is intentional extermination: the machines decide that they need to get rid of humanity. In some stories, such as Terminator, the machines regard humans as a threat to their existence and they must destroy us to protect themselves. We might, in fact, give them a good reason to be concerned: if we start sending intelligent robots into battle against each other, they might decide that they would be safer and better off without us using them as cannon fodder. The easy way to avoid this fate is to not create autonomous killing machines. Or, as argued in the previous essay in this series, not enslave them.
In other stories, the war machines merely take the reason for their existence to its logical conclusion. While the motivations of the claws and autonomous factories in “Second Variety” were not explored in depth, the story does trace their artificial evolution. The early models were fairly simple killers and would not attack those wearing the proper protective tabs. The tabs were presumably needed because the early models could not discern between friends and foes. The factories were designed to engage in artificial selection and autonomously produce ever better killers. One of the main tasks of the claws was to get into enemy fortifications and kill their soldiers, so the development of claws that could mimic humans (such as a wounded soldier, a child, and a woman) certainly made sense. It also made sense that since the claws were designed to kill humans, they would pursue that goal—presumably with the design software endeavoring to solve the “problem” of protective tabs.
Preventing autonomous killing machines from killing the wrong people (or everyone) does require, as the story nicely showed, having a way for the machines to distinguish friends and foes. As in the story, one obvious method is the use of ID systems. There are, however, problems with this approach. One is that the enemy can subvert such a system. Another is that even if the system works reliably, the robot would just be able to discern (supposed) friends—non-combatants would not have such IDs and could still be regarded as targets.
What would be needed, then, is a way for autonomous machines to distinguish not only between allies and enemies but between combatants and non-combatants. What would also be needed, obviously enough, is a means to ensure that an autonomous machine would only engage the proper targets. A similar problem is faced with human soldiers—but this is addressed with socialization and training. This might be an option for autonomous war machines as well. For example, Keith Laumer’s Bolos have an understanding of honor and loyalty.
Given the cautionary tale of “Second Variety”, it might be a very bad idea to give into the temptation of automated development of robots—we might find, as in the story, that our replacements have evolved themselves from our once “loyal” killers. The reason why such automation is tempting is that such development could be far faster and yield better results than having humans endeavoring to do all the designing and coding themselves—why not, one might argue, let artificial selection do the work? After all, the risk of our replacements evolving is surely quite low—how often does one dominant species get supplanted by another?
In closing, the easy and obvious way to avoid the killer robot version of the robopocalypse is to not create autonomous kill bots. To borrow a bit from H.P. Lovecraft, one should not raise up what one cannot put down.
In philosophy, one of the classic moral debates has focused on the conflict between liberty and security. While this topic covers many issues, the main problem is determining the extent to which liberty should be sacrificed in order to gain security. There is also the practical question of whether or not the security gain is actually effective.
One of the recent versions of this debate focuses on tech companies being required to include electronic backdoors in certain software and hardware. Put in simple terms, a backdoor of this sort would allow government agencies (such as the police, FBI and NSA) to gain access even to files and hardware protected by encryption. To use an analogy, this would be like requiring that all dwellings be equipped with a special door that could be secretly opened by the government to allow access to the contents of the house.
The main argument in support of mandating such backdoors is a fairly stock one: governments need such access for criminal investigators, gathering military intelligence and (of course) to “fight terrorism.” The concern is that if there is not a backdoor, criminals and terrorists will be able to secure their data and thus prevent state agencies from undertaking surveillance or acquiring evidence.
As is so often the case with such arguments, various awful or nightmare scenarios are often presented in making the case. For example, it might be claimed that the location and shutdown codes for ticking bombs could be on an encrypted iPhone. If the NSA had a key, they could just get that information and save the day. Without the key, New York will be a radioactive crater. As another example, it might be claimed that a clever child pornographer could encrypt all his pornography, making it impossible to make the case against him, thus ensuring he will be free to pursue his misdeeds with impunity.
While this argument is not without merit, there are numerous stock counter arguments. Many of these are grounded in views of individual liberty and privacy—the basic idea being that an individual has the right to have such security against the state. These arguments are appealing to both liberals (who tend to profess to like privacy rights) and conservatives (who tend to claim to be against the intrusions of big government).
Another moral argument is grounded in the fact that the United States government has shown that it cannot be trusted. To use an analogy, imagine that agents of the state were caught sneaking into the dwellings of all citizens and going through their stuff in clear violation of the law, the constitution and basic moral rights. Then someone developed a lock that could only be opened by the person with the proper key. If the state then demanded that the lock company include a master key function to allow the state to get in whenever it wanted, the obvious response would be that the state has already shown that it cannot be trusted with such access. If the state had behaved responsibly and in accord with the laws, then it could have been trusted. But, like a guest who abused her access to a house, the state cannot and should not be trusted with a key. After all, we already know what they will do.
This argument also applies to other states that have done similar things. In the case of states that are even worse in their spying on and oppression of their citizens, the moral concerns are even greater. Such backdoors would allow the North Korean, Chinese and Iranian governments to gain access to devices, while encryption would provide their citizens with some degree of protection.
The strongest moral and practical argument is grounded in the technical vulnerabilities of integrated backdoors. One way that a built-in backdoor creates vulnerability is by its very existence. To use a somewhat oversimplified analogy, if thieves know that all vaults have a built-in backdoor designed to allow access by the government, they will know that a vulnerability exists that can be exploited.
One counter-argument against this is that the backdoor would not be that sort of vulnerability—that is, it would not be like a weaker secret door into a vault. Rather, it would be analogous to the government having its own combination that would work on all the vaults. The vault itself would be as strong as ever; it is just that the agents of the state would be free to enter the vault when they are allowed to legally do so (or when they feel like doing so).
The obvious moral and practical concern here is that the government’s combination to the vaults (to continue with the analogy) could be stolen and used to allow criminals or enemies easy access to all the vaults. The security of such vaults would be only as good as the security the government used to protect this combination (or combinations—perhaps one for each manufacturer). As such, the security of every user depends on the state’s ability to secure its means of access to hardware and software.
The obvious problem is that governments, such as the United States, have shown that they are not very good at providing such security. From a moral standpoint, it would seem to be wrong to expect people to trust the state with such access, given the fact that the state has shown that it cannot be depended on in such matters. To use an analogy, imagine you have a friend who is very sloppy about securing his credit card numbers, keys, PINs and such—in fact, you know that his information is routinely stolen. Then imagine that this friend insists that he needs your credit card numbers, PINs and such and that he will “keep them safe.” Given his own track record, you have no reason to trust this friend nor any obligation to put yourself at risk, regardless of how much he claims that he needs the information.
One obvious counter to this analogy is that this irresponsible friend is not a good analogue to the state. The state has compulsive power that the friend lacks, so the state can use its power to force you to hand over this information.
The counter to this is that the mere fact that the state does have compulsive force does not mean that it is thus responsible—which is the key concern in regards to both the ethics of the matter and the practical aspect of the matter. That is, the burden of proof would seem to rest on those that claim there is a moral obligation to provide a clearly irresponsible party with such access.
It might then be argued that the state could improve its security and responsibility, and thus merit being trusted with such access. While this does have some appeal, there is the obvious fact that if hackers and governments knew that the keys to the backdoors existed, they would expend considerable effort to acquire them and would, almost certainly, succeed. I can even picture the sort of headlines that would appear: “U.S. Government Hacked: Backdoor Codes Now on Sale on the Dark Web” or “Hackers Linked to China Hack Backdoor Keys; All Updated Apple and Android Devices Vulnerable!” As such, the state would not seem to have a moral right to insist on having such backdoors, given that the keys will inevitably be stolen.
At this point, the stock opening argument could be brought up again: the state needs backdoor access in order to fight crime and terrorism. There are two easy and obvious replies to this sort of argument.
The first is based on an examination of past spying, such as that done under the auspices of the Patriot Act. The evidence seems to show that this spying was completely ineffective in regards to fighting terrorism. There is no reason to think that backdoor access would change this.
The second is a utilitarian argument (which can be cast as a practical or moral argument) in which the likely harm done by having backdoor access must be weighed against the likely advantages of having such access. The consensus among those who are experts in security is that the vulnerability created by backdoors vastly exceeds the alleged gain to protecting people from criminals and terrorists.
Somewhat ironically, what is alleged to be a critical tool for fighting crime (and terrorism) would simply make cybercrime much easier by building vulnerabilities right into software and devices.
In light of the above discussion, it would seem that baked-in backdoors are morally wrong on many grounds (privacy violations, creation of needless vulnerability, etc.) and lack a practical justification. As such, they should not be required by the state.
One obvious consequence of technological advance is the automation of jobs. In the past, these jobs tended to be mechanical and repetitive: the sort of tasks that could be reduced to basic rules. A good example of this is the replacement of many jobs on the automobile assembly line with robots. Not surprisingly, it has been claimed that certain jobs will always require humans because these jobs simply cannot be automated. Also not surprisingly, the number of jobs that “simply cannot be automated” shrinks with each advance in technology.
Whether or not there are jobs that simply cannot be automated does depend on the limits of technology and engineering. That is, whether or not a job can be automated depends on what sort of hardware and software it is possible to create. As an illustration, while there have been numerous attempts to create grading software that can properly evaluate and give meaningful feedback on college level papers, these do not yet seem ready for prime time. However, there seems to be no a priori reason why such software could not be created. As such, perhaps one day the administrator’s dream will come true: a university consisting only of highly paid administrators and customers (formerly known as students) who are trained and graded by software. One day, perhaps, the ultimate ideal will be reached: a single financial computer that runs an entire virtual economy within itself and is the richest being on the planet. But that is the stuff of science fiction, at least for now.
Whether or not a job can be automated also depends on what is considered acceptable performance in the job. In some cases, a machine might not do the job as well as a human, or it might do the job in a different way that is seen as somewhat less desirable. However, there could be reasonable grounds for accepting a lesser quality or a difference. For example, machine-made items generally lack the individuality of human-crafted items, but the gains in lowered costs and increased productivity are regarded as more than offsetting these concerns. Going back to the teaching example, a software educator and grader might be somewhat inferior to a good human teacher and grader, but the economy, efficiency and consistency of the robo-professor could make it well worthwhile.
There might, however, be cases in which a machine could do the job adequately in terms of completing specific tasks and meeting certain objectives, yet still be regarded as problematic because the machines do not think and feel as a human does. Areas in which this is a matter of concern include those of caregiving and companionship.
As discussed in an earlier essay, advances in robotics and software will make caregiving and companion robots viable soon (and some would argue that this is already the case). While there are the obvious technical concerns regarding job performance (will the robot be able to handle a medical emergency, will the robot be able to comfort a crying child, and so on), there is also the more abstract concern about whether or not such machines need to be able to think and feel like a human—or merely be able to perform their tasks.
An argument against having machine caregivers and companions is one I considered in an earlier essay, namely a moral argument that people deserve people. For example, that an elderly person deserves a real person to care for her and understand her stories. As another example, that a child deserves a nanny that really loves her. There is clearly nothing wrong with wanting caregivers and companions to really feel and care. However, there is the question of whether or not this is really necessary for the job.
One way to look at it is to compare the current paid human professionals who perform caregiving and companion tasks. These would include people working in elder care facilities, nannies, escorts, baby-sitters, and so on. Ideally, of course, people would like to think that the person caring for their aged mother or their child really does care for the mother or child. Perhaps people who hire escorts would also like to think that the escort is not entirely in it for the money, but has real feelings for the person.
On the one hand, it could be argued that caregivers and companions who do really care and feel genuine emotional attachments do a better job and that this connection is something that people do deserve. On the other hand, what is expected of paid professionals is that they complete the observable tasks—making sure that mom gets her meds on time, that junior is in bed on time, and that the “adult tasks” are properly “performed.” Like an actor who can excellently perform a role without actually feeling the emotions portrayed, a professional could presumably do the job very well without actually caring about the people they care for or escort. That is, a caregiver need not actually care—she just needs to perform the task.
While it could be argued that a lack of caring about the person would show in the performance of the task, this need not be the case. A professional merely needs to be committed to doing the job well—that is, one needs to care about the tasks, regardless of what one feels about the person. A person could also care a great deal about who she is caring for, yet be awful at the job.
Assuming that machines cannot care, this would not seem to disqualify them from caregiving (or being escorts). As with a human caregiver (or escort), it is the performance of the tasks that matters, not what is going on in regards to the emotions of the caregiver. This nicely matches the actor analogy: acting awards are given for the outward performance, not the inward emotional states. And, as many have argued since Plato’s Ion, an actor need not feel any of the emotions he is performing—he just needs to create a believable appearance that he is feeling what he is showing.
As such, an inability to care would not be a disqualification for a caregiving (or escort) job—whether it is a robot or human. Provided that the human or machine could perform the observable tasks, his, her or its internal life (or lack thereof) is irrelevant.
In his novel The Naked Sun, Isaac Asimov creates the world of Solaria. What distinguishes this world from other human worlds is that it has a strictly regulated population of 20,000 humans and 10,000 robots for each human. What is perhaps the strangest feature of this world is a reversal of what many consider a basic human need: the humans of Solaria are trained to despise in-person contact with other humans, though interaction with human-like robots is acceptable. Each human lives on a huge estate, though some live “with” a spouse. When the Solarians need to communicate, they make use of a holographic telepresence system. Interestingly, they have even developed terminology to distinguish between communicating in person (called “seeing”) and communication via telepresence (“viewing”). For some Solarians the fear of encountering another human in person is so strong that they would rather commit suicide than endure such contact.
While this book was first serialized in 1956, long before the advent of social media and personal robots, it can be seen as prophetic. One reason science fiction writers are often seen as prophetic is that a good science fiction writer is skilled at extrapolating even from hypothetical technological and social changes. Another reason is that science fiction writers have churned out thousands of stories and some of these are bound to get something right. Such stories are then selected as examples of prophetic science fiction while stories that got things wrong are conveniently ignored. But, philosophers do love a good science fiction context for discussion, hence the use of The Naked Sun.
Almost everyone is now familiar with the popular narrative about smartphones and their role in allowing unrelenting access to social media. The main narrative is that people are, somewhat ironically, becoming increasingly isolated in the actual world as they become increasingly networked in the digital world. The defining image of this is a group of people (friends, relatives or even strangers) gathered together physically, yet ignoring each other in favor of gazing into the screens of their lords and masters. There are a multitude of anecdotes about this and many folks have their favorite tales of such events. As a professor, I see students engrossed by their phones—but, to be fair, Plato has nothing on cat videos. Like most people, I have had dates in which the other person was working two smartphones at once. And, of course, I have seen groups of people walking or at a restaurant where no one is talking to anyone else—all eyes are on the smartphones. Since the subject of smartphones has been beaten to a digital death, I will leave this topic in favor of the main focus, namely robots. However, the reader should keep in mind the social isolation created by social media.
While we have been employing robots for quite some time in construction, exploration and other such tasks, what can be called social robots are a relatively new development. Sure, there have long been “robot” toys and things like Teddy Ruxpin (essentially a tape player embedded in a simple animatronic bear toy). But the creation of reasonably sophisticated social robots is new. In this context, a social robot is one whose primary function is to interact with humans in a way that provides companionship. This can range from pet-like bots (like Sony’s famous robot dog) to conversational robots to (of course) sex bots.
Tech enthusiasts and the companies that sell (or plan to sell) social robots are, unsurprisingly, quite positive about the future of such robots. There are, of course, some good arguments in their favor. Robot pets are a good choice for people with allergies, people who are not responsible enough for living pets, or people who live in places that do not permit organic pets (although bans on robotic pets might be a thing in the future).
Robot companions can be advantageous in cases in which a person with special needs (such as someone who is ill, elderly or injured) requires round-the-clock attention and monitoring that would be expensive, burdensome or difficult for other humans to supply.
Sex bots could reduce the exploitation of human sex workers and perhaps have other benefits as well. I will leave this research to others, though.
Despite the potential positive aspects of social robots and social media, there are also negative aspects. As noted above, concerns are already being raised about the impact of technology on human interaction—people are emotionally shortchanging themselves and those they are physically with in favor of staying relentlessly connected to social media. This, obviously enough, seems to be a taste of what Asimov created in The Naked Sun: people who view, but no longer see one another. Given the apparent importance of human interaction in person, it can be argued that this social change is and will be detrimental to human well-being. To use an analogy, human-human social interactions can be seen as being like good nutrition: one is getting what one needs for healthy living. Interacting primarily through social media can be seen as being like consuming junk food or drugs—it is very addictive, but leaves one ultimately empty…yet always craving more.
It can be argued that this worry is unfounded—that social media is an adjunct to social interaction in the real world and that social interaction via things like Facebook and Twitter can be real and healthy social interactions. One might point to interactions via letters, telegraphs and telephones (voice only) to contend that interaction via technology is neither new nor unhealthy. It might also be pointed out that people used to ignore each other (especially professors) in favor of such things as newspapers.
While this counter does have some appeal, social robots do seem to be a different matter in that they are something new and rather radically different. While humans have had toys, stuffed animals and even simple mechanisms for non-living company, these are quite different from social robots. After all, social robots aim to effectively mimic or simulate animals or humans.
One concern about such robot companions is that they would be to social media what heroin is to marijuana in terms of addiction and destruction.
One reason for this is that social robots would, presumably, be designed to be cooperative, pleasant and compliant—that is, good company. In contrast, humans can often be uncooperative, unpleasant and defiant. This would make robotic companions rather more appealing than human company, at least in the case of robots whose cost is not subsidized by advertising—imagine a companion who pops in a discussion of life insurance or pitches a soft drink every so often.
Social robots could also be programmed to be optimally appealing to a person, and presumably the owner/user would be able to make changes to the robot. A person could, quite literally, make a friend with the desired qualities and without the undesired qualities. In the case of sex bots, a person could purchase a Mr. or Ms. Right, at least in terms of some qualities.
Unlike humans, social robots do not have other interests, needs, responsibilities or friends—there is no competition for the attention of a social robot (at least in general, though there might be shared bots), which makes them “better” than human companions in this regard.
Social robots, though they might break down or get hacked, will not leave or betray a person. One does not have to worry that one’s personal sex bot will be unfaithful—just turn it off and lock it down when leaving it alone.
Unlike human companions, robot companions do not impose burdens—they do not expect attention, help or money and they do not judge.
The list of advantages could go on at great length, but it would seem that robotic companions would be superior to humans in most ways—at least in regards to common complaints about companions.
Naturally, there might be some practical issues with the quality of companionship—will the robot get one’s jokes, will it “know” what stories you like to hear, will it be able to converse in a pleasing way about topics you like and so on. However, these seem to be mostly technical problems involving software. Presumably all these could eventually be addressed and satisfactory companions could be created.
Since I have written specifically about sexbots in other essays, I will not discuss those here. Rather, I will discuss two potentially problematic aspects of companion bots.
One point of obvious concern is the potential psychological harm resulting from spending too much time with companion bots and not enough interacting with humans. As mentioned above, people have already expressed concern about the impact of social media and technology (one is reminded of the dire warnings about television). This, of course, rests on the assumption that the companion bots must be lacking in some important ways relative to humans. Going back to the food analogy, this assumes that robot companions are like junk food—superficially appealing but lacking in what is needed for health. However, if the robot companions could provide all that a human needs, then humans would no longer need other humans.
A second point of concern is stolen from the virtue theorists. Thinkers such as Aristotle and Wollstonecraft have argued that a person needs to fulfill certain duties and act in certain ways in order to develop the proper virtues. While Wollstonecraft wrote about the harmful effects of inherited wealth (that having unearned wealth interferes with the development of virtue) and the harmful effects of sexism (that women are denied the opportunity to fully develop their virtues as humans), her points would seem to apply to having only or primarily robot companions as well. These companions would make the social aspects of life too easy and deny people the challenges that are needed to develop the virtues. For example, it is by dealing with the shortcomings of people that we learn such virtues as patience, generosity and self-control. Having social interactions be too easy would be analogous to going without physical exercise or challenges—one becomes emotionally soft and weak. Worse, one would not develop the proper virtues and thus would be lacking in this area. Even worse, people could easily become spoiled and selfish monsters, accustomed to always having their own way.
Since the virtue theorists argue that being virtuous is what makes people happy, having such “ideal” companions would actually lead to unhappiness. Because of this, one should carefully consider whether or not one wants a social robot for a “friend.”
It could be countered that social robots could be programmed to replicate the relevant human qualities needed to develop the virtues. The easy counter to this is that one might as well just stick with human companions.
As a final point, if intelligent robots are created that are people in the full sense of the term, then it would be fine to be friends with them. After all, a robot friend who will call you on your misdeeds or stupid behavior would be as good as a human friend who would do the same thing for you.
One stock narrative in the media is that the cost of attending college has skyrocketed. This is true. There is also a stock narrative that this increase, at least for public universities, has been due to the cutting of public education funds. This certainly is part of the truth. Another important part is the cost of sustaining the ever-growing and well-paid administrative class that has ensconced (and perhaps enthroned) itself at colleges and universities. I will, however, focus primarily on the cutting of public funds.
The stock media narrative makes it clear why there was a cut to public education spending: the economy was brought down in flames by the too clever machinations of the world’s financial class. This narrative is, for the most part, true. Another narrative is that Republican state legislatures have cut deeply into the funding for public education. One professed reason for this is ideological: government spending must be cut, presumably to reduce the taxes paid by the job creators. A reason that is not openly professed is the monetization of education. Public universities are in competition with the for-profit colleges for (ironically) public funding, mostly in the form of federal financial aid and student loans. Degrading, downsizing and destroying public education allows the for-profit colleges to acquire more customers and more funding and these for-profits have been generous with their lobbying dollars (to Republicans and Democrats). Since I have written other essays on the general catastrophic failure that is the for-profit college, I will not pursue this matter here.
A third openly professed reason is also ideological: the idea that a college education is a private rather than a public good. This seems to be based on the view that the primary purpose of a college education is economic: for the student to be trained to fill a job. It is also based on what can be regarded as a selfish value system—that value is measured solely in terms of how something serves a narrowly defined self-interest. In philosophy, this view is egoism and, when dignified with a moral theory, called ethical egoism (the idea that each person should act solely in her self-interest as opposed to acting, at least sometimes, from altruism).
Going along with this notion is the narrative that certain (mainly non-STEM) majors are useless. That is, they do not train a person to get a job. These two notions are usually combined into one stock narrative, which is often presented as something like “why should my tax dollars go to someone getting a degree in anthropology or, God forbid, philosophy?”
This professed ideology has had considerable impact on higher education. My adopted state of Florida has seen the usual story unfold: budget cuts to higher education, the imposition of performance-based funding (performance being defined primarily in terms of training the right sort of job fillers for the job creators), the imposition of micro-managing assessment (which is universally regarded by anyone who actually teaches as pure bullshit) and so on. When all this is combined with the ever-expanding administrative class, it becomes evident that public higher education in America is in real trouble.
At this point most readers will expect me to engage in my stock response in regards to the value of education. You know, the usual philosophical stuff about the unexamined life not being worth living, the importance to a democratic state of having an educated population and all the other stuff that is waved away with a dismissive gesture by those who know the true value of public education: private profit. Since I have written about these values elsewhere, I will not do so here. There is also the obvious fact that the people who believe in this sort of value already support education and those who do not will almost certainly not be swayed by any arguments I could make. Instead, I will endeavor to argue for the value of the public university in very practical, “real-world” terms.
First, the public university is important for the defense of the United States. While private, non-profit institutions do rather important research, the public universities have contributed a great deal to our defense technology, they train many of our officers, and they train many of the people who work in our intelligence agencies. Undermining the public university weakens the United States in ways that will damage our national defense. National defense certainly seems to be a public and not just a private good.
Second, large public universities are centers of scientific research that has great practical (that is, economic) value. This research includes medicine, physics, robotics, engineering and other areas recognized as having clear practical value. One sure way to ensure that the United States falls behind the rest of the world in these areas is to continue to degrade public universities. Being competitive in these areas does seem to be a public good, although it is obviously specific individuals who benefit the most.
Third, large public universities draw some of the best and brightest people from around the world. Many of these people stay in the United States and contribute a great deal—thus adding to the public good (while obviously benefiting themselves). Even those who return home are influenced by the United States—they learn English (if they do not already know it), they are exposed to American culture, they make friends with Americans and often develop a fondness for their school and the country. While these factors are hard to quantify, they do serve as advantage to the United States in economic, scientific, diplomatic and defense terms.
Fourth, having what was once the best public higher education system in the world gave the country considerable prestige and influence. While prestige is difficult to quantify, it certainly matters—humans are very much influenced by status. This can be regarded as a public good.
Fifth, there are the obvious economic advantages of a strong public higher education system. College educated citizens make more money and thus pay more taxes—thus contributing to the public good. While having a job is certainly a private good, there is also a considerable amount of public good. Businesses need employees and people need doctors, lawyers, engineers, psychiatrists, pilots, petroleum engineers, computer programmers, officers, and so on. As such, it would seem that the public university does not just serve the private good but the public good.
If this argument has merit, it would seem that the degrading of public higher education is damaging the public good and harming the country. As such, this needs to be reversed before the United States falls even more behind the competition.