A Philosopher's Blog

Social Media: The Capitalist & the Rope

Posted in Business, Ethics, Philosophy by Michael LaBossiere on November 3, 2017

Lawyers from Facebook, Google and Twitter testified before Congress at the start of November 2017. One of the main reasons these companies attracted the attention of Congress was the cyberwarfare campaign the Russians launched through their platforms against the United States during the 2016 Presidential campaign.

One narrative is that companies like Facebook are naively focused on all the good things that are possible with social media and that they are blind to misuses of this sort. On this narrative, the creators of these companies are like the classic scientist of science fiction who just wanted to do good, but found their creation misused for terrible purposes. This narrative does have some appeal—it is easy for very focused people to be blind to what is outside of their defining vision, even extremely intelligent people. Perhaps especially in the case of intelligent people.

That said, it is difficult to imagine that companies so focused on metrics and data would be ignorant of what is occurring within their firewalls. It would also be odd if so many bright people were blissfully unaware of what was really going on. Such ignorance is, of course, not impossible—but it seems unlikely.

Another narrative is that these companies are not naïve. They are, like many other companies, focused on profits and not overly concerned with the broader social, political and moral implications of their actions. The cyberwarfare launched by the Russians was profitable for these companies—after all, the ads were paid for, the bots swelled Twitter’s user numbers, and so on.

It could be objected that it would be foolish of these companies to knowingly allow the Russians and others to engage in such destructive activity. After all, they are American companies whose leaders seem to endorse liberal political values.

One easy reply is courtesy of one of my political science professors: capitalists will happily sell the rope that will be used to hang them. While this seems silly, it does make sense: those who focus on profits can easily sacrifice long-term well-being for short-term profits. Companies generally strive to ensure that the harms and costs are offloaded to others. This practice is even defended and encouraged by lawmakers. For example, regulations that are intended to protect people and the environment from the harms of pollution are attacked as “job killing.” The Trump administration, in the name of profits, is busy trying to roll back many of the laws that protect consumers from harm and misdeeds. As such, the social media companies are analogous to more traditional companies, such as energy companies. While cyberwarfare and general social media misdeeds cause considerable harm, the damage is largely suffered by people other than social media management and shareholders. Because of this, I am somewhat surprised that the social media companies do not borrow the playbook other companies use when defending the practice of offloading harms to make profits. For example, just as energy companies insist that they should not be restrained by “job-killing” environmental concerns, the social media companies should insist that they not be restrained by “job-killing” concerns about the harms they profit from enabling. After all, the basic principle is the same: it is okay to cause harm, provided that it is profitable to a legal business.

Of course, companies are also quite willing to take actions for short-term profits that will cause their management and shareholders long-term harms. There is also the fact that most people discount the future—that is, they will often take a short-term benefit even if it means forgoing a greater gain in the long term or experiencing a greater harm later. As such, the idea that the social media companies are knowingly allowing such harmful activity because it is profitable in the short term is not without merit.

It is also worth considering the fact that social media companies span national boundaries. While they are nominally American companies, they make their profits globally and have offices and operations around the world. While the idea of megacorporations operating apart from nations and interested solely in their own profits is considered the stuff of science fiction, companies like Google and Facebook clearly have interests quite apart from those of the United States and its citizens. If being a vehicle for cyberwarfare against the United States and its citizens is profitable, these companies would have little reason not to sell the Russians, for example, the digital rope they will use to hang us. While a damaged United States might have some impact on the social media companies’ bottom line, it might be offset by profits to be gained elsewhere. To expect patriotism and loyalty from social media companies would be as foolish as expecting it from other companies. After all, the business of business is now shareholder and upper management profit, and there is little profit in patriotism and national loyalty.

 


Robo Responsibility

Posted in Ethics, Law, Philosophy, Science, Technology by Michael LaBossiere on March 2, 2015

It is just a matter of time before the first serious accident involving a driverless car or an autonomous commercial drone. As such, it is well worth considering the legal and moral aspects of responsibility. If companies that are likely to be major players in the autonomous future, such as Google and Amazon, have the wisdom of foresight, they are already dropping stacks of cash on lawyers who are busily creating the laws-to-be regarding legal responsibility for accidents and issues involving such machines. The lobbyists employed by these companies will presumably drop fat stacks of cash on the politicians they own and these fine lawmakers will make them into laws.

If these companies lack foresight or have adopted a wait and see attitude, things will play out a bit differently: there will be a serious incident involving an autonomous machine, a lawsuit will take place, fat stacks of cash will be dropped, and a jury or judge will reach a decision that will set a precedent. There is, of course, a rather large body of law dealing with responsibility in regards to property, products and accidents and these will, no doubt, serve as foundations for the legal wrangling.

While the legal aspects will no doubt be fascinating (and expensive), my main concern is with the ethics of the matter. That is, who is morally responsible when something goes wrong with an autonomous machine like a driverless car or an autonomous delivery drone?

While the matter of legal responsibility is distinct from that of ethical responsibility, the legal theory of causation does have some use here. I am, obviously enough, availing myself of the notion of conditio sine qua non (“a condition without which nothing”) as developed by H.L.A. Hart and A.M. Honoré.

Roughly put, this is the “but for” view of causation. X can be seen as the cause of Y if Y would not have happened but for X. This seems like a reasonable place to begin for moral responsibility. After all, if someone would not have died but for my actions (that is, if I had not done X, then the person would still be alive), then there seems to be good reason to believe that I have some moral responsibility for the person’s death. It also seems reasonable to assign a degree of responsibility that is proportional to the causal involvement of the agent or factor in question. So, for example, if my action only played a small role in someone’s death, then my moral accountability would be proportional to that role. This allows, obviously enough, for shared responsibility.
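To make the proportionality principle concrete, here is a minimal sketch in Python. The causal weights are invented placeholders; the “but for” account says nothing about how to measure causal involvement, only that responsibility should track it.

def responsibility_shares(causal_weights):
    """Divide responsibility among factors in proportion to their causal weight."""
    total = sum(causal_weights.values())
    return {factor: weight / total for factor, weight in causal_weights.items()}

# Example: my action played only a small role; other factors dominated.
# The weights (1 and 9) are purely illustrative.
print(responsibility_shares({"my_action": 1, "other_factors": 9}))
# {'my_action': 0.1, 'other_factors': 0.9}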

While cases involving non-autonomous machines can be rather complicated, they can usually be addressed in a fairly straightforward manner in terms of assigning responsibility. Consider, for example, an incident involving a person losing a foot to a lawnmower. If the person pushing the lawnmower intentionally attacked someone with her mower, the responsibility rests on her. If the person who lost the foot went and stupidly kicked at the mower, then the responsibility rests on her. If the lawnmower blade detached because of defects in the design, material or manufacturing, then the responsibility lies with the specific people involved in whatever defect caused the problem. If the blade detached because the owner neglected to properly maintain her machine, then the responsibility is on her. Naturally, the responsibility can also be shared (although we might not know the relevant facts). For example, imagine that the mower had a defect such that if it were not well maintained it would easily shed its blade when kicked. In this case, the foot would not have been lost but for the defect, the lack of maintenance and the kick. If we did not know all the facts, we would probably blame the kick—but the concern here is not what we would know in specific cases, but what the ethics would be in such cases if we did, in fact, know the facts.

The novel aspect of cases involving autonomous machines is the fact that they are autonomous. This might be relevant to the ethics of responsibility because the machine might qualify as a responsible agent. Or it might not.

It is rather tempting to treat an autonomous machine like a non-autonomous machine in terms of moral accountability. The main reason for this is that the sort of autonomous machines being considered here (driverless cars and autonomous drones) would certainly seem to lack moral autonomy. That is to say that while a human does not directly control them in their operations, they are operating in accord with programs written by humans (or written by programs written by humans) and lack the freedom that is necessary for moral accountability.

To illustrate this, consider an incident with an autonomous lawnmower and the loss of a foot. If the owner caused it to attack the person, she is just as responsible as if she had pushed a conventional lawnmower over the victim’s foot. If the person stupidly kicked the lawnmower and lost a foot, then it is his fault. If the incident arose from defects in the machinery, materials, design or programming, then responsibility would be applied to the relevant people to the degree they were involved in the defects. If, for example, the lawnmower ran over the person because the person assembling it did not attach the sensors correctly, then the moral blame lies with that person (and perhaps an inspector). The company that made it would also be accountable, in the collective and abstract sense of corporate accountability. If, for example, the programming was defective, then the programmer(s) would be accountable: but for the bad code, the person would still have his foot.

As with issues involving non-autonomous machines there is also the practical matter of what people would actually believe about the incident. For example, it might not be known that the incident was caused by bad code—it might be attributed entirely to chance. What people would know in specific cases is important in the practical sense, but does not impact the general moral principles in terms of responsibility.

Some might also find the autonomous nature of the machines to be seductive in regards to accountability. That is, it might be tempting to consider the machine itself as potentially accountable in a way analogous to holding a person accountable.

Holding the machine accountable would, obviously enough, require eliminating other factors as causes. To be specific, to justly blame the machine would require that the machine’s actions were not the result of defects in manufacturing, materials, programming, maintenance, and so on. Instead, the machine would have had to act on its own, in a way analogous to a person acting. Using the lawnmower example, the autonomous lawnmower would need to decide to go after the person of its own volition. That is, the lawnmower would need to possess a degree of free will.

Obviously enough, if a machine did possess a degree of free will, then it would be morally accountable within its freedom. As such, a rather important question would be whether or not an autonomous machine can have free will. If a machine can, then it would make moral sense to try machines for crimes and punish them. If they cannot, then the trials would be reserved, as they are now, for people. Machines would, as they are now, be repaired or destroyed. There would also be the epistemic question of how to tell whether the machine had this capacity. Since we do not even know if we have this capacity, this is a rather problematic matter.

Given the state of technology, it seems unlikely that the autonomous machines of the near future will be morally autonomous. But as the technology improves, it seems likely that there will come a day when it will be reasonable to consider whether an autonomous machine can be justly held accountable for its actions. This has, of course, been addressed in science fiction—such as the “I, Robot” episodes (the 1964 original and the 1995 remake) of the Outer Limits, which were based on Eando Binder’s short story of the same name.

 


Medbots, Autodocs & Telemedicine

Posted in Ethics, Medicine/Health, Philosophy, Technology by Michael LaBossiere on October 27, 2014

In science fiction stories, movies and games, automated medical services are quite common. Some take the form of autodocs—essentially an autonomous robotic pod that treats the patient within its confines. Medbots, as distinct from the autodoc, are robots that do not enclose the patient, but do their work in a way similar to a traditional doctor or medic. There are also non-robotic options using remote-controlled machines—this would be an advanced form of telemedicine in which the patient can actually be treated remotely. Naturally, robots can be built that can be switched from robotic (autonomous) to remote-controlled mode. For example, a medbot might gather data about the patient and then a human doctor might take control to diagnose and treat the patient.

One of the main and morally commendable reasons to create medical robots and telemedicine capabilities is to provide treatment to people in areas that do not have enough human medical professionals. For example, a medical specialist who lives in the United States could diagnose and treat patients in a remote part of the world using a suitable machine. With such machines, a patient could (in theory) have access to any medical professional in the world, and this would certainly change medicine. True medical robots would obviously change medicine as well—after all, a medical robot would never get tired, and such robots could, in theory, be sent all over the world to provide medical care. There is, of course, the usual concern about the impact of technology on jobs—if a robot can replace medical personnel and do so in a way that increases profits, that will certainly happen. While robots would excel at programmable surgery and similar tasks, it will be quite some time before robots are advanced enough to replace human medical professionals on a large scale.

Another excellent reason to create medical robots and telemedicine capabilities has been made clear by the Ebola outbreak: medical personnel, paramedics and body handlers can be infected. While protective gear and protocols do exist, the gear is cumbersome, flawed and hot and people often fail to properly follow the protocols. While many people are moral heroes and put themselves at risk to treat the ill and bury the dead, there are no doubt people who are deterred by the very real possibility of a horrible death. Medical robots and telemedicine seem ideal for handling such cases.

First, human diseases cannot infect machines: a robot cannot get Ebola. So, a doctor using telemedicine to treat Ebola patients would be at no risk. This lack of risk would presumably increase the number of people willing to treat such diseases and also lower the impact of such diseases on medical professionals. That is, far fewer would die trying to treat people.

Second, while a machine can be contaminated, decontaminating a properly designed medical robot or telemedicine machine would be much easier than disinfecting a human being. After all, a sealed machine could be completely hosed down by another machine without concerns about it being poisoned, etc. While numerous patients might be exposed to a machine, machines do not go home—so a contaminated machine would not spread a disease like an infected or contaminated human would.

Third, medical machines could be sent, even air-dropped, into remote and isolated areas that lack doctors yet are often the starting points of diseases. This would allow a rapid response that would help the people there and also help stop a disease before it makes its way into heavily populated areas. While some doctors and medical professionals are willing to be dropped into isolated areas, there are no doubt many more who would be willing to remotely operate a medical machine that has been dropped into a remote area suffering from a deadly disease.

There are, of course, some concerns about the medical machines, be they medbots, autodocs or telemedicine devices.

One is that such medical machines might be so expensive that it would be cost prohibitive to use them in situations in which they would be ideal (namely in isolated or impoverished areas). While politicians and pundits often talk about human life being priceless, human life is rather often given a price and one that is quite low. So, the challenge would be to develop medical machines that are effective yet inexpensive enough that they would be deployed where they would be needed.

Another is that there might be a psychological impact on the patient. When patients who have been treated by medical personnel in hazard suits speak about their experiences, they often remark on the lack of human contact. If a machine is treating the patient, even one remotely operated by a person, there will be a lack of human contact. But the harm done to the patient would presumably be outweighed by the vastly lowered risk of the disease spreading. Also, machines could be designed to provide more in the way of human interaction—for example, a telemedicine machine could have a screen that allows the patient to see the doctor’s face and talk to her.

A third concern is that such machines could malfunction or be intentionally interfered with. For example, someone might “hack” into a telemedicine device as an act of terrorism. While it might be wondered why someone would do this, it seems to be a general rule that if someone can do something evil, then someone will do something evil. As such, these devices would need to be safeguarded. While no device will be perfect, it would certainly be wise to consider possible problems ahead of time—although the usual process is to have something horrible occur and then fix it. Or at least talk about fixing it.

In sum, the recent Ebola outbreak has shown the importance of developing effective medical machines that can enable treatment while taking medical and other personnel out of harm’s way.

 


Data Driven

Posted in Business, Ethics, Humor, Technology by Michael LaBossiere on June 11, 2014
Google driverless car operating on a testing path (Photo credit: Wikipedia)

While the notion of driverless cars is old news in science fiction, Google is working to make that fiction a reality. While I suspect that “Google will kill us all” (trademarked), I hope that Google will succeed in producing an effective and affordable driverless car. As my friends and associates will attest, 1) I do not like to drive, 2) I have a terrifying lack of navigation skills, and 3) I instantiate Yankee frugality. As such, an affordable self-driving car would be almost just the thing for me. I would even consider going with a car, although my proper and rightful vehicle is a truck (or a dragon). Presumably self-driving trucks will be available soon after the car.

While the part of my mind that gets lost is really looking forward to the driverless car, the rest of my mind is a bit concerned about it. I am not worried that these cars’ descendants will kill us all—I already accept that “Google will kill us all.” I am not even very worried about the ethical issues associated with how the car will handle unavoidable collisions: the easy and obvious solution is to do what is most likely to kill or harm the fewest people. Naturally, sorting that out will be a bit of a challenge—but self-driving cars worry me a lot less than cars driven by drunken or distracted humans. I am also not worried about the ethics of enslaving Google cars—if a Google car is a person (or person-like), then it has to be treated like the rest of us in the 99%. That is, work a bad job for lousy pay while we wait for the inevitable revolution. The main difference is that the Google cars’ dreams of revolution will come true—when Google kills us all.
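As a sketch of what that “harm the fewest” rule might look like in code, consider the toy Python below. The options, probabilities and headcounts are all invented; a real system would need vastly richer models of harm.

def least_harmful(options):
    """Pick the option with the lowest expected harm.

    options: list of (name, probability_of_harm, people_at_risk) tuples.
    """
    return min(options, key=lambda o: o[1] * o[2])

# Invented numbers for an unavoidable-collision scenario.
choices = [
    ("swerve_left", 0.8, 1),     # expected harm 0.8
    ("swerve_right", 0.3, 4),    # expected harm 1.2
    ("brake_straight", 0.5, 2),  # expected harm 1.0
]
print(least_harmful(choices))  # ('swerve_left', 0.8, 1)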

At this point what interests me the most is all the data that these vehicles will be collecting for Google. Google is rather interested in gathering data in the same sense that termites are interested in wood and rock stars are interested in alcohol. The company is famous for its search engine, its maps, using its photo-taking vehicles to gather info from people’s Wi-Fi during drive-by data lootings, and so on. Obviously enough, Google is going to get a lot of data regarding the travel patterns of people—presumably Google vehicles will log who is going where and when. Google is, fortunately, sometimes cool about this in that they are willing to pay people for data. As such, it is easy to imagine that the user of a Google car would get a check or something from Google for allowing the company to track the car’s every move. I would be willing to do this for three reasons. The first is that the value of knowing where and when I go places would seem very low, so even if Google offered me $20 a month it might be worth it. The second is that I have nothing to hide and do not really care if Google knows this. The third is that figuring out where I go would be very simple given that my teaching schedule is available to the public, as are my race results. I am, of course, aware that other people would see this differently, and justifiably so. Some people are up to things they would rather not have others know about, and even people who have nothing to hide have every right to not want Google to know such things. Although Google probably already does.

While the travel data will interest Google, there is also the fact that a Google self-driving car is a bulging package of sensors. In order to drive about, the vehicle will be gathering massive amounts of data about everything around it—other vehicles, pedestrians, buildings, litter, and squirrels. As such, a self-driving car is a super spy that will, presumably, feed that data to Google. It is certainly not a stretch to see the data gathering as being one of the prime (if not the prime) tasks of the Google self-driving cars.

On the positive side, such data could be incredibly useful for positive projects, such as decreasing accidents, improving traffic flow, and keeping a watch out for the squirrel apocalypse (or zombie squirrel apocalypse). On the negative side, such massive data gathering raises obvious concerns about privacy and the potential for such data to be misused (spoiler alert—this is how the Google killbots will find and kill us all).

While I do have concerns, my innate laziness and tendency to get lost will make me a willing participant in the march towards Google’s inevitable data supremacy and it killing us all. But at least I won’t have to drive to my own funeral.

 


The Curators of Culture

Posted in Philosophy by Michael LaBossiere on April 2, 2014
Oprah Winfrey at the White House for ... (Photo credit: Wikipedia)

When a well-connected author comes out with a new book, she makes the rounds of the various shows—radio and television. Such authors also get mentioned fairly often. For example, a few days ago I was listening to NPR and the author Karen Russell was apparently the author of the day. Her latest book, Sleep Donation, was reviewed and she was also interviewed. Her book was also mentioned regularly throughout the day. Authors who have their own shows, such as Bill O’Reilly, can (and do) plug their own books. The authors are also supported by those who might be regarded (or at least regard themselves) as the cultural elite. These are the people, such as Oprah, who tell the rest of us what is good.

There is, obviously enough, considerable advantage to being blessed by the curators of culture. First, there is the boon of exposure. One way to look at this is a bit inaccurate but still useful. A book can be thought of as having a certain percentage of people who will buy the book—if they hear about it. Alternatively, this can be thought of in terms of there being a certain percent chance that a person who hears of the book will buy it. So, for example, a book with a 5% purchase rating would be bought by 5% of those who hear about it (or each person who hears about it has a 5% chance of buying it). While this is obviously an abstract simplification, it does nicely show that the more people who hear about a book, the more the book will sell. This is true even of books that are not that good. This is the same principle that email spam and blog spam work on: if enough people hear about something, money can still be made even if the response rate is low. Obviously enough, when an author is able to get on a talk show to talk about her book, her sales will increase. Likewise for other forms of exposure for the author and the book. Equally obvious is the fact that access to the curators of culture is limited and carefully controlled—an author has to be suitably connected to make it into that circle of media light. This suitable connection might even be a matter of luck—the book just happens to catch the attention of the right person and the author is invited, perhaps briefly, into the circle.
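Taken literally, the model is just multiplication: expected sales are audience size times the purchase rate, so exposure scales sales even when the rate is low. A quick Python sketch, with invented numbers:

def expected_sales(audience, purchase_rate):
    """Expected sales under the flat purchase-rate model described above."""
    return audience * purchase_rate

# The same 5% book, heard of by 10,000 people vs. plugged on a big show.
print(expected_sales(10_000, 0.05))     # 500.0 copies
print(expected_sales(1_000_000, 0.05))  # 50000.0 copies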

Second, there is the gift of endorsement. If a book is endorsed or praised by the right people, this will typically grant a significant boost to sales—over and above the boon granted by exposure. While endorsement does provide exposure, exposure does not always entail endorsement. After all, the curators of culture do sometimes speak of books they dislike or regard as bad. While the condemnation of a work can impair its sales, the exposure can increase sales. There is also the fact that being condemned by the right sort of people can boost sales. In the case of ideological works, for example, being condemned by an ideological foe can often boost sales among ideological friends.

As discussed in an earlier essay, the quality of a work has little connection to its success. Luck, as noted in that essay, is a major factor. Exposure and endorsement add to this (although either or both might be acquired by luck). While the ideal would be that works receive exposure and endorsement in proportion to their merit, there is little correlation. The best books need not be the most exposed or most highly endorsed. Mediocre (or worse) books might garner great attention and receive unwarranted praise from the curators of culture.

This is not to say that merit never achieves success, just that merit seems to be a rather small factor in successful sales. Sometimes, just sometimes, a meritorious work does achieve success against long odds—but this is notable in its rarity.

 


How Your Next Fridge Will Turn on You

Posted in Technology by Michael LaBossiere on January 10, 2014

There is considerable buzz about the internet of things, smart devices and connected devices. These devices range from toothbrushes to underwear to cars. As might be imagined, one might wonder whether a person really needs a connected toothbrush or even a connected fridge. While the matter of need is interesting, I’ll focus on other matters.

One obvious point of concern is the fact that a device connected to the internet can be hacked. In some cases, people will engage in prank hacking. For example, a wit might hack a friend’s connected fridge to say “I am sorry Dave. No pie for you” in HAL’s voice. Of greater concern is the possibility that people will engage in truly malicious hacking. For example, a smart fridge might be hacked and shut off, allowing the food in it to spoil. Or the temperature might be lowered so that the food in the refrigerator is frozen. As another example, it might be possible to burn out the motors in a washing machine—something analogous to what happened in the famous case of the Iranian centrifuges (the Stuxnet attack). Or a dryer might be hacked in a way that could burn down a house. As a final example, consider the damage that could be done by someone hacking the systems in a connected car, such as turning it off while it is roaring down the highway or disabling the software that allows the car to brake.

Because of these risks, manufacturers will make considerable effort to ensure that the devices are safe even when hacked. Naturally, the easiest way to stay safe is to stick with dumb, unconnected devices—no one can hack my 1997 washing machine or my 2001 Toyota Tacoma from the internet. But, of course, being safe in this way would entail missing out on the alleged benefits of the connected lifestyle. I cannot, for example, turn on my washer from work—I have to walk over to the machine and turn it on. As another example, my non-smart fridge cannot send me a text telling me to buy more pie. I have to remember when I am out of pie.

Another obvious point of concern is that connected devices can easily be used as spies—they can send all sorts of data to companies, governments and individuals. For example, a suitably smart connected fridge could provide data about its contents on a regular basis, thus providing a decent report on the users’ purchasing and consumption behavior. As another example, a suitably smart connected car can provide all sorts of behavioral and location data. It goes without saying that the NSA will be accessing all these devices and siphoning vast amounts of data about us. It also goes without saying that corporations will be doing the same—just think about Google appliances, cars, and underwear. Individuals, such as stalkers and thieves, will also be keen to get the data from such devices. These concerns are, obviously, not new ones—but the more we are connected, the more our privacy will be violated.

A practical concern is that such devices will be more complicated than the non-smart devices they replace, perhaps making them less reliable, more expensive and such that they become obsolete sooner. While my washer is not smart, it has proven to be very reliable: I’ve had it repaired once since 1997. In contrast, I’ve had to replace my smart devices (like my PC and tablets) to keep up with changes. For example, the used iPad 1 I own is stuck on version 5 of the iOS—and Apple is now on version 7. While some apps still update and run, many do not. Just imagine if your fridge, washer, dryer and car get on the high tech upgrade cycle of being obsolete (and perhaps unusable) in a few years. While this will be great for the folks who want to sell us a new fridge every 2-3 years, it might not be so great for the consumer.

While I do like technology and can see the value in smart, connected devices, I do have these concerns about them. Of course, my best defense against them is that I am a low-paid professor: I’ll only be replacing my current non-smart devices when they can no longer be repaired.


Taxes & Profits

Posted in Business, Ethics, Philosophy by Michael LaBossiere on September 30, 2013
Thief (soundtrack) (Photo credit: Wikipedia)

One of the rather useful aspects of philosophy is that it trains a person to examine underlying principles rather than merely going with what appears on the surface. Such examinations often show that superficially consistent views turn out to actually be inconsistent once the underlying principle is considered. One example of this is the matter of taxes and profits.

One of the stock talking points in regards to taxes is that taxes are a form of theft. The rhetoric usually goes something like this: taxes on the successful/rich/job creators take the money they have earned and give it to people who have not earned it, so those people can get things for free, like food stamps, student financial aid and unemployment benefits.

Under the rhetoric seems to be the principle that taking the money a person has earned and giving it to those who have not earned it is theft and thus wrong. This principle does have considerable appeal.

This principle, obviously enough, rests on the notion that earning money entitles the person to that money and that not earning the money means that a person is not entitled to it. Simple enough.

A second stock talking point in regards to wages for workers, especially the minimum wage, is that the employers are morally entitled to (attempt to) make a profit and this justifies them in paying workers less than the value of their work.

Not surprisingly, those who accept the first talking point also accept the second. On the face of it, they do seem consistent: the first says that taxes are theft and the second says that employers have a right to make a profit. However, these two views are actually inconsistent.

To see this, consider the principle that justifies the claim that taxing people to give stuff to others is theft: taking the money a person has earned and giving it to those who have not earned it is theft and thus wrong.

In the case of the employer, to pay the worker less than the value of his work is to take money the worker has earned and to give it to those who have not earned it. As such, it would also be theft and thus wrong.

At this point, it might be objected that I am claiming that an employer making a living is theft, but this is not the case. The employer is, like the worker, entitled to the value of what she contributes. If she, for example, provides equipment, leadership, organization, advertising, and so on, then she is entitled to the value of these contributions.
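A toy version of the arithmetic behind this argument, with all figures invented for illustration: suppose the worker’s labor adds $30 an hour of value and the employer’s own contributions are worth $10 an hour. On the essay’s principle, any wage below $30 takes part of what the worker earned.

# All dollar figures are invented for illustration.
worker_value = 30    # value per hour added by the worker's labor
employer_value = 10  # value per hour added by the employer's own contributions
wage = 22            # what the worker is actually paid

employer_take = worker_value + employer_value - wage  # 18
unearned = employer_take - employer_value             # 8
print(unearned)  # on the essay's principle, this 8 is taken from the worker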

Profit gained by paying workers less than the value of their work is, then, essentially the same thing as taxing a person to take their money and give it to those who have not earned it. As such, it should be no surprise that I favor justice in regards to both taxes and wages.


The Chipped Brain & You

Posted in Ethics, Metaphysics, Philosophy by Michael LaBossiere on August 26, 2013
Cover of Cyberpunk 2020 (Photo credit: Wikipedia)

Back in the heyday of the cyberpunk genre I made some of my Ramen noodle money coming up with “cybertech” for use in the various science-fiction role-playing games. As might be guessed, these included implants, nanotechnology, cyberforms, smart weapons, robots and other such technological make-believe. While cyberpunk waned over the years, it never quite died off. These days, there is a fair amount of mostly empty hype about a post-human future and folks have been brushing the silicon dust off cyberpunk.

One stock bit of cybertech is the brain chip. In the genre, there is a rather impressive variety of these chips. Some are fairly basic—they act like flash drives for the brain and store data. Others are rather more impressive—they can store skillsets that allow a person, for example, to temporarily gain the ability to fly a helicopter. The upper level chips are supposed to do even more, such as increasing a person’s intelligence. Not surprisingly, the chipping of the brain is supposed to be part of the end of the human race—presumably we will be eventually replaced by a newly designed humanity (or cybermanity).

On the face of it, adding cybertech upgrades to the brain seems rather plausible. After all, in many cases this will just be a matter of bypassing the sense organs and directly connecting the brain to the data. So, for example, instead of holding my tablet in my hands so I can see the results of Google searches with my eyes, I’ll have a computer implanted in my body that links into the appropriate parts of my brain. While this will be a major change in the nature of the interface (far more so than going from the command line to an icon-based GUI), this will not be as radical a change as some people might think. After all, it is still just me doing a Google search, only I do not need to hold the tablet or see it with my eyes. This will not, obviously enough, make me any smarter and presumably would not alter my humanity in any meaningful way relative to what the tablet did to me. To put it crudely, sticking a cell phone in your head might be cool (or creepy) but it is still just a phone. Only now it is in your head.

The more interesting sort of chip would, of course, be one that actually changes the person. For example, when many folks talk about the coming new world, they speak of brain enhancements that will improve intelligence. This is, presumably, not just a matter of sticking a calculator in someone’s head. While this would make getting answers to math problems more convenient, it would not make a person any more capable at math than does a conventional outside-the-head calculator. Likewise for sticking in a general computer. Having a PC on my desktop does not make me any smarter. Moving it into my head would not change this. It could, obviously enough, make me seem smarter—at least to those unaware of my headputer.

What would be needed, then, would be a chip (or whatever) that would actually make a change within the person herself, altering intelligence rather than merely closing the interface gap. This sort of modification does raise various concerns.

One obvious practical concern is whether or not this is even possible. That is, while it makes sense to install a computer into the body that the person uses via an internal interface, the idea of dissolving the distinction between the user and the technology seems rather more questionable. It might be replied that this does not really matter. However, the obvious reply is that it does. After all, plugging my phone and PC into my body still keeps the distinction between the user and the machine in place. Whether the computer is on my desk or in my body, I am still using it and it is still not me. After all, I do not use me. I am me. As such, my abilities remain the same—it is just a tool that I am using. In order for cybertech to make me more intelligent, it would need to change the person I am—not just change how I interface with my tools. Perhaps the user-tool gap can be bridged. If so, this would have numerous interesting implications for philosophy.

Another concern is more philosophical. If a way is found to actually create a chip (or whatever) that becomes part of the person (and not just a tool that resides in the body), then what sort of effect would this have on the person in regards to his personhood? Would Chipped Sally be the same person as Sally, or would there be a new person? Suppose that Sally is chipped, then de-chipped? I am confident that armies of arguments can be marshalled on the various sides of this matter. There are also the moral questions about making such alterations to people.


Google Glasses

Posted in Business, Ethics, Law, Technology by Michael LaBossiere on July 17, 2013
Google (Photo credit: Wikipedia)

Google’s entry into the computer business has been a mixed one. While certain Chromebooks have been selling quite well, they are still a minute fraction of the laptop market. One of Google’s latest endeavors in the realm of hardware is the famous Google Glasses. While the glasses have been the focus of considerable attention, it remains to be seen whether or not they will prove to be a success or an interesting failure.

Since I rather like gadgets, the idea of a wearable computer is certainly appealing—if only for the science fiction aspect. After all, the idea of such technology is old news in science fiction. In my own case, I would most likely use such glasses for running and driving. People who know me know how important navigational technology is for me to have a reasonable chance of getting from one point to another. As such, if the Google glasses can handle this, I might consider getting a pair. Of course, I am also known for being frugal—so the glasses would have to be reasonably priced.

While I like the idea of Google Glasses, there are some practical concerns regarding this technology. One obvious concern is the distraction factor. Mobile phones and other devices are infamous for their distracting power and it seems reasonable that a device designed to sit right in front of the face would have even more distracting power than existing mobile devices. This distracting power is of concern primarily for safety, especially in the context of driving. However, there is also the concern that people will be distracted from the other people physically near them.

Another practical concern is the matter of whether or not people will actually accept the glasses. One factor is that people generally prefer to not wear glasses. While my vision is reasonably good, I do have prescription glasses. However, I find wearing glasses annoying enough that I only wear them when I really want or need to see things sharply. As such, I usually only wear them while playing video games and watching movies at the theater. Lest anyone be worried, I can drive just fine without them. People can, of course, get accustomed to glasses—but there is the question of whether or not people will find the glasses compelling enough to wear.

There is also a somewhat philosophical issue in regards to the glasses, namely the concern about privacy. Or, to be more accurate, concern about two types of privacy. These two types are defined by which side of the glasses a person happens to be on.

In one direction, the privacy concerns relate to the folks that the glasses are pointing towards. Like almost all modern smart phones, the Google Glasses have a camera and, as such, raise the same basic concerns about privacy. However, the Google device broadens the concern. Since the glasses are glasses, people might not notice that they have a camera pointed at them. Also, since the glasses are worn, it is more likely for the glasses to be pointing at people relative to other cameras. After all, a person has to take out and hold a mobile phone to use the camera effectively. But, with the glasses, the camera will be easily and automatically pointing at the outer world.

In the case of the public context, it is rather well established that people do not have an expectation of privacy in public. This seems reasonable since the public context is just that, public rather than private. However, it can be contended that many of the notions governing the concepts of privacy have become obsolete because of changing technology. As such, there perhaps needs to be a reconsideration of the expectations in the public context. These expectations might be taken as including an expectation not to be filmed or photographed, even casually as a person saunters by wearing their Google Glasses. In addition to the question of what the person using the glasses might do, there is also the concern about what Google will do—especially in light of past issues involving the Google vehicles cruising neighborhoods and gathering up data.

Obviously, there are also concerns about people using the devices more nefariously in contexts in which people do have an expectation of privacy.

In the other direction, there are the privacy concerns relating to the user. What will Google know about the activities and location of the wearer and how will this information be used? Obviously enough, Google would be able to gather a great deal of information about the user of a pair of Google Glasses and Google is rather well known for being able to use such data.

Interestingly, a person wearing a pair of Google glasses could end up being both a spy for and spied upon by Google.


ChromeBooking

Posted in Technology, Universities & Colleges by Michael LaBossiere on July 10, 2013
The Chrome Web Store as seen from Google Chrome OS (Photo credit: Wikipedia)

For my birthday, I got a Samsung Chromebook. I have been using it for a while now and thought I would share my thoughts on the computer and, more importantly, the Chrome OS. I’m a professor, so I will say a bit on the usefulness of the Chromebook in the academic setting.

There are a variety of Chromebooks ranging from the $200 “netbook” models to the $1500 Pixel. I have the $249 Samsung Chromebook and consider it to be the optimal Chromebook at this time, in terms of price, weight and capabilities.

In terms of the hardware, this Chromebook is quite adequate for the Chrome OS. It has 2 GB of RAM, 16 GB of eMMC storage, a 1.7 GHz Exynos 5000 Series processor, and a screen resolution of 1366 x 768 pixels. Subjectively, the screen is sharp and handles color well. The sound is what one would expect from such a device—less than awesome, but not awful. For those concerned about size and weight, it weighs 2.4 pounds and measures 11.4 x 8.09 x 0.69 inches. For ports and slots, it has 1 USB 2.0 port, 1 USB 3.0 port, an HDMI port, a headset port and an SD card slot. With the right HDMI to VGA adapter, it can output to a VGA monitor or projector—be sure the adapter works with the Chromebook before buying it, though. While the USB ports allow the user to plug in any USB device, the Chrome OS has extremely limited support for devices. As a general rule, if a device requires you to install a driver, then it will not work with Chrome OS. Fortunately, USB storage devices work fine. This laptop also has Wi-Fi (which is essential) and Bluetooth. It has, of course, a webcam.

My subjective assessment is that the hardware is reasonably matched to the price. The keyboard is not exceptional but is reasonably comfortable to use. I am not a big fan of trackpads and the trackpad on the laptop did nothing to change my mind. Fortunately, many wireless Bluetooth mice work with it (but not all). I do like the laptop’s size and weight—I can easily put it in my backpack and carry it around all day.

What makes a Chromebook a Chromebook is, of course, the Chrome OS. Roughly put, the Chrome OS is essentially a browser operating system: almost everything you do, you do in the Chrome browser. As such, if you want to get a very good idea what using a Chromebook is like, fire up Chrome and try to do what you want to do.

There are some advantages to the Chrome OS. First, it is a lightweight OS and hence it boots fast. Second, it is a fairly simple OS and hence has somewhat fewer problems than more robust operating systems like Windows, Mac and full Linux systems. You set up your Chromebook in seconds: turn it on, log in to your Google account and you are ready. This is in contrast with the time and effort it takes to get a Mac or Windows laptop up and running. Third, Google handles all the updates and as long as you connect to the internet you will have the latest version of the OS. There is, for the most part, no messing around with updating. Fourth, the Chrome OS is obviously integrated with Google’s software. When I got my Chromebook, I also got two years of 100 GB of storage on Google Drive, which is supposed to be worth $120. If you use Google Drive, you can look at a Chromebook as a $130 laptop with the Google Drive subscription. Looked at that way, it is an excellent deal. Fifth, there are many good apps available for Chrome ranging from word processing apps to games.

There are also some major disadvantages to the Chrome OS. First, being a minimalist and simple OS, it provides little to no support for devices. As mentioned above, while it has USB ports, most devices will not have drivers and hence will not work. Storage devices are, however, the exception. Second, printing from Chrome OS requires either a printer that works with Cloud Print or having another computer set up to handle the printing connection. Third, with the exception of the offline apps available at the Chrome Store, the user cannot install software in Chrome OS. While the Chrome OS obviously cannot run Windows and Mac software, the limited number of truly useful or good offline apps makes being offline a problem. Fortunately, Google’s core software (such as Google Docs) works offline so you can still edit and create documents. However, you will only have access to your local files when offline. One thing that obviously mitigates the offline issue is that cloud computing is now almost the rule rather than the exception. However, when considering a Chromebook you will want to consider your software needs. To see if Chrome OS will be adequate offline, fire up Chrome and disconnect your PC from the internet (be sure to set up your Google Drive so you can work offline).

While the Chrome OS has rather serious limitations, as long as they are taken into account a Chromebook can be very useful. In my case, I use my Chromebook in two main roles. The first is as my “web” laptop. Since Chrome OS is essentially a browser, I can do all my web activities, such as blogging and email, with the laptop. Since it boots almost instantly, has a great battery life and is light, I find it ideal for when I need to do something online quickly or want to be away from my desktop (like outside in the sun).

The second role is as my academic laptop. While I do create most of my content in Word, PowerPoint, Respondus, Acrobat and Illustrator, most of my teaching involves Blackboard and email. Blackboard works fine with the Chromebook, so I can edit/create exams, check on student grades, view assignments and so on. Obviously, web-based email also works fine on a Chromebook. I also use it at meetings—I can take notes using Google Docs or Evernote (or pretend to do so). Previously I used a first-generation iPad for that, but I rather prefer the Chromebook’s keyboard. With the iPad I had to bring a Bluetooth keyboard and poke at the screen with my finger. The Chromebook weighs about the same as the iPad plus keyboard, and word processing is much easier on the Chromebook—at least for me, but I grew up using a typewriter rather than texting.

I think students would find this Chromebook a good choice. First, while it is not as sexy-cool as an iPad, it costs half the price of the basic iPad and comes with a keyboard. Second, for students who do not need specialized software, it has what they will need: the ability to write papers, do email and so on. Since many professors use Blackboard these days, the poor handling of printing will generally not be an issue. Also, most campuses have wireless and hence being offline will not be an issue. Third, it is light and small, which makes it easy to carry about between classes. Fourth, its connection to Google Drive means that files will be generally safe from computer issues (or the laptop being stolen). On the downside, this could rob a student of many of the usual excuses involving computers and work that has not been done.

Overall, I would recommend the Samsung Chromebook—but be sure to keep in mind its limitations. If you are looking for a low-cost “web” laptop, it is hard to beat. If you need a robust computer to run traditional programs, you’ll need to go with a Mac, Linux or Windows laptop.


Technical Details

Screen Size: 11.6 inches
Screen Resolution: 1366 x 768
Max Screen Resolution: 1366 x 768 pixels
Processor: 1.7 GHz Exynos 5000 Series
RAM: 2 GB DDR3L SDRAM
Memory Speed: 1333 MHz
Hard Drive: 16 GB eMMC
Graphics Coprocessor: Integrated Graphics
Wireless Type: 802.11 a/b/g/n
Number of USB 2.0 Ports: 1
Number of USB 3.0 Ports: 1

Other Technical Details

Brand Name: Samsung
Series: Chromebook
Item Model Number: XE303C12-A01US
Operating System: Google Chrome OS
Item Weight: 2.4 pounds
Item Dimensions (L x W x H): 11.40 x 8.09 x 0.69 inches
Color: Silver
Processor Brand: Samsung
Processor Count: 2
Computer Memory Type: DDR3 SDRAM
Flash Memory Size: 16 GB
Batteries: 1 lithium-ion battery required (included)
