A Philosopher's Blog

The Ethics of Stockpiling Vulnerabilities

Posted in Business, Ethics, Philosophy, Politics, Technology by Michael LaBossiere on May 17, 2017

In May of 2017 the WannaCry ransomware swept across the world, impacting thousands of computers. The attack affected hospitals, businesses, and universities, and the damage has yet to be fully calculated. While any such large-scale attack is a matter of concern, the WannaCry incident is especially interesting. This is because the foundation of the attack was stolen from the National Security Agency of the United States. This raises an important moral issue, namely whether states should stockpile knowledge of software vulnerabilities and the software to exploit them.

A stock argument for states maintaining such stockpiles is the same as the argument used to justify stockpiling weapons such as tanks and aircraft. The general idea is that such stockpiles are needed for national security: to protect and advance the interests of the state. In the case of exploiting vulnerabilities for spying, the security argument can be tweaked a bit by drawing an analogy to other methods of spying. As should be evident, to the degree that states have the right to stockpile physical weapons and engage in spying for their security, they also would seem to have the right to stockpile software weapons and knowledge of vulnerabilities.

The obvious moral counter argument can be built on utilitarian grounds: the harm done when such software and information is stolen and distributed exceeds the benefits accrued by states having such software and information. The WannaCry incident serves as an excellent example of this. While the NSA might have had a brief period of advantage when it had exclusive ownership of the software and information, the damage done by the ransomware to the world certainly exceeds this small, temporary advantage. Given the large-scale damage that can be done, it seems likely that the harm caused by stolen software and information will generally exceed the benefits to states. As such, stockpiling such software and knowledge of vulnerabilities is morally wrong.

This can be countered by arguing that states just need to secure their weaponized software and information. Just as a state is morally obligated to ensure that no one steals its missiles to use in criminal or terrorist endeavors, a state is obligated to ensure that its software and vulnerability information is not stolen. If a state can do this, then it would be just as morally acceptable for a state to have these cyberweapons as it would be for it to have conventional weapons.

The easy and obvious reply to this counter is to point out that there are relevant differences between conventional weapons and cyberweapons that make it very difficult to properly secure them from unauthorized use. One difference is that stealing software and information is generally much easier and safer than stealing traditional weapons. For example, a hacker can get into the NSA from anywhere in the world, but a person who wanted to steal a missile would typically need to break into and out of a military base. As such, securing cyberweapons can be more difficult than securing other weapons. Another difference is that almost everyone in the world has access to the deployment system for software weapons—a device connected to the internet. In contrast, someone who stole, for example, a missile would also need a launching platform. A third difference is that software weapons are generally easier to use than traditional weapons. Because of these factors, cyberweapons are far harder to secure and this makes their stockpiling very risky. As such, the potential for serious harm combined with the difficulty of securing such weapons would seem to make them morally unacceptable.

But, suppose that such weapons and vulnerability information could be securely stored—this would seem to answer the counter. However, it only addresses the stockpiling of weaponized software and does not justify stockpiling vulnerabilities. While adequate storage would prevent the theft of the software and the acquisition of vulnerability information from the secure storage, the vulnerability would remain to be exploited by others. While a state that has such vulnerability information would not be directly responsible for others finding the vulnerabilities, the state would still be responsible for knowingly allowing the vulnerability to remain, thus potentially putting the rest of the world at risk. In the case of serious vulnerabilities, the potential harm of allowing such vulnerabilities to remain unfixed would seem to exceed the advantages a state would gain in keeping the information to itself. As such, states should not stockpile knowledge of such critical vulnerabilities, but should inform the relevant companies.

The interconnected web of computers that forms the nervous system of the modern world is far too important to everyone to put it at risk for the relatively minor and short-term gains that could be had by states creating malware and stockpiling vulnerabilities. I would use an obvious analogy to the environment, but people are all too willing to inflict massive environmental damage for relatively small short-term gains. This, of course, suggests that the people running states might prove as wicked and unwise regarding the virtual environment as they are regarding the physical environment.


My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter

Cut Scenes

Posted in Aesthetics, Philosophy by Michael LaBossiere on January 25, 2016

While I have been playing video games since the digital dawn of gaming, it was not until I completed Halo 5 that I gave some philosophical consideration to video game cut scenes. For those not familiar with cut scenes, they are non-interactive movies within a game. They are used for a variety of purposes, such as providing backstory, showing the consequences of the player’s actions, or explaining how adversaries or challenges work.

The reason that Halo 5 motivated me to write about cut scenes is an unfortunate one: I believe that Halo 5 made poor use of cut scenes and will argue for this point as part of my sketch of cut scene theory. Some gamers, including director Guillermo del Toro and game designer Ken Levine, have spoken against the use of cut scenes. In support of their position, a fairly reasonable argument can be presented against cut scenes in games.

One fundamental difference between a game and a movie is the distinction between active and passive involvement. In the case of a typical movie, the audience merely experiences the movie as observers—they do not influence the outcome. In contrast, the players of a game experience the game as participants—they have a degree of control over the events. A cut scene, or in-game movie, changes the person from being a player to being an audience member. This is analogous to taking a person playing sports and putting her into the bleachers to be a mere spectator. The person is, literally, taken out of the game. While there are some who enjoy watching sports, the athlete is there to play and not to be part of the audience. Likewise, while watching a movie can be enjoyable, a gamer is there to game and not be an audience member. To borrow from Aristotle, games and movies each have their own proper pleasures and mixing them together can harm the achievement of this pleasure.

Aristotle, in the Poetics, is critical of the use of spectacle (such as what we would now call special effects) to produce the emotional effect proper to tragedy. He contends that this should be done by the plot. Though this is harder to do, the effect is better. In the case of a video game, the use of cinematics can be regarded as an inferior way of bringing about the intended experience of a game. The proper means of bringing about the effect should lie within the game itself—that is, what the player is actually playing and not merely observing as a passive spectator. As such, cut scenes should be absent from games. Or, at the very least, kept to a minimum.

One way to counter this argument is to draw an analogy to role-playing games such as D&D, Pathfinder and Call of Cthulhu. Such games typically begin with what is analogous to a game’s opening cinematic: the game master sets the stage for the adventure to follow. During the course of play, there are often important events that take considerable game world time but would be boring to actually play. For example, a stock phrase used by most game masters is “you journey for many days”, perhaps with some narrative about events that are relevant to the adventure, such as the party members (who are played by people who are friends in real life) becoming friends along the way. There are also other situations in which information needs to be conveyed or stories told that do not need to actually be played out because doing so would not be enjoyable or would be needlessly time consuming if done using game mechanics. A part of these games is shifting from active participant to briefly taking on the role of the audience. However, this is rather like being on the bench listening to the coach rather than being removed from the field and put into the bleachers. While one is not actively playing at that moment, it is still an important part of the game and the player knows that she will be playing soon.

In the case of video games, the same sort of approach would also seem to fit, at least in games that have story elements that are important to the game (such as plot continuity, background setting, maintaining some realism, and so on) yet would be tedious, time consuming or beyond the mechanics of the game to actually play through. For example, if the game involves the player driving through a wasteland from a settlement to the ruins of a city she wishes to explore, then a short cut scene that illustrates the desolation of the world while the character is driving would certainly be appropriate. After all, driving for hours through a desolate wasteland would be very boring.

Because of the above argument, I do think that cut scenes can be a proper part of a video game, provided that they are used properly. This requires, but is not limited to, ensuring that the cut scenes are necessary and that the game would not be better served by either deleting the events covered in the movies or having them handled with actual game play. It is also critical that the player not feel that she has been put into the bleachers, although that bench feeling can be appropriate. As a general rule, I look at cut scenes as analogous to narrative in a tabletop role-playing game: a cut scene in a video game is fine if narrative would be fine in an analogous situation in a tabletop game.

Since I was motivated by Halo 5’s failings, I will use it as an example of the bad use of cut scenes. This will contain some possible spoilers, so those who plan to play the game might wish to stop reading.

Going with my narrative rule, a cut scene should not contain things that would be more fun to actually play than watch—unless there is some greater compelling reason why it must be a cut scene. Halo 5 routinely breaks this rule. A rather important sub-rule of this rule is that major enemies should be dealt with in game play and not simply defeated in a cut scene. Halo 5 broke this rule right away. In Halo 4 Jul ‘Mdama was built up as a major enemy. As such, it was rather surprising that he was knifed to death in a cut scene right near the start of Halo 5. This would be like setting out to kill a dragon in Dungeons & Dragons and having the dungeon master allow you to fight the orcs and goblins, but then just say “Fred the fighter hacks down the dragon. It dies” in lieu of playing out the fight with the dragon. Throughout Halo 5 there were cut scenes where my friend and I said “huh, that would have been fun to actually play rather than just watch.” That, in my view, is a mark of bad choices about cut scenes.

The designers also made the opposite sort of error: making players engage in tedious “play” that would have been far better served by short cut scenes. For example, there are parts where the player has to engage in tedious travel (such as ascending a damaged structure). While it would have been best to make it interesting, it would have been less bad to have a quick cut scene of the Spartans scrambling to safety. The worst examples, though, involved “game play” in which the player remains in first person shooter view, but cannot use any combat abilities. The goal is to walk around trying to find the various people to “talk” to. The conversations are scripted: when you reach the person, the non-player character just says a few things and your character says something back—there are no dialogue choices. These should have been handled by short cut scenes. After all, when I am playing a first person shooter, I do not want to have to walk around unable to shoot to trigger recorded conversations.  These games are supposed to be “shoot and loot” not “walk and talk.”

To conclude, I take the view of cut scenes that Aristotle takes of acting: while some condemn all cut scenes and all acting (it was argued by some that tragedy was inferior to the epic because it was acted out on stage), it is only poor use of cut scenes (and poor acting) that should be condemned. I do condemn Halo 5.



Story & Games

Posted in Aesthetics, Philosophy, Technology, Video Games by Michael LaBossiere on June 11, 2012

As a philosopher who teaches aesthetics and a gamer, I find questions about games and art to generally be rather interesting. As I have argued elsewhere, I take the intuitively plausible view that video games can be art. However, even if that matter is considered settled (which can be debated), there is still a rich vein of philosophical issues to mine.

One topic that I and many other gamers often find interesting is the matter of the importance of story in games. John Carmack, who knows a bit about games, said that “story in a game is like a story in a porn movie. It’s expected to be there, but it’s not that important.” Folks who delight in story-driven games no doubt disagree with this view and there does seem to be an issue worth discussing here. For the sake of this discussion, I will be assuming that games (specifically video games) can be art. I have argued for this in an earlier essay and hence will not repeat my arguments here.

Obviously enough, there are games that have no story at all and are still fine games. To use the obvious examples, Tetris and Asteroids are story free, yet fine games. Naturally, these are not the sort of games that people debate about when it comes to whether or not story is important. However, it is worth noting these sorts of games because they provide a relatively pure context in which to present two relevant points.

The first is that game mechanisms (that is, the purely game aspects of the game) are reasonably seen as being distinct from the art aspects of the game (that is, the game as art).  After all, while all games are games and some games are art, not all games are art.  This can, of course, be argued against. However, it does have enough intuitive plausibility that it is well worth considering.

The second point is that even the art aspects of a game that is (or contains) art can be distinguished from each other. For example, while Tetris and Asteroids do not have plots, they do have game artwork and sounds (which might be dismissed as mere sound effects rather than having any status as art). As another example, the music and visual art of Halo can be distinguished from each other in that one is music and the other visual art. This point seems reasonably certain.

The matter of the importance of story is most interesting when it comes to games that do, in fact, feature a story. Obviously enough, the story (or plot) of a game can have varying degrees of integration into the game. At one end of the spectrum live the games whose stories have an extremely minimal role in the game. One excellent example of this is Serious Sam: The First Encounter. The game does have a story: an evil alien threatens earth and you, as Sam, have to travel in time and kill wave after wave of monsters. That is pretty much it. Despite the rather limited story, the game works amazingly well as a game-that is, it is fun to play. At the other end of the spectrum are games that are heavily story driven, such as Knights of the Old Republic and Star Wars: The Old Republic. These games are, not surprisingly, role-playing games. In these games the player takes on the role of a character and spends considerable time talking to non-player characters, making decisions and watching the plot unfold. As might be imagined, the story in such games seems to be rather more important than in the typical first person shooter. In the middle are games like the Halo series, which have well-developed stories and unfolding plots but do not actually have any role-playing elements. For example, in Halo your choices mainly revolve around what gun to use to kill which alien in what way.

As might be imagined, the significance of the story would seem to be proportional to its role in the game. After all, a first person shooter whose plot is rather lacking or poor would suffer less than a full-blown, story-driven role-playing game whose plot is lacking or badly done. That said, it could still be argued that plot is important.

It is tempting to compare a game with a story to a movie and, obviously enough, plot seems to be somewhat important to a movie (although Michael Bay, some might claim, endeavors to prove otherwise). The idea of plot being the most important aspect of poetical works (broadly and classically construed to include theater) dates back at least to Aristotle. To steal his argument regarding tragedy, the following argument can be given for the importance of plot in games that have a story element.

Games are not an imitation of humans (or elves, aliens, or dragons), “but of an action and of life, and life consists in action, and its end is a mode of action, not a quality.” It is, of course, the actions taken by people that “make them happy or miserable.” As such, “the incidents and the plot are the end of” the game, and “the end is the chief thing of all.” Thus the story is important, at least on the key assumptions made by Aristotle.

For Aristotle, a key part of having a good plot is ensuring “that the sequence of events, according to the law of probability or necessity permits a change from bad fortune to good or from good to bad.” In more general terms, the plot must be such that the events make sense and fit together to form a coherent whole. In my own experience as a gamer, I have consistently disliked games in which the story fails to meet that basic requirement that events play out in a way that makes sense (except, obviously enough, for games that are supposed to not make sense). After all, if you are running around in a game doing things that make no sense for no apparent reason that leads to nothing, then that will tend to be a disappointing gaming experience (although it would be a fair approximation of life).

The rather obvious reply to this is that there are games that are rather weak in the story department that seem to be great successes as games, thus helping to support Carmack’s claim. This seems to be a rather consistent aspect of the top tier first person shooters-they tend to be marked by weak, implausible or otherwise lame plots but are top-ranked for game play, especially competitive multi-player. As I once jokingly put it, “I don’t really care why I am killing, I just care about whether I’m enjoying it or not.” That, I think, nicely captures the view of most gamers.

Interestingly enough, this view often extends into games in which story would seem to be rather important, such as role-playing games. While some people do enjoy going through all the dialog and getting into the story, my general experience has been that the main focus is on the game-play rather than on the story. This even extends to my experience in traditional role-playing games, like AD&D and Pathfinder: many players are far more into roll-playing (that is, simply killing monsters in combat) than role-playing (that is, talking to the monsters before killing them).

Getting back to the point raised earlier, namely that the game aspects of a game are not art, this does seem to suggest that the story is not as important to the game as the game aspects. Alternatively, it could be argued that the game aspects of the game are still art, but they are a different sort of art than a story. After all, the name of the game is, well, “game” and not “story.” In the case of a first person shooter, the game is (obviously enough) about shooting things from a first person perspective. Story is thus secondary. Even in role-playing games, such as Pathfinder, all the actual game mechanics are about rolling dice, usually while trying to kill monsters who are blatantly and shamelessly holding the loot that rightfully belongs to the party. While the game can be augmented by art (acting, beautiful maps, and well-crafted stories), the core of the game is, it can be argued, the game mechanics. As my friend Ron puts it, “if you are not rolling dice, you are not playing the game. You are just sitting around the table talking.”

The idea that a game should be focused on the game is, interestingly enough, also consistent with Aristotle’s view: “each art ought to produce, not any chance pleasure, but the pleasure proper to it.”


Gears of War 3: Planet of Steroids

Posted in Technology, Video Games by Michael LaBossiere on October 9, 2011

I have been playing Gears 3 and mainly enjoying it. I can see why it has been well-rated and why it has an often vocal fan base. However, the game does annoy me in ways that detract from my enjoyment. I freely admit that my annoyance is based on my own views about how games should be and also based on what I enjoy. As such, I can accept that the game is regarded as great by others, yet is only good as I see it.

While games often have a distinct art style, the character style in Gears makes me laugh a bit: the main male characters are hulk-like steroid monsters with gigantic feet (or they are wearing space Uggs). To me, they look like D&D-style dwarves stretched out to human size. But, I can get over the weird feet and the fact that almost everyone seems to be on major steroids. After all, the violence is pretty awesome.

Being something of a “realist” in regards to gear, I do wonder why the guns and armor have glowing patches (aside from the fact that glowy is in). After all, the armor does not seem to have any powered aspects (shields or strength enhancement) but maybe it works like Bane’s gear: the guys are all puffed up by the armor power. In real combat, no one would want gear that glows-that makes a person an easy target. But, hey, it is a look that the kids presumably like.

Speaking of guns, the weapons in this game seem to be, well, lame. The Lancer, aside from the absurdly cool chainsaw, is a rather poor rifle design for a futuristic weapon. It seems to be on par with an AK-47 in terms of its capabilities. The shotgun is awful as a combat weapon (apparently automatic shotguns with decent range are forgotten in the future). The sniper rifle is like firing a musket in terms of its reload times, which is absurd. But, this is offset by the presence of some interesting alien weapons and the general fun of the game.

I am not a big fan of arcade-style gimmicky boss fights. This is mainly because years of being a DM have conditioned me to believe in a consistent set of game rules and to avoid mere gimmicks as a substitute for original and interesting ideas. The boss fights generally take the usual form of “the boss is only vulnerable in area X when Y occurs,” with the boss attacking by overrunning the players. The berserker fight was, as I saw it, too much of a gimmick and a bit absurd. First, it stands up to a direct strike from a city-destroying orbital weapon-but maybe the batteries were low or something. Second, it is only vulnerable when its chest pops open. Why? Third, it spends the battle leaking gallons of fluid that cover large swathes of ground-would it not eventually run dry? But, hey, some people love that stuff.

What I found most annoying was a factor that I am sure many people really like: the domination of the game by the story. While all such games have a fixed outcome and a script, Gears 3 was unable to make me feel that I was not simply following along with the script-I was painfully aware at all times that every event was set and I was just along for the ride. First, as my friend Ron and I were playing, we could predict pretty much everything that was going to happen (“okay, now we’ll just be forced to run back to the next area and defend that”) and it seemed like our actions had no effect. Obviously, games (like movies) are scripted. But a game (and a movie) has to make the audience feel that events are not pre-destined. Gears failed to do that for me. Second, the game only allows the player to make insignificant choices. For example, at a “choice point” I can go left or right-but it makes no difference since (in co-op mode) we just split up for a while and then are right back together. The vehicle combat sections also felt like being on an amusement park ride: I felt I was just going along for the ride towards a pre-set end. Third, while many of the cut scenes were cool, watching the game show cool things is not as cool as actually doing these things. In many cases it felt like we just fought to the climactic point and then the game resolved it for us with a pre-set cinematic. That served to take me out of the game. Fourth, the characters often made decisions that did not make much sense and went against what I would do and what they should do. For example, when Griffin demands that the player get the fuel for him and takes a hostage, it is absurd to think that the characters would just go along and not simply put a round into the back of Griffin’s head, whack his two minions and then shoot any of his scruffy followers if they got in the way.
Of course, my dismay can be chalked up to the fact that the “decision” made by the game designers did not match what I would do-nor what the characters would seem to do given the setting and conditions. Some folks no doubt think that this behavior makes perfect sense and just go along with it.

Overall, Gears 3 is a good game. I don’t see it as great, but I can attribute this to my expectations and views of games.


On Being Freshly Pressed

Posted in Technology by Michael LaBossiere on September 27, 2010

Like most folks on WordPress, I see the Freshly Pressed blogs each time I log in. If a title or graphic interests me, I will go and check it out. I was recently pleased to see one of my own posts listed as Freshly Pressed.

There are two main effects of being Freshly Pressed. The first is that the hits to the blog go way up. The second is that the pressed post is flooded with comments.

In regards to the blog hits, it might interest some to know that it is a spike in two ways. First, there was a massive increase in hits compared to previous days. Second, the hits are a spike in that they are very large on the pressed post but there is little spread to the other posts. As such, it seems that people come to see the post and then most depart without looking around much more.

I did notice that the hits were greater on the second day of being Freshly Pressed. But this might be due to the day of the week rather than due to the second day being a spike day. I suspect that the long term impact of being pressed will be very modest or even minimal. My 15 minutes of blog fame, so to speak.
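The two senses of “spike” can be made concrete with a little arithmetic. Here is a minimal Python sketch; the hit counts are made-up, illustrative numbers, not my actual stats:

```python
# Hypothetical daily hit counts; purely illustrative figures.
baseline_hits = [110, 95, 120, 105]   # typical days before being pressed
pressed_post_hits = [2400, 3100]      # the pressed post, over two days
other_posts_hits = [130, 150]         # the rest of the blog, same two days

baseline_avg = sum(baseline_hits) / len(baseline_hits)

# Spike in the first sense: traffic far above the previous daily average.
spike_factor = sum(pressed_post_hits) / (baseline_avg * len(pressed_post_hits))

# Spike in the second sense: almost no spread to the other posts.
spread_share = sum(other_posts_hits) / (sum(pressed_post_hits) + sum(other_posts_hits))

print(f"traffic is {spike_factor:.0f}x the baseline")
print(f"only {spread_share:.0%} of visits spread to other posts")
```

With numbers like these, nearly all the traffic lands on the single pressed post, which matches the pattern of visitors reading it and leaving without looking around.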

As far as the comments go, I suspect that people are mainly drawn to comment on a Freshly Pressed post out of a desire to funnel traffic to their own blogs. This is, of course, sensible and all part of the blogging game. However, some people are clearly interested in the post itself and have some interesting and relevant things to say. As with the hits, the comments also seem to be a spike. They increased dramatically and center on the post. While I did get some extra comments on my other posts, the comments are clearly focused where they can do other bloggers the most good-on the Freshly Pressed post.

I’m reasonably sure that the hits and comments will soon return to their previous quantities: good, but hardly remarkable. On one hand, I will be sorry to see my fleeting minor fame fade away. On the other hand, being Freshly Pressed is a bit like hosting a free beer keg party: people you don’t know show up, tap the kegs and leave…most likely never to be seen again now that the kegs are dry. While having such a party is fun for a while, having one every day would get a bit tiring.

In any case, I appreciate the folks at WordPress picking my post and I am glad that so many people stopped by to read, comment, and plug their own blogs.


Kensington Expert Mouse

Posted in Technology by Michael LaBossiere on September 18, 2010

When I was a poor graduate student, I wanted to get a Kensington trackball for my Mac. However, the price was way too high for my budget and I settled for a cheaper trackball. Eventually, I forgot about the Kensington when I bought a Microsoft trackball. While people are supposed to loathe Microsoft, I found the trackball to be almost perfect for me. Then it finally wore out and I got another one. When that one wore out I found that Microsoft no longer made them. I looked on Amazon and saw that I could get one for a few hundred dollars. While I loved the trackball, I was not in love with it and hence decided to pass.

Looking for a good replacement, I remembered the Kensington. I saw that the Expert Mouse (which is actually a trackball and not a mouse) was on sale at Amazon, so I got one. When it arrived, I installed the software and was prepared for it to live up to the glowing reviews I had read. However, my experience was horrible. The scroll ring seemed incapable of actually scrolling-I would move it and the scroll bars would go up or down seemingly at random. When I clicked on the lower left hand button (set for a single click) it would cause the scroll bars to move and would also sometimes “jump” to other fields. For example, when trying to blog in WordPress it would scroll the main text area, then the categories would suddenly start scrolling up and down. The same sort of thing happened in the Start Menu: I would try to click on a program icon but the click would cause the scroll bars to move up or down randomly instead.

I was not happy and was ready to send the mouse back.

However, I realized that the problems made it seem as if the mouse was somehow getting two sets of signals and was “confused.” I suspected that the custom Kensington software was somehow at odds with the standard mouse software. To test this, I uninstalled the Kensington software and the problem was solved: the scroll ring worked flawlessly and the scroll-click problem ceased.

So, if you run into this problem, uninstall the software. If you want to custom configure the trackball, you can install the software and then remove it after setting the preferences. Interestingly, the button assignments I did using the Kensington software stayed even after I uninstalled the software.

Overall, I really like the Expert Mouse. However, many people are not fond of trackballs, so be sure to give it a try before you buy. Assuming, of course, anyone still goes to a store to buy mice/trackballs.



Posted in Sports/Athletics, Technology, Video Games by Michael LaBossiere on June 27, 2010

Being both an athlete and a gamer, I find the idea of a more active way to play video games interesting. Then again, I must admit that I often find the actual implementations a bit silly.

One of the latest attempts in this field is Microsoft's Kinect. I gather that this clever name is derived from "kinetic" and chosen because it sounds like "connect." At the very least, this shows that Microsoft has advanced in its naming methodology since the days of Bob. The gist of the system is that it allows gamers to control game play via body movements: movements other than those used on a standard controller, that is. For example, a player might move her arm to swing a sword or move her legs to move her character in a game.

Since I am in favor of exercise, I think that almost anything that gets people to be more active is a good thing. Using a system like Kinect would get the player to move more than he would with a normal controller. Of course, this would provide less exercise than actually exercising (like running or going for a real walk), but at least the gamer would be off the couch. Assuming, of course, that people actually decide to buy and use Kinect.

There have been various attempts to combine actual physical activity with video game play. These, as you might imagine, generally did not make it into most living rooms. One reason is that people often prefer not to sweat while playing video games. Another is that gamers are generally not the sorts of people who are into exercise, and people who do exercise already get their workouts elsewhere. As such, it is not clear that there is a substantial market for this sort of technology.

In my own case, about the only thing that would motivate me to buy a Kinect device would be if some truly awesome video game came out that required this. Otherwise I’m content to get my exercise the old fashioned way and to play video games in the traditional manner (my hands on the controller and my ass in a chair).

One minor concern I have about such systems is that they can provide the illusion of exercise. For example, consider the Wii. The Wii controllers were touted by some as a way to be physically active while playing video games: the idea was that players would swing the controller ferociously when sword fighting or swing it like a real club when playing a golf game. However, moving a little plastic stick around is not much exercise. The controller also produces the same results via rather small motions. That is, you can play the Wii in the traditional manner (hand on the controller, ass in the chair).

I do think that the sort of user interface being developed for Kinect has some potential. After all, manipulating virtual objects with natural motions is, well, natural. Also, think of the really advanced user interfaces shown in some science fiction: the user interacts without a mouse or keyboard, using gestures and manipulating virtual objects by "touch." While this is currently being presented as a gaming technology, it might become part of a much more general user interface. For example, imagine never losing a remote again because you can control your TV by hand gestures. You would gesture to call up a virtual remote, then manipulate it from across the room. This would allow you to watch TV in the traditional manner (ass on the couch) without ever having to get up and look for the remote.

Of course, this technology won’t get really cool until Apple starts developing it. No doubt it will be called iTouch or something equally “i” related.


Facebook Patent

Posted in Business, Technology by Michael LaBossiere on March 1, 2010

Facebook was recently awarded a patent for streaming feed technology, that is, for "dynamically providing a news feed about a user of a social network." Obviously enough, that sort of streaming feed is an integral part of social networks. The most obvious example is Twitter.

When I read about this patent, I immediately thought of the social networks associated with online gaming. For example, Xbox Live seems to provide dynamic data about its users. As another example, so does World of Warcraft: accomplishments are broadcast over guild channels and, of course, characters are updated on the Armory site. I suspect that Facebook will be reluctant to throw down legally with Microsoft, but a clever lawyer could probably make a case.

One serious concern is that, as noted above, the news feed model is standard fare in social networks. Depending on how the Facebook folks wish to wield their patent, they could finally find that elusive revenue stream: in this case, from patent-infringement lawsuits or perhaps from gathering licensing fees.

It might be argued that Facebook should not be allowed such a patent because the news feed model is widespread and well established. This patent, it might be argued, would damage social networks and impede progress rather than advance it. However, the fact that many people have "stolen" a patented process does not invalidate a patent. After all, patents are intended to prevent just such theft. What remains to be seen is how far the patent extends and what the folks at Facebook intend to do with their new legal tool.


The Future of TV

Posted in Technology by Michael LaBossiere on November 8, 2009

While I'm rather fond of technology and gadgets, it is only recently that I tried Netflix streaming video on my Xbox 360. It was fairly easy to do: sign in with your Xbox Gold membership, download the Netflix app to the Xbox 360, get the code it provides, sign in to Netflix, input the code, and you are ready to start streaming. While Netflix is accessed via a Gold account, it does not actually link to that account, so if you switch your Gold account to another Xbox 360, you will need to go through the activation process again.

On the plus side, the streaming video is part of Netflix and does not add to the cost (as long as you have the appropriate level of membership, of course). I found that the quality was quite good, comparable to watching a DVD. Of course, my TV is not HD (yes, I bear that shame), so I could not provide a truly proper assessment of the quality. The only problems I had were with Comcast, but that is not the fault of Netflix. On the downside, the selection of movies is still somewhat limited. While there are some top-tier movies, there are also many B-grade flicks.

Watching movies stream over my Xbox 360 made me think about the future of cable TV. Not surprisingly, I began to wonder why anyone would pay for premium movie channels when they could get the movies they want, when they want them. Of course, some premium channels do offer content that is not available via services like Netflix, and there are "on demand" services.

I am, of course, not going out on a limb to say that the path of the future is along the trail being cut by services like Netflix (and, of course, online TV like Hulu). In the near future, set programming schedules will be rather limited or perhaps even non-existent. True, media providers will still produce content on their own schedule, but perhaps there will be something like a “daily delivery” of content that people can view at their convenience. For example, the Daily Show might be filmed in the morning and be ready for viewing anytime after noon.

It also seems likely that the convergence of computers and TV will continue. While there have been various lame and failed attempts to merge the two in the past, the technology is clearly much better now. In fact, as I type this, I am watching V on my PC.

The web and the rise of e-readers like the Kindle show that even print media is blending ever more into the realm of computers. Even radio is being streamed over the net; it seems that all forms of media are converging.

On the positive side, having the web, TV, radio, and all sorts of media blended together into one super medium does make it more convenient to get that media fix. It might also save consumers some money: rather than having to buy multiple devices and maintain numerous subscriptions, people might need just one main device and one subscription (or perhaps variety will still be the order of the day).

On the negative side, media convergence can lead to monopolies and further reduce the diversity of opinion. After all, one concern about the media today is that a few major companies own almost all forms of media. Such convergence would put even more control into the hands of an even more limited number of people.

Of course, it could be argued that such convergence will allow for greater diversity. After all, almost everyone has access to the web and can thus be a content creator. This convergence would allow (in theory) anyone to provide content, and thus there would be an expansion rather than a contraction. Of course, this assumes that those in control of providing access will allow such diversity of content. On one hand, they have much to gain from allowing such content. After all, YouTube thrives (though it seems to have yet to make any money) on the basis of user-created content. On the other hand, companies often desire to control content and set limitations. After all, there has been considerable dispute over net neutrality in recent years.

Of course, normal TV will continue for quite some time. There are still many people who are just fine with it and, of course, there is the weight of inertia to overcome. But, the future, as always, brings change.


Windows 7

Posted in Technology by Michael LaBossiere on November 5, 2009

While I did attend a very nice Windows 7 launch party, I'm still running XP on my main PC and OS X on my iBook (with Windows 2000 running nicely in emulation). I do have Vista on a laptop, but only because it came with Vista and I could never quite muster up the gumption for a downgrade to XP. Interestingly, though I have used it the least of any computer I own, it has thrown up the most blue screens of death. But, to get back to Windows 7.

My desktop PC (a repair job based on the burned-out shell of a friend's "one fan short" computer) is running XP Pro quite nicely. While I am more of a Mac person, I find XP Pro with Service Pack 3 to be fairly stable and good with resources (of course, it was released almost a decade ago). Most importantly, it does everything that I need an OS to do; that is, it allows me to run the software I use without too much trouble. When Vista was spawned to torment the world, I passed because I saw no compelling reason to "upgrade" to an annoying resource hog. I do not regret that at all.

When Windows 7 was announced, I knew that I would probably have to use it eventually; after all, my PC is reaching the end of its expected life. However, I also knew that I would not be shelling out money for an upgrade. Rather, I figured I would just buy a new PC after Microsoft got around to beating some of the worst bugs out of Windows 7.

Based on my limited experience and research, Windows 7 seems to be roughly a service pack for Vista. That is, it is basically Vista that works a bit better: it is less annoying, a bit faster, and hogs slightly fewer resources. However, there seems to be nothing compelling about it, beyond the fact that Microsoft has discontinued XP and Windows 7 will soon be the only real Windows game in town.

While Windows 7 has some nifty interface features, I can honestly do without them or, if I must have them, I can find some third-party freeware that does the same thing. Of course, my view of an OS is to take it as a metaphorical worktable: it is there to provide the foundation on which I work, not to get in my way with fancy features. I am, however, concerned with security and stability. Not surprisingly, I rather like Linux.

Like many people, I find Microsoft’s multiple versions of Windows to be annoying. I rather like Apple‘s approach: have one OS for consumers and a server OS. Don’t have numerous versions that seem to differ only in fairly minor ways (other than cost). Presumably Microsoft thinks that it can make more money with all these versions and perhaps this is correct. When I do buy a new PC, I’ll shop for the best hardware deal and then probably just deal with whatever version of Windows 7 is on there. I’d take a stab at sorting out all the different versions, but that should be something Microsoft makes clear. Fortunately, someone has taken the effort for me.

If you have an XP machine and are happy with it, then it makes sense to stick with it until it dies. While it might be able to run Windows 7, it makes more sense to save the money an upgrade would cost and put it towards a new PC. After all, some new PCs are priced close to the cost of a full version of Windows 7.

If you have a Vista machine, then you might be eligible for a free upgrade to Windows 7. If you bought your PC on or after July 1, 2009, then you are probably in luck. If you bought it before then, you will need to buy an upgrade. If you are a student, you can get the upgrade for $29.99.

Of course, my view is that Microsoft should issue a free upgrade to all Vista users as an apology for that mess. At the very least, they should allow them that $30 deal.
