The All-Seeing Eye

Musings from the central tower…

Economics Foundation

One of my goals in this blog is to examine the interplay between certain postmodern theories and certain economic theories that, due to certain political and demographic realities, might never be considered together. In Constituting Feminist Subjects, Kathi Weeks points out that there is a “paradigm debate” between modernists and postmodernists that makes it difficult to constructively combine elements of, for instance, Foucault and Marx. However, someone whose area of interest is feminist politics would be highly likely to, in their course of study, come across somewhat favorable accounts of both of these thinkers. Perhaps socialist feminism and postmodern feminism would be presented as opposing movements, but they oppose each other only in their approach to meeting ostensibly similar goals. Thus the logic of Weeks’ attempt to bring some degree of reconciliation to the two.

This same student of feminist thought would be very unlikely to encounter certain other theories, thinkers, or schools of thought, and if they did encounter them, those ideas would likely be presented negatively, misrepresented, or dismissed as irrelevant for one reason or another. This is not an attack on the feminist movement – merely an observation that, in any movement or school of thought, there are areas of particular interest that are studied in great depth, and there are areas of no particular interest that are not studied at all. I could easily level the same critique against economics. In fact, I arguably already have, when I said that Libertarian thought needed to be reevaluated in the face of certain postmodern theories. I’ve spoken a bit about some of the formulations of power that will inform this project of deconstruction and reconstruction, so now I’d like to talk a little bit about the economic side of things. My project here is to begin to lay the foundations for my postmodern theory of economics.


March 2, 2008 | Economics | 1 Comment

The Prisoner’s Dilemma and the Panopticon

I’ll start this post with a brief recap:

The Prisoner’s Dilemma (PD) is a concept in game theory that describes the situation of two suspects who have been apprehended by the authorities. In the PD, the authorities need a confession in order to get the conviction they want, so they come up with a scenario to try to convince each suspect to confess. They offer each prisoner a reduced sentence in exchange for a confession that incriminates the other prisoner. If both prisoners stay silent – a play that is conventionally called “cooperate” – they both get a short sentence. If one prisoner chooses to “cooperate” but the other prisoner makes a confession – a play called “defect” – the defector goes free and the cooperator gets a full, long sentence. If both “defect” they both get a medium sentence.
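The setup above can be made concrete with a small payoff table. Here is a minimal Python sketch; the specific sentence lengths (1, 3, and 5 years) are illustrative assumptions, not part of the original story:

```python
# Prisoner's Dilemma payoffs as years in prison (lower is better).
# The sentence lengths below are illustrative assumptions.
SENTENCES = {
    ("cooperate", "cooperate"): (1, 1),  # both stay silent: short sentences
    ("cooperate", "defect"):    (5, 0),  # the defector goes free
    ("defect",    "cooperate"): (0, 5),  # the cooperator gets the full sentence
    ("defect",    "defect"):    (3, 3),  # both confess: medium sentences
}

def sentences(play1, play2):
    """Return the (prisoner 1, prisoner 2) sentences for a pair of plays."""
    return SENTENCES[(play1, play2)]

print(sentences("defect", "cooperate"))  # (0, 5): defector free, cooperator jailed
```

Note that whatever the other prisoner does, defecting lowers your own sentence (0 beats 1, and 3 beats 5), which is exactly what makes the dilemma a dilemma.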

Like the Traveler’s Dilemma, the Prisoner’s Dilemma is a game in which both players do best, jointly, by cooperating – choosing (100) in the TD, or choosing to stay silent in the PD. Also like the TD, if one player cooperates, the other player can increase his payoff by defecting – choosing (99) in the TD, or choosing to confess in the PD. And finally, if one player defects – by choosing (2), or confessing – the other player can mitigate the harm done by also defecting.

The Panopticon is a philosophical concept that describes the situation of prisoners in a more general sense. The original panopticon was a design for a physical structure that would house prisoners in such a way as to maximize the number of inmates who could be supervised by one warden. This design consisted of a central tower where an observer could remain unseen by the inmates but from which all of the inmates could be seen. The inmates were situated in individual cells surrounding the central tower, separate from each other.

The idea of the panopticon is that this situation – isolation plus the perpetual possibility of surveillance – would produce within each prisoner a sort of self-surveillance. Each prisoner would know at all times that he could be under supervision, and so each prisoner would act at all times as though he were under supervision.

The difference between self-surveillance and regular surveillance, though, is that self-surveillance can be much more intrusive. After all, an outside observer can only see certain physical manifestations of our actions – in other words, can only see what our actions look like. We, on the other hand, can, in a sense, see what our actions are. We form the intent that turns a motion into a gesture, an activity into an action, a sound into a word. We can read our own minds.

This paves the way for what I like to call the panoptic model of power. The panoptic model of power says that power is constituted and magnified by the effects of isolation and self-surveillance. Isolation and self-surveillance are interlocking, mutually reinforcing forces – in other words, isolation helps constitute self-surveillance and self-surveillance helps constitute isolation. A good example of how this works is the Prisoner’s Dilemma.

The most obvious intersection of the PD and the panoptic model of power is isolation. Without isolation, the PD would not be a dilemma. Imagine the PD with both prisoners in the same room. They can talk to each other, they can see each other, and they know what the other one is doing at all times. In other words, you’ve removed the hope that one player can defect without the other player defecting, and so now the options are only (defect, defect) or (cooperate, cooperate). Between those two options, one is strictly better, and it’s the one that benefits both players the most – so there’s no dilemma.

The self-surveillance part of the PD may not be as obvious. First we can look at the effects: the expected outcome of the PD is that both prisoners confess. Is not confession a form of self-surveillance? It’s self-incrimination, certainly. One might expect the prisoners to provide additional information to the authorities in the course of their confession – details of the crime, perhaps the location of weapons used in the crime, perhaps details about accomplices, or motives, or planning. In other words, the PD extracts information that runs far deeper than any surveillance the authorities could have placed on the prisoners without it.

To find the cause, we need only locate the central observer. In the panopticon, the prisoner exercises self-surveillance because the prisoner might be under surveillance. In the PD, the prisoner confesses because the other prisoner might confess. In the panopticon, the possibility of being watched leads the prisoner to watch himself. In the PD, the possibility of being incriminated leads the prisoner to incriminate himself.

The Prisoner’s Dilemma, thus, provides both an example of the panoptic model of power at work, and an insight into one of the mechanisms of the panoptic model of power.

February 17, 2008 | Game Theory, Power | 2 Comments

Traveler’s Dilemma and Opportunity Cost

As a followup to my last post, I thought now would be a good time to say some things about opportunity cost, or The OC.  Please do not confuse this with any other things called The OC.

Opportunity Cost is an economic analytical tool – a measure of the cost of a missed opportunity.  It goes like this.  Let’s say you have a dollar and you’re standing on the street at a hot dog cart.  The cart is selling pretzels for $1 and hot dogs for $1.  If you buy the hot dog, you can’t buy the pretzel.  Therefore the opportunity cost of buying the hot dog is one pretzel.  Conversely if you buy the pretzel you can’t buy the hot dog.  So the opportunity cost of the pretzel is one hot dog.  Simple, right?  At first glance this seems a trivial and reductive measure for an economist to be thinking about, but in real life, when applied to more complex situations, we can see the value of considering the opportunity cost.

Let’s try another example.  You have a dollar but you aren’t hungry, and you’ve got a year.  You can keep the dollar in your pocket and at the end of the year you’ll have a dollar.  Or you can put the dollar in a bank and at the end of the year you’ll have, let’s say, $1.05.  The average person would think that if they kept the dollar in their pocket, they haven’t lost anything, and in one sense this is true.  However, they have missed something – the opportunity to earn $.05.  The opportunity cost of holding onto the dollar was five cents.  Doesn’t seem like much, but what if it’s a thousand dollars?  A million?
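The arithmetic here can be sketched in a few lines of Python (the 5% rate is the one assumed above; working in cents avoids floating-point noise):

```python
def opportunity_cost(chosen_payoff, best_alternative_payoff):
    """What the best forgone option would have paid,
    minus what the chosen option actually paid."""
    return best_alternative_payoff - chosen_payoff

# $1 kept in your pocket vs. banked at 5% for a year, in cents:
pocket, bank = 100, 105
print(opportunity_cost(pocket, bank))  # 5 cents

# Scale it up to a million dollars:
print(opportunity_cost(100_000_000, 105_000_000))  # 5000000 cents = $50,000
```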

The point is, when there’s money at stake it pays to consider the opportunities that you have when making choices, because in some sense, missing the opportunity to earn money is sort of like losing money, even if it’s money you never actually had.  An opportunity is worth something.  If you don’t believe me, play poker.  If you fold a hand, and it turns out at the end that you would have won if you had stayed in, you will feel like you have lost something.  What you’ve done is missed an opportunity, and the loss you’re feeling is the opportunity cost.  Folding may have been the right decision based on the odds, but you’ll still feel bad that you didn’t get the pot.

So what does opportunity cost have to do with the Traveler’s Dilemma?  Well, it’s another way of evaluating possible plays in the TD, and it demonstrates a major flaw in the models used by game theorists to “solve” the TD.

To recap, in the TD, game theory says that logically speaking, a player ought to play (2).  Many people intuitively feel that they should play higher, and (100) is perhaps the most common play, with (95) to (100) comprising the majority of plays in some experiments.  According to Kaushik Basu, (2) is the correct or best play, because of a game theory concept known as the Nash Equilibrium.  Further, (100) is the worst play, because it is the only play in the game that is “beaten” by every other play.  In other words, if player 1 plays (100) and player 2 plays anything else, player 2 will be rewarded more points than player 1.  If this logic is to be believed, (2) is a better play than (100), and people, if they are acting “rationally,” ought to play it.

But let’s look at the OC to see if that’s true.  Let’s say player 1 plays (100) and player 2 plays (2).  The rewards, then, are 0 points to player 1 and 4 points to player 2.  However, given player 2’s play of (2), the highest score player 1 could possibly have achieved by making a different play is 2, by playing (2).  Any play other than (2) results in a score of 0.  So, player 1 lost 2 points by not playing (2), or, to put it another way, his opportunity cost for playing (100) was 2 points.

Player 2, however, is in a much worse situation.  He played (2) and got 4 points.  Given player 1’s play of (100), the highest score possible for player 2 was 101, with a play of (99).  In other words, by playing differently player 2 could have gotten 101 points, but instead he got four.  That means that his opportunity cost for playing (2) was 97 points.

So, in the above example, on the face of it, it seems like player 2 won – he got 4 points, while player 1 got none.  However, if you look at it a different way, player 2 lost 97 points while player 1 lost only 2.  If you consider the scale of a loss of 2 vs. a loss of 97, you see that a play of (100) is much less risky than a play of (2).

In fact, against an opponent’s play of (2), the OC of playing (100) is 2.  Against any opponent’s play between (3) and (99), the OC of (100) is 3.  And against an opponent’s play of (100), the OC of (100) is only 1.  The OC of playing (2), however, is (opponent’s play – 3).  That means that whenever the opponent plays (6) or higher, the opportunity cost of (2) is greater than that of the opponent’s play.
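These opportunity-cost figures can be verified with a short Python sketch (restating the TD payoff rule, with its 2-point bonus and penalty, so the snippet stands alone):

```python
LOW, HIGH, BONUS = 2, 100, 2  # the boundaries and bonus from the TD example

def td_payoff(mine, theirs):
    """My Traveler's Dilemma payoff against an opponent's play."""
    if mine == theirs:
        return mine
    low = min(mine, theirs)
    return low + BONUS if mine == low else low - BONUS

def opportunity_cost(mine, theirs):
    """Points forgone: the best I could have scored against `theirs`,
    minus what my actual play earned."""
    best = max(td_payoff(p, theirs) for p in range(LOW, HIGH + 1))
    return best - td_payoff(mine, theirs)

print(opportunity_cost(100, 2))    # 2
print(opportunity_cost(100, 50))   # 3 (same for any opponent play 3..99)
print(opportunity_cost(100, 100))  # 1
print(opportunity_cost(2, 100))    # 97 = opponent's play - 3
```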

So the (2) player almost always loses more money than his opponent – not that the players are losing money that they actually possessed, but money that they could have – and perhaps should have – earned.  If you ask any economist or poker player, that loss can sting just as much as a loss of cold hard cash.

The thing is, if you evaluate the Traveler’s Dilemma in terms of Opportunity Cost, the definition of improving one’s position changes, and therefore so does the Nash equilibrium.  It’s a situation where gaining money and not losing opportunity are not the same thing – and this situation probably comes up fairly often in the real economy, which is why opportunity cost is important as an economic concept.  Rational choices and selfishness, therefore, cannot necessarily be evaluated successfully using only the rubric of amassing the most gain by the end of the game.  There are other measures of success, and people do use them.  Game theorists and economists alike would do well to remember that.

January 25, 2008 | Economics, Game Theory | 1 Comment

The Traveler’s Dilemma

Now for some real content. I came across this article in Scientific American about the Traveler’s Dilemma. To explain briefly, the TD is a game in which two players are each asked to select a number within certain boundaries (2 and 100, in the example). If both players select the same number, they are each rewarded that number of points. (In the example, each point is worth $1, which makes the game of more than academic interest.) If one player’s number is lower, they are each awarded points equal to the lower number, modified by a reward for the player who selected the lower number and a penalty for the player who selected the higher number – 2 points each, in the example. So, for instance, if you choose (48) and I choose (64), you get 50 points and I get 46 points.
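As a sanity check, the scoring rule can be written out in Python (assuming, per the arithmetic of the example above, a 2-point reward and penalty):

```python
BONUS = 2  # the reward/penalty implied by the 48/64 example

def td_payoffs(a, b):
    """Scores (player_a, player_b) under the Traveler's Dilemma rules."""
    if a == b:
        return a, a
    low = min(a, b)

    def score(x):
        # The low bidder collects the bonus; the high bidder pays the penalty.
        return low + BONUS if x == low else low - BONUS

    return score(a), score(b)

print(td_payoffs(48, 64))    # (50, 46), matching the example
print(td_payoffs(100, 100))  # (100, 100)
print(td_payoffs(99, 100))   # (101, 97)
```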

The intuition that I had upon reading the rules of this game was that it would be “best” for both players to choose (100). That is certainly true from a utilitarian point of view: (100, 100) results in the highest total number of points being given out – 200. The runners up are (99, 99), (100, 99), and (99, 100) with 198. However, there are two small problems – here’s the dilemma part – that prevent (100, 100) from being the “best” choice: one, the players are not allowed to communicate, and two, the (100, 99) and (99, 100) plays result in one player receiving 101 points – an improvement, for that player, over a 100 point reward.

So, the reasoning goes, if player one predicts that her opponent will play (100), she should play (99) in order to catch the 101 point reward. Her opponent, however, ought to use this same strategy, and also play (99), in which case player one ought to play (98) in order to trump her opponent, and so on and so forth. This reasoning degenerates to a play of the minimum number – in the example, (2). According to Basu, the author of the article, “Virtually all models used by game theorists predict this outcome for TD.”
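The unraveling argument can be checked mechanically. A small Python sketch (again assuming the standard TD payoffs with a 2-point bonus and penalty):

```python
def td_payoff(mine, theirs, bonus=2):
    """My payoff when I play `mine` against an opponent's `theirs`."""
    if mine == theirs:
        return mine
    low = min(mine, theirs)
    return low + bonus if mine == low else low - bonus

def best_response(theirs, low=2, high=100):
    """The play that maximizes my payoff against a fixed opponent play."""
    return max(range(low, high + 1), key=lambda p: td_payoff(p, theirs))

# Start from (100) and repeatedly best-respond: the reasoning in the
# article unravels all the way down to the minimum play.
play = 100
while best_response(play) != play:
    play = best_response(play)
print(play)  # 2

# And (2, 2) is the only pair of mutual best responses:
br = {t: best_response(t) for t in range(2, 101)}
mutual = [(a, b) for a in range(2, 101) for b in range(2, 101)
          if br[b] == a and br[a] == b]
print(mutual)  # [(2, 2)]
```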

However, reality does not follow these models. When people are asked to play the TD, many of them choose 100. Many of them choose other high numbers. Some seem to choose at random. Very few choose the “correct” solution – (2) – predicted by game theory. Something’s up.

Basu takes this to mean that all of our assumptions about rational behavior need to be questioned. With my philosophical background, I happen to have different assumptions about rational behavior than the mainstream, and so for me the results of the TD are not surprising in any way. But perhaps the best way to explain why the results do not surprise me is that I am a gambling man.

January 24, 2008 | Economics, Game Theory | 6 Comments