The All-Seeing Eye

Musings from the central tower…

Economics Foundation

One of my goals in this blog is to examine the interplay between certain postmodern theories and certain economic theories that, due to certain political and demographic realities, might never be considered together. In Constituting Feminist Subjects, Kathi Weeks points out that there is a “paradigm debate” between modernists and postmodernists that makes it difficult to constructively combine elements of, for instance, Foucault and Marx. However, someone whose area of interest is feminist politics would be very likely, in the course of their studies, to come across somewhat favorable accounts of both of these thinkers. Perhaps socialist feminism and postmodern feminism would be presented as opposing movements, but they oppose each other only in their approaches to ostensibly similar goals. Hence the logic of Weeks’ attempt to bring some degree of reconciliation to the two.

This same student of feminist thought would be very unlikely to encounter certain other theories, thinkers, or schools of thought – or, if they were encountered, they’d likely be presented negatively, misrepresented, or dismissed as irrelevant for one reason or another. This is not an attack on the feminist movement – merely an observation that, in any movement or school of thought, there are areas of particular interest that are studied in great depth, and areas of no particular interest that are not studied at all. I could easily level the same critique against economics. In fact, I arguably already have, when I said that libertarian thought needed to be reevaluated in the face of certain postmodern theories. I’ve spoken a bit about some of the formulations of power that will inform this project of deconstruction and reconstruction, so now I’d like to talk a little about the economic side of things. My project here is to begin to lay the foundations for my postmodern theory of economics.

March 2, 2008 Posted by | Economics | 1 Comment

The Prisoner’s Dilemma and the Panopticon

I’ll start this post with a brief recap:

The Prisoner’s Dilemma (PD) is a concept in game theory that describes the situation of two suspects who have been apprehended by the authorities. In the PD, the authorities need a confession in order to get the conviction they want, so they come up with a scenario to try to convince each suspect to confess. They offer each prisoner a reduced sentence in exchange for a confession that incriminates the other prisoner. If both prisoners stay silent – a play that is conventionally called “cooperate” – they both get a short sentence. If one prisoner chooses to “cooperate” but the other prisoner makes a confession – a play called “defect” – the defector goes free and the cooperator gets a full, long sentence. If both “defect” they both get a medium sentence.

As in the Traveler’s Dilemma, it is better in the Prisoner’s Dilemma for both players to cooperate – choosing (100) in the TD, or choosing to stay silent in the PD. Also as in the TD, if one player cooperates, the other player can increase his payoff by defecting – choosing (99), or choosing to confess. And finally, if one player defects – by choosing (2), or confessing – the other player can mitigate the harm done by also defecting.
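To make the structure concrete, here is a minimal sketch in Python. The exact sentence lengths are my own assumptions – the recap above only fixes their ordering (going free beats a short sentence, which beats a medium one, which beats a long one) – but any numbers with that ordering yield the same conclusion: defecting is the better play no matter what the other prisoner does.

```python
# Illustrative Prisoner's Dilemma sentences, in years (lower is better).
# The exact numbers are assumptions; only their ordering matters here.
SENTENCES = {
    ("cooperate", "cooperate"): (1, 1),   # both stay silent: short sentences
    ("cooperate", "defect"):    (10, 0),  # the silent prisoner gets the full sentence
    ("defect",    "cooperate"): (0, 10),  # the defector goes free
    ("defect",    "defect"):    (5, 5),   # both confess: medium sentences
}

def best_response(other_play):
    """The play that minimizes my sentence, given the other prisoner's play."""
    return min(("cooperate", "defect"),
               key=lambda my_play: SENTENCES[(my_play, other_play)][0])

for other in ("cooperate", "defect"):
    print(other, "->", best_response(other))  # prints 'defect' both times
```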

The Panopticon is a philosophical concept that describes the situation of prisoners in a more general sense. The original panopticon was Jeremy Bentham’s design for a physical structure that would house prisoners in such a way as to maximize the number of inmates who could be supervised by one warden. This design consisted of a central tower where an observer could remain unseen by the inmates but from which all of the inmates could be seen. The inmates were situated in individual cells surrounding the central tower, separate from each other.

The idea of the panopticon is that this situation – isolation and the perpetual possibility of surveillance – would produce within each prisoner a sort of self-surveillance. Each prisoner would know at all times that he could be under supervision, and so each prisoner would act at all times as though he were under supervision.

The difference between self-surveillance and regular surveillance, though, is that self-surveillance can be much more intrusive. After all, an outside observer can only see certain physical manifestations of our actions – in other words, can only see what our actions look like. We, on the other hand, can, in a sense, see what our actions are. We form the intent that turns a motion into a gesture, an activity into an action, a sound into a word. We can read our own minds.

This paves the way for what I like to call the panoptic model of power. The panoptic model of power says that power is constituted and magnified by the effects of isolation and self-surveillance. Isolation and self-surveillance are interlocking, mutually reinforcing forces – in other words, isolation helps constitute self-surveillance and self-surveillance helps constitute isolation. A good example of how this works is the Prisoner’s Dilemma.

The most obvious intersection of the PD and the panoptic model of power is isolation. Without isolation, the PD would not be a dilemma. Imagine the PD with both prisoners in the same room. They can talk to each other, they can see each other, and they know what the other one is doing at all times. In other words, you’ve removed the hope that one player can defect without the other player defecting, and so now the options are only (defect, defect) or (cooperate, cooperate). Between those two options, one is strictly better, and it’s the one that benefits both players the most – so there’s no dilemma.
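As a quick follow-up to the sketch above: restrict the game to the coordinated outcomes and the dilemma vanishes (again using my assumed sentence lengths).

```python
# With isolation removed, only the coordinated outcomes remain feasible.
SENTENCES = {("cooperate", "cooperate"): (1, 1),  # short sentences
             ("defect", "defect"): (5, 5)}        # medium sentences
best = min(SENTENCES, key=lambda plays: SENTENCES[plays])
print(best)  # ('cooperate', 'cooperate') -- strictly better for both players
```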

The self-surveillance part of the PD may not be as obvious. First we can look at the effects: the expected effect of the PD is that both prisoners confess. Is not confession a form of self-surveillance? It’s self-incrimination, certainly. One might expect the prisoners to provide additional information to the authorities in the course of their confession – details of the crime, perhaps the location of weapons used in the crime, perhaps details about accomplices, or motives, or planning. In other words, the PD extracts far more information than any surveillance the authorities could have placed upon the prisoners on their own.

To find the cause, we need only locate the central observer. In the panopticon, the prisoner exercises self-surveillance because the prisoner might be under surveillance. In the PD, the prisoner confesses because the other prisoner might confess. In the panopticon, the possibility of being watched leads the prisoner to watch himself. In the PD, the possibility of being incriminated leads the prisoner to incriminate himself.

The Prisoner’s Dilemma, thus, provides both an example of the panoptic model of power at work, and an insight into one of the mechanisms of the panoptic model of power.

February 17, 2008 Posted by | Game Theory, Power | 2 Comments

Free Will, Determinism, and Motivation

It is possible to look at the universe as a giant computer. If you know the software a computer is running and all of its inputs, you can predict the result. Similarly, one might think that if you knew all the rules of the universe – that is, if you understood physics perfectly and accurately – and if you knew the position and velocity of each particle in the universe, you would be able to predict the results – that is, how everything would turn out. Such a view is called determinism. When Newton first proposed that all matter obeyed certain laws, he was accused of atheism, because the obvious implication of his theories was determinism – a theory that leaves no place for God and no place for free will.

The question of free will vs. predestination or determinism is, of course, older than Newtonian physics, but physics is the way in which I first conceived of the question. One might ask, if God is all powerful, how can anyone act in a way God does not want? One might ask, if God knows all, then isn’t destiny written – isn’t there no way to change things? I have never been particularly into theology, but physics always fascinated and frightened me.

Some time ago I read Isaac Asimov’s Foundation trilogy, which despite its name consisted of approximately thirteen thousand books. This series is based upon the idea that there was a mathematician named Hari Seldon who was able to predict the course of history using mathematical models and a deep understanding of historical trends. I did not find this idea credible. People, after all, are far too complicated to reduce to a mathematical model. Aren’t they?

Can we predict what people will do in a given situation? If we can, what does that say about free will? If we can’t, how can we enact social change?

In Freakonomics, authors Levitt and Dubner describe a scenario in which parents were charged a small fee (I think it was $3) for being late to pick their children up from daycare. The result of this fee was that lateness increased dramatically. According to Levitt and Dubner, the fee was too low, and parents felt as though paying $3 justified their lateness. In other words, when no provision is made for lateness, the parents have to pick their kids up on time or risk their kids being scared and alone. When the daycare center charges for lateness, watching the kids for a few more minutes becomes just another service that the parent can buy, and buy they do. The point of this story is that incentives don’t necessarily work the way we think they will. There are complicated issues at stake even in something as simple as daycare. As we saw in the Traveler’s Dilemma, it’s not a simple task to predict how people will make their decisions, and sometimes rational behavior isn’t what theorists think is rational.
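Here’s one way to make that mechanism concrete – a toy model, not Levitt and Dubner’s, with every number invented for illustration. The idea is that a posted fee crowds out the moral cost of being late.

```python
# Toy model of the daycare story. With no fee, guilt deters lateness;
# a posted fee replaces the guilt with a price. All numbers are invented.
GUILT = 10.0                          # dollar-equivalent moral cost of lateness
late_values = [2.0, 4.0, 6.0, 12.0]   # what each parent would pay to be late

def late_pickups(fee):
    cost = GUILT if fee == 0 else fee  # the fee crowds out the guilt
    return sum(v > cost for v in late_values)

print(late_pickups(0))   # 1 -- only the most desperate parent is late
print(late_pickups(3))   # 3 -- a $3 fee makes lateness a bargain
print(late_pickups(15))  # 0 -- a steep enough fee deters everyone
```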

However, what both of these scenarios show is that despite the difficulty, despite the complications, it is possible to develop models and predictions for how people will behave. It is possible to find, with experimentation, the fee amount at which parents will begin picking their children up on time to avoid the fee. It is possible to find, with experimentation, the punishment amount at which people will begin picking the low number rather than the high number in the Traveler’s Dilemma. In other words, people’s behavior may be more complicated than we think, but it is not arbitrary. People act based on motivations, and although these motivations are often not obvious, they are there and they can be found.

Of course there will always be exceptions. There’s always room for free will. There will always be people on the far ends of the bell curve, people who defy expectations and act inexplicably. But in order to effect positive change in the world, we have to believe that we can predict behaviors for most people. We have to believe that there’s a number of dollars that will decrease the amount of late pickups from daycare. After all, isn’t this how we determine prison sentences? Isn’t there a number of years of incarceration that we believe will make the commission of murder unattractive to most potential criminals? Isn’t there a number of dollars that we believe will deter people from speeding and thus decrease the number of traffic accident fatalities?

I’ve never really believed in free will. I’ve always thought that everything is already determined by particle vectors, that everything I do is explainable by something that happened to me in childhood or by a set of circumstances that constrained my choice to such an extent that I didn’t really have a choice. And that’s why I think it is important for us to search for these motivations, search for these incentives, to build and discredit and rebuild these mathematical models to predict behavior. Because I want to set things up so that people have no choice but to make the right choices. I want a society full of people who pick (100) in the Traveler’s Dilemma and pick their children up on time from daycare, and if we’re going to have that, we have to pick the right game.

And that, in turn, is why it’s worth looking at something like the Traveler’s Dilemma and finding out that people will cooperate with each other as long as the risk for doing so isn’t too high. It’s why it’s worth looking at the daycare paradox to find out how much guilt is worth. It’s why it’s worth asking why people follow their king or their president against their best interests. We need to find out what motivates people. And in exploring incentives and economics, game theory and modeling, philosophy and psychoanalysis, that’s exactly what I hope to do. I hope to find a solution, a way to set up society so that we’re all playing a game that everyone can win.

In closing: right now, I feel that most people are not playing a game that everyone can win. There’s a game called the Prisoner’s Dilemma. In this game, two prisoners are in the custody of law enforcement, but the police don’t have enough evidence to convict them of a serious crime. Each of them is told that they are both suspects and given the following options. If one prisoner gives up the other, that prisoner will go free and the other will go to jail for a long time. If neither of them confesses, they will both serve a short sentence for whatever smaller crimes the police can put on them. If they both confess, they’ll both serve a medium sentence. The implication of the game is that it is better for each player, no matter what the other player does, to confess. Unlike the Traveler’s Dilemma, the Prisoner’s Dilemma tends to lead to uncooperative behavior – it is much better for each player to screw the other player over, whereas in the TD screwing the other player over leads to a greater loss.

The Prisoner’s Dilemma game describes many situations in modern life – situations in which people have a great incentive to hurt other people. If there is some way to change the rules of the game so that, like in the Traveler’s Dilemma, or many other games, people have an incentive to help other people, then everyone could benefit immensely. Changing the rules of the game is what I’m aiming for, but it’s going to take a lot of searching to find the right game and a lot of convincing to get people to play it.

February 10, 2008 Posted by | About, Economics, Game Theory | 2 Comments

Traveler’s Dilemma and Opportunity Cost

As a follow-up to my last post, I thought now would be a good time to say some things about opportunity cost, or The OC.  Please do not confuse this with any other things called The OC.

Opportunity Cost is an economic analytical tool – a measure of the cost of a missed opportunity.  It goes like this.  Let’s say you have a dollar and you’re standing on the street at a hot dog cart.  The cart is selling pretzels for $1 and hot dogs for $1.  If you buy the hot dog, you can’t buy the pretzel.  Therefore the opportunity cost of buying the hot dog is one pretzel.  Conversely if you buy the pretzel you can’t buy the hot dog.  So the opportunity cost of the pretzel is one hot dog.  Simple, right?  At first glance this seems a trivial and reductive measure for an economist to be thinking about, but in real life, when applied to more complex situations, we can see the value of considering the opportunity cost.

Let’s try another example.  You have a dollar but you aren’t hungry, and you’ve got a year.  You can keep the dollar in your pocket and at the end of the year you’ll have a dollar.  Or you can put the dollar in a bank and at the end of the year you’ll have, let’s say, $1.05.  The average person would think that if they kept the dollar in their pocket, they haven’t lost anything, and in one sense this is true.  However, they have missed something – the opportunity to earn $.05.  The opportunity cost of holding onto the dollar was five cents.  Doesn’t seem like much, but what if it’s a thousand dollars?  A million?
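The scaling is easy to check – a two-liner, using the 5% rate assumed in the example above.

```python
# Missed interest scales linearly with the amount left sitting in your pocket.
rate = 0.05  # the assumed 5% rate from the example above
for amount in (1, 1_000, 1_000_000):
    print(f"${amount:,} held as cash -> ${amount * rate:,.2f} in missed interest")
```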

The point is, when there’s money at stake it pays to consider the opportunities that you have when making choices, because in some sense, missing the opportunity to earn money is sort of like losing money, even if it’s money you never actually had.  An opportunity is worth something.  If you don’t believe me, play poker.  If you fold a hand, and it turns out at the end that you would have won if you had stayed in, you will feel like you have lost something.  What you’ve done is missed an opportunity, and the loss you’re feeling is the opportunity cost.  Folding may have been the right decision based on the odds, but you’ll still feel bad that you didn’t get the pot.

So what does opportunity cost have to do with the Traveler’s Dilemma?  Well, it’s another way of evaluating possible plays in the TD, and it demonstrates a major flaw in the models used by game theorists to “solve” the TD.

To recap, in the TD, game theory says that logically speaking, a player ought to play (2).  Many people intuitively feel that they should play higher, and (100) is perhaps the most common play, with (95) to (100) comprising the majority of plays in some experiments.  According to Kaushik Basu, (2) is the correct or best play, because of a game theory concept known as the Nash Equilibrium.  Further, (100) is the worst play, because it is the only play in the game that is “beaten” by every other play.  In other words, if player 1 plays (100) and player 2 plays anything else, player 2 will be rewarded more points than player 1.  If this logic is to be believed, (2) is a better play than (100), and people, if they are acting “rationally,” ought to play it.
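Basu’s claim is easy to check by brute force. Below is a sketch that enumerates every pair of plays and keeps only those where neither player can gain by deviating alone – the Nash equilibria. The 2-point reward and penalty match the numbers from the original Traveler’s Dilemma post.

```python
# Brute-force search for the Nash equilibria of the Traveler's Dilemma,
# with plays 2..100 and a reward/penalty of 2 points.
LOW, HIGH, BONUS = 2, 100, 2

def payoff(mine, theirs):
    """My points for one round: the lower play, adjusted by the bonus."""
    if mine == theirs:
        return mine
    low = min(mine, theirs)
    return low - BONUS if mine > theirs else low + BONUS

def is_nash(a, b):
    """Neither player can do strictly better by deviating unilaterally."""
    others = range(LOW, HIGH + 1)
    return (all(payoff(a, b) >= payoff(x, b) for x in others) and
            all(payoff(b, a) >= payoff(y, a) for y in others))

print([(a, b) for a in range(LOW, HIGH + 1)
              for b in range(LOW, HIGH + 1) if is_nash(a, b)])  # [(2, 2)]
```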

But let’s look at the OC to see if that’s true.  Let’s say player 1 plays (100) and player 2 plays (2).  The rewards, then, are 0 points to player 1 and 4 points to player 2.  However, given player 2’s play of (2), the highest score player 1 could possibly have achieved by making a different play is 2, by playing (2).  Any play other than (2) results in a score of 0.  So, player 1 lost 2 points by not playing (2) – or, to put it another way, his opportunity cost for playing (100) was 2 points.

Player 2, however, is in a much worse situation.  He played (2) and got 4 points.  Given player 1’s play of (100), the highest score possible for player 2 was 101, with a play of (99).  In other words, by playing differently player 2 could have gotten 101 points, but instead he got 4.  That means that his opportunity cost for playing (2) was 97 points.

So, in the above example, on the face of it, it seems like player 2 won – he got 4 points, while player 1 got none.  However, if you look at it a different way, player 2 lost 97 points while player 1 only lost 2.  If you consider the scale of a loss of 2 vs. a loss of 97, you see that a play of (100) is much less risky than a play of (2).

In fact, the opportunity cost of (100) is small no matter what the opponent does: against an opponent’s play of (2), the OC of (100) is 2; against any play from (3) to (99), it is 3; and against (100), it is 1.  The OC of (2), however, is (opponent’s play – 3).  That means that for any opponent’s play of (6) or above, the opportunity cost of (2) is higher than that of the opponent’s play.
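The whole table above can be generated mechanically.  Here’s a sketch that defines opportunity cost as the gap between the payoff you got and the best payoff available against the opponent’s actual play – what game theorists call regret.

```python
# Opportunity cost (regret) in the Traveler's Dilemma: the best payoff
# available against the opponent's actual play, minus the payoff received.
LOW, HIGH, BONUS = 2, 100, 2

def payoff(mine, theirs):
    if mine == theirs:
        return mine
    low = min(mine, theirs)
    return low - BONUS if mine > theirs else low + BONUS

def opportunity_cost(mine, theirs):
    best = max(payoff(x, theirs) for x in range(LOW, HIGH + 1))
    return best - payoff(mine, theirs)

print(opportunity_cost(100, 2))   # 2  -- playing (100) against (2)
print(opportunity_cost(2, 100))   # 97 -- playing (2) against (100)
print(opportunity_cost(100, 50))  # 3  -- the flat OC of (100) mid-range
print(opportunity_cost(2, 50))    # 47 -- opponent's play minus 3
```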

So the (2) player almost always loses more money than his opponent – not money the players actually possessed, but money that they could have – and perhaps should have – earned.  Ask any economist or poker player: that loss can sting just as much as a loss of cold, hard cash.

The thing is, if you evaluate the Traveler’s Dilemma in terms of opportunity cost, the definition of improving one’s position changes, and therefore so does the Nash equilibrium.  It’s a situation where gaining money and not losing opportunity are not the same thing – and this situation probably comes up fairly often in the real economy, which is why opportunity cost is important as an economic concept.  Rational choices and selfishness, therefore, cannot necessarily be evaluated successfully using only the rubric of amassing the most gain by the end of the game.  There are other measures of success, and people do use them.  Game theorists and economists alike would do well to remember that.

January 25, 2008 Posted by | Economics, Game Theory | 1 Comment

The Traveler’s Dilemma

Now for some real content. I came across this article in Scientific American about the Traveler’s Dilemma. To explain briefly, the TD is a game in which two players are each asked to select a number within certain boundaries (2 and 100, in the example). If both players select the same number, they are rewarded that number of points. (In the example, each point is worth $1, which makes the game of more than academic interest.) If one player’s number is lower, they are each awarded points equal to the lower number, modified by a reward (2 points, in the example) for the player who selected the lower number and an equal penalty for the player who selected the higher number. So, for instance, if you choose (48) and I choose (64), you get 50 points and I get 46 points.
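For concreteness, here is the payoff rule as I read it, sketched in Python; the 2-point reward and penalty are inferred from the worked example above.

```python
# The Traveler's Dilemma payoff rule, with plays bounded by 2 and 100 and a
# 2-point reward/penalty inferred from the worked example in the post.
LOW, HIGH, BONUS = 2, 100, 2

def payoffs(p1, p2):
    """Return (player 1's points, player 2's points) for one round."""
    if p1 == p2:
        return p1, p2
    low = min(p1, p2)
    # The lower claimant collects the bonus; the higher claimant pays the penalty.
    return (low + BONUS, low - BONUS) if p1 < p2 else (low - BONUS, low + BONUS)

print(payoffs(48, 64))    # (50, 46) -- matches the example above
print(payoffs(100, 100))  # (100, 100) -- the utilitarian best: 200 points total
```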

The intuition that I had upon reading the rules of this game was that it would be “best” for both players to choose (100). That is certainly true from a utilitarian point of view: (100, 100) results in the highest total number of points being given out – 200. The runners up are (99, 99), (100, 99), and (99, 100) with 198. However, there are two small problems – here’s the dilemma part – that prevent (100, 100) from being the “best” choice: one, the players are not allowed to communicate, and two, the (100, 99) and (99, 100) plays result in one player receiving 101 points – an improvement, for that player, over a 100 point reward.

So, the reasoning goes, if player one predicts that her opponent will play (100), she should play (99) in order to catch the 101 point reward. Her opponent, however, ought to use this same strategy, and also play (99), in which case player one ought to play (98) in order to trump her opponent, and so on and so forth. This reasoning degenerates to a play of the minimum number – in the example, (2). According to Basu, the author of the article, “Virtually all models used by game theorists predict this outcome for TD.”
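That chain of reasoning can be written as a loop: start at (100) and repeatedly replace the predicted play with the best response to it. A sketch, using the same payoff rule as above:

```python
# The unraveling argument: repeatedly best-respond to the predicted play.
LOW, HIGH, BONUS = 2, 100, 2

def payoff(mine, theirs):
    if mine == theirs:
        return mine
    low = min(mine, theirs)
    return low - BONUS if mine > theirs else low + BONUS

def best_response(theirs):
    return max(range(LOW, HIGH + 1), key=lambda mine: payoff(mine, theirs))

play = HIGH
while best_response(play) != play:
    play = best_response(play)  # 99, 98, 97, ... each play undercuts the last
print(play)  # 2 -- the outcome game theory predicts
```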

However, reality does not follow these models. When people are asked to play the TD, many of them choose 100. Many of them choose other high numbers. Some seem to choose at random. Very few choose the “correct” solution – (2) – predicted by game theory. Something’s up.

Basu takes this to mean that all of our assumptions about rational behavior need to be questioned. With my philosophical background, I happen to have different assumptions about rational behavior than the mainstream, and so for me the results of the TD are not surprising in any way. But perhaps the best way to explain why the results do not surprise me is that I am a gambling man.

January 24, 2008 Posted by | Economics, Game Theory | 6 Comments