The All-Seeing Eye

Musings from the central tower…

Panoptic Power and Competition

It is fairly uncontroversial in classical economic theory that free and fair competition is often vastly more productive than limited competition or no competition at all.  Many economists view a monopoly as a market failure and believe that antitrust laws must be created and enforced in order to preserve competition.  So-called “no-bid contracts,” in which firms are granted lucrative government contracts based on cronyism rather than competition, are slammed, correctly, for costing a great deal more than competitively bid contracts would.  As a general rule, when agents compete on the market, the goods or services they sell become better and/or less expensive – in other words, competition allows buyers to get more for less.

How is this related to panoptic power?  Economic competition conforms closely to the panoptic model of power.  Let us compare economic competition to the two hallmarks of the panopticon: self-surveillance and isolation.

In the panopticon, self-surveillance is produced within a subject by causing that subject to behave as though at any moment she might be under surveillance by a central observer.  Who is the central observer of the competitive market?  The consumer.  At any time, the consumer might evaluate the quality of the products offered up for sale by the competitors.  Competitors earn reputations based on the quality of their products, and these reputations greatly affect their profits.  The consumer is also somewhat unpredictable, in that one never knows exactly what a consumer’s preferences might be.  Your innovative new product might become the next iPod – or it might become the next Betamax.  Competitors must strive toward innovation, invention, and reinvention, and must also master marketing, and even then success is not guaranteed.  The point here is that competitors are always being evaluated, and they may live or die based on the results of these evaluations.  This is a powerful incentive toward self-surveillance.

In the panopticon, isolation is caused by physically separating prisoners in individual cells.  In a competitive market economy, isolation comes in the form of patents, trade secrets, and the information asymmetries that arise when competing agents each try to find and maintain a competitive edge.  If you own a restaurant, you might make the best marinara sauce in the county, but if you give away your secret recipe, that will not be the case for long.  Isolation also comes from the fact that individual agents may earn more profit through competition than through cooperation – because there will be fewer people to share the wealth with.  Laws against cartelization and other forms of corporate cooperation can produce isolation effects.  When workers are competing for jobs, they may become isolated from each other because some wish to go on strike for higher wages while others are willing to take their jobs.

If the productive power of a competitive market is related to the productive power of the panopticon, then do the same downsides exist?  Sure.  One of these is that Prisoner’s Dilemma-like (PD-like) situations may arise in which competitors end up reaching a suboptimal equilibrium state because of their isolation.  An example of this is an industry in which advertising costs consume a significant percentage of the industry’s revenue but do not effect a significant redistribution of market share for any one firm nor attract a significant number of new buyers to the market.  Each firm would be better off if no firm advertised, but if any firm advertises, they all must in order to avoid losses.  In the end every firm advertises, and the entire industry essentially throws money away.  The tobacco industry is one such example (although you won’t hear me mourning its suboptimal profits).  There are also cases like railroads or utilities where, without collusion or intervention, redundant services may be established (imagine two competing rail lines running parallel to each other).
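To make the advertising dilemma concrete, here is a minimal sketch in Python of the kind of payoff structure I have in mind.  The specific profit numbers are invented for illustration – they are not drawn from any real industry data – but they capture the pattern: advertising is the better reply no matter what the rival does, yet both firms would be richer if neither advertised.

```python
# Illustrative profits for two firms (the numbers are assumptions, chosen only
# to reproduce the dilemma described above). Each entry is (Firm A, Firm B).
payoffs = {
    (True, True):   (60, 60),    # both advertise: ad budgets spent, shares unchanged
    (True, False):  (105, 40),   # only A advertises: modest share gain for A
    (False, True):  (40, 105),   # only B advertises: modest share gain for B
    (False, False): (100, 100),  # neither advertises: both keep the ad budget
}

def best_reply(opponent_advertises: bool) -> bool:
    """Does advertising maximize firm A's profit, given firm B's choice?"""
    return payoffs[(True, opponent_advertises)][0] > payoffs[(False, opponent_advertises)][0]

# Advertising is the better reply whether or not the rival advertises...
assert best_reply(True) and best_reply(False)
# ...so both firms advertise and earn 60, even though (100, 100) was available.
print("equilibrium:", payoffs[(True, True)], "cooperative outcome:", payoffs[(False, False)])
```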

Because the effects of panoptic power are generally experienced as difficult and unpleasant, free markets tend toward a mix of competition and cooperation.  Many agents would like to collude with each other in order to avoid the panoptic effects – in other words, to break isolation and resist the power of the panopticon.  A cartel is a good example of this – competitors get together and decide that rather than compete with each other, they’ll fix prices and production at a certain level so they can all profit equally.  These cartels often result in higher prices and lower quality and quantity of goods produced – they are less productive, but they make things easier for those involved.  Labor unions are a form of cartel for workers, who agree to band together to achieve higher labor prices (wages) and shorter working hours (less production).  Such cartels and unions always face the risk that one member will defect or a new agent will enter the market, destroying the cartel or union and restoring competition – which is why markets settle into an equilibrium somewhere between pure competition and pure cooperation.

The free market, which depends on competition for its functioning, is thus an example of panoptic power at work.  This is an important insight: many people experience panoptic power as something that imprisons them, which calls into question how much “freedom” agents in the free market actually have, and it provides a theoretical framework for contrasting, rather than conflating, liberty and productivity.

October 25, 2008 | Posted in Economics, Power

Monopoly

The name of the game is Monopoly.  The object of the game is to win.  You win by having the most net worth at the end of the game or by being the last player left after all other players have gone bankrupt.

Basically, your goal is to collect money and property – as much as possible, by any means available.

Most people are probably familiar with Monopoly, which makes it a good example for a thought experiment.

Imagine that four friends are playing Monopoly and a fifth friend shows up and asks to get into the game even though some number of turns have already passed.  How can this fifth friend be integrated into the game?

One way is to start the person the way everyone else started: at Go, with $1500 and a pair of dice.  The beginning is the logical place to start, after all.  This method presents problems, though.  The four original players have had many turns to increase their wealth and their earning potential.  Many good properties have already been bought.  Monopolies may already have been established.  Depending on how late in the game it is, this fifth player may be at a great disadvantage.  Imagine if 90% of the properties on the board are already owned.  The fifth player has virtually no chance of winning – of surviving on the board – under these circumstances.

Another way is to grant the person some portion of the money/property on the board.  You could total the value of the properties each player owns, average the totals, and then randomly assign the new player unowned properties until that average is approximated.  You could do the same for money.  However, if there isn’t enough unowned property to do this, you’d have to take property away from some of the players who are already playing.  How can this be done fairly?  Should the property be taken from the winning player(s), or equally from all?
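Here is a rough sketch of that second option in Python.  The property names and values are placeholders, and real Monopoly accounting (houses, mortgages, monopolies) is ignored – the point is just to show where the procedure breaks down when the unowned pool is too small.

```python
import random

def buy_in(player_totals: list[int], unowned: dict[str, int]) -> list[str]:
    """Grant the new player random unowned properties until their total value
    approximates the average holdings of the existing players."""
    target = sum(player_totals) / len(player_totals)
    pool = list(unowned.items())
    random.shuffle(pool)
    granted, total = [], 0
    for name, value in pool:
        if total >= target:
            break
        granted.append(name)
        total += value
    if total < target:
        # Not enough unowned property exists -- the case where something
        # would have to be taken from the players already in the game.
        print(f"shortfall: {target - total:.0f} in property value")
    return granted

# Made-up holdings for the four original players and a small unowned pool:
existing = [1200, 900, 1500, 600]
pool = {"Baltic Avenue": 60, "Vermont Avenue": 100, "States Avenue": 140,
        "Indiana Avenue": 220, "Marvin Gardens": 280}
print(buy_in(existing, pool))
```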

Another way is to simply restart the game.  This isn’t necessarily fair to the players who were doing well – their good luck and good strategy end up going unrewarded.  However, any player who thinks he or she would have won can at least declare victory in this case.  In my experience, this is the most commonly chosen option for inserting a new player into an existing game, for the simple reason that usually at least half of the players are not winning, and the choice of methods usually comes down to a loosely democratic vote: all of the players who are losing vote to restart.

Aside from the highly practical use that this line of thinking has in actually inserting new players into existing games – a situation I have encountered in life from time to time – we can also consider the larger implications, such as what happens when we insert new players into a real economic system – the national economy, for instance.  Imagine that half the population of some country had been playing a game analogous to Monopoly – attempting to acquire money, property, and personal enrichment – for years, or decades, or centuries.  Imagine then that the other half demanded to be inserted into the game.  How would we fairly insert these newcomers?

Obviously this question is not simply theoretical.  Various large population groups have been granted property rights over the course of our history – women, for instance, and African Americans – rights which amounted to $1500 and a pewter thimble.  These groups were then allowed to compete freely with the people who already owned almost all of the property, people who were busily executing the winning Monopoly strategies of bankrupting whomever they could and consolidating and developing their assets.

Just letting someone into the game doesn’t establish fairness.  These groups weren’t really given a chance.  Even those who did start off with some property – some former slaves were granted land during Reconstruction, and women could always inherit an estate from a husband or father – were still at a disadvantage.  Imagine starting a game of Monopoly with a house on Baltic Avenue when another player has hotels on Boardwalk and Park Place.

In the game of the American economy, women, African Americans, and immigrant groups have had to claw their way up from the bottom with the help of luck, charity, and government aid.  It’s no wonder that the players who are already winning want to deny entry to immigrants, or why they fought to keep women from having the right to own property.  It’s no wonder that the players who aren’t doing so well want to restart the game and distribute everything evenly.  But when we assess some data – the wage gap between men and women, for instance – it’s important to keep in mind that some of the players started late.  If women owned half the property and controlled half the wealth in the American economy, would there still be a wage gap?

And before we say that some group has had enough opportunity to improve their lot, let’s ask ourselves how many turns we would need before we caught up in a game of Monopoly if we started fifty turns late.

Again, no solution presents itself.  What is fairness?  How can all players be satisfied with a solution?  Certainly whatever happens, it will require the cooperation of people who don’t currently acknowledge that there is a significant problem with how the game was set up in the first place.

June 4, 2008 | Posted in Economics, Feminism, Game Theory

Economics Foundation

One of my goals in this blog is to examine the interplay between certain postmodern theories and certain economic theories that, due to certain political and demographic realities, might never be considered together. In Constituting Feminist Subjects, Kathi Weeks points out that there is a “paradigm debate” between modernists and postmodernists that makes it difficult to constructively combine elements of, for instance, Foucault and Marx. However, someone whose area of interest is feminist politics would be highly likely to, in their course of study, come across somewhat favorable accounts of both of these thinkers. Perhaps socialist feminism and postmodern feminism would be presented as opposing movements, but they oppose each other only in their approach to meeting ostensibly similar goals. Thus the logic of Weeks’ attempt to bring some degree of reconciliation to the two.

This same student of feminist thought would be very unlikely to encounter certain other theories, thinkers, or schools of thought – or, if they were encountered, they would likely be presented negatively, misrepresented, or dismissed as irrelevant for one reason or another. This is not an attack on the feminist movement – merely an observation that, in any movement or school of thought, there are areas of particular interest that are studied in great depth, and there are areas of no particular interest that are not studied at all. I could easily level the same critique against economics. In fact, I arguably already have, when I said that Libertarian thought needed to be reevaluated in the face of certain postmodern theories. I’ve spoken a bit about some of the formulations of power that will inform this project of deconstruction and reconstruction, so now I’d like to talk a little about the economic side of things. My project here is to begin to lay the foundations for my postmodern theory of economics.


March 2, 2008 | Posted in Economics

Traveler’s Dilemma and Opportunity Cost

As a follow-up to my last post, I thought now would be a good time to say some things about opportunity cost, or The OC.  Please do not confuse this with any other things called The OC.

Opportunity Cost is an economic analytical tool – a measure of the value of the best alternative you give up when you make a choice.  It goes like this.  Let’s say you have a dollar and you’re standing on the street at a hot dog cart.  The cart is selling pretzels for $1 and hot dogs for $1.  If you buy the hot dog, you can’t buy the pretzel.  Therefore the opportunity cost of buying the hot dog is one pretzel.  Conversely, if you buy the pretzel, you can’t buy the hot dog.  So the opportunity cost of the pretzel is one hot dog.  Simple, right?  At first glance this seems a trivial and reductive measure for an economist to be thinking about, but when applied to more complex, real-life situations, we can see the value of considering the opportunity cost.

Let’s try another example.  You have a dollar but you aren’t hungry, and you’ve got a year.  You can keep the dollar in your pocket and at the end of the year you’ll have a dollar.  Or you can put the dollar in a bank and at the end of the year you’ll have, let’s say, $1.05.  The average person would think that if they kept the dollar in their pocket, they haven’t lost anything, and in one sense this is true.  However, they have missed something – the opportunity to earn $.05.  The opportunity cost of holding onto the dollar was five cents.  Doesn’t seem like much, but what if it’s a thousand dollars?  A million?
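A quick sketch of that arithmetic, scaling the same assumed 5% rate up from a dollar to a million dollars (the rate and amounts are just the illustrative numbers from the example above):

```python
def holding_cost(principal: float, rate: float = 0.05, years: int = 1) -> float:
    """Interest forgone by keeping cash idle instead of banking it at `rate`."""
    return principal * ((1 + rate) ** years - 1)

for amount in (1, 1_000, 1_000_000):
    print(f"${amount:,} held idle for a year: opportunity cost ${holding_cost(amount):,.2f}")
# $1 costs you a nickel; $1,000,000 costs you $50,000.
```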

The point is, when there’s money at stake it pays to consider the opportunities that you have when making choices, because in some sense, missing the opportunity to earn money is sort of like losing money, even if it’s money you never actually had.  An opportunity is worth something.  If you don’t believe me, play poker.  If you fold a hand, and it turns out at the end that you would have won if you had stayed in, you will feel like you have lost something.  What you’ve done is missed an opportunity, and the loss you’re feeling is the opportunity cost.  Folding may have been the right decision based on the odds, but you’ll still feel bad that you didn’t get the pot.

So what does opportunity cost have to do with the Traveler’s Dilemma?  Well, it’s another way of evaluating possible plays in the TD, and it demonstrates a major flaw in the models used by game theorists to “solve” the TD.

To recap, in the TD, game theory says that, logically speaking, a player ought to play (2).  Many people intuitively feel that they should play higher, and (100) is perhaps the most common play, with (95) to (100) comprising the majority of plays in some experiments.  According to Kaushik Basu, (2) is the correct or best play, because of a game theory concept known as the Nash equilibrium.  Further, (100) is the worst play, because it is the only play in the game that is “beaten” by every other play.  In other words, if player 1 plays (100) and player 2 plays anything else, player 2 will be awarded more points than player 1.  If this logic is to be believed, (2) is a better play than (100), and people, if they are acting “rationally,” ought to play it.

But let’s look at the OC to see if that’s true.  Let’s say player 1 plays (100) and player 2 plays (2).  The rewards, then, are 0 points to player 1 and 4 points to player 2.  However, given player 2’s play of (2), the highest score player 1 could possibly have achieved by making a different play is 2, by playing (2).  Any play other than (2) results in a score of 0.  So, player 1 lost 2 points by not playing (2) – or, to put it another way, his opportunity cost for playing (100) was 2 points.

Player 2, however, is in a much worse situation.  He played (2) and got 4 points.  Given player 1’s play of (100), the highest score possible for player 2 was 101, with a play of (99).  In other words, by playing differently player 2 could have gotten 101 points, but instead he got 4.  That means that his opportunity cost for playing (2) was 97 points.

So, in the above example, on the face of it, it seems like player 2 won – he got 4 points, while player 1 got none.  However, if you look at it a different way, player 2 lost 97 points while player 1 lost only 2.  If you consider the scale of a loss of 2 vs. a loss of 97, you see that a play of (100) is much less risky than a play of (2).

In fact, the OC of playing (100) is never large: against an opponent’s play of (2) it is 2, against any play from (3) to (99) it is 3, and against (100) it is 1.  The OC of playing (2), however, is (opponent’s play – 3).  That means that for any opponent’s play of (6) or above, the opportunity cost of (2) is higher than the opportunity cost the opponent incurs.
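If you want to check these numbers yourself, here is a short Python sketch.  It assumes the standard TD payoff described in my earlier post, with a reward/penalty of 2 – which is also what the 0, 4, and 101 figures above imply.

```python
def payoff(mine: int, theirs: int, bonus: int = 2) -> int:
    """Traveler's Dilemma payoff to `mine` against `theirs`, reward/penalty of 2."""
    if mine == theirs:
        return mine
    return min(mine, theirs) + (bonus if mine < theirs else -bonus)

def opportunity_cost(mine: int, theirs: int) -> int:
    """Points forgone relative to the best possible reply to the opponent's play."""
    best = max(payoff(alt, theirs) for alt in range(2, 101))
    return best - payoff(mine, theirs)

# The example above: player 1 plays (100), player 2 plays (2).
assert payoff(100, 2) == 0 and payoff(2, 100) == 4
assert opportunity_cost(100, 2) == 2    # best reply to (2) is (2), worth 2
assert opportunity_cost(2, 100) == 97   # best reply to (100) is (99), worth 101

# The general pattern: the OC of (100) never exceeds 3, while the OC of (2)
# grows with whatever the opponent plays.
for theirs in (2, 50, 99, 100):
    print(theirs, opportunity_cost(100, theirs), opportunity_cost(2, theirs))
```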

So the (2) player almost always loses more money than his opponent – not that the players are losing money that they actually possessed, but money that they could have – and perhaps should have – earned.  If you ask any economist or poker player, that loss can sting just as much as a loss of cold hard cash.

The thing is, if you evaluate the Traveler’s Dilemma in terms of Opportunity Cost, the definition of improving one’s position changes, and therefore so does the Nash equilibrium.  It’s a situation where gaining money and not losing opportunity are not the same thing – and this situation probably comes up fairly often in the real economy, which is why opportunity cost is important as an economic concept.  Rational choices and selfishness, therefore, cannot necessarily be evaluated successfully using only the rubric of amassing the most gain by the end of the game.  There are other measures of success, and people do use them.  Game theorists and economists alike would do well to remember that.

January 25, 2008 | Posted in Economics, Game Theory

The Traveler’s Dilemma

Now for some real content. I came across this article in Scientific American about the Traveler’s Dilemma. To explain briefly, the TD is a game in which two players are each asked to select a number within certain boundaries (2 and 100, in the example). If both players select the same number, they are each awarded that number of points. (In the example, each point is worth $1, which makes the game of more than academic interest.) If one player’s number is lower, they are each awarded points equal to the lower number, modified by a reward (2 points, in the example) for the player who selected the lower number and an equal penalty for the player who selected the higher number. So, for instance, if you choose (48) and I choose (64), you get 50 points and I get 46 points.
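Here is a minimal sketch of that payoff rule in Python, assuming the reward and penalty are both 2 points, which is what the (48, 64) example implies:

```python
def td_payoffs(a: int, b: int, low: int = 2, high: int = 100, bonus: int = 2) -> tuple[int, int]:
    """Payoffs to players A and B: both get the lower claim, with `bonus`
    added for the lower claimant and subtracted from the higher one."""
    assert low <= a <= high and low <= b <= high
    if a == b:
        return a, a
    m = min(a, b)
    return (m + bonus, m - bonus) if a < b else (m - bonus, m + bonus)

# The worked example from the paragraph above:
assert td_payoffs(48, 64) == (50, 46)
```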

The intuition that I had upon reading the rules of this game was that it would be “best” for both players to choose (100). That is certainly true from a utilitarian point of view: (100, 100) results in the highest total number of points being given out – 200. The runners-up are (99, 99), (100, 99), and (99, 100), with 198. However, there are two small problems – here’s the dilemma part – that prevent (100, 100) from being the “best” choice: one, the players are not allowed to communicate, and two, the (100, 99) and (99, 100) plays result in one player receiving 101 points – an improvement, for that player, over a 100-point reward.

So, the reasoning goes, if player one predicts that her opponent will play (100), she should play (99) in order to catch the 101-point reward. Her opponent, however, ought to use this same strategy and also play (99), in which case player one ought to play (98) in order to trump her opponent, and so on and so forth. This reasoning degenerates to a play of the minimum number – in the example, (2). According to Basu, the author of the article, “Virtually all models used by game theorists predict this outcome for TD.”
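Here is a quick sketch of that unraveling, again assuming the reward/penalty of 2 from the example: each player best-responds to the other’s last play, and the best responses march straight down to (2).

```python
def payoff(mine: int, theirs: int, bonus: int = 2) -> int:
    """Traveler's Dilemma payoff with a reward/penalty of 2."""
    if mine == theirs:
        return mine
    return min(mine, theirs) + (bonus if mine < theirs else -bonus)

def best_response(theirs: int) -> int:
    """The play that would maximize my payoff if I knew the opponent's number."""
    return max(range(2, 101), key=lambda mine: payoff(mine, theirs))

# Start from the intuitive play of (100) and let each side out-think the other:
play, path = 100, [100]
while best_response(play) != play:
    play = best_response(play)
    path.append(play)
print(path)  # [100, 99, 98, ..., 3, 2] -- the unraveling stops at (2), the Nash equilibrium
```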

However, reality does not follow these models. When people are asked to play the TD, many of them choose 100. Many of them choose other high numbers. Some seem to choose at random. Very few choose the “correct” solution – (2) – predicted by game theory. Something’s up.

Basu takes this to mean that all of our assumptions about rational behavior need to be questioned. With my philosophical background, I happen to have different assumptions about rational behavior than the mainstream, and so for me the results of the TD are not surprising in any way. But perhaps the best way to explain why the results do not surprise me is that I am a gambling man.

January 24, 2008 | Posted in Economics, Game Theory