The All-Seeing Eye

Musings from the central tower…

Free Will, Determinism, and Motivation

It is possible to look at the universe as a giant computer. If you know the software a computer is running and all of its inputs, you can predict the result. Similarly, one might think that if you knew all the rules of the universe – that is, if you understood physics perfectly and accurately – and if you knew the position and velocity of every particle in the universe, you could predict the outcome – that is, how everything would turn out. Such a view is called determinism. When Newton first proposed that all matter obeyed certain laws, he was accused of atheism, because the obvious implication of his theories was determinism, a theory that leaves no place for God and no place for free will.

The question of free will vs. predestination or determinism is, of course, older than Newtonian physics, but physics is the way in which I first conceived of the question. One might ask, if God is all-powerful, how can anyone act in a way God does not want? One might ask, if God knows all, then isn’t destiny written – isn’t there no way to change things? I have never been particularly into theology, but physics has always fascinated and frightened me.

Some time ago I read Isaac Asimov's Foundation trilogy, which despite its name consisted of approximately thirteen thousand books. The series is built on the idea of a mathematician, Hari Seldon, who was able to predict the course of history using mathematical models and a deep understanding of historical trends. I did not find this idea credible. People, after all, are far too complicated to reduce to a mathematical model. Aren't they?

Can we predict what people will do in a given situation? If we can, what does that say about free will? If we can’t, how can we enact social change?

In Freakonomics, authors Levitt and Dubner describe a scenario in which parents were charged a small fee (I think it was $3) for being late to pick their children up from daycare. The result of this fee was that lateness increased dramatically. According to Levitt and Dubner, the fee was too low, and parents felt as though paying $3 justified their lateness. In other words, when no provision is made for lateness, the parents have to pick their kids up on time or risk their kids being scared and alone. When the daycare center charges for lateness, watching the kids for a few more minutes becomes just another service that the parent can buy, and buy they do. The point of this story is that incentives don’t necessarily work the way we think they will. There are complicated issues at stake even in something as simple as daycare. As we saw in the Traveler’s Dilemma, it’s not a simple task to predict how people will make their decisions, and sometimes rational behavior isn’t what theorists think is rational.

However, what both of these scenarios show is that despite the difficulty, despite the complications, it is possible to develop models and predictions for how people will behave. It is possible to find, with experimentation, the fee amount at which parents will begin picking their children up on time to avoid the fee. It is possible to find, with experimentation, the punishment amount at which people will begin picking the low number rather than the high number in the Traveler’s Dilemma. In other words, people’s behavior may be more complicated than we think, but it is not unreasonable. People act based on motivations, and although these motivations are often not obvious, they are there and they can be found.

Of course there will always be exceptions. There's always room for free will. There will always be people on the far ends of the bell curve, people who defy expectations and act inexplicably. But in order to effect positive change in the world, we have to believe that we can predict behaviors for most people. We have to believe that there's a number of dollars that will decrease the number of late pickups from daycare. After all, isn't this how we determine prison sentences? Isn't there a number of years of incarceration that we believe will make the commission of murder unattractive to most potential criminals? Isn't there a number of dollars that we believe will deter people from speeding and thus decrease the number of traffic accident fatalities?

I've never really believed in free will. I've always thought that everything is already determined by particle vectors, that everything I do is explainable by something that happened to me in childhood or by a set of circumstances that constrained my choice to such an extent that I didn't really have a choice. And that's why I think it is important for us to search for these motivations, search for these incentives, to build and discredit and rebuild these mathematical models to predict behavior. Because I want to set things up so that people have no choice but to make the right choices. I want a society full of people who pick (100) in the Traveler's Dilemma and pick their children up on time from daycare, and if we're going to have that, we have to pick the right game.

And that, in turn, is why it’s worth looking at something like the Traveler’s Dilemma and finding out that people will cooperate with each other as long as the risk for doing so isn’t too high. It’s why it’s worth looking at the daycare paradox to find out how much guilt is worth. It’s why it’s worth asking why people follow their king or their president against their best interests. We need to find out what motivates people. And in exploring incentives and economics, game theory and modeling, philosophy and psychoanalysis, that’s exactly what I hope to do. I hope to find a solution, a way to set up society so that we’re all playing a game that everyone can win.

In closing, right now I feel that most people are not playing a game that everyone can win. There's a game called the Prisoner's Dilemma. In this game, two prisoners are in the custody of law enforcement, but the police don't have enough evidence to convict them of a serious crime. Each of them is told that they are both suspects and given the following options. If one prisoner gives up the other, that prisoner will go free and the other will go to jail for a long time. If neither of them confesses, they will both serve a short sentence for whatever smaller crimes the police can pin on them. If they both confess, they'll both serve a moderate sentence – longer than if they had both kept quiet, though shorter than the long sentence reserved for the prisoner who stays silent while the other talks. The implication of the game is that it is better for each player, no matter what the other player does, to confess. Unlike the Traveler's Dilemma, the Prisoner's Dilemma tends to lead to uncooperative behavior – in other words, each player does better by screwing the other player over, whereas in the TD screwing the other player over leads to a greater loss.
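To make the dominance argument concrete, here is a minimal sketch in Python. The sentence lengths are my own illustrative assumptions – the story above doesn't fix exact numbers – but any lengths with the same ordering lead to the same conclusion:

# Prisoner's Dilemma payoff structure; sentence lengths are illustrative assumptions.
# Years in prison for (my_choice, other_choice); fewer years is better.
SENTENCES = {
    ("silent",  "silent"):  1,   # neither confesses: both serve a short sentence
    ("silent",  "confess"): 10,  # I stay silent, the other gives me up
    ("confess", "silent"):  0,   # I give the other up and go free
    ("confess", "confess"): 5,   # both confess: both serve a moderate sentence
}

# Confessing is better for me no matter what the other prisoner does.
for other in ("silent", "confess"):
    assert SENTENCES[("confess", other)] < SENTENCES[("silent", other)]
    print(f"If the other prisoner chooses {other!r}, confessing costs me "
          f"{SENTENCES[('confess', other)]} years instead of {SENTENCES[('silent', other)]}.")

However you fill in the numbers, as long as that ordering holds, confessing strictly dominates staying silent – which is exactly why the game pushes people toward hurting each other.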

The Prisoner’s Dilemma game describes many situations in modern life – situations in which people have a great incentive to hurt other people. If there is some way to change the rules of the game so that, like in the Traveler’s Dilemma, or many other games, people have an incentive to help other people, then everyone could benefit immensely. Changing the rules of the game is what I’m aiming for, but it’s going to take a lot of searching to find the right game and a lot of convincing to get people to play it.


February 10, 2008 | About, Economics, Game Theory

Power: The Metonymic Model

In my last post I introduced the “panoptic model of power” as an explanation of where the name of this blog comes from. In doing so I touched briefly upon the concept of the panopticon, because at first glance “panoptic” is the word in that phrase that needs to be explained. I was able to take for granted that anyone reading would have some previous understanding of the word power. However, in presenting a new model of power I also implicitly challenged that understanding. Therefore, I believe that an examination of power as a concept is worthwhile before we go any further.

Often individuals and groups are spoken of as having power. For instance, America is a powerful nation – some would say the most powerful in the world. Within America, George W. Bush is currently in power. Here we are speaking of military power, political power, economic power. What does it mean to have this kind of power?

One can say, “George W. Bush invaded Iraq and removed Saddam Hussein from power” in all seriousness without considering that it was not Bush himself but rather certain members of the United States military who invaded Iraq and toppled the government. Using the name of the President to stand in for the troops who are carrying out his orders is an example of metonymy, a rhetorical device in which one word or concept is used to stand in for a related word or concept. The use of metonymy is widespread when discussing power relationships. If officials from the US government sign an agreement with officials from the British government, it is said that Washington and London have signed an agreement. This, too, is metonymy.

If we read these metonymic statements literally what we see is a displacement of agency. Bush himself did not invade Iraq, nor did the city of Washington, D.C. pick up a pen and write its name on a piece of paper. In these examples, Bush and Washington are not direct agents but related concepts – concepts linked by the relations of power. They do not do anything themselves and yet the agency of the actions taken is ascribed to them through metonymy.

So one formulation of power we could postulate would be the metonymic model of power – the possession of agency not through action but through metonymic relations. The reason I am formulating power this way is to point out that it is not just individuals who wield power – it is also concepts, and it is also the names of these concepts. Under the metonymic model, “Washington” has power even though it has no real agency of its own. Washington, instead, is a symbolic agent – it has agency through a metonymic relationship.

By definition, then, metonymic power is the displacement of agency from an acting agent to a symbolic agent. This displacement of agency is what gives metonymic power its power. A displacement of agency is also a displacement of responsibility. Therefore, metonymic power gets its power from the human tendency to evade responsibility.

February 3, 2008 | Power

What’s in a name?

This blog's URL is "panoptical.wordpress.com." I chose the name "panoptical" for several reasons. First, the inspiration for this blog was a concept I came across several months ago while studying Foucault that I call the "panoptic model of power." The second is that "panoptical" means "observing all," and I intend this philosophy blog to be highly interdisciplinary: I intend, to the extent possible in my spare time, to observe all. The third is that "panoptical" gets few Google hits and is therefore a reasonably distinctive name.

The panoptic model of power merits more explanation, because I intend to delve very deeply into that subject and I have plans to use this model extensively to explain all manner of social institutions, from the free market to the public school system. Foucault made a study of Jeremy Bentham’s panopticon, a physical prison building designed, before modern surveillance techniques, to make it easy for a single observer to supervise a large number of people. The structure of the panopticon consists of a single, central tower surrounded by a large number of individual cells situated such that each cell can be seen into from the vantage point of the tower. Ideally, the inmates should be isolated from each other, so that no communication is possible. Additionally, the inmates ought not to be able to see into the central tower, so that at any given time they will not be able to determine whether or not they are under surveillance.

The proposed psychological effect of the panopticon is that the inmates exercise self-surveillance and self-discipline. Because the inmates know, at any given time, that they might be under surveillance, they will tend to watch their own behavior to ensure that it conforms to the way they would act if some authority figure were actually watching. It may also be an important aspect of the panopticon that the inmates are isolated from each other. The twin effects of isolation and self-surveillance serve to magnify the power of the central authority over the inmates.

The implication of the panopticon is that this panoptic magnification of power also takes place outside the physical structure. In other words, isolation and self-surveillance occur in individuals in our society due to various other institutions and social factors, and it may be the case that when these things come together with a perceived authority or set of norms, they govern the individual as surely as if the individual were actually in a prison cell. This is where Foucault comes in, because he re-envisioned power as the cumulative effect of every relationship and institution, rather than as the simple effect of one person ruling or dominating another.

This is all a very brief summary of a set of theories that are more far-reaching in their implications than perhaps anything I've ever studied, so if things seem a bit unclear, don't worry – I'll be going over all of these issues with a fine-toothed comb. To give you an idea of just how far-reaching these implications are, I'll say this. For about five years I adopted the political and economic philosophy of Libertarianism, studying many of its facets and related ideas, such as Objectivism, Austrian Economics, praxeology, and anarcho-capitalism. All of those systems, at their very core, assume a theory of power that Foucault may have made obsolete. The panoptic model of power and its implications could, therefore, lead me to retrace five years' worth of steps and start over at the beginning. The scope of that project is why I felt I needed a new blog, and the inspiration, the panoptic model of power, is where this blog gets its name.

January 27, 2008 | About

Traveler’s Dilemma and Opportunity Cost

As a followup to my last post, I thought now would be a good time to say some things about opportunity cost, or The OC.  Please do not confuse this with any other things called The OC.

Opportunity Cost is an economic analytical tool – a measure of the cost of a missed opportunity.  It goes like this.  Let’s say you have a dollar and you’re standing on the street at a hot dog cart.  The cart is selling pretzels for $1 and hot dogs for $1.  If you buy the hot dog, you can’t buy the pretzel.  Therefore the opportunity cost of buying the hot dog is one pretzel.  Conversely if you buy the pretzel you can’t buy the hot dog.  So the opportunity cost of the pretzel is one hot dog.  Simple, right?  At first glance this seems a trivial and reductive measure for an economist to be thinking about, but in real life, when applied to more complex situations, we can see the value of considering the opportunity cost.

Let’s try another example.  You have a dollar but you aren’t hungry, and you’ve got a year.  You can keep the dollar in your pocket and at the end of the year you’ll have a dollar.  Or you can put the dollar in a bank and at the end of the year you’ll have, let’s say, $1.05.  The average person would think that if they kept the dollar in their pocket, they haven’t lost anything, and in one sense this is true.  However, they have missed something – the opportunity to earn $.05.  The opportunity cost of holding onto the dollar was five cents.  Doesn’t seem like much, but what if it’s a thousand dollars?  A million?

The point is, when there’s money at stake it pays to consider the opportunities that you have when making choices, because in some sense, missing the opportunity to earn money is sort of like losing money, even if it’s money you never actually had.  An opportunity is worth something.  If you don’t believe me, play poker.  If you fold a hand, and it turns out at the end that you would have won if you had stayed in, you will feel like you have lost something.  What you’ve done is missed an opportunity, and the loss you’re feeling is the opportunity cost.  Folding may have been the right decision based on the odds, but you’ll still feel bad that you didn’t get the pot.

So what does opportunity cost have to do with the Traveler’s Dilemma?  Well, it’s another way of evaluating possible plays in the TD, and it demonstrates a major flaw in the models used by game theorists to “solve” the TD.

To recap, in the TD, game theory says that logically speaking, a player ought to play (2).  Many people intuitively feel that they should play higher, and (100) is perhaps the most common play, with (95) to (100) comprising the majority of plays in some experiments.  According to Kaushik Basu, (2) is the correct or best play, because of a game theory concept known as the Nash Equilibrium.  Further, (100) is the worst play, because it is the only play in the game that is “beaten” by every other play.  In other words, if player 1 plays (100) and player 2 plays anything else, player 2 will be rewarded more points than player 1.  If this logic is to be believed, (2) is a better play than (100), and people, if they are acting “rationally,” ought to play it.

But let's look at the OC to see if that's true. Let's say player 1 plays (100) and player 2 plays (2). The rewards, then, are 0 points to player 1 and 4 points to player 2. However, given player 2's play of (2), the highest score player 1 could possibly have achieved by making a different play is 2, by playing (2). Any play other than (2) results in a score of 0. So, player 1 lost 2 points by not playing (2), or, to put it another way, his opportunity cost for playing (100) was 2 points.

Player 2, however, is in a much worse situation.  He played (2) and got 4 points.  Given player 1’s play of (100), the highest score possible for player 2 was 101, with a play of (99).  In other words, by playing differently player 2 could have gotten 101 points, but instead he got four.  That means that his opportunity cost for playing (2) was 97 points.

So, in the above example, on the face of it it seems like player 2 won – he got 4 points, while player 1 got none.  However, if you look at it a different way, player 2 lost 97 points while player 1 only lost 2.  If you consider the scale of a loss of 2 vs. a loss of 97, you see that a play of (100) is much less risky than a play of (2).

In fact, against an opponent's play of (2), the OC of playing (100) is 2. Against any opponent's play between (3) and (99), the OC of (100) is 3. And against (100), the OC of (100) is just 1. The OC of playing (2), however, is the opponent's play minus 3. That means that whenever the opponent plays (6) or higher, the opportunity cost of (2) is greater than the opportunity cost of whatever the opponent played.
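These numbers are easy to check by brute force. Here's a quick sketch in Python – it assumes the standard setup from my last post, with plays from 2 to 100 and a reward and penalty of 2 points:

def td_payoff(my_play, other_play):
    # My Traveler's Dilemma payoff, assuming a reward/penalty of 2 points.
    if my_play == other_play:
        return my_play
    low = min(my_play, other_play)
    return low + 2 if my_play == low else low - 2

def opportunity_cost(my_play, other_play):
    # Points I left on the table, given what the other player actually did.
    best = max(td_payoff(p, other_play) for p in range(2, 101))
    return best - td_payoff(my_play, other_play)

print(opportunity_cost(100, 2))    # 2  -- playing (100) against (2)
print(opportunity_cost(2, 100))    # 97 -- playing (2) against (100)
print(opportunity_cost(100, 50))   # 3  -- (100) against anything from (3) to (99)
print(opportunity_cost(100, 100))  # 1  -- (100) against (100)
print(opportunity_cost(2, 50))     # 47 -- (2) against (50): the opponent's play minus 3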

So the (2) player almost always loses more money than his opponent – not that the players are losing money that they actually possessed, but money that they could have – and perhaps should have – earned.  If you ask any economist or poker player, that loss can sting just as much as a loss of cold hard cash.

The thing is, if you evaluate the Traveler's Dilemma in terms of Opportunity Cost, the definition of improving one's position changes, and therefore so does the Nash equilibrium. It's a situation where gaining money and not losing opportunity are not the same thing – and this situation probably comes up fairly often in the real economy, which is why opportunity cost is important as an economic concept. Rational choices and selfishness, therefore, cannot necessarily be evaluated successfully using only the rubric of amassing the most gain by the end of the game. There are other measures of success, and people do use them. Game theorists and economists alike would do well to remember that.

January 25, 2008 | Economics, Game Theory

The Traveler’s Dilemma

Now for some real content. I came across this article in Scientific American about the Traveler's Dilemma. To explain briefly, the TD is a game in which two players are each asked to select a number within certain boundaries (2 and 100, in the example). If both players select the same number, they are rewarded that number of points. (In the example, each point is worth $1, which makes the game of more than academic interest.) If one player's number is lower, they are each awarded points equal to the lower number, modified by a reward for the player who selected the lower number and a penalty for the player who selected the higher number (2 points each, in the example). So, for instance, if you choose (48) and I choose (64), you get 50 points and I get 46 points.
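To make the rule concrete, here is a minimal sketch in Python of the payoff calculation, taking the reward and penalty to be 2 points each as in the example:

def td_payoffs(a, b):
    # Returns (points for the player who chose a, points for the player who chose b).
    # Assumes plays between 2 and 100 and a reward/penalty of 2 points.
    if a == b:
        return a, b
    low = min(a, b)
    return (low + 2, low - 2) if a < b else (low - 2, low + 2)

print(td_payoffs(48, 64))  # (50, 46): the lower number wins the 2-point bonus
print(td_payoffs(64, 48))  # (46, 50): the same split with the roles reversed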

The intuition that I had upon reading the rules of this game was that it would be “best” for both players to choose (100). That is certainly true from a utilitarian point of view: (100, 100) results in the highest total number of points being given out – 200. The runners up are (99, 99), (100, 99), and (99, 100) with 198. However, there are two small problems – here’s the dilemma part – that prevent (100, 100) from being the “best” choice: one, the players are not allowed to communicate, and two, the (100, 99) and (99, 100) plays result in one player receiving 101 points – an improvement, for that player, over a 100 point reward.

So, the reasoning goes, if player one predicts that her opponent will play (100), she should play (99) in order to catch the 101 point reward. Her opponent, however, ought to use this same strategy, and also play (99), in which case player one ought to play (98) in order to trump her opponent, and so on and so forth. This reasoning degenerates to a play of the minimum number – in the example, (2). According to Basu, the author of the article, “Virtually all models used by game theorists predict this outcome for TD.”
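That chain of reasoning is mechanical enough to simulate. A rough sketch in Python (again assuming the 2-to-100 range and a 2-point reward and penalty) walks the expected play down one undercut at a time:

def my_payoff(p, q):
    # My payoff when I play p and my opponent plays q (reward/penalty of 2 points).
    return p if p == q else (min(p, q) + 2 if p < q else min(p, q) - 2)

def best_response(expected):
    # The play that maximizes my payoff against an opponent I expect to play `expected`.
    return max(range(2, 101), key=lambda p: my_payoff(p, expected))

expected = 100
while best_response(expected) != expected:
    expected = best_response(expected)  # 100 -> 99 -> 98 -> ... each step undercuts by one
print(expected)  # 2: the only play that is a best response to itself, i.e. the predicted equilibrium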

However, reality does not follow these models. When people are asked to play the TD, many of them choose 100. Many of them choose other high numbers. Some seem to choose at random. Very few choose the “correct” solution – (2) – predicted by game theory. Something’s up.

Basu takes this to mean that all of our assumptions about rational behavior need to be questioned. With my philosophical background, I happen to have different assumptions about rational behavior than the mainstream, and so for me the results of the TD are not surprising in any way. But perhaps the best way to explain why the results do not surprise me is that I am a gambling man.

January 24, 2008 | Economics, Game Theory

The System Of The World

Aside from being the final installment in Neal Stephenson's excellent Baroque Cycle, The System Of The World is an important part of a metaphor for my approach to matters philosophical. It goes like this:

Picture a system of equations. Or just consider this one:

a + b = 3
2a + b = 4

It's a very simple system with a very easy solution: a = 1, b = 2. But how does one solve this system? Well, one method is to examine one equation to try to find a relationship that can help us solve another equation. If we consider the first equation, we can discover that b = 3 - a. If we use this insight about b's value in the second equation, we get the equation 2a + 3 - a = 4, which we can then solve for a. Once we know that a = 1, things become very easy.

So a system of equations can be solved by, essentially, cross-referencing the information in one equation with the information in the others.
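For what it's worth, this kind of cross-referencing is mechanical enough to hand to a computer. Here is a toy sketch in Python that uses the SymPy library to do the substitution for us:

from sympy import symbols, Eq, solve

# The same two equations: a + b = 3 and 2a + b = 4.
a, b = symbols("a b")
system = [Eq(a + b, 3), Eq(2 * a + b, 4)]

print(solve(system, [a, b]))  # {a: 1, b: 2}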

This sort of action, however, is not limited to manipulation of numbers. Philosophy, I believe, works the same way. We can analyze one work of philosophy, or literature, or what have you, and use the conclusions we draw to analyze another different work in a different field, and from this cross-referencing we can derive new equations – perhaps ones with easier solutions.

Let's work on a very prominent and easy example: the Oedipus complex. Freud looked at a dramatic and mythological character, Oedipus, and from his story drew some conclusions about human nature, which he then applied to the field of psychoanalysis to achieve new and unexpected results. We can challenge Freud's particular assertions, his methods, etc., but we cannot challenge the fact that Freud was incredibly influential and his insights essentially generated a whole new science.

So where do we find insights like Freud’s? Insights that, regardless of their ultimate validity, help us to look at old problems in new ways? Insights that open up entire new fields of enquiry? The answer is, anywhere.

Each philosophy, each story, each insight, represents a piece of information, an equation in the System of the World. Each equation helps us decode other equations, helps us situate other ideas in reference to one another. All that is needed is for us to find relations, but the fun thing is that everything is related. Anything can be a metaphor for anything else, if creativity and thought are put into it. You might even say that every thought and image we have is a metaphor – after all, a picture of a pipe is not a pipe. And now we’re verging into epistemology and cognitive science. How, exactly, are thoughts organized in our minds? How do we form knowledge? Difficult questions, and well beyond the scope of this post. Suffice it to say that time will tell whether my methods are valid – whether the insights I am able to produce contain truth or falsehood.

January 24, 2008 | About

Hello world!

Ah, the Hello World. I can still remember my first programming class – we used QBasic, in which the Hello World program consisted of the following instruction:

PRINT "Hello World!"

Before that class I used to “program” my personal computer: A Commodore 64. That machine used plain old BASIC, and my first program reflected my priorities at the time:

10 PRINT "NEAL"

And that, I hope, may be contorted into some sort of useful metaphor concerning this blog. My first program was not the conventional, didactic, “Hello World,” but rather something that I dreamed up. I’m not claiming that it takes much intellectual horsepower to conceive of displaying one’s own name on a TV screen, which is what we used for a monitor that first year we got the Commodore. But since then I have made my own path in many more significant ways, and ventured, untaught, into many additional fields. Computer programming wasn’t the first (it was preceded, in predictable little-boy fashion, by my study of astronomy and dinosaurs), but it was the first that really stuck in my personality, that really gave me a new tool with which to analyze and communicate ideas.

And that, in a nutshell, is what this blog is about: finding, evaluating, and using an ever-expanding collection of analytical tools that will help us better understand and affect the systems around us.

I like to tell stories, and a lot of my previous writings have consisted of a story followed by the implications of that story and how they apply to some current issue – much like a sermon, which tells a story from the Bible, draws a concept out of that story, and then extrapolates from that concept some lesson or advice for the congregation.

The thing is, I see stories everywhere. In economics, politics, philosophy, psychology, theatre, literature, history, cognitive science, even mathematics. When I hear a story told by an economist I want to learn a new way to look at human behavior. When I hear a story told by a philosopher I want to learn a new way to look at politics. My goal is to come up with whole new ideas – but I'll be satisfied with new ways of looking at old ideas – that can be usefully employed to change people's lives. That's a tall order for a lone blogger, and I may end up, like Albert Jay Nock, writing for an unknowable potential future audience ("the Remnant") in need of my ideas, or worse, for no one at all. But despite the risk of failure or irrelevance, I have ideas, and I might as well write them down before they go away, or else, to quote Emerson, "to-morrow a stranger will say with masterly good sense precisely what we have thought and felt all the time, and we shall be forced to take with shame our own opinion from another."

January 24, 2008 | About