Rationality trick: noticing what you want to be true

Any time you make a judgement about something, whether during private reflection or as part of an argument or debate, it is useful to stop and ask: do I have any preference as to what I would like to be true? And what is that preference? At first this sounds like a silly thing to ask yourself; what’s true is true regardless of what you would like. Which brings me to the following distinction:

  • what is true
  • what I think is true
  • what I’m arguing to be true
  • what I want to be true

Consider a simple question:

What is the population of Switzerland?

In this example, what’s true is a straightforward fact you can easily look up. But that’s not why I chose it. What I’m pointing out is that you don’t really have a preference about what the truth is: when you introspect on what you want to be true, nothing comes up; you don’t care either way. Let’s contrast this with another example. Assuming, for the sake of argument, that you have personal convictions in the realm of politics, consider:

Is the minimum wage beneficial or detrimental?

If you have some political affiliation, and if you’re honest with yourself, I’m pretty sure you will have an answer as to what you want to be true. And even if that’s not the case for this particular example, you can probably find a matter of policy for which the introspection does turn up a preference.

The important thing is to note the clear difference between the two examples. In the first, there was nothing you wanted to be true; in the second, there is. And although wanting something to be true has no bearing on whether it is in fact true, it absolutely does have a bearing on

  • what I think is true
  • what I argue to be true

If you have a preference as to how you’d like things to be, you can be pretty sure that your mind will distort things to match that. It’s those pesky cognitive biases again, and as we’ve mentioned before, the biggest offender and most relevant here is confirmation bias.

Say, for example, you have a strong opinion about an issue, strong to the point that you consider that position to be part of your identity. In this scenario, facts and arguments about the issue that run contrary to your position become an attack on who you are; they compromise your identity. And psychologically speaking, this is a big deal. Your ego will defend itself, and that includes distorting things and deceiving you, and anybody else, if necessary. This simple description goes a long way towards explaining some of the irrationality in politics.

So in summary, a preference for a state of reality activates biases that distort cognition to match.

This brings us back to the beginning of the post, here’s where the rationality trick comes in handy. When reflecting about some matter, it is a good exercise to ask yourself if you have a preference as to what you would like to be true. Noticing that you have such a preference should be a warning sign and a cue to exercise more discipline and restraint, because you know there is probably a bias at work.

Lastly, I want to point out the relationship between

  • what I’m arguing to be true
  • what I want to be true

To make matters worse, arguing something to be true may well determine what you want to be true. It’s all about signaling. As soon as you establish, in a social context, that you are advocating or defending a certain position, you become bound to it: being proven wrong as well as changing your opinion signals weakness, something we are evolutionarily programmed to avoid at all costs. That’s why you rarely see someone admitting being wrong or changing their mind in a debate, especially if there’s an audience.

Coincidences and explanations

I was reading about a famous article by the physicist Eugene Wigner titled The Unreasonable Effectiveness of Mathematics in the Natural Sciences, where, quoting Wikipedia:

In the paper, Wigner observed that the mathematical structure of a physics theory often points the way to further advances in that theory and even to empirical predictions, and argued that this is not just a coincidence and therefore must reflect some larger and deeper truth about both mathematics and physics.

I’ll write about this in a later post, but for now it brings me to consider what we mean by coincidences and how we think about them.

In the above, a coincidence is noted between two apparently independent domains: that of mathematics and that of the structure of the world. In general, when we find a striking coincidence, our instinct is to reach for an explanation. Why? Because, by definition, a striking coincidence is something of a priori very low probability, something implausible that merits investigation to “make sense of things”.

An explanation of a coincidence is a restatement of its content that raises its probability to a level at which it is no longer a striking state of affairs; the coincidence is dissolved. Example:

Bob: Have you noticed that every time the sun rises the rooster crows? What an extraordinary coincidence!

Alice: Don’t be silly, Bob, that’s not a coincidence at all: the rooster crows when it sees the sun rise. Nothing special.

Bob: Erm… true. And why did David choose me to play the part of fool in this dialogue?

Alice’s everyday response to coincidence is at heart nothing other than statistical inference, be it Bayesian or classical hypothesis testing[1]. The coincidence taken at face value plays the role of the null hypothesis, the one that assigns a low probability to the event, i.e. the hypothesis of a chance co-occurrence of two seemingly independent things. The explanation in turn plays the role of the accepted hypothesis, by virtue of assigning a high probability to what is observed.
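To make this concrete, here is a minimal sketch of that inference for the rooster example. All the numbers are made up for illustration: the per-morning probabilities and the prior are assumptions, not data.

```python
# Two competing hypotheses about the rooster/sunrise coincidence:
#   H_chance: crowing is unrelated to sunrise; on any given morning the
#             rooster just happens to crow at sunrise with probability 0.1
#   H_causal: the rooster crows when it sees the sun rise, so it crows
#             at sunrise with probability 0.99
# Data: on 30 consecutive mornings the rooster crowed at sunrise.

def posterior_causal(prior_causal, p_chance, p_causal, mornings):
    """Posterior probability of the causal explanation given the observations."""
    like_chance = p_chance ** mornings   # P(data | H_chance)
    like_causal = p_causal ** mornings   # P(data | H_causal)
    num = prior_causal * like_causal
    return num / (num + (1.0 - prior_causal) * like_chance)

# Even with a prior that heavily favors "mere chance"...
print(posterior_causal(prior_causal=0.001, p_chance=0.1, p_causal=0.99, mornings=30))
# ~1.0: the explanation wins because it assigns a far higher probability
# to the observed coincidence, which is thereby dissolved.
```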

So one could say that the way we respond to and deal with coincidences is really a mundane form of how science works, where theories are presented in response to facts, and those that better fit those facts are accepted as explanations of the world.

But how do explanations work internally? The content of an explanation is the establishment of a relationship between the two a priori independent facts, typically through a causal mechanism. The causal link is what raises the probability of one given the other, and therefore of the joint event. In the example, the causal link is ‘the rooster crows when it sees the sun rise’.

But the links are not always direct. An interesting example comes from what in statistics is called a spurious relationship. Again, Wikipedia says:

An example of a spurious relationship can be illuminated by examining a city’s ice cream sales. These sales are highest when the rate of drownings in city swimming pools is highest. To allege that ice cream sales cause drowning, or vice-versa, would be to imply a spurious relationship between the two. In reality, a heat wave may have caused both.

Although the emphasis here is on the lack of a direct causal relationship, the point regarding coincidence is the same. Prior to realizing that both facts have a common cause (the explanation is the heat wave), one would have regarded the relationship between ice cream sales and drownings as a strange coincidence.

In the extreme case, the explanation reveals that the two facts are really two facets of the same thing. The coincidence is dissolved: any given fact must necessarily coincide with itself. Before the universal law of gravitation, it would have been regarded as extraordinary that apples falling from a tree and the planets moving in the heavens exhibit the same behavior. But we now know that they are really different aspects of the same phenomenon.


Notes

[1] The act of explanation is, in the language of classical statistics, the act of rejecting the null hypothesis. In the Bayesian picture, the explanation is what is probabilistically inferred, owing to the higher likelihood it assigns to the facts (and a sufficient prior probability).

Why vote? The so-called Paradox of Voting

Someone asked me to formalize an argument I made some time ago about how voting is an irrational act. The essence of the argument is as follows. Given that hundreds of thousands of people vote, the chance of a tie is minuscule. So the chance that an individual’s vote decides the election is equally minuscule. Hence it is almost certain that the act of voting has no consequences. So why vote?

At first glance it seems like an infantile argument to make; most people would reply: if everybody thought or did that, no one would vote. And truth be told, the first time I heard that reply I was fooled. But of course, the reply is wholly irrelevant to the matter. It’s an example of argumentum ad consequentiam, and on top of that, something being true does not imply people believe it. So the bottom line stands, and you can readily formulate it in decision-theoretic terms as follows:

Expected Utility of Voting = (Probability of Deciding × Utility of Deciding) − Cost of Voting

Because the probability of deciding is so small, the term in parentheses vanishes and the expected utility of voting is negative: roughly minus the cost of voting, which includes the explicit cost of going to vote plus the opportunity cost of whatever you could have done instead.
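As a quick illustration, here is the calculation in code. The pivot probability, benefit and cost below are hypothetical numbers, chosen only to show the orders of magnitude involved.

```python
# Expected utility of voting = P(pivotal) * U(deciding) - cost of voting
# All numbers are hypothetical, for illustration only.

p_pivotal = 1e-8           # chance that your single vote decides the election
utility_deciding = 50_000  # dollar value you place on your side winning
cost_of_voting = 5         # travel, queuing, foregone time, etc.

expected_utility = p_pivotal * utility_deciding - cost_of_voting
print(expected_utility)    # ~ -5: the benefit term is negligible, so the
                           # expected utility is essentially minus the cost
```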

I decided to briefly google the subject, and it turns out it’s actually a controversial issue in rational choice theory. It was formulated in essentially the form above by Anthony Downs in 1957 [1][2]. So it isn’t just an anecdote; it remains an open “problem” to this day. Here are some example calculations from [3], including probability estimates:

Consider an election in which 5 million voters are expected to cast ballots and candidate 1’s expected vote share is 50.1 percent, while candidate 2 is expected to receive 49.9 percent of the votes cast. Myerson (2000) develops a formula in which the number of people who vote is a random number drawn from a Poisson distribution with mean n. According to Myerson’s formula, the probability a vote is pivotal for candidate 2 is 8.1079 x 10^-9. Thus, the benefit to a voter who prefers candidate 2 must be more than 8 billion times greater than the cost to vote. For example, if voting costs $.01, then the expected benefit of electing one’s favored candidate must be greater than $80 million dollars. Expected benefits at such levels seem unreasonable.
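For intuition about why such probabilities are so tiny, here is a rough sketch that approximates the pivot probability by the probability of an exact tie under a simple binomial model with a normal approximation. This is not Myerson’s Poisson formula from the quote, just a back-of-the-envelope stand-in using the same hypothetical turnout and vote shares.

```python
import math

# Crude stand-in for the pivot probability: the chance of an exact tie when
# 5 million voters each vote for candidate 2 independently with probability 0.499.
n, p = 5_000_000, 0.499
mu = n * p
sigma = math.sqrt(n * p * (1 - p))
tie = n // 2

# Normal approximation to the binomial pmf at an exact tie
p_tie = math.exp(-((tie - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
print(p_tie)  # ~1.6e-8: vanishingly small, the same rough order as the quoted figure
```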

Because rational choice theory purports to describe actual human behavior, and because millions of people do in fact vote, there is an apparent contradiction. It’s called the Paradox of Voting, and it has been described as “the paradox that ate rational choice theory” [4].

I haven’t looked extensively at the literature on how the “paradox” is supposed to be resolved. I’m pretty sure one can invoke all sorts of technical wizardry to get the desired empirical results (game theory and Nash equilibria come to mind). But to be frank, I presume it’s just ad-hockery designed to arrive where you wanted to get in the first place.

It’s much simpler to just accept that people are either not rational (surprise, surprise) or that there are motivations besides those related to the act of deciding itself (surprise, surprise). People may go to vote because they have nothing else to do, out of a sense of duty, because it’s amusing, because it’s what everyone else does [5], or simply because they are outright irrational in estimating costs and benefits. What’s that? Do I hear anybody crying heresy at this scandalous violation of democracy’s sanctity?

As I said, this matter has been debated extensively since 1957. But it seems to me that it’s just as simple as I’m making it out to be: there is no Paradox of Voting, just as there is no Paradox of Buying Lottery Tickets.


References

[1] Downs, Anthony. 1957. An Economic Theory of Democracy.

[2] Riker, William and Peter Ordeshook. 1968. “A Theory of the Calculus of Voting.”

[3] Feddersen, Timothy. 2004. “Rational Choice Theory and the Paradox of Not Voting.”

[4] Fiorina, Morris. 1990. “Information and Rationality in Elections.”

[5] These motivations for voting can be modeled as a consumption benefit in Riker and Ordeshook (1968):

Riker and Ordeshook (1968) modify the calculus of voting by assuming that, in addition to a cost to vote, voters get a consumption benefit D > 0 from the act of voting.


Politics as rationality catastrophe

One of the spinoffs of my short research into epistemic rationality is the realization that politics is a rationality catastrophe. When a person engages in thought, or even worse, debate, about politics, they will most probably deviate strongly from the standards of rational thinking and distort reality. Put differently, politics is a trigger for a large battery of cognitive biases; confirmation bias, as usual, gets an honorable mention.

Ok, so this doesn’t really count as a new realization; I’ve been making analogies between politics and sports for a long time. But nailing down vague or half-baked ideas into precise, explicit form feels like a novelty, even if most of the information was already there.

Some quotes that illustrate the matter:

People go funny in the head when talking about politics.  The evolutionary reasons for this are so obvious as to be worth belaboring: In the ancestral environment, politics was a matter of life and death.  And sex, and wealth, and allies, and reputation…  When, today, you get into an argument about whether “we” ought to raise the minimum wage, you’re executing adaptations for an ancestral environment where being on the wrong side of the argument could get you killed. – Eliezer Yudkowsky

The typical citizen drops down to a lower level of mental performance as soon as he enters the political field. He argues and analyzes in a way which he would readily recognize as infantile within the sphere of his real interests. He becomes primitive again. – Joseph A. Schumpeter

The proposition here is that the human brain is, in large part, a machine for winning arguments, a machine for convincing others that its owner is in the right – and thus a machine for convincing its owner of the same thing. The brain is like a good lawyer: given any set of interests to defend, it sets about convincing the world of their moral and logical worth, regardless of whether they in fact have any of either. Like a lawyer, the human brain wants victory, not truth; and, like a lawyer, it is sometimes more admirable for skill than for virtue. – Robert Wright, The Moral Animal

If people can’t think clearly about anything that has become part of their identity, then all other things being equal, the best plan is to let as few things into your identity as possible. – Paul Graham

Yudkowsky’s point is that irrationality, especially in politics, is best understood as a fact of evolutionary psychology. Another interesting insight is the framing of political thinking in terms of identity, self-defense/self-esteem, and us vs. them (sports, anyone?), as Graham suggests.

Now the remaining question is: does thinking rationally let you “get ahead”? Is there an advantage to it? Or could it be a handicap? A rational person could be unable to establish strong ties and membership in cohesive groups precisely because of his or her clear, detached thinking. And that seems a strong disadvantage in a society where groups can exert power for the benefit of their members.

So, ironically, it could be that nature is still right! Those evolutionary adaptations that worked in the ancestral environment are still “winning” today, even if by the standards of correct thinking they are aberrations. It could be one of those funny cases where epistemic and instrumental rationality are not aligned.

Rationality slides

Here is a presentation (in Spanish) on rationality that I’ve been working on recently for skepticamp. It needs some polish, but the content is essentially complete. A summary of the main points:

  • Optimization and the second law of thermodynamics are opposing forces
  • Intelligence is a type of optimization that evolved in certain species to counteract the 2nd law through behavior. Intelligence functions through observation, learning and prediction
  • Prediction requires a correct representation of the environment; this defines epistemic rationality as a component of intelligence
  • Classical logic fails to model rationality as it cannot deal with uncertainty
  • Probability theory is an extension of logic to domains with uncertainty
  • Probability theory and Bayes define a standard of ideal rationality. Other methods are suboptimal approximations
  • Probability theory as formalization of rationality:
    • Provides a quantitative model of the scientific method as a special case of Bayes theorem
    • Provides operational, quantitative definitions of belief and evidence
    • Naturally relates predictive power and falsifiability through the sum rule of probability
    • Explains pathological beliefs of the vacuous kind; astrology, card reading, divination, etc (see the sketch after this list)
    • Explains pathological beliefs of the floating kind; “There is a dragon in my garage”
    • Exposes fraudulent retrodiction; astrology, cold reading, ad-hoc hypotheses, bad science, bad economics, etc
    • Dissolves false disagreements described by matching predictions but different verbal formulations
    • Naturally embeds empiricism, positivism and falsificationism
  • Pathological beliefs can be analyzed empirically by re-casting them as physical phenomena in brains, the province of cognitive science.
  • A naturalistic perspective automatically explains human deviations from rationality; evolution will always favor adaptations that increase fitness even if they penalize rationality
  • Today, politics is an example of rationality catastrophe; in the ancestral environment, irrationality that favored survival in a social context (tribes) was a successful adaptation. (Wright, Yudkowsky)
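As an illustration of the points about vacuous beliefs and predictive power, here is a minimal sketch in code. The theories and numbers are made up: a sharp theory concentrates its probability on a specific outcome, while a vacuous one is compatible with everything, so observing the outcome barely supports it.

```python
# Two made-up "theories" about which of 100 possible outcomes will occur.
# sharp_theory commits to outcome 42; vacuous_theory "explains" anything.

N_OUTCOMES = 100

def sharp_theory(outcome):
    # 0.9 on the predicted outcome, the remaining 0.1 spread over the rest
    return 0.9 if outcome == 42 else 0.1 / (N_OUTCOMES - 1)

def vacuous_theory(outcome):
    # assigns equal probability to every outcome: no predictive power
    return 1.0 / N_OUTCOMES

observed = 42  # the sharp theory's prediction comes true

# Likelihood ratio: how strongly the observation favors the sharp theory.
# By the sum rule each theory's outcome probabilities must total 1, so a
# theory compatible with everything can only assign a sliver to each outcome.
print(sharp_theory(observed) / vacuous_theory(observed))  # 90.0
```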

Recommended reading

  • Probability Theory: The Logic of Science
  • Bayesian Theory (Wiley Series in Probability and Statistics)
  • Hume’s Problem: Induction and the Justification of Belief

Various papers

  • Bayesian probability – Bruyninckx (2002)
  • Philosophy and the practice of Bayesian statistics – Gelman (2011)
  • Varieties of Bayesianism – Weisberg
  • No Free Lunch versus Occam’s Razor in Supervised Learning – Lattimore, Hutter (2011)
  • A Material Theory of Induction – Norton (2002)
  • Bayesian epistemology – Hartmann, Sprenger (2010)
  • The Illusion of Ambiguity: from Bistable Perception to Anthropomorphism – Ghedini (2011)
  • Bayesian Rationality and Decision Making: A Critical Review – Albert (2003)
  • Why Bayesian Rationality Is Empty, Perfect Rationality Doesn’t Exist, Ecological Rationality Is Too Simple, and Critical Rationality Does the Job – Albert (2009)
  • A Better Bayesian Convergence Theorem – Hawthorne

  • Bayesian epistemology (Stanford Encyclopedia of Philosophy)
  • lesswrong.com (excellent blog on rationality)