The “Straw-Scotsman” pattern of ideological debate

Recently I had a short exchange on Twitter on the subject of feminism. Reflecting on the nature of the disagreement, I realized that the structure of the arguments conformed to a pattern I had seen many times before but never identified. It is a pattern that arises frequently in ideological disputes, and I will mnemonically call it the “Straw-Scotsman” pattern.

Suppose Alice and Bob are debating about ideology X. Alice is a supporter of X, whereas Bob is a detractor.

Bob proceeds to criticize X:

Bob: I find ideology X unsatisfactory because of its properties a, b, c.

Alice retorts that Bob is mischaracterizing X:

Alice: Ideology X does not in fact have the properties a, b, c that you are wrongly assigning to it. You should inform yourself about what X really is before criticizing it.

Bob finds this to be a disingenuous answer:

Bob: You’re just dodging my criticisms by redefining X in a way that suits your argument. It seems to me that whatever criticism one could make of X, you would simply reply that the real X is not like that.

In the language of fallacies, the pattern can be succinctly described as follows:

  • From Alice’s point of view, Bob is committing the straw man fallacy by attacking a position that does not in fact correspond to X.
  • From Bob’s point of view, Alice is committing the no true Scotsman fallacy, by responding to any criticism of X by saying that the real X is not at all like that.

I offer no resolution here. Since from both points of view the opponent is engaging in a fallacy, the argument is a stalemate and leads nowhere. The pattern also typically takes the following form when the merits of ideologies are discussed in terms of historical outcomes.

Bob criticizes X using historical examples:

Bob: Ideology X is flawed; one only needs to look at what happened in examples a, b, c, where it was applied and led to disastrous results.

Alice responds:

Alice: I disagree; examples a, b, c only show that the implementation of X was flawed. X was applied incorrectly or not at all, and it is this that led to the bad results. However, if X is applied correctly, the results will be satisfactory.

A response which Bob finds unsatisfactory:

Bob: You can always dodge any criticism of a real-world case of X by insisting that the implementation was wrong, rather than that X itself is faulty.

In this manifestation the Straw-Scotsman pattern is best summed up as

  • When criticizing an opposing ideology, people refer to its real-world implementations, whereas when defending their own, they insist on its idealized form.

I realize now that I have seen this pattern occur many times when people debate, for example, communism and libertarianism.

Rationality trick: openly acknowledge errors

Imagine you’re having a debate or discussion with someone and have a strong difference of opinion. The discussion ends, no agreement has been reached, and both sides hold their positions. Later, perhaps because of further reflection or new information, you change your mind. You privately realize you were wrong and that the other person was right; you actually agree with them now. What happens then?

Previously, I have said

As soon as you establish, in a social context, that you are advocating or defending a certain position, you become bound to it: being proven wrong as well as changing your opinion signals weakness, something we are evolutionarily programmed to avoid at all costs. That’s why you rarely see someone admitting being wrong or changing their mind in a debate, especially if there’s an audience.

We don’t want to lose face or status, not even in front of ourselves, and this may establish preferences over states of reality

If you have a preference as to how you’d like things to be, you can be pretty sure that your mind will distort things to match that

So, besides being vigilant when detecting said preferences, what else can be done to counter the cognitive biases at work? Two things, which I’ll call conditioning and reversed reinforcement.

Conditioning means exposing yourself to a negative experience in order to develop tolerance and reduce its effects[1]. When the negative effects of being wrong are reduced, that is, when we become accustomed to “losing face” both publicly and privately (i.e. in private introspection), the motivation to avoid that negative stimulus should be correspondingly reduced. This in turn will reduce the strength of the biases that distort reality to avoid the negative experience.

The other technique, reversed reinforcement, involves converting a negative experience into a positive one: instead of shamefully and half-heartedly admitting an error, do it openly. Displaying an open, honest attitude about something that is typically a sign of weakness is a very strong signal of strength. This can result in positive reinforcement, both publicly and privately, thus countering the negative stimulus that is by default associated with being wrong. As above, a diminished negative effect may reduce the strength of the activated biases[2].

In summary, openly acknowledging being wrong may reduce your resistance to changing your mind in the future, instead of fooling yourself to protect your status. And there’s an added bonus: if people observe that you’re honest about your errors, they will automatically assign greater credibility to positions you hold. So, next time you realize you’re wrong, instead of shutting up about it, come out and say it outright.


[1] Technically, systematic desensitization

[2] And before somebody points it out: it really is very hard to go too far and reverse the bias; the innate tendency to avoid signaling weakness is very strong.

Epistemological dive

 
Note: I wrote this piece before the two posts presenting a simple model of learning using Bayesian inference. There is significant overlap, and conclusions are stated without complete explanations.

I attended an informal talk on climate change recently, after which I had several discussions regarding the scientific process and the foundations of knowledge (in science).

One question was: is scientific knowledge inductive or deductive? Well, the scientific method requires deductive inference to establish the logical consequences of a theory in order to make predictions. But the justification of theories, the method by which a theory is provisionally accepted or discarded, is inductive. In the language of Bayes, theory confirmation/invalidation occurs by updating theory posteriors inductively (P(H|E)), whereas the conditioning of evidence on theories (P(E|H)) is derived deductively.
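As a minimal sketch of this division of labour (in Python, with made-up numbers purely for illustration): the likelihoods P(E|H) are what each theory deduces about the evidence, and the posterior P(H|E) is the inductive update over theories.

```python
# Two hypothetical theories and one piece of evidence. The likelihood values
# are invented for illustration; what matters is the structure of the update.
priors = {"H1": 0.5, "H2": 0.5}        # P(H): prior degrees of belief
likelihoods = {"H1": 0.9, "H2": 0.2}   # P(E|H): deduced from each theory

# P(E) = Sum over H of P(E|H) * P(H)
evidence = sum(likelihoods[h] * priors[h] for h in priors)

# P(H|E) = P(E|H) * P(H) / P(E)  -- the inductive step
posteriors = {h: likelihoods[h] * priors[h] / evidence for h in priors}
print(posteriors)  # H1 is strengthened, H2 is weakened
```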

So, although deduction plays a part in establishing the logical consequences of theories in the form of testable predictions, the nature of the knowledge, or rather, the process by which that knowledge is gained, is fundamentally inductive.

What does this say about the foundations of knowledge in science? If scientific knowledge were deductive, we could simply say that its foundations are axiomatic. We could also talk about incompleteness and other interesting things. But if, as we have stated, this knowledge is inductive, what are its foundations? Is induction a valid procedure, and what are its endpoints?

This is a very deep subject; trying to go all the way to the bottom is why I have titled this post an epistemological dive. I’m not going to give it a thorough treatment here, but I’ll briefly state my position and what I argued in discussion that day.

The way I see it, the foundations of scientific knowledge are the postulates of probability theory (as derived, for example, by Bernardo-Smith or Cox) together with Occam’s razor. In fact, given that most people are already aware of probability theory, I would say that the best single answer to the question of the foundations of knowledge, in the sense of being the part we are least aware of, is Occam’s razor. I will give a brief example of this, borrowed from a talk by Shane Legg on machine super intelligence.

Let’s consider a minimal example of a scientific process. An agent is placed in an environment and must form theories whose predictions correctly match the agent’s observations. Although minimal, this description accounts for the fundamental elements of science. There is one missing element, and that is a specification of how the agent forms theories, but for now we will use our own intuition, as if we were the agent.

For this minimal example we will say that the agent observes a sequence of numbers which its environment produces. Thus, the agent’s observations are the sequence, and it must form a theory which correctly describes past observations and predicts future ones. Let’s imagine this is what happens as time goes forward, beginning with

1

For the moment there is only one data point, so it seems impossible to form a theory in a principled way.

1,3

Among others, two theories could be proposed here: odd numbers and powers of 3, with corresponding next predictions of 5 and 9:

f(n) = 2n – 1

f(n) = 3^(n-1)

the observations continue:

1,3,5

The powers-of-three theory is ruled out due to its incorrect prediction of 9, while the odd number theory was correct.

1,3,5,7

The odd number theory has described all observations and made correct predictions. At this point our agent would be pretty confident that the next observation will be 9.

1,3,5,7,57

What?! That really threw the agent off; it was very confident that the next item would be 9, but it turned out to be 57. As the builder of this small universe, I’ll let you in on the correct theory; call it theory_57:

f(n) = 2n – 1 + 2(n-1)(n-2)(n-3)(n-4)

which, if you check, correctly describes all the numbers in the sequence of observations. If the 5th observation had instead been 9, our odd number theory would have been confirmed again and we would have stayed with it. So, depending on this 5th observation:

9 => f(n) = 2n-1

57 => f(n) = 2n – 1 + 2(n-1)(n-2)(n-3)(n-4)

Although we only list two items, the list is actually infinite because there are an infinite number of theories that correctly predict the observations up until the 4th result. In fact, and here is the key, there are an infinite number of theories that correctly predict any number of observations! But let us restrict the discussion to only the two seen above.
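To make the comparison concrete, here is a small sketch in Python (the function names are mine, chosen for illustration) that evaluates both theories against the observed sequence; they agree on the first four observations and only diverge at the fifth.

```python
# Evaluate the two competing theories against the observations 1, 3, 5, 7, 57.

def odd_numbers(n):
    """Odd number theory: f(n) = 2n - 1."""
    return 2 * n - 1

def theory_57(n):
    """f(n) = 2n - 1 + 2(n-1)(n-2)(n-3)(n-4); the extra term vanishes for n <= 4."""
    return 2 * n - 1 + 2 * (n - 1) * (n - 2) * (n - 3) * (n - 4)

observations = [1, 3, 5, 7, 57]

for n, observed in enumerate(observations, start=1):
    print(n, observed, odd_numbers(n), theory_57(n))

# Both theories reproduce observations 1 through 4 exactly;
# at n = 5, odd_numbers predicts 9 while theory_57 predicts 57.
```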

What our intuition tells us is that no reasonable agent would believe in theory_57 after the fourth observation, even though it is just as compatible with the data as the odd number theory. Our intuition strongly asserts that the odd number theory is the correct theory for that data. But how can we justify that on the basis of induction, if both theories make the same predictions about the data seen so far (i.e. they have the same P(E|H))?

The key is that our intuition, in fact our intelligence in general, has a built-in simplicity bias. We strongly favor the odd number theory because it is the simplest theory that fits the facts. Hence induction, including our everyday intuitions and the scientific method, is founded upon Occam’s razor as a way to discriminate between equally supported theories.

Without this or some more specific bias (in machine learning we would call this inductive bias), induction would be impossible, as there would be too many theories to entertain. Occam’s razor is the most generally applicable bias; it is a prerequisite for any kind of induction, in science or anywhere else.
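As a rough illustration of how such a simplicity bias could be encoded (a toy sketch of my own, not a formal treatment: measuring complexity by the length of a theory’s formula is just a crude stand-in for a proper complexity measure), one can weight each theory’s prior by 2^(-complexity) and then update on the data as usual.

```python
# Toy simplicity-biased prior: weight each theory by 2^(-complexity), where
# "complexity" is crudely measured as the length of the formula's string.

theories = {
    "2*n - 1": lambda n: 2 * n - 1,
    "2*n - 1 + 2*(n-1)*(n-2)*(n-3)*(n-4)":
        lambda n: 2 * n - 1 + 2 * (n - 1) * (n - 2) * (n - 3) * (n - 4),
}

observations = [1, 3, 5, 7]  # both theories fit these equally well

# Unnormalized prior: the shorter formula gets the larger weight.
prior = {name: 2.0 ** -len(name) for name in theories}

# Likelihood is 1 if a theory reproduces every observation, 0 otherwise.
def fits(f):
    return all(f(n) == x for n, x in enumerate(observations, start=1))

unnormalized = {name: prior[name] * fits(f) for name, f in theories.items()}
total = sum(unnormalized.values())
posterior = {name: p / total for name, p in unnormalized.items()}
print(posterior)
```

Both theories have the same P(E|H) on this data, so the posterior simply mirrors the prior: the simpler formula ends up with essentially all of the probability mass, which is the judgment our intuition makes automatically.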

Learning and the subject-object distinction

Previously I presented a model where learning is impossible. In this post I want to emphasize

..not only must the phenomenon be learnable, but additionally the learning agent must incorporate a bias to exploit existing regularity. Without such a bias, the learner cannot penalize complex “noisy” hypotheses that fit the data.

In the model, the environment (phenomenon) that gives rise to observations in the form of the binary sequence of ‘0’ and ‘1’ is left unspecified. Nothing is said about how the environment evolves, whether it is deterministic or stochastic, or whether it follows a certain rule or not. The model I presented is thus completely orthogonal to the nature of the environment. And yet

Whatever the sequence of events, the learning agent does not gain any knowledge about the future from the past; learning is impossible.

So a property of the learning agent, the subject, makes learning impossible irrespective of the environment, the object. This property of the subject is the belief that all sequences of observations are equally likely, that is, the lack of any a priori bias favoring some outcomes over others[1]. Even if the object were completely predictable, the subject would be unable to learn. Learning imposes constraints on both; hence the subject-object distinction.

To drive this point home, we could specify any environment and note how the conclusions regarding the model would not change. I hinted at this by presenting an example of the environment’s evolution that began with 111. Now assume the environment is such that it produces ‘1’ indefinitely, in a completely deterministic and predictable way. You could interpret this as the classical example in philosophical treatments of induction: ‘1’ means that the sun rises the next day[2], and ‘0’ means that it does not. But again, this would make no difference; the learner would never catch on to the regularity.

Conversely, specifying an unlearnable environment will not do the learner any good either, of course. In fact, the astute reader will have realized that the learning agent’s prior corresponds exactly to the belief that the sequence of ‘0’s and ‘1’s is the result of a series of flips of a fair coin. And of course, given that assumption about the coin, previous flips do not yield any information that serves to make predictions about future flips; the environment has no structure to be learned.
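A quick way to check this equivalence, in the notation of that model (Si denotes the first i observations out of n): of the 2^(n-i) theories consistent with Si, exactly half continue with a ‘1’, so

P(next element is 1 | Si) = 2^(n-i-1) / 2^(n-i) = 1/2

which is exactly the predictive probability for an independent fair coin flip, whatever Si happens to be.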

Most problems of Bayesian inference include, in the problem statement, a description of the environment that is automatically used as the agent’s prior knowledge, or at least as a starting point, as in the typical case of drawing balls from an urn. This prior knowledge is of course incomplete, as no inference would be necessary otherwise. But in these cases the subject-object distinction is not so apparent; any analysis of the agent’s learning performance simply assumes that the problem definition is true!

However, the subject-object distinction becomes more important when asking what model of inference applies to the scientific investigation of nature and to the problem of induction. This is because in these models there is no problem definition; prior knowledge is genuinely prior to any experience.

Pending questions: What problem definition applies to inductive inference in science? What happens when our model is extended to cases with infinite observations/theories? Does learning logically require bias, and if so, what bias is universally appropriate and intuitively acceptable?[3]


[1] In fact, not only must there be bias, it must be a bias that exploits structure. Altering the distribution so that it favors more ‘1’s in the sequence, irrespective of previous observations, is a bias, but it does not allow learning. This is another important distinction: the entropy-learnability distinction.

[2] Or after 24 hours if you want to be picky about tautologies

[3] Another, more technical question: can prior knowledge in inference problems always be recast as a bias over theories that make deterministic predictions about entire sequences of possible events (as we saw when noting that the binary sequence model is equivalent to a repeated coin flip scenario)? If so, what property of these distributions allows learning?

When learning is impossible


I’ve previously defined learning as the extraction of generally applicable knowledge from specific examples. In that post I remarked:

An agent may have the ability to learn, but that is not enough to guarantee that learning does in fact take place. The extra necessary ingredient is that the target of learning must be learnable.

Today I’m going to present a model where learning is impossible in the context of Bayesian inference. We will see that in this case not only must the phenomenon be learnable, but also the learning agent must incorporate a bias to exploit existing regularity. Without such a bias, the learner cannot penalize complex “noisy” hypotheses that fit the data.

As components of the model we have an agent, an environment from which observations are made, and theories that the agent reasons about probabilistically as the objects of its learning. For observations we use a binary sequence S in {0, 1}^n, for example

S = 1010111010

The learning agent sees a number of elements and must try to predict subsequent ones according to different theories, which are themselves binary sequences H in {0,1}^n. An important aspect of the model is that the agent considers all possible theories that could explain and predict the observations. The number of theories is therefore equal to the number of possible observation sequences, namely 2^n in both cases. If the agent considered a smaller number of theories, the true theory describing the environment might be left out.

Furthermore, let’s say that, a priori, the agent has no reason to consider any theory more likely than the rest, so it assigns an equal prior probability to each theory:

P(H) = 1 / 2^n

Define the observations up to a given time as Si, where i <= n, and let a given theory be Hk, where k <= 2^n. We can apply Bayes’ theorem to obtain the probability that a given theory is true given the observations (feel free to skip the math down to the conclusion):

P(Hk|Si) = P(Si|Hk)*P(Hk) / P(Si)

and the probability of a given sequence of observations P(Si) is obtained by summing[1] over all theories that yield such a prediction:

P(Hk|Si) = P(Si|Hk)*P(Hk) / Sum(k) P(Si|Hk)*P(Hk)

in other words, summing over all theories that begin with Si. To see exactly what’s happening, let’s restrict the example to n = 4. This gives us a total of 2^4 = 16 possible observation sequences and 16 theories. Say the agent has observed three elements, ‘111’; call this sequence S3:

S3 = 111

Let’s calculate the posterior probabilities of the theories for this case. First, for theories that do not predict 111:

P(Hk|Si) = P(Si|Hk)*P(Hk) / Sum(k) P(Si|Hk)*P(Hk)

but since P(Si|Hk) = 0, then

P(Hk|Si) = 0

i.e. theories that do not predict 111 are ruled out, as should be the case. There are two theories that do predict 111:

H1 = 1110

H2 = 1111

the denominator of the posterior is

Sum(k) P(S3|Hk)*P(Hk)

only these two theories predict the sequence, and they do so with probability 1, therefore

Sum(k) P(S3|Hk)*P(Hk) = P(H1) + P(H2)

Plugging this in, the posteriors are therefore

P(H1|S3) = P(S3|H1)*P(H1) / [P(H1) + P(H2)]

P(H2|S3) = P(S3|H2)*P(H2) / [P(H1) + P(H2)]

since both H1 and H2 predict S3 (P(S3|H) = 1), this reduces to

P(H1|S3) = P(H1) / [P(H1) + P(H2)]

P(H2|S3) = P(H2) / [P(H1) + P(H2)]

but because all theories are equally likely a priori

P(H1) = P(H2) = 1/16

so

P(H1|S3) = (1/16) / [1/16 + 1/16] = 1/2

and similarly

P(H2|S3) = (1/16) / [1/16 + 1/16] = 1/2

So H1 and H2 are each assigned probability 1/2. Because no other theories remain possible and 1/2 + 1/2 = 1, it all works out. Now the agent will use these two theories to predict the next observation:

P(1110|S3) = 1 * 1/2 + 0 * 1/2 = 1/2

P(1111|S3) = 0 * 1/2 + 1 * 1/2 = 1/2

Thus, the agent considers that it is equally likely for the next element to be 1 or 0.

But there is nothing special about the example we chose, with n = 4 and S3 = 111. In fact, you could carry out the exact same calculations for any n and any S. Here’s the key point: the learning agent makes the exact same predictions no matter how many observations it has made and no matter what those observations are. Whatever the sequence of events, it gains no knowledge about the future from the past; learning is impossible.
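To make this concrete, here is a short sketch in Python (following the model as described above; the function name is mine) that enumerates all 2^n theories, assigns them the uniform prior, and computes the predictive probability that the next element is ‘1’ given an observed prefix. The answer is 1/2 for every n and every prefix.

```python
from itertools import product

def predictive_probability(prefix, n):
    """P(next element is '1' | observed prefix) under a uniform prior
    over all 2^n theories Hk in {0,1}^n (requires len(prefix) < n)."""
    theories = [''.join(bits) for bits in product('01', repeat=n)]
    prior = 1 / len(theories)                  # P(Hk) = 1 / 2^n

    # P(Si|Hk) is 1 if theory Hk begins with the observed prefix, else 0,
    # so P(Si) is just the total prior mass of the consistent theories.
    consistent = [h for h in theories if h.startswith(prefix)]
    evidence = prior * len(consistent)         # P(Si)

    # Posterior-weighted probability that the next element is '1'.
    return sum(prior / evidence
               for h in consistent
               if h[len(prefix)] == '1')

print(predictive_probability('111', 4))       # 0.5, the example above
print(predictive_probability('1010111', 10))  # 0.5, whatever the prefix
```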

I’m going to leave the discussion for later posts, but here are some relevant questions that will come up:

Does learning logically require bias? Can one meaningfully speak of theories when there is no compression of observations? What happens when the model is extended to an infinite number of observations/theories? Is this an adequate (though simplistic) model of scientific investigation/knowledge?


Notes/references

[1] I’m using the notation Sum(k) as the equivalent of the sigma sum over elements with subscript k.