Most people have heard of Ockham's Razor. Many accept this principle without ever interrogating it.

In this (yes, rather nerdy) post, I want to go through the processes of understanding, challenging and assessing the Razor in a bid to decide to what extent it should ever be used in philosophy (and, indeed, beyond).

What is Ockham's Razor?

The modern canonical definition goes something like this:

Do not multiply entities without necessity.

Oddly enough, William of Ockham never actually said this – but the sentiment, at least, did come from him.

In philosophy, Ockham's Razor is a tool for assessing the theoretical virtues of a theory, and it is broadly accepted as an important part of the philosopher's toolkit. The Razor guides us, as philosophers, in deciding which theory to prefer.

But it hasn't gone without opposition and should not go without scrutiny. We'll explore Ockham's Razor more deeply by examining the important parts of the requirement not to multiply entities without necessity.

What does it mean to "multiply"?

Prima facie, it's a little worrying to think that in metaphysics we should (all other things being equal) always prefer theories with fewer entities. It is one thing to try to keep one's ontology slim by rejecting, say, the existence of mathematical objects. But if I create a theory of a possible world that contains 1,000 concrete objects, and you respond with a theory that contains 999, it's hard to see why we should prefer yours.

Perhaps there should be more to it than simply saying that we should not multiply any entity without necessity. We can choose either to be very general ("entity") or highly specific ("mathematical objects", "second-order properties" or "concreta"). Perhaps Ockham's Razor is concerned only with not multiplying types of entities without necessity, rather than individual tokens of those types.

We might end up with situations where different theories posit different numbers of types and tokens. One theory might posit 3 types with 100 tokens each; another might posit 1 type with 300 tokens. Ockham's Razor doesn't, by itself, tell us which should be preferred.

There are numerous examples in science, some of which appear to suggest that type parsimony is the more important, and others that token parsimony is.

Daniel Nolan gives an example from the reasoning of the early nineteenth-century scientist Amedeo Avogadro, who was studying the behaviour of gases during chemical reactions. On the assumption that each gas particle is a single atom, combining one volume of oxygen with two volumes of hydrogen should produce a volume of water vapour equal to that of the oxygen and half that of the hydrogen. But that's not what happened in experiments: two volumes of water vapour were produced – twice the amount of oxygen and the same amount of hydrogen. Avogadro therefore proposed that each gas particle is a molecule composed of two atoms; explaining the volume of water produced then becomes a matter of how the oxygen and hydrogen atoms are redistributed among the water molecules.
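
To make the arithmetic explicit (a modern gloss rather than Avogadro's own notation, and granting his assumption that equal volumes of gas contain equal numbers of particles):

If gas particles were single atoms: 2 H + O → H2O, i.e. 2 volumes of hydrogen + 1 volume of oxygen → 1 volume of water vapour.

If gas particles are two-atom molecules: 2 H2 + O2 → 2 H2O, i.e. 2 volumes of hydrogen + 1 volume of oxygen → 2 volumes of water vapour – which is what was observed.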

Now, Avogadro could instead have assumed that the number of atoms per molecule was 4, with each atom at 1/2 the atomic weight; or 8 at 1/4; or 16 at 1/8. Any of these theories would have made exactly the same predictions, only less quantitatively parsimoniously – and he chose the most quantitatively parsimonious version. Notice that the extravagant alternatives posit no additional types of entity, only additional non-fundamental tokens, yet it seems intuitively obvious that the more parsimonious version is to be preferred. This looks like a counterexample to the idea that only type parsimony matters.
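
To see why those alternatives are empirically on a par (again a rough gloss, with the formulas chosen purely for illustration), suppose every gas molecule contained four atoms, each of half the weight:

Two-atom hypothesis: 2 H2 + O2 → 2 H2O (volumes 2 + 1 → 2; six atoms per reaction)

Four-atom hypothesis: 2 H4 + O4 → 2 H4O2 (volumes 2 + 1 → 2; twelve atoms per reaction, each of half the weight)

The observable volumes and masses are identical; the only difference is how many atoms have been posited.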

Before using the Razor, then, we need to be clear about how generally we are carving up the world, and whether we are counting types or tokens.

What counts as an "entity"?

Do we count entity-hood as binary (i.e. something either counts as an entity, or it doesn't) or do different types of entity have different weights?

Let's think about the two opposite ends of the spectrum when it comes to answering the Special Composition Question: when do objects compose? The mereological nihilist believes that objects never compose, so the only type of object is a mereological simple or "atom". But the mereological universalist says they always compose, so there is an object in existence which is the-Moon-and-a-sixpence, and an Eiffel-Tower-and-the-core-of-Venus object.

Other theoretical virtues aside, Ockham's Razor might prima facie suggest that we should prefer mereological nihilism. In most conceptions (setting aside monist or minimal universes), mereological universalism forces us to posit an alarmingly large number of entities. It also forces us to posit a composition relation, such that if a and b are concrete entities, there is some object c which a and b compose.
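
To put a rough figure on "alarmingly large" (assuming a finite world of n simples, and that every non-empty collection of simples has exactly one fusion): nihilism leaves us with n objects, while universalism delivers 2^n - 1, one for every non-empty collection. Ten simples already yield 1,023 objects; a hundred yield more than 10^30.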

But if, when counting up entities, we must decide whether to include or exclude such objects, and what theoretical weight to give them, the outcome could be very different.

Some philosophers, such as Jonathan Schaffer and Ross Cameron, have challenged the unrestricted reading of "entities", preferring to add the qualifier that it is only fundamental entities which should not be multiplied without necessity. This restriction has in turn been resisted, notably by Sam Baron and Jonathan Tallant.

What counts as "necessity"?

Where do we draw the line as to what counts as "necessary"? What if positing an extra entity is not strictly necessary but would be incredibly helpful in allowing us to avoid other theoretical vices, such as counter-intuitiveness?

On the other hand, what counts as a good reason for treating a posit as necessary? If I say an entity is necessary simply because I enjoy positing it, one imagines Ockham would not have been too impressed.

We could even question whether we need the qualifier "without necessity" at all. If we just say "Do not multiply entities", and we end up needing to flout the maxim, we could simply say that Ockham's Razor has been outweighed – by another theoretical virtue, or by the demands of constructing a valid or sound argument.

It's not clear cut, and this is something which must be considered when invoking Ockham's Razor.

The verdict

To the avid enthusiast of theoretical virtue, in clear-cut cases, there is no obvious reason not to use Ockham's Razor to help us decide which theory to prefer. After all, the fewer entities in a theory, the less there is to be wrong about.

However, where Ockham's Razor is used as part of a wider theoretical calculus which might, for example, weigh other virtues such as simplicity, unification and intuitiveness, it's important to think carefully about how we are qualifying the rather open aspects of the principle.

I'm personally more sceptical of overarching theoretical rules of thumb. Our modern conception of physics is hugely complicated and unintuitive, yet few will deny the findings. I find theoretical principles helpful as guides, but not as law.

Further Reading

Stanford Encyclopedia of Philosophy: William of Ockham

Quantitative Parsimony – Daniel Nolan

What Not to Multiply Without Necessity – Jonathan Schaffer

Do Not Revise Ockham's Razor Without Necessity – Sam Baron and Jonathan Tallant