Occam's Razor
It's only a logical deduction, not a legal expression.
Technically speaking, it is a methodological principle.
No explanation should be more complicated than required by the observed data; colloquially, "less is more" or "simplest is best".
How reliable is it? One of the most basic pieces of scientific method or philosophical reasoning. But no, you can't use it in court (where the main interest is in winning for your side, not determining the truth), nor is it much good for shaving with.
Yes. Occam's razor says that in order to claim something you need to have evidence for it. If you have no evidence for something then the claim can be disregarded. This principle does indeed hold in court. Any claims that the prosecution makes need to be backed by hard evidence; if the evidence is inadequate then the claims are dismissed and the defendant walks free. Similarly, if a defendant makes claims to rebut an accusation, then the defendant is expected to supply evidence for the claims and can be cross-examined on them.
What's the evidence that Occam's Razor is "very" reliable?
Life, dear boy, life :-)
Otherwise known as the principle of parsimony. Where you have two or more hypotheses that explain the observed phenomena, the simplest one is to be preferred.
Guaranteed to provoke a murderous rage in girlfriends, when invoked during arguments, I've found :)
Er, no it doesn't. Ockham's Razor says entia non multiplicanda sunt praeter necessitatem, "Entities are not to be multiplied beyond necessity."
Yeah, but that's not what he said down the pub afterwards when he was with his mates....
If Coel wants to contribute to the discussion, he might as well start by getting his facts right.
Surely this is only an interim position though (and experience may suggest that the simpler one is a lot more likely to be correct), but if subsequent evidence supports the less simple explanation then the simpler explanation should be abandoned. Coel's post above does not make sense to me!
Whilst he stated it badly, I think he may have meant that if the evidence doesn't require you to add an extra complication to your theory, you shouldn't, which is a manifestation (particularly relevant to physics) of Occam's razor: i.e. if the evidence doesn't require you to make a claim, you shouldn't.
A lot of people would be sent down on circumstantial evidence if courts took this principle as more than a rough, overrideable guide, wouldn't they?
"M'lud, the prosecution submits that the simplest explanation of this murder is that the accused did it. Therefore he's guilty."
This legal argument is rotten. Why is it rotten? Because the simplest explanation is not always the true one.
He didn't state it at all. He stated a quite different principle.
Amazing. He's been on these forums for years, trying to make out that Ockham's Razor is a knockdown argument. Turns out he doesn't even know what Ockham's Razor is.
It's an a priori probabilistic assessment, but not an a priori absolute position that indicates a necessary truth. Where evidence is compatible with multiple theories, a more complex theory (that posits a greater number of entities) is less likely to be true than a more simple theory (that posits relatively fewer entities). However, there are problems with the razor, and the way it is applied, such as:
- do the entities concerned include abstract entities? If so, what counts as a simple abstract thing versus a complex one? What would such a hierarchy of complexity look like in mathematics, for example?
- people interpret it more as indicating necessity, rather than the relative contingency of possibilities
- can it be proved?
- what does it say about reality if it is proved?
>Where evidence is compatible with multiple theories, a more complex theory (that posits a greater number of entities) is less likely to be true than a more simple theory (that posits relatively fewer entities).
But this is just false, isn't it? Where a body of evidence E is compatible with both T1 and T2, the fact that T2 is more complex than T1 has no bearing at all on which of T1 and T2 is true.
To think otherwise is to confuse *truth* with *theoretical elegance*. Theories can be very elegant indeed, and still false: Leibniz's monadology, for instance. And they can be very messy indeed, and still true: sub-atomic physics, for instance.
I agree that it is overrideable, but generally, as long as the multiple hypotheses presented comply with the other constraints of a court hearing (which your case clearly does not, in that no evidence is presented), the simplest one which explains all the evidence would surely be preferred.
see my last post :-)
No, a priori T1 and T2 are both equally likely, but if you look at the history of science the fact that a theory is simpler seems to be a pretty good predictor of its success. This is of course no guarantee that this trend will persist into the future, but you've got to have some way of prioritising theories to test or you get nowhere.
edit: just seen your last post. The sub-atomic physics argument is interesting, because in fact our current theory isn't messy; it's actually stunningly simple, and is the simplest theory which explains (most of) the current evidence. Occam's razor works very well (experimentally, not a priori) if you include the constraint that the theory must explain the data.
>a theory is simpler seems to be a pretty good predictor of its success
Then why didn't we stick with Newtonian physics, and dismiss the anomalies that led to general relativity as just that, anomalies to be dismissed with a slash of the Razor?
(That might sound like a rhetorical question. It's not: I'm no scientist, I'm not even a philosopher of science. By all means instruct my ignorance.)
Occam's razor used on its own doesn't work well, but if you use it with the constraint of consistency with the available evidence it does. Initially the anomalies were dismissed, as it was simpler to assume the measurement was wrong. Once they were corroborated by more measurements, it became clear that the theory didn't explain the evidence, and you then move on to finding the simplest theory which explains the new set of evidence you're fairly certain of.
Yep. This all fits with my (rudimentary) understanding of the history of science. Thank you.
One key point is that Ockham's Razor (I keep getting halfway to writing "Ockham's Taser", it's really distracting) is an intratheoretical tool. It works against a background of assumptions and within the structure of an already existing theory. There is, therefore, no such thing as a theoretically-unloaded use of it.
Which is probably obvious anyway. What counts as "beyond necessity" outside a theoretical framework? Nothing does. And even within one, it's always going to be a matter of judgement which data-points to take as key counter-examples to a pre-existing theory, and which data-points to disregard as outliers.
I like Occam's Razor as a guiding principle, but it sometimes gets overused, and I like to bear in mind H. L. Mencken's words:
'For every complex problem there is an answer that is clear, simple, and wrong'
Good one. To quotes file :-)
Oh dear. I think you should have kept quiet and waited for Coel to come back and explain himself without presenting him with an easy target!
Occam's razor says that it makes sense to go with the simplest explanation compatible with the current evidence (NOT to ignore inconvenient evidence!). Anyway, it could be argued that general relativity is simpler than Newtonian gravity because it does away with the distinction between gravitational and inertial mass.
>> Occam's razor says that in order to claim something you need to have evidence for it.
So my translation from the Latin is worded slightly differently from yours. Big deal. Both translations mean the same thing.
"it could be argued": and so could the contrary.
"simpler": not, apparently, to schoolchildren.
My general thesis at the moment is that all uses of Ockham's Razor are contestable. You're supplying evidence for that thesis, so thank you.
>> Where you have two or more hypotheses that explain the observed phenomena, the simplest one is to be preferred.
> evidence supports the less simple explanation then the simpler explanation should be abandoned.
Absolutely. After the "subsequent evidence" the simpler explanation no longer "explains the observed phenomena". So, again, you pick the simplest explanation that explains the observed phenomena.
Of course not. But Occam's razor does not say pick the simplest explanation, it says pick the simplest explanation that explains all the evidence.
Ten out of ten for chutzpah, but they don't mean the same thing. Not by miles. Your formulation is not a different translation of that Latin, it's not a translation of it at all.
Your principle about every claim needing to be based on evidence is an entirely different principle from Ockham's Razor, and if you can't see that, you're not really competent to discuss this.
Also, your principle appears to generate a vicious regress.
> T1 has no bearing at all on which of T1 and T2 is true.
I disagree. If E is compatible with both T1 and T2, and T2 is more complex and convoluted, then T1 is more likely to be right.
Essentially T2 will contain lots of extra information that is not mandated by any evidence. That extra information could then only be right by chance, which is unlikely.
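That "right only by chance" point can be put in toy numbers. A minimal sketch, on assumptions that are entirely my own invention (the function name, and the idea that each extra unmandated claim is an independent coin-flip with a 1/2 chance of being right):

```python
# Toy model: a more complex theory T2 adds k extra binary claims that
# no evidence mandates. If each unguided extra claim is right with
# probability 1/2, independently, the chance that all of them happen
# to be right shrinks exponentially with k.

def chance_extras_all_right(k: int, p: float = 0.5) -> float:
    """Probability that k independent unmandated claims are all correct."""
    return p ** k

for k in (1, 3, 10):
    print(k, chance_extras_all_right(k))
```

On these invented assumptions, ten gratuitous extra claims have less than a 1-in-1000 chance of all being right.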
Only if you don't understand the maths!
Because they don't understand the maths! (and nor do I really.....)
An explanation is not complicated just because you don't understand the language (maths in this case) used to expound it.
Have you been eating meat again?
> translation of it at all. Your principle about every claim needing to be based on evidence is an entirely different principle from
> Ockham's Razor, and if you can't see that, you're not really competent to discuss this.
That's a lot of waffle that doesn't actually say why my version is any different from Occam's version. Of course our understanding of these things has improved a lot over time, so I'm stating a more modern version of it, but it is still essentially the same thing.
After all, Occam's razor is just about the only time in history that a theologian got something right! ;-)
If you seriously think that the principle "Every claim needs to be based on evidence" is the same as the principle ""Entities are not to be multiplied beyond necessity", then
1) you are beyond the reach of persuasion
2) I strongly advise you not to try your hand at philosophy
3) there's really no point my trying to argue with you; if you can't see a point this simple, I doubt you can see anything at all.
> anomalies to be dismissed with a slash of the Razor?
This has been dealt with by others, but:
Occam's razor does not say "ignore anomalies to make a simpler theory fit", it says pick the simplest theory that *explains* *all* *the* *evidence*. Physics ditched Newton for General Relativity because of the observational evidence that demanded it.
Second, General Relativity is actually conceptually quite simple (even if it leads to mathematical complications).
Well, that pretty much satisfies me that your statement of the principle is effectively equivalent to Tim's. Thanks! I'll leave you and Tim to it.....
> 1) you are beyond the reach of persuasion
> 2) I strongly advise you not to try your hand at philosophy
> 3) there's really no point my trying to argue with you; if you can't see a point this simple, I doubt you can see anything at all.
Sigh. This is a very theological way of arguing. Any excuse to avoid actually discussing the issue.
"Every claim needs to be based on evidence" =>
"Every claim that an entity exists needs to be based on evidence" =>
"Entities shouldn't be claimed unless evidence demands them" =>
"Don't invoke entities unnecessarily" =>
"Entities are not to be multiplied beyond necessity".
Yes, Tim, I seriously think that those are all the same. And I like trying my hand at philosophy, it's fun. And anyhow, scientists have long laid claim to Occam's razor. It's no longer merely the property of philosophers.
By the way, for anyone interested I happened to write about Occam's razor on my blog recently: http://coelsblog.wordpress.com/2014/01/06/tools-of-science-induction-and-occams-razor/
I was also just laying into William Lane Craig ( http://coelsblog.wordpress.com/2014/01/16/william-lane-craigs-eight-special-pleading-arguments-for-g... ) and noticed this:
Tim is this you?
So now it's *five* obviously different claims that are all "the same"?
You're even confused about what you're confused about.
What a mess.
Nope, five obviously similar claims that all amount to the same thing.
Hardly evasion. You haven't said anything, except present a confused list of claims and make the sweeping assertion that they all amount to the same thing. Nothing there for me to evade.
We were getting on quite nicely on this thread discussing Ockham's Razor and related principles, until you first produced an obvious misstatement of the principle, and then started, to use your lovely phrase, laying into anyone (such as me) who pointed out how wide of the mark you were.
I don't have time to go on trying to put you straight. It's an exhausting business, when you're simultaneously so convinced you're right and so obviously wrong.
Have a nice weekend.
You know Tim, the one thing that continually surprises me about you is how bad you are at actually arguing for and defending your position. I mean, you'd have thought that a Professor of Philosophy would have some basic competence at such, especially on his own turf, but you seem to be trying hard to reinforce some of the less-complimentary attitudes that scientists have about philosophers.
Anyhow, I'm standing by my claim that my statement is effectively the same as Occam's, and that you have given no reason why they are not effectively equivalent. All you've done is make a mere assertion and then evaded every request to defend it.
> "Every claim that an entity exists needs to be based on evidence" =>
> "Entities shouldn't be claimed unless evidence demands them" =>
> "Don't invoke entities unnecessarily" =>
> "Entities are not to be multiplied beyond necessity".
Do all your arrows also go the other way?
I'm not convinced the first one does because there are claims which are not claims that an entity exists (eg the claim that fish can swim).
One last attempt to persuade you of the obvious. Just try and grasp this:
1) Two claims are different claims when they have different meanings.
2) Claims mean the same when (a) the words in the claims are the same words or (b) the words in the claims are synonyms.
3) "Every claim must be supported by evidence" means "Every claim must be supported by evidence".
4) "Entities are not to be multiplied beyond necessity" means "Entities are not to be multiplied beyond necessity".
5) The words in "Every claim must be supported by evidence" are different words from the words in "Entities are not to be multiplied beyond necessity".
6) The words in "Every claim must be supported by evidence" are not synonyms of the words in "Entities are not to be multiplied beyond necessity".
7) So the two claims mean different things.
If you mean to suggest that there's some looser relation than synonymy or identity between the claims, like logical entailment, you'd better show how your principle is entailed by Ockham's Razor. (You'll have a job. It isn't.)
That's a long and formal way about to show what's obvious, but since you want the long and formal way, here it is.
Let me know when you need more from me than "mere assertion" that "Spiders are eight-legged" and "Dogs bark" don't mean the same. An argument of the same form will be available; though most of us won't need it, I'd hate to appear dogmatic.
Can I just stir the pot by pointing out that Occam almost certainly was not the originator of his own Razor, though there is good reason to believe that he was an enthusiast for it and possibly one of the first to codify it.
So the simplest possible explanation consistent with the known data is that Occam picked this up from general philosophical and theological discourse at the time, and expressed it in a pithy (for the time) phrase.
For once, I agree with you. I'm not sure how significant or profound or meaningful this spat is though.
Occam's Razor seems like a useful rule of thumb to use when evaluating competing explanations or propositions. It even has practical applications (particularly in IT, FWIW.) I'm not sure what profound scientific principle is involved though. Much stuff is complicated because it's complicated.
The Hitchens point - claims made without evidence can be equally dismissed without evidence - does seem a different point, one that I also agree with, and I am puzzled by Hitchens and Coel conflating them.
Quite so, it's a complete side-issue. We're only discussing it because Coel insists that he's right about this. Before he got on his "I'm right I'm right" high horse, the discussion was actually getting somewhere interesting.
Me too. Puzzled, too, that he thinks it's dogmatic to say that they're different.
From what he's said so far, I wonder if Coel appreciates that "p is the same claim as q" and "p entails q" are quite different things. ("Dogs bark" entails "something barks"; "dogs bark" and "something barks" are not the same claim. First-year logic...)
Remember, curiosity killed the cat!
See what you have started :)
> which are not claims that an entity exists (eg the claim that fish can swim).
Yes, the arrows go both ways, since in a modern understanding of Occam's razor it is about the information content of statements that describe and explain the universe. Thus the distinction between "claims" and "entities" is unimportant; they are both subsets of that more-general statement.
In this way Occam's razor has been refined since it was originally stated, though it is still much the same thing. This is quite common in science, for example Newton's laws are rarely stated in their original form, but instead in a more refined form, though they're still called "Newton's laws".
Pot, kettle, black.
Read my blog (link upthread) for a discussion of that.
> - does seem a different point, one that I also agree with, and I am puzzled by Hitchens and Coel conflating them.
Why is it different? Occam says that if something is unnecessary (not required by evidence) then ditch it. Hitchens says that if a claim has no evidence supporting it then ignore the claim. It's the same principle.
I don't see that the two versions stated are equivalent:
- The most parsimonious explanation is to be preferred.
- All claims need evidence.
These seem rather different, and certainly not due to different translations. Are you claiming the second is a generalisation of the first?
What does that mean? What relation do these arrows denote?
You really do need to brush up your logic. Statements don't have subsets. That's sets. And statements are not sets.
I am continually surprised, Coel, by the cavalier way in which you, supposedly a master of precise understanding, constantly come out with these hopelessly imprecise, handwave-y assertions.
What do you mean, "the same thing"? Are you claiming that all statements of Ockham's razor are semantically identical? Or that they all logically entail each other?
If it's entailment, then, as you might put it yourself, your failure to provide any evidence of how the entailment goes is noted.
Or is it just that one of these claims leads you to the others by some sort of psychological association?
Come out of your murky world of vague sweeping generalisations, into the pure clean light of philosophical precision...
Again, no it isn't. Or if it is, the onus is on you to show how these two principles are the same.
<=> implies and is implied by
"implies" meaning material implication? Or strict implication? Or some other kind of implication? Or entailment? Or loose, fuzzy, this-makes-me-think-of-that-ness?
More a question for Coel than for you :-)
And to repeat: to show that any of these relations holds between any two claims is quite different from showing that those two claims are the very same claim.
I disagree with that: sentences can mean the same even if the words are different and are not synonyms. Philosophers often get too hung up on mere language. It's our understanding of the universe, the principle behind the razor, that is far more important than the specific words used.
Both statements on Occam's razor above are common-English wordings of a principle that can be stated more precisely in terms of the information content of statements that describe the universe. The differences in wording between them are simply unimportant.
It's quite common in science to describe and state things in several ways that are *scientifically* the same, even if the English-language words have different meanings. One of the things that scientists dislike about school GCSE/A-level science, for example, is the way the marking schemes place great emphasis on the specific words used, when the exact words are not scientifically important and other wording would, to a scientist, be as good.
In a mathematical statement, logical implication
I think you're wrong here. Occam can be applied to situations where there is no evidence at all. E.g. if I am designing a computer program, other things being equal, I will try and simplify rather than make it unnecessarily complex. If legislators are creating laws they should try and apply it too (but of course, being mostly lawyers, they never will.)
To conflate it with scientific methodology isn't helpful, and rather misses the point, I would suggest.
Here's an example of what I was meaning in my last post:
"Y is linearly proportional to X"
"The ratio of the increase in Y to the increase in X is a constant".
Those sentences have different English words, and the different words are not English-language synonyms. However, mathematically they mean the same thing; indeed, mathematically both can be written:
Y = aX + b (where a and b are constants).
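A quick numeric check of that equivalence (the values of a and b and the sample points below are arbitrary choices of mine, purely for illustration):

```python
def y(x, a=2.0, b=3.0):
    """A line Y = aX + b with arbitrarily chosen constants."""
    return a * x + b

# For any two distinct X values, the ratio of the increase in Y to
# the increase in X comes out as the same constant a, matching both
# English phrasings above.
pairs = [(0, 1), (1, 4), (-2, 5)]
ratios = {(y(x2) - y(x1)) / (x2 - x1) for x1, x2 in pairs}
print(ratios)  # {2.0}
```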
The non-parsimonious explanation has extra claims attached that are not supported by evidence. All claims need evidence. The extraneous claims are ditched.
> E.g. if I am designing a computer program, other things being equal, I will try and simplify rather than make it unnecessarily complex.
Well I don't think that that's what Occam's claim is about at all. Occam was originally doing theology, trying to explain the world, as scientists do today. The claim has always been about explanations of the world. Now you can make some rough analogy with designing a computer program, but I don't see that that's what Occam's razor is actually about.
Scientific methodology is exactly the entire point of it!
You now seem to be appealing to a special notion of "mathematically meaning the same thing". What this notion is I have no idea: you haven't said. What relation it bears to the ordinary notion of "meaning the same thing" I have no idea: you haven't said.
Whether you think that your principle and Ockham's Razor entail each other, or "mathematically mean the same" (whatever, if anything, that means), or mean the same in the ordinary sense of the words, or something else, I simply have no idea.
Behind the bluster and the bullshit and the unclarity, you seem to be hard at work trying to cover up the results of a rash statement earlier on. Why not apply something like Ockham's Razor, and just withdraw the rash statement?
Go on, just admit it, you'll feel so much better: The principle that every claim needs evidential support is not the same principle as Ockham's Razor. There might be some sort of argumentative relationship between them. If there is, you haven't shown what. But it's certainly nothing as strong as entailment, still less identity. And again, remember, showing that a entails b is not the same thing as showing that a = b.
It would be nice to get away from all this and back to Ockham's Razor, really!
>> Thus the distinction between "claims" and "entities" is unimportant; they are both subsets of that more-general statement.
The "claims" and "entities" are both sub-sets of the more general set of "statements" about the world.
But you yourself don't seem to know what Ockham's Razor is, actually, about.
This is rubbish, Coel. I don't say that abusively; I'm speaking precisely. To be specific, it's the kind of rubbish that logicians call a category confusion. Entities can be in sets, and so can claims or statements, but entities can't be a subset of statements. That just confuses the semantic category with the category of metaphysics.
As I said up-thread, a more precise statement of Occam's razor would be the principle of minimizing the information content of statements adopted to describe and explain the universe.
Sigh. A statement claiming the existence of an entity is a sub-set of the more general set of statements about the world.
Yes, for brevity I was using a shorthand that wasn't fully rigorous.
What do you mean by "information content"? How do you measure it?
If you can't answer that, then you can't tell whether you are increasing or decreasing your information content.
Still less whether you are minimising it. NB minimising means "making as small as possible". And as I said up-thread, what's possible or not is always a judgement call. Which is one of a number of reasons why Ockham's Razor has no unequivocal and unambiguous extra-theoretical meaning or deployment.
Hmm. Probably not a good move, given how lacking in rigour the rest of your position seems to be.
Are you going to tell us what logical relation, if any, you think holds between Ockham's Razor and your principle?
The number of bits (in the computer-information sense) needed to specify something.
Yes, you reduce to a minimum the number of bits of information (or the length of the minimum string of 0s and 1s).
> unambiguous extra-theoretical meaning or deployment.
Oh I don't know, the different lengths of information strings are pretty unambiguous. And this principle, while not a guarantee, has a good probabilistic justification to it.
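One computable (if crude) way to cash out "the number of bits needed to specify something" is compressed length. To be clear, this is my own stand-in illustration: true algorithmic information content is uncomputable, zlib is just a rough proxy, and the example strings are invented.

```python
import hashlib
import zlib

def description_length_bits(data: bytes) -> int:
    """Crude proxy for information content: bits in the zlib-compressed form."""
    return 8 * len(zlib.compress(data, 9))

# A highly regular "description": one short rule repeated many times.
regular = b"all swans are white; " * 50

# An irregular byte string of the same length, generated by chaining
# SHA-256 so the example stays deterministic but incompressible.
chunks, seed = [], b"seed"
while sum(len(c) for c in chunks) < len(regular):
    seed = hashlib.sha256(seed).digest()
    chunks.append(seed)
irregular = b"".join(chunks)[:len(regular)]

# The regular description needs far fewer bits to specify.
print(description_length_bits(regular) < description_length_bits(irregular))
```

The regular string compresses to a few tens of bytes; the pseudo-random one barely compresses at all.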
Occam's original formulation was a less refined and less general version of how the principle would be stated nowadays.
Typically I have to rebuild geologic stories from the modern evidence using scant data. I can typically do this by several routes of various degrees of complication and likelihood, and typically the simplest is the easiest to sell, and the most likely to be approximately true. This is an excellent example of the principle, but it also shows that it is far from a proof.
What do you think the "normal understanding" is?
Plenty of people disagree with you, given the mathematically precise statements of it by people such as Solomonoff and David MacKay.
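For anyone curious how such a precise statement can go, here is a hedged sketch of a Bayesian evidence calculation in the spirit of MacKay's treatment. The coin data, the discrete grid of biases, and the uniform prior are all my own invented illustration, not anything stated in this thread:

```python
from math import comb

# Two models for a coin observed to land heads 6 times in 10 flips:
#   H1: the coin is fair (no free parameters)
#   H2: the coin has unknown bias b, uniform over {0.1, 0.2, ..., 0.9}
# H2 can fit almost anything, so its prior mass is spread thinly;
# averaging the likelihood over that spread penalises the flexibility.

def binom_likelihood(p: float, heads: int = 6, n: int = 10) -> float:
    """P(data | bias p) for a binomial coin."""
    return comb(n, heads) * p**heads * (1 - p)**(n - heads)

evidence_h1 = binom_likelihood(0.5)
biases = [i / 10 for i in range(1, 10)]
evidence_h2 = sum(binom_likelihood(b) for b in biases) / len(biases)

print(evidence_h1 > evidence_h2)  # the simpler model has the higher evidence
```

With equal prior odds on the two models, the posterior then favours the fair-coin model, even though the biased-coin model contains it as a special case.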
> typically the simplest is the easiest to sell, and the most likely to be approximately true.
In what way is that any different from anything that I've said?
Well a couple of "normal" explanations are
Neither of which talks about evidence and claims. I do think you have got some work to do if you think what you're saying matches what is in those articles. As I (and, it would appear, most) understand it, it is mostly used for choosing between two hypotheses for which there might not be any or much evidence currently, but both of which appear to explain something. Then you can go and get the evidence more efficiently for the one that is (according to Occam) the most likely candidate.
The later parts of this article are quite good
Basically you seem to be advocating a rather stronger version of the Razor (more a sword) than the usual meaning.
The first talks about "hypotheses" rather than "claims". These slight differences in wording are unimportant. The point about evidence is indeed there in all formulations: something cannot be *too* simple, such that it doesn't explain the evidence. E.g. one would not pick a geologic column so simple that it didn't explain the evidence.
Really? Hmm, ok. Well to me it's obvious that my wording essentially matches those articles. For example calling something a "claim" rather than a "hypothesis" really is not important.
> for which ther might not be any or much evidence currently but both of which appear to explain something.
Well if there is *no* evidence for *either* then you pick neither. This is all about when you have some evidence that demands something, and you need to invoke something to explain that evidence. If you have multiple options for that something, you pick the simplest one (that explains the data).
No, it's crucial. A hypothesis (a suggestion, a line of inquiry to check) is quite different to a claim (a positive statement of something), as you well know. If you have two competing hypotheses that both match whatever evidence there is equally well, choose the simpler as a rule of thumb: that is Occam's Razor.
Just thinking about it, if Newton had had both his hypothesis about gravity and Einstein's available to him, Occam's Razor would have suggested choosing his, even though both matched the evidence available to Newton equally well.
I agree, and it gives lots of different formulations of what is essentially the same principle!
So what is this "weaker" or "usual" version?
> claim (a positive statement of something),
Both are of the form "Evidence E is explained by Suggestion S". I don't agree that using "claim" instead of "hypothesis" makes any difference to the principle, but if you like replace all of my usages of "claim" with "hypothesis". Makes no difference as I see it.
Well from that article "To begin with, we used Occam's razor to separate theories that would predict the same result for all experiments. Now we are trying to choose between theories that make different predictions"
> available to him, Occam's Razor would have suggested choosing his, even though both matched the
> evidence available to Newton equally well.
Well that's not fully clear, as Robert pointed out. E.g. Newton's formulation says: "There is inertial mass; there is gravitational mass; these have the same value; there is also energy." Einstein's formulation simply says: "there is mass/energy". Conceptually Einstein's formulation is actually very parsimonious; indeed that is exactly what led Einstein to it: "I wonder, why do we have two different sorts of mass when we could get away with one?".
> the same result for all experiments. Now we are trying to choose between theories that make different predictions"
So we've refined and strengthened the principle as we've understood things better.
Well OK but that *is* a significantly different statement of Occam's Razor, which is where all this started.
In the same way that when we state "Newton's laws" today we use slightly more refined versions than how Newton stated them. Nowadays "Occam's razor" means how we phrase it today. Really, this entire thread has been an exercise in pedantry about minor and pretty irrelevant differences.
At root, Occam's razor is simple probability. The fraction of statements about the world that are actually true is a tiny subset of the set of all possible statements. Therefore the chances of hitting on a true one by accident are tiny. Therefore you need to be guided to the true ones by evidence and therefore you only accept a claim when backed by evidence.
More precisely, the set of information-strings that actually describe the world is a tiny subset (indeed an infinitesimally small subset) of the set of possible information-strings. Therefore ditto ditto.
All the other formulations and wordings are the same general idea stated in many different ways. They are all "Occam's razor".
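The "tiny subset" arithmetic can be sketched directly. A toy illustration of my own (the function name and the numbers are invented): if a correct description needs n bits, and only a handful of the 2^n candidate strings of that length are right, a blind guess almost never succeeds.

```python
def blind_guess_probability(n_bits: int, n_true: int = 1) -> float:
    """Chance of picking a true n-bit description at random,
    given n_true correct strings among the 2**n_bits candidates."""
    return n_true / 2 ** n_bits

print(blind_guess_probability(10))   # 1 in 1024
print(blind_guess_probability(100))  # astronomically small
```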
I admit I've only skimmed through the replies and I expect I'm going to be refuted, but the statement that entities must not be multiplied unnecessarily doesn't say much, except that when we say it we're aware of the context in which it was uttered, i.e. as a means of adjudicating between competing explanations for things in the world.
When we talk about Occam's Razor, it's understood that the hypotheses we apply it to must explain all the observed phenomena - that much is a given; after that the most parsimonious hypothesis wins the prize. The idea of statements supported by evidence is implicit in the enterprise, irrespective of the wording.
I couldn't resist googling this after I wrote the above and this made interesting reading:
This maxim seems to represent the general tendency of Ockham's philosophy, but it has not been found in any of his writings. His nearest pronouncement seems to be Numquam ponenda est pluralitas sine necessitate [Plurality must never be posited without necessity], which occurs in his theological work on the 'Sentences of Peter Lombard'.....
The words attributed to Ockham, entia non sunt multiplicanda praeter necessitatem (entities must not be multiplied beyond necessity), are absent in his extant works; this particular phrasing owes more to John Punch
Yep, good point. "Occam's" razor has actually been stated by many people both before and after Occam. The Latin formulation that Tim placed such emphasis on is a paraphrase of Occam by a later writer. This is just one of a number of reasons why concentration on the specific wording (rather than on the principle) is not warranted.
It is better to state that the non-parsimonious explanations have a lower probability of turning out to be correct than the parsimonious explanation. Therefore we prefer the parsimonious explanation.
Even the parsimonious explanation has claims attached that are unsupported by evidence. If every claim were supported by evidence, it would have a prior probability of one, and there would be no other possible explanations.
So it is a bit confusing to say that all claims need evidence.
when we're talking about evidence, in the context of comparing hypotheses, we're talking about the observed phenomena that the hypotheses purport to explain.
For example: Dave Hartnett's behaviour in respect of Vodafone and Goldman Sachs. A senior government official takes loads of meetings with major corporations that owe billions in tax, and despite the stated policy of his government and rulings in his favour by the courts, he decides, unilaterally, to let them off with a token payment, saving them billions.
The parsimonious explanation is that they slipped him a few quid. The fact that he's still unincarcerated demonstrates that a parsimonious explanation is not sufficient for proof.
It works for science though, because scientific theories and hypotheses are judged primarily on their predictive power. It's not rated so highly by some believers because, in a world so full of arbitrary suffering, any explanation of events in it that posits a loving god behind it all requires a few more entities than one that relies on physics alone. This doesn't mean that one side is correct, but it does make extra work for the believers, as it does for anyone who's trying to sell an extremely unlikely version of events.
The problem with thinking about Ockham's razor in terms of degrees of being simple or parsimonious is that definitions of this vary with taste. Take the example of our failure to understand galactic rotation curves: we have two possible explanations - our theory of gravitation is wrong, or there's a lot of mass around that we can't otherwise detect. An Ockhamist might think that explaining this discrepancy by supposing that the universe is full of dark matter that we can't otherwise detect is multiplying entities by rather a lot. The fact that most scientists prefer this explanation to supposing that our theory of gravity is wrong tells us that they aren't Ockhamists, but Platonists, who like to judge the truth of a theory by its simplicity and elegance.
But whether something is true and whether we should believe it are different things. If we're fitting a model to a set of data, there's a good principle that we use the most parsimonious model that fits the data, and we can make that idea quite precise by maximising the information entropy. But we shouldn't think our maximum entropy model is true. It's the model that is "maximally non-committal". I think this gets closer to the spirit of Ockham's razor - it's not about saying what is true and not true, it's about when we should suspend judgement and wait for more evidence.
Of course William of Ockham was a thirteenth-century divine whose starting point was God and Aristotle, so we shouldn't expect a direct match between his preoccupations, interesting though they probably were, and ours.
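The maximum-entropy point above can be sketched numerically (a toy of my own, with a fixed-mean constraint standing in for "fits the data"): among candidate distributions that all satisfy the same constraint, the "maximally non-committal" one is the one with the greatest information entropy.

```python
import math

def entropy(p):
    """Shannon entropy (in nats) of a discrete probability distribution."""
    return -sum(x * math.log(x) for x in p if x > 0)

# Three candidate distributions over outcomes {1, 2, 3}, all with mean 2.0,
# i.e. all equally consistent with the single constraint we have.
candidates = [
    (0.2, 0.6, 0.2),
    (1/3, 1/3, 1/3),
    (0.4, 0.2, 0.4),
]
# The maximum-entropy choice is the least committal one: here, the uniform
# distribution, which adds no structure beyond what the constraint forces.
best = max(candidates, key=entropy)
assert best == (1/3, 1/3, 1/3)
```

As the post says, choosing the maximum-entropy model isn't a claim that it is *true*, only that it assumes nothing the data doesn't demand.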
> is wrong tells us that they aren't Ockhamists, but Platonists
It's not clear that Occam's razor would favour either solution here. Yes, postulating dark matter is "multiplying entities", but the alternative is to add extras to gravity instead. The main competitor, MOND, has ad hoc extra parameters added to make it fit. That is just as much "multiplying entities" if you interpret Occam's razor in terms of the information content of statements needed to explain things. Making things fit by adding ad hoc complications to gravity is no more parsimonious than making it fit by adding dark matter.
If there were an equally parsimonious alternative theory of gravity that did away with dark matter then Occam would favour it, but so far we don't have one. Also, the dark matter idea has demonstrated much better predictive power than MOND in explaining newer observations of galaxies (MOND tends to need to be re-tweaked every time better observations come along). Further, dark matter neatly explains the results of recent weak-lensing surveys, which MOND basically can't do (without more ad hoc complications).
Then you factor in that the dark matter also enables the whole of the standard cosmological models to work, which have now had a huge amount of success in predicting the structure in the cosmic microwave background, thus putting the current cosmological models on a pretty firm footing (though admittedly without having found the dark-matter particle and without understanding dark energy). MOND simply can't do this, and thus any current alternative to dark matter also entails scrapping much of cosmology. Which is why MOND is currently pretty much an ex-parrot.
So, overall, I don't think that accepting dark matter goes against Occam -- the alternatives are worse.
> Ockham's razor - it's not about saying what is true and not true, it's about when we should
> suspend judgement and wait for more evidence.
I entirely agree. Occam doesn't give you any guarantees and doesn't tell you when something is true. It's not about confirming the simpler model, it's about rejecting an unnecessarily complex one.
As this thread seems rather to have degenerated into irrelevances, you might find these sources useful. At least this one is authoritative:
And the wikipedia article is, I think, fairly useful too, though for some reason UKC won't let me paste in the url for that, so you'll just have to google it.
What Ockham's Razor comes down to is a piece of rough guidance to people who are constructing theories of some domain: "Don't postulate more entities than you need to explain what you need to explain".
This rule of thumb doesn't tell us what counts as an entity and why, nor how to count entities, nor what "postulation" is (hypothesise that there might be such things? Be seriously existentially committed to them? Grant that they're possible?), nor what counts as needing entities, nor what it is that the theorist needs to explain, nor what, in other respects, counts as a good explanation. As such it's a rule of thumb that is consistent with all sorts of tradeoffs between explanatory power, fullness of ontology, and various other theoretical desiderata.
That's why the Razor, all on its own, does hardly any cutting at all; its power, when it has power, comes against the background of a rich set of theoretical and methodological assumptions. Try and use it outside or with no reference to a particular theoretical background, and you will quite likely get nonsense or arbitrariness, because all on its own the Razor is a radically indeterminate guide to theory-construction.
For example, without a defined background, there's no reason why we shouldn't get rid of the whole of modern physics and chemistry on the basis that the following theory is simpler:
"Everything happens if and only if, and also because, the Great Elf wishes it to happen."
That's an extremely simple theory: it postulates just one theoretical entity, the Great Elf, and just one form of explanation, reference to the wishes of the Great Elf. Modern physics and chemistry are far more complicated, both in that they have far more theoretical entities, and in that they have far more forms of explanation. Does that mean we should ditch them for the Great Elf theory?
There is a possible deployment of Ockham's Razor that says "Yes, we should". But that's exactly my point. There are indefinitely many possible deployments of Ockham's Razor. This obviously loopy deployment is just one of them. Choosing how and where to apply Ockham's Razor is a matter of careful judgement. That's why I say the Razor all on its own cuts virtually nothing at all.
I hope that's helpful.
Come on Tim, can you make an effort not to come across as quite so pompous? Others find the specific problem of how Occam's Razor applies to dark matter very interesting: how does OR apply in the modern context of evidence-based theories of universe? I fail to see what could be more relevant than this.
The lady asked about Ockham's Razor; I thought someone should try and give her a straight answer. Nothing pompous about that.
The thing about the Great Elf theory is that it would postulate one more entity [the existence of which we have no good reason to assume] than a similar theory which starts with what's empirically verifiable. But I agree with your point about the dictum attributed to Occam being useless outside of its context - that's kind of the point that I was trying to make earlier, although I lack your erudition.
"Sure we have a good reason to assume the existence of the Great Elf! For the Great Elf explains EVERYTHING, at a stroke, in the simplest possible way! What better reason could there be?"
See what I'm getting at about background indeterminacy? :-)
Now this is a bit off topic, but from a relatively naive perspective, it has often struck me that dark matter looks a bit like black body radiation or the non-existence of the ether wind: a piece of evidence that points towards a really big underlying theory not currently within our knowledge. Paradigm shift type stuff. Do any physicists take that view of the problem?
A better reason would be one that we can extrapolate from. We find, for instance that the great elf tends to cause the tide to rise higher when the moon is full, somebody points out that now we can predict the tides without reference to the Great Elf. Obviously this person is promptly burnt as a heretic, but you get my point. Supernatural entities drop out of the picture as soon as we have sufficient knowledge to dispense with them, because at that point they're surplus entities...
I've got to agree with Jon, the issue about dark matter is actually a pretty direct illustration of your own point, and the theory of Bayesian inference is about the best example of something that looks like an operationalisation of Ockham's razor in a restricted theory domain.
It looks like that to me too, but it's easier to say "we need a paradigm shift" than to begin to understand where one might find one. I suppose quite a lot of physicists might think that if a way to reconcile gravity and quantum mechanics were to be found that might clear up quite a lot of mysteries.
Fine; discuss it then. I just thought there should be a bit more about the principle itself rather than about applications of it. 'Sall...
Thanks Tim and others who tried to explain this. Whilst I am genuinely curious to understand Ockham's Razor, I have to confess that I was also being slightly mischievous in asking about something connected to science, religion and philosophy on this website. It was bound to agitate the usual suspects.
Ooo, get you and your mischief-making. It seems to have worked...
Where's Einstein when you need him?
Bloody Einstein. He's on here sometimes. He's so evasive. Never gives you a straight answer to anything.
Replying to Coel:
Can Occam's razor be used in court?
>> Yes. Occam's razor says that in order to claim something you need to have evidence for it. If you have no evidence for something then the claim can be disregarded. This principle does indeed hold in court. Any claims that the prosecution makes need to be backed by hard evidence; if the evidence is inadequate then the claims are dismissed and the defendant walks free. Similarly, if a defendant makes claims to rebut an accusation, then the defendant is expected to supply evidence for the claims and can be cross-examined on them. <<
Coming to this a bit late, with apologies. I think this isn't quite right at least so far as criminal law, and evidence generally, is concerned.
So yes, the prosecution must prove its case. But one of the things that a jury will be specifically reminded by the judge to bear in mind is that the simplest explanation may NOT be true. For example, where the defendant has obviously lied about a detail (where was he on Tuesday night) the judge will remind the jury that he might have lied not because he is guilty but because he may have some other reason for not saying where he was on Tuesday night. Life is complicated, and Occam's razor is not all that reliable when it comes to claims about things that happened.
The defendant does not have to "claim" anything, and does not have to have evidence for what he says (if anything) in his own defence - it may help if he has, but it's not a requirement and he may be able simply to cast enough doubt on the prosecution's case.
Sorry, that's a nerdy response to a statement about criminal law; more generally, though, while something analogous to Occam's razor is not a bad tool for legal reasoning (simpler explanations may well be better), it doesn't seem to have the same force that it has in the context of postulating entities, which is a rather different matter.
> non-existence of the ether wind: a piece of evidence that points towards a really big underlying
> theory not currently within our knowledge. Paradigm shift type stuff. Do any physicists take that view of the problem?
Yes, both dark matter and dark energy are really big problems. If there is a "dark matter particle" then it is unlike anything we currently know about. We know that it cannot be normal stuff (i.e. "baryonic matter", made up of protons, neutrons, electrons), so there could be whole classes of currently unknown particles. Thus big searches are being made for "dark matter particles".
Of course it could instead be some other fundamentally new type of physics that just "looks like" dark matter. Dark energy is even worse, in that it is not at all understood.
If you had to put your money either on
- finding "dark matter particles" and something turning up for dark energy; or
- some kind of Einstein coming along with a completely new understanding of the problem, probably unifying the very big and the very small with an elegant and completely new conceptual way of framing the whole thing and showing how the incomplete theories of general relativity and the standard model give rise to the illusions of dark matter and dark energy
which would you go for?
> nor what "postulation" is ... That's why the Razor, all on its own, does hardly any cutting at all ...
This goes to the heart of our disagreement. You are taking a very narrow interpretation of the razor, limiting it to specific words, and then, having somewhat emasculated it by doing so, declaring that it doesn't do much cutting.
My version is that Occam's particular formulation was just a specific case of what is now understood as a much more general principle. That general principle is this: The number of actually true statements about the world is vastly smaller than the number of possible statements. Therefore the chances that a statement picked by chance is true is very low. Therefore you need to be guided to the true ones by evidence.
This razor is further refined in terms of the information content of statements (and has been developed by Solomonoff and others into a rigorous mathematical statement).
Now, it is this modern, more general statement that scientists today call "Occam's razor", in honour of Occam, though yes it has been developed and refined since then (doing this sort of thing is very common in science, for example Darwinism is nowadays stated in terms of genes, which Darwin didn't know about).
This more general and modern formulation does not suffer from any of the defects that you [Tim] point to. That's why we use the modern, general formulation.
To me, limiting the term "Occam's razor" to the exact formulation used by Occam (even though he used several different formulations in his writings, and even though the commonest phrasing attributed to him is actually a much later paraphrase) seems rather bizarre. Why? The whole point of the development of understanding over time is that we can nowadays do better.
The specifics of the wording of whichever "original" formulation you choose to pick really are not important. In the modern, general form the razor is powerful and widely applicable and does a lot of cutting and doesn't suffer from the drawbacks that you point to in the one particular formulation that you pick on. What's not to like? (Or is it that you deliberately weaken Occam's razor because you don't like how it might be applied to gods?)
Occam's razor is generally taken to mean "simpler explanations are generally better than more complex ones", so it's really just a variety of the KISS ("Keep It Simple, Stupid") principle. Talking of Newton and Einstein, both had their own variants on the principle. Newton: "we are to admit no more causes of natural things than such as are both true and sufficient …". But I prefer Einstein's constraint: "everything should be kept as simple as possible, but no simpler", which I think gets to the heart of the scientific method.
> That's an extremely simple theory: it postulates just one theoretical entity, the Great Elf, and
> just one form of explanation, reference to the wishes of the Great Elf.
No it is not a very simple theory. As I've explained, Occam's razor should be understood in terms of information content. If you try specifying a Great Elf that is capable of wishing for things to happen and then making them happen by some mechanism, then you need oodles of information to do it.
If you don't accept that then try it. Produce a blueprint for this Great Elf. Produce a working prototype of this Elf if you can. Add up the information needed in your blueprint to make it work, and then point to your working prototype to demonstrate that you've done this correctly and that it does actually work.
For comparison the human brain contains about a petabyte of ordered information. That's 1000 terabytes. How much would your Elf need?
A claim such as "this Great Elf is simple" is just wordplay; it's not a coherent proposition. This sort of claim has all the hallmarks of something that has not been thought through.
As Richard says, the one thing that everyone knows that we need is a unified and consistent theory of quantum gravity. I think it is indeed likely that that requires "an elegant and completely new conceptual way of framing the whole thing".
Is dark matter actually there (rather than appearing to be there because our gravity theory is wrong)? I'd opt for it really being there, because of the clumpiness of it, which fits it being real "cold dark matter" very well. I could of course be wrong on that.
As for dark energy? Well, the "dark energy" is really a property of space-time itself (as far as we can tell), and thus it would (have to) arise naturally in any successful quantum-gravity theory.
Well, it was Troll of the Year before you came back! 9/10 anyway ;)
Nonsense. It is truly fascinating discussion. At least Coel is trying to put the razor on a really firm footing with his information theory approach rather than just playing with words.
If you have a sealed wooden box, sound insulated, with a wooden button on one side and a stick that protrudes out the other when the button is pressed, what is the probability of the mechanism linking the button to the stick being simple and direct versus more complicated and more indirect? Is there any a priori reason at all to favour a simpler mechanistic theory?
I've not been able to keep up with every post on the thread so far, but from the ones I've read it looks like you are broadening the meaning of the razor beyond what it originally meant. Which is fine, but it then risks no longer being Occam's razor, but Hellier's adaptation of it.
In its original form, it is still very much alive in medicine. For example, a patient has condition x, as a result of which they have symptoms a, b and c. They then develop symptom d, which is a less common but possible manifestation of condition x, but could also be a sign they have developed the rare but unpleasant conditions y or z.
You *could* order a whole lot of diagnostic tests to see, but each of these comes with a range of possible hazards and costs, and may not give definitive results. Occam's razor would say it's much more likely that the existing condition is responsible, and the patient should be spared the unpleasantness and anxiety involved in confirming this.
Now the razor is just a principle, not a law, and every so often someone *will* have condition y or z, so it would still be wise to consider them, look for any subsequent progression that would make them more likely, and keep your defence union subscriptions up to date.
But the harms from investigations are not trivial - e.g. it's estimated that 30,000 people die in the US every year from cancers caused by exposure to medical radiation, mostly from CT scans. So looking for the evidence may kill more people than not looking for it.
So Occam's razor serves as a useful principle to guide people in situations where there is incomplete evidence, and there is considerable cost in obtaining more evidence.
This seems a more useful and specific use of it than your very broad "knowledge must be based on evidence".
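The tradeoff in the post above is really an expected-cost comparison. A minimal sketch, with all numbers invented purely for illustration (they are not clinical figures):

```python
# Invented numbers for illustration only -- not clinical data.
p_rare = 0.002        # prior chance the new symptom is really condition y or z
harm_missed = 1.0     # cost of missing y or z (arbitrary units)
p_test_harm = 0.005   # chance the diagnostic work-up itself causes serious harm
harm_test = 1.0       # cost of test-induced harm (same units)

expected_cost_wait = p_rare * harm_missed      # attribute symptom d to condition x
expected_cost_test = p_test_harm * harm_test   # order the full battery of tests

# With these priors the razor's advice (assume the existing condition) wins;
# raise the prior on y or z enough and the conclusion flips.
assert expected_cost_wait < expected_cost_test
```

The razor here is just shorthand for "with these priors, the cheap assumption has lower expected cost"; it carries no guarantee for the individual patient.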
When I put the kettle on, is it more likely that I am making tea or coffee?
You don't know until you know some more about me and my habits with respect to kettle usage. Similarly, you can't start to make judgements about how the universe is likely to behave until you have some experience of it. OR is a principle that seems to be helpful in generating successful theories, not a principle that holds in all cases for a priori reasons.
I think by asking the question why OR seems helpful in developing successful theories, you are really asking the question: why does the universe behave according to a series of patterns which fit together like Russian dolls (Newtonian gravity within general relativity, and Maxwell's electromagnetism within quantum theory)? I have no idea.
Yes, I think there is, by an entropy-style argument. This argument is that there are only a few ways in which the mechanism could be simple, but if it were very complex then there are a vast number of different ways that it could be that complex.
Thus if you have one bit of information then there are only 2^1 possibilities, but if there are 1000 bits of information then there are 2^1000 possibilities.
Now, if you were going to compare specific proposals for the mechanism, meaning not just saying a simple one versus a complex one, but actually specifying the mechanism, then you would be very unlikely to get it right even if you got the degree of complexity right. Thus, even if it was a complex 1000-bit mechanism, and you guessed that it was, you'd still have only a 1-in-2^1000 chance of getting it right. If, though, you guessed it was a simple mechanism, say a 1-bit mechanism, then you'd stand a 1-in-2 chance of getting it right. Thus you're much more likely to get it right by opting for a simple mechanism. (Or, rather, the simplest mechanism consistent with the pattern of stick-responses to button-pushes.)
Having said all that, however, Occam's razor is a probabilistic argument, it is not a guarantee, and if some intelligent agent set out to trick you into getting it wrong by using it then they could easily succeed.
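The bit-counting argument above reduces to a one-liner (a sketch with hypothetical bit-lengths, not anything specific to the box): if a mechanism is fully specified by k bits, a blind guess at the full specification succeeds with probability 2^-k, so the ratio of success odds between a simple and a complex guess grows exponentially in the difference.

```python
def odds_ratio(simple_bits: int, complex_bits: int) -> float:
    """How much likelier a blind guess at a fully specified simple mechanism
    is to be exactly right than a guess at a fully specified complex one.
    P(correct guess) = 2**-k for a k-bit mechanism specification."""
    return 2.0 ** (complex_bits - simple_bits)

# A 1-bit mechanism guess succeeds with probability 1/2; a 10-bit one, 1/1024.
assert odds_ratio(1, 10) == 512.0
```

For the 1-bit versus 1000-bit comparison in the post, the ratio is 2^999: effectively certainty versus never.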
That's an interesting way of looking at it, but I'm not sure it answers the question: is it likely to be the simple mechanism or any more complex mechanism?
One that is pretty widely used and accepted in science! I'm only reporting how it is generally understood by scientists today.
Your specific use comes down to evaluating probabilities. Which is fine, since Occam's razor as scientists use it is just an implementation of probability theory. So I'm pointing to the general principle, you're pointing to a specific implementation in practice.
I don't think that a-priori probability theory (and thus Occam) helps with that question.
It's not an argument at all. It's a rough principle, a rule of thumb, as I explained above, about not committing to more entities in your theory than you need.
In most of your posts on this thread you're nowhere near talking about Ockham's Razor. Fine if you want to talk about something that isn't Ockham's Razor, but you might as well get your names straight, and stop trying to pretend in some vague way that "it's all the same thing really". Because it isn't. And pretending it is obscures all sorts of philosophically vital distinctions. (I've pointed to some of the distinctions I mean by that in the summarising post I wrote above.)
Nowadays it has been developed into a well-founded argument.
What, then, do you think Occam's razor actually is? And why do you pick that version?
Do you think the version you pick has any validity and usefulness? If so, what is your argument for it having validity and usefulness?
This addressed to Coel and Tim jointly. I've just had a very rapid skim read of what appears to have been an unusually interesting debate. Unfortunately, I can't stop for a moment to discuss it as I'm under huge deadline pressure working on my seventh book. What is amusing to me is that I'm about 99 per cent certain that no one had ever mentioned Occam's (goddamned) Razor on UKC forums until I did so in discussion with Coel about a decade ago (definitely before Tim was on the scene), and then Coel took it up, in his characteristically re-defining way, with alacrity. I'm also somewhat baffled to see, when 'information' is discussed so vaguely, and in a similarly re-defined way, that Shannon hasn't been mentioned once in the entire thread (unless the 'search' isn't working properly). Shannon famously admitted that what he meant by 'information' was not the way it was normally understood, and amounted simply to the complexity of 'bits' of 'information' (in his usefully new and interesting sense) and, more importantly, about what happens to that 'information' when it passes through any transmission system. Bit of a side issue, perhaps, but I'm surprised it hasn't been mentioned, because it's at the heart of one of the confusions that typically bedevils present discussions of information and complexity.
There are quite a few people with a scientific background here (I don't actually know what many people's specific jobs are, but they appear to have a good knowledge of scientific method), and none seem to agree with your interpretation, so I would question whether it really is that widely used and accepted.
> Your specific use comes down to evaluating probabilities. Which is fine, since Occam's razor as scientists use it is just an implementation of probability theory. So I'm pointing to the general principle, you're pointing to a specific implementation in practice.
I'm not sure that is the case - I think I'm talking of a fundamentally different usage of the razor to you. Along with others here, I consider it no more than a useful rule of thumb, a principle to hold in mind when taking day-to-day decisions in situations where a number of possible interpretations of incomplete evidence are possible, and one which is very much capable of coming to the wrong conclusion - see Hickam's dictum.
As a reliable means of determining the truth about the universe, as opposed to a way of deciding where to direct resources in situations of incomplete information, it would seem to me to be of limited value. And I'm not persuaded that the use you argue for - that "every claim should be based on evidence" - is at all the same as the meaning it is commonly understood to have: that "where there is limited evidence, which could support more than one explanation, the more parsimonious explanation is usually to be preferred".
You set out a rationale for this upthread, copied below:
> "Every claim needs to be based on evidence" =>
> "Entities shouldn't be claimed unless evidence demands them" =>
> "Don't invoke entities un-necessarily" =>
> "Entities are not to be multiplied beyond necessity".
I don't think step 2 is equivalent to step 3 in this formulation - step 3 is more proscriptive, with the introduction of the word "demand". Step 2 would allow claims for multiple entities based on the available evidence (diseases x, y and z in my example) - evidence which could be consistent with the presence of any of them is there. It offers no guidance as to how to choose which one is most likely to be present, whereas step 3 does - it points in the direction of the existing condition, x.
So you have introduced information into step 3 which was not contained in step 2, and therefore the initial proposition in step 1 is not equivalent to that in step 4. I don't think these are matters of semantics, as you claimed upthread; I think they are describing different concepts.
Apologies if I've misunderstood you; it's always a battle between clarity and brevity in these threads. An example of how you use "your" version of the razor in practice would help clarify what it means.
> Having said all that, however, Occam's razor is a probabilistic argument, it is not a guarantee, and if some intelligent agent set out to trick you into getting it wrong by using it then they could easily succeed.
Okay, I've rephrased slightly. If you have a sealed wooden box, sound insulated, with a wooden button on one side and a stick that protrudes out the other when the button is pressed, and one simple and one complex internal mechanism have been designed ready for implementation, what is the probability of the mechanism linking the button to the stick being simple and direct versus more complicated and more indirect? Is there any a priori reason at all to favour a simpler mechanistic theory?
If this example doesn't conform on the basis of agency, then what does that say about agency and reality:
- why would an agent behave non-naturally, and not conform to the natural behaviour of reality?
- if more complicated theories emerge in reality, does that increase the chance of influence by an agent?
Another example... consider the black box of the ear, and the mechanism involving the transmission of sound inward from the ear drum. I have two competing theories, both identical except that one theory involves two bones and the other involves three. Assuming we don't know the answer and have no further evidence to guide us, which theory would be chosen, and why?
I admit I haven't read the comments leading up to this, but from how this is worded it seems that either you're not talking about Occam's razor, or you've completely failed to understand it.
The principle of parsimony applies when you have a set of observed phenomena and two or more competing hypotheses purporting to explain them. If you have a black box on a table with a button on one side and a stick on the other, and all it does is pop the stick out when the button is pressed, then, for the purpose of predicting the future behaviour of the box with no knowledge of its purpose or its internal workings, you might as well assume a simple mechanism until an event occurs that requires a complex mechanism to explain.
Re: The ear - same detail - bearing in mind that rational people prefer not to speculate when it's possible to verify.
Here's an example of the principle of parsimony:
I considered the evasive nature of your replies to direct questions, and the way you tended to "paraphrase" opposing arguments to better fit your chosen responses, plus the spelling issue; and the most parsimonious hypothesis that explained these phenomena was that you were taking your answers from a crib sheet of some sort. Nothing personal, even though you took it that way; it's just that in order to reconcile your responses on that and other threads with a knowledge of the subjects you were expounding, further entities needed to be postulated.
> with Coel about a decade ago (definitely before Tim was on the scene), and then Coel took it up,
> in his characteristically re-defining way, with alacrity.
I don't remember this incident, but note that the principle of Occam's razor is widely used in science, and I've not "redefined" anything; the version I'm expounding is the one that scientists generally adopt.
When I've mentioned it, it has not been in a "vague" or "re-defined" way.
This is exactly why one should interpret Occam's razor in terms of probability theory, and not just take a narrow interpretation of one specific wording.
>> "Entities shouldn't be claimed unless evidence demands them" =>
>> "Don't invoke entities un-necessarily" =>
First, the "unless evidence demands them" is actually the same as not invoking them "unnecessarily".
But, more importantly, and for the umpteenth time, THE SPECIFICS OF THE WORDING ARE NOT IMPORTANT HERE, it is the understanding in terms of probability theory that actually underlies all of this that matters.
In your Hickam cite: "it is often statistically more likely that a patient has several common diseases rather than having a single, rarer disease that explains his myriad symptoms".
Yes, that is so. Note that in "multiplying entities" Occam was originally talking about entities that exist at all somewhere in the universe. If common diseases are known to exist and be prevalent then invoking them is ENTIRELY IN LINE WITH OCCAM, since they are known to exist. As in, there is evidence for those diseases being common. In Bayesian terms, the prior probability of a patient having those diseases is high.
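The Bayesian point here can be made concrete with a toy calculation (all prevalence figures below are invented purely for illustration):

```python
# Toy illustration of the Hickam point: two common diseases can be
# jointly MORE probable than one rare disease that explains the
# same symptoms. All numbers are hypothetical.
p_common = 0.05    # prior probability of each common disease
p_rare = 0.0001    # prior probability of the single rare disease

# Assuming the two common diseases occur independently,
# the prior probability of having BOTH is:
p_both_common = p_common * p_common   # 0.0025

# "Two diseases" sounds less parsimonious, yet its prior is
# about 25 times higher than the single rare cause:
print(p_both_common / p_rare)  # ~25
```

So invoking several common diseases is not "multiplying entities": the entities (the diseases) are already known to exist, and the priors do the work.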
Which just reinforces the general point, don't focus on particular partially-understood wordings of the razor as the important thing, understand the thing in more general probability terms. Too many people are taking an unnecessarily narrow interpretation of the razor, and then poking holes in that narrow interpretation.
> What is the probability of the mechanism linking the button to the stick being simple and direct versus more complicated and more indirect?
One half for each. Since you've just told me that!
In that situation, no. You've already told me that there are two mechanisms and one will be picked. Given that information we can't do better than 50:50.
To reinforce a point, the only validity that Occam's razor has is in terms of probability theory, so any result needs to be translatable into probability terms.
Yet again a csw response that is full of projection, inference and insinuation, and indeed the flat suggestion of the target's ignorance. And yet there is a concomitant self-confidence, despite the supposed lack of education, delivered with the aplomb of antisocial behaviour, and all played to the crowd for a little ego boost too. I see you're managing it with PMP on another thread too. Nasty.
> involves three. Assuming we don't know the answer and have no further evidence to guide us, which theory would be chosen and why?
If the theories really were identical in explaining data that you then had, then you'd pick the one with two bones. You'd have no reason to complicate to three. The opting for three comes from evidence from dissecting ears.
Note, by the way, that in constructing the question you are using information that the person answering the question is not allowed to have, which is a bit of a trick!
A fairer question would be this. Consider a two-bone explanation. Now consider making it more complex by an amount of complexity equivalent to an extra bone and the extra mechanisms linking it to the other two bones. There would be many options for adding such complications, and most of these would be wrong. Thus adding complexity for the sake of it is very unlikely to have succeeded.
it was these two steps i was referring to:
> "Entities shouldn't be claimed unless evidence demands them" =>
i think you have introduced information in the second of these statements that was not present in the first; and so your 'proof' that your position is the same as the more limited one generally held here via the five statements you set out is refuted.
i (and it would seem most (all?) others) also dispute that the wording is unimportant. the words set out what the concept means, and the words you are using define a different concept than Ockham proposed, and a different one than its common usage (on here, at least). this is not a matter of semantics; the words are different because the underlying concept is different.
so far, you've made some assertions- that the version you outline is the commonly used one in scientific circles; and the logical reasoning chain where you set out why the two positions are the same. i don't think the second of these is true, for the reason i've set out above; and there are many from a scientific background on here, none of whom seem to support your interpretation.
is it possible that your interpretation is one that is widely held in your field, and has taken a technical use which is different from its more broadly held definition? that's certainly the case in my field, there are some words which have a specific technical meaning in that context that they would not have in others, and are actually labels for different concepts.
as i said above, can you cite an example of when you use the razor in conversations with colleagues, or teaching students, that might help make it clear what you are meaning by it. indeed, *do* you invoke it in discussions? i reckon i probably do every couple of weeks, but with the 'narrower' meaning.
Not at all. The comment I linked to seemed to me to be a reasonable example of the principle at work, and it's not like I failed to give my reasons. There's a Japanese proverb that says it's unwise to adjust your hat under an apple tree, or tie your shoelaces in a rice paddy - or, to put it another way, if you don't want people to think you're being dishonest, then you shouldn't act in a manner that suggests it.
> In that situation, no. You've already told me that there are two mechanisms and one will be picked. Given that information we can't do better than 50:50.
> To reinforce a point, the only validity that Occam's razor has is in terms of probability theory, so any result needs to be translatable into probability terms.
Okay. So if nature tells us enough that we know there are at least two, but not more than three, bones beyond the tympanic membrane, there is no reason to think nature would go for the more simple system, even by a very slight bias. So how could this scheme be extended to show what initial conditions would need to be present to indicate the selection of a simple versus a more complicated ear mechanism?
Let's make a comparison. In cosmology there is a widely held "Copernican Principle", named in honour of Copernicus, who of course suggested that the Earth was not at the centre of the universe. He placed the Sun there instead.
In cosmology the Copernican Principle goes much further: it says that nor is the Sun the centre of the universe; indeed there is no centre, no special place in the universe.
Is that exactly what Copernicus said? Well, no (he had the Sun at the centre), but it is a generalisation and development of Copernicus's idea, and thus named in honour of Copernicus for taking that first step.
In the same way, the modern scientific use of "Occam's razor" is a generalisation and refinement of Occam, now understood in terms of probability theory. It wasn't exactly what Occam said, but it's a generalisation of it named in honour of Occam.
Lots of people here are insisting on a much narrower and literal take on Occam, and then pointing out that that version has flaws! So why stick with that version?? This is exactly why people have long since developed a better understanding of it!
> that might help make it clear what you are meaning by it.
I've already done this multiple times on the thread. Here's a copy from up-thread:
"At root, Occam's razor is simple probability. The fraction of statements about the world that are actually true is a tiny subset of the set of all possible statements. Therefore the chances of hitting on a true one by accident are tiny. Therefore you need to be guided to the true ones by evidence and therefore you only accept a claim when backed by evidence.
"More precisely the number of information-strings that actually describe the world are a tiny subset (indeed an infinitesimally small subset) of the set of possible information-strings. Therefore ditto ditto."
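As a rough numerical illustration of how fast that subset shrinks (the assumption that exactly one string matches is made purely for the sketch):

```python
# Among all binary strings of length n, suppose exactly one string
# is the true description of some system (an assumption made purely
# for illustration). The chance of guessing it blindly collapses
# exponentially as n grows.
for n in (10, 20, 30):
    total = 2 ** n      # all possible descriptions of length n
    print(n, 1 / total)

# Already at n = 30 a blind hit is less likely than one in a
# billion, which is why evidence is needed to guide the search.
```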
> the tympanic membrane there is no reason to think nature would go for the more simple system, even by a very slight bias.
No, I don't think there would be a reason to pick the two bones there (especially if we're allowed to factor in our knowledge of the vast amount of historical contingency in evolution).
Note that the number of ways of permuting two bones is two, and the number of ways of permuting three bones is six, so if you start guessing specific mechanisms (without evidence to guide you to it) then you'd be increasingly unlikely to get it right as it got more complicated.
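Those counts can be checked directly. The guess probabilities assume, purely for illustration, that exactly one arrangement is correct and a blind guess is uniform over the arrangements:

```python
from math import factorial

# Number of ways of ordering the bones in a chain:
print(factorial(2))   # 2 arrangements for two bones
print(factorial(3))   # 6 arrangements for three bones

# If exactly one arrangement is the true one, a blind guess
# succeeds with probability:
print(1 / factorial(2))   # 0.5
print(1 / factorial(3))   # ~0.167: extra complexity multiplies
                          # the ways of being wrong
```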
I take your point about the Copernican principle though in that example one can point to several hundred years of published works that develop and expand what Copernicus originally stated.
Can you point me to any literature where Occam's razor is similarly developed? Not trying to score points by asking that; I think this is an interesting subject and would genuinely like to find out more.
Still not persuaded by the arguments you put forward though - and I'm not suggesting that the razor is flawed; i think it does what it sets out to do perfectly.
That's not what I was meaning- I was thinking more along the lines of, 'I was teaching the undergrads about x , and in order to get them to understand why theory a rather than theory b is preferred, I told them about Occam's razor'- do you do anything like that?
I still dispute that (but again, I don't want to get too focused on specific wordings when, as I see it, it is the underlying probability theory that matters).
Your narrower meaning seems to ignore your prior evidence of the prevalence of common diseases. This then requires you to add a corrective dictum else you go wrong. This doesn't seem an optimal way of thinking about it to me.
> about x , and in order to get them to understand why theory a rather than theory b is preferred,
> I told them about Occam's razor'- do you do anything like that?
Sort of, but such attitudes tend to be sufficiently ingrained and part of the culture of physical-science research students that you rarely discuss it formally.
More likely it would be discussed in terms of a simple: "What is the evidence for that?", with the attitude understood that one wouldn't add complications into a model without good reason.
That sort of thing gets discussed a lot, one example being the up-thread example of the respective merits of dark matter versus MOND. The accusation that a model has "ad hoc" features is a common way to criticise a model.
> years of published works that develop and expand what Copernicus originally stated.
I was rather naively under the impression that the same had occurred for Occam's razor, and have spent most of this thread rather baffled that no-one agrees with me!
If your narrow version is not flawed then why do you need Hickam's dictum?
Will check these out and get back to you later, got to go out now...
This is the second draft of my reply. I'm going for tact and diplomacy for this one.
Firstly, nature tells us nothing about the inner ear. All we know about ears from nature is that we have them and we hear with them - everything else has been learned by examining them. By the time you get to the realisation that there are bones in the ear, you're in a position to count them, so your example is, at best, a poor one.
One final thought before I go out...
I think this thread is perhaps an interesting illustration of the way evidence and "rules" to interpret it are seen and understood differently in different fields. The razor as applied to medicine in clinical practice does seem to be a weaker version, but despite that I don't see that as a weakness or limitation. It is never (or should never be...) used as the basis of decision making, but just as a rule of thumb to hold in mind to guard against over-investigation. Both it and its correction are competing all the time, and by holding both in mind a balance, and the correct level of investigation, will (hopefully) be struck.
So the 'weaker' version is not limited in this context; it does exactly what it needs to, and if it didn't exist, I suspect we would have to invent it...
Is this as much a 'cultural' debate as anything else? The same two words may be used in different ways in different areas, and each be a useful concept in their own territory, and unhelpful if taken out of it...?
I'll check out your links later.
Still think there is a flaw in your reasoning in the 5-step logical chain that you posted upthread though...!
It's a long thread, but my 10.45 Sat reply talked about "information entropy" - this is Shannon's concept.
If you've got some noisy data - like a blurred picture, then many possible models can be fitted to it. Which model should you choose? An information theorist might take all the models that are consistent with the data, and then choose the one that has the highest information entropy. This is a powerful image reconstruction tool (the "maximum entropy method").
In some sense, the maximum entropy model is the model that assumes the least possible amount of information consistent with the data you have. It seems to me that this is in some sense a realisation of Ockham's razor. It allows you to remain "maximally noncommittal" - a fine lesson for life, in my view.
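A minimal sketch of that selection rule (the candidate distributions are invented stand-ins for "models consistent with the data"; real maximum-entropy reconstruction imposes the data as explicit constraints):

```python
from math import log2

def shannon_entropy(p):
    """Shannon entropy of a discrete distribution, in bits."""
    return -sum(x * log2(x) for x in p if x > 0)

# Three hypothetical models over four outcomes, all assumed to
# fit some noisy data equally well:
candidates = [
    [0.70, 0.10, 0.10, 0.10],
    [0.40, 0.30, 0.20, 0.10],
    [0.25, 0.25, 0.25, 0.25],
]

# Maximum entropy method: choose the maximally noncommittal model,
# i.e. the one assuming the least extra information.
best = max(candidates, key=shannon_entropy)
print(best)  # the uniform distribution wins here
```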
I think Coel isn't being clear about what he means by "probability".
There are two ways you can think about probability. You can either take the view that the probability describes your state of knowledge of the world. This, in a sense, is a subjective quantity that changes when your knowledge changes. Or you can take the view that the probability is an objective quantity that tells you about the world itself, and represents the frequency that some outcome would take if you repeated the experiment many times. The two concepts are quite different, though they behave in the same way mathematically and (unfortunately) are both called the same thing.
In your box example, you should give a higher subjective probability to the mechanism being simpler rather than complex. If you are then given extra information - that the mechanism was designed by a human, or it evolved from a fish's jawbone, or whatever - then you will amend your subjective probability (Bayes' theorem gives you a precise recipe for doing this), thinking it more likely (in the latter case) that it's got three bones in it. The extra knowledge hasn't changed what was in the box, it's just changed what you know about it.
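That updating recipe can be sketched in a few lines; the prior and likelihoods below are invented purely for illustration:

```python
# Prior belief about the box's mechanism (hypothetical numbers):
p_simple, p_complex = 0.8, 0.2

# New information: the mechanism evolved from a fish's jawbone.
# Suppose that lineage yields a complex mechanism 9 times in 10
# (likelihoods, also hypothetical):
like_simple, like_complex = 0.1, 0.9

# Bayes' theorem: posterior is proportional to prior x likelihood.
evidence = p_simple * like_simple + p_complex * like_complex
post_simple = p_simple * like_simple / evidence
post_complex = p_complex * like_complex / evidence

print(round(post_simple, 3), round(post_complex, 3))  # 0.308 0.692
# The box's contents haven't changed; only our knowledge has.
```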
Coel gets into difficulties by trying to use this method of dealing with subjective probabilities to say something about the laws of nature themselves, rather than our state of knowledge of them. His statement - "At root, Occam's razor is simple probability. The fraction of statements about the world that are actually true is a tiny subset of the set of all possible statements. Therefore the chances of hitting on a true one by accident are tiny." - attempts to use the objective definition of probability, based on counting frequencies, when, since he is talking about our state of knowledge of the world, he's actually talking about subjective probability.
The trouble with trying to use a frequency definition of probability about the laws of nature themselves is that we have to invoke large numbers of copies of the universe in order to count the frequencies. That way multiverse madness lies...
Possibly, yes. When a physicist encounters something like Occam's razor, they immediately start thinking: Is it true? If so, why is it true? Can we extend and generalize it? Are there some deeper ideas behind it?
By thinking along those lines physical scientists have arrived at a more general version and understanding of Occam in terms of information and probability. We still call it "Occam's razor" as a nod to the past, in the same way that we use the term "Copernican Principle", though we could call it the "principle of parsimony" if people preferred.
Indeed, in nearly all cases in science the developments of the past (Galileo, Maxwell, Newton, etc) are nowadays stated in refined and developed formulations, rather than their original. To a scientist what is important is the insights and understanding something led to, not the original formulation.
The opposite of this approach is exemplified by philosophers such as Tim, who stick with a literal reading of an ancient text, ossified for all time. They endlessly discuss those ancient texts, but are not seeking to make progress and leave them behind. (As I said upthread, Tim reinforces the less complimentary attitudes that physical scientists sometimes have about philosophers.)
[The irony being that the specific wording that Tim regards as "Occam's razor" was actually a development and refinement of Occam stated 300 years later. What Occam actually said (among other things) was: "For nothing ought to be posited without a reason given, unless it is self-evident (literally, known through itself) or known by experience or proved by the authority of Sacred Scripture". Obviously we're not going to use that version, are we?]
Now, when it comes to specific tasks, such as diagnosis in medicine, a rule-of-thumb narrower version might prove useful. And not invoking extra diseases without good reason is sensible. However, I submit that this version as you expound it is *not* faithful to the original and is indeed less faithful to the original meaning than my version!
Occam (and the other medieval philosophers who said similar things) were asking whether an entity existed at all, not whether a particular instance of a known entity existed.
For example, if you know of the existence of a beach-full of sand grains, then postulating extra sand grains for another beach is *not* a violation of Occam. That is not what "multiplying entities" is about (it is instead about postulating new and hitherto unknown types of entity).
Thus, if it is known that certain diseases are common and prevalent then suggesting that a patient has more than one of them is *not* a violation of Occam's razor, it is actually quite parsimonious, in that it doesn't go beyond known expectations (though of course if you can explain the symptoms with only one common disease then you should).
So I think that your version -- while it might be useful to you in practice -- is not that faithful to the original meaning.
> since he is talking about our state of knowledge of the world, he's actually talking about subjective probability.
I'm not sure I agree. The bit you quoted is indeed using an objective definition of probability, and the likelihood of alighting on a true statement by chance (and I don't see any problem with the objective-definition nature of that argument). The subjective aspect -- our actual knowledge of the world -- comes in when we try to use evidence to guide us to the true statements.
This, this, one thousand times this.
Also interesting is Occam's Broom. Invented by Sydney Brenner but given a wider audience by Daniel Dennett:
“The molecular biologist Sydney Brenner recently invented a delicious play on Occam’s Razor, introducing the new term Occam’s Broom, to describe the process in which inconvenient facts are whisked under the rug by intellectually dishonest champions of one theory or another.
“The practice is particularly insidious when used by propagandists who direct their efforts at the lay public, because like Sherlock Holmes’s famous clue about the dog that didn’t bark in the night, the absence of a fact that has been swept off the scene by Occam’s Broom is unnoticeable except by experts.”
I don't know if Dennett or Brenner states this explicitly, but you could also use 'Occam's Broom' to refer to the tendency to prefer a simple explanation over a more complex one, even when the more complex one explains the events better.
Occam's razor can be useful but if it leads to inappropriate reductionism it has failed as an intellectual maxim.
>> "Every claim that an entity exists needs to be based on evidence" =>
I agree with Gregor that these are different. Think about a medical context. Disease X has the symptoms a and b and is a very rare disease. Disease Y has the symptoms a and b and is fairly common. A patient comes in to see a doctor and has symptoms a and b.
So, if a doctor claims that the patient has Disease X, that claim IS based on evidence (because the patient has the symptoms that that disease produces) but the evidence does NOT demand that the claim be made (because another, more common disease, Disease Y, has the same symptoms).
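In Bayesian terms the two propositions come apart exactly here. A sketch with invented prevalences:

```python
# Hypothetical figures for the scenario above.
prior_X = 0.001   # rare disease X
prior_Y = 0.05    # common disease Y
# Assume both diseases produce symptoms a and b with certainty:
like_X = like_Y = 1.0

# Posterior odds of X versus Y given the symptoms:
odds = (prior_X * like_X) / (prior_Y * like_Y)
print(odds)  # ~0.02: Y is fifty times more probable

# The claim "the patient has X" IS based on evidence (the symptoms
# fit), but the evidence does NOT demand it, because Y explains
# the same symptoms with a far higher prior.
```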
OK, but Occam's razor is not about evidence demanding that a claim be made, it's about requiring claims to be supported by evidence.
Your scenario is roughly comparable to this: I know that Fred drives a car (= symptoms a and b). Do I then have to conclude that he drives a Ferrari (= has rare disease X)? No I don't, because there are other cars he could be driving (= has disease Y).
In other words I entirely agree with your scenario, but I don't see why it is at odds with anything that I've said.
What my scenario shows is that the same claim (that the patient has Disease X) simultaneously meets your first proposition but not your second. Which is why I disagree that the first sentence in your progression here:
""Every claim needs to be based on evidence" =>
"Every claim that an entity exists needs to be based on evidence" =>
"Entities shouldn't be claimed unless evidence demands them" =>
"Don't invoke entities un-necessarily" =>
"Entities are not to be multiplied beyond necessity"."
means the same thing as the last. The break in logic is between steps 2 and 3, which are not the same thing (as I was just showing with the disease scenario).
But this mistakes what Occam's razor was originally about. In talking about whether "entities exist" or whether one should "multiply entities" we're talking about whether these entities exist at all, not just whether a specific example of them does exist.
Thus, one should not claim that Disease X exists unless the evidence demands it. Similarly one should not claim that Disease Y exists unless the evidence demands it. It is that that I was talking about in my sequence. The discussion of "does a particular patient have X" is actually not what Occam's razor was originally about, and as I argued a few posts ago is a colloquial use of it that is not that faithful to the original.
If you like, re-phrase my 2 and 3 as:
> "The existence AT ALL of an entity shouldn't be claimed unless evidence demands it" =>
I don't think adding "at all" gets you anywhere. These two:
> "The existence AT ALL of an entity shouldn't be claimed unless evidence demands it" =>
are still different concepts. For example, before 2012, the claim that the Higgs boson existed at all was based on evidence, but the evidence did not demand the claim that the Higgs boson existed at all.
> the evidence did not demand the claim that the Higgs boson existed at all.
Again, it's not about evidence demanding the *claim*, it's about evidence demanding the thing that you are postulating.
Before 2012 the claim of the Higgs was based on evidence.
And, before 2012, the evidence did demand something such as a Higgs.
But, more basically -- and my central point in all this thread -- ALL of these wordings are just colloquial implementations of the deeper point about probability. It's not worth all this analysis of the specific wordings of various forms of Occam's razor. If you want a precise statement to analyse then turn to the statement in terms of probability and information content. The whole point of developing that is to put the thing on a sound footing. *That* is now "Occam's razor", what it has been turned into as it has been refined and generalised. The more colloquial maxims are not important.
"Such as" is the key phrase in your reply. And the claim v. thing distinction isn't helpful; I could rewrite the whole example to say that the evidence did not demand the Higgs boson itself.
I'm only trying to point out that your formulation of "every claim needs to be based on evidence" is not equal to, and not even a variant of, "entities are not to be multiplied beyond necessity." It's simply a different concept. Which is fine, but you were trying to show that they were the same.
I repeat, and for the thirty-sixth time in this thread: the precise and rigorous form of Occam's razor is in terms of probability theory and information content. If you want to attack Occam's razor attack that.
ALL of the colloquial phrase versions are just reflections of that. None of them are precise and rigorous versions. That's why people developed the more precise and rigorous versions. That's why I have never on this thread cared about the detailed wording, regarding them all as roughly equivalent -- because if you want a proper understanding of it with a precise and rigorous form you turn to the version in terms of information content.
> It's simply a different concept.
No, they're both expressions of a concept that can be and has been stated more precisely and rigorously in terms of information content.
If you thought that my five phrases up-thread were intended to be a rigorous logical argument then, sorry, they weren't. It was an "I don't care about the pedantry; they're basically the same thing; I don't care about the minor details in wording, because this stuff is not important". Wasn't that obvious?
As I've said repeatedly, all these colloquial versions are much the same thing, but they all are just expressions of the more precise and rigorous version. That's why I have repeatedly pointed to that version.
I do admit, though, that I made one major mistake on this thread. I assumed early on that most people would be familiar with the development of Occam's razor in terms of a more rigorous information-content version, because that's how physical scientists generally think, and physical scientists are generally aware of that and treat Occam as obviously just an implementation of probability theory.
I genuinely didn't realise that many other people don't think like this, and regard the particular colloquial wordings as all important (I thought that Tim was just being an arse as usual in making an issue of the original wording).
But, can we all be clear now:
OF COURSE the various colloquial phrase versions are not precise and rigorous, and it's easy to poke holes in them. That's why the razor has been developed into a more precise and rigorous version.
Is that a yes or a no?
I think it was a hissy fit! ;P
> One simple and one complex internal mechanism have been designed, ready for implementation. What is the probability of the mechanism linking the button to the stick being simple and direct versus more complicated and more indirect? Is there any a priori reason at all to favour the simpler mechanistic theory?
Given that you're presupposing two mechanisms 'prepared', presumably by somebody sensible intending to produce a reliable mechanism and taking account of the cost of goods and profit margin, then, yes, in the absence of other considerations, of course there's an a priori reason to favour the simpler solution as more likely. It's not something to which you can ascribe a specific probability, though. Occam's Razor can't see through solid objects or predict the future.
More broadly, the point is that Occam's Razor is usually applied a bit less formally to guide the construction of scientific models. If your mysterious box was found inside a pyramid or in the wreckage of a spacecraft and was equally mysteriously impervious to x-rays or MRI, then we might propose alternative models of how it worked and what its internal mechanism might be. Occam's Razor would favour the simplest model that accounted for all the known facts, but of course, it's limited because we might not have all the facts. Including the possibility that it is an alien child's puzzle with an improbably complex internal mechanism!
Well - since everything you've said on the subject seems to indicate that you don't understand what the principle of parsimony is, it's hardly surprising that someone might feel the need to state their position more firmly.
I understand how you might feel threatened by a principle that says your deity, or any deity for that matter, is not needed to explain the workings of the universe. Your problem is this, though: Occam's razor, the principle of parsimony, call it what you will, works. I mean, if you take a look at the predictive power inherent in the scientific method compared with the theological method, the winner should be obvious. If I listened to clerics then I'd never live in Preston. Too close to Blackpool, and all the gay sex and promiscuity going on there ought to be triggering earthquakes and hurricanes every week - except the wrath of the almighty seems to be somewhat localised with regard to disasters of that type.
You can play semantic games all you like. Nobody is going to dismantle the LHC on the strength of an argument you got from a creationist website. Those arguments are there for the faithful; they're not for critical minds.
> More broadly, the point is that Occam's Razor is usually applied a bit less formally to guide the construction of scientific models. If your mysterious box was found inside a pyramid or in the wreckage of a spacecraft and was equally mysteriously impervious to x-rays or MRI, then we might propose alternative models of how it worked and what its internal mechanism might be. Occam's Razor would favour the simplest model that accounted for all the known facts, but of course, it's limited because we might not have all the facts. Including the possibility that it is an alien child's puzzle with an improbably complex internal mechanism!
What I'm interested in doing is establishing some minimum conditions for the applicability of the razor. Is it just a fad? Or is it established on logical, probabilistic or inductivist grounds, in which case can it be articulated as such? How can something "usually applied a bit less formally" be true / applicable? Don't get me wrong, I understand it and have used it in science and medicine numerous times. However, while I understand what it is, and how to use it superficially, I am not sure what it is physically / substantially, whether it can be proven, and on what substantive ground it resides.
- Can the razor be proven?
- Does it apply equally to abstract entities as to non-abstract ones?
- Can it be written in mathematical logic language?
- people often interpret it as indicating necessity, rather than the relative contingency of possibilities
- what does it say about reality if it is true?
- is the razor more applicable to basic aspects of reality, as distinct from reality that is perturbed by agencies like us? In which case, what is the distinction: are we somehow non-natural, and what is the difference in character between us and reality that the razor then distinguishes (if it does)?
You really can't help yourself, can you.. ..slipping straight back down into that sociopathic nastiness of yours and.. ..as a semi-ex pentecostalist.. ..I suspect you're the one actually likely to have spent time on creationist websites.
Jimbo, you can't really complain about what I say to you without coming across as a complete and utter hypocrite, in the light of the snide comment of yours that it was in response to.
Besides, I may be a tool, but I'm a sincere tool, I'm not about to pretend that I think you're engaging in honest debate, when the evidence of my eyes suggests otherwise. Surely as a man of faith you can understand that?
It is never an absolute guarantee, it is always a probabilistic argument. But I think that that probabilistic argument has a sound footing.
> How can something "usually applied a bit less formally" be true / applicable. Don't get me wrong, I understand it and have used it science and medicine numerous times. However, while I understand what it is, and how to use it superficially, I am not sure what it is physically / substantially, whether it can be proven, and on what substantive ground it resides.
> - Can the razor be proven?
I think you need to talk to Coel!
As far as I'm concerned, it's common sense. Given that many systems (physical and biological) are constrained by efficiency requirements (i.e. greater efficiency is, by one mechanism or other, favoured), it's rational to assume that the simplest solution that fulfils all the requirements is the most likely. Exactly how much more likely is not something that the principle, as originally expressed and as usually applied, is much concerned with. You may be able to work it out when you know the exact entropic and mechanistic constraints of the specific system.
It's a bit like asking whether looking at a map and taking the shortest route is always going to get you to your destination in the shortest time. Obviously the answer is no, but assuming that a much longer route will be quicker, in the absence of any specific facts about road works and traffic jams, is not likely to be very efficient. Asking for a mathematical proof of this makes no sense, although if you had enough data you could easily show the probability of the shortest route being quickest within specific constraints. That's more stamp collecting than science, of course.
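That probabilistic footing can be made concrete with a toy Bayesian model comparison (a sketch with made-up coin-flip data, not anything from this thread): a model with a free parameter has to spread its predictions thinly over many possible outcomes, so when a simpler model fits the data, it wins on marginal likelihood. This penalty is sometimes called the "Occam factor".

```python
from math import comb, factorial

# Hypothetical data: 10 heads in 20 tosses of a coin of unknown fairness.
n, k = 20, 10

# Simple model M0: the coin is fair -- no free parameters.
p_data_m0 = comb(n, k) * 0.5**n

# Complex model M1: bias p is a free parameter with a uniform prior.
# Marginal likelihood = integral of C(n,k) p^k (1-p)^(n-k) dp = 1/(n+1):
# the free parameter spreads M1's predictions over all possible head counts.
p_data_m1 = comb(n, k) * factorial(k) * factorial(n - k) / factorial(n + 1)

print(f"P(data|M0) = {p_data_m0:.4f}")                      # ~0.1762
print(f"P(data|M1) = {p_data_m1:.4f}")                      # ~0.0476
print(f"Bayes factor M0:M1 = {p_data_m0 / p_data_m1:.2f}")  # ~3.70
```

The fair-coin model has no knobs to tune, yet it assigns the observed data nearly four times the probability that the flexible model does: flexibility is automatically penalised, which is one formal reading of the razor.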
Biology is one of the areas where the simplest explanation often turns out to be wrong, for two reasons. One, we almost never know all the relevant requirements when we are looking at new systems - think of intracellular signal transduction, for instance. We know this and think up the simplest plausible model based on our current knowledge, confident that it will almost certainly be wrong or, at least, not the whole story. But it's a start, and it allows testable hypotheses to be developed. As we find inconsistencies and make new observations we add extra bits to the model. Often, at some point, we think it's all just got too complicated and have a flash of inspiration that suggests a completely different model - more complex than our original one but much simpler than the mess we ended up with. And so on.
The second complicating factor in biology is that living systems arose by evolution. They started off simple and kept being tweaked to adapt to changed circumstances. Living systems don't have the option of redesigning from scratch to develop a simpler system but, of course, if the system ends up being too inefficient or prone to break down it eventually tends to be outcompeted by something else.
However, it does mean that networks and mechanisms can end up being somewhat baroque, with loops and branches that would be difficult to predict. Even worse, these complexities sometimes have potential for further functions and turn out to be useful after all.
I guess in physics it all comes down to entropy or something but that's not my area.
> Jimbo, you can't really complain about what I say to you, without coming across as a complete and utter hypocrite in the light of the snide comment of yours, that it was in response to.
It was a joke, and doubly indicated as such for the more immune to such things.
I'm not engaging in debate with you at all.
Sorry, I've come to the discussion late, but I wanted to add something regarding dark matter and dark energy, because I think this actually shows a case where Occam's Razor currently fails.
Essentially, if you take Occam's Razor seriously, the above problems are completely solved.
For dark matter, all we need to do is add ONE extra particle to the 17 (61 depending on how you define particle) that already exist in the standard model. This particle follows all of the current laws of physics and is in no way special. Simply by doing this, you agree with ALL of the experimental evidence for dark matter. Furthermore, a 'natural' parameter choice for this particle means that it would not have been detected so far.
For dark energy, all that is required is a very similar perturbation. We simply add an extra force to the four that already exist. Again, this force follows all of the current laws of physics and agrees with all current data.
So why do very few physicists think that this 'new minimal standard model' is correct? In my opinion, it's because the current belief is that beauty overcomes simplicity. Perhaps a final theory will be more simple, but all current attempts to solve these riddles lead to theories with far more complexity.
Jimbo, you don't engage in debate with anyone, you just parrot other people's arguments and dodge issues you can't look up a suitable response to.
Or at least that's the simplest explanation for your responses on here....
> Jimbo, you don't engage in debate with anyone, you just parrot other people's arguments
Please show your evidence. Otherwise, crawl back to your fundamentalist sociopathic hole.
Is there really enough of a consensus on this to say that? Surely the majority opinion is indeed that a dark-matter particle along the lines you suggest is the most plausible model?
As for dark energy, no-one claims to really know. Though, given that a model with the current four forces predicts dark energy, just with a hopelessly wrong value, tweaking that to get the right value seems as parsimonious as adding a new force. After all, the main problem with dark energy is not explaining it (that's easy to do, as the expected zero-point energy of the vacuum), it's explaining why it has such a small value.
see the link above
Also, it rather makes my point that you chopped off the bit that says "Or at least that's the simplest explanation". You're altering what I actually said, to better fit the reply you want to make.
I'm not sure what you're on about, though it seems to be more of a Terry Pratchett fantasy than anything to do with reality. I really haven't a clue what you're talking about.. ..but I suspect it is probably pretty nasty.. ..again.
Sorry, perhaps I wasn't clear enough. I agree that majority opinion is that there is a dark matter particle. What I meant to add is that the majority opinion is that this dark matter particle will come along with a whole zoo of new particles (e.g. supersymmetry, extra dimensions, etc.). If we were simply to apply Occam's Razor, we would only add the single particle required and say that we are finished.
Regarding dark energy, I agree that an even simpler solution is to simply set the vev and I would again say that this is the solution given by Occam's Razor. Any 'explanation' I have seen adds many more parameters into the game.
The point that I was trying to make is that both these very simple solutions are ignored. You said right at the top that Occam's Razor is 'very' reliable. If so, why are physicists ignoring the principle in the two biggest fundamental questions we currently have?
I'm not sure they are being ignored; they are being explored along with a range of other suggestions. Certainly searches for dark-matter particles could detect one particle as readily as a zoo of them. I guess the expectation of a whole range of supersymmetric partners comes from other lines of evidence (though one can argue about whether those are valid). Occam's razor needs to be applied to the theoretical explanation as much as to the number of physical particles, and thus theorists seek explanations rather than ad hoc additions of new particles. Isn't one of the favourite explanations of dark energy simply declaring lambda to be small, and saying that that is because of the anthropic picking out of a small lambda from a multiverse? That is fairly parsimonious under Occam, and certainly attracts a swath of theorists.
Wouldn't supersymmetry actually greatly simplify things in that, although there would be more particles, there would be fewer arbitrary parameters (masses, charges etc). I thought that was the appeal of supersymmetry; it satisfies Occam's razor. A single new particle would just add parameters to the standard model (Coel will correct me if I am mistaken!)
That's the idea, yes, it gives a unified account of the particle types, rather than having an ad-hoc list of particles. Of course one can argue about how well it succeeds, and so far it is unproven.
I think you give yourself insufficient credit as a scientist, and someone with a wealth of experience!
But is it? As you outline in biology, the simplest explanation is often wrong, and my experience is certainly more consistent in that direction. So in this case, the "common sense" view appears a triumph of hope over reality, and something of a delusion. So, from where the impetus?
It seems to me you are confusing two different things: reality and explanations of reality. I don't think anyone is claiming that biology has to be simple - just that your best chance of being right - or more nearly right - at any interim stage on the journey from complete ignorance to complete knowledge, is to develop models with no more parameters than are necessary (but no fewer than are sufficient).
Occam's Razor is not a proof, or anything you need to doff your cap to; it's just a rule of thumb for finding the shortest path to the best explanation of reality, no matter how simple or complex reality ultimately happens to be. The reason why it works has been outlined by Coel in terms of probability several times.
Rules of thumb can be ignored if there are good reasons for doing so. I certainly allow my intuition to overrule a simple application of Occam's razor when it comes to judgements of people - this is because experience has taught me that a million years of evolution can hone an edge so sharp that a single deployment is worth a decade of Occam's cuts. I would very much doubt that similar intuition could be usefully applied to particle physics or cell biology though.
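The rule-of-thumb character shows up in everyday model fitting too. As an illustrative sketch (hypothetical noisy data; the Bayesian Information Criterion is just one standard way of penalising extra parameters), compare fitting a straight line against a needlessly flexible quintic:

```python
import numpy as np

def bic(x, y, degree):
    """Bayesian Information Criterion for a polynomial fit; lower is better."""
    coeffs = np.polyfit(x, y, degree)
    rss = float(np.sum((np.polyval(coeffs, x) - y) ** 2))
    n, k = x.size, degree + 1          # k = number of fitted parameters
    return n * np.log(rss / n) + k * np.log(n)

def line_beats_quintic(seed):
    """Simulate noisy straight-line data; does BIC prefer the line?"""
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, 1.0, 20)
    y = 2.0 * x + 1.0 + rng.normal(0.0, 0.1, size=x.size)  # true model: a line
    return bic(x, y, 1) < bic(x, y, 5)

wins = sum(line_beats_quintic(s) for s in range(100))
print(f"BIC preferred the straight line in {wins} of 100 trials")
```

The quintic always fits the sample more closely, but the criterion charges it for its extra coefficients, so the simpler (and here, true) model is preferred in nearly every trial. Probabilistic, not guaranteed, exactly as the post above says.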
I did, just to confirm that your statement was wrong...
entia non sunt multiplicanda praeter necessitatem was my recollection of the Latin expression of Ockham's razor, which, according to wiki, is attributed to John Punch...
Although I grant you that Coel's statement is stretching the razor somewhat...
The razor also addresses likelihoods, not certainties; whilst complicated answers may be unlikely, they are not impossible.
There's also an argument that Punch's expression is merely good advice for those dealing with bookkeeping or other arithmetic disciplines...
Why? Why do that unless there is a reason to? What reason? Presumably because you have a greater chance of being right. If this rule gives us a greater chance of being right then it must be because it's a rule that brings us closer to reality than we might otherwise be.. ..in other words, it is a rule that says something about reality. Which was why I asked the question I did in my first post.. ..what does it say about reality? Also, you've expressed an interesting form of the razor (no more than necessary, no fewer than sufficient). In this double-edged form, does x (the number necessary) exceed y (the number sufficient), or does x always = y?
(Just to stick my head above the parapet re. this very old subject) … The exact truth is that the meaning of OR has shifted through the ages. O himself was talking about keeping assumptions in hypotheses to a minimum … surely a sensible enough principle. Aquinas then hardened it up slightly so that it became a discussion about principles; then centuries later (C17th) Punch started talking about 'entities' (I think rather a logically unjustified leap, with potentially unscientific consequences); and then the likes of Bertrand Russell cast the latter into tablets of stone. So it now seems to have been reincarnated as a rather dangerous a priori metaphysical precept rather than a merely sensible philosophical or scientific principle or guideline. In its original sense it suggested the logical dangers of making rash and unjustifiable assumptions about the existence of such 'entities' as 'multiverses' (or should that be 'a multiverse'? … they can't answer that, because it's a nonsense to start with), without seeing that any discussion about it has exactly the same unscientific and irrational status as theology, because it rests on a huge prior 'assumption' about the universe (in its proper, original sense of 'all we know' … what was traditionally called in philosophy 'the world', i.e. in a Wittgensteinian sense). O's original R was useful in that it warned against overcomplicating nature; the complementary warning against oversimplifying was later famously expressed by Einstein as 'Everything should be made as simple as possible, but no simpler' (in conversation, though his exact words aren't recorded … and, even if he didn't say it, it's a very useful scientific principle). This actually got dubbed 'Einstein's Razor'. Worth remembering, because nature often turns out, frustratingly, to be more complicated than scientists have hoped - blatant examples being the fields of subatomic particles and microbiology.
We could probably do worse than start a whole new thread about the dangers of talking vaguely about 'entities' … without having the foggiest idea what we mean. I.e. without going back to the clarity of the Greeks in their discussion of 'ousia' (translated appallingly badly as 'substances' by medieval scholars), meaning 'beings', i.e. things, i.e. material things. Nothing else being allowed to be a 'thing' or 'entity'. Certainly a piece of (equally nebulously defined) 'information' such as a computer 'bit' would not be allowed as a 'thing', because information is always *about* things, and so presupposes material things (almost certainly, at a minimum, a collection of particles, or a 'collapsed wave', rather than individual 'particles'). Things being essentially about the macro physical world, and not the underlying world of potentialities, energy etc., which always have to be 'in some form or other' to have a sustained macro physical existence.
(Half asleep, in case you hadn't guessed! … time for bed. Don't take too seriously, please.)
PS. to above. If someone says in answer to my quip above re multiverses (or a multiverse?), "there is (simply) a multiverse", then I would argue that we are left with a proposition that has exactly the same beyond-reason status as "there is a god". Because the very embarrassing truth about both such statements is that they could easily be wrong, or just as easily wrong as right. Ya? (speaking from Konigsberg, or possibly, Edinburgh, or even Cambridge.)
PS2. E's Razor of course contains both Occam's razor and his own, so we're left with a very useful double warning against both overcomplicating and oversimplifying nature with our hypotheses. Two sides of the same rational coin really.
PS3. Golly gosh, that'll teach me to read a thread more closely in future! I hadn't realised (honestly) that my own (twin) brother had made the very same point on Saturday. Clearly something in this genes thing after all :))
But Occam's Razor doesn't say that the answer is simple. It says (or rather implies) that the simplest explanation that seems to work is a good place to start. 'As simple as possible' can still be pretty damn complicated but only if that's what the evidence requires.
Gordon puts it beautifully in his post when he says that OR seems to be becoming "a rather dangerous a priori metaphysical precept rather than a merely sensible philosophical or scientific principle or guideline". I have no idea whether a strict mathematical interpretation makes any sense at all. I can see that there are entropic and biological selective principles that favour simplicity (though also inevitable sources of complexity, too, in biological systems at least) but that's all rather beside the point in my view. I don't think Occam had any view about how complicated the universe would turn out to be.
Well yes. They could be wrong, but the status of the two is not exactly the same, is it?.. ..as the complexity of multiverses is a complexity that "becomes" as a result of the expression of more basic elements purported to exist (strings)?
> Gordon puts it beautifully in his post when he says that OR seems to be becoming "a rather dangerous a priori metaphysical precept rather than a merely sensible philosophical or scientific principle or guideline".
Sure, it doesn't say the answer *has* to be simple.. ..however, if it is at all a successful strategy, even at the probabilistic level, then it must conform to the a priori view that simplicity is more likely to be right than not. Otherwise, what on earth use is it? And why shouldn't we throw it out?
I could quite see that the razor is a useful strategy that reflects the nature of the reality under analysis, and that biological complexity might not yield to such a strategy in the same way that the more basic physical reality does. However, if that is true, it cannot be guaranteed that the strategy is a useful one. And it certainly means that it is not a strategy that can be appealed to with authority to decide on questions like god.
Same question as I posed above..
You've expressed a double edged razor (no more than necessary, no fewer than sufficient). In this double edged form, does x (the number necessary) exceed y (the number sufficient), or does x always = y?
The problem being that we never deal with the "thing", but always with the abstraction of it.. ..because when we think about, and discuss, a thing, we have to be thinking about and discussing our brain's abstraction, which may refer to a real thing, but is an abstraction nonetheless. As a result.. ..why shouldn't "a piece of (equally nebulously defined) 'information' such as a computer 'bit'" be allowed as a 'thing'?
If it's "because information is always *about* things, and so presupposes material things", would you allow numbers such status.. ..as presupposing material things? What criteria can we use to distinguish abstract things that refer to real things from those that don't?
This seems rather relevant to this discussion:
You can always pick a perspective that allows you to say that two things are different. Likewise, you can pick a perspective that allows you to say that two things are the same. When someone says that x=y, really they're saying something about "=", i.e. one thing that they have in common, rather than x or y. God and Alternate Universes belong to the same set, of non-verifiable entities. That's all that was being said.
It's like someone said: "two plus two equals four." and you just said, "No, that's wrong. Two plus two is a combination of numbers, and four is only one."
> It's like someone said: "two plus two equals four." and you just said, "No, that's wrong. Two plus two is a combination of numbers, and four is only one."
Wicamoi and Gordon have both expressed a double-edged razor. I am not saying anything is "wrong", but I am asking whether, in the form of the razor "no more than necessary, no fewer than sufficient", x (the number necessary) exceeds y (the number sufficient), or whether x always = y. And the allied question of whether this double-edged form is an empty semantic contrast or a valuable one, and if the latter, how, if x always = y?
I'm not sure why you think this has to do with god. I only mentioned god because, if it really is just a rule of thumb, and lacks even necessary probabilistic validity, as suggested by Dave G, then the appeal to the authority of the razor by Coel against the possibility of god would appear to be rather an empty one.
That Occam's Razor works as a rule of thumb doesn't say anything about the universe - except that humans aren't generally prescient about it.
> (or should that be 'a multiverse'? … they can't answer that, because it's a nonsense to start with).
Nope, sorry, it is not nonsense. There are very good and sound scientific reasons for pointing to a multiverse. And it is actually very parsimonious under Occam's razor, more so than the default alternative. Anyone not understanding that doesn't understand multiverse proposals as used in modern cosmology.
I disagree.. ..if it works, then it does say something about the universe or at least our abstraction of it, albeit not very specific. If it doesn't work.. ..bin it.
What's Occam's razor got to do with astrology?
All forms of Occam are doubled-edged. E.g. "no more than necessary" => you do need those that are necessary. Or Einstein's "as simple as possible but no simpler". Or putting it another way, explain the *data*, but don't go beyond the data, inventing things for other reasons. It is a good statement of empiricism!
The recent use of the razor that I made was this. People like Tim point to their inner contemplative experiences as evidence for God (he interprets the experiences as a personal interaction with God). If, however, you can satisfactorily explain the experiences as the sort of inner-conversation that all human brains generate, without invoking gods, then you should do so. I stand by this as a valid argument. Not an absolute knock-down argument, admittedly, but it very much puts the burden of proof on those suggesting that the brain-alone explanation is insufficient.
Occam says reject all of astrology as unsupported by any evidence.
> I disagree.. ..if it works, then it does say something about the universe or at least our abstraction of it, albeit not very specific. If it doesn't work.. ..bin it.
OK, maybe it just says something about the way we solve problems. We have found (since the Enlightenment anyway) that it's more productive to work outwards from a simple precept based on the observable phenomena rather than inwards from a chaotic fantasy. Works for me anyway!
Not sure there is any Big Philosophy involved, other than it's more useful to infer the existence of things from positive evidence of their effects than to attempt to eliminate the impossible from an infinity of the implausible.
I wasn't referring to your double-edged razor metaphor. I was commenting on this exchange:
> PS. to above. If someone says in answer to my quip above re multiverses or a multiverse?, "there is (simply) "a multiverse"'', then I would argue that we are left with a proposition that has exactly the same beyond-reason status as 'there is a "god"''. Because the very embarrassing truth about both such statements is that they could easily be wrong, or just as easily wrong as right. Ya? (speaking from Konigsberg, or possibly, Edinburgh, or even Cambridge.)
> Well yes. They could be wrong, but the status of the two is not exactly the same is it?.. ..as, the complexity of multiverses is a complexity that "becomes" as a result of the expression of more basic elements purported to exist (strings)?
Gordon pointed out that both God and the Multiverse enjoy the same, non-verifiable, status. Your reply suggests you've misunderstood this, for a reason I won't speculate on.
Except that they don't. Gods are quite capable of performing miracles that would verify their existence. All you'd need to do is, for example, spot the thunderbolts coming down from the heavens zapping everyone who has gay sex. The fact that claims of gods have not been verified is not the same as them being un-verifiable.
As for the multiverse, there are several ways in which multiverses can be verified observationally. Indeed, multiverse models have already generated some predictions that have then been verified.
But HIV was so much more imaginative, don't you think?
> PS. to above. If someone says in answer to my quip above re multiverses or a multiverse?, "there is (simply) "a multiverse"'', then I would argue that we are left with a proposition that has exactly the same beyond-reason status as 'there is a "god"''. Because the very embarrassing truth about both such statements is that they could easily be wrong, or just as easily wrong as right. Ya? (speaking from Konigsberg, or possibly, Edinburgh, or even Cambridge.)
> Well yes. They could be wrong, but the status of the two is not exactly the same is it?.. ..as, the complexity of multiverses is a complexity that "becomes" as a result of the expression of more basic elements purported to exist (strings)?
> Gordon pointed out that Both God and the Multiverse enjoy the same, non-verifiable, status. your reply suggests you've misunderstood this for a reason I won't speculate on
I really don't know why I bother.. ..you're off on one again! I agree with Gordon ("well yes") at a certain level, but then suggest a reason why the status of the two is not necessarily the same.. ..relevant because there is still persistent disagreement between Coel and Gordon. And yet you decide to assert that I've misunderstood it, because I suggest that multiverses are different from god in that they can have technically very simple origins, which it is hard to see how god could. You really are tiresome.
Sorry that a huge workload this week prevents me from entering into the discussion (except perhaps in the depths of night, like last night!). The question about numbers is a v interesting one. They certainly seem to have a Platonic existence, yet they probably depend on the existence of a (material) universe (or multiverse, for arg's sake). It's hard to see how there could be numbers if there was literally nothing. But it still seems a bit of a mistake, too much of a metaphor, to call them things. They seem more like an abstract network in which the universe is somehow rooted, or an 'aspect' of it that comes, undetachably, 'with' it.
Bits, likewise, don't quite qualify as 'things' in the sense I'm arguing for. Again, it's more like they're intangible parts of things, that require a material substrate, or 'signals' of things. A very, very tricky one, and I don't pretend for a moment to have a fully or even remotely coherent argument about this. .. Yet :))
[Back to work]
I'd argue that numbers are not "things" and they don't "exist" (sorry, I don't count Platonic existence as being a sensible concept). However, the *concept* of a number is a "thing" and does exist, it exists as a pattern of material (e.g. as an idea in our brain, being a pattern of material in our brain).
This doesn't make sense to me. If, say, 6 "entities" are sufficient to describe or explain the evidence, then so are 7 or 8 or 9 and so on. The idea of the razor is to use the minimum number of "entities" which are sufficient (ie the ones which are actually necessary). So, in fact, for any explanation, y exceeds or is equal to x, but, if Occam is satisfied, y=x.
An example might be describing the nuclei of the elements as composites of protons and neutrons; it might be sufficient to treat the elements separately, with arbitrary properties, but it is far more parsimonious (and far more explanatorily powerful) to view them as composites of just two necessary particles.
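That nuclear example can be pushed a step further with the semi-empirical (liquid-drop) mass formula: five fitted constants reproduce the binding energies of medium and heavy nuclides to around one per cent, instead of one arbitrary number per nuclide. A sketch, using one common set of textbook coefficients (exact values vary between fits):

```python
# Coefficients in MeV from one common textbook fit of the semi-empirical
# mass formula; exact values vary between fits.
A_V, A_S, A_C, A_A, A_P = 15.75, 17.8, 0.711, 23.7, 11.18

def binding_energy(Z, A):
    """Liquid-drop binding energy (MeV): five parameters for every nuclide."""
    N = A - Z
    if Z % 2 == 0 and N % 2 == 0:
        pairing = A_P / A**0.5        # even-even nuclei are extra bound
    elif Z % 2 == 1 and N % 2 == 1:
        pairing = -A_P / A**0.5       # odd-odd nuclei are less bound
    else:
        pairing = 0.0
    return (A_V * A                            # volume term
            - A_S * A**(2 / 3)                 # surface term
            - A_C * Z * (Z - 1) / A**(1 / 3)   # Coulomb repulsion
            - A_A * (A - 2 * Z)**2 / A         # symmetry term
            + pairing)

# Measured binding energies (MeV) for two well-known nuclides.
for Z, A, measured in [(26, 56, 492.3), (92, 238, 1801.7)]:
    model = binding_energy(Z, A)
    print(f"Z={Z:2d}, A={A:3d}: model {model:7.1f} MeV, measured {measured} MeV")
```

Both predictions land within about 1% of the measured values, which is the kind of economy the razor rewards: two constituents and five parameters instead of a separate arbitrary property for every element.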
It's not worth a hair-pulling match over. The way you worded your reply suggested you'd missed the point that Gordon made. After all, there are many ways in which the idea of a multiverse differs from that of God, but in the sense that their absolute existence is currently open to debate, their status is the same. So your "but they're not exactly the same" comment seemed to indicate disagreement on that score. The fact that their status differs in so many other ways is a point too trivial to warrant making.
So according to your reductionism, there is no number between 2 and 4, and there is nothing which is the product of 5 and 6, real numbers are really no realer than unreal numbers, etc. etc.
That seems to be a logical consequence of your reductionism. Which strikes me as a good reason for rejecting your reductionism. For the converse of Ockham's Razor is this principle: "Whatever theory does not admit the entities that we have to admit in order to make any sense at all, is to be rejected".
I wonder, though, why it is that you think numbers can't be patterns, if concepts can. That move, if you could make it, would save your position from a lot of bother. The concepts view you outline is in bother for all sorts of reasons, but here are two particularly obvious reasons. First, mathematical statements are about numbers, not about concepts of numbers. Secondly, if you take the concepts line on numbers, your position makes mathematics strikingly parallel to theology (on an atheist view of theology). It's all about statements of relationships between concepts of things that don't actually exist.
However, I suspect that for a strict reductionist, patterns ought to be as ontologically objectionable as numbers are. In which case the move to "concepts are patterns" is no more available to you than the move to "numbers are patterns".
The general moral is that extreme reductionism can only be made coherent by cheating.
> PS2. E's Razor of course contains both Occam's and his razor, so we're left with a very useful double warning about neither overcomplicating or oversimplifying nature with our hypotheses. Two sides of the same rational coin really.
I think Einstein was joking though, wasn't he? As I recall (and now I'll have to look it up when I have time), he was merely, and rather kindly, pointing out that someone was being a bit dim. 'Einstein's Razor' is meant ironically as an explanation of the bleedin' obvious.
An explanation with too few elements isn't just a sub-optimally simplistic one, it's wrong. It's not just unlikely to be right, it definitely isn't right. You don't need anyone's Razor to point out that you have failed to account for all the observed facts.
The concept "number" can be one attribute of a pattern, yes. A group of four apples is a pattern with the attribute "4", but that's different from that group *being* the abstract Platonic number "4".
How do you know that? How do you know that maths is about actual-Platonic-existing entities called "numbers" (or whatever other form of existence you want to postulate), rather than being about the concepts that we label "4", "5", etc.?
> It's all about statements of relationships between concepts of things that don't actually exist.
Not at all, it's about attributes of *patterns*, descriptions about patterns, and those patterns are patterns of material. Both the material and the patterns of material do exist.
The general moral is that reductionism works just fine.
I don't have time to get embroiled in all this. But you've just said that numbers don't exist. From which it plainly follows that there exists no number in between 2 and 4.
You said it. If that's not an unpalatable consequence I don't know what is.
Not as standalone Platonic entities, no. Have you ever stumbled across a standalone Platonic "4"? If so, can you take a piccy of it on your iPhone? It'd be interesting to see what it looks like.
*Concepts* do indeed exist. And concepts about patterns, and concepts about the attributes of and descriptions of patterns, do indeed exist. Those "numbers" are concepts about patterns. The concept of a number "3" as an attribute of a pattern does indeed exist.
I don't see anything unpalatable about my view of this. I like it. It's a heck of a lot more sensible than applying Platonic essentialism to abstract numbers.
If you want to know why mathematics is about numbers, not about concepts of numbers, I suggest you read Frege, who refuted the conceptual/ psychologistic view of them 125 years ago. (This isn't my view, it's a standard view in philosophy of mathematics.)
Lots of "standard views" in philosophy are wrong.
Since thousands of philosophers of mathematics have devoted their considerable ingenuity to this question for lifetimes of research, you might want to look at what they say rather than dismissing it in advance.
I don't claim, incidentally, that they all agree with Frege, though certainly Frege's view is one standard one.
Does philosophy basically consist of pointing to writers from at least 100 years ago and saying "Look, Bloggs said this then. You're wrong"? It's all you seem to do in these sorts of argument.
I've not given much thought to whether numbers are things, but Coel is doing a much better job of convincing me of his position than you are by pointing at obscure philosophers. I am sure there is another one Coel could point at that said the opposite.
> conceptual/ psychologistic view of them 125 years ago.
Are you sure about this? Having just googled this I get as far as:
"In what has come to be regarded as a seminal treatise, Die Grundlagen der Arithmetik (1884), Frege began work on the idea of deriving some of the basic principles of arithmetic from what he thought were more fundamental logical principles and logical concepts. Philosophers today still find that work insightful. The leading idea is that a statement of number, such as ‘There are eight planets’ and ‘There are two authors of Principia Mathematica’, is really a statement about a concept."
Which seems to me exactly what I was saying. (Admittedly I've not done more than a quick google yet.)
Keep reading then...
The point is to discuss these questions properly, without just ignoring the serious philosophical debates that there have been about them, and without taking a dismissive prejudice against philosophy as an excuse for blithely reinventing the wheel.
Certainly am. Just got to this (same link as above):
" Frege thereby identified the number 0 as the class of all concepts under which nothing falls, ... Essentially, Frege identified the number 1 as the class of all concepts which satisfy Condition (1). And so forth."
In other words he's defining numbers as concepts. Which, as I said, is exactly what I was saying.
I entirely agree.
Which I am not doing. You're the one who tends to argue by declaring yourself right.
I never said there was anything original in what I said about numbers; I'd expect that most physicists who have thought about it would opt for something along the lines of what I'm saying. After all, there is a remarkable lack of iPhone piccies of abstract Platonic essentialist "4"s.
And, if you prefer people to be a bit less dismissive about philosophers, then you could avoid acting in ways that reinforce such attitudes.
Hey Tim, we can't all read the breadth of philosophy as it pertains to all subjects of interest. Is it not okay to tackle philosophical ideas as you encounter them.. ..even if it means ground that has been worn before? Personally, I find it refreshing when I think through a problem, and find it has been discussed before. However, because you've thought it through yourself, and started from ground that is almost certainly going to be different from other philosophers', the perspectives are never quite the same. And, perhaps this is a silly view, but I look at philosophy as something that you do, rather than as something that has been done. So anyway, why "numbers" and not "concepts that we label as numbers"? Referring back to your chair discussion, once you have the chair concept, chairs can be counted.. ..once you define the points of a square as the number 4, that concept can be recognised in other places, can it not, such as the sides of a square?
...Ah, I see what the confusion is here. This is what comes of haste...
You were defining numbers as concepts meaning psychological entities, apparently. That's what I was suggesting needed more careful consideration.
It is in that sense that Frege denies that numbers are concepts. His talk of concept and object that you're quoting now is talk about abstracta. When Frege talks of concept and object (Begriff and Gegenstand) he precisely does NOT mean that either is a psychological entity.
First, I would regard the "patterns" as real attributes of nature, in the sense that the patterns of material are independent of us.
Second, where else can a "concept" exist other than in a mind? (Oxford dictionaries: "concept" in philosophy: "an idea or mental image which corresponds to some distinct entity or class of entities, or to its essential features, or determines the application of a term (especially a predicate), and thus plays a part in the use of reason or language".)
I would assert that we have mental concepts about what are really existing patterns in nature. What we call "numbers" are one example of our mental concepts about what are really existing patterns in nature. What is the argument against that stance?
>What we call "numbers" are one example of our mental concepts about what are really existing patterns in nature.
Nothing wrong with saying *that*, in fact with some caution I might say it myself. Provided "concept" is not understood psychologistically (see above), in a way that confuses the activity of thinking with the object of thought.
I'm less clear that *you* can say it (without cheating). For I think you claim to be a physicalist, someone who believes that nothing exists except what's physical. But patterns are abstract objects, as are numbers. So if you do say that, it immediately commits you to the existence of abstract objects. But abstract objects are not physical, and not mental either. So this is a very quick refutation of the thesis that everything that exists is physical.
Going back to Ockham's Razor, the principle that "entities are not to be multiplied beyond necessity". (As it's usually expressed. As I was pointing out in undergraduate lectures on Ockham in Oxford 20 years ago, these aren't Ockham's own words so far as we know, though Ockham did say some similar things, e.g. frustra fit per multa quod fit per pauciora.) On this subject of ontological parsimony, and on how it relates first to patterns and then to properties and then to moral properties, I do have something written, accessible from my OU webpage under the title "Moral perception". (I would put in the url but the UKC software is objecting. My page googles pretty easily.)
In brief, the moral is that even a very parsimonious ontology turns out to be pretty prolific.
I hope it's of interest to Catriona and other readers of this thread...
Having a quick squiz at your OU moral perception thing, but I seem to have become stuck right at the beginning with the sixteen-dot matrix. If we imagine that a 16-dot matrix might objectively exist, does a pattern that is perceived, such as 4x4 dot squares, really "exist" apart from our (or another animal's) ability to perceive it? Or is a pattern a necessary projection of the mind's pre-existing concepts upon the matrix? How can this be decided?
Yes, (a) because the pattern would be there whether we perceived it or not, and (b) because for us to perceive it, it has to be there to be perceived.
Just as there's an ineliminable distinction between an act of thinking and an object of thought, so likewise there's an ineliminable distinction between an act of perceiving and an object of perception.
Thanks for your interest!
Tim, when I'm thinking about the ideas for a novel I want to write (e.g about a fictional character doing something outlandish), what is the 'object' of my thought?
Patterns are not "objects" in their own right, they are, well, "patterns" of physical stuff. If I'm allowed to invoke physical stuff then surely I'm allowed to invoke particular arrangements of that stuff. For example if I collect 6 protons and 6 neutrons together then I have a carbon nucleus, being a pattern of more basic particles (the protons and neutrons themselves then being patterns of yet more basic particles). Thus, more or less everything "about" physical stuff is really about patterns of physical stuff.
All I need is that it be meaningful to talk about distances between physical particles, particular arrangements of those particles, and interactions between those particles. The "physicalist" surely must be allowed those things. Indeed, the *only* meaning of the properties of a particle is essentially how it interacts with other particles.
I would say that patterns of material are indeed "physical objects" (rather than being abstract ones). For example, take the example of the "chair" discussed up-thread. Clearly a "chair" is a particular pattern or arrangement of physical material. Is the chair then a "physical object", or is it an "abstract object"? I'd say the former. Certainly in saying that "everything that exists is physical" the physicalist means to include such patterns of physical stuff.
Presumably only if you also invoke space. Is space physical stuff or something else?
I'd say it is physical stuff. At least "space" is intimately connected with time and with physical particles and all the rest of physics, so it counts as "physical stuff".
Patterns are realised in physical stuff. That's precisely why they aren't physical stuff themselves. They're abstract. It's because they're abstract that it is possible for them to be realised.
>If I'm allowed to invoke physical stuff then surely I'm allowed to invoke particular arrangements of that stuff.
Not if you believe that nothing exists except the stuff. As you profess to.
>Thus, more or less everything "about" physical stuff is really about patterns of physical stuff.
Yes, and that's what's wrong with physicalism. Take physical stuff all alone, and aside from the patterns that can be imposed on it when it realises this or that pattern, and all you have is shapeless nothingness.
That depends. For now you're talking about relations. And relations are abstract entities too. If the physicalist says that nothing exists except stuff, and then adds in a whisper "and patterns and relations of stuff", then the physicalist is cheating. He's contradicting himself.
I have a book-length development of this argument: Reading Plato's Theaetetus...
PS A little background reading:
Nothing exists except physical stuff, and the interactions between that physical stuff. Yet, the only characterisation of "physical stuff" is how it interacts with other physical stuff. Thus for "physical stuff exists" to mean anything at all it must entail the interaction with other physical stuff. And those interactions are all that "patterns" actually are.
Since "patterns of physical stuff" is entailed in the very concept of "physical stuff" I'm allowed to invoke patterns while still maintaining that "physical stuff" is the only stuff that exists.
> is cheating. He's contradicting himself.
Not really, he's just using a different definition of "physicalism" than you, that differs on what is entailed in "physicalism".
On the subject of Occam and the approach in your article mentioned above. Surely it is way more parsimonious to say "there exist particles, which have ways of interacting; all patterns are products of that" than to declare vast numbers of "patterns" as existing as "abstract-object" entities in their own right. The latter is very ontologically extravagant.
Would it? The matrix would, but would the other patterns? I don't think so. Also, why do patterns within the matrix come in and out of focus if it isn't in fact because the pattern is an application of a pre-existing concept?
It seems more of an illusion to see 4x4squares in your lattice, than the appreciation of something existent.
See.. ..I find this difficult because the 6 protons, 6 neutrons and 6 electrons that make up a carbon atom have a very different ontology to 6x protons, 6x neutrons and 6x electrons, for example 1) because of the gluons that transmit the residual strong force between protons and neutrons (so the neutrons and protons cease to be quite the discrete entities we think of in this new context) and 2) because the physico-chemical characteristics of a carbon atom are very different from what might be envisaged by a particular arrangement of 6x protons, 6x neutrons and 6x electrons in a certain space.
Agreed, just like the ontology of a cat is very different from the count of hydrogen, oxygen, nitrogen etc atoms that compose it. I still argue, though, that the best way of understanding this is in terms of particles and their interactions and the patterns those interactions give rise to.
Ontologically there are particles; all else is patterns of those particles.
"Ontologically there are particles"? What's "ontologically" doing there? Looks like a big word to hide a big problem to me.
If you can say "ontologically there are particles", or (better) "there are particles", you can equally say "ontologically there are patterns"--or (better) "there are patterns".
If you want parsimony, then you shouldn't believe in all those atoms, should you? Billions and trillions of the blighters. Most extravagant.
If you really want parsimony then Parmenides is your man: he said there's just one thing, TO EON ("that which is"). And he had arguments to prove it. Now that's what I call theoretical parsimony.
All of which is a joshy way of reiterating what I said before: there is no theory-independent account of what parsimony *is*, and that's why there's only so much we can do with considerations of parsimony without begging the question in favour of our own theory.
The best way of understanding a cat is by understanding the interactions of its constituent subatomic particles? Or am I misunderstanding you?
>he's just using a different definition of "physicalism"
What's your definition of "physical", then? (Start with that, not with "physicalism"; it's more basic.)
Mine is: "physical" existents are whatever existents are the characteristic study of physics.
One thing to like about this definition of "physical" is that it immediately implies the falsity of physicalism, in the sense of the thesis that nothing except the physical exists. For mathematical objects and relations, logical objects and relations, zoological taxa, evolutionary processes, bus timetables, economic statistics, and the works of Shakespeare, and indefinitely many other things too, are none of them among the things that physics characteristically studies.
Nor, come to that, are physicists and physics.
Those ideas. Possibilia imaginatively grasped. (Possibilia, possible things, are abstract objects too; imaginative grasp is a mode of your psychology.)
There's a big literature on fictional objects too; this is me shooting from the hip about it :)
For a more considered view:
You're just repeating yourself here. You could, instead, say what's wrong with my objection to it.
Also, entailments are relations, and relations are abstract objects.
"the best way of understanding this is in terms of particles and their interactions and the patterns those interactions give rise to."
It wasn't clear what "this" referred to. But if the cat, then actually I don't disagree with Coel about *this* claim: the best way of understanding a cat is certainly to see it in its place in nature, and that understanding can go all the way down to subatomic particles and all the way up to the cat's intentional psychology. It's just that (1) in normal practice the intentional psychology is all we're likely to need to understand the cat ("Puss is scratching at the window because she wants to go out", etc) and (2) I think that adding all these layers of understanding is adding layers of non-physical understanding (see my definition of "physical") and also adding loads of ontology.
Another problem for Coel here, parallel to the earlier problem that I noted about mathematical statements.
Most people, including most mathematicians, want to say things like these:
"there is an integer between 2 and 4" (namely 3)
"something is the product of 5 and 6" (namely 30),
and so on. Coel says that numbers don't exist; so Coel is committed to saying that all such statements are false. ACCORDING TO COEL THE WHOLE OF MATHEMATICS IS FALSE! Sorry to be shouty, but that's quite a result. I maintain that this is an intolerable consequence of his physicalism.
Likewise with patterns. I say of these dots
X X X
X X X
X X X
that there is a pattern (in fact there are numerous patterns) that they form. I imagine most people would say this too. But Coel can't say this, because according to him patterns don't exist either.
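(A throwaway illustration of just how numerous, in Python, purely for fun: this sketch counts only one family of patterns in that grid of dots, the straight lines through three of them, and already finds eight.)

```python
# Illustrative sketch: count one family of patterns in the 3x3 grid of dots,
# namely the straight lines that pass through three of them.
from itertools import combinations

dots = [(x, y) for x in range(3) for y in range(3)]

def collinear(a, b, c):
    # Zero cross product => the three dots lie on one straight line.
    return (b[0] - a[0]) * (c[1] - a[1]) == (b[1] - a[1]) * (c[0] - a[0])

lines = [t for t in combinations(dots, 3) if collinear(*t)]
print(len(lines))  # 8: three rows, three columns, two diagonals
```

And that's before we count squares, crosses, L-shapes and the rest.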
I think this consequence of his physicalism is intolerable too. And intolerable in a way that relates directly to Ockham's Razor. Because like I said before, the Razor has a mirror-principle on the other side. (Call it Thomas's Mirror if you like, since Aquinas was rather good at not having a too-parsimonious ontology.)
Thomas's Mirror, then, is the principle that ontology should not be so parsimonious that it can't do all the explaining that we need our theory to do. I think Coel's ontology falls foul of Thomas's Mirror right here.
Of course, that can't be an absolute refutation of Coel's physicalism, because I've been arguing myself that Ockham's Razor and related principles about theory-construction have little or no theory-neutral application. However, I think this application of Thomas's Mirror is closely enough related to common sense to be a serious problem for Coel's physicalism.
> trillions of the blighters. Most extravagant.
First, billions and trillions of copies of the same thing are indeed parsimonious (cf. up-thread discussions of Occam's razor in terms of information content): describe one and say "repeat". Second, and again as up-thread, one's model should not be *too* simple to explain reality.
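The information-content point can be made vivid with a toy sketch (Python, purely illustrative): a description of many identical copies is barely longer than a description of one copy, whereas the same quantity of genuinely varied stuff costs you a description apiece.

```python
# Illustrative sketch: repetition is cheap to describe, variety is not.
import random
import zlib

random.seed(0)
copies = b"atom" * 10_000  # 40,000 bytes of identical "blighters"
varied = bytes(random.randrange(256) for _ in range(40_000))  # 40,000 varied bytes

print(len(zlib.compress(copies)))  # a few dozen bytes: "one atom, repeated"
print(len(zlib.compress(varied)))  # roughly all 40,000 bytes: no shorter description
```

So trillions of identical atoms add next to nothing to the length of the theory's description; the razor cuts against varied entities, not numerous ones.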
You're welcome to present that proof, but I doubt if it holds.
> subatomic particles? Or am I misunderstanding you?
No, the best account of ontology is in terms of physical particles and their interactions.
> existents are the characteristic study of physics.
I'd find that tricky to define, since I don't know of anything that I would not put in the class "physical".
I would say that, as far as I'm aware, there are physical particles, and the nature of physical particles is to interact with each other. That produces patterns of physical particles. As far as I'm aware that accounts for everything that exists.
I disagree that patterns are "abstract objects" and I disagree that relations are "abstract objects".
Of course we're in danger of semantic miscommunication here. Since you introduced the term "abstract object", what does it mean? What is an "abstract object"?
> "something is the product of 5 and 6" (namely 30),
> and so on. Coel says that numbers don't exist; so Coel is committed to saying that all such statements are false.
It's very kind of you to tell me what I think, but nope. According to me "numbers" do exist, as concepts about patterns, specifically concepts about regularities in patterns of physical stuff. Therefore the above statements are true, as I see it.
> I maintain that this is an intolerable consequence of his physicalism.
And I maintain that it is not *my* physicalism that you are addressing, but your strawman version of it. *Of* *course* physicalism includes the interactions between physical particles and the patterns that those interactions give rise to.
Ladies and gentlemen, the show is over; that's your question begged right there.
It's not begging any question. It is a "monist" stance that, as far as I'm aware, everything in the universe can be explained as being the sort of stuff that physicists study (to use your definition), together with the physical interactions of that physical stuff and the patterns that that leads to.
From there the burden of proof is on anyone wanting to argue for substance dualism, or for some other way in which the above is not true.
> Ontologically there are particles; all else is patterns of those particles.
It's not as simple as that though. I mean, take the carbon atom again:
6N+6P+6e isn't quite that in the context of a C atom, is it? Rather, the protons and neutrons do something different, which makes them effectively modified (N' and P') in this context, so that carbon involves 6N'6P' in its nucleus, which isn't a pattern of 6N+6P, but rather something different though related. That doesn't mean we need a non-physical account, but just that it's not quite coherent enough to say that it's all particles and patterns thereof.
I don't believe those discussing cats can ever have lived with one. The idea that they are explicable, in any way, is most peculiar.
> (N' and P') in this context so that Carbon involves 6N'6P' in its nucleus, which isn't a
> pattern of 6N+6P, but rather something different though related.
Suppose we had a perfect computer model of a proton and the same for a neutron. Suppose we took nothing but that model, and computed the combined effect of six of each together in the same strong-force potential well. The result would surely be a C12 nucleus. If that is true then I'd disagree that we have 6N'6P', but simply 6N6P.
When I 1st started seeing my now wife she was living in a flat with her landlord who had 2 cats. One cat was jet black and fiercely independent / solitary and out most of the time, but would come and say hello when I had a ciggy in the back garden. The other was a sycophantic tabby that wouldn't stop snuggling and got on my nerves. Both were well trained and never had accidents in the flat. Having stayed over one night, while out the next day the bed got well and truly pissed on, especially the pillow. We presumed it must be the tabby, but after days of this happening we caught the black cat in the act. Fergus the cat had forced an ultimatum... ...me or the cat.
OK, here goes. I deny that "patterns" are "abstract objects". The abstract pattern-ness of a pattern of material is not an "object". The "object-ness" comes from the material stuff. The pattern of a compound object (an arrangement of physical stuff) is an attribute of that compound object, and that is a consequence of the attributes of the constituent particles.
However pattern is not an "object", it does not have "existence" in some Platonic essentialist manner, or any other manner other than being an arrangement of material stuff.
But we don't have close to a "perfect" model even with respect to 2-nucleon interactions, let alone 3+ and the associated many-body problem. These particles you then concede are not really particles, which raises the question: are there ever particles that are truly distinct? What is the basic thing that there can actually be a pattern of? Is it patterns of patterns all the way down?
True, but that is simply a limitation of our knowledge, it isn't a refutation of the idea that higher-order patterns are the products of component-particle interactions.
They can be compound particles, made up of more basic particles.
As above, the only attributes of a particle are really statements about how it interacts with other particles.
We don't really know. It could be. Or there could be genuinely elemental particles.
Well I disagree. If the behaviour of larger-scale objects cannot be explained by basic elements then there is a disjunction that is problematic for that idea, especially given that such models seem to require multiple experiment-specific adjustable parameters to be made to work. Will nucleon-nucleon interactions be determined by quark-quark interactions in the nuclear context? Of course, this doesn't mean that a non-physicalist approach is necessary, but it drives a wedge between noting, on the one hand, the apparent units that make up larger objects, and, on the other, explaining the behaviour of larger objects on the basis of the natural inclination of those more basic units.
Do you mean if large-scale objects cannot be explained in terms of the interactions of component parts even in principle? Or do you mean simply that our current knowledge is not good enough to do it?
The latter is indeed true, in most instances, but it alone doesn't imply the former.
What principle? I don't think there is such a principle, but there is induction and the hope that where it's possible somewhere, then it's possible everywhere. I think that where larger-scale objects can be shown to consist of smaller-scale components, there is no *necessary* reason to assume that the behaviour of larger-scale objects is explicable according to the natural behaviour of smaller-scale components.
The question is, if we did have a perfect model of elementary particles and their interactions, and a perfect ability to compute the consequences of those interactions, would that be sufficient to then predict the behaviour of larger, compound objects? I assert that we have no reason to doubt that.
Occam's razor would suggest accepting that it would be sufficient, until such time as it is shown to be insufficient.
Well, if by 'compound object' we mean e.g. a living organism, or even an organ of an organism, then all its properties cannot be explained simply by the properties of its particles and their interactions. The way an organ is organised (on a much larger scale than that of its particles) plays a crucial part. The properties of the particles go a long way towards explaining it, but by no means the whole way … and it's this 'extra' level of qualities and capacities that is surely the most interesting question. As one of the greatest minds ever said, 'the whole is other than the sum of its parts' (Note, not simply the oft and wrongly quoted 'more than')
As noted above, we have to be v careful about taking O's R too far.
Do you have an example of something that can't be explained by the particles and interactions (at least in principle)?
You can look at the particles in any organism and have no clue what part they play in it (or even what organism or even non-living thing they reside in). Only at the level of particles do they have any explanatory power. They don't explain the existence of an organ or organism, and its higher level features and properties. Or only in part. What they do explain is v interesting, but what they don't explain is also v interesting.
Won't be able to speak more on this for c. another day.
I don't think that's right - particles (and their properties etc) can explain everything, it just might be more convenient to take a more zoomed-out view. Much like 1:2500 maps could tell you the shape of the UK, but it is much more convenient to see it on a small-scale map from which the entire shape is readily apparent.
No, particles are not like large-scale maps, but like the half-tone ink dots or pixels that make up the printed map.
Well OK, same applies.
> Well, if by 'compound object' we mean e.g. a living organism, or even an organ of an organism, then all its properties cannot be explained simply by the properties of its particles and their interactions.
I think they can. For example, in a protein the primary structure is explained by the arrangement of the atoms in the amino acids, the secondary structure is explained by the primary structure, the tertiary and above by the secondary, which explains its function in the cell.
Yes, it's all rather complicated when trying to map protein interactions, and how that is involved with signal propagation and responses to stimuli, but so far the evidence is supporting this theory.
Do you have some examples of organs whose properties can't be explained by their structure?
And yes, it can be more than/ greater than/ other than the sum of its parts. A cake is definitely tastier than some flour, eggs, sugar and water plus some hot air, but I don't need to invoke any other stuff to explain it.
OK, many people follow your line - that it can all be explained bottom-up; whereas others (inc. myself) think that it can only be explained by a picture in which nature works top-down as well, i.e. in both directions at once. Huge amounts of evidence, even at the cell level, that this is the case. And huge subject - can't unfortunately get into any major discussion (or any discussion at all, actually, right now because working to a deadline at moment). Assume you have read some of the v large numbers of papers/books on this subject? [Said v large to avoid saying 'huge' yet again :)]
All this says is that *if* we have a perfect model then we would have a perfect model and therefore we could predict using that model perfectly, etc. Well, that's a truism hooked on a theoretical *if*. However, it's the *if* that is crucial here, because such an *if* doesn't automatically resolve the apparent boundaries that we are currently experiencing. Yes, there are inductive reasons to believe that those boundaries are simply a function of insufficient knowledge (and indeed limitations in technology in terms of computational methodology and power, which are very relevant to this area). However, such resistance to the explanatory power of reductive models *is* a reason to doubt that modelling smaller-scale behaviour is sufficient to yield accurate models of larger-scale behaviour. And, as Freeman Dyson put it, echoing Gödel, "in mathematics, the whole is always greater than the sum of the parts".
Only if a) you believe the razor is applicable here and b) you're selective about your evidence
> Huge amounts of evidence, even at the cell level, that this is the case.
> we could predict using that model perfectly etc.
Agreed. But there is a big difference between our knowledge and abilities being limited, and those limitations being fundamental aspects of nature.
That's not what I said. We have inductive reasons to hope that our knowledge and abilities are not limited, but we do not know that such limitations are the reason for our experiential difficulties in resolving larger scale behaviour from small. Do you believe there are no fundamental limits in nature, and do you believe those mathematical limitations are irrelevant to nature?
Yes, there are limitations in nature, but what I'm asserting here is a view that nature can be explained bottom-up. That means, if you made a perfect emulation of basic particles, and allowed them to interact according to their natures, then that emulation would reproduce the higher-level behaviour.
The fact that we are incapable of constructing that emulation doesn't negate that idea.
> The fact that we are incapable of constructing that emulation doesn't negate that idea.
But that's a flat contradiction: that "nature can be explained bottom-up" and "the fact that we are incapable of constructing that emulation". And seeing as you believe there are fundamental limitations in nature.. ..where in nature do you think these are? Do you think that mathematical examples of such have no real world relevance?
There's no contradiction. The idea that "nature can be explained" is not negated by the fact that *we* cannot explain something, given our limitations.
All over the place. One example is that information transfer seems to be limited by the speed of light. Another example is that information can be lost, and thus it can be impossible to know everything about past events.
> OK, many people follow your line - that it can all be explained bottom-up; whereas others (inc. myself) think that it can only be explained by a picture in which nature works top-down as well, i.e. in both directions at once. Huge amounts of evidence, even at the cell level, that this is the case.
My world view is that the characteristics of a system can be explained by the interactions of the things that it consists of. This seems to have worked very well in the physical sciences, and now that the technique is being applied to the biological sciences, it is also working well, but due to the inherent complexity, our understanding is not as mature.
I personally don't believe the top-down view offers much in the way of explanation, mainly because, boiled down, it offers no more than just stating 'because it is' or 'because they do', e.g. Why is the sky blue? Why do mammals share similar characteristics? How come DNA seems to be present in most living things? Sure, it gives us description and correlations, but not too much in the way of explanation.
Sure, at the moment we use a mixture of both (in physics the one that springs to mind is the semi-empirical mass formula for nuclear masses), but I believe that is mainly because we haven't got a good enough understanding of these systems yet to get rid of the descriptive part and replace it with an explanatory part.
My masters was in molecular biophysics, so I have read a little, and have some hands-on experience in the area. Enough, anyway, to let me know how complicated it can all get, certainly at the level of the protein-protein interaction web/matrix.
> But that's a flat contradiction: that "nature can be explained bottom-up" and "the fact that we are incapable of constructing that emulation".
There is no contradiction. My younger brother at age 5 didn't understand calculus, but that doesn't mean a) calculus is wrong or b) that he doesn't understand it now.
And seeing as you believe there are fundamental limitations in nature.. ..where in nature do you think these are? Do you think that mathematical examples of such have no real world relevance?
Heisenberg's uncertainty principle springs to mind; the probabilistic nature of QM. The latter does have real-world examples, e.g. the minimum size of transistors before electrons start tunnelling between wires.
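To make the transistor point concrete, here's a back-of-envelope sketch (my own illustration, not from the comment above): applying the Heisenberg relation Δx·Δp ≥ ħ/2 to an electron confined to a 1 nm region, roughly the scale at which tunnelling becomes a design problem. The confinement length is an assumed round number.

```python
# Minimum momentum and velocity spread for an electron confined to 1 nm,
# from the Heisenberg relation dx * dp >= hbar / 2.
HBAR = 1.054571817e-34   # reduced Planck constant, J*s (CODATA)
M_E = 9.1093837015e-31   # electron mass, kg (CODATA)

dx = 1e-9                # confinement length: 1 nanometre (assumed)
dp_min = HBAR / (2 * dx)  # minimum momentum uncertainty, kg*m/s
dv_min = dp_min / M_E     # corresponding velocity uncertainty, m/s

print(f"dp_min = {dp_min:.3e} kg*m/s")
print(f"dv_min = {dv_min:.3e} m/s")
```

The velocity spread comes out at tens of kilometres per second, which is why confining electrons to ever-smaller structures eventually stops working: the uncertainty principle is a well-defined, quantitative limit, not a vague mystery.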
I must be being thick. "Nature can be explained" (who else does that other than *us*?). And "*we* cannot explain [an aspect of] nature". The first is a logical statement that is absolute, i.e. an exception defeats it. The second statement is an exception; therefore the statement "nature can be explained" cannot be absolutely true. Furthermore, if you regard those limitations in nature as absolute, not over-rideable, then they too must mean that the absolute statement "nature can be explained" is necessarily false.
See my answer above to Coel.
Humans in the future, other life elsewhere, maybe no one, but it could nonetheless be explained given sufficient insight.
Prior to Newton, we couldn't explain gravity, are you suggesting until then it was inexplicable?
No! Newton invented gravity.
On the particulars (e.g. gravity), we have good inductive reasons to hope that will be the case, but we certainly don't know it. On the general (nature), we need only an exception or the acceptance of engrained limitations for the statement "nature can be explained" to be logically false.
See above, but I'd add that I don't think gravity *has* been explained (full stop). Newton described behaviour according to gravity as an inverse square law; he described a regularity. Such a description is not quite an explanation because the causes thereof have not been clarified.
Just because something cannot be fully explained now does not imply that it cannot be explained in the future. Therefore there are things that *we* (as human beings in the year 2014) cannot explain now but may be able to explain as human knowledge and understanding expands.
It is entirely logical to believe that nature can be explained whilst at the same time realising that we cannot do it at the moment.
My analogy used one single mind developing and understanding more as it got older and comparing that to our knowledge as a whole.
Actually I think it is entirely *inductive* and not at all logical, vis-à-vis Gödel, Hawking and numerous commentators thereof.
Or, if you take Hawking on this:
"Because there is a law such as gravity, the universe can and will create itself from nothing."
Gravity is placed in the position of ultimate cause, from which the universe emerges "from nothing". Nothing invented gravity!
> Actually I think it is entirely *inductive* and not at all logical vis a vis Godel, Hawking and numerous commentators thereof.
Sorry, I must not be being clear. I mean logical in so much that the statement 'nature can be explained' AND (as in Boolean) 'we cannot explain nature at the moment' would equal 1.
I suppose the inductive bit would be 'human knowledge will increase in the future'. But moving away from an exercise in pure logic, this has been true in the past and is true now, so the evidence points towards it being true in the future.
Finally, limitations in nature do not imply non-understanding. As mentioned above, Heisenberg's uncertainty principle is a limitation, but it is well defined and bounded, and it can be used to make successful predictions about the world.
It's a "thought experiment" about what can *in* *principle* be done.
That's like saying that because rabbits can't flap their wings and fly, the claim "winged flight is possible" is falsified. Yet eagles manage it. You can't extrapolate from the limitations of humans or rabbits or grasshoppers to make a fundamental point about nature.
Which raises the question I asked before: "what principle"?
No, it's not like that, because the exception is part of the set "nature".
> That's like saying that because rabbits can't flap their wings and fly then the claim "winged flight is possible" is falsified. Yet eagles manage it. You can't extrapolate from limitations of humans or rabbits or grasshopers to make a fundamental point about nature.
Not quite; it's like saying 'there are no white cows'. To show this statement to be false, all you have to do is find a white cow, or in our case, an example of nature that cannot be explained.
I believe that we will be able to explain all of nature, and use Occam's razor in my induction: as time has moved forward, human knowledge has increased. I see no evidence that this is changing, nor any area in which a fundamental part of nature cannot be explained (as in some magic where physics goes 'nope, not for us'), therefore in the future we will know more, and in time explain all things.
Not sure what you mean?
Yes. That's what I meant by induction. However, it's one thing using inductive reasoning to expect the sun to come up every day. It's another thing to believe that a type of explanation (reductive) will always work to describe every aspect of nature, especially when there are mathematical reasons to believe that to be false.
Well, that's your example, which is well chosen not to be problematic. I was more thinking of Gödel's incompleteness theorem, and those physicists like Hawking, Freeman Dyson etc. who believe that it has definite implications for explanations of nature: Hawking's view that a TOE will never be possible, and Freeman Dyson's "in mathematics, the whole is always greater than the sum of the parts". I was also thinking about the problem of infinite recursion and the result that all reductive explanations in that context are *always* necessarily incomplete.
This does not follow. Just because some quantity increases for all time does not necessarily mean that it will grow arbitrarily large or reach some pre-defined limit.
A counter example from mathematics (where this sort of statement can be clearly defined) is this: the sequence 1 - 1/n (where n = 1, 2, 3, 4, ...) gets larger and larger as n grows. But it never exceeds 1.
Similarly, human knowledge may indeed continue to grow for all time, but that does not imply that there will come a point when we know everything.
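The counter-example above can be checked mechanically. A minimal sketch (mine, for illustration):

```python
# The sequence a_n = 1 - 1/n is strictly increasing, yet never exceeds 1:
# growing for all time does not imply reaching (let alone passing) a limit.
def a(n: int) -> float:
    return 1 - 1 / n

terms = [a(n) for n in range(1, 8)]   # 0, 1/2, 2/3, 3/4, 4/5, 5/6, 6/7
print(terms)

assert all(a(n) < a(n + 1) for n in range(1, 10_000))  # always growing
assert all(a(n) < 1 for n in range(1, 10_000))         # never reaches 1
```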
> Not sure what you mean?
0 AND 0 = 0
0 AND 1 = 0
1 AND 0 = 0
1 AND 1 = 1
Therefore both my statements can exist and be true.
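The table above, written out as a quick sanity check (my own sketch; the proposition names are just labels for the two statements in question):

```python
# The AND truth table from the comment:
for p in (0, 1):
    for q in (0, 1):
        print(f"{p} AND {q} = {p and q}")

# "nature can be explained" (p) and "we cannot yet explain it" (q)
# can both be true at once; only p AND NOT-p would be a contradiction.
p, q = True, True
assert p and q
assert not (p and not p)
```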
Really? Can't I abstract my inductive process from observing things to observing the *observing of things*? May I ask which mathematical reasons, as I must admit I am unfamiliar with them, or they are not jumping to mind now?
Yep, and it shows the point that limitations in nature do not imply lack of understanding/explanation in the absolute sense. Sure, there may be *an* example, but one example does not make it true in general.
Wouldn't 1 - 1/n get smaller and smaller?
But anyway, I take your point, partially: something can always increase but never reach a value. However, so far there have been no examples where humans have gone 'nope, can't explain that, that's all just magic and goblins, and will be so till the end of time'. And so because it hasn't happened yet, it probably isn't going to happen in the future (we all know about proving a negative though).
>> It's a "thought experiment" about what can *in* *principle* be done.
Asking that question is a different use of the word "principle". In principle mammals can evolve wings and fly. Rabbits can't fly. That doesn't invalidate the statement.
Can you point me to the dictionary definition you are referring to please?
"in principle": as a general idea or plan, although the details are not yet established:
-- "the government agreed in principle to a peace plan that included a ceasefire"
-- "The plan was accepted in principle but the details for it were not."
-- "The town council says it supports the plan in principle, but says there could be problems finding a suitable location."
> -- "the government agreed in principle to a peace plan that included a ceasefire"
> -- "The plan was accepted in principle but the details for it were not."
> -- "The town council says it supports the plan in principle, but says there could be problems finding a suitable location."
Okay, I understand, but that is to assert any idea you can think up as a real possibility. Just because you have the idea does not mean it is possible in this world.
Why is there something rather than nothing? Definitely Goblins.
Nope. 0, 1/2, 2/3, 3/4, 4/5, 5/6, 6/7, ...
I see your point: No thing has yet been shown to be completely unexplainable. And I sort of agree with it. In principle, perhaps we could model everything from fundamental interactions, if only we had enough computing power.
And yet I'm not so sure that this is really a meaningful statement. If you ask "why is ice slippy" then you can give an answer in terms of water molecules and forces and so on. Showing that some phenomenon is described by some more fundamental principle or law is a satisfying explanation for that phenomenon. But you can't answer "why are the laws of electro-magnetism as they are?" (unless they are part of some Theory of Everything, but then you can't say why the TOE is as it is). In fact I don't think the question even makes sense. What sort of reply would qualify as an answer?
We can model complex molecules from fundamental physics, at least in principle.
By which I mean that you can write down differential equations that the complex molecules obey, and that, given enough computing power, one could predict their future behaviour to any degree of accuracy.
Caveats: (1) we don't know for certain that the molecules really do obey these equations, but then we never do, do we? (2) If there's a big crunch, the computation might not finish in time.
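To illustrate what "write down the equations and integrate them" means in the simplest possible case, here is a toy sketch (my own, with made-up parameters): two atoms joined by a harmonic bond, integrated with velocity Verlet. Real molecular dynamics applies the same idea at vastly larger scale.

```python
# Toy molecular dynamics: two atoms on a line, joined by a harmonic bond,
# integrated with the velocity Verlet scheme. All units and parameters
# are made up for illustration.
def force(x1, x2, k=1.0, r0=1.0):
    """Harmonic bond: force on atom 1 (atom 2 feels the opposite)."""
    f = k * ((x2 - x1) - r0)   # Hooke's law about equilibrium length r0
    return f, -f

def verlet(x1, x2, v1, v2, dt=0.01, steps=1000, m=1.0):
    f1, f2 = force(x1, x2)
    for _ in range(steps):
        v1 += 0.5 * dt * f1 / m     # half-step velocities
        v2 += 0.5 * dt * f2 / m
        x1 += dt * v1               # full-step positions
        x2 += dt * v2
        f1, f2 = force(x1, x2)      # forces at the new positions
        v1 += 0.5 * dt * f1 / m     # second velocity half-step
        v2 += 0.5 * dt * f2 / m
    return x1, x2, v1, v2

# Start slightly stretched; the bond oscillates around length 1.0.
x1, x2, v1, v2 = verlet(0.0, 1.2, 0.0, 0.0)
print("bond length:", x2 - x1)
```

The caveats in the comment above apply with full force: for anything bigger than a toy, "given enough computing power" is doing an enormous amount of work.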
I'm not convinced. Complexity, chaos, Gödelian incompleteness, many-body problems, temporal/historical limits on knowledge, theoretical biasing, etc. all lead me to think that *in* *principle* we *cannot* predict the future behaviour of aspects of nature accurately.
There are big caveats though: we can do it "in principle", but we might not actually be able to do it because we can't work it out / don't have the computing power / don't have the time for the calculation to run, etc. It seems that the *can* in principle, given those and many other caveats, can just as easily be a *cannot* in principle. So the in-principle statement is really pretty empty, which is what I meant above in my answer to Coel.
OK, I should have also said that you would need to be able to measure the system very very precisely to get good enough initial conditions if the system happens to be chaotic. Is a complex molecule (on its own) a chaotic system? I don't know. But chaotic is not the same as hard.
As for the others, I'm not sure what you mean by "complexity". Gödel's incompleteness theorem has something to say about certain systems of axioms and whether they can be both consistent and complete -- what's that got to do with running a computer program to solve some PDEs?
The point I was trying to make is that solving the PDEs that govern these systems does not involve computing an uncomputable function (at least, I don't think it does); there are standard algorithms for solving PDEs which, given enough computing power, could get the job done. It's just that they would take a very, very long (but finite) time.
All computation requires resources. If I wrote an algorithm and was able to prove that it would terminate after 1 year and give you the answer you wanted, would that satisfy you? What about 10, 10^10 or 10^(10^10) years?