I have this annoying habit of actually wanting to read the source material. Following the recent post about Popper's notion of falsification being judged too harshly, I went and read a couple of sections from his Logic of Scientific Discovery. I don't have the time or inclination to read the whole thing, so I restricted myself to part 1, section 1, subsection 6 ("Falsifiability as a Criterion of Demarcation") and part 2, section 4 ("Falsifiability").
You should probably read the first post before proceeding with this one. In brief: while I don't subscribe to falsifiability as a good definition of a scientific theory, and actively support the notion that a theory can be proven true as well as false (see the links in the original post for details), I think it's too strong to say that theories are inherently unprovable. And I definitely don't think this idea has done "incalculable damage to science and human wellbeing", as the Aeon author claimed.
Even my very brief reading of Popper confirms this opinion. It's very clear that he had already covered the objections raised: that experimental error could be blamed, that a single contrary finding shouldn't be taken to refute a well-established law, and that unforeseen implications of a theory shouldn't be ignored.
Popper is hardly an eloquent author. He seems to have "drunk the Kool-Aid" when it comes to embracing philosophical jargon. For example, on the asymmetry of being able to falsify but not verify, he says:
My proposal is based upon an asymmetry between verifiability and falsifiability; an asymmetry which results from the logical form of universal statements. For these are never derivable from singular statements, but can be contradicted by singular statements.
He goes on at length about "singular" and "basic" statements, and I quickly got bored. I don't care if he's being especially rigorous - it's unreadable to the point of being unintelligible. Nevertheless, there are some passages which are abundantly clear. For example, when it comes to falsifications pointing to a scientific revolution, I think he raises a good point, albeit somewhat implicitly:
Whenever the ‘classical’ system of the day is threatened by the results of new experiments which might be interpreted as falsifications according to my point of view, the system will appear unshaken to the conventionalist. He will explain away the inconsistencies which may have arisen; perhaps by blaming our inadequate mastery of the system.
But the newly rising structure, the boldness of which we admire, is seen by the conventionalist as a monument to the ‘total collapse of science’, as Dingler puts it. In the eyes of the conventionalist one principle only can help us to select a system as the chosen one from among all other possible systems: it is the principle of selecting the simplest system — the simplest system of implicit definitions; which of course means in practice the ‘classical’ system of the day.
Falsification is necessary for progress. If you can't ever rule something out, you can't possibly make advancements. There is a real risk that you will forever keep making adjustments and never accept the plain truth in front of you, à la Steady State theory, or even begin to consider an alternative. You simply have to throw out epicycles and crystal spheres if you're going to get a better model of the Solar System. Without at least someone to say, "I don't believe this", everyone ends up stuck making tweaks rather than having new ideas. When you have an idea which basically works, e.g. Newtonian gravity, it's far simpler to modify it (e.g. posit the existence of Vulcan) than strike out on a wholly new path (e.g. accept that Mercury's precession falsifies the theory and come up with curved spacetime instead). Popper quotes Black approvingly on exactly this point:
A clear appreciation of what may be gained (and lost) by conventionalist methods was expressed, a hundred years before Poincaré, by Black who wrote: ‘A nice adaptation of conditions will make almost any hypothesis agree with the phenomena. This will please the imagination but does not advance our knowledge’.
So disproving a model is extremely useful. But what about the prospect of "ancillary hypotheses"? There could be legitimately unforeseen consequences of a theory - ones that can only be examined long after its publication - that might save it from any apparent contradictions with observation. Or there might be other, extraneous factors affecting the observations which don't actually challenge the fundamental mechanism of the theory at all. Clearly we shouldn't reject this possibility. Popper says:
As regards auxiliary hypotheses we propose to lay down the rule that only those are acceptable whose introduction does not diminish the degree of falsifiability or testability of the system in question, but, on the contrary, increases it. We can also put it like this. The introduction of an auxiliary hypothesis should always be regarded as an attempt to construct a new system; and this new system should then always be judged on the issue of whether it would, if adopted, constitute a real advance in our knowledge of the world. An example of an auxiliary hypothesis which is eminently acceptable in this sense is Pauli’s exclusion principle.
I very much like the attempt to clarify which kinds of auxiliary hypotheses are acceptable, but I think there's scope for elaboration here. Sometimes the full consequences of a theory take many years to work out, and we should accept these as being part of the theory irrespective of their testability. On the other hand, I agree that an ancillary hypothesis which doesn't itself have testable consequences - consequences distinct from the possibility that the original model is simply wrong - isn't actually much use.
But this seems to me incomplete. Something about the overall complexity of the model, and about its fundamental nature, also needs to be addressed here. First, there is value in simplicity, and it would be good to have general criteria for deciding when a theory has become too complex (perhaps, for example, when it can conceivably be adjusted to fit any new observation). Second, it matters whether your additional hypothesis relates to the basic nature of the model or deals strictly with the observables. If your adjustment only changes what's going on with your observations, then it's probably fine - but if it changes the fundamental mechanism at work, then it's really a new theory altogether, in which case you've effectively admitted the original idea was wrong - though not necessarily that it was devoid of merit.
For example, the statement that galaxy rotation curves can be explained by (a) some hitherto unknown particle is very different from the statement that they can be explained by (b) a particular particle with specific characteristics (likewise, some sort of modification to gravity versus one very specific sort of modification). Of course, theories need to be specific, but one's mere opinions do not. There's nothing inherently wrong with saying, "well, just because this one sort of modification or particle didn't work, it doesn't mean that some other similar thing won't do a better job." That is absolutely fine. The point is that it's a lot easier to test highly specific notions than vague general ideas. Falsification has to be as specific as the ideas being tested. So if your "adjustment" doesn't actually change the model itself, then there's nothing much wrong with it, but if it does, you're on much shakier ground.
For example, this ghastly "planet nine" business. If the model were changed to say that the new planet's orbit is different because of, say, new discoveries of other outer Solar System bodies, then that's okay. If it were instead modified to say that the planet itself is very much larger or smaller than originally proposed, that would arguably constitute a new theory.
Back to Popper. As well as allowing certain sorts of amendment to a theory, he goes further, saying that not every modification which fails to meet his proposed requirements need be rejected:
Our methodological rule may be qualified by the remark that we need not reject, as conventionalistic, every auxiliary hypothesis that fails to satisfy these standards. In particular, there are singular statements which do not really belong to the theoretical system at all. They are sometimes called ‘auxiliary hypotheses’, and although they are introduced to assist the theory, they are quite harmless. An example would be the assumption that a certain observation or measurement which cannot be repeated may have been due to error.
So contrary to the Aeon piece last time, Popper specifically allowed for conflicts between measurement and prediction not to count as an immediate falsification. This is at least rather less absolute than the simplistic view of his ideas. Later he elaborates:
We say that a theory is falsified only if we have accepted basic statements which contradict it. This condition is necessary, but not sufficient; for we have seen that non-reproducible single occurrences are of no significance to science. Thus a few stray basic statements contradicting a theory will hardly induce us to reject it as falsified. We shall take it as falsified only if we discover a reproducible effect which refutes the theory.

In essence, Popper is quite aware that you do have to be very sure your observation does contradict the theory and isn't just a weird fluke or a mistake.
That's really all I have to say. I still don't think falsification is all that amazeballs, and reading Popper is frankly tedious. But we do him a disservice not to realise that he had actually thought through at least some of the basic objections; to call him a "neoliberal", let alone to say he did "incalculable damage", is just weird.