Sister blog of Physicists of the Caribbean in which I babble about non-astronomy stuff, because everyone needs a hobby

Thursday, 25 February 2021

Pity poor Popper (II)

I have this annoying habit of actually wanting to read the source material. Following the recent post about Popper's notion of falsification being judged too harshly, I went and read a couple of sections from his Logic of Scientific Discovery. I do not have the time or inclination to read the whole thing, so I restricted myself to Part 1, Section 1, Subsection 6 ("Falsifiability as a Criterion of Demarcation") and Part 2, Section 4 ("Falsifiability").

You should probably read the first post before proceeding with this one. In brief, while I don't subscribe to falsifiability as a good definition of a scientific theory, and actively support the notion that a theory can be proven true as well as false (see links in the original post for details), I think it's too strong to say that theories are inherently unprovable. And I definitely don't think that this idea has done "incalculable damage to science and human wellbeing", as the Aeon author claimed.

Even my very brief reading of Popper confirms this opinion. It's very clear that Popper already covered the objections raised : that experimental error could be blamed, that a single contrary finding shouldn't be taken to refute a well-established law, that unforeseen implications of a theory should not be ignored.

Popper is hardly an eloquent author. He seems to have "drunk the Kool-Aid" when it comes to embracing philosophical jargon. For example, regarding concerns about the asymmetry of being able to falsify but not verify, he says :

My proposal is based upon an asymmetry between verifiability and falsifiability; an asymmetry which results from the logical form of universal statements. For these are never derivable from singular statements, but can be contradicted by singular statements.

He goes on at length about "singular" and "basic" statements, and I quickly got bored. I don't care if he's being especially rigorous - it's unreadable to the point of being unintelligible. Nevertheless, there are some passages which are abundantly clear. For example, when it comes to falsifications pointing to a scientific revolution, I think he raises a good point, albeit a bit implicitly :

Whenever the ‘classical’ system of the day is threatened by the results of new experiments which might be interpreted as falsifications according to my point of view, the system will appear unshaken to the conventionalist. He will explain away the inconsistencies which may have arisen; perhaps by blaming our inadequate mastery of the system.

But the newly rising structure, the boldness of which we admire, is seen by the conventionalist as a monument to the ‘total collapse of science’, as Dingler puts it. In the eyes of the conventionalist one principle only can help us to select a system as the chosen one from among all other possible systems: it is the principle of selecting the simplest system — the simplest system of implicit definitions; which of course means in practice the ‘classical’ system of the day.

Falsification is necessary for progress. If you can't ever rule something out, you can't possibly make advancements. There is a real risk that you will forever keep making adjustments, never accepting the plain truth in front of you, à la Steady State theory, and never even beginning to consider an alternative. You simply have to throw out epicycles and crystal spheres if you're going to get a better model of the Solar System. Without at least someone to say, "I don't believe this", everyone ends up stuck making tweaks rather than having new ideas. When you have an idea which basically works, e.g. Newtonian gravity, it's far simpler to modify it (i.e. posit the existence of Vulcan) than strike out on a wholly new path (i.e. accept that Mercury's precession falsifies the theory and come up with curved spacetime instead).

A clear appreciation of what may be gained (and lost) by conventionalist methods was expressed, a hundred years before Poincaré, by Black who wrote: ‘A nice adaptation of conditions will make almost any hypothesis agree with the phenomena. This will please the imagination but does not advance our knowledge'.

So disproving a model is extremely useful. But what about the prospect of "ancillary hypotheses" ? There could be legitimately unforeseen consequences of a theory - ones that can only be examined long after publication - that might save it from any apparent contradictions with observation. Or there might be other, extraneous factors affecting the observations which do not actually challenge the fundamental mechanism of the theory at all. Clearly we shouldn't reject this possibility. Popper says :

As regards auxiliary hypotheses we propose to lay down the rule that only those are acceptable whose introduction does not diminish the degree of falsifiability or testability of the system in question, but, on the contrary, increases it. We can also put it like this. The introduction of an auxiliary hypothesis should always be regarded as an attempt to construct a new system; and this new system should then always be judged on the issue of whether it would, if adopted, constitute a real advance in our knowledge of the world. An example of an auxiliary hypothesis which is eminently acceptable in this sense is Pauli’s exclusion principle.

I very much like the attempt to clarify which kinds of auxiliary hypotheses are acceptable, but I think there's scope for elaboration here. Sometimes the full consequences of a theory take many years to work out, and we should accept these as being part of the theory irrespective of their testability. On the other hand, I agree that an ancillary hypothesis which doesn't itself have testable consequences - consequences distinct from the possibility that the original model is simply wrong - is not actually much use.

But this seems to me incomplete. Something about the overall complexity of the model and its fundamental nature also needs to be addressed here. First, there is value in simplicity, and it would be good to have general criteria for deciding when a theory has become too complex (e.g., perhaps, when it can conceivably be adjusted to fit any new observation - see the toy sketch below). Second, it matters whether your additional hypothesis relates to the basic nature of the model or deals strictly with the observables. If your adjustment only changes what's going on with your observations, then it's probably fine - but if it changes the fundamental mechanism at work, then it's really a new theory altogether, in which case you've effectively admitted the original idea was wrong : but not necessarily that it was devoid of merit.
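
On that first point, here's a minimal sketch of what "can be adjusted to fit any new observation" looks like in practice. This is purely my own illustration, not anything from Popper : a model with as many free parameters as there are data points can be tuned to pass exactly through any set of measurements whatsoever, even pure noise, and that sort of unlimited flexibility is precisely what makes a theory untestable.

```python
# Toy illustration (mine, not Popper's) : a model with as many free
# parameters as data points can "explain" absolutely anything.
import numpy as np

rng = np.random.default_rng(42)
x = np.arange(6.0)                  # six observation points
y = rng.normal(size=6)              # six completely arbitrary "measurements"

coeffs = np.polyfit(x, y, deg=5)    # degree-5 polynomial : six free parameters
fit = np.polyval(coeffs, x)

print(np.allclose(fit, y))          # True : a perfect fit to pure noise
```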

As for the second point : the statement that galaxy rotation curves can be explained by (a) some hitherto unknown particle is very different from (b) a particular particle having specific characteristics (likewise, some sort of modification to gravity versus a very specific sort of modification). Of course, theories need to be specific, but one's mere opinions do not. There's nothing inherently wrong with saying, "well, just because this one sort of modification or particle didn't work, it doesn't mean that some other similar thing won't do a better job." That is absolutely fine. The point is that it's a lot easier to test highly specific notions than vague general ideas. Falsification has to be as specific as the ideas which can be tested. So if your "adjustment" doesn't actually change the model itself, then there's nothing much wrong with it, but if it does, you're on much shakier ground.

For example, this ghastly "planet nine" business. If the model were changed to say that the new planet's orbit was different because of, say, new discoveries of other outer Solar System bodies, then that's okay. If it were instead modified to say that the planet itself was very much larger or smaller than originally proposed, that would arguably constitute a new theory.

Back to Popper. As well as allowing for certain sorts of amendment to a theory, he goes further, saying that not every modification which fails to meet his proposed requirements need be rejected :

Our methodological rule may be qualified by the remark that we need not reject, as conventionalistic, every auxiliary hypothesis that fails to satisfy these standards. In particular, there are singular statements which do not really belong to the theoretical system at all. They are sometimes called ‘auxiliary hypotheses’, and although they are introduced to assist the theory, they are quite harmless. An example would be the assumption that a certain observation or measurement which cannot be repeated may have been due to error.

So contrary to the Aeon piece last time, Popper specifically allowed for conflicts between measurement and prediction not to count as an immediate falsification. This is at least rather less absolute than the simplistic view of his ideas. Later he elaborates :

We say that a theory is falsified only if we have accepted basic statements which contradict it. This condition is necessary, but not sufficient; for we have seen that non-reproducible single occurrences are of no significance to science. Thus a few stray basic statements contradicting a theory will hardly induce us to reject it as falsified. We shall take it as falsified only if we discover a reproducible effect which refutes the theory.

In essence, Popper is quite aware that you do have to be very sure your observation does contradict the theory and isn't just a weird fluke or a mistake.

That's really all I have to say. I still don't think falsification is all that amazeballs, and reading Popper is frankly tedious. But we do him a disservice if we don't realise that he had actually thought through at least some of the basic objections; to call him a "neoliberal", let alone to say he did "incalculable damage", is just being weird.

Monday, 22 February 2021

In two minds

Recently I accomplished a long-term goal by finally articulating what I think free will really is. In my view, it means the capacity for mind over matter : for our thoughts, the things we subjectively experience, to change our behaviour and thus give us a limited measure of mental control over the external, objective world. I see consciousness as a sort of field-like thing, highly localised to within our own brain, and only able to affect (and be affected by) the material that gave rise to it. This then prevents any sort of mysticism, albeit while keeping consciousness itself thoroughly mysterious.

Thanks to a comment on that post, I started pondering my way down a rabbit-hole of implications as to what this view might mean. I'll give a shout-out here to this very nice blog : from what I've seen, I disagree with much of it but find it hugely interesting and provocative. So now I'm wondering if, though the idea of field consciousness seems sound enough in itself, the same can be said for its consequences.

What started to nag at me was the idea of split-brain memories. This is the bizarre finding that some animals appear to store and access memories in different sides of the brain, depending on how they were formed : e.g. an animal that sees something with one eye may respond differently when seeing it with the other. And this phenomenon occurs in humans too. When certain connections between the two hemispheres are severed (as is occasionally done in medical procedures to treat severe epilepsy), patients behave as though they have two distinct minds, with one side trying to physically stop the other from doing things. Example 1 :

When Sperry and Gazzaniga presented stimuli to the right visual field (processed by the speaking left hemisphere), the patient responded normally. However, when stimuli were presented to the left visual field (processed by the mute right hemisphere), the patient said he saw nothing. Yet his left hand would draw the image shown. When asked why his left hand did that, the patient looked baffled, and responded that he had no idea.

In some animals a similar condition is perfectly normal. Example 2 (from "Other Minds", which I briefly review here) :

The pigeons were trained to do a simple task with one eye masked, then each pigeon was tested on the same task while being forced to use the other eye. In a study using nine birds, eight of them did not show any "inter-ocular transfer" at all. What seemed to be a skill learned by the whole bird was in fact available to only half the bird; the other half had no idea.

And it's not a straightforward binary condition either. Example 3 (also from "Other Minds") :

An octopus trained on a visual task using just one eye initially only remembered the task when tested with the same eye. With extended training, they could perform the task using the other eye. The octopuses were unlike pigeons in that some information did get across; they were unlike us in that it did not get across easily... The special kind of mental fragmentation seen in split-brain humans seems to be a routine part of many animals' life.

It's also worth recalling blindsight, when visual information is processed only subconsciously; reverse blindsight, when no subconscious processing is applied to the received images, so that they simply make no sense - like being unable to read text, but applied to everything; and flatworms, which can apparently quite literally eat memories.

So what's going on here ? Does this point towards an even more materialistic interpretation of consciousness, where it's strictly limited to matter over mind and never the other way around ? Naively, I would think that if consciousness is any sort of field, be that generated by the brain or only received by it, it ought to be able to convey information throughout itself. It seems extremely strange to say that it could only access information with such extreme locality; that part of the field would know things that the rest didn't.

Of course, idealism has no problem with this since everything is imaginary anyway. Illusionism denies consciousness exists, so again, no problem. Compatibility with panpsychism is harder to determine, since it's not obvious how consciousness is combined from its constituent atoms. But it definitely does appear to be a problem for any dualistic field theory of consciousness, be that generated or received by the brain.

However, we also know about blindsight. And as I've suggested, this may not be so uncommon - we all run on autopilot sometimes, lost in thought but for the most part still managing to avoid bumping into tractors and stray hippos. So our brain is definitely capable of sophisticated data processing without conscious intervention. Maybe, then, the physical splitting of the brain simply prevents certain memories from reaching the consciousness at all, so that the information is handled by the brain's other faculties which don't require awareness. After all, we're not exactly fully aware of all the signals our brain sends out : we don't feel the nerve impulses sent to our arms or lungs or feet, yet they all manage to get by perfectly well without us.

This makes it at least conceivable that someone could draw something they genuinely weren't aware of, without invalidating conscious field theory (if I may give it an unnecessarily grandiose title) at all. I think this idea would depend on how far it can be pushed - could someone solve differential equations without being aware of them ? A bigger and more fundamental problem might be that something has to make a choice : part of the brain has to decide whether or not to commence drawing, and to be unaware that this choice was made seems extremely strange. The part of the brain that enacts drawing must furthermore understand the instruction to draw... all in all, it would have to behave suspiciously like someone who was fully conscious themselves.

Of course, splitting the brain could just outright split the consciousness. This doesn't invalidate its field nature though : the two halves of the brain would then either generate (or receive) an additional consciousness on top of the original, with the original also being changed by the change in its receiver/generator equipment. And, since they're split, each consciousness will only ever report the experience of being one person, by definition, regardless of whether it does so in writing or speech. So an individual will never feel like they're two people, even if there are multiple consciousnesses coexisting inside their singular head.

This seems perfectly reasonable in cases of medical intervention, but bizarre in the animals that are like this anyway : why should a pigeon have multiple awarenesses ? Of course it might, but this doesn't feel very plausible.

A third possibility is that how consciousness, in this model, interacts with the external world depends on the exact nature of its receiver/generator. The conscious field is mediated by the brain, so if the brain is damaged then the field can't interact with the external world in the same way : even if the connections to the arms, hands etc. are themselves undamaged. In this interpretation the field itself would be fine, with no change in awareness or memory, just unable to control the body as it did previously. Likewise, an animal could exist in this state perfectly naturally, without surgical intervention or multiple consciousnesses in its head, just unable to control itself as a more naïve interpretation would expect. It would be a bit like being left-handed, only taken to an extreme : not unwilling to act or unaware of what was going on, but simply unable to express a particular memory or perception.

This would seem to be unsatisfyingly strange and quite unprovable; however, it is perhaps not as weird as it might first appear. Recall reverse blindsight, where those restored to sight after a long absence are unable to "read" the world around them : they perceive it, but cannot assign anything any meaning. Or foreign accent syndrome, or dyslexia - it is quite possible to perceive something, to be aware of it, and yet not understand it, and thus not be able to communicate (i.e. express) it. The major difference is that in those situations people are generally aware of the difficulty.

In short, explanations could be :

  • The brain processes some information without it ever reaching the conscious mind. The brain would have to receive external information, decide on a course of action, and then enact it all without conscious intervention. To some extent it can do this anyway, but it raises the question of how far this can go - and of course, ultimately whether consciousness is really required at all.
  • Multiple consciousnesses coexist alongside one another. Each is singular and has access (usually) to the same basic information, but each can only perceive itself. This would be radically at odds with how we normally perceive ourselves, and one wonders why a pigeon would need to have multiple minds - and why I can't let a secondary consciousness take over when I'm bored.
  • There is only one conscious field and it is always privy to the memories of the entire brain, but since the field's interaction is governed by the material substance of the brain, its capabilities can be changed in surprising ways. The mind may be perfectly aware of what's going on but unable to express itself due to the limitations of the mediating brain.

I think it fair to admit, though, that while this phenomenon certainly doesn't rule out consciousness as a field, it does lend credence to a more materialist, illusionist perspective : passive consciousness. Either way, it points to a clear separation of consciousness and memory. Now I pointed that out in the original post, but I think this still leaves a distinct... uneasiness. Yes, there's more to identity than memory. But if I lose all my memories, can I really still call myself the same person ? If my "soul", for want of a better word, need not even be aware of anything I've ever done, in what sense is it really "mine" ?

I don't know. For my part this isn't nearly enough to persuade me that illusionism is anything other than mad. But consciousness does seem, at the very least, to be even stranger than we give it credit for - and that's saying something.

Saturday, 20 February 2021

Pity poor Popper

I don't subscribe to the view that science is all about falsifying things. It's rather popular on the internet but seldom crops up in day-to-day research, which tends to be a lot more free-wheeling. Not that it never comes up at all - people do worry if their model is hard to test - but it's not the dominant strain of scientific thinking, if there even is such a thing.

As discussed at length previously, real science tends to be messy. When it comes to theories and models, broadly they can be said to be scientific and/or useful if : they can be compared with observations; they have different strengths and weaknesses that can be compared with each other; they are at least testable in principle if not yet in current practice; they offer markedly different fundamental mechanisms which affect our intuitive understanding of a situation. A model which fails on this last point, which does not describe the actual physics at work, is not a scientific explanation - no matter how precise its predictions or how thoroughly they can be tested. Hence astrological predictions are not scientific, because they lack any mechanistic explanation.

This is a lot more sophisticated than the classic notion of falsification, which essentially makes the process weirdly half-binary : keep testing things until their eventual but inevitable destruction, at which point, come up with something better. Instead, I believe it's entirely possible to both prove and disprove something with sufficient evidence - and given sensible constraining assumptions. If you don't have those, then you can never claim to know anything at all, and may as well sod off and stop annoying everyone. But while being able to do this is certainly very nice, and always desirable, it isn't necessary for doing useful science. Not at all.

But this Aeon piece goes too far*. It leans towards a view of science in which everything is possible, where falsification can essentially never happen. That's not science in my book, it's philosophy. And philosophy is feckin' important, but it ain't science.

* Hypothesis : any article which uses the word "neoliberal" is likely to be awful. Except when the offending word is in quotation marks, obviously.

The process of science, wrote Popper, was to conjecture a hypothesis and then attempt to falsify it. You must set up an experiment to try to prove your hypothesis wrong. If it is disproved, you must renounce it. Herein, said Popper, lies the great distinction between science and pseudoscience: the latter will try to protect itself from disproof by massaging its theory. But in science it is all or nothing, do or die.

Three philosophers were pulling the rug away beneath the Popperians’ feet. They argued that, when an experiment fails to prove a hypothesis, any element of the physical or theoretical set-up could be to blame. Nor can any single disproof ever count against a theory, since we can always put in a good-faith auxiliary hypothesis to protect it: perhaps the lab mice weren’t sufficiently inbred to produce genetic consistency; perhaps the chemical reaction occurs only in the presence of a particular catalyst. Moreover, we have to protect some theories for the sake of getting on at all. Generally, we don’t conclude that we have disproved well-established laws of physics – rather, that our experiment was faulty.

A classic, "yeah, but" moment. Yes, we need to consider implicit assumptions, faulty measurements and the like - and yes, sometimes we can even have more confidence in a theory than an observation. And yes once more, most of the time all we can do is vary probabilities : this new data increases our confidence in theory A, this one favours theory B, this finding suggests something entirely new that we hadn't thought of before. But while it's very stupid to view science as a set of binary true/false choices, it's not much better to suppose that it can never falsify anything. If we found a truly superluminal object, that would instantly falsify relativity. If we found a giant space cat, that would instantly disprove... err, well, lots of things, probably - and simultaneously vindicate anyone who had proposed the existence of a giant space cat.
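
To put the "vary probabilities" point a little more concretely, here's a minimal sketch of Bayesian updating between two rival theories. This is entirely my own illustration with made-up numbers - nothing here comes from Popper or the Aeon piece. Each new observation shifts our relative confidence without ever delivering absolute certainty... unless a theory assigns an observation zero probability, in which case a single confirmed sighting really does falsify it outright :

```python
# Minimal sketch : Bayesian updating between rival theories A and B.
# Illustrative numbers only. Evidence shifts confidence gradually, but an
# observation a theory strictly forbids (likelihood 0) kills it in one go.

def update(p_a, p_b, like_a, like_b):
    """Posterior probabilities of theories A and B after one observation."""
    joint_a, joint_b = p_a * like_a, p_b * like_b
    total = joint_a + joint_b
    return joint_a / total, joint_b / total

p_a, p_b = 0.5, 0.5                  # start with no preference either way
for _ in range(3):                   # three sightings of some anomaly,
    p_a, p_b = update(p_a, p_b, 0.05, 0.60)   # rare under A, common under B
    print(f"P(A) = {p_a:.4f}, P(B) = {p_b:.4f}")

# The giant space cat : theory A says it's flatly impossible (likelihood 0),
# so one confirmed sighting drives P(A) to exactly zero. That's falsification.
p_a, p_b = update(p_a, p_b, 0.0, 0.10)
print(f"P(A) = {p_a:.4f}")           # 0.0000
```

The asymmetry at the end is the one Popper was gesturing at : confidence in a theory can creep towards certainty but never quite reach it, whereas a single observation the theory strictly forbids sends it to exactly zero.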

The notion that science is all about falsification has done incalculable damage not just to science but to human wellbeing. It has normalised distrust as the default condition for knowledge-making, while setting an unreachable and unrealistic standard for the scientific enterprise. Climate sceptics demand precise predictions of an impossible kind, yet seize upon a single anomalous piece of data to claim to have disproved the entire edifice of combined research; anti-vaxxers exploit the impossibility of any ultimate proof of safety to fuel their destructive activism. In this sense, Popperianism has a great deal to answer for.

I just think that's daft. First, anti-science loonies will always be anti-science loonies; they aren't demanding impossibly high standards of proof - they are simply not very good at thinking. The idea that such people are merely holding out for unreasonably high levels of evidence, and are otherwise just as rational as everyone else, is pernicious : on the contrary, if you actually talk to such people, it becomes very obvious very quickly that they are neither especially interested in the truth nor in any way rational. They are already convinced of their own ideas and use that conviction as circular justification for their belief. Second, I'm just not seeing a scientific culture dominated by falsification; at most, reviewers can be overly skeptical, but that's about it. It hasn't "normalised distrust" - science is always and unavoidably more complex than that, relying on a careful, considered, moderated approach, neither too trusting nor too skeptical. A philosopher describing the process as something else simply hasn't had that much effect. Popper, in the end, was just not that important : reshaping a culture is a very great deal harder than writing a book.
