A while back I posted some stuff over at PotC about John Locke, and a sort-of sequel post on what it would be like to be a ghost (and another one about split brains). This generated some very nice discussions in a variety of places, so here, albeit belatedly, I want to collate some of the major themes from them.
My basic take on consciousness is that it's a sort of field-like thing. It is non-physical, yet, like a more familiar electrical or gravitational field, it has an effect on physical substances. Unlike most physical fields, it requires a specific configuration of matter to generate it, and can only interact with that particular configuration. Thus it remains highly localised, giving a very limited but nevertheless real aspect to "mind over matter". It is important to emphasise the non-physical nature : I am not suggesting something directly analogous to electromagnetism or some "substance" with a corresponding particle.
The aim of this is to save the common sense intuition that our thoughts control our actions whilst minimising the amount of mystical woo. I do not pretend any knowledge of what consciousness is (I offer a description, not an explanation), only that its effects are highly limited but not zero. Further, I say that we do not have direct control over our thoughts - that would be a contradiction in terms - but that we can beckon our thoughts in certain directions. Thus our free will is also limited, but again, quite definitely real in spite of that.
In order to keep this summary as brief as possible (some of the discussion threads were very, very long), I'll try and arrange things thematically. I'll be anonymising the content to avoid accusing anyone of saying something that they didn't say or didn't mean. For the enthusiasts, the two main discussions can be found here and here, with a third smaller one here (another short discussion occurred on MeWe, which doesn't allow linking).
1) Can we test for free will ?
I believe consciousness is efficacious. My thoughts control my actions and are dependent upon entirely subjective concepts and qualia. Numbers do not physically exist, but they have a manifest impact on my actions. Likewise, nothing else in nature is posited to be utterly passive; everything affects everything else to varying degrees, and for consciousness to be any different would be radically strange.
Since I view consciousness as a quasi-physical phenomenon, generated by hardware, I've suggested that this view of free will could be tested by building a robot with AI and running a simulation of the same robot. If consciousness really does depend on hardware, and really does give us a measure of control, then their behaviour should be different.
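To make the proposed test a little more concrete, here's a rough sketch of what the comparison might look like. Everything in it is hypothetical - the PhysicalRobot and SimulatedRobot interfaces are placeholders invented purely for illustration, not anything from the discussions :

```python
# A minimal sketch of the proposed test, assuming hypothetical PhysicalRobot
# and SimulatedRobot objects that expose an identical decide() method.
# If behaviour depends only on the software, the two response streams should
# match; systematic divergence would hint at a hardware-dependent contribution.

def compare_behaviour(physical, simulated, stimuli):
    """Feed identical stimuli to both agents and measure how often they differ."""
    divergences = 0
    for stimulus in stimuli:
        if physical.decide(stimulus) != simulated.decide(stimulus):
            divergences += 1
    return divergences / len(stimuli)  # fraction of trials with differing responses
```

Of course, actually attributing any divergence to consciousness rather than to an imperfect simulation of the hardware would be another matter entirely.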
A difficulty for this came up in the discussions. It was suggested that free will would require you to be able to make a different choice given physically identical conditions : if there is something non-physical at work then it ought not to be bound by physical reality. This poses two difficulties, one scientific and one philosophical.
The scientific objection is that (as I understand it) the Uncertainty Principle means not that we merely can't measure things accurately, but that things literally don't have certain properties. So an exact repeat is fundamentally impossible, not a measurement problem. And it should be noted that mere randomness is not the same as intention - free will according to me means that I do something because I choose to, and that choice occurs through subjective thought, not through random electrical currents in the brain (although this might tip the balance in some cases).
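For reference, the textbook statement of the position-momentum uncertainty relation (just the standard formula, not something raised in the discussions) is

$$\Delta x \, \Delta p \ge \frac{\hbar}{2}$$

and the usual modern reading is that this bound is a property of the system itself, not of our measuring instruments - which is why an exact repeat is impossible in principle rather than merely in practice.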
The philosophical objection concerns identity. If we re-ran history and everything was identical except that I decided to vote for Boris Johnson, something would be seriously amiss. There would be no causality - I would have gone on lengthy monologues against the prattling twit only to suddenly decide to vote for him anyway ? It wouldn't make any sense. If I did something so out of character it would make a mockery of the whole concept of personal identity. Certainly there are some things I could do entirely differently and not feel like I'd soiled myself (like choosing waffles or toast for breakfast), but others are fundamental to who I am*. Sometimes decisions are easy, at other times they involve prolonged wrangling, but at all times I cannot escape the clear sensation of being in control.
* This does not mean accepting determinism, only that free will has limitations like everything else.
I do not have any answers to these points.
2) But does consciousness do anything in the first place ?
There was an interesting variety of positions on this which I hadn't previously considered. One is that consciousness is real and affects the world, but doesn't allow for free will (how that one works I don't know; the discussion moved on to other things). Another is that panpsychism, while a form of dualism, doesn't necessarily imply that consciousness interacts with anything. One could be a dualist in saying that subjective experience is not the same as physical reality, yet still deny that consciousness actually does anything. This surprised me, but on reflection it's perfectly consistent.
A more extreme version posed was that maybe fictional characters are conscious. I'm reminded of an author's comment on a webcomic : you see, he's a fully realized little guy, in my head, and HE made that decision to leap, not me. Or from one of the discussions : plenty of authors note that their characters often do things that they never intended, and they had to follow the story wherever the characters led. Superman isn’t just a passive model in your head, but an active simulation. This is similar to another description of consciousness as a simulation, of the brain talking to itself, a way to try and simulate what will happen if we run action x given condition y. And we certainly know that the brain does fill in a lot of gaps, and even that there's a delay between sensory data and our mental realisation of it, with the brain extrapolating so we don't notice the difference. Conscious experience doesn't always mesh well with the sensory input data.
But to me if we allow fictional characters to have consciousness then we've robbed it of any real meaning. Anything I imagine is under my control (even if limited) - to say I could imagine something with a will of its own seems contradictory to me; an internal simulation is distinctly different from an external one. And rather than asking, "is Superman conscious ?" we should perhaps first ask, "does Superman experience qualia ?". I think the answer is clearly no. At the most, fictional characters can only receive the qualia their authors deign to give them, and this has some distinct tones of idealism rather than panpsychism.
(None of the discussions mentioned idealism much, and anyway I recently finished Berkeley's Principles of Human Knowledge which certainly warrants its own post. So I'll leave idealism aside for now.)
3) Is consciousness actually any different from matter ?
There were a couple of really interesting takes on what consciousness actually is. First, I find myself periodically returning to the mind-bending idea of illusionism, that we're not really thinking at all. The discussion here helped illuminate what might be meant by this, since other examinations have hitherto proved fruitless.
Optical illusions are perhaps the best way to illustrate this. Consider the waterfall illusion, which induces the sensation of motion without causing any "real" motion. This is an illusion : what we see does not correspond to reality; we perceive motion where none exists. Also recall the extreme case of having to re-learn to see after a protracted period of blindness, and how assigning meaning to the world can be extremely difficult. This may help explain motion blindness, with motion being a qualia-like sensation, something we have to learn to experience rather than perceiving directly. So much of what we perceive - everything, in fact - is actually our own internally-assigned meanings, not the raw sensory data at all.
(Which is not, of course, to say that everything is perception, which I think is daft, but more on that in a forthcoming PotC post on Berkeley.)
In this vein, illusionism could be interpreted to mean that we do have inner mental lives and we do experience qualia. It's just that instead of them having any direct connection to objective reality, they are entirely mental constructs. Our thoughts themselves are qualia-like constructs; we have the sensation of thinking things we're not really thinking.
This can perhaps be better explained using another example : CGI movies. Can the brain do all the calculations needed to produce a Marvel movie ? This is a surprisingly contentious point with a variety of implications. In dreams, it was argued, the detail is actually very low but the perception of detail can be arbitrarily high (similarly, most people can't draw a dollar bill in any detail at all from memory). So we have a sensation of perceiving details we don't actually perceive. Other optical illusions seem to attest to this, such as the grid of dots.
But is that really what's happening ? Let's reduce this to something more basic. Is my mental image of something much simpler - say, a circle - really an image at all, or, for want of a better description, merely the sensation of perceiving a circle ? I would dispute the claim that we don't really perceive high levels of detail in dreams - I've tried to deliberately concentrate on details in dreams and found them only to increase in vividness. Likewise, while some people possess no mental imagery at all, others have the extreme opposite condition. So it's probably a mistake to get carried away with this. Sure, our brain does fill in a lot of missing stuff, and that we can sense motion where none is apparent is interesting - but I see no reason why our brain couldn't just be creating moving images.
Perhaps both are true to some degree. Maybe the details in dreams are more like foveated rendering, created only when necessary, with the memory of details in other areas preserved only in sufficient detail to fool us into thinking we're seeing more than we actually currently are. In any case, while I can accept that there's more to qualia than we might at first guess (like understanding the meaning of motion or shape or distance), I still don't see how this could apply to thought or perception itself. In short, I still think illusionism is without merit. If I think I'm perceiving a circle, then dammit, it must be so. I can't be mistaken about my own perceptions, only how those perceptions correspond to objective reality.
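For anyone unfamiliar with the term, foveated rendering just means computing full detail only where the gaze is pointed and filling in the periphery cheaply. A toy sketch of the idea, with a made-up scene function and entirely arbitrary numbers :

```python
# Toy illustration of foveated rendering : full detail is computed only near
# the point of attention, while the periphery is sampled coarsely and reused.
# scene(x, y) is a stand-in for whatever would supply the detail on demand.

def render_foveated(scene, gaze, width, height, fovea_radius=10, coarse_step=4):
    gx, gy = gaze
    image = {}
    for y in range(height):
        for x in range(width):
            if (x - gx) ** 2 + (y - gy) ** 2 <= fovea_radius ** 2:
                image[(x, y)] = scene(x, y)  # full detail where attended
            else:
                cx, cy = x - x % coarse_step, y - y % coarse_step
                if (cx, cy) not in image:
                    image[(cx, cy)] = scene(cx, cy)  # coarse sample, computed once
                image[(x, y)] = image[(cx, cy)]      # reused across the whole block
    return image
```

The point of the analogy is simply that only a small fraction of the "image" ever needs to be computed in full, while still giving the impression of a complete picture.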
4) Is the brain a computer ?
Before moving on to the second interpretation of how consciousness relates to matter, it's worth discussing here whether the brain is actually doing calculations in the classical sense. Obviously, it is not literally doing the same process as when we do a sum on pen and paper; there is no in-built "carry the one" subroutine that the brain employs. But arguably we could turn this on its head. The brain doesn't do mathematics but, it was suggested, mathematics describes what the brain does. When we calculate the trajectory of a projectile, we're describing in linguistic, numerical form the procedures the brain must perform in order to extend our hand to catch a ball.
I have some sympathy for this view. The brain clearly has outstandingly high performance when it comes to sensory data processing, as described already. And though it can't process data in the same way as a computer, bare intuition is incredibly powerful - one can see, at a glance, whether two things are perceived to be the same or different, whether there is structure in data or not, so long as everything is displayed in a format the brain accepts. Expecting it to be able to speak the same language as mathematicians, however, is not viable, any more than we could feed punch cards into a Dell laptop and expect it to process them.
I'm less sure about how far we can extend this. Computers can operate to arbitrary precision, and it seems unlikely the brain can do the same - we can't guess the trajectory of a ball with perfect accuracy, much less actually catch it every time. Claims that the brain might be able to calculate pi to the nth decimal place just don't hold up. I lean towards the brain doing something fundamentally quite different to mathematics; I don't see any reason to presume that mathematics is in any sense a reflection of neural processes. The brain is warm, squishy, and prone to paradoxical contradictions. That mathematics is derived from that warm squishy mess doesn't mean that it must follow the same parameters.
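Just to make the contrast explicit, this is the sort of thing a computer does with the ball-catching example from earlier - the symbolic, arbitrary-precision version of whatever the brain does by feel. The numbers are arbitrary and purely for illustration :

```python
import math

# The explicit, symbolic calculation which (on the view above) merely
# *describes* what the brain does implicitly when catching a ball.

def landing_distance(speed, angle_deg, height=0.0, g=9.81):
    """Horizontal distance travelled by a projectile launched at speed (m/s)."""
    angle = math.radians(angle_deg)
    vx, vy = speed * math.cos(angle), speed * math.sin(angle)
    flight_time = (vy + math.sqrt(vy ** 2 + 2 * g * height)) / g  # time to return to the ground
    return vx * flight_time

print(landing_distance(10, 45))  # ~10.19 m, to as many decimal places as we care to ask for
```

The brain presumably arrives at roughly the same answer without anything resembling a square root, which is rather the point.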
5) Does consciousness need to offer a survival advantage ?
Even if it doesn't exactly do complex mathematics, clearly the brain does something. There would be no point at all in evolving such a complex structure if it had no useful function. This causes no great difficulty for biologists or psychologists, who are happy to equate emotions, senses, motivation, and cognition in general with actions. But it causes no end of problems for philosophers.
One argument is that we only perceive that which offers a survival benefit, which I completely reject. A more interesting suggestion was that, since we can imagine an automaton performing all the same actions as someone conscious, there is no unavoidable need for consciousness. Thus, consciousness cannot have evolved, hence the appeal of panpsychism : it must be a basic property of matter.
This is to my mind at least more sophisticated than other arguments against the evolution of consciousness, which I found so foolish that I haven't bothered to rebut them. I don't buy it though. Evolution comes up with all kinds of wacky things that offer no advantage - it does not optimise very well, producing some things which are indeed genuine adaptations and some which are purely side-effects.
Let's flip this around. Things which don't have consciousness have severe survival disadvantages. Yes, we can imagine an automaton without awareness managing to react unconsciously but appropriately. But can we imagine the opposite - could a conscious, intelligent robot with goals and motivations suffer a survival disadvantage ? I would say no. If you have conscious desire, intelligence and understanding, you'll generally perform far beyond the level of pure instinctive stimulus response. How could pure stimulus response deal with novel situations correctly ? I'll venture that it couldn't. Genuine understanding, the capacity for at least a rudimentary analysis, requires consciousness by definition.
6) Is it all just a terminology problem ?
Which brings us at last to the eponymous aspect of functionalism. Having read Locke, then Berkeley, and a good chunk of Hume, I'm struck by the way Berkeley and Hume seize on the same points in Locke and then (mis)interpret them in radically different ways. And so it went in the discussions here : accepting that "there is no objective evidence for anything" would, I'd have assumed, be a clear mark of an idealist. Not so. Instead, this went the exact opposite way, to a distinctly materialistic and functionalist view of consciousness. Qualia are thus viewed as being just as objective as everything else - that is, not at all.
Functionalism is apparently the mainstream view of consciousness, but as far as I can tell it's a big cop-out. Rather than trying to define consciousness by what it is, functionalism essentially says, "consciousness is just whatever the brain does". Or, per the discussion : The brain handles 100% of all thinking, including the bits we’re conscious of. There is no brain function where we’re 100% consciously driving it, they’re all just tips of various icebergs, mostly out of reach... I do not think conscious thinking is very good for almost anything at all… except for reflection. It is a way to look at our own actions as if done by someone else.
In other words there is a direct one-to-one correlation between physical reality and consciousness. Do something to the brain and you inevitably do something to the consciousness. Thus (if I understand correctly), any influence the consciousness appears to have over our actions is just the result of physical processes altering the consciousness, and our will is just an illusion.
But what I don't understand is why we need to presume this only goes one way. Why is it more legitimate to say that the brain affects the mind than that the mind affects the brain ? How can we be sure that inherently subjective things always correspond perfectly to objective reality ? How do we know that it's a change in brain activity which causes a change in our thoughts, rather than our changing thoughts causing a change in the physical properties of the brain ?
The functionalist answer seems to be that they are one and the same, that no differentiation between brain activity and subjective experience is possible or meaningful.
This I do not find helpful. Clearly I can imagine things which are absolutely non-physical in nature, e.g. concepts like justice, responsibility, disagreement, concepts themselves... these things cannot exist except as mental constructs, labels. They are of a qualitatively different nature to physical objects; you can't hit someone in the head with yellow. Granted, our labels and descriptions of physical things are themselves mental constructs. Hence the need for a magical, illusory horse, i.e. a unicorn.
Does functionalism help with this ? I don't think so. I think it's an attempt to define the problem out of existence. It seems undeniable to me that mental imagery and concepts do not exist in the world. True, mental states corresponding to duty and honour and numbers do exist, and correspond to physical properties of the brain, but the concepts themselves are nowhere to be found in nature. Nor do I see how we can ever know that measurable, observable mental states in the brain invariably correspond to subjective thoughts in the mind - I see no reason to assume a perfect correspondence. In short, no, functionalism cannot save my unicorn.
Well, ramblings over. I don't expect this will convince anyone of anything, but then, that was hardly the point.
Reading back over one of the previous links, I still feel indignant about an old comment saying that I "just don't get it". For whatever it's worth, I've tried to explain the different positions in different ways, to give the best possible chance of understanding them even while disagreeing with them.
Ultimately, I suspect, consciousness remains something that people either find deeply mysterious or perfectly ordinary. Statements like "it's all just a feedback effect" or "it's whatever the brain does" seem to genuinely satisfy those who subscribe to them. As for me, I think they explain nothing at all, and indicate that the problem is not properly understood. Then again, maybe I really just don't get it.