Sister blog of Physicists of the Caribbean in which I babble about non-astronomy stuff, because everyone needs a hobby

Wednesday, 8 February 2023

Pinning down consciousness

After some recent reads, I'm tempted to venture a working definition of what I think I mean when I say "consciousness". This is nothing more than what I find useful, mind you.

I've said in the past that I think consciousness is a spectrum, and that a dreaming or sedated person has a consciousness of a sort. But now I'm wondering whether it wouldn't be better to demarcate consciousness as a particular kind of the more general category of mental states. Clearly, dreamers and the drugged have those, but they're not conscious in the usual sense. We can be subconsciously aware of our surroundings all the time without really actively thinking about them, and in the vernacular we often use "conscious of" to mean "attending to" rather than merely "sensing".

It's helpful to start with the extremes. Let me be bold enough to declare that I think our regular, waking sort of awareness - of the Sun at noon, in bright light with nothing hidden or concealed beyond our own sensory limits - is the truest sort of consciousness. Look, I know, we could spend all day trying to justify this, but I don't want to do that so I won't.

Under these conditions, I think the following characteristics define our conscious person. Each of these needs to be heavily qualified, but let me just state them first :

  • They have an inner awareness of themselves and their surroundings which has no direct connection to their external reality. For example, they can create mental imagery, run an inner monologue, and in general create imaginary things that aren't there.
  • Intelligence. They can take information and process it using some kind of (preferably, but not necessarily, reasoned and self-consistent) judgments to arrive at a conclusion.
  • Agency. That is, they feel they can weigh the choices put before them and decide for themselves how they act. This can be done with varying levels of intelligence.
  • Sentience. They receive external information from their senses (or internal in the case of proprioception) which is incorporated into their internal awareness.
  • They are self-aware. They have control over themselves in relation to the external world. They can distinguish their own inner awareness from their perceived external reality.
My reasoning is that we can surely all agree that someone having all of these characteristics is definitively conscious, and someone having none of them is definitely dead, or a rock. Furthermore, each individual characteristic is a spectrum : you could be sentient about one or many things and have varying degrees of sensitivity; you can have agency over some things you do (choosing to go left or right) but not others (liking or disliking a particular taste); you can sometimes muddle up external and internal realities (did I just see that or was it a trick of the light in the corner of my eye ?).

With this definition, not only is consciousness itself allowed to be a spectrum, but you can have qualitatively different sorts of mental states as well. A dreamer is not conscious, but is clearly alive; an intelligent robot can be useful, but isn't necessarily conscious or alive.

Let me further qualify these characteristics to avoid any gross misunderstandings. Note that I'm trying to keep each of these as independent as possible in order to allow for different types of mind and information processing. The reasons for this will become clearer as we go on. 


Inner awareness : I mean a mental state that has no direct connection to reality; essentially, qualia. A conscious person experiences something. For example, colour is not wavelength, touch is not pressure, sound is not frequency, and heat is not temperature. What we experience is how our brain interprets those external events, and the experience of a thing is not at all the same as the thing itself. Memory is another good example. Other things are also totally imaginary* and exist nowhere in reality, e.g. concepts ("show me one atom of justice" and all that). Such imaginings need not be at all sophisticated, however.

* From our senses we only know our mental representations of the external thing and not the true nature of the thing itself - but we do at least know that something external induced an internal representation. 

For me this is the main hallmark of a mind and thought. The "mind" is the collection of these thoughts which are all of this same basic type. If you don't have this inner, unphysical awareness, you're a mindless zombie or an abacus. But if you do have it, you still might not be what we mean by a fully conscious being. Awareness is necessary but not sufficient for consciousness, but necessary and sufficient for a mind.


Intelligence : This I see as the capacity to solve problems. That is, processing and integrating information, forming a reasoned judgement about how things work and drawing conclusions appropriately. This need not be a rational process at all; that's much more sophisticated. Pavlovian responses are a good example of basic intelligence. A pure stimulus-response (smelling food -> go towards food -> eat the food) is not intelligence, but responding to something only indirectly related requires some level of reasoning (hearing a bell -> go to the place food is dispensed -> eat the food). You can't do this without some sort of very basic learning.

The difference between instinct and intelligence isn't always clear. Some key behaviours can be purely instinctual but then applied using genuine intelligence to different circumstances. And computers, in my view, can count as intelligent but they're not at all likely to be conscious. But in the main they have only the most rudimentary form of intelligence, further blurring the line with instinct : they can carry out highly sophisticated calculations, but only in a purely mechanical, instinct-like way. They don't form any chains of reasoning by themselves. Chatbots are increasingly able to overcome this, but I hasten to add that intelligence does not automatically imply that any of the other conditions I'm proposing are satisfied.

Intelligence, as with some of the other characteristics, surely requires memory. I'm hesitant to include memory as its own parameter because, of the two, intelligence seems the more crucial to consciousness : an automaton could regurgitate information, but intelligence requires some form of thought (if not necessarily the conscious kind). Memory is thus included here only implicitly.


Agency : Some capacity for decision-making is essential. It doesn't mean a conscious being must actually be capable of carrying out its decisions, only that it's capable of making them. It must be able to determine or act towards a goal, however crudely. 

Note that I regard this as closely related to, yet far removed from, intentionality. I don't think any artificial intelligence at present comes close to having any sort of intentions whatsoever - none of them have even the merest inkling of any sort of "desire" at all. For that I think you require also this inner awareness, the capacity to form internal imaginary constructs. As it stands, programs can be said to have "agency" only in the very loose, very basic sense that they can make decisions. They can't really act on their own volition because they don't have any. They can have goals, but they can't as yet analyse those goals and alter them, certainly not based on any deeper objectives.

I should note a couple of other points. First, by "desire" I don't mean it in the emotional sense. I don't think emotion is necessary for consciousness. There isn't a good word meaning "desire" that isn't also emotive, so we're just going to have to live with that.

Second, I don't propose to define free will here; I personally do think this is a thing, but that's a whole other gargantuan kettle of fish. More to the point is that a conscious being (which must have inner awareness) will tend by default to believe it's acting on its own will. It will probably, I think, have true intentionality, but let's limit this requirement to mere agency to play it safe. This property doesn't require intelligence, but could be limited to just choosing between options or deciding a numerical value. The key point is that it makes decisions. Of course, when coupled with intelligence, this becomes very much more powerful.


Sentience : It's an interesting question as to whether this condition can exist independently of all the others. I lean towards no. A camera receives light, but by itself it can't be said to be "sensing" anything - something must occur as a result of its input for that to be the case. This could be merely changing the inner awareness, or causing a decision to be made or the information to be analysed. But it can't do absolutely nothing at all, or nothing has really been sensed. So while sentience can be independent of any particular one of the other conditions, it can't exist without at least one of them. You can't have something which is just sentient.

Should we take this capability to refer only to the external world, or should we allow internal senses as well ? I lean towards the former. Bearing in mind that I've said consciousness is just one particular sort of a more general variety of mind, I'd say a person who is totally disconnected from the external world is still, crucially, thinking; but, like a dreamer, we wouldn't say they are conscious.

A more general point is that a conscious being requires something to think about. Now a sentient being automatically has this, but sensory perception is not the only route by which this can happen. Computers without external sensors can still have information to process, and I want to allow for the possibility of a conscious (or at least mentally aware) computer, otherwise there's a risk of defining valid options out of existence. So just as memory is implicit for intelligence, let's let "access to information" be implicit for sentience.


Self-awareness : Since we allow the minimum condition for sentience to be sensing the external world, self-awareness here means being able to distinguish the internal and external realities. In this definition, a conscious being cannot help but have an internal reality - that is indispensable to the whole concept of a mind. But to be conscious they must be able to sense the external reality as well. They must further be able to distinguish the two, otherwise they cannot have any real agency within their environment : they would be living in a dreamworld, or more accurately a sort of augmented reality which blurs the lines between real (external) and imaginary (internal).

Bearing in mind again that dreaming is here defined to be a kind of thinking, and consciousness another sort, it seems essential that conscious creatures must be at least partly able to distinguish the two. And again, since I allow everything to be on a spectrum, it's not necessary that they are fully capable of making this distinction at all times (hardly anyone can do that). It's enough that they can do it at all.



So that's my minimally-conscious entity then. It has an inner life, can sense the external world and distinguish it from its imagination, make judgements about information, and has goals. It needn't do any of these very well, but the more it does them, the more conscious it can be said to be. A dreaming person thinks, but is not conscious; they may think they have agency, but are unable to make decisions with respect to the real world. Sleepwalkers blur this distinction, and that's perfectly fine since this definition allows for different degrees and types of consciousness.

Recall that I've decided that memory and access to information, and possibly emotions, are only implicit requirements for consciousness. A thinking being is probably going to require these in general, but a being which is specifically conscious rather needs the higher-level attributes of intelligence and sentience (and possibly intentionality).


We can explore this further by considering different possible configurations. Taking some inspiration from the "moral matrix" concept, I envisage these characteristics as a series of sliders. Unfortunately there are rather a lot of possible combinations... if we reduce them to binary on/off conditions, there are 2^5 = 32 possibilities. Allow an intermediate state for each and that's 3^5 = 243 options.

Well, the binary cases aren't so bad. Let's simplify the criteria so that :

  • Inner awareness = mindless if none, mindful if present
  • Intelligence = stupid if none, clever if any
  • Agency = inert if none, intentful if any
  • Sentience = blind if none, sentient if any
  • Self-awareness = dreamlike if none, awake if any

And allowing all options to have memory and access to information in general, as implicitly required.

Using this, we can print out all 32 possibilities and explore the various combinations of mental states, just for funzies (a short script for doing so follows below). Let's group them according to the number of conditions satisfied, and sub-group them according to whether they have inner awareness (a true mind, however limited) or not.

Yes, really, this is how I spend my spare time.
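
For the curious, here's a minimal Python sketch of that enumeration. The slider labels are the ones defined above; the grouping (by number of conditions met, mindful cases listed before mindless ones) is just one convenient ordering, not anything canonical :

from itertools import product

# The five characteristics reduced to binary sliders, using the labels
# defined above : (absent, present) for each.
SLIDERS = [
    ("Mindless", "Mindful"),    # inner awareness
    ("Stupid", "Clever"),       # intelligence
    ("Inert", "Intentful"),     # agency
    ("Blind", "Sentient"),      # sentience
    ("Dreamlike", "Awake"),     # self-awareness
]

# Enumerate all 2^5 = 32 on/off combinations, group them by the number
# of conditions met, and list the mindful cases before the mindless ones.
for n_met in range(6):
    print(f"\n{n_met} condition(s) met")
    group = [c for c in product((0, 1), repeat=5) if sum(c) == n_met]
    for combo in sorted(group, key=lambda c: -c[0]):
        print(" ".join(labels[on] for labels, on in zip(SLIDERS, combo)))

The 243-option version with an intermediate state is the same trick with three labels per slider and product((0, 1, 2), repeat=5).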


0 conditions met
Mindless Stupid Inert Blind Dreamlike : The trivial case of being dead. Or a rock.


1 condition met
Mindful
Mindful Stupid Inert Blind Dreamlike : Willy Wonka, living in a world of pure imagination.

Mindless
Mindless Clever Inert Blind Dreamlike : Calculators.
Mindless Stupid Intentful Blind Dreamlike : Angry switches.
Mindless Stupid Inert Sentient Dreamlike
Mindless Stupid Inert Blind Awake 


2 conditions met
Mindful
Mindful Clever Inert Blind Dreamlike : Sleepy scientists.
Mindful Stupid Intentful Blind Dreamlike : Brexit voters.
Mindful Stupid Inert Sentient Dreamlike : Drugged Brexit voters.
Mindful Stupid Inert Blind Awake

Mindless
Mindless Clever Intentful Blind Dreamlike : A decent chatbot.
Mindless Clever Inert Sentient Dreamlike : Useful robots.
Mindless Clever Inert Blind Awake
Mindless Stupid Intentful Sentient Dreamlike : Dangerous robots.
Mindless Stupid Intentful Blind Awake
Mindless Stupid Inert Sentient Awake


3 conditions met
Mindful
Mindful Stupid Inert Sentient Awake : Lazy idiots.
Mindful Stupid Intentful Blind Awake
Mindful Stupid Intentful Sentient Dreamlike : Hallucinations.
Mindful Clever Inert Blind Awake
Mindful Clever Inert Sentient Dreamlike : Beginnings of true AI.
Mindful Clever Intentful Blind Dreamlike : Beginnings of truly dangerous AI.

Mindless
Mindless Stupid Intentful Sentient Awake : A really boring but dangerous robot.
Mindless Clever Inert Sentient Awake
Mindless Clever Intentful Blind Awake : A hallucinating robot.
Mindless Clever Intentful Sentient Dreamlike : A different hallucinating robot.


4 conditions met
Mindful
Mindful Stupid Intentful Sentient Awake : About half the population.
Mindful Clever Inert Sentient Awake : Philosophers.
Mindful Clever Intentful Blind Awake 
Mindful Clever Intentful Sentient Dreamlike : Angry philosophers on drugs.

Mindless
Mindless Clever Intentful Sentient Awake : A potentially superb AI.

5 conditions met
Mindful Clever Intentful Sentient Awake
The trivial case of being a full conscious being, the most highly evolved entity possible, i.e. a radio astronomer.


This all reduces to 23 options, including the two extreme cases. It is, of course, no more than a rough draft. I've not allowed the cases of being awake without being sentient, but we could debate whether this is the best use of terms; clearly, you can know something is out there without really knowing what it is. And obviously the descriptions aren't meant to be taken seriously !

And... I'm not really happy that the characteristics don't have the same level of independence. I'm not thrilled with my definition of "agency" either. As I said, a rough draft.

Still, I think it's maybe at least provocative ? There are attempts to build machines to detect consciousness, which is very important for patients in a coma. According to the above, such patients may be truly conscious but unable to act, or they may still have mental states (which is more important for ethical considerations) but not consciousness proper, just as a dreamer. 

More philosophically problematic may be people who have aphantasia (no mental imagery) or no inner monologue. My assumption is that they do have some form of inner awareness, just not the most common ones we're used to. Or maybe they have them but they're at a lower level. Or maybe they don't, and they're the equivalent of biological robots. Which would be a bit scary, but at least it would explain Dominic Cummings.

Monday, 6 February 2023

How to write a popular book

Having read innumerable popular history and science books, I want to consolidate a few thoughts on some all-too-common mistakes. I would not presume to offer anything as grand as a guide, just some observations on things that happen (or don't happen) too regularly to ignore. 


1) Bibliographic references go at the back, footnotes go at the bottom of the page

This is by far the most common and most irritating structural problem of the books I read. Yes, please, do give me references - sometimes even a lay reader like me does like to check the sources. And absolutely do give me footnotes as well. They're both generally a good thing.

But combining references and footnotes and putting them all at the back is daft. Why would I want to keep flicking back and forth between different sections, doing a really tedious lucky dip to see if I get some interesting extra text or just yet another reference ? I mean, IBID ? Again ? YOU MADE ME TURN 356 PAGES JUST FOR THAT ?

Look, it's perfectly simple. Supplementary information that's immediately relevant to the main text : same page please. References : at the back.

Why these points are so often overlooked is something that confuses the heck out of me, because it just seems so bleeding obvious. Yet probably about 90% of authors don't do this. And I can't understand why nobody else seems to want to complain about it.


2) Put maps in relevant sections, not all at once

Likewise, maps are great. But if they're all together (usually at the front) despite referencing different areas or times, they're not much help. And this is silly, because by and large different chapters tend to have a different geographic or chronological focus, so just put each map at the start of the most relevant chapter. I mean, it's fine if occasionally you need to say "see map on page N" which happens to be in a different chapter. I don't mind turning to a different section once in a while; I don't have some weird allergy to looking up page numbers. Just making me do this constantly is very annoying because it breaks the flow of the text.


3) Reference all figures and colour plates

A final structural point : using colour plates is nice, but just because they're usually (by necessity) in a separate section doesn't mean the main text has to ignore them completely. Just give them numbers and then refer to them appropriately. Hardly anyone does this, which is just irritating : at what point should I stop to look at the pictures ? Are they just to be perused at random ? Honestly, this takes next to no effort and helps the flow of the narrative considerably.


4) Be chronological, not biographical

This one is a bit more subjective because there certainly are times when it's beneficial to do things out of order. But attempts at writing biographies of historical figures thematically rather than chronologically tend to be pretty awful, in my experience. First and foremost, tell me what the person did. This is far more comprehensible if you tell me what things they did in the order that they did them. If you want to extract some common character traits, by all means do that, but trying to do this by itself is pretty much guaranteed to fail.

Likewise, in popular science it's helpful to sketch a history of how we arrived at current findings, and here too being chronological is a perfectly sensible guideline if nothing else. Above all, though, try and tell me a story. Tell me how one thing led to another, or how one thing may or may not imply another. You don't have to be rigidly linear and you certainly don't have to make everything artificially certain. But give me a thread to follow and explore.


5) Make a point

Or at least, have a point. I personally enjoy it very much when an author has a clear axe to grind, even if it's not very likely I'm going to agree with them : at least I know what they're up to and feel equipped to mentally argue back, to view their arguments more critically. Trying to draw out the grand underlying trends and causal effects has a particular appeal to me - I want to know why things happen, not just how. As long as your attempt isn't hopelessly stupid, I'll probably enjoy the reasoning process regardless of the end result.

I don't mind either way if an author wants to be explicit or implicit about this. Sneakily bending the narrative so the reader goes away with a certain impression, without ramming your own personal pet theory down my throat, is fine; on the other hand, being up-front about why you're writing the book is usually a good idea. Of course, one can set this out at the start and/or end without having to labour the point continuously in every section.

But most important of all is that you have some point to make. Doesn't have to be anything deep or insightful, you could just want to tell a good tale. Or you might want to support the established view in defiance of popular alternatives, or indeed do the exact opposite. It's all good. What you absolutely cannot do is have no sort of point at all, otherwise you'll just get a big mess. And if you do have an axe to grind, feel free to grind away... but don't pretend you're not doing it. Don't pretend to be writing something you're not. If you want to wax lyrical on the sins of the decadent West, that is fine - but don't tell me you're writing a history of China, because it'll be obvious what you're up to and you'll come across as biased and annoying.


There we go, that should sort everything out.
