Sister blog of Physicists of the Caribbean in which I babble about non-astronomy stuff, because everyone needs a hobby

Thursday, 19 December 2019

I reckon it's turtles all the way down

One of those, "what's it all about, I mean really, when you get right down to it ?" articles.
Do electrons feel forces from their own electromagnetic fields? Either answer leads to trouble. First, suppose the answer is yes. The electromagnetic field of an electron gets stronger as you get closer to the electron. If you think of the electron as a little ball, each piece of that ball would feel an enormous outward force from the very strong electromagnetic field at its location. It should explode. Henri PoincarĂ© conjectured that there might be some other forces resisting this self-repulsion and holding the electron together – now called ‘PoincarĂ© stresses’. If you think of the electron as point-size, the problem is worse. The field and the force would be infinite at the electron’s location.
But would that mean the particle itself can feel anything ? If it's infinitely small (along any dimension), how can it be said to have any substance that can be affected ? Then again, interactions between particles of opposite charge would involve infinite energy, and that's not nice. And if it's not infinitely small, what the heck is it made of ? Perhaps instead it has extension, but it's an indivisible substance that simply cannot be ripped apart further. If so, how is it possible that it can be so readily converted into energy in the right circumstances ? That matter-energy duality seems weirder by far than any philosophical mind-body dualism...
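(For what it's worth, the back-of-the-envelope version of the problem - which, as far as I remember my classical electrodynamics, is the standard textbook estimate - is that the energy stored in the electric field outside a charged sphere of radius r is

\[ U = \frac{e^2}{8\pi\epsilon_0 r} , \]

which blows up as r goes to zero. Make the electron truly point-like and its field energy is infinite, which is the root of all the fuss.)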

Nope, I reckon it's all magical turtles. Solves the whole thing at a stroke. But for the sake of it, let's continue.
So, let us instead suppose that the electron does not feel the field it produces. The problem here is that there is evidence that the electron is aware of its field. Charged particles such as electrons produce electromagnetic waves when they are accelerated. That takes energy. Indeed, we can observe electrons lose energy as they produce these waves. If electrons interact with their own fields, we can correctly calculate the rate at which they lose energy by examining the way these waves interact with the electron as they pass through it. But, if electrons don’t interact with their own fields, then it’s not clear why they would lose any energy at all.
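(The rate in question is, if I recall correctly, just the Larmor formula : a charge q undergoing acceleration a radiates power

\[ P = \frac{q^2 a^2}{6\pi\epsilon_0 c^3} , \]

and any story about self-interaction, or the lack of it, has to reproduce that.)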
Faraday asked: ‘What real reason, then, is there for supposing that there is any such nucleus in a particle of matter?’ That is, why should we think that there is a hard core at the centre of a particle’s electromagnetic field? In modern terms, Faraday has been interpreted as proposing that we eliminate the particles and keep only the electromagnetic fields.
 But what actually is a field ? What gives it "substance", for lack of a better word ? My head hurts.
In a 1938 paper, Dirac proposed a modification to the laws of electrodynamics, changing the way that fields exert forces on particles. For a point-size particle, his new equation eliminates any interaction of the particle with its own electromagnetic field, and includes a new term to mimic the kind of self-interaction that we actually observe – the kind that causes a particle to lose energy when it makes waves. However, the equation that Dirac proposed has some strange features. One oddity is ‘pre-acceleration’: a particle that you’re going to hit with a force might start moving before you hit it.
In the 1930s and ’40s, a different strategy was pursued by four notable physicists [who] proposed ways of changing the laws that specify how particles produce electromagnetic fields so that the fields produced by point particles never become infinitely strong. When you change these laws, you change a lot. As Hubert explained in his presentation, we don’t fully understand the consequences of these changes. In particular, it is not yet clear whether the Born-Infeld and Bopp-Podolsky proposals will be able to solve the self-interaction problem and make accurate predictions about the motions of particles.
So on the smallest of scales it looks like things are inevitably weird. I think I'll stick to the largest scales, where everything is well-behaved and there are no controversies of any kind.
Wheeler and Feynman – like Ritz – do away with the electromagnetic field and keep only the particles. As I mentioned earlier, Ritz’s field-free theory has particles interact across gaps in space and time so that each particle responds to the past states of the others. In the Wheeler-Feynman theory, particles respond to both the past and the future behaviour of one another. As in a time-travel movie, the future can influence the past. That’s a wild idea, but it seems to work. In appropriate circumstances, this revision yields accurate predictions about the motions of particles without any true self-interaction.
In the action-at-a-distance theories put forward by these physicists, you can’t tell what a particle will do at a particular moment just by looking at what the other particles are doing at that moment. You also need to look at what they were doing in the past (and perhaps what they will do in the future). Lazarovici argued that the electromagnetic field is merely a useful mathematical bookkeeping device that encodes this information about the past and future, not a real thing out there in the world.
If you think of electrons as particles, you’ll have to think of photons differently – either eliminating them (Lazarovici’s story) or treating them as a field (Hubert’s story). On the other hand, if you think of electrons as a field, then you can think of photons the same way. I see this consistency as a virtue of the all-fields picture.
But how does this help with the double-slit experiment ? It's all magical turtles, I'm sure. At least they make sense. How dare the Universe be so bloody complicated...

Is everything made of particles, fields or both combined? - Charles Sebens | Aeon Essays

Long before philosophy and physics split into separate career paths, the natural philosophers of Ancient Greece speculated about the basic components from which all else is made. Plato entertained a theory on which everything on Earth is made from four fundamental particles.

Wednesday, 18 December 2019

Take back control, or at least try to

Knowledge is truly power if one is interested in moving towards free will and away from unconscious choice. I’ve given up the fantasy of making it all the way to pure free will, but I can certainly move myself closer to it along the spectrum. “Making the unconscious conscious” has been an area of great interest for me for many years, and I consider it to be incredibly liberating each time a new awareness can be brought into the light of consciousness. “Fate” can be transformed into identifiable behaviour patterns that, once recognised, can be embraced or abandoned at will.
Surely not though. If that were true, smoking would have disappeared overnight. Self-knowledge can be imperfect, but even if you do know the reason why you did something, that doesn't mean you'll have more control over it in the future. Knowing things and desiring things are not the same thing, and habitual behaviour can override all conscious knowledge; you might be able to act against your desires, but hardly "at will".

Of course, that's not to say that self-knowledge is unimportant or that being consciously aware of why you're doing something doesn't bring you an element of self-control : it most certainly does, as in the example below.
An important part of my progress resulted with the discovery that it’s possible to partially immunise myself to the dopamine-hijacking methods employed by advertisers and social media.  In some cases,  I’ve determined that I’m simply unable to resist, that my wiring is fixed and ‘they’ are simply too adept at juicing the pathways, and so my best defence is to limit my exposure.  Similarly, I’m currently wrestling with admitting my biological limitations and giving up my smartphone in favour of reverting to a much more basic flip phone. 
Huge benefits have also resulted from understanding the ways in which emotionally manipulative language (a.k.a. “propaganda” or most of what passes for mainstream news) operates.  Once you’re able to spot it, you’ll see it everywhere and it will no longer sway you. In fact, it might even elicit the opposite reaction. 
Fair enough that manipulation is an area where knowing the specific tactics can be enlightening and really does give you control you wouldn't otherwise have. It's less obvious how much of a difference this really makes. I distinctly recall one of the best school lessons of all being an analysis of adverts, but I cannot honestly say if I already hated adverts by this point or not. Certainly I was susceptible to them as a child - who hasn't persuaded their parents to buy the cereal that comes with the coolest free toy ?

Now I would hope that a lot of this - that advertisers are trying to sell you stuff and don't mind embellishing the truth or telling outright lies - becomes pretty obvious as you get older, even if the specific techniques need to be taught. Unfortunately this doesn't solve the "fire is hot" problem : the propensity for politicians to tell absolutely brazen lies and get away with it. So are their gormless devotees really taken in, or is it more subtle ? Do they know the lies but don't see them as relevant to a politician's fundamental honesty ? Can they simply not understand that someone telling obvious lies is hardly likely to have anyone else's best interests in mind ? "It's very hard to trust a man who wants to borrow a picklock", as the old saying goes...

The thing is, knowing what's going on doesn't always help. Sometimes it will only make you rationalise your actions by some bullshit excuse that you probably even really believe. Knowing why I like chocolate doesn't stop me liking chocolate, even though I also know that it's not the healthiest of snacks. And sometimes it can be extremely dangerous to pretend that lack of self-knowledge made you a villain :
As a topical reference, the current Epstein sexual predator case just reminds us that many men often live out their lives thoroughly subject to the biology of sexual hormones and the drive to reproduce, with about as much free will as a rutting elk during mating season.
I don't think I could even begin to properly analyse what's going on there and I'd be a bit skeptical of anyone who did. Suffice to say that I find it fantastically unlikely that anyone becomes a sexual predator just because they're uncontrollably horny. It's hardly as though they're unaware of the immorality of what they're doing either - one suggestion is that they are genuinely unable to empathise with their victims. Anyway, choosing the worst sort of criminal as an example of how we sometimes lack control was probably not the most sensible move here; it brings in moral dimensions that are best tackled only once we've decided on the degree of free will people have. Otherwise we risk biasing the judgement.

I do strongly agree with the sentiment though :
I now more broadly interpret “unconscious” to mean anything that you aren’t aware of that’s causing you to respond with certain actions, or experience things in a certain way.  It could be something from your past long buried (nurture) or it could be hard-wired into your neurochemical response set (nature). Similarly, as long as it’s operating undetected by your conscious mind, yet resulting in certain responses, I’m calling that the “unconscious”, too. Simply knowing that such scripts are running in your brain is truly life changing once you become aware of them.
You can only be said to be making a free choice if you're conscious of it. An alternative philosophical definition of "free" to mean "unhindered" is useful in some contexts, but not when it comes to the more fundamental aspects. And though there are indeed some aspects of our nature we can't control, that doesn't mean we don't have any free will at all - we're just constrained by physics, biology, and information.

Trying to be more self-aware is a perfectly worthy goal - Epictetus described ignorance as a form of slavery, since you can't make a full, free choice if you don't have complete information. It's just that there are a lot of important caveats to this, most notably that knowledge does not automatically equate with desire, let alone behaviour.

Do You Truly Have Free Will?

Authored by Chris Martenson via PeakProsperity.com, How we're constantly at war with our biological programming... Until you make the unconscious conscious, it will direct your life and you will call it fate. ~ Carl Jung I love that Jung quote. I've used it generously in conversation, seminars and writings throughout the years.

Tuesday, 17 December 2019

Consciousness : maybe it's not like a rainbow but a horse

I'm told that philosophy doesn't go in for definitions much these days. But there are two major terms which it doesn't seem to have nailed down properly : free will and consciousness. The first is something about making choices, while the second has something to do with being aware of what's going on. Try to pin them down more precisely than that and things get tricky.

I personally lean heavily towards the everyday, common-sense view that consciousness is real, non-physical, and gives us free will to make decisions. I have absolutely no clue how this works, but while I find the idea that consciousness could be a rainbow-like illusion (being real but having no direct influence over anything) very interesting, in the end I simply don't believe it.

There are lots of things in this essay I strongly agree with. Especially :
Computer icons, cursors and so forth are not illusions, they are causally efficacious representations of underlying machine-language processes. It would be too tedious for most users to think in terms of machine-language, and too slow to interact with the computer by that means. That’s why programmers gave us icons and cursors. But these are causally connected with the underlying machine code, which is why we can actually make things happen in a computer. If they were illusions, nothing would happen – they would be causally inert epiphenomena. 
As a metaphor for mental images, I think this is perfect. Mental constructs are non-physical, yet distinctly real : a horse is a collection of atoms, but that makes it no less true that a certain collection of atoms is a horse. A horse is surely more than a convenient label for a bunch of atoms, it is a very real thing. So mental images as ways of presenting what the brain is doing at a much lower level make a lot of sense to me, and I agree that the label of "illusion" is completely wrong; mental images are no more illusions than horses are illusions. They are both descriptions of something real.
It is certainly true, as the illusionists maintain, that we do not have access to our own neural mechanisms. But we don’t need to, just like a computer user doesn’t need to know machine-language – and, in fact, is far better off for that. This does not at all imply that we are somehow mistaken about our thoughts and feelings. No more than I as a computer user might be mistaken about which ‘folder’ contains the ‘file’ on which I have been ‘writing’ this essay.
When illusionists argue that what we experience as qualia are ‘nothing like’ our actual internal mental mechanisms, they are, in a sense, right. But they also seem to forget that everything we perceive about the outside world is a representation and not the thing-in-itself. 
Exactly. It is nothing but folly to suggest we can define anything except via our perception of it. The problem I have here though is just how far the images-on-a-screen analogy helps with consciousness itself.
Or take a more mundane example. Would you call the wheel of your car an illusion? This illusion talk can be triggered by what I think of as the reductionist temptation, the notion that lower levels of description – in this case, the neurobiological one – are somehow more true, or even the only true ones. The fallaciousness of this kind of thinking can be brought to light in a couple of ways. First of all, and most obviously, why stop at the neurobiological level? Why not say that neurons are themselves illusions, since they are actually made of molecules? But wait! Molecules too are illusions, as they are really made of quarks. Or strings. Or fields. Or whatever the latest from fundamental physics says.
No argument there, but the computer analogy raises a quite different question. As far as explaining our awareness in terms of mental images goes, it's fine. But a computer presents images to someone who then gets to make judgements based on them; the computer screen (or indeed any image) need not be conscious or aware itself - at most, it's something you're aware of. What's actually conscious is the person looking at the computer, making judgements and experiencing.

It's obvious that mental images are not presented to a little person inside us making their own judgements, because that leads to an infinite nested series of mini-mes which solves nothing. So how exactly should I understand the analogy ? Is consciousness to be just a label for a process, as "horse" is a label for a particular configuration of atoms ? That doesn't sit right. A horse is clearly real, and more solid than a rainbow-like illusion. Yet, in some sense, "horse" is just a description. Perhaps consciousness, therefore, is both like a horse and a rainbow, i.e. consciousness is a unicorn. Since unicorns are magical, that would easily solve the whole silly problem.

Or in other words : I don't know. I don't think I agree with the author that we can say "bye bye to any form of dualism". We can say that our mental images are not illusions and I like the analogy very much as far as that goes. But I don't think it helps at all with the "spooky" decision-making aspect of awareness; the fundamental difference between non-physical mental concepts and hard reality remains as awkward as ever.

Consciousness is neither a spooky mystery nor an illusory belief - Massimo Pigliucci | Aeon Essays

These days it is highly fashionable to label consciousness an 'illusion'. This in turn fosters the impression, especially among the general public, that the way we normally think of our mental life has been shown by science to be drastically mistaken.

Proportional representation : yes please, maybe

Recently I decided to design a new political system based on a truth-finding system that's proven to actually work : science. Here I'll give a brief summary for those who don't want to read the full thing, and some afterthoughts.


How science works

The essence of the scientific method is that it emphasises examination of the evidence and the arguments presented, while downplaying the importance of who to trust or how many people already believe a counter-argument. It uses extended cognition to carefully examine as many relevant data points as possible in a multitude of different ways, which also helps keep things relatively calm and unemotional. Individual scientists are more-or-less free to say whatever they like in most venues, but anonymous peer review means that's not true when it comes to publications. Anonymity helps encourage skeptical inquiry, promoting competition without undue hostility, while editorial oversight ensures that both author and reviewer play by the rules. Dogmatic groupthink is avoided by the need to publish interesting, unexpected results, while peer review acts to restrict people from publishing whatever the hell nonsense they dreamed up while smoking something illegal.

It's a clever balancing act. It doesn't always work on individual cases, but the system does very well indeed on a large scale. The system both selects people who actually have some degree of intelligence in the first place, and, more importantly, provides them with tools to reach better conclusions than they might as mere individuals.

A key aspect of research is that it's always changing - mistakes occur and are tolerated at almost all levels, including factual errors, misinterpretations, and methodological shortcomings. All are subject to continuous improvement, though a surprising number of people seem to think "mistake" is the same thing as "being an incompetent twit". Hint : it isn't. The only kinds of mistakes that are very much not tolerated are those of deliberate deception. In this way, the system has a peculiar kind of pseudo-stability, always being close to the most rational conclusion currently possible (I call this the efficient consensus hypothesis). It rarely if ever reaches "the" truth, if there is such a thing; it just does the best it can possibly do. Conclusions are thoroughly tested by the community at large, making it extremely difficult to fool. Though inherently unstable, it would seem to be remarkably robust.


Lessons for politicians

My suggestion to improve politics was to take this system and transplant it wholesale into the political arena. Have party allegiance represent a broad set of common values, but almost entirely castrate the capacity for party machinery to dictate what politicians say publicly - let them be free to express their individual views, otherwise no-one will trust them (the scientific consensus emerges because the system not only tolerates but actively encourages diverse views - it's not much use if you pre-select everyone who already agrees with you). Have their proposed laws be subject to a process analogous to peer review, fostering cross-party co-operation and anonymous scrutiny. Allow MPs of all parties to propose legislation and totally eliminate the government's ability to veto it - everything proposed must be voted on, though sometimes only by selected commissions (chosen by an independent, external council) and not the whole House. Fund the whole political system publicly, forbidding donations of any kind. Impose much harsher sanctions for those found to be guilty of deceit, but encourage trials of proposed laws whenever possible : that is, tolerate mistakes but not willful deception.

Now I have, in this little exercise, almost completely ignored the role of money. This is a deliberate choice, as "how to allocate money" and "what to do with the money available" are not necessarily the same type of problem. The latter can and should be done by specialists, whereas the former needs generalists to judge which department needs what. I do not propose any way to deal with that. Nor do I offer any solution to the problem of using pointless ministerial positions as loyalty bribes to hold MPs hostage to party leaders; though I suggest that either the number of ministers be cut drastically, or every single MP be given some official unique position. And perhaps most importantly, I haven't tackled the thorny issue of local government, nor did I say much about the electoral system.

My point was that we already have an established procedure for getting to the most rational viewpoint possible. This works well not only in compiling encyclopedias, but even in the messy world of front-line research. The system makes a strength of diversity of views, rather than trying to stamp them out, and everyone gets a fair say and genuine influence, rather than the winner-take-all approach of contemporary politics. It's not especially necessary that political reform follows the precise structure I describe, only that the system make use of the general principles of consensus through diversity of genuinely representative voices.


Afterthoughts

I've always been in favour of emphasising the "democratic" aspect of representative democracy. If I vote for a party based on their policies, I expect those policies to be implemented pretty much as stated. Having them compromised by negotiation with others would seem to defeat the purpose of elections. Of course, we do need the "representative" aspect as well, because there are plenty of fine details for which I'm more than happy to concede that expert examination is required, not to mention a myriad of points that few people are at all interested in, but the main policies ought to be the voter's choice.

And yet....

The scientific system is neither representative nor democratic, but inasmuch as it resembles either, it leans far more heavily towards the "representative" aspect. Insofar as we decide what's correct, no-one makes a mandatory decision imposed on the others - the consensus view just emerges after prolonged scrutiny. The consensus is not all-encompassing either, but covers only those parts that the majority can agree on; there are plenty of problems for which there are about as many solutions as there are researchers tackling them. So the system of getting to the most sensible decision would seem inevitably at odds with the need to give people a choice, meaning that in such a system, voters would get less say in policy than they do currently.

This leads me to the much more popular idea of proportional representation. Here's the current UK Parliament :

I've made the Liberal Democrats pink just so they're easier to distinguish from the Scottish National Party.

And here's what it would look like (maybe) with proportional representation :

Quite different. The Tories remain the largest party, but would not have a majority. The SNP would shrink drastically and the Lib Dems expand considerably. In most UK elections, it would seem that a hung Parliament would result and we'd get coalition governments every time. This is good news for anyone wanting a more consensus-based approach, but bad news for anyone wanting more direct democratic choice in decision-making.

I'm not yet ready to commit to saying that the system should be a simple proportionality, but I am more than happy to say, "the current one ain't right". It ought to at least be roughly proportional, if not exactly, but it isn't even that. Overall, 52% voted for pro-Remain parties, yet the Tories have a thumping majority. In Scotland, the SNP won 45% of the votes and 81% of the seats, whereas Labour had 19% of the votes but barely 2% of the seats. This is getting silly. Is it really a democratic system when national issues are decided by local elections, and the results are so starkly at odds with how people actually voted ? Arguably not, meaning that the whole advantage of giving people a more direct say in decisions is negated. If a party doesn't need a majority of votes to win a majority of seats, then most people aren't getting any meaningful say at all. What's the point of allowing a plurality of voters such extreme control over the majority ? In such a situation, proportional representation starts to look a lot more attractive.
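
For anyone curious about the mechanics, here's a minimal sketch of the D'Hondt highest-averages method - the one used, I believe, for Britain's European Parliament elections. The parties and vote counts are invented purely for illustration :

def dhondt(votes, total_seats):
    """Allocate seats with the D'Hondt highest-averages method."""
    seats = {party: 0 for party in votes}
    for _ in range(total_seats):
        # Each party's quotient is votes / (seats already won + 1);
        # the next seat goes to whichever party has the largest quotient.
        winner = max(votes, key=lambda party: votes[party] / (seats[party] + 1))
        seats[winner] += 1
    return seats

# Entirely made-up numbers, purely to show the shape of the result :
print(dhondt({"Blue": 45000, "Red": 32000, "Yellow": 12000, "Green": 11000}, 10))

Even this slightly favours the biggest party - "proportional" is never perfectly proportional - but it's a far cry from 45% of the votes buying 81% of the seats.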

I should add that until very recently there was a sort-of temporal balance to the whole thing. There was a fair-ish chance that any damage done by one party could be undone by the next, keeping the system from lurching too far in any direction : even if a party lost, it could at least claim to be mitigating the worst excesses of the others by dragging them in their own direction in order to persuade voters. The successes of the centrist incarnations of Labour under Blair and the Conservatives under Cameron are both testament to this.

But a new and altogether grimmer political era has dawned, one with distinctly authoritarian tendencies. The Opposition have been out of power for a decade and are likely to be out for another five years or more. So that aspect of the system too appears to be broken. The argument for PR, or at least a system that's more proportional than the current, is strengthened further.

Yet there are major caveats. First, there is absolutely no guarantee that people would vote the same way under a PR system, so the actual resulting Parliament could be completely different. Secondly, while hung Parliaments force the necessity of compromise, more is needed to actually facilitate how this happens in a sensible way : you can't just bang a group of radically polarised politicians together and lock them in a room until they agree, unless you just want to scrape their mangled remains off the walls. Thirdly, PR doesn't prohibit any party from winning a majority, though it might make it harder. More is needed to ensure that democracy is prevented from falling into a tyranny of the majority.

(As an interesting side note : the SNP did incredibly well under FPTP, whereas the Brexit party's share went up and down like a yo-yo, finally collapsing into nothing. The Lib Dems have seen similar trials and tribulations : left, right, or centre, how you campaign under different systems really does matter.)

So PR, or something similar, might be necessary but not sufficient. Politicians need to learn to work together, to both compete and collaborate without falling into petty tribal disputes or pointless unanimity of opinions. This is not asking the impossible - far from it. We already have such a system, and it's used routinely. It's true that it's a lot less interesting to watch than the theatrical displays of politicians bragging about their metaphorical penis size that pass for modern political debate, but I, for one, am sick of living in interesting times. Yes, ideally, I would rather have more direct democracy - but this manifestly doesn't work. So if it's a choice between a representative, cooperative, consensus-based, functional system that limits voter choice but gives everyone some degree of influence, and a dysfunctional, unrepresentative, winner-takes-all approach which gives voters plenty of direct control, I have to say that overall I'd prefer the former. Policies that actually make sense but a bit less personal control ? Sounds good to me, thanks.

What the hell is it with Quora ?

As in, "the hell are these people thinking ?"

I answer questions on Quora on a regular-ish basis depending on time and interest. I try to give detailed but (unlike certain blog posts) not overly-long answers, and I restrict myself to subjects I'm at least vaguely qualified to tackle. Of the 69 answers I've given so far, only two have upvotes in the double figures. That's annoying, but I'm fine with that. That's just the regular, "people are ungrateful dicks" principle.

What I'm not so fine with is everything else. Like how Quora will leave me alone if I stop answering questions, but as soon as I dare answer something, for a solid week or more I'm inundated with a torrent of assorted nonsense. Partly this is from Quora itself : apparently my answer to an astronomy question needs a different qualification than calling myself an astronomer. Does Quora helpfully suggest what would be better ? No, it just says that it needs to be something else for a whole bunch of answers. Perhaps if I say "I love wearing pink frilly hats" as my qualification for "what is dark matter ?" people will finally start taking me seriously.

Then there's Quora's tendency to tell me that Joe Bloggs has requested my answer to a question. I seriously doubt that Joe Bloggs would do that, because he's never interacted with me before. I have 16 followers, but apparently there are teeming hordes just waiting to pester me the moment I deign to answer a question. That's barely more likely than the really quite astonishing number of hot Russian singles just desperate to meet me whenever I turn my adblocker off. What an astonishing coincidence.

To add insult to injury, Quora makes it completely unclear as to when and why certain questions get outrageous numbers of views while others lurk suspiciously in the depths. Sometimes I'm recommended questions within moments of their appearance whereas often they were asked months ago and the questioner has long since moved on to pastures new. And best not say anything about the dubious, "more followers = more upvotes = more authority" mentality of the site, as though truth could be measured like that. Too often does that result in gibberish being treated as gospel.

Worst of all, though, are the questions themselves. They say there are no stupid questions, but they haven't met Quora users. Look, I get that the people most in need of education are by definition the hardest to educate, but come on. It's not only that a huge fraction of questions could be given a highly competent answer from a two second Google search (did they even try ? I doubt it), it's that a lot of them have worked very, very hard at being stupid. Sometimes they confusingly lack context : "how must white cosmology be considered now ?" The heck is "white cosmology ?" Is that a racial thing or are you talking about noted cosmologist Simon White ? "Where can I see the Milky Way from Malaysia ?". I dunno, did you try looking UP, maybe, or are you asking about the darkest location ? "What are some secrets of astronomy ?" If I told you, it wouldn't be a secret anymore, now would it ?

Oh, but it gets worse. "Where can you see stars in Flagstaff ?" Okay, look, Malaysia is quite big, but Flagstaff is titchy. I've been there. You can see the stars from pretty much anywhere, it's got a frickin' observatory with an excellent public outreach department, the hell are you doing asking this shit on the web ? Just go for a goddamn walk. "Which are the most interesting facts about astronomers, and why is that ?" I dunno, our heights, breast size, tendency to procrastinate, our flair for acerbic sarcasm ? Depends what you find interesting, I guess. And perhaps best of all : "How many hot dogs (just pick a specific brand) would it take to form a star if an arbitrary amount suddenly popped into existence in space?"

AAAARGH.

These people make no sense. There's also a peculiar form of Quora-speak in how questions are phrased, especially a tendency to confuse "how" and "why", which is fine in everyday speech but highly misleading in text. At the same time, if anyone says, "why do scientists believe..." then they get an earful of undeserved crap about scientists not "believing" anything, as though "believe" wasn't being used in its perfectly ordinary capacity as a synonym for "think".

It's not as though there aren't some great questions and fantastic answers that would be difficult to find on the web. There are plenty of those. But there's even more absolute garbage, both on the part of the site and its users. It's a strange, strange place indeed.

This rant has accomplished nothing at all except for its intended purpose of making me feel slightly better.

Monday, 16 December 2019

You are what you eat

At least you are if you're a flatworm. An alternative title would be "how to train your flatworm", but that these little dudes can be trained is nowhere near the weirdest thing about them.
In an early experiment, McConnell trained the worms Ă  la Pavlov by pairing an electric shock with flashing lights. Eventually, the worms recoiled to the light alone.  In other experiments, he trained planaria to run through mazes. Eventually, after his retirement in 1988, McConnell faded from view, and his work was relegated to the sidebars of textbooks as a curious but cautionary tale. Many scientists simply assumed that invertebrates like planaria couldn’t be trained, making the dismissal of McConnell’s work easy.
Okay, it's already quite impressive that these guys have enough mental capacity to learn stuff, but maybe not all that surprising considering the claims made about plants. But flatworms do something much, much weirder :
Then something interesting happened when he cut the worms in half. The head of one half of the worm grew a tail and, understandably, retained the memory of its training. Surprisingly, however, the tail, which grew a head and a brain, also retained the memory of its training. If a headless worm can regrow a memory, then where is the memory stored, McConnell wondered. And, if a memory can regenerate, could he transfer it?  
...Planaria are cannibals, so McConnell merely had to blend trained worms and feed them to their untrained peers. (Planaria lack the acids and enzymes that would completely break down food, so he hoped that some RNA might be integrated into the consuming worms.) Shockingly, McConnell reported that cannibalizing trained worms induced learning in untrained planaria.
So flatworms can eat each other to gain their knowledge. Basically they're Sylar from Heroes, only smaller, squishier, and less weird-looking. Not to mention their phenomenal regeneration abilities... I expect Marvel will soon realise that the only answer to Aquaman is Planariaman, whose arms grow into new Planariamen who already know everything the first one knew.

This weird case of mental ingestion isn't limited to flatworms either. The article describes how injecting RNA from trained sea slugs induces learning in untrained sea slugs, and how butterflies can remember things from when they were caterpillars despite having their brains largely broken down during metamorphosis. It seems that the neural connections in the brain may be more important for the animal to make use of its memories than for the actual encoding. It would also be interesting to see if plant memories are inherited from cuttings...

I was also reading recently "Other Minds", an excellent little book about the intelligence of the octopus. The author describes the phenomenon of "split brains" (as does "You Are Not So Smart"), wherein something learned by one hemisphere of the brain is not necessarily automatically learned by the other. In some animals, the creature responds differently to the same thing it sees with different eyes - it's as though only half the animal remembers. This is normally not the case in people, but split brains can be medically induced, resulting in something very similar. It's not dissimilar to blindsight, that rare condition where the visual input is processed by the brain but not at the conscious level (which is, perhaps, not so strange as it first appears - after all, you don't remember everything consciously the whole time, but you can recollect things on command pretty well).

It opens up a bunch of questions, of course. In an episode of Star Trek : Deep Space Nine, a character asks, "what is a person but the sum of their memories ?" The answer would seem to be, "a hell of a lot, really". And is the flatworm aware of its ingested memories - inasmuch as flatworms are aware of anything - but not able to act on them, or does forging the neural connections play a role in incorporating the memories into its awareness ? Like how sometimes you may be fully aware of something but do the exact opposite out of habit, like looking for your keys in their usual spot even though you know you put them in the fridge last night because... well, whatever.

And what is it that actually happens when we think ? Why does it feel difficult to remember things sometimes, even though we're not consciously aware of what's happening ? We just sort of instruct our brains to dig up the data and eventually they do - little of this happens consciously, yet it definitely feels difficult somehow. The same goes for thinking in general : sometimes it's extremely hard, but for the life of me I couldn't describe the sensation at all.

Memories Can Be Injected and Survive Amputation and Metamorphosis - Facts So Romantic - Nautilus

The study of memory has always been one of the stranger outposts of science. In the 1950s, an unknown psychology professor at the University of Michigan named James McConnell made headlines - and eventually became something of a celebrity - with a series of experiments on freshwater flatworms called planaria.

Friday, 13 December 2019

Fire indeed hot


This blog covers quite a bit about human biases and stupidity, but the "fire indeed hot" problem is one I still haven't got a soddin' clue how to solve.

Many problems are very complicated. Sometimes, statistics are difficult to interpret. Often it can be hard to see if data supports one theory or another, and dealing with uncertainty and measurement errors is tricky stuff. There are a whole bunch of ways people act irrationally in the face of confusion and complexity, and a whole bunch of ways psychologists and sociologists have proposed for tackling this.

But then there are "fire indeed hot" problems, when the issue is staring you in the face and/or burning your legs off. I don't know how to persuade anyone that fire is hot if they won't believe it when their own hair is actually on fire. I'm not sure anyone does. All psychological theories seem to assume that there's at least an element of rationality at work, that when a certain sensory threshold is exceeded the brain will, unless under truly exceptional circumstances, concede that maybe stabbing oneself in the eye or trying to swallow a whole fire extinguisher wasn't such a good idea. Those kind of errors are supposed to be dealt with by the Darwin Awards, not persuasion.

An alternative and probably more common version of this is the "everyday Flat Earther" problem. This is the only slightly milder version where you can't actually see the answer directly, but the weight of evidence is so staggeringly large that the only ways of discounting it are a) to be so distrustful of everyone that you literally wouldn't believe them if they said that oxygen was safe to breathe; or b) to be very, very stupid. Again, it beggars belief that such people are able to wipe their own arse, but apparently most of them manage it.

My sneaking suspicion is that pretty much everyone is an everyday Flat Earther on one or two minor issues. The problem is that democracy depends on the assumption that hardly anyone has such baseless opinions on issues of any real import, so that the chance of such things gaining traction is negligible.

Britain has elected Boris Johnson and his Tory acolytes by a landslide. This, to me, is definitely something close to a "fire indeed hot" problem. The country - the whole blasted country, not some tiny subset of lunatics this time - has actively chosen a man with a well-documented history of lying and duplicity; from unlawfully proroguing Parliament, to hiding in a fridge and declaring that he'd be happy to be interviewed by anyone named Andrew, even while saying that the BBC were liars. Let's not even mention the vitriolic "Turkey should join the EU" documentary, the Brexit bus, the model buses, the two prepared Brexit speeches, or that time he felt Michael Gove (Michael Frogface GOVE !) was a stronger contender for Tory leader. Let's simply say : this makes no sense. None of it. This is just stupid. We've elected a man determined to "get Brexit done" without an effin' clue what that really means, even while the country is still split 50-50 as to whether it wants Brexit at all, who has a history of being hated by the Scottish whilst determined to prevent them from having a second referendum.

This is nuts. Nobody has a clue what the next few years will bring, because it's damn hard to predict the actions of someone so scared of interviews that they prefer to hide in a fridge. We've chosen the way of cowardice and petulance, of a goofy-haired man-child buffoon who literally steals people's phones in full view of them and the media. Wonderful. Let's all strap ourselves in for a fun-filled next few decades of post-truthism, where we don't need any actual hospitals because we can pretend we've got eight billion; where we don't need to worry about Donald Trump because we can pretend he's a sort of ugly orange Tooth Fairy that only little children believe in; where we can ride BoJo's handmade cardboard buses to work because there's no money for any real ones. We can have our cake and eat it and then some, telling ourselves we shamed the EU into giving us a fantastic deal and stopped all those pesky immigrants from coming over here and tending to our sick and injured, laughing with delight as there are no more damn brown people stealing our jobs and benefits, and just occasionally wondering why we ever let "reality" bother us when it's so much easier to simply pretend it doesn't exist.

This is definitely going to end well, isn't it ?

Thursday, 12 December 2019

Review : You Are Not So Smart


I picked up this very nice book from a budget bookshop for the insanely low price of 99 CZK (about £3), and for that price I feel guilty about writing anything negative about it at all. The bottom line is it's a great little read but not a patch on The Idiot Brain, and it's got enough major flaws that I give it a respectable but imperfect 7/10.

The book is, as you might have guessed, all about common fallacies and misconceptions. Overall it does a nice job of describing and explaining everyday delusions. It's lively, engaging, funny, and doesn't make the reader feel stupid even while explaining in some detail just how incredibly stupid they are. The author often steps in to say, "but here's how you can avoid this particular bias".

Best of all, I really liked how he mostly pointed the finger of blame at YOU, the reader - rather than say, "everyone else does this stupid thing", he gets the reader to look at themselves first. He very deftly manages to do this without blaming the reader for their errors. And full marks to him for that. All too often, arguments on the internet degenerate into a shouting match as to who's committed what kind of fallacy or which one is worse* - this focus on getting the reader to check themselves, rather than go around shouting at everyone else for being thick, is very welcome. Likewise, despite the book being dedicated to mistakes, I never felt overwhelmed by the possible sources of error which are apparently plaguing me at every waking moment.

*"Playing at contradiction for sport", Plato called it, although it often feels a lot nastier than a game.

Whenever I read about particular biases and common mistakes on the internet, I often want to say, "yeah, but..". Especially the fallacy referee memes : they're nice little summaries, but most fallacies come with serious caveats that shouldn't be overlooked. Alas, McRaney's book is a bit of a mixed bag when it comes to covering the "yeah, but" stuff. To be fair, most chapters are well done. For example, he's careful to point out that the "argument from authority" fallacy doesn't mean you shouldn't listen to experts, and kudos to him for that. But not all of his examples are carefully thought-through or explored in as much detail as they should be. For example :
"Should you listen to a highly trained scuba diver's advice before plunging into the depths of the ocean ? Yes. Should you believe that person when the diver talks about seeing a mermaid making love to a dolphin ? No."
I would have to say, "no, I wouldn't believe them, but I'd give their argument a lot more credence than if an airline pilot had said the same thing". Similarly, I was unconvinced by parts of the chapter on ad hominem attacks - I would say that the past criminal history of a defendant is extremely relevant, at least insofar as I might need to evaluate trustworthiness. Ad hominem is not a fallacy if personal character is directly relevant, which it often can be.

The problem is the book covers a wide range of logical errors but has no underlying theory for establishing what's true. Sure, this is a big ask, but it would be nice to set out some vague framework as to how we know what's correct, otherwise how can anyone justify what's a fallacy and what isn't ?

For instance, he describes well how statistical thinking isn't natural. I couldn't agree more. But by way of illustration, he cites something that would cause anyone even casually acquainted with Bayes' theorem to vomit with rage. He gives a description of a well-to-do man who drives an expensive car, saying he was chosen from a study in which they interviewed seventy engineers and thirty lawyers :
It is more likely, statistically, that he is an engineer, no matter how well the description matches your heuristic model for lawyers.
That is patently wrong. Sure, if I'm given no description at all and told that an individual was selected at random, then it's always more likely an engineer is selected. But it would not be at all sensible to ignore this extra information - accounting for it is absolutely the rational thing to do. If a single individual in the UK was selected at random, the chances that they're an MP is almost zero, but it's massively higher if I'm told that person visits the House of Commons regularly. Ignoring this extra information is profoundly stupid : weird events often demand weird explanations, though of course we should be aware that our stereotypes aren't always accurate.
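
Just to spell out the arithmetic, here's the Bayesian version with invented likelihoods - only the 70/30 split comes from the book's example; the rest is made up purely to illustrate the point :

# Prior from the study's composition : 70 engineers, 30 lawyers.
p_engineer = 0.7
p_lawyer = 0.3

# Invented likelihoods : suppose the description is ten times more
# typical of a lawyer than of an engineer.
p_desc_given_engineer = 0.05
p_desc_given_lawyer = 0.50

# Bayes' theorem : P(engineer | description).
evidence = p_desc_given_engineer * p_engineer + p_desc_given_lawyer * p_lawyer
p_engineer_given_description = p_desc_given_engineer * p_engineer / evidence

print(round(p_engineer_given_description, 2))   # ~0.19 : the lawyer is now the better bet

The prior does pull towards the engineer, but a description that's much more typical of lawyers can easily overwhelm it - which is exactly why throwing the description away is daft.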

Another example : he gives a description of fictitious studies showing that old people can either learn more slowly or more easily than the young, attempting to frame both as "common knowledge". The problem is that the idea that old people learn more quickly is anything but common knowledge and immediately stands out as weird and unconvincing; it doesn't undermine his main point, but it would have been easy to come up with a better example.

The example which irritated me most of all was when he described normalcy bias, the tendency to think that everything is normal when in fact it's not. His overall description was very good, but as a particular example he chooses the survivors of the horrific Tenerife airport disaster. He mentions that there are cases where people are simply stunned into inaction, but suggests that the aircraft survivors fell victim instead to thinking that everything was fine and so that's why they didn't get off the plane. I found that idea to be ridiculous - they'd just been through an extreme shock to the system; it's easy enough to see how they could be too shocked to think rationally, but not at all plausible to suggest they thought everything was fine. I don't doubt that normalcy bias is a thing, but in a horrific aircraft disaster ? Come on.

While the book is admirably concise and doesn't usually skip anything too crucial, there are a few cases where McRaney sacrifices too much. One of these is the chapter on confirmation bias, an enormously important aspect of filter bubbles that has a large role in the polarisation that's dominating politics right now. Another was the chapter on the "third person effect", wherein we tend to assume that other people will be affected by certain messages differently than we ourselves are. It is right and proper to point out that we all believe ourselves to be rational and objective. But it's a heck of a mistake to use this to dismiss any and all censorship - a position which completely ignores the whole sorry history of propaganda. It is demonstrably true that people are affected differently, that we can reach rational decisions - otherwise the book could not even exist.

Perhaps worst of all for a book about fallacies, I found the whole thing rather uncritical and frequently lax in its standards of statistics. Often McRaney will say something like, "52% of people believed one thing, whereas a lot more believed something else" - confounding quantitative and qualitative descriptions. Once, he even described 44% as a majority rather than a plurality. Sample sizes in the various studies quoted are usually absent, making it very hard to get an idea of the significance. A tenth of a second difference in running speed is held up as evidence of the power of belief - well, maybe, but that seems weak to me. And graphs and illustrations do not grace the pages of this book.

Two chapters stood out for me as an indication to take the book with a pinch of skepticism. The opening chapter deals with priming, where we can be influenced by unconscious cues. Some of this felt bloomin' obvious : of course advertisers are gonna show you pictures of happy people in nice houses when they want to sell you things. They're not exactly going to sell their new and improved moisturising hand cream with bleak images of dystopian wastelands covered in dying kittens, now are they ? Conversely, having just read When The Earth Was Flat, which dismisses subliminal messages as little more than a hoax, I found many of the claims in the chapter a bit strong.

The final chapter is on how our behaviour can be more a product of our situation than our innate personality. It's pretty good, but at the end it has a lengthy description of the infamous Stanford Prison Experiment. The problem is that there are very serious allegations that the whole thing was a sham, which McRaney appears to be ignorant of. Likewise, when he describes the bystander effect, he takes it as gospel, whereas that too is in doubt. And in the famous invisible gorilla experiment, he ignores that what is supposedly obvious - the person in the gorilla suit - is anything but; what's obvious is highly subjective. Meaning and interpretation happen in our heads and nowhere else - to pronounce arbitrary judgement on what we think should be important and accurate is foolish indeed.


Overall, it's a jolly good read, but weakened by some slapdash statistics and little or no effort at looking at opposing viewpoints : he presents the evidence in favour of the importance of each fallacy, but rarely or never looks at the counter-arguments. That's a serious flaw for a book that tries to encourage skepticism and critical thinking.

The Idiot Brain does an excellent job of explaining similar biases, their limitations, and the limits of current understanding. In contrast YANSS never mentions any of this. Again, to be fair he encourages us to remember these biases so we can guard against them, but I think the book would have been strengthened quite a lot if he had reminded us that under normal circumstances our memory is not totally wrong : it isn't based entirely on wishful thinking, and we don't always make crappy arguments that have nothing to do with the data.

I, for one, would love to hear more about the conditions that are best for promoting rational thinking. Being on guard against fallacies and misconceptions is all very well, but I want more than this. You can find a billion websites explaining biases and errors, but precious few looking at what we need to do to get the most rational, objective viewpoint. Interpreting data is often very, very, very hard, and frankly it's a feckin' miracle we're ever able to do this at all. Maybe explaining that wouldn't be as amusing as describing all our silly human failings, but it might be a damn sight more useful.

Tuesday, 10 December 2019

Quantum physics is NOT fine

An interesting article from the ever-provocative Ethan Siegel on the nature of reality and all that.

On the one hand, I believe very firmly in the value of interpretation and being able to describe things without using equations. I tend towards the old saying that if you can't explain it except with maths, you haven't really understood it. On the other hand, I'm equally firmly convinced that there's absolutely no reason the Universe should make sense to me; what right does a short blonde Welshman have to tell the rest of the Universe what to do ? As Terry Pratchett wrote in Soul Music :
“You could say to the universe, this is not fair. And the universe would say: Oh, isn't it? Sorry.”

But this doesn't for one second stop me from desperately wanting an intuitive-ish view of the Universe I can understand. I suppose I think that quantum physics really points to some horrible problem we've yet to resolve; I hope it won't turn out that we've already scraped the bottom of the barrel and have reached the parts of reality we're never going to be able to understand. That'd suck.

So I hope when Ethan says :
For more than a century, however, nature has shown us that the rules governing it aren't local, real, and deterministic after all... Despite what we might have intuited beforehand, the Universe showed us that the rules it obeys are bizarre, but consistent. The rules are just profoundly and fundamentally different from anything we'd ever seen before.
... that this really reflects our theories having reached an impasse. Now, other people will tell you that dark matter represents such an impasse - a failure of the models that needs to be explained. I say it's probably just a fact of life and we can no more explain it away than we can explain away rocks or electrical charge or Piers Morgan. But for quantum weirdness... that's where I draw the line. That I think is something we should try and reconcile with intuition, if we can. It's possible we might not be able to, but I'm unwilling to give up just yet. A century of investigation is too short to decide if God plays dice.
Reality, if you want to call it that, isn't some objective existence that goes beyond what's measurable or observable. In physics, as I've written before, describing what is observable and measurable in the most complete and accurate way possible is our loftiest aspiration. By devising a theory where quantum operators act on quantum wavefunctions, we gained the ability to accurately compute the probability distribution of whatever outcomes might possibly occur.
Cue link to long post where I try to get at the fundamental assumptions of science, namely that the world is objective, measurable, real, causal, and finite. A world without such constraints cannot be analysed scientifically. Perhaps some replacement for science could be constructed, but we aren't there yet.
In science, this is what we call an assumption, a postulate or an assertion. It sounds compelling, but it might not be true. The search for "a complete description" in this fashion assumes that nature can be described in an observer-independent or interaction-independent fashion, and this may not be the case. While Sean Carroll just argued in Sunday's New York Times that physicists should care more about (and spend more time and energy studying) these quantum foundations, most physicists — myself included — don't agree.
I am more than happy to say, "yes, these are unprovable but potentially falsifiable assumptions". I just think that if you're going to throw out the most fundamental assumptions of science, if you're going to say that observation directly affects reality or that some physical properties are fundamentally unmeasurable or not even real, you'd better have a damn good reason for it. "Shut up and calculate" is not good enough. Calculation is not the same as explanation, and explanation is the whole point. Modern science is and should be philosophical, not a latter-day Babylonian calculation of eclipses without knowledge of the Moon passing in front of the Sun. How empty and cold would a "science" of pure calculation be ! How pointless would be pure knowledge without understanding ! Knowledge without meaning is hardly knowledge at all.

So to this :
Understanding the Universe isn't about revealing a true reality, divorced from observers, measurements, and interactions. The Universe could exist in such a fashion where that's a valid approach, but it could equally be the case that reality is inextricably interwoven with the act of measurement, observation, and interaction at a fundamental level.
I cannot but disagree. If the tree falling in a forest doesn't make a sound when no-one is there to hear it, then I fall back on intuition and say to the Universe, without fear of what the Universe will say back, "now you're just being silly". It is wholly daft to suppose that reality depends on us - this notion, I say, is not science at all. And if reality is indeed like that, then it's not something that holds much interest for me. What would be the point of knowing the result of a calculation without understanding what's actually going on ? None that I can see. No, for me, understanding the Universe is inherently and unavoidably exactly about revealing a true reality.
There is a strange and wonderful reality out there, but until we devise an experiment that teaches us more than we presently know, it's better to embrace reality as we can measure it than to impose an additional structure driven by our own biases. Until we do that, we're superficially philosophizing about a matter where scientific intervention is required. Until we devise that key experiment, we'll all remain in the dark.
A very fair point. As in the last post about quantum madness, an interpretation should not only offer understanding but also guidance on how to proceed. Though, as with dark matter and other oddities in cosmology where observation runs ahead of theory, I think this need for more observation doesn't mean we should stop theorising but quite the opposite. Maybe it will never be possible to reconcile the quantum world with the everyday one... good thing I study galaxies, which never have to experience the godforsaken double slit experiment.

Quantum Physics Is Fine, Human Bias About Reality Is The Real Problem

When it comes to understanding the Universe, scientists have traditionally taken two approaches in tandem with one another. On the one hand, we perform experiments and make measurements and observations of what the results are; we obtain a suite of data.

Tuesday, 3 December 2019

Crossing the quantum street

For all that I know the Universe is under no obligation to make intuitive sense, I still don't like quantum mechanics. Just because something doesn't have to make sense doesn't mean we shouldn't try to make sense of it. I do not understand all of what Fuchs is saying here, but I'm inclined to like it.
The Many Worlds Interpretation just boils down to this: Whenever a coin is tossed (or any process occurs) the world splits. But who would know the difference if that were not true? What does this vision have to do with any of the details of physics? Who could take the many-worlds idea and derive any of the structure of quantum theory out of it? This would be a bit like trying to regrow a lizard from the tip of its chopped-off tail: The Everettian conception never purported to be more than a reaction to the formalism in the first place. 
An interpretation is powerful if it gives guidance, and I would say the very best interpretation is the one whose story is so powerful it gives rise to the mathematical formalism itself (the part where non-thinking can take over). The “interpretation” should come first; the mathematics (i.e., the pre-existing, universally recognized thing everyone thought they were talking about before an interpretation) should be secondary.
Exactly. It's not only equations that lead to predictions. Though for me, having a conceptual model I can intuitively understand is a worthy goal in itself; if it happens to also have other benefits, then so much the better. And from a moral perspective :
The Many Worlds Interpretation has always seemed to me as more of a comforting religion than anything else. It takes away human responsibility for anything happening in the world in the same way that a completely fatalistic, deterministic universe does, though it purportedly saves the appearance of quantum physics by having indeterministic chance in the branches.
Good point. If Many Worlds is correct then all our choices are meaningless. If the Universe is deterministic then we can't even make any choices; they are just illusory. Both of these seem ridiculous to me : what's wrong with the notion that there's one reality and we have free will ? Why is that so absurd ? What is there that needs to be explained away ?

Finally, on crossing the street :
In QBism, an agent—an observer—has some beliefs about the consequences of her actions on a physical system (or, again in less preferred language, “a measurement outcome”). She takes some action on the system and notes the consequence. That might well cause her to reevaluate her beliefs about the consequences of any future action she might take on it. Those reevaluated beliefs just are the new quantum state assignment. That’s all that “collapse” is: It is a change of one’s expectations based upon one’s lived experience. And if that’s all there is to it; collapse is no big deal.
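Put in textbook language - a rough sketch, since Fuchs would state it purely in terms of personal probabilities rather than state vectors - the "reevaluation" is just the usual update rule. On experiencing outcome $i$, the agent's state assignment changes as

$$ |\psi\rangle \;\longrightarrow\; \frac{\hat{P}_i\,|\psi\rangle}{\lVert \hat{P}_i\,|\psi\rangle \rVert}, $$

where $\hat{P}_i$ projects onto that outcome, read the same way a Bayesian reads conditioning on new data, $P(h\,|\,d) = P(d\,|\,h)\,P(h)/P(d)$. The mathematics is untouched; only the story about what it describes changes.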
Well I mean it's nice and all that observations just change the observer, not the whole Universe, but I don't see how that helps with the double slit experiment. I already know what happens to me; what I want to know is, what happens to that bloody electron ?

Quantum Physics is No More Mysterious Than Crossing the Street: A Conversation with Chris Fuchs

Recently, physicist Sean Carroll made my head spin with his explanation of the Many Worlds Interpretation of quantum mechanics. In this view, the world around us is just one version of many, many possible realities. Each time an event occurs with more than one possible outcome, reality splits into different versions.

The mind of a forest

"TREE ?!?!? I AM NO TREEEEEE !!!"

Whoops...

But if an individual tree can't think for itself, perhaps the entire forest can. Perhaps it's not so much Tolkien's Ents as it is Doctor Who's sentient forest. Is this totally mad ? Not necessarily. We know that trees are interconnected and interdependent :
Simard went on to show how mycorrhizae-linked trees form networks, with individuals she dubbed Mother Trees at the center of communities that are in turn linked to one another, exchanging nutrients and water in a literally pulsing web that includes not only trees but all of a forest’s life. These insights had profound implications for our understanding of forest ecology—but that was just the start.
She—and other scientists studying roots, and also chemical signals and even the sounds plants make—have pushed the study of plants into the realm of intelligence. Rather than biological automata, they might be understood as creatures with capacities that in animals are readily regarded as learning, memory, decision-making, and even agency.
I am wont to say that just because connections look similar, it doesn't mean they necessarily have similar results. Too many idiots have compared the structure of the Universe to the structure of a brain. And yet... trees are alive. There would be an undeniable advantage to a tree - or a forest - being able to respond in an intelligent way. Or at least a pseudo-intelligent way : if the network produces purely mechanistic but beneficial responses, that would still be an example of the connections giving rise to something greater than the sum of the parts. It wouldn't necessarily mean that the trees or forest could be said to be "thinking".
I’ve used the word intelligence in my writing because I think that scientifically we attribute intelligence to certain structures and functions. When we dissect a plant and the forest and look at those things—Does it have a neural network? Is there communication? Is there perception and reception of messages? Will you change behaviours depending on what you’re perceiving? Do you remember things? Do you learn things? Would you do something differently if you had experienced something in the past?—those are all hallmarks of intelligence. Plants do have intelligence. They have all the structures. They have all the functions. They have the behaviours.
But of course, the same thing could be said of a computer program. It raises the question of whether plants make "conscious" choices : whether they have a deliberate purpose in choosing what they do.
We have done what we call choice experiments, in which we have a mother tree, a kin seedling, and a stranger seedling. The mother tree can choose which one to provide for. We found that she’ll provide for her own kin over something that’s not her kin. Another experiment is where a mother tree is ill and providing resources for strangers versus kin. There’s differentiation there, too. As she’s ill and dying, she provides more for her kin.
Even so, that too could be due to a purely mechanistic response. Worse, it would be difficult to test : if you could fool the plant, with some chemical or fungal alteration, into responding as though the other seedling wasn't related, it would tell you no more about its consciousness than fooling a human being into thinking another person was a dog : of course they'd act differently given different knowledge.
Let’s say you have a group of plants and stress one out, it will have a big response. Botanists can measure their serotonin responses. They have serotonin. They also have glutamate, which is one of our own neurotransmitters. There’s a ton of it in plants. They have these responses immediately. If we clip their leaves or put a bunch of bugs on them, all that neurochemistry changes. They start sending messages really fast to their neighbours.
I think I've switched from skepticism to total agnosticism on this issue. I don't know what it is, but consciousness is manifestly not a physical thing - there are no consciousness particles or fields or whatever. So how could we ever prove that something has internal experiences ? I'm not sure we ever could. Why, then, should we presume that plants are not conscious, rather than assuming that they are ?

Never Underestimate the Intelligence of Trees - Issue 77: Underworlds - Nautilus

Consider a forest: One notices the trunks, of course, and the canopy. If a few roots project artfully above the soil and fallen leaves, one notices those too, but with little thought for a matrix that may spread as deep and wide as the branches above.

Review : Pagan Britain

Having read a good chunk of the original stories, I turn away slightly from mythological themes and back to something more academical : the ...