Sister blog of Physicists of the Caribbean in which I babble about non-astronomy stuff, because everyone needs a hobby

Monday, 16 March 2026

The Logician's Swindle

What makes a puzzle annoying ? When is solving a problem rewarding, and when is finding out the answer just frustrating ? If we could answer this, we might get a long way towards making the world a happier place. Getting people to actually enjoy solving problems, rather than being pissed off at their opponents for discovering a flaw in their arguments, would surely benefit political discourse enormously.

I don't propose to try and answer all of this today. Instead, what I can do is address one particular aspect of the problem. I say that at least one major cause of puzzles being annoying rather than enjoyable is that you've been outright cheated, and that this happens far more often than it should.

Specifically, consider Newcomb's Paradox as described on Veritasium. The video begins :

You walk into a room, and there's a supercomputer and two boxes on the table. One box is open, and it's got $1,000 in it. There's no trick. You know it's $1,000. The other box is a mystery box, you can't see inside.

Now, the supercomputer says you can either take both boxes, that is the mystery box and the $1,000, or you can just take the mystery box.

So, what's in that mystery box?

Well, the supercomputer tells you that before you walked into the room, it made a prediction about your choice. If the supercomputer predicted you would just take the mystery box and you'd leave the $1,000 on the table, well, then it put $1 million into the mystery box. But if the supercomputer predicted that you would take both boxes, then it put nothing in the mystery box.

The supercomputer made its prediction before you knew about the problem and it has already set up the boxes. It's not trying to trick you, it's not trying to deprive you of any money. Its only goal is to make the correct prediction.

So, what do you do? Do you take both boxes or do you just take the mystery box?

Don't worry about how the supercomputer is making its prediction. Instead of a computer, you could think of it as a super intelligent alien, a cunning demon, or even a team of the world's best psychologists. It really doesn't matter who or what is making the prediction. All you need to know is that they are extremely accurate and that they made that prediction before you walked into the room.

I highlight certain parts because they feel crucial. To me, this is saying very explicitly, "don't think about this aspect of the problem, it's not important at all". Were this not so, I would object to how such a thing could even be possible, and the details would certainly matter : had the machine been tested on a diverse sample of people, or was there something particular about its subjects that helped its accuracy ? But no, this apparently isn't important, so whatever misgivings I have about free will and suchlike, I willingly surrender for the purpose of the test. I put them aside, still fully expecting to be fooled (I suck at logical puzzles) but in some other way.

Having made that assumption, the answer is obvious. If the machine is essentially always accurate, I take one box. It knows, magically, that this box will contain a million dollars, and I walk out happy and rich and in search of a bank offering a good exchange rate to a proper currency.

But later in the video we get :

Here's how I think about the problem in a way that actually makes sense. You know that the supercomputer has already set up the boxes, so whatever I decide to do now, it doesn't change whether there's zero or $1 million in that mystery box, and that gives us four possible options that I've written down here.

If there is $0 in a mystery box, then I could one-box and get $0 or I could two-box and get $1,000, but there could also be $1 million in a mystery box. And in that case, I would get $1 million if I one-box or I would get $1,001,000 if I two-box. So, I'm always better off by picking both boxes.

Rubbish. Complete twaddle. You just told us that the machine is accurate and that we shouldn't factor how it works into our calculations, but this way of thinking cannot possibly ignore how the machine works. It's not even self-consistent ! By saying that the machine is essentially perfectly accurate, you've eliminated the very possibility of $1,001,000. That outcome can only happen if the machine actually is inaccurate in some cases, which to my mind you've all but told us directly to discount.
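To put some numbers on this, here's a minimal sketch of the naive expected-value arithmetic my one-boxing relies on (the dollar amounts are the ones from the video; the accuracy values are purely illustrative) :

    # Expected payoff in Newcomb's problem as a function of predictor accuracy p.
    # Amounts are those quoted in the video; the accuracies are illustrative only.

    def one_box(p):
        # Predictor correct (probability p) : it foresaw one-boxing and loaded $1,000,000.
        # Predictor wrong (probability 1 - p) : the mystery box is empty.
        return p * 1_000_000 + (1 - p) * 0

    def two_box(p):
        # Predictor correct (probability p) : it foresaw two-boxing, so only the $1,000 is there.
        # Predictor wrong (probability 1 - p) : the fabled $1,001,000 windfall.
        return p * 1_000 + (1 - p) * 1_001_000

    for p in (1.0, 0.99, 0.9, 0.5):
        print(f"accuracy {p:4.2f} :  one-box ${one_box(p):>12,.0f}   two-box ${two_box(p):>12,.0f}")

At anything like the near-perfect accuracy we were told to take for granted, one-boxing wins by three orders of magnitude; the "dominance" argument only gets going once you quietly re-admit the possibility that the machine is frequently wrong.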

This, then, is a swindle, and one common to various logical puzzles. "Don't think about this aspect of the problem", they say, only later to say, "Hah ! You should have thought about this aspect of the problem after all, you fool !". Right, so I was supposed to assume you were lying ? How is that a fair test ?

The rest of the video is a perfectly decent discussion of free will etc. (Veritasium is one of my favourite YouTube channels), but the poor description from the outset makes the whole thing a mess. Having been told that accuracy was not an issue, I expect something else I've overlooked to come into play. Naturally I overlooked determinism and all that because you told me to overlook it. The pettiness of it all annoys me quite intensely.

Don't worry, I'm not going down the free will avenue with this post. Rather, I just want to briefly outline that this kind of swindle is common to logic problems, and is itself one particular expression of a more general reason they're so often very irritating.


The closest similarity is surely the Monty Hall problem (the one with the prize goats). That one always confused the heck out of me because people never properly explained that I should have been paying crucial attention to the host's knowledge, not how many goats there are or how many doors. But any logic puzzle can suffer if you're not properly informed about what the key aspect of the problem is, or worse, if you're actively told to ignore it.
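A quick simulation makes the point better than any verbal explanation I've ever been given. The only thing that changes between the two runs below is whether the host knowingly opens a goat door or just opens one of the remaining doors at random (discarding rounds where he accidentally reveals the car); this is an illustrative sketch, not anyone's official formulation :

    import random

    def monty_hall(trials=100_000, host_knows=True):
        """Return (P(win | stick), P(win | switch)) over many simulated games."""
        stick_wins = switch_wins = valid = 0
        for _ in range(trials):
            car = random.randrange(3)          # door hiding the car
            choice = random.randrange(3)       # contestant's first pick
            others = [d for d in range(3) if d != choice]
            if host_knows:
                # The host deliberately opens a door he knows hides a goat.
                opened = next(d for d in others if d != car)
            else:
                # An ignorant host opens one of the other doors at random;
                # if he accidentally reveals the car, the round is void.
                opened = random.choice(others)
                if opened == car:
                    continue
            valid += 1
            switch_to = next(d for d in range(3) if d not in (choice, opened))
            stick_wins += (choice == car)
            switch_wins += (switch_to == car)
        return stick_wins / valid, switch_wins / valid

    for knows in (True, False):
        stick, switch = monty_hall(host_knows=knows)
        print(f"host knows : {knows!s:5}  stick ~ {stick:.2f}  switch ~ {switch:.2f}")

When the host knows, switching wins about two thirds of the time; when he's merely guessing and happens not to reveal the car, it's fifty-fifty. Same goats, same doors; the only difference is the host's knowledge, which is exactly the thing nobody told me to pay attention to.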

Not that framing doesn't sometimes reveal something very interesting. Wason's selection task is fascinating in showing how the same people can have much more difficulty solving the same task if it's described slightly differently – especially when the alternative form is every bit as familiar to them. But there, the whole point is to study psychology. No deception is employed, no swindle pulls the solution out from beneath the solver's feet. The facts are laid bare and it presents a straightforward yet surprising challenge to many people who take it. No, framing is only annoying when it's done to deliberately thwart the participant.

There's also a common tendency for the puzzle-setter to declare the rational solution from authority, saying "this is obviously the correct solution because the alternative doesn't make sense to me". A classic example concerns people refusing small amounts of compensation when they would normally expect a much bigger payout. Time and time again we hear people declaring that accepting the small offer would be rational since they come out with a net cash gain. But to any sensible person there are a multitude of reasons why this would be an extremely foolish thing to do : accepting the initial offer may deny them any chance at the larger amount; they may simply feel insulted and disrespected, and letting such behaviour pass unchallenged is essentially letting the bully get away with it. It is only rational in an incredibly narrow and naive economic sense, and more broadly simply isn't rational at all*.

* Veritasium does this with a unique peculiarity, openly acknowledging that the "irrational" decision of choosing one box is the more profitable. I find this goes deep into "what's wrong with you ?" territory.

Again, this is a sort of swindle, denying the opposing argument by forbidding debate rather than engaging with it on an equal footing. You thought things were going to be fair and above-board only to find out that they were anything but, that the answer had already been decided without you.

Another similarity is the pettiness. Veritasium didn't have to pull the rug out from under the viewer's feet, any more than anyone has to insist that accepting a smaller payout is somehow rational.

Very occasionally, I've run public surveys to help me with my own research. I've tried to ensure the wording was extremely careful, including omitting details when this would bias the result. For example I once ran a public poll on how many groups of points – galaxies – people could see in a plot, deliberately not telling them what they were looking at. Some people objected that there wasn't enough information (e.g. what sort of scales they should be considering), and I sympathise that they might find this annoying. But for me this was the whole point, to gauge what people's natural reactions were : I wanted to know if they would instinctively identify the same groupings that appeared natural and obvious to me (most of them did, as it turned out). I needed to know if my additional knowledge was biasing me, or if the groupings I identified would be readily visible without this extra information. 

The point here is that there's absolutely no reason for misdirection. It's perfectly possible to account for this in a way that will give you a meaningful result to the question you're asking. Sometimes, this can only become apparent after the fact, but in those cases the participant should feel relieved, not annoyed. Annoyance only happens when the misdirection was unnecessary. 

A second personal example : group meetings back in my PhD days. These served the valuable purpose of getting the students used to dealing with tough questions. But they also turned the experience into a weekly grilling that made the whole thing quite intensely annoying... instead of having an enjoyable, low-stakes discussion about science, we had to deal with supervisors being deliberately over-critical. That we all knew full well what was going on didn't help in the least. It would have been fine if such sessions had been clearly demarcated and set aside as such, with regular meetings more about science for its own sake. Trying to pretend this was how scientific discussion should happen, though, was just unfair.

Again, there was no reason for the misdirection. This too was a sort of swindle. Oh, you think you're here to discuss your work ? You thought I was being harsh because I wanted to be ? Hah hah, fooled you ! The idea that maybe they could have just not done that was never raised.

On a grander scale, consider the problems with the alternatives to dark matter. This too feels like something of a swindle : proponents often raise objections to dark matter which are based entirely on the properties of the ordinary matter we can see. They make highly dubious inferences about how this must connect to the dark matter whose existence they're trying to disprove, saying that the lack of a naively-expected correlation proves it can't exist. Some of these problems are obvious enough, but it's worth spelling them out at a high level because it's all too easy to lose sight of the forest for the trees. Once you start questioning the underlying assumption and realising that maybe the connection isn't so direct after all, often the whole thing falls apart.

And in other arenas too we find possible swindles. As I've covered before, thought experiments become extremely annoying when changing a small detail would profoundly alter the result but the instigator refuses to consider any variation : no, you must focus on this aspect of the problem because I said so, even if my scenario is actually bunk. Just like insisting someone should accept a minuscule payout, it's disrespectful not to think the other person's opinion might have some value.

Likewise with analogies. An indirect analogy can be extremely powerful when the relevant aspect is sufficiently similar to its comparison subject, becoming thought-provoking in both its similarities and in its minor, extraneous differences. When an analogy is intended to be direct, though, the seemingly-extraneous details can become crucial, so expecting people to shut up and ignore them is not realistic. It's extremely difficult to focus on the "relevant" bit (usually declared by authority) when there are obvious deficiencies in the whole thing. Conversely, it does no good to pretend similarities don't exist when they do, or to overlook them on grounds which are actually minor details or only quantitative differences.


All this sets out some conditions for when puzzles become annoying, and gives us a rough working definition : The Logician's Swindle is the use of unnecessary misdirection from a position of unjustified authority.

This is similar to but not quite the same as the Magician's Choice. In the latter, we know we're being denied crucial information, misdirected, and otherwise deceived. We go in with eyes open knowing we'll almost certainly be tricked and often paying for the privilege of suspension of disbelief. We know we won't be able to solve the problem and we enjoy our failed attempts to work out what's going on.

The Logician's Swindle is altogether nastier. Here, we're supposed to have all the information we need to reach the "correct" conclusion, but we find out only afterwards that actually we don't – with the swindler often denying this for the sake of making us look foolish. And the conclusion itself may be open to dispute, but the proponent argues from a completely artificial authority that it isn't. Worst of all, the "mistakes" can (though do not always) carry real-world consequences. In short, it's a scam : a discussion that should be in good faith but actually isn't.

And that's why I hate logic puzzles.

Friday, 13 March 2026

I'd Like To Teach Machines To Think

Today, a couple of contrarian pieces claiming that maybe LLMs do think and reason after all.

That is, not in a namby-pamby, "it's just something similar enough to thinking that we might as well call it that" sort of way. That position is perfectly reasonable, and I stand by it myself. To get hung up on saying "they're not really thinking" every time someone casually uses this instead of "processing data" is frankly just annoying, not productive. Likewise for intelligence : if they're taking input data and producing coherent output, well, I call that a form of intelligence at the very least.

No such linguistic sleights of hand are to be found here though. No no, these pieces are much closer to the dreaded C-word... consciousness.

The first article makes by far the weaker claims of the two. It touches on the self-awareness issue, but its main point is simply that LLMs are doing something more than pure word prediction.

Modern LLMs (Claude, GPT-4, and others) have an interesting feature, the humble thinking/reasoning tokens. Before generating a response, the model can generate intermediate tokens that the user never sees (optional). These tokens aren't part of the answer. They exist between the prompt and the response, modifying the context that the final answer is generated from and associated via the attention mechanism. A final better output is then generated.

Every token between the prompt and the response is, in information-theory terms, an opportunity for drift. The prompt signal should attenuate with distance. Adding hundreds of intermediate tokens into the context should make the answer worse, not better. But reasoning tokens do the opposite. They add additional machine generated context and the answer improves. The signal gets stronger through a process that logically should weaken it.

This process improves performance. That thought should give you pause, just like how a thinking model pauses to consider hard problems before answering. That fact should stop you cold.

As someone who's not in any way, to any degree, expert in LLMs, this is highly unintuitive. After all, when I start thinking deeply, I don't expect that the more I think, the more off-track I'll go and the worse my output will become. I expect that the more things I take into consideration, the more deeply I delve into the problem, the more accurate and insightful will be my final output. I guess I presume LLMs are doing something broadly analogous in that they are (in effect) considering ever-more voluminous amounts of text in producing their final response after a chain-of-thought. The more they consider the relations between different words, the better should be their final output.
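To make concrete what I'm picturing, here's a toy sketch of the difference between a direct answer and a scratchpad-style answer. The llm function is a purely hypothetical stand-in for whatever chat model you like, not any real library's API; the point is only the shape of the two approaches :

    def answer_directly(llm, question):
        # One hard prediction : the model must commit to the final answer immediately.
        # (llm is a hypothetical callable, not a real API.)
        return llm(f"{question}\nGive only the final answer.")

    def answer_with_scratchpad(llm, question):
        # The model first writes intermediate working (the hidden "reasoning tokens"),
        # then produces an answer conditioned on both the question and that working.
        scratch = llm(f"{question}\nWork through the steps before answering.")
        return llm(f"{question}\nWorking so far :\n{scratch}\nNow give only the final answer.")

The second route gives the attention mechanism more relevant material to refer back to, which is roughly what I mean by "considering ever-more voluminous amounts of text".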

I asked ChatGPT about this, and its (considerably shortened) response was as follows :

I think the article is smuggling in a dodgy intuition there. The right baseline is not “extra tokens should obviously make things worse”. In a transformer, later tokens are not forced to overwrite earlier ones in a simple serial way. Attention lets the model keep referring back to whatever parts of the prompt or scratch work are useful, so intermediate text can genuinely help by turning one hard prediction into several easier ones. That is exactly why scratchpad methods and chain-of-thought prompting often improve performance on multi-step tasks.

So I would rephrase the issue like this : intermediate steps should not be expected to degrade performance in general. They should be expected to help when they function as a good scratchpad, and to hurt when they function as self-generated noise. The article seems to assume that the mere presence of extra tokens ought to be harmful. That is too simple. The real trade-off is not between “direct answer” and “more text”, but between “useful decomposition” and “error propagation”. One small clue that this trade-off is real is that CoT can also make models more confident when they are wrong, which is exactly what you would expect if self-generated reasoning sometimes stabilises mistakes instead of correcting them.

This seems not crazy. It then seems too much of a stretch in the article, to me, to claim that because the model is reasoning "in the context of a probability distribution", it's still doing something directly (and I emphasise the emphasis here most emphatically) analogous to some aspects of human reasoning. I think we have the capacity for a much deeper, truer understanding than any LLM has or ever will have.

If you wish to reduce this to "just" token prediction, then your "just" has to carry the weight of a system that monitors itself, evaluates its own sufficiency for a posed problem, decides when to intervene, generates targeted modifications to its own operating context, and produces objectively improved outcomes. That "just" isn't explaining anything anymore. It's refusing to engage with what the system is observably doing by utilizing a thought terminating cliche in place of observation.

None of this requires an LLM to have consciousness. However, it does require an artificial neural network to be engaging in processes that clearly resemble how meta-cognitive awareness works in the human mind. At what point does "this person is engaged in silly anthropomorphism" turn into "this other person is using anthropocentrism to dismiss what is happening in front of them"?

This doesn't feel warranted to me. For sure, humans probably use linguistic heuristics in place of "actual" reasoning more than we like to, err, think. But LLMs manipulating text, no matter how arbitrarily complex the manipulation, does not constitute evidence of actual, true reasoning and thought. It's just an incredibly clever way to predict words, and I see no evidence of anything deeper going on at all. No sentience, no awareness, no emotions, no preferences, no inner light. The LLM literally does not exist when it isn't prompted. It has no consciousness, no subconsciousness, no self of any kind.

The irony is of course delectable... I prefer the LLM's claim that it isn't thinking to the human's assertion that it is !

Before I go to the next article, I also have to mention a recent discussion with a very interesting claim indeed :

The thing is, Yann LeCun was actually right. Purely text-based LLMs never learned that if you push a table, you also move the objects on the table. What happened instead is that LLMs became “multi-modal” and made to accept images, audio, and video as input as well as text. So “AI” did learn that if you push a table, you also move the objects on the table, but Yann LeCun was right that they did not learn it purely from text.

Despite this coming from a genuinely proper expert, I struggle to believe it. It simply doesn't match my experience with LLMs at all... well, maybe a little bit with GPT-3.5, but even with that hilarious dumbass, only a little bit. Causal connections between objects don't seem particularly difficult to establish via pure text : if you move a table, you move all objects on that table... and LLMs are surely very good at knowing what word represents a literal object. This really doesn't feel like something that should present any difficulty. 

I wish 3.5 were still with us... I'd love to test it.

GPT-5.4 says that this claim isn't correct, but that there is an interesting point behind it. That is, multi-modal models do help LLMs learn things that humans never bother writing down, but "text-only models clearly do learn a fair amount of everyday physical regularity from language". This I would definitely believe. Without some very specific documentation, though, the claim that they can't learn something as basic as objects being moved when their supporting table moves is something I'd be very reluctant to concede. The current "car wash" problem is an interesting reminder that LLMs aren't human, but not, I think, categorical proof that they're totally lacking in any sort of reasoning capacity whatsoever.

On to the second, much more full-throated article.

The fundamental case against the “I” in AI is that intelligence is organic, derived from sensory interaction with a physical environment. Agüera y Arcas turns the tables with the premise that computation is the substrate for intelligence in all life forms. The claim builds on an apparently crude proposition: prediction is the fundamental principle behind intelligence and “may be the whole story”.

I react quite instinctively against this, essentially with, "fascinating, tell me more about your stupid idea !". That is, there are some things I think are gloriously weird. I love their sheer audacity, may or may not hold them respectable, but don't believe them for a microsecond. I do not mean "stupid" here in an especially pejorative sense, but if you can't already understand it, then I probably can't explain it.

A central tenet of What is Intelligence? is that every form of life is an aggregation of cooperative parts. Links proliferate through patterns that enable increasingly complex functions. When Agüera y Arcas says the brain is computational, it’s not a metaphor: it is not that brains are like computers, they are computers.

He is erasing a familiar conceptual boundary here: intelligence does not prompt function, it is function. Intelligence, he argues, is a property of systems rather than beings, and function is its primary indicator. A rock does not function, but a kidney does. This is demonstrated simply by cutting them in half. The rock becomes two rocks, but the kidney is no longer a kidney.

So does a kidney have intelligence? Or an amoeba? Or a leaf? These questions are opened up, along with the question of whether Large Language Models have intelligence, which may be a better way to frame it than asking whether they are intelligent.

In another discussion, I could not make myself understood when I tried to say that I think awareness is something one has, not what one is. This is extraordinarily hard to explain if you don't already "get it", but I cannot for the life of me understand the claim that experiences are literally the same as physical brain states. To me this is an absolute non sequitur, with the evidence being possibly the clearest that could ever be presented for anything ever. I won't try and do it again – this blog is chock-full of that kind of stuff as it is – but maybe it's still useful to frame how I think about LLMs.

I do not have any issue with humans eventually constructing some sort of true AI. I think it's perfectly possible we could construct a chip or something which would give rise to (or otherwise access) consciousness. I do not think an LLM will ever do that, because it's literally just rearranging text. I see no more reason an LLM could be conscious than the text of a newspaper if it were cut to pieces and thrown into a whirlwind. This is why, when I say it makes good sense to describe LLMs as intelligent and reasoning, I do so with a quite monumental proviso that this in no way whatsofuckingever implies they do so much like humans – at least, not with regard to the full scope of how humans think.

 Maybe some bullet points would help ? Well, let's try. I claim :

  • LLMs can be said to think and reason in that they process input data to produce sensible outputs. In certain domains, they can already do so with an accuracy that rivals and exceeds humans.
  • LLMs function as more than pure word predictors; they can generalise and abstract to a useful degree, albeit highly imperfectly and not on a par with humans (except perhaps the stupid ones, of which unfortunately there are many). They do some things with appreciable, meaningful similarity to humans.
  • LLMs are nevertheless just mechanical. They don't have an inner life, are incapable of feeling anything, have no desires, no sensations, and no awareness of even the smallest degree. You can't program a true AI, you have to build one. BUT...
  • ...who on Earth would want a mechanical mind that would potentially suffer or try and eat us or whatnot ? Pointless. Far better to stick with an LLM-like route and go for pure tool development; if you want to bring new souls into the world, there's exactly one way to do that, and it ain't about building robots in your garden shed.
There, does that clear things up ? I doubt it. This has been the most decoherent of Decoherency posts, so if you think I'm being self-contradictory, you're wrong and I don't care. But I feel slightly better, anyway.

Monday, 2 March 2026

Everything Is Too Easy

Today, a short look at a BBC article claiming that the modern world is making everything too easy. Yes, really.

It is, of course, all about the age-old argument as to whether making things easier makes us stupider. There's a fair point to be made for "use it or lose it", but the question should really be about when and whether skill atrophy actually matters. There's no point worrying that your driving abilities will degrade if you switch entirely to public transport : so long as you can guarantee your new option is reliable and available, there seems no point at all in bemoaning this (I've covered this issue before, of course). Indeed, the switch should be the whole goal. That's how progress is supposed to work, or it wouldn't be progress at all.

Likewise, the ancient Greeks believed that warm showers would make them weak and effeminate, whereas most modern people would say that cold showers just make them miserable rather than toughening them up. There are more than enough other issues to deal with without also having to suffer needlessly. I believe we generally do better when our basic needs are met rather than having to struggle for them : I prefer to live in Star Trek's comfortable, scientific Federation rather than with Dune's tough, religiously fanatical Fremen. Struggle should be reserved for the things you want, not the things you need.

Still, there is a case to be made that it would be a Bad Thing if your overall intelligence started decreasing because you stopped trying to think for yourself. You do need to actually engage your brain, and all skills are maintained by continuously operating near their peak levels. What I struggle to see is how it's ever possible (except maybe if you're far too rich) to make your life so utterly comfortable, so devoid of any challenge whatsoever, that this ever becomes of any actual concern to anyone at all.

While modern technology can streamline day-to-day life, making everything from dating to food delivery more efficient, it may come at a cost: early data suggests that our attention span may be shortening, critical thinking capabilities weakening, emotional intelligence fading, and spatial memory getting worse as we offload human tasks to our devices. The technological optimisation doesn't seem to be making us happier, either: despite the continual digital assists and enhanced communication of social networks, people still report high levels of stress and loneliness.

Yeah, but that's because a lot of social media ends up being interactions with people we'd never want to engage with in real life. You then have to spend ages dealing with stupid people on the dubious grounds that you'll otherwise end up in an echo chamber*. This is partly because most people are, in fact, very stupid, and partly due to algorithmic rage-bait manipulation.

* This apparently being something unique to social media : of course, you can walk away from morons in real life, but online we're expected to deal with them calmly and rationally and take them very seriously for some reason.

That's why a growing number of people are resorting to the hottest new trend: "friction-maxxing", or rebuilding tolerance for inconveniences. The idea is to find tasks or ways of doing things that involve a level of difficulty, time or patience. This could, for example, involve going "old school" and swapping digital tech tools for analogue solutions, such as reading rather than watching YouTube, navigating by road signs in place of Google Maps or calling a friend for advice instead of consulting ChatGPT.

But our brains operate on a "use it or lose it" principle, says Mark. Experiments in animal models show that effortful learning keeps new neurons in the brain alive. Studies also show that cognitively-stimulating activities like learning an instrument, reading, playing games and doing puzzles can preserve cognitive function as we age.

I don't get it. Who is finding their life so convenient that they miss the difficulties ? More importantly, HOW CAN I BECOME ONE OF THESE PEOPLE ? I find that one incomprehensibly weird. How are you not just exchanging one set of difficulties for another ? For example, if I no longer have to worry about how to write the code, I simply have more time for thinking about the problem I wanted to solve in the first place. I can't imagine ever reaching the end of the chain, as it were. Yes, it's good to have some amount of very low-level technical knowledge (some degree of grit for the mill, I guess), but I can't imagine who's living in such a utopia that they've already reached a state of post-scarcity difficulties. I'm a lot happier, not stupider, if I don't have to deal with segmentation faults.

According to one band of experts, the features of our digitised existence – constant notifications, 24-hour news and endless social feeds – can hijack this attention system, resulting in cognitive overload, mental fatigue and trouble focusing.

Much like the research on the effects of technology on our mental capacities, the studies of digital detoxes show mixed results. Some breaks from technology lead to better mood, improved focus, lower stress and more social connectedness, while others show the opposite or null effects. One 2014 study found that restricting screen time at a five-day nature camp improved preteens' emotional and social intelligence, while another 2019 study of university students found an increase in loneliness after abstaining from social media for one week.

Now that one makes a lot more sense. I can definitely see problems with "everything devices", like phones : I have a tendency to stockpile stuff for later consumption rather than immediately reading what interests me; I definitely fall into the infinite scrolling trap.

As per a quite different recent post, this is not necessarily a bad thing. Indeed, to a degree I even welcome distractions of my own choosing. I find it helps keep my focus maintained for longer overall if I split it up into chunks, every so often checking something very simple (like a news feed or social media) that doesn't derail my train of thought. Usually, this sort of back-and-forth doesn't become a problem when working, but I'm as guilty as anyone of scrolling on my phone rather than actually watching TV. Especially if I'm watching a show on my computer... the temptation to just quickly check the feed again is, for whatever reason, quite hard to resist. 

But to me this is not a technological issue, it's a design choice. I love my digital notebook because its total lack of features means I use it for writing notes and reading long web articles, and that's it. Single-purpose items, be they digital or otherwise, innately demand more attention because you can't just flick a button for more content. Moreover, so long as I put my phone away, I'm not tempted to go and check it. A single-purpose device doesn't feel boring : once you're given something to do, and nothing to distract you from it, you'll just do that thing instead of trying to do dozens of things at once. Printed books do this, but so do well-designed digital products.

But even if friction-maxxing isn't the end-all solution we've been waiting for, "it doesn't hurt", says Mark. "If people are putting in effort, it makes them more intentional and thoughtful." Analogue hobbies such as crafting, gardening or reading – which involve friction as opposed to scrolling or streaming – can act as "active meditation", calming the mind and reducing stress. One 2024 study of more than 7,000 adults living in England found that those who engaged in crafting or the creative arts were more likely to report significantly higher life satisfaction, a greater sense that life is worthwhile and increased happiness. 

"I realised that a good life isn't an easy life," Semple says. "There's an enjoyment that you're cheated out of when you take the easy route."

And this is fair. Most analogue activities are innately more focused because there are no equivalents of "everything devices" or constant notifications. Multi-functionality has its place, but it's not easy to stay on track with such things. The real difficulty is perhaps the web itself. It doesn't make any sense to have a separate physical device for every website or digital tool : some purposes, like photography, can easily and sensibly be separated out, but many others simply can't.

What's the solution ? Self discipline is part of it, but it's hardly the whole answer. On a totally different video, I liked the point that it's simply not fair to expect people to be able to resist something that's designed to capture their whole attention. So better designs are also needed, e.g. alternatives to infinite scrolling, more nuanced control of feeds, easy options to control which notifications come through and when (Gmail and DuoLingo, shut the fuck up already)... I don't think this would actually be difficult to do and it would cost exactly nothing.

The problem is getting corporations to see that making us engage with more and more things, but each for less and less time, is in the long term a stupid metric by which to gauge product success, and that providing us with one good service is better than the option of a hundred crappy ones. How we rein in this tendency, I don't know.
