Sister blog of Physicists of the Caribbean in which I babble about non-astronomy stuff, because everyone needs a hobby

Tuesday 31 October 2023

Feeling Epicurious

This is not another post about consciousness and reality, though it does start off that way.

Even though for all sorts of reasons I think materialism is quite silly, that doesn't mean I don't find any value in it. Actually I agree with a very great deal of it; I think there is something out there, I just think that it's demonstrably different from what's in here. But in many ways, simply agreeing that it's not all consciousness-all-the-way-down may be the more important point. Let's all gang up on idealists ! :P

Anyway, I much enjoyed this BBC article on the Epicureans. I don't know much about Epicurean philosophy at all (I don't seem to get a good impression of it from popular media) but I might have to address that. 

The Epicureans believed that even the contents of our minds – our thoughts and perceptions – are comprised of very fine atoms of a certain kind. On this basis they asserted that all perceptions are equally real – even dreams and optical illusions are real, in the sense that they are made of actual, material stuff just like anything else...  it reminds us that the contents of our screens, like the contents of our minds, are not less real than the external sense objects we perceive, just different. As the philosopher David Chalmers puts it, in two helpful phrases : "information is physical", and "virtual reality is genuine reality".

I certainly agree with Chalmers that VR can constitute something "meaningful" in the emotive sense. The medium doesn't much matter for this (with some caveats I'll return to at the end). Whether we receive information via a book, a letter, a trained parrot, television, braille... none of this has any bearing on the emotional implications. And I would agree that VR has some level of existence : it must do, because we can't respond to something that doesn't exist. But :

These bits are physical things – in early computing they were housed in sequences on punched cards, now they are embodied as voltages on tiny transistors. Different sequences of bits (ones and zeroes), different digital information. Different shapes and arrangements of atoms, different bodies.

Here I don't think it can be said that information itself is qualitatively the same as physical objects. Information (VR or whatever) is something mental. It's only when a person reads a book that information arises. Without a mind to read it, there's nothing but atoms of text; digital information can be instantaneously summoned, dismissed, manipulated in ways impossible for physical media – and mental imagery is vastly more flexible. I won't dwell on my preference for a dualist interpretation here except to note that you can view this in either a very weak or much stronger way :

  • The weak interpretation is that things have different aspects which might be ultimately no more fundamental than labels. A table is one collection of atoms, a hedgehog is another. From our everyday perspective it makes sense to say that these are qualitatively different, even though at a very fundamental level they're basically the same. 
  • The strong interpretation is that this means mind and matter are utterly irreconcilable and have totally different modes of existence. Arguably this makes it impossible for them to interact by definition.
I think it's very safe to say that minds certainly show qualitatively different behaviour to atoms in the weak sense, thus leaving open the possibility that they could be unified with matter in some unknown, possibly unknowable, "neutral monism" way. As to the strong interpretation I remain deeply conflicted. On the one hand, I reject the need to insist that fundamentally different things can't interact (for reasons given at length here and here); on the other, I don't see that we must absolutely insist that because things appear to be qualitatively different, they really must be so at the most fundamental level.

In short, I don't know what the nature of reality is. And I get a bit worried about those who do.

Nevertheless, to return to the BBC article, the Epicurean approach of treating all perceptions as having, let's say, equal validity rather than equal modes of existence, has a lot going for it.

Epicurus believed everything reduces to atoms and void – including our mind (psyche) – and so rejected the conception of the immortal soul, which had been central to prior religious and philosophical thought. The gods, he held, may exist – but even if they do, they have nothing to do with us, and hence give us no moral obligations, no divine law, and no higher purpose. Therefore, the best thing you can do, and in fact the highest good, is to pursue a life of "pleasure" [my quotation marks].

Despite how that sounds, the Epicureans did not feel that this consisted in a life of sex, drugs and dithyrambic poetry (Dionysiac party songs). Rather, they felt, the pursuit of pleasure would be best effectuated by a simple life... For pleasure, as they conceived it, is not something you add up, cumulatively – rather, it is defined negatively, as the absence of pain. The term for this freedom from pain was ataraxia — literally, a state of not-being-shaken-up, a freedom from turbulence.

Preserving your ataraxia was a matter of balance. Should you drink some wine? Sure! – a little. Should you have sex? Yes! – some. If resisting these urges disturbs your mind, then satisfy them with moderation – there's no moral superstructure barring you from doing so. But don't overdo it, for it will shake you up, disrupting your ataraxia.

Which closely aligns with my own practice of "moderation squared" : do everything in moderation, including moderation itself. It's fine if you occasionally go to excess; in fact if you don't do this you're missing out on the full range of experiences. And of course the more extreme the activity, the less often you should do it. Likewise, it's generally a good idea to try anything once, but it doesn't make sense when you've got an extremely high confidence that you won't enjoy it or find it worthwhile.

This brings in the article's main point, which is about how we should behave digitally. The Epicurean view of reality may have led to their approaches to life, but quite honestly, I don't think this actually matters. What matters is not the nature of stuff but how it affects us. Whether it's all solid physicality or idealistic fog, the important thing is how we respond.

We do not seem to be living in an age characterised by "freedom from turbulence". And nowhere is this more evident than in the context of our information culture, and in the chaotic maximalism of our digital experience...  a radically Epicurean course would be to avoid the online world altogether – to completely eschew the extraneous stimulations of the body-mind that come with adding layers to your reality. This would be in keeping with what we're told of Epicurus himself, who spurned the life of the polis (the Greek city-state), with its rhetorical intrigues and social complexities. Instead, he dwelt with his followers in a community called the Garden. 

Well here I start to get a bit more worried. I get that sensory bombardment becomes overwhelming, but this "Garden", particularly given a refusal to participate in the outside world, sounds an awful lot like a commune or even a cult. And that's no good at all, invariably just exchanging one rat race for another. But then the author asks, "how can we go about cultivating Epicurean gardens in our digital spaces? How to dwell online in pleasure and peace?". And that implication, of the Garden as a temporary retreat, I think has much more value.

Neal Stephenson's Snow Crash coined the term "informational hygiene", which has since come to refer to the discipline of keeping your search history, and therefore your mind, clear of digital disinformation. A related concept is technologist Clay Johnson's "information diet". Both of these terms, hygiene and diet, recall Epicurean ethical categories — hygieia (health) and diaita (habit, way of life) – and emphasise the physical-material impact that digital information has on our minds. 

This I like very much. As in, let's say, the analogue world, some interactions are valuable and others are toxic and destructive. You can never avoid this completely, but you can manage it. You can avoid the sewage and stay digitally healthy. Don't avoid digital reality if you don't want to, but treat it just like every other aspect of your life. Live the digital life you want to live. 

As an aside, the idea that we must break down filter bubbles and echo chambers is one I'm increasingly skeptical of. It's a good idea to foster productive discussions, yes, and definitely a good idea to prevent outright violent radicalisation. But we don't insist that everyone in real life actively solicits discussion from people they really don't like very much. We let this happen naturally. Perhaps something similar would work well for digital hygiene ? That would mean, of course, simply ditching a lot of the algorithms for suggesting people we might like to connect with.

But of course just as books are not phone calls, so digital social media has concerns all of its own compared to town halls and postal letters :

Unlike a regular garden where we simply choose what to include and exclude, the algorithms of online platforms create a feedback loop between our own curatorial choices and what gets presented to us in turn. If choosing to follow someone on Instagram is like selecting and planting a flower that you're consenting to see and smell on a daily basis, on TikTok, the flowers are cast unbidden at your gate, an atomic bombardment at the threshold of your perception. You may think you're the gardener, but it's equally the algorithm, and the garden is you!

This is where the materialist metaphysics of Epicurean thought becomes especially relevant. For, from such a perspective, you are matter experiencing itself, and in the realm of digital experience, you are specifically informational matter experiencing itself. Therefore the digital content you curate, and that which is curated for you, become the actual stuff of your mind – vestigia, as the Roman Epicurean poet Lucretius calls the material images that make up consciousness: "footprints" of the perceptions we have encountered, the content we have put into our brains. 

And this is not just one-directional: it is not only a matter of what you put in, but what you put out. Every digital gesture – every comment, post, message – may be conceived as an addition to a garden that is shared. A provocative tweet, for example, may indeed merit a profusion of indignant @replies – but does that make the garden any better, or worse ? There may be individual pleasure in sharing them, but the question would be whether that pleasure is sustainable for the digital environment.

Oooh, I like this. There's clearly a need to vent that must be balanced with the need to cultivate, and that's not at all easy. If I live my life largely online, then not going on a rant from time to time is directly equivalent to keeping stuff bottled up, and that's not healthy. But my need to be angry may cause more damage to other people, and with social media there's at least the potential (albeit hugely unlikely) for this to spread much further than in the analogue world.

The Epicurean ideal would be to make these additions with a maximum of care and intentionality, in order to maintain a minimum of psychic turbulence throughout the digital community. The question is not "online or offline ?" but "community or not-community ?" In other words, are we using digital spaces to connect with one another in the shared project of diminishing pain, or vainly attempting to escape reality and disconnect from ourselves ?

Thinking in this way cultivates a relational understanding that is ultimately ecological, revealing insights into how we can interact healthily not only with other people, but also with the built and natural worlds. It provides the basis for an ecology of information, where we might cultivate collective awareness of the material costs of our information systems (the energy it takes to power the internet), as well as its psychological costs (a consideration which is also ultimately material). 

It's very easy indeed to dehumanise people on text-based media because so much of the human aspect is already stripped away. This unique style has certain advantages (text can be in some ways a far more efficient information delivery mechanism) but of course it entails risks. We feel free to spew more hyperbole when the other guy barely seems real. And we might need to use more hyperbole because we can't convey the same emotions otherwise : we have to compensate for our lack of body language and tone of voice. 

Ultimately remembering that the recipient is every bit as human, every bit as conflicted, flawed and brilliant as anyone we'll ever shake hands with, isn't always easy. But this Epicurean admonition towards digital health and hygiene, treating all realities as equally valid, may at least help.

Monday 30 October 2023

When not to fight

I read Rutger Bregman's Humankind recently and I hated it. There are some interesting bits but large tracts are offensively stupid and purely, nakedly ideological, replete with the "noble savage" fallacy in extremis. I'll do a proper review eventually but the short version is : leave it on the shelf. It's awful.

But, one thing that does come across (that the author doesn't really intend) is just how difficult psychology is. All those popular studies : the prison experiment, the electric shocks, the tribal children's camp... all come with major caveats. It's not that they're generally wrong (except the prison one, which was a sham), it's that human beings are complicated. And the popular term "replication crisis" is, I think, a terrible way of looking at it. It implies we already know exactly how studies should be done and should immediately be able to tell when a result is obviously incorrect, or at least suspicious. That we've already got a handle on the basic methodology and have just slipped up a bit.

I suggest that this isn't the case. I suggest that we don't know the best way to do psychology experiments. People are just too complicated for that, with simply too many variables to put into neat categories, in a way that's qualitatively different from the physical sciences. Trying to say it's all group conformity or obedience or sheep-like desire to fit in... nah. This is over-simplistic nonsense. Useful concepts to be sure, and applicable in the right situations... but not universally by any means. 

It's not that the studies aren't worth doing, it's that none of them are worth promoting as having some unimpeachable insight into the human condition. They simply don't. They give us clues at best, but no more than that. They don't give us answers in the same way that the experiments in natural sciences usually do.


And I also saw Netflix's fantastic documentary Ordinary Men the other day. This looks at the often-overlooked participation of the German police forces during the Holocaust. As you would suspect, it's not an easy watch, but it's an important one. The main point is that so many of those committing the atrocities were not evil monsters. Oh, they did evil, monstrous things, undeniably ! But they did it not for their own gratification or even because they wanted to. They didn't even do it because they had no choice : the worst that befell anyone who refused to kill their assigned victims was social ostracism. 

No, they did it, so the documentary claims, because they felt it was necessary. Most of them didn't want to do it at all. They got no sense of gratification or pleasure from it. They actively realised the horror and repugnance of what they were doing, looked the barbarity of it full in the face... and did it anyway. And that, in its own way, is far more terrifying than viewing them as demonic : no, they were just like you and me, so the theory goes, but under the right circumstances...

Of course the flip side of this is that just as genuine monsters are much rarer than we might think (if they weren't, we simply wouldn't have civilisation), so too are genuine heroes. By the same token, perhaps, ordinary people are equally capable of heroics. No need to look around for those with the rarest combination of courage, integrity and decency, because anyone is capable of extraordinary actions in the right conditions.


All that being true, with any conclusions on the human condition being provisional at best, this article from the Guardian cuts a little differently :

President Joe Biden began his remarks in Israel with this: “Hamas committed atrocities that recall the worst ravages of Isis, unleashing pure unadulterated evil upon the world. There is no rationalizing it, no excusing it. Period. The brutality we saw would have cut deep anywhere in the world, but it cuts deeper here in Israel. October 7, which was a … sacred Jewish holiday, became the deadliest day for the Jewish people since the Holocaust.

“It has brought to the surface painful memories and scars left by millennia of antisemitism and the genocide of the Jewish people. The world watched then, it knew, and the world did nothing. We will not stand by and do nothing again. Not today, not tomorrow, not ever.”

With this, Biden reinforced the rhetorical framework that the former Israeli prime minister Naftali Bennett expressed, in typically unashamed terms, in an interview on Sky News on 12 October: “We’re fighting Nazis.”... without the historical context of Israeli settler colonialism since the 1948 Nakba, we cannot explain how we got here, nor imagine different futures; Biden offered us, instead, the decontextualized image of “pure, unadulterated evil.”

Genuine monsters, I stress, do exist. But leaving aside the lunatics who instigated the Holocaust, those who "only" participated in it... that's where human nature gets really scary. That's where we get deeply uncomfortable questions about exactly who we are as a species. If you accept the conclusion in the documentary (which accompanies a book of the same name), then the trick of making comparisons to the Nazis as unalloyed evil stands revealed. If your enemy is truly pitiless, sadistic, and beyond all redemption, then it becomes all too easy to justify any sort of response in order to stop them.

But if they're not, if even the Nazis who carried out acts of what should be unthinkable evil were, in fact, actually relatively normal people in the wrong circumstances, then this is a reminder that there are limits to justifiable retaliation. That there are bounds which should not be overstepped. Not that there shouldn't be any response at all, that's stupid. Clearly, WWII had to be fought and the Nazis had to be stopped. This is self-evident. But was every Allied response justified ? Was the carpet-bombing of Dresden a sensible form of retaliation ?

To return to present times, in the Russia-Ukraine war, I believe, there is a really quite impressive case of black-and-white, with one side as clearly villainous and one as innocent as you could ever find. There, I think, there is a very clear victim to support and an adversary to challenge. If you don't stand up to abject, unfair hostility, if you turn the other cheek towards such pure malevolence, you're simply surrendering. To say the Ukrainians shouldn't fight back to reclaim lost territory is to surrender to bullies; to place the blame somehow on NATO is a shocking, deplorable case of cynicism overriding good sense and decency. To say the West shouldn't support the victim in case the bully does something even worse is the very worst sort of cowardice and outright stupidity. Well of course they will do something worse if you don't stand up to them ! It's an open, shameful invitation to let them do as they please.

But so far as I can tell the current Middle East crisis is wholly different. There, every political state involved seems to be awful. Not necessarily the same type or degree of awfulness – I make no comment on that score – but still awful. No side can be meaningfully said to be better than the other when both behave like this. In war, yes, you expect collateral damage... as a tragic side-effect. But Israel appears to be actively soliciting it.

The Israeli defense minister, Yoav Gallant said: “Gaza will not return to what it was before. We will eliminate everything.” Nissim Vaturi, a member of the Israeli parliament for the ruling Likud party, to take another example, called for “erasing the Gaza Strip from the face of the earth”. There are many other such expressions by Israeli politicians and senior army officers in the last few weeks. The fantasy of “fighting Nazis” drives such explicit language, because the image of Nazis is one of “pure, unadulterated evil”, which removes all laws and restrictions in the fight against it. Perpetrators of genocide always see their victims as evil and themselves as righteous. This is, indeed, how Nazis saw Jews.

Biden’s words constitute therefore a textbook use of the Holocaust not in order to stand with powerless people facing the prospect of genocidal violence, but to support and justify an extremely violent attack by a powerful state and, at the same time, distort this reality. But we see the reality in front of our eyes: since the start of Israeli mass violence on 7 October, the number of Palestinians killed in Gaza has surpassed 4,650, a third of them children, with more than 15,000 injured and over a million people displaced.

Surely, all this violence in this case is just not right. And I emphasise the specifics because "both sidesism" is typically a fallacy, as it most certainly is in the Ukraine conflict. It's normally the sort of idiotically naïve pacifism/fascism that one expects from contemptible fuckwits like Jeremy Corbyn. I get that. But if both sides actually are awful... ?

The central issue is simple : the actions on each side are unjustified. You can't justify gunning down festival-goers and you can't justify cutting off food and water to an entire city in response. They're both shit. What was done to you in the past has no bearing on how you treat innocent civilians in the present. 

Sometimes, just sometimes, right is right and wrong is wrong. If one side is right and one is wrong, there's only one option who to support. But if both are wrong, why support either ? Why insist that one side has the right of it, that there must necessarily be heroes and villains when in fact there are neither ?  And even if you do find that overall one side must be supported, that doesn't mean you accept everything they do without question. Israel, just like any state, has the right to exist and to defend itself. It doesn't have the right to act with impunity or to commit war crimes. Nobody deserves that much leeway.

Thursday 26 October 2023

Review : a ReMarkable device

My 40th birthday has, horrifyingly, come and gone and the world hasn't ended. I don't have crippling arthritis or the need to constantly discuss mortgage rates, so I think I can continue to hold True Adulthood at bay for a while longer yet.

What I do have is something ubiquitously advertised on reddit : a ReMarkable 2 digital notebook. Normally the incessant advertising alone would put me off, but (a) it seemed like a genuinely useful idea and (b) Shirley already knows someone who has one and likes it.

Oddly enough, I think they're almost underselling this. "No one device will transform the way you work", they say, "it's about fostering better habits". But I have to say, while it isn't perfect, it genuinely is transformative. I've read more papers since I got it than in the rest of the year combined and I've actually enjoyed doing so. For that alone I find it astonishing. 

For comparison, I had a Kindle some years ago, and while I liked it for books very much, reading PDFs on it was hopeless. On this it's a whole new experience. You can't compare the two. Likewise I have a Rocketbook digitizable notebook (you can't call it truly digital since it uses actual pen and paper and you have to scan it manually with your phone). That's surprisingly good for what it is, but again it just isn't in the same league as the ReMarkable. Writing wasn't pleasant because of the glossy paper and too-thick marker pen, and though the scan was better than I was expecting, it just wasn't good enough for easy reading. And erasing the notebook when full was downright tedious. It was a clever idea but not really a practical solution.

Whereas for the ReMarkable 2, to cut to the chase, I give it an easy 9/10. What it does well, it truly excels at. However, what's useful to me and what's important to you might be different, so as well as singing its praises I'm also going to tell you every minor niggle I wish I didn't have to contend with. There are in fact quite a lot of these, but for me at least they are so minor that they seldom if ever detract from the core experience. A big plus here is the 100-day trial period in which you can return it.

I wouldn't normally write about a product like this so soon after getting it, but because of last week's conference I've had disproportionate use out of it already. I've read and annotated a dozen papers and written 56 pages so far (journals, meetings, and random thoughts – the last post on this blog was originally written on the ReMarkable, this one was made from notes I kept while using the device). That, I think, should be more than enough to give a detailed review.


Let me start by saying that the branding for this is wrong. Badly wrong, actually. It's not the world's thinnest tablet because it's simply not a tablet*. Comparing it to a tablet is like comparing a duck to a mongoose : sure, there are some similarities, but it's a bit of an odd thing to do. No, what it is is a digital notebook. All of the more negative reviews I've come across fail to understand this, expecting it to have a web browser and suchlike whereas this is something it quite rightly deliberately avoids. You wouldn't complain your paper notebook doesn't have a built-in radio or detachable legs or allow you to communicate with horses so don't expect it from this one either. It doesn't even have a clock, nor should it. There's nothing to distract you whatsoever. You can just get on with stuff.

* Their other slogan, "paper tablet", is a bit better, but not much.

It is, in a very real sense, a metaverse product. It seamlessly merges the digital and physical worlds, and by and large succeeds brilliantly in combining the best of both. The instant auto-saving helps it feel exactly like a paper notebook while the auto-sync, erasability, and powerful editing tools (drag a box around your handwriting and you can scale, rotate and translate it – this is genuinely useful and far from a gimmick) make it, equally, fully digital. 

Every so often I would jot down my thoughts about the device using a file on the notebook itself, and then to write this post I opened that file on the PC app. Combine this with the mobile app and you essentially have access to all your notebooks all the time. There's even an extension so you can add long webpages straight to the tab... sorry, digital notebook*. What's more, the quality of the handwriting when shown on the PC/mobile is perfect, allowing for easy reading with no strain except in terms of trying to decipher my own rapid scrawl.

* EDIT : I've just started using this and I'm very impressed so far. It makes it far more convenient for reading long web articles. However at least once it's cut the end off a piece, but I haven't used it enough to say how frequently this occurs.

But beware : like upgrading from SD to HD, it's definitely something I would find very, very hard to downgrade from. There's no going back.


The Good

I must admit that the writing experience on this is, contrary to the sales pitch, not quite like writing on real paper. Not only is the tactile sensation different, but despite many other reviews claiming otherwise, there is in fact a little bit of noticeable latency between the act of writing and the e-ink appearing. But I would stress heavily that this is not enough to cause any issues, and the latency is now something I only notice if I deliberately watch for it. We're only talking a very small fraction of a second here.

Likewise the feel of paper isn't perfect, but it's more than close enough. A critical threshold is certainly exceeded : my handwriting on this is indistinguishable from my handwriting on real paper. And this required no effort on my part to "get used to" anything. Crucially, I don't at all enjoy the physical sensation of writing on standard glossy screens, but this matte screen and specialist stylus instantly met with my approval, even if it doesn't feel as much like paper as I was expecting.

The resolution of the display is exquisite. You can zoom in and out, pan around, and the touch screen controls are intuitive. Adding more pages is done by flicking forwards on the final page and tapping a button. You can use your fingers or the stylus for most operations; I slightly prefer the latter. You can always hide the menus except for the very small, unobtrusive control icon in the top-left, so there's nothing but you and the page. The screen size is spot-on perfect : large enough to make everything readable and give you lots of space to write and draw, small enough to be easily portable. And of course smudging is completely impossible, which, given how I hold my pen at a weird angle, is a big benefit for me.

I have the Marker Plus stylus which comes with an eraser on the back. I have to say I do find this of benefit. You can also tap with two fingers to undo the last continuous draw you made, but sometimes being able to have the detailed control of the physical eraser is extremely handy. I don't think it's strictly necessary but it is certainly nice to have. The marker is magnetic and snaps to the side, but it took me a while to realise the magnetic attraction was much stronger if I put it near the top of the notebook rather than midway down the side. With that, there's no worries about it falling off. You can give it a good hearty shake and the marker stays in place.

The marker itself is expensive and the nib unfortunately wears down over time, with the website saying it lasts between 3 and 7 weeks, of course depending entirely on usage. But you get 9 of these nibs provided; there's no need to ever replace the whole marker unit, just the nibs. It doesn't require any charging.

A major asset in terms of the digital side of things is the organisation. As well as a one-year free trial of ReMarkable's own subscription storage options, you can also integrate with others for free. I set up OneDrive and the procedure was simplicity itself, giving me access to my ~1 TB drive which is essentially inexhaustible for papers and notes. Some file types can be converted on-the-fly to PDF so you can then download them to the device itself, though very strangely this doesn't include .txt files.

In the device and its own native apps you can arrange files in a standard directory structure and also assign them an arbitrary number of tags, so keeping things discoverable is as easy as possible. Essentially you can start a whole new notebook for every passing thought if you really want to, with no need to worry about paper and making it far easier to search through than the Leaning Tower of Notebooks of Random Notes that's accumulated on my desk.

Similarly, the user interface is a breeze. It takes a few minutes to learn your way around and becomes second nature in no time at all. In fact on the whole design front I have to give them full marks : they've really thought about all this very carefully. If it isn't quite perfect, the small details they've implemented are invariably bigger positives than any mistakes they've made. Using it just becomes so instantly normal.

In terms of document functionality, you get about 50 page templates to choose from and a variety of pen styles. I gravitated quickly to the smallest line-separation page style and the thinnest pen. I use a thicker one for headings but that's about it. Honestly it's unlikely I'll ever use much else but it's nice that they're there. You can also add (but not name) different layers of text so you can toggle what you see, but this isn't something I've played around with yet. Nor have I tried anything more than simple diagrammatic sketches, though I don't think it's designed as a drawing tool.

Moving on, battery life is advertised as up to two weeks, and I think this is probably not far off the mark. Leaving it on standby overnight, the battery doesn't drain at all. But under very heavy use (a full day of continuously reading papers), you might drain it in two days. Note-taking uses quite a bit less power, which I'd estimate would get you three or four days.

Under more regular use (56 pages in a week is something I could only manage during a meeting, I'd never come close to this normally !) I think a week would be easily possible, though a full two weeks might necessitate only light use. Right now, I charged it a week ago, used it moderately for the first three or four of those days and it's still on 47%; a nasty little cold has meant I've barely touched it in the last three days. So standby time is excellent, maybe even longer than two weeks. You're unlikely to get charge anxiety unless you really have a total writing fixation-fetish, in which case you probably need to see a psychologist.

Lastly I'll repeat that the lack of features is an asset, which I mention again because I was initially skeptical about this. It seemed a bit like a minimalist gimmick deliberately avoiding the need for the designers to do any more work, but I was dead wrong about that. Having one device that does one thing really well is far superior to having one that does a billion different things with inescapable notifications. When they say it lets you focus, they aren't being lazy and they certainly aren't being pretentious.


The Bad

There's very little about this I could genuinely call "bad", but there are two points I don't like. The first, and my only real frustration, is that pinch zoom is unresponsive with PDFs, usually taking several attempts to get it to do anything – and then it zooms in too much or too little, requiring a good deal of tweaking to get things right. A GUI slider or something would have been far easier.

Fortunately landscape mode is an acceptable workaround in most cases (since this makes things large enough for readability), but there's still real scope for improvement here. Many PDFs come with a substantial border that means the zoom-to-width setting isn't always accurate, so better control of this would certainly be a big improvement.

Secondly, searching PDFs. You can do this but it's slower than a hibernating tortoise, which is extremely strange. There isn't a workaround for this except to use the device in combination with a PC or other device. That said, I haven't found it a major imposition, just an occasional nuisance.


The Ugly

No, there's nothing ugly about this device. It's a premium product and it shows.

But... there are plenty of minor deficiencies, stuff I thought was obviously missing or could easily be improved. In brief :

  • I want a shortcut to quickly switch between pens. Reading papers I frequently need to highlight text and then add my own annotations, and using the menu is just that bit slower than it needs to be.
  • While the highlighter pen can snap to text, the eraser can't do this : you have to erase a whole section of highlighted text to erase the highlights, rather than word-by-word.
  • A landscape mode for the whole OS would be nice. As it is it remembers your preference per document, which is fine but makes things inconsistent.
  • The PC app should default to the last directory used when uploading files, which would make things more convenient. It's always annoying to have to go down through an entire directory tree to find a file.
  • I want to be able to order tabs in the tab menu alphabetically ! This is the most bizarre omission.
  • The book folio cover should have been designed to also act as a stand. The keyboard folio (which I don't have) can do this, but I'm not sure if I'd benefit from this enough to make it worth forking over the really quite substantial amount of cash.
  • Converting text : actually this performs better on my horrible messy scrawl than I was expecting, though I haven't used it much. Still, it'd be nice to have this option on the PC app and allow conversion of whole documents rather than one page at a time. EDIT : You can absolutely convert multiple pages all at once, my mistake.


My Verdict

Sometimes, lots of little niggles accumulate into a major annoyance. Not so here. The core of it, being able to do handwriting digitally either on empty pages or documents, is outstanding. The vast majority of the time you can just get stuff done, such that when there is any sort of interruption, it's so minor that you still come out well ahead. For me the ability to hold a PDF file at any angle and write on it however I like, with that convenient hard surface rather than the flexible printed page, is a godsend. I need to annotate academic texts to read them, and having ready access to them on multiple devices, not needing to wrestle with the printer any more.... bliss !

It's hard to guess how this would be for other people, but my suspicion is that you'd have to be really anal about handwriting to reject this on grounds of writing quality – the sort of person who insists on vinyl or magnetic tape in place of streaming. But for those of us who aren't twats, I doubt anyone would have any real issues with this for its designated purpose as a reading/writing device. Ignore anyone who says it doesn't do feature X if feature X has nothing to do with what you can do with regular paper books.

The elephant in the room is the price, which is not cheap. Not cheap at all. Would I have got it for myself anyway ? I can only say possibly yes, as a treat, and even then only thanks to the return policy (my experiences with the Rocketbook having made me wary). But would I have kept it despite the price ? Unhesitatingly : oh hell yes. It should be cheaper, but having made the plunge, I don't want to climb back out. It's lovely.

Saturday 14 October 2023

The Monstrous Multiverse

Why is the multiverse such a popular notion ? Scientifically it's bollocks. It replaces physics with statistics and utterly lacks explanatory power.

But psychologically... maybe it's just true ? We live in different mental worlds as a matter of course, inhabiting different umwelts to each other. When I first encountered this concept of our perceived sensory reality, I thought it might be applicable to politics as well as biological sensors. But the more I think about it the more I think I've underestimated its almost infinite scope.

We have different politics, yes, but also different cultures in all aspects, which – often enough, though certainly not always – never interact with each other. Even walking past outside diners at a restaurant, pedestrians don't enter the customers' umwelt, not really, even though they may pass within inches of each other.

Politically... it's partly about morals to be sure, probably even the dominant part. But it's also simply perception. What's relevant to one is of absolutely no concern to others. If you don't actually experience or witness the effects of a policy on a person or a group, it's likely of no concern to you. Awareness is just not the same as experience, like being told the plot of a TV show rather than watching it unfold. So essentially some events are literally not happening in our individual mental worlds.

Of course hard external reality does sometimes collide with these mental realms. It's not that reality isn't real or such. It's just that most of the time, or enough of the time at least, most of reality simply doesn't matter.

And this goes far beyond politics. Large tracts of reality simply don't exist for each of us. There's a whole world of the ins and outs of football shenanigans that don't exist for me. Details of celebrity lifestyles and the finer points of violin techniques alike hold no reality for me whatever. A myriad of mental worlds we'll never have the chance (or misfortune) to explore ! And of course for most people, poor unfortunate wretches that they are, astronomy too just isn't a thing.

To hammer home the point : the same event can be experienced by different people in radically different ways, e.g. an amazing emotional movie to one might be a pile of crap to another. A delicious cake to you might be disgusting to me. The substance, the sensory input, is the same but the output experience is not.

Why does this arise ? Probably from a huge number of causes. We grow up with different experiences, acquire different biases and perspectives, experience different teachings and have natural, pre-determined inclinations. A plethora of tiny differences gradually accumulate and develop. Most children, presumably, are more similar to each other than most adults are.

The multiverse of science fiction is thus appealing because psychologically it captures what we anyway experience. It takes something we experience mentally, something we feel emotionally, and gives it a physical incarnation, just as monsters are our fears made flesh. It's the same basic principle at work, just manifested very differently.

Whether this has any bearing on actual scientists advocating the existence of a real multiverse, I don't know. I suspect not, that this is only a coincidence. I don't think our everyday experiences have much shaped the development of many-worlds theory : this is just a natural interpretation of the quantum mathematics. But maybe it's made people more receptive to it.

Monday 2 October 2023

That's So Meta

Never did I think I would ever have the slightest interest in anything that came out of the demon-haunted orifices of Facebook (sorry, Meta), let alone listen to anything the Zuckerbot had to say. Nevertheless, last week's Meta Connect has me genuinely excited.

First off, I've seen a few outlets expressing surprise because for some reason they thought the metaverse was already dead. This is of course nonsense. As I noted on the first post I did on the metaverse back in 2021, a timescale of five years was plausible but definitely on the optimistic side of things. I was a bit more skeptical a couple of months later, while in March this year I revised a plausible framework (for mass adoption) to the end of the decade. Not for a moment have I ever had the impression that the metaverse is anything imminent; postulating something as being about 5 years away is basically synonymous with allowing a large margin of error. I've never seen Meta claim the ultimate vision of the future is anywhere near nigh. Quite where people have got this impression, I don't know.

Anyway, I've noted throughout my coverage of this that the whole concept is hardware dependent. Thresholds of comfort, price, battery life and so on must all be overcome for XR to rival (say) mobile phones in significance. The hardcore fans like to claim that the Quest 2 was a huge upgrade from the Quest 1, but I've tried both and honestly it's not. It's incremental : better resolution, more processing power and that's it.

The Quest 3, though... this is sounding like a much bigger deal. Not a revolution that will usher in a new era of human existence, but a much more important step towards changing how we access the digital world.


In the Meta Connect event itself, we got the official launch of the Quest 3. I've seen several hands-on tests with this, and so far as I can tell it seems very impressive indeed. On the Quest 1 and 2, the passthrough gives you a blurry, grainy, warped, over-exposed, black-and-white view of the world that's enough to stop you bumping into things but nothing much more than that. On that last link, you can clearly see that on the Quest 3 even small real-world text is readable, in full colour, with a massively improved light balance and far less distortion.

And of course, it gets significant (but more incremental) general performance upgrades. But one can cross a threshold either through one giant leap or many small steps, and here too things look important. We're getting close to, though perhaps not yet equalling, the graphical fidelity of PC VR in a standalone device.

But wait, there's more ! Meta also announced other significant developments. First, their AI program continues apace, generating (for some reason) AI characters tailored to specific tasks. Most eye-catching of these was Snoop Dogg as a Dungeon Master. Quite why anyone in the world would want this, other than for momentary shits and giggles, I know not*, but the branding is likely on-point. Most people seem to like pointless celebrities for some reason, though I was more interested in their no-code AI development platform, which will (eventually) open this customisation to the masses. Plus, they announced far faster and higher-quality AI image generation, which will be directly integrated into WhatsApp in the near future. Job concerns aside, this is the democratisation of creativity.

* The Zuckerbot asks, "why not ?". But I say, why Snoop Dogg when you could have Brian Blessed or Ian McKellen ? What's wrong with you, you weirdo ?

Finally there was a return of sorts to the metaverse with their smart glasses. These are much (!) cheaper, better looking and more functional than Google's ill-fated foray. They don't offer metaverse access in the conventional, image-based sense; they don't overlay any visual imagery on what you see. Instead they provide access to Meta's AI, allowing you to query what you see wherever you are (as well as recording and even streaming video and photographs). Although I don't personally want them, I think this is a pretty neat trick. It incorporates the digital world directly and instantly into the regular world, so in that sense they can be said to be metaverse-compliant.


What does all this mean ? For that we go to the "first metaverse interview" (I recommend watching the very beginning and then skipping to 30 minutes in). Watching this, I have to say I felt like Vash in Star Trek : The Next Generation, holding with awed reverence "a piece of the future". Now the interviewer is, it must be said, quite annoying, and half the content is just him saying how impressed he is... but... he might be right. If the uncanny valley hasn't quite been fully crossed, the expedition is well on its way up the far slope to the sunny uplands beyond. The degree of realism, the subtlety of expression and nuance of emotion the avatars convey, is far beyond anything I've seen before.

When we get to this quality more generally - and we're not there yet in any meaningful way, since this is just a proof of concept that takes hours to set up per person in a dedicated scanner - we have to confront Morpheus' admittedly-cliché question, "What is real ? How do you define real ?". It's easy to dismiss the metaverse as overhyped techbro nonsense when it's all dumb cutesy cartoon characters, but when you have something with such a visceral sense of presence, when it reaches that level of realism... then it's rather harder to ignore. And some commercial apps (like Brink Traveller) already reach comparable quality for static landscapes... so this is absolutely something we can do. It's going to happen, ready or not.

And that, as per the interview, really lays out the vision of the metaverse and what it means. It's the seamless integration of digital and physical reality. You just can't do that with a phone or desktop PC.

Now, feeling like I've soiled myself, I have to say I wholeheartedly buy into the Zuckerbot's vision of this (extremely serious caveats about whether I trust him notwithstanding). He's right to say we need physical reality, that going on hikes, being in nature, talking with friends, using physical hardware - this all matters very much. But he's also right to say that the digital world is meaningful too. Content isn't any less important because it exists only digitally. The right approach is to use the differences between the two to complement each other. Which is why I found the VR Oasis hands-on test of the Quest 3 (though the guy seems lovely) quite baffling in its claims that augmented reality isn't really that important compared to fully-immersive virtual reality.

As discussed in the interview, when you can have such a lifelike interview with someone, when you can connect with anyone in this way... why wouldn't you ? Zoom, it seems, won't last five minutes when such a technology is widespread. Likewise, the Quest 3 already allows you to bring digital objects into your reality in a semi-permanent way. The question here really is why not rather than why. Yes, absolutely, you need many objects to be real, solid, physical things. You can't have a VR toaster. But many others you don't. You want a pinball or snooker table, you don't need one. You want a full-size dinosaur skeleton, you don't need one. Unless you're an actual palaeontologist, I suppose, but even then, a virtual model sounds pretty useful to me !


The biggest potential impact of the Quest 3 is, for me, productivity. It appears to begin to breach several important thresholds : comfort (better weight distribution, and so narrow you can reportedly drink while wearing it), accessibility (no need to set up boundaries any more), performance, and flexibility (being able to switch freely and without hassle between real and digital worlds). But while I'm longing to set up Stranger Things portals to the upside-down all over my home, the thing that I think is potentially really impactful is, of all things, the integration with Microsoft Office. If I can have multiple windows acting as giant, lifelike monitors wherever I am, if it's comfortable enough to use for work for extended periods... that's a big deal. If I can keep different windows open showing me the figures I need to write about full-screen whilst still being able to write about them... that makes a real difference to how I work. How much of a difference, time will tell.

Of course the Quest 3 itself won't have us all wearing goggles the entire time. The photorealism of the avatars is still only a working prototype, not a mass-market consumer product. So no, the Quest 3 will not itself usher in the metaverse (no, you still don't need "metaverse officers" !)... but it's making it look an awful lot closer, and a lot less like the delusional pipe-dream of a typical Silicon Valley twatface. 

The end of the decade, the Zuckerbot says, is when we might have glasses rather than goggles for delivering genuine lifelike "holograms", when the digital and physical worlds collide. And for all his myriad faults, I really want to see what happens when they do. And I'm tired of the constant luddite anti-tech rhetoric. The optimistic vision is just so very much more appealing.
