Sister blog of Physicists of the Caribbean in which I babble about non-astronomy stuff, because everyone needs a hobby

Thursday, 31 May 2018

Mini Spinosaurus !

A spinosaur the size of a pony : My Little Spinosaurus, the hit new kids' cartoon.

Spinosaurus (meaning ‘spine lizard’) was the longest, and among the largest, of all known predatory dinosaurs, and possessed many adaptations for a semiaquatic lifestyle. It lived in what is now North Africa during the Cretaceous period, between 112 and 93.5 million years ago. The new specimen — the 21 mm-long pedal ungual phalanx (a phalanx supporting a claw of the foot) — is from the smallest known individual of this giant, sail-backed dinosaur. It was discovered in 1999 in the Kem Kem Beds of the Tafilalt region, south-eastern Morocco.

“Assuming the juveniles looked like smaller versions of the adults, the 21 mm-long claw phalanx from this small specimen would pertain to an early juvenile individual, 1.78 m-long, only just a little bit longer than the estimated length of the sole head of the largest adult Spinosaurus known to date,” the palaeontologists said.

http://www.sci-news.com/paleontology/smallest-spinosaurus-06052.html

Fairer wages as a solution to the free movement "problem" ?

It would have been better if the article had described the authors at the start. I expect politicians to say, "we're very proud of this legislation" but it was confusing as heck when I thought a journalist was doing it. The Indy may be biased, but it's not Gizmodo :P

Thus far, exploitative employers have been able to take advantage of out of date legislation regarding so-called “posted workers” – when an employee from one EU member state is sent by their employer to carry out a service in another temporarily. Despite obligations for employers to pay workers the minimum wage of the country in which they are working, loopholes in the legislation and enforcement have allowed undercutting, including the deduction from wages for travel and accommodation. Workers hired locally would normally be entitled to allowances to cover these costs as well as higher rates of pay. This had the consequence of making migrant workers more attractive to employers looking to cut costs, resulting in the undercutting of the local workforce.

The new rules voted this week in Strasbourg will prevent the exploitation of workers, bringing about equality between posted workers and their local co-workers. Now employers will be obliged to offer equal pay from the start of the posting, as well as the same allowances and reimbursement for travel and accommodation costs.

Definitely seems like a good thing but I'm unsure how it will play out in reality. On the one hand, this will discourage employers from hiring temporary foreign workers. On the other, there's now more incentive for workers in low-paid countries to try and get hired abroad since their effective pay will now be higher (presumably resident aliens aren't subject to these loopholes so aren't affected at all by this). If they stay in their home country, they'll be earning less than previously - and who replaces them in their former work location ? As to the Brexiteer stance I suspect it won't make the blindest bit of difference : if the workers stay home they'll complain about the sudden lack of goods and services; if they keep moving abroad, they'll complain the legislation was ineffectual.

Originally shared by Jenny Winder

The EU has just passed a law that could end the problems with free movement which led to #Brexit in the first place
Corporations will no longer be able to undercut local workers by exploiting migrants. #StopBrexitSaveBritain
https://www.independent.co.uk/voices/eu-brexit-uk-labour-laws-migrant-workers-a8375836.html

Would Sherlock Holmes' methods work in real life ?

Massimo Pigliucci, a philosopher who has long studied the Holmesian canon, describes Holmes’ method as a form of “eliminative induction,” because “deduction” is too limited to encompass it. Holmes, he says, actually works his way through inferences toward the best explanation by considering and discarding others. He uses whatever seems best for the occasion, picking “the right set of tools from a broad toolbox, depending on the characteristics of the problem at hand.” Holmes does use intuition, Pigliucci observes, but he combines it with time-tested practices.

There’s nothing wrong with forming a guiding hypothesis, as long as one stays flexible and accepts that it could change with new facts. The real problem lies with the arrogance of believing that one is always right or superior to anyone else, as well as deciding that someone is guilty before the evidence proves it.

I agree that we should be careful about some of Holmes’ statements. One that I've found troubling is, “Any truth is better than indefinite doubt.” Really? Any truth? This pressures us to contrive an explanation that sounds right and achieves closure. That’s how many investigations mess up. Better to leave an investigation open than to tie it up prematurely with something that might seem true.

I agree the statement is wrong, but it does say something interesting about psychology. Hardly anyone is content to have no explanation at all. They must always have something, even if it comes with massive caveats. Rickety bridges are acceptable, gaps are not.

I also quake at this one: “When you have eliminated the impossible, whatever remains, however improbable, must be the truth.” As if any given investigator can think of all possibilities and effectively eliminate all impossibilities. What if they don’t have all the facts or don't realize the limits of their own imaginations? They might erroneously decide they’ve done everything possible when they haven’t. I’ve seen this, too. It’s the fallacy of believing that all the facts you know are all the facts there are.

Which is a very important fallacy and nicely phrased. However, I read the original statement as being more about principle than method. Improbable things shouldn't necessarily be rejected, as the author explains :

To be fair, Holmes has some good advice, too, such as, “It is a capital mistake to theorize before one has data. Insensibly, one begins to twist facts to suit theories instead of theories to suit facts,” and “We balance probabilities and choose the most likely. It is the scientific use of the imagination.” He also claims that the “first rule of criminal investigation” is to “always look for a possible alternative and provide against it.”
https://www.psychologytoday.com/gb/blog/shadow-boxing/201805/sherlocks-curse

Wednesday, 30 May 2018

On the different kinds of infinities

Personally, I'm convinced this is an elaborate way of saying that infinity is silly and/or unpleasant.
https://www.youtube.com/watch?v=s86-Z-CbaHA

Kim Kardashian has written a scientific paper, but...

http://www.iflscience.com/technology/theres-a-glaring-problem-with-kim-kardashians-scientific-paper/

The importance of visualising data and user interfaces

Amen to this.

When tools to produce refined-looking graphics are only accessible to, or usable by, professionals and/or when expert-to-public translation leads to inaccuracies, a fear of elegant-looking graphics, and a concomitant exploratory-explanatory divide is understandable. The divide often leaves behind the aforementioned idea that exceptionally fancy graphics, and the time invested to make them, are only for public consumption, and not really useful for serious scientists or others pursuing deep quantitative analysis.

As the “democratization of data” continues, more and more services are making data behind both scholarly journal figures and public outreach graphics freely accessible. These open data sets represent a wealth of new information that researchers can combine with more traditional data acquisitions in their inquiries. If it’s quick and easy to get the data behind explanatory graphics, scientists will use those data, and learn more.

To generalise on that, take a particularly obvious lesson from nudge theory : software needs to be easy to use if people are going to use it. Visualisation is innately fun, but installing software is not. Interface design really matters - I'm far more likely to experiment with something if the basics just involve pressing a button. If I have to pause to write even ten lines of code, well, I'm not going to do that. Case in point : HI source extraction. If the only software available doesn't let you record detected galaxies very easily, you might go away thinking that human-based detection is very difficult. In reality it is not; it's simply the lack of a sensible recording interface that makes it tedious. Humans are great at this, but they get bored by having to write down long numbers. Visualisation software should make it as easy as possible for humans to do what they're good at and ease the burden of the less interesting tasks.

This problem is particularly acute in fields where everyone writes their own code. Also, on a related point, visualisation software should have a freakin' GUI. No, I don't want to have to type commands to generate a plot, that's just plain silly. Code should be used to manipulate data, and not - wherever possible - be used to visualise it. Major caveat : it should always be possible to access the underlying code for experimentation with non-standard, custom techniques. Modern versions of Blender make it very easy to access the appropriate commands to control each module, thus giving the best of both worlds.
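
A concrete example of that best-of-both-worlds design : in recent versions of Blender, every button press is mirrored by a Python call, which the Info editor displays as you work. A minimal sketch (assuming Blender 2.8+ and its bundled bpy module; parameter names from memory, so treat as approximate) :

```python
# Minimal sketch, assuming Blender 2.8+; run from Blender's own Python console.
import bpy

# The same operation as clicking Add > Mesh > UV Sphere in the GUI.
# Blender echoes exactly this call in its Info editor, so any GUI action
# can be copied straight into a script :
bpy.ops.mesh.primitive_uv_sphere_add(radius=1.0, location=(0.0, 0.0, 0.0))

# Scripted access to the result, for the non-standard custom tweaks
# no GUI could anticipate :
obj = bpy.context.active_object
obj.scale = (1.0, 1.0, 0.5)  # squash the sphere, e.g. to encode a data value
```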

Sometimes, even though tools permit easy explanatory-exploratory travel, sociology or culture prohibits it. By way of a very simple example, consider color. To a physicist portraying temperature, the color blue encodes “hot,” since bluer photons have higher energy, but in popular Western culture, blue is used to mean cold. So, a figure colored correctly for a physicist will not necessarily work for public outreach. Still, though, a physicist’s figure produced in an exploratory system like the one portrayed in Figure 2 would work fine as an explanatory graphic for other physicists reading a scholarly report on the new findings.

It might be interesting to have some app/website that lets people play with the raw data behind images to compile them themselves. After a while, you start to lose the bias against thinking that what you can see with your eyes is an especially privileged view of the Universe, e.g. http://www.rhysy.net/the-hydrogen-sky.html

In 2006, no one quite knew what a “data scientist” was, but today, those words describe one of the most in-demand, high-paying, professions of the 21st century. Data volume is rising faster and faster, as is the diversity of data sets available – both in the commercial and academic sectors. Despite the rise of data science, though, today’s students are typically not trained–at any level of their education–in data visualization. Even the best graduate students in science at Harvard typically arrive completely naive about what visualization researchers have learned about how humans perceive graphical displays of information.

Over the past decade or so, more and more PhD students in science fields are taking computer science and data science courses. These courses often focus almost entirely on purely statistical approaches to data analysis, and they foster the idea that machine learning and AI are all that is needed for insight. They do not foster the ideas that one of the 20th century's greatest statisticians, John Tukey, put forward about visualization: 1) having the potential to give unanticipated insight to later be followed up with quantitative, statistical, analysis; or 2) that algorithms can make errors easily discovered and understood with visualization.

Exactly. It's true that human pattern recognition is fallible. However, it's at least equally true that statistical analyses can be fallible too. Having an objective procedure is not at all the same as being objectively correct. Working in concert, visualisation and statistical measurements are more than the sum of their parts. Finding a pattern suggests new ways to measure data, which in turn forces you to consider what it is you're actually measuring.
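
Tukey's second point is worth a worked example. The classic one is Anscombe's quartet : datasets deliberately engineered so that the summary statistics agree while the plots scream disagreement. A quick sketch of my own, using two of the four sets :

```python
# Anscombe's quartet, abridged : two datasets with near-identical summary
# statistics that look completely different the moment you plot them.
import numpy as np

x  = np.array([10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5])
y1 = np.array([8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68])
y2 = np.array([9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74])

for name, y in [("linear-ish", y1), ("curved", y2)]:
    slope, intercept = np.polyfit(x, y, 1)   # least-squares straight line
    r = np.corrcoef(x, y)[0, 1]              # correlation coefficient
    print(f"{name}: mean = {y.mean():.2f}, slope = {slope:.2f}, r = {r:.2f}")

# Both lines print mean = 7.50, slope = 0.50, r = 0.82. The statistics
# cannot tell the two apart; a two-second scatter plot can.
```
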
https://arxiv.org/abs/1805.11300

Dinosaurs iiiiiinnnn spaaaaaaace !

I was very disappointed when I found that the OSIRIS-REx mission doesn't have a pharaonic space dinosaur logo.
https://www.asteroidmission.org/

One thing has become clear to me: Alien dinosaurs, while seemingly frivolous on the surface, are often potent projections of human fears and hopes about our place in space and time. So for all those seeking enlightenment via space dinosaurs, here’s a guide to the trope’s rich history.

https://motherboard.vice.com/en_us/article/nek7ed/dinosaurs-in-space-history-fiction

Sculpting with vantablack

Always fun to see. The sculpture mentioned in the video can be seen here :
https://www.sciencealert.com/vantablack-hyundai-pavilion-pyeongchang-winter-olympics-asif-khan

Then Surrey NanoSystems invented a sprayable version that blocks only visible light, laying down the nanotubes in a random configuration; and the VBx paints, that don't use nanotubes at all, but can be used for commercial applications. This latter product is what has been sprayed on the Hyundai pavilion.

When in shadow, the three-dimensionality of the walls of the pavilion seems to disappear. It's bristling with rods, the ends of which are lit to appear like stars against the extreme blackness.

"From a distance the structure has the appearance of a window looking into the depths of outer space," Khan said in a statement. "As you approach it, this impression grows to fill your entire field of view. So on entering the building, it feels as though you are being absorbed into a cloud of blackness."

Inside the building, the interior is the opposite - pure glossy white, made from Corian, a surface material more commonly seen in kitchen benchtops. The entire room is a water installation, with 25,000 water drops per minute flowing across the hydrophobic surfaces. Haptic sensors allow visitors to interact with the installation, changing the rhythm of the droplets as they collide and flow towards a central lake, which drains, and reappears, and drains again.

It sounds cool, and it is cool, but I felt that more could have been done with this.

https://www.youtube.com/watch?v=DI7tLclZyrE

Nobody puts Shetland in a box, for some stupid reason

Filed under U for "Uh-huh..."

Mr Scott wants a "Shetland mapping requirement" put down in law. This would require public authorities to display the islands "in a manner that accurately and proportionately represents their geographical location in relation to the rest of Scotland" whenever they publish a map of the country. Basically, it would bar public bodies from putting Shetland in a box - or force them to publish their reasons for why they feel they have to.

SNP backbencher John Mason - no stranger to cartographic controversy, given his previous insistence that Skye is not an island - said he was "enthusiastic" about the idea of banning boxes. This was echoed by SNP member Richard Lyle - via a brief diversion around his annoyance at the shape of the BBC weather map - and Scottish Conservative MSP Jamie Greene agreed that "nobody puts Shetland in a box".

However, fellow Tory Peter Chapman said he had consulted map specialists who thought it would be "inappropriate" to bar mapmakers from using boxes. He said this would reduce the scale of maps of the whole of Scotland by 40% - a "loss of detail" for much of the mainland just to add in "a whole chunk of sea".

Mr Scott has attempted to get his mapping rule onto the statute book via an amendment to the Islands (Scotland) Bill. Humza Yousaf, the Scottish government's minister for the islands, has accepted that this is a "really serious issue".

NO IT ISN'T !!!!

He told MSPs he was happy to write to public bodies encouraging them to avoid "incorrect, inaccurate depiction" - but warned that the wording of Mr Scott's proposed rule was too broad, potentially rendering it unenforceable.

Either that, or max it out and ban all non-Cartesian coordinate systems from all plots of any kind in any context whatsoever. That'll teach 'em.
http://www.bbc.com/news/uk-scotland-43574298

Tuesday, 29 May 2018

Slime mould designing a rail network

Slime mould : it can design rail networks and doesn't taste very nice.

Slime mold is a single-celled bit of goo that you’d find under a log in the woods. It’s also a master decision-maker, capable of weighing risk and reward in ways that make scientists question what intelligence really is. We grew slime mold of our own, and watched it tackle some amazingly complicated problems that could even help create better algorithms for self-driving cars.
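
For a flavour of what "designing a rail network" means computationally : the baseline such networks get compared against is the cheapest possible set of connections, a minimum spanning tree. A toy sketch of my own (nothing to do with the video's experiments) :

```python
# Cheapest way to connect a set of "cities" : a minimum spanning tree,
# built here with Kruskal's algorithm (toy coordinates, my own example).
import math
from itertools import combinations

cities = {"A": (0, 0), "B": (2, 0), "C": (2, 2), "D": (0, 3)}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

parent = {c: c for c in cities}
def find(c):
    """Follow parent links to the root of c's cluster."""
    while parent[c] != c:
        c = parent[c]
    return c

# Take edges shortest-first; keep an edge only if it joins two clusters.
edges = sorted(combinations(cities, 2),
               key=lambda e: dist(cities[e[0]], cities[e[1]]))
tree = []
for a, b in edges:
    ra, rb = find(a), find(b)
    if ra != rb:
        parent[ra] = rb
        tree.append((a, b))

print(tree)  # -> [('A', 'B'), ('B', 'C'), ('C', 'D')]
```

The famous result - slime mould recreating something like the Tokyo rail network - is interesting precisely because the mould doesn't build this bare-minimum tree : it adds a few redundant links, trading construction cost against resilience, much as the human engineers did.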

https://www.youtube.com/watch?v=40f7_93NIgA

Hurricane Maria's death toll revised upwards to 4,600

Nothing Trump has done has made me so physically angry as when he declared that everyone had a wonderful time at the emergency meeting where he threw paper towels into the audience.
http://www.bbc.com/news/world-us-canada-44294366

Progress in philosophy is not the same as scientific progress

What use, then, is philosophy of science if not for scientists themselves? I see the target beneficiary as humankind, broadly speaking. We philosophers build narratives about science. We scrutinize scientific methodologies and modelling practices. We engage with the theoretical foundations of science and its conceptual nuances. And we owe this intellectual investigation to humankind. It is part of our cultural heritage and scientific history. The philosopher of science who explores Bayesian [statistical] methods in cosmology, or who scrutinizes assumptions behind simplified models in high-energy physics, is no different from the archaeologist, the historian or the anthropologist in producing knowledge that is useful for us as humankind.

I think that again we should resist the temptation of assessing progress in philosophy in the same terms as progress in science. To start with, there are different views about how to assess progress in science. Is it defined by science getting closer and closer to the final true theory? Or in terms of increased problem-solving? Or of technological advance? These are themselves philosophical unsolved questions.

What is the overarching aim of science? Does science aim to provide us with an approximately true story about nature, as realism would have it? Or does science instead aim to save the observable phenomena without necessarily having to tell us a true story, as some antirealists would contend instead?

I don't like this term "save"; I think "explain" would be much better.

The distinction is crucial in the history of astronomy. Ptolemaic astronomy was for centuries able to “save the observable phenomena” about planetary motions by assuming epicycles and deferents [elaborations of circular motions], with no pretense to give a true story about it. When Copernican astronomy was introduced, the battle that followed — between Galileo and the Roman Church, for example — was ultimately also a battle about whether Copernican astronomy was meant to give a “true story” of how the planets move as opposed to just saving the phenomena.

We can ask exactly the same questions about the objects of current scientific theories. Are coloured quarks real? Or do they just save the empirical evidence we have about the strong interaction in quantum chromodynamics? Is the Higgs boson real? Dark matter?

https://www.quantamagazine.org/questioning-truth-reality-and-the-role-of-science-20180524/

Compare the.... mongoose

Woe ! If only this had been about meerkats, for then we would have meerkats that compare the market, validating almost a decade of advertising : https://en.wikipedia.org/wiki/Compare_the_Meerkat

Dr. Kern said: "We began by using detailed natural observations collected over many months to show that individuals who perform lots of sentinel duty also receive lots of grooming and are well-positioned in the group's social network. But, to prove a causal link, we needed to nail a tricky field experiment."

Professor Radford added: "Over three-hour periods when groups were foraging, we simulated extra sentinel behaviour by a subordinate group member using playbacks of its surveillance calls—vocalisations given to announce it is performing this duty. At the sleeping burrow that evening, we monitored all grooming events, especially those received by the individual who had had their sentinel contribution upregulated."

The researchers found some striking results. On days when an individual was perceived to conduct more sentinel duty, it received more evening grooming from groupmates than on control days (when its foraging calls had been played back during the preceding foraging session). Moreover, the individual who had had its sentinel contribution upregulated received more grooming than a control subordinate in the group.

Market trade was once considered the domain of humans but the exchange of goods and services is now widely recognised in other animals. What the new research shows is that mongooses have sufficient cognitive ability to quantify earlier acts of cooperation and to provide suitable levels of delayed rewards.
https://phys.org/news/2018-05-mongooses-reward-friends.html

The difference between imagination and belief

QUESTION: Two people in a room are asked to imagine a lion. One closes her eyes and says: ‘Yes, now I see it; it’s walking around; I see the mane and the tail.’ The other person runs screaming from the room. Which one is imagining a lion?

ANSWER: Clearly the person running out of the room is imagining a lion. The other is imagining a movie of a lion. There are no sense organs in our brains; if there were, we would see nerves and not lions. To imagine something is to behave in the absence of that thing as you would normally do in its presence – as you would do if you perceived that thing. Thus, as Aristotle said, imagination depends on perception. Actors on the stage are performing acts of imagination. Good acting is not a consequence of good imagination but is itself good imagination.

No, no, no. The difference between the two people isn't fundamentally because of how good their imaginations are. It's that the person who runs away actually believes a lion is there. They might also have a vivid mental picture of the lion but that's not the same as believing their mental perceptions are true. Instead they have a delusional, hallucinatory belief - the kind of idea that possesses the mind rather than one the mind possesses, as someone once put it. It's entirely possible to imagine a lion in immaculate detail, from the flecks of dirt in its fur to the hot saliva dripping off its teeth, even secondary effects like the sense of fear the beast would engender, without actually believing that it's real. Conversely one could be utterly terrified of a fairly shoddy imagining of a lion if one believed it was real. Imagination, belief and consequent behaviour are different things. Imagination may depend on perception but belief doesn't.

I insisted that ‘I love you’ is a trite and overused, and therefore meaningless, phrase. I added that if they didn’t know I loved them from my behaviour, nothing I said would make them believe it. My wife said: ‘Well, say it anyway.’ I refused and added: ‘But I’ll tell you what I will do. I’ll say “You love me,” which is neither trite nor meaningless, but based on extensive behavioural observation.’ Then we exchanged ‘You love me’ all around. We were in a much better place than if we had said ‘I love you’: we each had unmistakable evidence for what we said – that we were loved – and could be truly believed.

They're a bloody daft lot if you ask me. Inasmuch as we can know anything at all, we can know our own opinions. Other people are much harder to gauge. True, their behaviour constitutes extremely important evidence as to their opinions. But what they say their opinion is also matters. Combine both and you've got the best possible evidence. If we state our opinion of someone else's opinion, we're only going to believe them if they react appropriately. So we end up back where we started, i.e. expressing our own opinions. Anyway, this particular sort of belief is hardly one that's based on rational evidence, so it's a silly example.
https://aeon.co/ideas/teleological-behaviourism-or-what-it-means-to-imagine-a-lion

Some ramblings about consciousness and plants

One of Gizmodo's better articles, because it's entirely quotes from different experts and not written in their usual, highly irritating, "here's what you must think about this" style.

Consciousness is something where our language seems inadequate to the task. We all know what it is, but defining it is nearly impossible. "Awareness" could refer to any kind of sensory stimuli, but that doesn't adequately convey the experience of thinking - and it can also be used more casually to refer to consciousness itself. The thing about thinking is that it's very hard, probably impossible, to prove that other people (much less plants or computers) are having a similar experience. Oh, we can measure neural activity to the nth degree, but that won't tell us anything about what it's like to be someone else or a cat or a flower or a piece of mouldy cheese.

We don't need external senses to be conscious anyway, at least not all the time, because clearly we do a lot of thinking without reference to what our current senses are telling us. I'm writing this against the backdrop of G+, which currently shows pictures of a bird, the moon, and a spam post in a badly-moderated community. I'm not thinking very much about those while I'm writing this.

Recently I was thinking about blindsight, where people receive visual input from their eyes but it isn't processed by their conscious brain. They can accurately respond to those stimuli, but they aren't conscious of them.
http://www.bbc.com/future/story/20150925-blindsight-the-strangest-form-of-consciousness
I'm wondering if this isn't as strange as it first seems : maybe we're all doing this almost, but not quite, constantly. I don't mean that we have weird mystical woo-woo hunches based on mysterious ESP or anything like that. Rather I mean that when we start imagining things (when we're awake and, in the case of visual thoughts, with our eyes open), we're no longer conscious of the external world in quite the same way. Some other part of the brain takes over so we don't trip over ourselves when we go to get a cup of tea and start thinking about data reduction or exploding whales or whatever. In true daydreaming, rather than more casual thoughts, this process is taken much further, but even with random mental images it seems to take hold. At least it does for me anyway. Blindsight, then, only seems strange because it's permanent and specific to one sense, rather than regular (but fleeting) and generic to all senses as in the case where our minds wander.

This kind of mental perception is normally distinctly different from our conscious perception; it's far more dreamlike. But, this all being an extremely fuzzy spectrum, you can have dreams and thoughts which are very close (or even indistinguishable from) conscious perception, and states of consciousness which are more dreamlike. So, consciousness can't be awareness or perception in that sense. It has little to do with the external world in itself.

Memory and knowledge are of no help either because we haven't got much in the way of a good definition of either of them. Take the overflow pipe in my bath. When the water level is too high, it drains away. This very simple device responds in a predictable way to an external stimulus. Does it "know" the water level is too high ? Does it "remember" what to do in the event of too much water ?

Of course not, but it's a very short step into realms of total confusion. Create a more complex device that responds to multiple events, or code that chains together multiple if-then rules. At what point do we say that they are making a choice ? Are we just elaborate versions of overflow pipes : an incredibly complex set of variables that nonetheless run according to fixed, inviolable rules ? If so, in what sense are we conscious whereas plants and pipes and bits of dandruff are not ? Does a calculator or a flower have the sort of internal perception we ascribe to ourselves ? How could we ever prove it ? What sort of consciousness is it ethically acceptable for us to destroy, i.e. why eat plants but not animals ? Why smash calculators but not kill people ?
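
To make the slippery slope explicit, here's the overflow pipe rendered as code, along with a marginally fancier cousin - my own trivial illustration :

```python
# The overflow pipe as code : no memory, no goals, no choice... probably.
def overflow_pipe(water_level, pipe_height=0.5):
    """Drains whenever the level exceeds the pipe. Does it 'know' ?"""
    return "drain" if water_level > pipe_height else "hold"

def fancier_pipe(water_level, temperature, valve_open):
    """A few more if-then rules. Still just rules - but it already
    starts to look suspiciously like 'deciding'."""
    if valve_open and water_level > 0.5:
        return "drain"
    if temperature < 0:
        return "heat"  # stop the pipe freezing
    return "hold"

print(overflow_pipe(0.7))            # -> drain
print(fancier_pipe(0.7, -3, False))  # -> heat
```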

The Universe is very confusing and I don't like it. For my part I throw up my hands and declare, "Bugger if I know. Look, I'm conscious, you're conscious, he's conscious (even if he's a bit of a dick), that fluffy rabbit is conscious, that rock isn't but I ain't sure about the suspicious-looking daffodil in the back. What's that ? You ask why ? Because I said so, that's why."
https://gizmodo.com/are-plants-conscious-1826365668

Monday, 28 May 2018

Training to shoot to kill : why battlefield casualties used to be (generally) much lower

I'd appreciate any of the people with military experience in my circles telling me if this is bollocks or not.

Summary : (Roughly) pre-20th century, battlefield casualty rates from firearms were far below what the accuracy of the guns would allow. The reason given (which I've heard elsewhere) is that most people, even with that sort of military training, don't naturally shoot to kill. They threaten and posture to get the other side to go away, but (barring conditions of direct, immediate threat to their own lives) they don't actually want to kill them.

In most circumstances only about 2% deliberately shoot to kill when not threatened themselves. Of those, about half are psychopaths who have no empathy for other human beings. The other half do so out of a sense of familial protection for their troops : they don't want the other side to die so much as they want their own side to live.

During the 20th century, training methods were devised to circumvent the natural desire to leave the other side alone. The most effective is to generate a conditioned response, whereby the instinctive, reflex action of a soldier is to shoot the enemy on sight. They aren't thinking, they're just acting instinctively. Through such techniques the shoot to kill fraction reaches close to 100%. There will, I suppose, be some (small ?) selection effect as to who enlists in the first place.
https://www.youtube.com/watch?v=zViyZGmBhvs

Testing the Trolley problem in real life

The definitive answer to the "what would you actually do" part of the Trolley Problem. I was rather surprised by the result.

Originally shared by Martin Krischik

Quite an interesting experiment. The outcome was almost predictable, but still interesting to see confirmed. I also have to say that even though I knew it was just a test, the video of the train rushing towards the people and the decision to be made made me anxious.

https://www.youtube.com/watch?v=1sl5KJ69qiA

#science #psychology #mentalhealth

A large dog called Mittens

This is clearly a wereelchupadirewolf.
http://www.bbc.com/news/world-us-canada-44243644

Europe doesn't want to develop a better rocket because that would hurt jobs

Ahhh.... so now we see why rocket development has been in a rut for so long. It's like why no-one wants to build an everlasting light bulb. With this attitude, Ariane could disappear remarkably quickly.

"Let us say we had ten guaranteed launches per year in Europe and we had a rocket which we can use ten times—we would build exactly one rocket per year," he said. "That makes no sense. I cannot tell my teams: 'Goodbye, see you next year!'"

You could tell them to develop a better rocket instead.

https://arstechnica.com/science/2018/05/ariane-chief-seems-frustrated-with-spacex-for-driving-down-launch-costs/

Strawberry-picking robots because we really don't have enough people for that

Octinion's arm is mounted on a self-driving trolley. It reaches up from below and, using 3D vision, grips a ripe berry between two cushioned plastic paws. The gripper then turns the fruit by 90 degrees to snap it off its stalk, mimicking the technique a human picker would use. The prototype is picking one strawberry every four seconds, says Mr Coen, and depending on the cultivar, will collect between 70% and 100% of the ripe fruit - results that he says make it competitive with human pickers.

Cambridge-based start-up Dogtooth is taking a different approach. Founders Duncan Robertson and Ed Herbert have just returned from Australia where they've been testing a picker that delivers berries with a centimetre or so of stem still attached, the way UK retailers prefer, because it extends shelf life. Dogtooth is cautious about giving away too much about how its robot works, but like Octinion it is based around robotic arms mounted on a mobile platform.

It uses computer vision to identify ripe fruit and machine learning to evolve efficient picking strategies. After picking, the robot grades berries to determine their size and quality, and places them directly into punnets. Dogtooth also prides itself in working around the needs and current practices of UK growers. So while Octinion's machine will only work on fruit grown on raised platforms, usually in polytunnels, Dogtooth's will pick traditional British varieties in the field.

Robots can operate at all times of the day or night - harvesting during the chillier night hours can dramatically lengthen shelf life and avoid bruising. But developers emphasise the motivation is not to replace migrant labour with cheaper, more efficient robots. In fact, it's not proving easy to replicate the standards that human pickers deliver. Strawberry farmers say they are increasingly struggling to find people to do the work. They need the robots.

http://www.bbc.com/news/business-43816207

Kilauea's spectacular eruption

Be sure to check out the image gallery. Via Ciro Villa.

Lava from Hawaii's Kilauea volcano has reached a geothermal power plant, approaching wells that have been capped to protect against the release of toxic gas should they mix with lava. 
http://nbcbay.com/3haqRzV

Sunday, 27 May 2018

Can everything be analysed in equations ?

Originally shared by Event Horizon

"No matter how far mathematics progresses and no matter how many problems are solved, there will always be, thanks to Gödel, fresh questions to ask and fresh ideas to discover. It is my hope that we may be able to prove the world of physics as inexhaustible as the world of mathematics... If it should turn out that the whole of physical reality can be described by a finite set of equations, I would feel disappointed."

- F. J. Dyson. Infinite in all Directions. London: Penguin Books, 1990, p. 53.

The idea that Gödel's logical insights might reflect deeply upon physics is fascinating. I am still working hard to wrap my cortex fully around the logic and the theorems but, as I understand it, applied in this context of physical laws this suggests that there would always be further and deeper iterations and recombinatory organisational constellations of physical models and theory. This strikes me as absolutely and utterly beautiful.

Friday, 25 May 2018

The food chain in action

There's always a bigger fish, except for when it's an eagle.
http://www.bbc.com/news/av/world-us-canada-44250472/fox-catches-rabbit-then-eagle-swoops-in

Big tech falls foul of GDPR almost instantly

Complaints have been filed against Facebook, Google, Instagram and WhatsApp within hours of the new GDPR data protection law taking effect. The companies are accused of forcing users to consent to targeted advertising to use the services. Privacy group noyb.eu led by activist Max Schrems said people were not being given a "free choice". If the complaints are upheld, the websites may be forced to change how they operate, and they could be fined.

In its four complaints, noyb.eu argues that the named companies are in breach of GDPR because they have adopted a "take it or leave it approach". The activist group says customers must agree to having their data collected, shared and used for targeted advertising, or delete their accounts. This, the organisation suggests, falls foul of the new rules because forcing people to accept wide-ranging data collection in exchange for using a service is prohibited under GDPR.

"The GDPR explicitly allows any data processing that is strictly necessary for the service - but using the data additionally for advertisement or to sell it on needs the users' free opt-in consent," said noyb.eu in a statement. "GDPR is very pragmatic on this point: whatever is really necessary for an app is legal without consent, the rest needs a free 'yes' or 'no' option."

Privacy advocate Max Schrems said: "Many users do not know yet that this annoying way of pushing people to consent is actually forbidden under GDPR in most cases."
http://www.bbc.com/news/technology-44252327

Does true AI require magic ?

To be fair, Google Duplex doesn’t literally use phrase-book-like templates. It uses “machine learning” techniques to extract a range of possible phrases drawn from an enormous data set of recordings of human conversations. But the basic problem remains the same: No matter how much data you have and how many patterns you discern, your data will never match the creativity of human beings or the fluidity of the real world. The universe of possible sentences is too complex. There is no end to the variety of life — or to the ways in which we can talk about that variety.

Well, yes and no. Yes, because AI is really just a hugely elaborate metaphorical stereotype of a parrot. It's just repeating and matching up stuff. I'm not seeing the merest glimmer that it in any way resembles an actual parrot, which has some understanding of what it's doing and why. So, yes to the part about it not really being intelligent. But no to the part about creativity : even a purely mechanical device that rapidly explores parameter space could be said to be (in some basic sense) hugely creative, and potentially much faster than any human. The problem is - like exploring an infinite random number - extracting anything from it that's not simply gibberish.
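
For what I mean by mechanical creativity, here's a maximally dumb generator (my own toy, nothing to do with Duplex) : it explores sentence-space tirelessly, and nearly everything it finds is junk :

```python
# A maximally dumb "creative" machine : random exploration of sentence-space.
import random

random.seed(1)
WORDS = ["the", "parrot", "eats", "green", "ideas", "furiously", "sky"]

for _ in range(3):
    print(" ".join(random.choices(WORDS, k=5)))

# Run long enough, it will emit every five-word string over this vocabulary,
# including the good ones. Fishing those back out of the gibberish is the
# hard part - and that filtering is where the actual intelligence lives.
```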

From a purely materialistic perspective, AI is undeniably possible. It just requires some careful arrangement of atoms to create a perfect synthetic brain, be that made of silicon or otherwise. It could even potentially emerge from pure code and be hardware-independent. In that sense we already have true (though crude) AI; it's simply experiencing the world in a very different way to ours. Our brains are nothing more than incredibly complex and sophisticated calculators, in that view. Thoughts and feelings are merely electrical currents and chemical reactions; whether that means a calculator has some kind of experience of the world, or if it's somehow only our own specific types of electrochemical reactions that give rise to this, I don't know.

In an idealist view, we may artificially create something that approximates intelligence but always remains strictly artificial in nature. It might become a very convincing fake, but it would never have the sort of understanding a living animal has. In that view, true AI is impossible - there is some other substance to mind and thought that cannot be synthesised. The only way to create it (or in some views, create receivers that can interpret it) is by, well, fucking. Goodness knows what would happen if you could synthesise a zygote atom by atom... In this perspective, thoughts and feelings have some weirder, more mystical aspect than physical constructs.

I'm not gonna tell you which (if any) of these I most align with. Though this always reminds me of the question Alexander the Great supposedly asked an oracle :
"How does a man become a god ?"
To which the response was, "By doing something a man cannot do."

https://www.nytimes.com/2018/05/18/opinion/artificial-intelligence-challenges.html

Different animals have different kinds of intelligence

This one could fit equally well in the politics collection but it's going in this one. Animal intelligence is super interesting in its own right :

“[Pigeons have] knocked our socks off in our own lab and other people’s labs in terms of what they can do,” said Edward Wasserman, a professor of experimental psychology at the University of Iowa. “Pigeons can blow the doors off monkeys in some tasks."

“Learning sets” are a great example of this. First developed by Harry Harlow, learning sets were, essentially, a test of how well a subject could learn to learn. Scientists might give an animal the choice of two doors to open, one of which had food hidden behind it. Then they’d do the same test, over and over, always with the food behind the same door. Humans did better at this than chimps, Wasserman said, and chimps did better than rhesus monkeys, which did better than bush babies. It looked like some kind of cross-species hierarchy of IQ was emerging out of the data. “But then people studied blue jays and I’ll be damned if the blue jays didn’t look better than half the mammals tested,” he said.

Birds and mice, for instance, both do better than people on tests of how quickly an individual can navigate a maze and remember the locations of objects, said Louis Matzel, a psychology professor at Rutgers. That doesn’t mean those mice are smarter than men. It just means that’s a skill their species had more need of.

Which has profound implications :

It’s useless to extend human rights to animals, he told me, because that’s inevitably going to be based on a hierarchy model of intelligence. Instead, we should be giving animals animal rights — starting with the right to a wild habitat. The biggest threat to any species is habitat destruction, he said. So he suggested we start there, and give each species the right to its native place to live, instead of trying to attach human rights to something that is not human. “How about we accord other species the respect they deserve?” Wasserman said.

God knows how we even define "human", let alone how we extend this to human rights.

https://fivethirtyeight.com/features/humans-are-dumb-at-figuring-out-how-smart-animals-are/

Thursday, 24 May 2018

No summit for you

US President Donald Drumpf has cancelled a summit with North Korean leader Kim Jong-un, blaming "tremendous anger and open hostility" from the North. He said it was possible a meeting could still take place but warned North Korea against committing "foolish" acts.

The meeting would have discussed ways of denuclearising the Korean peninsula, building on a historic North-South Korea summit in April. The "unexpected" decision, Pyongyang said, was "extremely regrettable". In a statement released by the North's central news agency, Vice-Foreign Minister Kim Kye Gwan said the country held Mr Drumpf's efforts to hold a summit "in high regards".

Mr Drumpf said he had been "very much looking forward" to meeting Mr Kim. "Sadly, based on the tremendous anger and open hostility displayed in your most recent statement, I feel it is inappropriate, at this time, to have the long-planned meeting," Mr Drumpf said in a letter to Mr Kim. "You talk about your nuclear capabilities, but ours are so massive and powerful that I pray to God they will never have to be used," he added. But he called the meeting a "missed opportunity", saying "someday, I look very much forward to meeting you".

[Subsequently, in keeping with Trump's established behaviour of being a woefully inconsistent idiot, the summit went ahead a month later.]

http://www.bbc.com/news/world-us-canada-44242558

Real numbers are not really real

Thinking too much about infinity is very bad for your health. Just say no. Or, as is popularly expressed, "Shut up and calculate !".

... one single real number can contain the answers to all (binary) questions one can formulate in any human languages. To see this it suffices to realize that there are only finitely many languages, each with finitely many symbols. Hence, one can binarise this list of symbols (as routinely done in today’s computers) and list all sequences of symbols, first the sequences containing only a single symbol, next those containing two symbols, and so on.

When they represent a question whose answer is yes, we set these bits to 01 and if the answer is no we set them to 10. This procedure is not efficient at all, but who cares : since a real number has infinitely many bits, there is no need to save space! Hence, one can really code the answers to all possibly (binary) questions in one single real number. This illustrates the absurdly unlimited amount of information that real numbers contain. Real numbers are monsters!

That's an elaborate, though interesting, way of saying you can write down an answer to any question but it might be wrong.
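
To make the quoted construction concrete, here's a toy version - my own sketch, not the paper's : enumerate every finite string over a tiny alphabet, let a dummy "oracle" answer each one, and pack the answers into the binary expansion of a single number :

```python
# Toy version of "all answers in one real number". The alphabet and the
# oracle are stand-ins; only the encoding scheme comes from the paper.
from itertools import product

ALPHABET = "ab?"  # stand-in for "finitely many symbols"

def questions():
    """All finite strings, shortest first - a countable enumeration."""
    length = 1
    while True:
        for s in product(ALPHABET, repeat=length):
            yield "".join(s)
        length += 1

def oracle(question):
    """Hypothetical yes/no answerer (here : a dummy rule)."""
    return question.count("?") % 2 == 0

# Per the quoted scheme : 01 encodes "yes", 10 encodes "no".
bits = ""
gen = questions()
for _ in range(8):
    bits += "01" if oracle(next(gen)) else "10"

print("0." + bits + " (binary) encodes the first 8 answers")
```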

Furthermore, everyone knows also that each stored bit requires some space. Not much, possibly soon only a few cubic nanometre, but definitively some finite volume. Consequently, assuming that information has always to be encoded in some physical stuff, a finite volume of space can not contain more than a finite amount of information. At least, this is a very reasonable assumption.

Consider a small volume, a cubic centimetre let’s say, containing a marble ball. This small volume can contain but a finite amount of information. Hence, the centre of mass of this marble ball can’t be a real number (and even less 3 real numbers), since real numbers contain - with probability one - an infinite amount of information.

Which is a variation on asking whether reality is continuous or discrete.

But the fact that a finite volume of space can’t contain more but a finite amount of information implies that the centre of mass of any object should not be identified with mathematical real numbers. Real numbers are useful tools, but are only tools. They have no ontological reality - no physical reality.

The way I see it, mathematics is a language. Like all languages it describes physical and other (i.e. conceptual) realities, but it is not the same as those realities themselves. It's a description : sometimes startlingly accurate, but at other times perhaps flawed. Maybe it's not mathematics itself that's flawed, but our poor linguistic skills : e.g. it can take infinite or finite time to cross an event horizon depending on your coordinate system. As to whether physical reality has infinite continuity and precision or is somehow discrete, I've no idea.

The view I am suggesting is that the first bits in the expression of x are really real while the very far away bits are totally random... One may object that this view is arbitrary as there is no natural bit number where the transition from real to random bits takes place. This is correct, though not important in practice as long as this transition is far away down the bit series. In the classical case, one may imagine that the very far down the series of bits of “real” numbers in binary format are totally random

I'd say that's tremendously important ! You can't just arbitrarily decide that very long numbers are so inconvenient that they must actually have a finite but unspecified length.

The main idea here is to introduce randomness into classical physics. Some finite length of the infinitely precise real numbers describes actual physical parameters, whereas the rest (if I understand this correctly) in some way provides a random element. Like the idea that reality is a simulation, I place this firmly in the category of, "genuinely very interesting, but I hate it."

https://arxiv.org/abs/1803.06824
https://www.sciencenews.org/article/real-numbers-physics-free-will

Tuesday, 22 May 2018

Using algae to recapture fertiliser that would normally be washed away

If well-distributed throughout agricultural areas, algal turf scrubbing facilities can recapture the nutrients that run off of farms before they accumulate downstream and cause unwanted blooms. The algal biomass can be recycled locally as a slow release fertilizer, reducing the amount of mineral fertilizer that needs to be imported or manufactured and returning organic carbon to the soil, reversing centuries of damage. It can be used as a feed in agriculture and aquaculture, and as a source of biofuel.

About 20 million hectares (47 million acres) are needed worldwide to completely recycle agricultural nutrient pollution. This is no small task. One percent of the global economy would need to be diverted to algae farming. In exchange, algae farmers would restore ecological balance, basin by basin, reversing damage to their soils and reducing the use of mined and petroleum-based nutrients, rescuing fish stocks by reducing or eliminating algae blooms and dead zones. In the process, 10% of excess atmospheric carbon would be captured in the algal biomass.

https://sciencetrends.com/algal-turf-scrubbing-creating-helpful-not-harmful-algal-blooms/

An interstellar asteroid ? I'm far from convinced

Using computer simulations, Connors and his colleagues found in 2017 that Bee-Zed's orbit has been stable over the past million years. The discovery took Namouni and Morais by surprise; their previous work had suggested that orbits like BZ509's could only last 10,000 years or so. To push these results further, Namouni and Morais built a model of our solar system in its current layout. They then sprinkled in a million virtual “clones” of Bee-Zed, each with a slightly tweaked version of the asteroid's observed orbit, and ran the simulation for the virtual equivalent of 4.5 billion years.

Many of the clones eventually collided with the sun or were ejected from the solar system. Half of them lasted less than seven million years. But 46 of the clones were stable over the lifetime of the solar system—and 27 of them closely resemble Bee-Zed's current loop.

For humans to have a statistical chance of seeing Bee-Zed, Namouni and Morais argue that the asteroid must have been in a highly stable orbit for 4.5 billion years. But if Bee-Zed has orbited the sun since the solar system's childhood, how did it end up going backward? After considering and rejecting a variety of potential explanations, they say it must be an interstellar interloper.

I may or may not actually read the paper. Until then, based on the press release, this is daft. All their results say is that asteroids on this sort of orbit can be stable. That's all. I don't see any reason whatsoever that the asteroid in question must always have been on that orbit since the Solar System's formation : it's possible, but that's no reason to think that it actually happened. Maybe it's been in this orbit for a long time, but why assume the full length of the stability ? Why can't it have been there for say, 500 Myr ? More importantly, to estimate the chance of a detection you'd need to know the rate at which asteroids are likely to be scattered into similar orbits. A low probability of an orbital capture could be countered by a high number of asteroids.

Then there's the preferred explanation of the object being interstellar. Is it more likely that such an asteroid should enter this orbit than one from our own Solar System ? I'm not seeing any obvious reason why this should be the case. We need a probability comparison at the very least. And since the flux of interstellar asteroids is only known to be greater than zero, I don't see how that's currently possible.
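
For what it's worth, the clone technique itself is simple enough to sketch. Something like the following toy version, using the REBOUND N-body package - my own guess at the setup, with invented perturbation sizes and a drastically shortened timescale, so purely illustrative :

```python
# Toy "clone" experiment in the spirit of the study - NOT the authors' code.
import numpy as np
import rebound

rng = np.random.default_rng(42)
n_clones, n_survived = 50, 0  # the real study used a million clones

for _ in range(n_clones):
    sim = rebound.Simulation()          # units : G = 1, AU, solar masses
    sim.add(m=1.0)                      # the Sun
    sim.add(m=9.5e-4, a=5.2, e=0.049)   # Jupiter
    # A BZ509-like test particle : co-orbital with Jupiter but retrograde,
    # with small random tweaks standing in for observational uncertainty.
    sim.add(a=5.2 + rng.normal(0.0, 0.005),
            e=0.38 + rng.normal(0.0, 0.005),
            inc=np.radians(163.0) + rng.normal(0.0, 0.001))
    sim.move_to_com()
    sim.exit_max_distance = 50.0        # treat distant wanderers as ejected
    try:
        sim.integrate(2.0 * np.pi * 1.0e5)  # ~1e5 yr, not 4.5 Gyr !
        n_survived += 1
    except rebound.Escape:
        pass

print(f"{n_survived}/{n_clones} clones survived the (toy) integration")
```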

https://news.nationalgeographic.com/2018/05/interstellar-asteroid-jupiter-bz509-astronomy-space-science/

The EM drive is just stupid for crying out loud

Let us hear no more of this.

Of course, we will anyway. Just like cold fusion, a group of diehards will rant and rave about closed-minded scientific priesthood quashing their glorious rebellion no matter how many times they fail to build a working spaceship.


Originally shared by Winchell Chung

Some scientific pros built and tested an Em Drive.
On the one hand it showed thrust. On the other hand it showed the exact same thrust when the blasted engine was turned off.

Turns out the Earth's magnetic field interferes with the measuring system; the Em Drive does not work.
https://arstechnica.com/science/2018/05/nasas-em-drive-is-a-magnetic-wtf-thruster/

A Skylon-based Mars mission

A Skylon-based Mars mission, via Winchell Chung. Well, a man can dream... more realistically, perhaps Britain shall acquire the dubious honour of being the first country to develop and abandon an orbital launcher not once but twice ! Huzzah ! Is there nothing we can't do badly ? I suspect not.

The propulsion section has three stages: the Earth Departure Stage (EDS), the Mars Transfer Stage (MTS) and the Earth Return Stage(s) (ERS). An automated uncrewed precursor mission delivers a habitat module and power supplies to the Martian surface and establishes orbital facilities two years before the crewed mission departs. Of course the second mission only departs after all the assets perform self-checkouts and report success to Terra. The assets are not just to assist the mission, they are emergency back-up in case the crewed ship malfunctions and the crew has to shelter in place on Mars until a rescue mission arrives.

The Earth Departure Stage is designed to be reusable, so it can send off both the precursor and the primary spacecraft. It boosts the spacecraft from LEO to just short of escape velocity. It separates and allows the spacecraft to continue to Mars. The EDS is now in a highly elliptical synchronous orbit with respect to the Troy Operation Base Orbit, it uses that orbit to return. Meanwhile the Mars Transfer Stage burns to complete spacecraft insertion into Mars transfer orbit.

A three ship mission would not cost three times as much, due to the economy of scale. Two ships provide great redundancy, three ships allow up to 90% of the Martian surface to be explored. True, it would need three precursor missions instead of one, but it would be cheaper than the Apollo missions. Apollo involved the launch of 30,000 metric tons to put 18 astronauts near Luna (12 of whom landed on the surface) over a period of four years.

http://www.projectrho.com/public_html/rocket/realdesigns.php#projecttroy

A fluffy remake of Dune

It's like Dune, but the sandworms are dogs and the Fremen are ducklings. You know it makes sense.
http://www.bbc.com/news/av/uk-england-cambridgeshire-44202975/fred-the-labrador-takes-nine-ducklings-under-his-paw

Witness two lynx shouting at each other for some reason

Ed Trist lives off-grid in the Ontario bush, about 17 kilometres from the closest town, but a video he posted on Facebook has thrust him into a global spotlight. Trist had been planning to catch a few minnows for fishing but instead caught a rare sighting of two lynx squaring off and wailing at one another.

"It was really bizarre," he said, adding that it's not unusual to see a lynx around the area, but the typically reclusive cat will usually be gone in a flash. "You just get a quick glimpse of them and they're outta there. Two of them, together, headbutting each other and squaring off? It's extremely rare."

Trist, his girlfriend Nicole Lewis and his daughter were in an off-road vehicle, heading down a logging road in Avery Lake, 150 kilometres directly east of Kenora, when they came across the cats. They stopped about nine metres from the lynx and watched for about 10 minutes, "and they didn't even care that we were there," Trist said. When the trio decided to keep going, they drove right past the cats, who remained focused on one another.

http://www.cbc.ca/news/canada/manitoba/lynx-video-avery-lake-kenora-1.4671909

Monday, 21 May 2018

The untold story of the Copernican revolution

When we learned in school about the Copernican Revolution, we did not hear about arguments involving star sizes and the Coriolis Effect. We heard a much less scientifically dynamic story, in which scientists like Kepler struggled to see scientifically correct ideas triumph over powerful, entrenched, and recalcitrant establishments. Today, despite the advances in technology and knowledge, science faces rejection by those who claim that it is bedeviled by hoaxes, conspiracies, or suppressions of data by powerful establishments.

But the story of the Copernican Revolution shows that science was, from its birth, a dynamic process, with good points and bad points on both sides of the debate. When the usual story of the Copernican Revolution features clear discoveries, opposed by powerful establishments, we should not be surprised that some people expect science to produce quick, clear answers and discoveries, and see in scientific murkiness the hand of conspiratorial establishments. We might all have a more realistic expectation of science’s workings if we instead learned that the Copernican Revolution featured a dynamic scientific give and take, with intelligent actors on both sides—and with discoveries and progress coming in fits and starts, and sometimes leading to blind alleys such as Kepler’s giant stars. When we understand that the simple question of whether the Earth moved posed scientifically challenging problems for a very long time, even in the face of new ideas and new instruments, then we will understand better that scientific questions today may yield complex answers, and those only in due course.

http://nautil.us/issue/60/searches/the-popular-creation-story-of-astronomy-is-wrong

Expecting the unexpected

When the midcentury psychologists Jerome S Bruner and Leo Postman presented test subjects with brief views of playing cards, including some non-standard varieties – such as a red two of spades, or a black ace of diamonds – many people never called out the incongruities. They reported that they felt uneasy for some reason but often couldn’t identify why, even though it was literally right before their eyes.

So, crucially, some understanding of the expected signal usually exists prior to its detection: to be able to see, we must know what it is we’re looking for, and predict its appearance, which in turn influences the visual experience itself. The process of perception is thus a bit like a Cubist painting, a jumble of personal visual archetypes that the brain enlists from moment to moment to anticipate what our eyes are presenting to us, thereby elaborating a sort of visual theory. Without these patterns we are lost, adrift on a sea of chaos, with a deeply unsettling sense that we don’t know what we are looking at, yet with them we risk seeing only the familiar. How do we learn to see something that is truly new and unexpected?

Because of the complexity of both visual experience and scientific observation, it is clear that while seeing might be believing, it is also true that believing affects our understanding of what we see. The filter we bring to sensory experience is commonly known as cognitive bias, but in the context of a scientific observation it is called prior knowledge. To call it prior knowledge does not imply that we are certain it is true, only that we assume it is true in order to get to work making predictions.

If we make no prior assumptions, then we have no ground to stand on. The quicksand of radical and unbounded doubt opens beneath our feet and we sink, unable to gain purchase. We remain forever at the base of the sheer rock face of the world, unable to begin our climb. Yet, while we must start with prior knowledge we take as true, we must also remain open to surprise; else we can never learn anything new. In this sense, science is always Janus-headed, like the ancient Roman god of liminal spaces, looking simultaneously to the past and to the future. Learning is essentially about updating our biases, not eliminating them. We always need them to get started, but we also need them to be open to change, otherwise we would be unable to exploit the new vistas that our advancing technology opens to view... The iterative bootstrapping of learning-to-see, then seeing-to-learn, continues apace.

https://aeon.co/essays/seeing-is-not-simple-you-need-to-be-both-knowing-and-naive

A visitor from another star who's here to stay ?

I'm surprised this isn't getting more coverage. At the same time, I wouldn't really put much faith in it. The evidence that this is from another star system is based purely on simulations of solar system formation, and given the problems of getting planets to form in the right places I wouldn't particularly trust such results when it comes to an individual asteroid.

The latest discovery marks the first time an asteroid that appears to be a permanent member of our solar system has been revealed as having its origins in another star system. ‘Oumuamua, an asteroid spotted hurtling through our solar system earlier this year, was only on a fleeting visit.

Known as asteroid 2015 BZ509, the permanent visitor is about 3km across and was first spotted in late 2014 by the Pan-Starrs project at the Haleakala Observatory in Hawaii. Experts quickly realised the asteroid travelled around the sun in the opposite direction to the planets – a retrograde orbit.

Writing in the Monthly Notices of the Royal Astronomical Society, Namouni and co-author Dr Helena Morais from São Paulo State University in Brazil describe how they developed a new computer model that allowed them to produce a million possibilities for the asteroid’s orbit, each with tiny differences, and trace their evolution.

To the team’s surprise, the results reveal that the asteroid’s orbit appears most likely to have remained very similar and linked to Jupiter for 4.5bn years – in other words, since the end of planet formation. “That was completely unexpected,” said Namouni.

The discovery provides vital clues as to the asteroid’s origins. “It couldn’t be debris of the solar system because at 4.5bn years, all objects, planets, asteroids, comets in the solar system are going around the solar system in the same direction,” he said, adding that the model suggests the most likely explanation is that the asteroid was captured by Jupiter as it hurtled through the solar system from interstellar space. “It means it is an alien to the solar system,” he said.

[But this is not evidence that the asteroid has been stable for that long, it's just a simulation showing that this scenario is compatible with observations. We have no way of knowing how long this asteroid has actually been in its present orbit.]
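The clone method itself is easy to sketch, though. Here's a minimal toy version in Python - the orbital elements and uncertainties below are placeholders of my own invention, not the measured orbit of 2015 BZ509, and the actual backwards integration would need a proper N-body code :

```python
# Toy illustration of the "orbital clones" approach described above: start
# from nominal orbital elements, generate many clones perturbed within the
# observational uncertainties, then (in a real study) integrate each clone
# backwards in time. All numbers here are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Nominal elements (made-up values): semi-major axis [AU],
# eccentricity, inclination [deg].
nominal = {"a": 5.1, "e": 0.38, "i": 163.0}
# Assumed 1-sigma observational uncertainties (also made up).
sigma = {"a": 0.001, "e": 0.0005, "i": 0.01}

n_clones = 1_000_000
clones = {k: rng.normal(nominal[k], sigma[k], n_clones) for k in nominal}

# Each clone would now be handed to an N-body integrator; here we just
# confirm the clone cloud has the intended spread.
for k in nominal:
    print(f"{k}: mean = {clones[k].mean():.4f}, std = {clones[k].std():.5f}")
```

The reported result is then a statistic over the clone cloud - something like the fraction of clones that remain in the retrograde configuration linked to Jupiter for the full 4.5 billion years.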

https://www.theguardian.com/science/2018/may/21/retrograde-asteroid-is-interstellar-immigrant-scientists-say?CMP=share_btn_gp

Evaluating the effectiveness of evaluations

Peer review is a waste of everyone's time, study finds

That's the misleading clickbaity headline I'd give to this mostly dry, technical, very detailed paper on whether reviewers are accurately able to evaluate proposals or not. Alternatively :

Yo Dawg, We Herd You Like Reviews, So We Reviewed Your Review Of Reviews So You Can Review While You Review

Aaaannnyway.....

This paper looks at the review process used by the European Southern Observatory to allocate telescope time. There's some good stuff in here, but my goodness it's buried in a lot of (I think unnecessarily complex) statistics. They investigate what they call the "True Grade Hypothesis", which is something familiarly Platonic :

For any given proposal a true grade does exist. The true grade can be determined as the average of the grades given by a very large number of referees.

Which is basically what everyone is assuming throughout the whole process.

The first part of the hypothesis is obviously debatable, as it implicitly assumes that an absolute and infinitely objective scientific value can be attached to a given science case. It does not take into account, for instance, that a proposal is to be considered within a certain context and cannot be judged in a vacuum. Most likely, a proposal requesting time to measure the positions of stars during a total solar eclipse would have been ranked very highly in 1916, but nowadays it would probably score rather poorly. The science case is still very valuable in absolute terms, but is deprived of almost any interest in the context of modern physics.

The second part of the hypothesis is also subject to criticism, because it implicitly assumes that referees behave like objective measurement instruments. This is most likely not the case. For instance, although the referees are proactively instructed to focus their judgement [no-one will ever convince me you can spell it "judgment", that is plainly ludicrous] on the mere scientific merits, it is unavoidable that they (consciously or unconsciously) take into account (or are influenced by) other aspects. Among these are the previous history of the proposing team, its productivity, its public visibility, the inclusion of certain individuals, personal preferences for specific topics, and so on.

What they find is that the TGH does seem to work, but the scatter is very high. Even for the supposedly best and worst proposals, the refereeing process is only a bit more consistent than random, and it's not really any better at all for the average proposals... in terms of grading, though it does do quite a lot better in terms of ranking.

Even so, their results agree with previous findings that the success of an observing proposal is about half down to the quality of the proposal and half due to luck. However, if you add more referees, results do converge. There might be an optimum number of referees to balance the benefits against the extra resources needed.
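That convergence is easy to see in a toy model. Here's a minimal sketch of the True Grade Hypothesis in Python, assuming each referee reports the proposal's true grade plus independent Gaussian noise; the grade scale, noise level and panel sizes are my own illustrative choices, not numbers from the paper :

```python
# Minimal sketch of the "True Grade Hypothesis" (TGH): each proposal has a
# true grade, and each referee reports that grade plus independent noise.
# All numbers below (grade scale, noise level) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)

n_proposals = 200
true_grades = rng.uniform(1.0, 5.0, n_proposals)  # 1 = best, 5 = worst
referee_noise = 1.0   # std dev of an individual referee's error (assumed)

for n_referees in (1, 3, 6, 12, 24):
    # Each referee's grade = true grade + Gaussian error; panel grade = mean.
    grades = true_grades[:, None] + rng.normal(0, referee_noise,
                                               (n_proposals, n_referees))
    panel_mean = grades.mean(axis=1)
    # How well does the panel's average recover the true grades?
    corr = np.corrcoef(true_grades, panel_mean)[0, 1]
    print(f"{n_referees:2d} referees: correlation with true grade = {corr:.2f}")
```

With noise of this size a single referee is noticeably worse than a large panel, and the panel mean's error shrinks as the square root of the panel size - which is exactly why adding referees helps, and also why it gets expensive quickly.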

Of course a major caveat here is that observers write proposals knowing that they will be reviewed. I'd bet any sum of money you like that if you were to peer review proposals whose authors had been told they weren't being peer reviewed, the peer review system would perform twenty bajillion times better than chance alone.

This is the first paper of two, focusing on the review stage when referees individually analyse their assigned proposals. The second paper will look at the effects of the committee meeting where each reviewer explains their reasoning to the others and the final ranking.


https://arxiv.org/abs/1805.06981

Audio illusions

I hear "Laurel" most clearly one bar to the left. Beyond that it's still Laurel, just more distorted. One bar right of centre and I here Yanny. Further to the right and I get Yaylee.

It's like those moments in conversations between mixed nationalities where one side insists the other is making a teeny-tiny mispronunciation that creates a completely different word, except now you can do it on demand with a slider. I imagine it's only a matter of time before someone spoofs the scene in Contact where they first hear the alien signal and it's this...
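Incidentally, you can fake the slider at home by tilting the clip's spectrum. A rough sketch, assuming you've saved the audio locally as yanny_laurel.wav (a made-up filename) and taking at face value the claim that it's the balance of high-frequency content that flips the percept :

```python
# Crude imitation of the NYT slider: split the clip into low and high bands
# and rescale the high band. Filename, cutoff and gains are all guesses.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

rate, audio = wavfile.read("yanny_laurel.wav")  # hypothetical local file
if audio.ndim > 1:          # keep one channel if the file is stereo
    audio = audio[:, 0]
audio = audio.astype(np.float64)

def tilt(signal, rate, cutoff_hz=1500.0, high_gain=1.0):
    """Rescale everything above cutoff_hz by high_gain.

    high_gain < 1 suppresses the high band (nudging toward 'Laurel');
    high_gain > 1 boosts it (nudging toward 'Yanny').
    """
    sos_lo = butter(4, cutoff_hz, btype="low", fs=rate, output="sos")
    sos_hi = butter(4, cutoff_hz, btype="high", fs=rate, output="sos")
    return sosfilt(sos_lo, signal) + high_gain * sosfilt(sos_hi, signal)

for name, gain in [("laurel_ish.wav", 0.2), ("yanny_ish.wav", 4.0)]:
    out = tilt(audio, rate, high_gain=gain)
    out = np.clip(out, -32768, 32767).astype(np.int16)
    wavfile.write(name, rate, out)
```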

Originally shared by Charles Filipponi

The NYT is not what it used to be, or maybe it never was. Anyway, as with anyone who loves information, I take it as it comes.

And while I was sitting in a hotel room the other day, I was surprised to see a Twitter storm involving "Laurel" v. "Yanny".

People have a lot of time on their hands.

I clearly heard "Laurel" but the explanations of frequency content are trivially true. The surprise is that the terms are so different - just adding some higher frequency content (rather than modulating the lower content as well) seems like an unlikely reason for such a disparity. [I did not hear "Yanny" until the slider was on the last tick before the end on the right.]

But here is the NYT with something one can play with - it adds or subtracts some content. The truly interesting thing here is the "Once you see it, you can't not see it" thing. This is part and parcel of the brain's template business.

I can slide it over to produce "Yanny" but sliding it back does not produce "Laurel" where it once did. So there is more than frequency content at play.

Kind of interesting. #Laurel

https://www.nytimes.com/interactive/2018/05/16/upshot/audio-clip-yanny-laurel-debate.html

Saturday, 19 May 2018

The world is getting better, but at a ferocious cost

In short :
The world is getting materially, technologically, and generally socially better. In many ways it is a stupendous, staggering improvement on previous centuries. But... this has come at a tremendous cost. Resources are being used at an unsustainable, ultimately destructive rate, changing the natural ecosystem which sustains us. Absolute wealth has increased, but wealth inequality has grown. Some improvements have happened because of capitalism but others have occurred in spite of it : the market didn't decide by itself that racism is bad.

I see this as a cautionary note that progress has been made, and should be celebrated, but we shouldn't take it for granted and ought to be extremely careful about deciding the reasons for improvements where they have occurred.


I agree with much of what Pinker has to say. His book is stocked with seventy-five charts and graphs that provide incontrovertible evidence for centuries of progress on many fronts that should matter to all of us: an inexorable decline in violence of all sorts along with equally impressive increases in health, longevity, education, and human rights. It’s precisely because of the validity of much of Pinker’s narrative that the flaws in his argument are so dangerous. They’re concealed under such a smooth layer of data and eloquence that they need to be carefully unravelled. That’s why my response to Pinker is to meet him on his own turf: in each section, like him, I rest my case on hard data exemplified in a graph.

... Taken together, these graphs illustrate ecological overshoot: the fact that, in the pursuit of material progress, our civilization is consuming the earth’s resources faster than they can be replenished. Overshoot is particularly dangerous because of its relatively slow feedback loops: if your checking account balance approaches zero, you know that if you keep writing checks they will bounce. In overshoot, however, it’s as though our civilization keeps taking out bigger and bigger overdrafts to replenish the account, and then we pretend these funds are income and celebrate our continuing “progress.” In the end, of course, the money runs dry and it’s game over.

Pinker claims to respect science, yet he blithely ignores fifteen thousand scientists’ desperate warning to humanity. Instead, he uses the blatant rhetorical technique of ridicule to paint those concerned about overshoot as part of a “quasi-religious ideology… laced with misanthropy, including an indifference to starvation, an indulgence in ghoulish fantasies of a depopulated planet, and Nazi-like comparisons of human beings to vermin, pathogens, and cancer.”

He is pleased to tell us that “racist violence against African Americans… plummeted in the 20th century, and has fallen further since.” What he declines to report is the drastic increase in incarceration rates for African Americans during that same period (Figure 3). An African American man is now six times more likely to be arrested than a white man, resulting in the dismal statistic that one in every three African American men can currently expect to be imprisoned in their lifetime. The grim takeaway from this is that racist violence against African Americans has not declined at all, as Pinker suggests. Instead, it has become institutionalized into U.S. national policy in what is known as the school-to-prison pipeline.

Contrary to popular belief about rising global inequality, [the graph] seemed to show that, while the top 1% did in fact gain more than their fair share of income, lower percentiles of the global population had done just as well. It seemed to be only the middle classes in wealthy countries that had missed out. This graph, however, is virtually meaningless because it calculates growth rates as a percent of widely divergent income levels. Compare a Silicon Valley executive earning $200,000/year with one of the three billion people currently living on $2.50 per day or less. If the executive gets a 10% pay hike, she can use the $20,000 to buy a new compact car for her teenage daughter. Meanwhile, that same 10% increase would add, at most, a measly 25 cents per day to each of those three billion.

For decades, the neoliberal mantra, based on Preston’s Curve, has dominated mainstream thinking—raise a country’s GDP and health benefits will follow. Lutz and Kebede show that a more effective policy would be to invest in schooling for children, with all the ensuing benefits in quality of life that will bring.

Looking back into history, Pinker recognizes that changes in moral norms came about because progressive minds broke out of their society’s normative frames and applied new ethics based on a higher level of morality, dragging the mainstream reluctantly in their wake, until the next generation grew up adopting a new moral baseline. “Global shaming campaigns,” he explains, “even when they start out as purely aspirational, have in the past led to dramatic reductions in slavery, dueling, whaling, foot-binding, piracy, privateering, chemical warfare, apartheid, and atmospheric nuclear testing.”


It is hard to comprehend how the same person who wrote these words can then turn around and hurl invectives against what he decries as “political correctness police, and social justice warriors” caught up in “identity politics,” not to mention his loathing for an environmental movement that “subordinates human interests to a transcendent entity, the ecosystem.” 

https://patternsofmeaning.com/2018/05/17/steven-pinkers-ideas-about-progress-are-fatally-flawed-these-eight-graphs-show-why/

Friday, 18 May 2018

Thinking is hard and language is imperfect

Turning CO2 into rocks

Since experiments began in 2014, it's been scaled up from a pilot project to a permanent solution, cleaning up a third of the plant's carbon emissions. "More importantly, we are a testing ground for a method that can be applied elsewhere, be that a power plant, heavy industries or any other CO2 emitting source", says Dr Aradottir.

The process starts with the capture of waste CO2 from the steam, which is then dissolved into large volumes of water. "We use a giant soda-machine", says Dr Aradottir as she points to the gas separation station, an industrial shed that stands behind the roaring turbines. "Essentially, what happens here is similar to the process in your kitchen, when you are making yourself some sparkling water: we add fizz to the water".

The fizzy liquid is then piped to the injection site - an otherworldly, geometric igloo-shaped structure 2km away. There it is pumped 1,000m (3,280ft) beneath the surface. In a matter of months, chemical reactions will solidify the CO2 into rock - thus preventing it from escaping back into the atmosphere for millions of years.
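The chemistry behind that solidification - my gloss, not the article's - is basalt carbonation : the dissolved CO2 forms carbonic acid, the acid leaches calcium, magnesium and iron out of the volcanic rock, and those metals then lock the carbon away as stable carbonate minerals. Roughly :

CO2 + H2O → H2CO3
H2CO3 → H+ + HCO3-
(Ca, Mg, Fe)2+ + HCO3- → (Ca, Mg, Fe)CO3 + H+

Basalt is unusually rich in those divalent metals, which is a large part of why mineralisation runs so much faster here than in the sedimentary reservoirs most CCS projects inject into.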

"Before the injection started in CarbFix, the consensus within the scientific community was that it would take decades to thousands of years for the injected CO2 to mineralise", says Prof Gislason explains. "Then we found out that it was already mineralised after 400 days".

Some critics warn high-tech fixes such as this one pose a bigger risk - that of distracting researchers and the public from the pressing need drastically to reduce greenhouse gas levels. In a recent report, the European Academies Science Advisory Council warned that such technologies have "limited realistic potential" if emissions are not reduced. "CarbFix is not a silver bullet. We have to cut emissions and develop renewable energies, and we have to do CCS too," says Prof Gislason. "We have to change the way we live, which has proved very hard for people to understand."

Either way, we're probably going to need some way to remove the carbon we've already put in the atmosphere.
http://www.bbc.com/news/world-43789527

Your ancestors were horrible people, deal with it

One of the few things I will state with certainty is that someone, somewhere in your ancestry, was a complete and utter jerk.

Mendelsohn, a journalist, author and passionate genealogist, has been using people's public family history to beat back some of the uglier claims about immigrants and how they fit into US history. She calls it #resistancegenealogy, and it only takes a few online tools and some instinctive sleuthing for her to call out public figures who oppose common forms of immigration.

Most recently, the movement has set its eyes on Stephen Miller, President Donald Drumpf's top policy adviser. Miller was an architect of the administration's poorly-received "zero-tolerance" immigration policy, as well as Drumpf's controversial 2017 "travel ban" that affected some Muslim-majority countries.

"I challenge any news organization here: Do a poll, ask these questions," Miller said, after saying he thought voters would want the immigration system changed. "Do you think we should favor applicants to our country who speak English, yes or no?... Do you think we should prioritize people based on skill?" Turns out, Miller is a descendant of immigrants who did not speak English, according to Mendelsohn's research. She unearthed that tidbit after said press briefing, when she found a 1910 Census record that she said notes the language skills of Miller's immigrant great-grandmother.

This week, another document regarding Miller's family began to recirculate: His great-grandfather's naturalization test, which the genealogist who found it last summer points out was rejected due to "ignorance."


https://www.cnn.com/2018/01/24/us/immigration-resistance-genealogy-jennifer-mendelsohn-trnd/index.html

REM sleep may or may not be tremendously important

Creativity is apparently the internet's flavour of the month. This one is also interesting with regards to the purpose of sleep.

As you start to fall asleep, you enter non-REM sleep. That includes a light phase that takes up most of the night, and a period of much heavier slumber called slow-wave sleep, or SWS, when millions of neurons fire simultaneously and strongly, like a cellular Greek chorus. “It’s something you don’t see in a wakeful state at all,” says Lewis. “You’re in a deep physiological state of sleep and you’d be unhappy if you were woken up.”

During that state, the brain replays memories. For example, the same neurons that fired when a rat ran through a maze during the day will spontaneously fire while it sleeps at night, in roughly the same order. These reruns help to consolidate and strengthen newly formed memories, integrating them into existing knowledge. But Lewis explains that they also help the brain extract generalities from specifics—an idea that others have also supported.

This process happens all the time, but Lewis argues that it’s especially strong during SWS because of a tight connection between two parts of the brain. The first—the hippocampus—is a seahorse-shaped region in the middle of the brain that captures memories of events and places. The second—the neocortex—is the outer layer of the brain and, among other things, it’s where memories of facts, ideas, and concepts are stored. Lewis’s idea is that the hippocampus nudges the neocortex into replaying memories that are thematically related—that occur in the same place, or share some other detail. That makes it much easier for the neocortex to pull out common themes.

“Some people argued that slow wave sleep is important for creativity and others argued that it’s REM. We’re saying it’s both.” Essentially, non-REM sleep extracts concepts, and REM sleep connects them. Crucially, they build on one another. The sleeping brain goes through one cycle of non-REM and REM sleep every 90 minutes or so. Over the course of a night—or several nights—the hippocampus and neocortex repeatedly sync up and decouple, and the sequence of abstraction and connection repeats itself. “An analogy would be two researchers who initially work on the same problem together, then go away and each think about it separately, then come back together to work on it further,” Lewis writes.

Which is all very interesting but there's a major weakness here :

People can be deprived of REM sleep without suffering from any obvious mental problems. One Israeli man, for example, lost most REM sleep after a brain injury; “he’s a high-functioning lawyer and he writes puzzles for his local newspaper,” Lewis says. “That is definitely a problem for us.”
https://www.theatlantic.com/science/archive/2018/05/sleep-creativity-theory/560399/

Creative people don't have mood disorders

TLDR : Nope, not really. There's so much subjectivity and bias in measuring both creativity and mood disorders that they're almost immeasurable, never mind looking for a correlation between them. What evidence there is can be equally well explained by completely different factors. For example, on people in creative occupations having more mood disorders :

The creative occupations considered in these studies are overwhelmingly in the arts, which frequently provide greater autonomy and less rigid structure than the average nine-to-five job. This makes these jobs more conducive to the success of individuals who struggle with performance consistency as the result of a mood disorder... Therefore, it is possible that many people who suffer from mood disorders gravitate towards these types of professions, regardless of creative ability or inclination.

And to skip ahead to the conclusions :

Believing that creativity is due to some underlying, uncontrollable factor reinforces the idea that few people are capable of true creativity, which prevents many from realising their own potential. It also undermines the skill and effort that creative endeavours require, if we can simply chalk it up to the consequence of a disorder. And the connection between mood disorders and creativity influences the very way we view the creative work of others: university students who were told the story of Van Gogh cutting off his ear before they examined his painting Sunflowers (1888) took a more favourable view of it than those who weren’t told the story. Similarly, students priced a piece of artwork higher when a fictitious artist’s biography briefly mentioned that he was ‘often described as very eccentric’.

There's an obvious similarity to the myth of the lone genius, of course. People like to believe that if someone has an exceptional skill or ability then they must be compensationally mediocre in other ways : "Assuredly, no one man has been blessed with all God's gifts. You, Hannibal, know how to gain a victory; you do not know how to use it." And frankly, people who are amazingly talented, successful and attractive are frickin' annoyingly rare. Yet I'll vouch that the popular idea of scientists being socially awkward misanthropes is both damaging and untrue - it gives people an excuse to reject, shall we say, inconvenient truths (perhaps more interesting in this context is whether anyone can be good at mathematical/scientific analysis and whether the lone genius myth prevents them from achieving that). Artists, I'll venture, probably don't live up to the moody dishevelled stereotype much either (though I'll bet the poverty one is more accurate, but I hope to be wrong about that too).
https://aeon.co/essays/is-there-any-evidence-linking-creativity-and-mood-disorders

Thursday, 17 May 2018

The Guardian on faux rage

This is essentially the Guardian's version of that other article about social media being so much faux rage. Ignore the headline, it's useless.

There is a through-line to these spurts of emotion we get from spectatorship: the subject matter is not important. It could be human rights abuse or a party-wall dispute; it does not matter, so long as it delivers a shot of righteous anger. Bile connects each issue. We see rage and we meet it with our own, always wanting more.

...There are elements of the human emotional journey that are novel and are driven by modern conditions. Aaron Balick, a psychotherapist and the author of a perceptive and surprisingly readable academic account, The Psychodynamics of Social Networking, says: “I think for sure anger is more expressed. What you see of it is a consequence of emotional contagion, which I think social media is partly responsible for. There’s an anger-bandwagon effect: someone expresses it and this drives someone else to express it as well.” Psychologically speaking, the important thing is not the emotion, but what you do with it; whether you vent, process or suppress it.

“A hysterical emotional response is when you’re having too much emotion, because you’re not in touch with the foundational feeling. An example would be office bitching. Everybody in the office is bitching and it becomes a hysterical negativity that never treats itself; nobody is taking it forwards.” This has the hammer thud of deep truth. I have worked in only a couple of offices, but there was always a gentle hubbub of whinging, in which important and intimate connections were forged by shared grievance, but it was underpinned by a deliberate relinquishing of power. You complained exactly because you did not intend to address the grievance meaningfully.

Social media has given us a way to transmute that anger from the workplace – which often we do not have the power to change – to every other area of life. You can go on Mumsnet to get angry with other people’s lazy husbands and interfering mothers-in-law; Twitter to find comradeship in fury about politics and punctuation; Facebook for rage-offs about people who shouted at a baby on a train or left their dog in a hot car. These social forums “enable hysterical contagion”, says Balick, but that does not mean it is always unproductive. The example he uses of a groundswell of infectious anger that became a movement is the Arab spring, but you could point to petitions websites such as 38 Degrees and Avaaz or crowdfunded justice projects. Most broad, collaborative calls for change begin with a story that enrages people.

[See, the thing about complaining is that it can be a genuinely effective way of coping with pressure. Keeping things bottled up is not healthy, but being told to keep them bottled up when you need to vent is even worse. There are a great many faux-rage discussions on social media that would be absolutely fine in the real world, i.e. between relevant colleagues, but are absolutely useless when talking to a wider audience. Sometimes a good rant about some personal issue does you good, especially on smaller social media where you're never likely to attract anyone outside your social circle anyway. You'll get your sympathy, feel better, and move on. The problems start when it reaches a wider audience who see it as all part of the downfall of civilisation or whatnot, and take the damn thing far more seriously than they ought to : instead of sympathy you get a call to righteous anger to fix something which is apparently now a major problem in society.]

https://www.theguardian.com/science/2018/may/16/living-in-an-age-of-anger-50-year-rage-cycle?CMP=Share_iOSApp_Other

Review : Pagan Britain

Having read a good chunk of the original stories, I turn away slightly from mythological themes and back to something more academical : the ...