Reproducing the letter in full :
Dear Mrs May
I am in France having a break having come here on the train all the way from Settle. I just read your letter to me and the rest of Britain wanting us all to unite behind the damp squib you call a deal. Unite? I laughed so much the mouthful of frogs legs I was eating ended up dancing all over the bald head of the bloke on the opposite table.
Your party’s little civil war has divided this country irreparably. The last time this happened Cromwell discontinued the custom of kings wearing their heads on their shoulders.
I had a mother who was of Irish descent, an English father who lies in a Dutch graveyard in the village where his Lancaster bomber fell in flames. I had a Polish stepfather who drove a tank for us in WW2 and I have two half Polish sisters and a half Polish brother who is married to a girl from Donegal.
My two uncles of Irish descent fought for Britain in N Africa and in Burma.
So far you have called us Citizens Of Nowhere and Queue Jumpers. You have now taken away our children and grandchildren’s freedom to travel, settle, live and work in mainland Europe.
You have made this country a vicious and much diminished place. You as Home Sec sent a van round telling foreigners to go home. You said “illegal” but that was bollocks as the legally here people of the Windrush generation soon discovered.
Your party has sold off our railways, water, electricity, gas, telecoms, Royal Mail etc until all we have left is the NHS and that is lined up for the US to have as soon as Hannon and Hunt can arrange it.
You have lied to the people of this country. You voted Remain yet changed your tune when the chance to grab the job of PM came. You should have sacked those lying bastards Gove and Bojo but daren’t because you haven’t the actual power.
You have no answer to the British border on the island of Ireland nor do you know how the Gib border with Spain will work once we are out.
Mrs May you have helped to divide this country to such an extent that families and friends are now no longer talking to each other, you have managed to negotiate a deal far worse than the one we had and all to keep together a party of millionaires, Eton Bullingdon boys, spivs and WI harridans. Your party conserves nothing. It has sold everything off in the name of the free market.
You could have kept our industries going with investment and development – Germany managed it. But no – The Free Market won so Sunderland, Barnsley, Hamilton etc could all go to the devil.
So Mrs May my answer to your plea for unity is firstly that it is ridiculous.
48% of us will never forgive you for Brexit and secondly, of the 52% that voted for it many will not forgive you for not giving them what your lying comrades like Rees Mogg and Fox promised them.
There are no unicorns, there is no £350 million extra for the NHS. The economy will tank and there will be less taxes to help out the poor. We have 350,000 homeless (not rough sleepers – homeless) in one of the richest countries on Earth and you are about to increase that number with your damn fool Brexit.
The bald man has wiped the frogs legs off his head, I’ve bought him a glass of wine to say sorry; I’m typing this with one finger on my phone in France and I’m tired now and want to stop before my finger gets too tired to join the other one in a sailors salute to you and your squalid Brexit, your shabby xenophobia and Little Englander mentality.
Two fingers to you and your unity from this proud citizen of nowhere. I and roughly half the country will never forgive you or your party.
https://www.thelondoneconomic.com/news/politics/this-brilliant-response-to-theresa-mays-brexit-letter-is-going-viral/26/11/
Wednesday, 28 November 2018
Tuesday, 27 November 2018
Writing fake news in order to expose fake news, for some reason
Completely bizarre and woefully misguided. Worth reading in its entirety but here's the condensed version.
Blair spent more than two decades as a construction worker, a trade that took a toll on his body. In the late 2000s, when the Great Recession hit and his industry slumped, he started looking for another source of income in liberal political blogging. But although it was fun and a few people started reading, blogging didn’t pay. And so he tried another tactic. He began to write fabricated tales that looked like real news headlines. Streams of consciousness flowed from his head to the keyboard.
When he saw the results online, the hundreds and thousands of likes and shares his posts were getting, he felt validated. Far more people were interested in fake news than Blair’s opinions or true stories.
He took on the personas of patriotic Americans outraged at President Obama, liberals, feminists, the Black Lives Matter movement and more. He delighted in people who took the lies for the truth and shared the stories as if they had come from real news websites. The success of the fakes led Blair to create a Facebook page called America’s Last Line of Defense. It was dedicated to fake news stories aimed at staunch Republicans and supporters of President Donald Drumpf.
The headlines were sensational and sometimes even offensive. They had one aim - to provoke an emotional response that would get people to share them. Blair himself describes the headlines he wrote as “racist and bigoted”. But they went viral - and those shares turned into clicks, which turned into cash.
Satire - not news, not opinion, and not propaganda - is how Blair [a committed liberal Democrat] describes his work. His aim is to trick conservative Americans into sharing false news, in the hope of showing what he calls their “stupidity”.
“We’ve gone out of our way to market it as satire, to make sure that everybody knows that this isn’t real,” he says, pointing out that some of his pages have more prominent disclaimers than the world-famous satire site The Onion.
Once his stories go viral, the Facebook comments burst forth. And that’s when Christopher Blair the fake news writer becomes Christopher Blair the crusading left-wing troll. The faker becomes the exposer, weeding out and reporting the most extreme users among his fans.
“I can show you hundreds of profiles we’ve had taken down,” he says. He claims that he’s exposed Ku Klux Klan members and hardcore racists. “We’ve had people fired from their jobs,” he says. “We’ve exposed them to their families. Say what you want about me being a monster. I’m pretty proud.”
Yeah, sure, expose and shame the chronic lunatics, but could you please do that without feeding the fires of hyperpartisan politics ? If you have to fake news - and you don't - couldn't you write fake stories that would work in favour of your (professed) own liberal policies rather than against them ?
Mad as clams.
https://www.bbc.co.uk/news/resources/idt-sh/the_godfather_of_fake_news
A radar satellite images some choice targets
Sydney Harbour and the Egyptian pyramids feature in the debut images from the first all-UK radar spacecraft. NovaSAR was developed jointly by Surrey Satellite Technology Limited of Guildford and Airbus in Portsmouth, and launched to orbit in September.
Its pictures are now being assessed for use in diverse applications, including crop analysis, flood and forestry mapping, and maritime surveillance. The intention ultimately is to fly a constellation of NovaSAR-like sats. Such a network would enable repeat images of locations to be acquired more quickly - something that is important if changes detected in a scene require a rapid response. Reacting to an oil spill at sea would be a good example.
https://www.bbc.com/news/science-environment-46312874
Lost tribes : we should learn from their mistakes, not idolise them
Well the main lesson at the moment is obviously, "don't fuck with 'em".
Dr Plotkin, who heads the Amazon Conservation Team advocacy group, said that while there was plenty that developed countries could learn from simpler lifestyles, he advised caution. "There is a risk of us over-romanticising them, just as there are risks in accepting Jesus and that you are going to live forever, or in moving to a city and thinking you are going to have two cars and a computer. Don't over-romanticise how they live, but learn from it."
Quite right, but the article is nothing but a selection of Utopian attributes about the tribes. Watch the documentary "Lost Tribes of the Amazon" for a much more warts-and-all approach. Living in the jungle is a shitty existence. More interesting, I think, are the bigger trends : that different societies have come up with radically different solutions, even in similar environments with similar populations. Some of those answers have completely failed, others have been more successful. I would have preferred it if the article also looked at aspects of these cultures that a Western audience would find less appealing; I see no point in ignoring mistakes.
I suspect most of these solutions simply wouldn't work on larger populations which are unavoidably more diverse, let alone ones which are dependent on technology. In spite of this - or perhaps because of it - it's still really interesting.
It is possible to live in complete equality - and it can make for a peaceful community. There's one catch: it involves becoming an anarchist. No government, no state, only the individual and their will to do as they please.
The Piaroa (who number about 14,000 and live near the Orinoco River in Amazonas State, Venezuela) disavow violence, and don't punish children physically. They believe peace is achieved by dismissing concepts of ownership, competition, vanity and greed. No sports are played, land isn't owned, no-one can order anyone else to work and there's a strong emphasis on learning from other people. Not that there's a deference towards elders - that would make society hierarchical, and would mean not everyone was equal.
The idea of the individual is the most important thing, although that emphasis doesn't foster selfishness. It is up to each individual to choose what they do, how they do it and when they do it, and they do not pass judgment on others' decisions. And because this is a society of absolute equals, men and women share the same status (not that status is a thing). Anyone who tries to conform to traditional images of manhood, like the hunter, is subject to pity and is seen to lack self-control.
The idea of maturing into a man does not exist, Prof Overing writes. "Nor do Piaroa young men learn the self-regarding virtues of manhood that would set them as males against females, and, as such, superior to them. Each woman is mistress of her own fertility for which she alone is responsible: the community has no legal right to her progeny; nor does her husband if they should divorce."
https://www.bbc.com/news/world-46301059
All places are one place, but...
Obligatory Discworld quote :
The astro-philosophers of Krull once succeeded in proving conclusively that all places are one place and that the distance between them is an illusion, and this news was an embarrassment to all thinking philosophers because it did not explain, among other things, signposts. After years of wrangling the whole thing was then turned over to Ly Tin Wheedle, arguably the Disc’s greatest philosopher (he always argued that he was), who after some thought proclaimed that although it was indeed true that all places were one place, that place was very large.
https://www.youtube.com/watch?v=EQNvQt-y-9E
The cutest space antenna
An adorable lil' antenna that looks like it's been lifted straight out of Star Wars.
Nothing as diminutive as the Mars satellite—which belongs to a class called CubeSats—had ever gone farther than low Earth orbit. The antenna would be stowed during launch, occupying only about 830 cubic centimeters. Shortly thereafter, it would unfurl to a size three times as large as the satellite itself. It would have to survive the 160-million-kilometer flight to the Red Planet, including the intense vibration of launch and the radiation and extreme temperatures of deep space.
As its name suggests, RainCube is built to watch the weather. Its radar will help NASA study precipitation and improve weather-forecasting models. NASA scientists are planning to launch a constellation of such satellites to achieve better temporal resolution than a single large satellite can provide. This tiny radar craft is only about the size of a cereal box (“6U” in CubeSat parlance). The power system, the computer, the control system, and everything else must fit into that box.
The satellite will send and receive radar signals through a parabolic antenna. The main dish will reflect the signals onto a device called a subreflector, which will channel them into a “feedhorn” and from there into the satellite’s radar circuitry. Prior to being deployed, however, that antenna needs to fold up into a canister measuring 10 by 10 by 15 cm. And the 35.75-gigahertz frequency at which the radar operates means that the reflector must deploy so precisely that its shape deviates from perfection by no more than 200 micrometers.
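To get a feel for how demanding that is, here's a back-of-envelope check of my own (not from the article). At 35.75 GHz the wavelength is only about 8.4 mm, so a 200 micrometre deviation is roughly one fortieth of a wavelength, which is about the level that the usual rules of thumb for reflector surface accuracy call for.

# Back-of-envelope check (mine, not the article's): how tight is a
# 200-micrometre surface tolerance at RainCube's 35.75 GHz frequency?

C = 299_792_458.0        # speed of light, m/s
freq_hz = 35.75e9        # radar frequency quoted in the article
tolerance_m = 200e-6     # allowed deviation from a perfect parabola

wavelength_m = C / freq_hz
print(f"wavelength = {wavelength_m * 1e3:.2f} mm")                    # ~8.39 mm
print(f"tolerance  = wavelength / {wavelength_m / tolerance_m:.0f}")  # ~wavelength / 42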
After some intense brainstorming, the RainCube antenna team settled on an antenna that works a bit like an umbrella stuffed into a jack-in-the-box. This approach was the simplest solution, given the volume available. When an umbrella opens, the ribs extend outward and stretch the fabric until it’s taut. RainCube’s antenna works the same way: During deployment, a series of ribs pull the antenna into the right shape to transmit and receive signals.
https://spectrum.ieee.org/aerospace/satellites/new-antennas-will-take-cubesats-to-mars-and-beyond
Evidence for race in the genome ?
This may go against the oft-reported notion that race doesn't exist genetically.
The reference genome is an essential map of human genetic material that is used as a basis for comparison. When we sequence our own DNA for insight into health, family history, and future disease risk, we chop up the sequence into lots of little pieces and compare stretches of it to the reference genome, looking for areas where we differ. The fundamental problem with this, the scientists write in a recent paper in Nature Genetics, is that the reference genome is based largely on a single person. Considering the myriad genetic differences among the 7.7 billion people alive today, that’s obviously not ideal.
Professor of computer science and biostatistics Steven Salzberg, Ph.D., and Rachel Sherman, a Ph.D. candidate, make the case that this single reference genome doesn’t capture the diversity of human genetics. Some populations, they add, differ too much from this reference genome. To make their case, they refer to the genomes of 910 individuals from twenty different countries, all of pan-African descent.
Over the years, we’ve continuously workshopped the reference genome. But recent analysis indicates that almost seventy percent of its material was gleaned from a single African-American individual, who is referred to only as RPCI-11, explains Salzberg.
Instead of striving for a single universal reference genome, the team argues, we should have a bunch of reference genomes — perhaps one for each population of interest.
Originally shared by Sakari Maaranen
You thought #human #genome had been mapped? Well, a human genome was.
https://www.inverse.com/article/51084-reference-genome-human-genetics-testing?refresh=64
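The "chop it up and compare it to the reference" step is easy to caricature in a few lines. This is only a toy sketch of my own with made-up sequences; real pipelines use proper aligners, quality scores and statistical models, but the principle of lining a read up against the reference and flagging the positions where they disagree is the same.

# Toy illustration of comparing a sequenced read to a reference genome.
# Both sequences are invented; real variant calling is far more involved.

reference = "ACGTTAGCCGATTACGGATCAGT"   # made-up stretch of reference
read = "AGCCGATAACGG"                   # made-up read from an individual

def best_ungapped_alignment(ref, read):
    """Slide the read along the reference and keep the offset with the
    fewest mismatches (no insertions or deletions considered)."""
    best = None
    for offset in range(len(ref) - len(read) + 1):
        window = ref[offset:offset + len(read)]
        mismatches = [i for i, (a, b) in enumerate(zip(window, read)) if a != b]
        if best is None or len(mismatches) < len(best[1]):
            best = (offset, mismatches)
    return best

offset, mismatches = best_ungapped_alignment(reference, read)
print(f"read maps at reference position {offset}")
for i in mismatches:
    print(f"  position {offset + i}: reference {reference[offset + i]} -> read {read[i]}")

The point Salzberg and Sherman are making is that what counts as a "difference" in this kind of comparison depends entirely on which reference you slide the read along in the first place.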
Freedom of movement is good for everyone
Another regular complaint – particularly from the left – is that freedom of movement is a tool of the neo-liberal right, to shunt workers around the world at a whim. In practice, this is the reverse of the truth. Removing freedom of movement will give the multinationals more power – the freedom of movement transferred their power to workers. This may seem counterintuitive. It needs to be thought through. What freedom of movement, in its EU form, does is give the individual worker the right to live and work in any of the member states. They don’t need a work visa, they don’t need a job offer, they don’t need to go through any special bureaucracy – just the ‘right to work’ checks that people will be familiar with when they apply for any job in the UK now. Show your passport or similar form of ID. That’s putting power in the hands of the worker.
‘But it helps undercut wages and takes our jobs’. No, it really doesn’t. The evidence suggests that in the past it has been mainly beneficial to wages, except at the very lowest end – and this latter effect (described as infinitesimal by the author of the one report often cited by Leavers) is in itself very misleading.
Freedom of movement is a reciprocal right – people seem to tend to forget that it’s not just about people coming to the UK but about UK people being able to live, work, love, marry and more in the rest of the EU. It’s a positive thing. It’s a freedom. By removing it we’re making ourselves less free. We’re taking something away from ourselves. We’re narrowing rather than broadening our horizons – and all for either a misunderstanding of the concept or a misinterpretation of the evidence – or worse, from xenophobia.
https://paulbernal.wordpress.com/2018/11/23/a-few-words-on-freedom-of-movement/
The secret of Madrid's traffic bollards
On the top of the bollard in question, Murcia noticed an unusual pattern comprised of 40 lines that would soon be all over Spanish Twitter and featured by dozens of Spanish news outlets. The lines extended from close to the center of the bollard to the border of the circumference. And they were grouped in a strange way: first one line, then a space. Then 12, and another space. Then 2, 13, 3, and 9.
This being Twitter, there were plenty of jokes, including the theory that it must be related to crop circles. Others just gave up, mystified. “Two hours looking at a bollard like it was a matter of life and death,” said @LizTravel1. “My vote is that is just a decorative design.” One person made a 3D printed replica. Another person wrote a song based on the pattern.
I love that.
https://www.atlasobscura.com/articles/secret-code-on-madrid-security-bollards
Saturday, 24 November 2018
Full body scans in real time
We're living in the future.
EXPLORER combines positron emission tomography and x-ray computed tomography, allowing the impressive scanner to capture radiation far more efficiently than its predecessors or current competitors. This means that the novel imaging tool can produce a picture in as little as one second and then, with just a little more time, can provide actual video clips that track the progression of substances in the body.
"We could see features that you just don't see on regular PET scans. And the dynamic sequence showing the radiotracer moving around the body in three dimensions over time was, frankly, mind-blowing. There is no other device that can obtain data like this in humans, so this is truly novel."
https://interestingengineering.com/the-first-human-images-by-the-worlds-first-total-body-scanner-are-here
The problems caused by empathy
From the transcript, lightly edited :
The sense of empathy that I'm concerned about is the capacity to put yourself in the shoes of other people and feel what they feel. I hate terminological arguments. I don't care what you call it. But my argument is that feeling the suffering of other people, experiencing their pain—many people view this as the core of morality. I think this is mistaken. I think it makes us worse people. Empathy is morally corrosive, and we're better off without it.
No matter what kind of empathy fan you are, you've got to acknowledge that it's not essential to moral judgement. There are all sorts of things we appreciate that are wrong even if we can't find anybody to empathize with—any identifiable victim. Throwing trash out of your car window or cheating on your taxes or contributing to climate change—you can't point to somebody and say, "Well, that person is going to suffer," but you might still view these as morally wrong.
Then there are cases where empathy can clash with other moral values. A wonderful example is by the empathy scholar Dan Batson. He tells a story of an eight-year-old girl named Sheri Summers.
Sheri Summers is going to die. There's nothing you can do about that. But there's a treatment you could give her that will alleviate her suffering and pain. The problem is she's low on the line for treatment. She'll die before she could get it because other kids have been waiting longer and are in even more desperate need.
You ask people: "Would you move her up the line? You're a hospital administrator. You just move her to the top line," knowing that if you do so, some other kid is going to move down the line.
Most people say no. They say: "It's too bad, but if it's a fair list, we should leave things as they are." Another group of people get exactly the same story, but with one addition: they're asked, "Put yourself in her shoes. Feel her pain." Now, all of a sudden, responses shift, and people want to move her up. Batson points out that this is a case where empathy leads us astray; it overrides other moral motivations that we should be keeping in mind.
My critique is stronger. I think empathy really is like a spotlight. It zooms you in on people... like a spotlight, we could point it in the wrong places, so it's subject to bias and myopia, and like a spotlight, it's insensitive to numbers. It's innumerate and ultimately irrational.
The psychologist Paul Slovic noticed that when Natalee Holloway went missing, there was 18 times more network coverage devoted to her than to the ongoing crisis in Darfur, which took the lives of tens of thousands—maybe hundreds of thousands—of people... There are numerous experiments showing that you care more about one person than eight people if your attention is drawn to the person as an individual.
It turns out that analyses, even at the time, suggested that the furlough program [the Massachusetts prison furlough scheme attacked over the Willie Horton case] was working, that is, because of the program, there were fewer assaults, fewer murders, and fewer rapes. But our emotions are insensitive to that sort of data. It is easy to feel tremendous empathy for someone who is assaulted or raped, but you can't feel empathy for some statistical abstraction of people who would have been raped but weren't.
Whenever politicians in a democratic society want to incite anger and hatred against a group and want to motivate a group to war, they tell stories about victims—sometimes true, sometimes not true—and play on our feelings for the victims. I remember, for instance, that the horrific stories of Saddam Hussein's monster sons and the atrocities they had committed energized us, and our empathy for the victims energized us to want to strike out.
If you could forgive me an anecdote, my uncle was ill last year with cancer. We were in Boston. I went with him to see different doctors. The sort of doctors he got along well with, the ones he liked, were not the ones who felt his anxiety, not the ones who mirrored his anxiety, his worry, and his stress, but the ones who were respectful, confident, clear, and honest, the ones who didn't echo his suffering but rather responded to it. You don't want to mirror people; you want to respond to them with love, compassion, and caring.
It's true that psychopaths—you can do a psychopath test; you could all do it online—one of the features is low empathy. But it's also true that of all the features of psychopathy, low empathy has zero predictive power when it comes to predicting bad behaviour toward people. What really matters in predicting whether a psychopath is going to reoffend are issues like aggression and lack of self-control.
Experiment after experiment... finds that when you're in an empathic state, it activates different parts of your brain than when you're in a compassionate state. But more to the point, they find that compassion invigorates you. It leads to more helping. Empathy exhausts. Compassion charges you up. It makes people better helpers, more efficient helpers, and kinder helpers.
Also reminds me of this :
https://www.youtube.com/watch?v=8Qjy-ydl9QE
Originally shared by Joe Carter
Those of us that feed on the passion bound vision produced by hyper moral ideas that are built on comparing what is to some form of Utopian expectation of what could or "should" be, tend to think that taxing or even destroying the current system in service of that Utopian vision will result in the realization of it, rather than the destruction of both.
It is shallow thinkers that become drunk on moralistic passion that are the most dangerous to the stability of fruitful relationships, including the stability of social structures that are indeed imperfect and can be improved, but that also work. Nature provides propositions that are bloody and bloodier, not perfect and imperfect. It is those of us that demand perfection with too much zeal that sacrifice what is possible on the altar of those expectations of perfection now.
One of the axioms that feeds this recurrent tragedy of the shallow minded commons is the fact that power, influence and status can be mined by sowing discontent on a foundation of promised perfection. It is like the cheap comic that wins laughs on fiery insults or attacks on the things held sacred which also are the bones of coherent social structure. Power hungry political parasites reap a harvest of power by sowing discontent. It is like inviting the community to a celebratory victory meal but failing to mention that community is what is on the menu.
Those of us who believe these shallow thoughts that render the world into a false clarity of either “good or bad” become erroneously convinced that the image we see is correct because it is clear. In this case we lose sight of the larger, scarier and messier picture of reality in a shower of our own moralistic masturbations. We lose sight of the sacrifices our push will make in tribute to the unrealistic ideas we see so clearly; and when the destruction comes on the heels of this well intentioned folly, the only prize to console ourselves in those ashes from the believers will be more pointed fingers, still clinging to a false vision of a world of either angels or demons. A society that values real contributions to progress must be able to tell the difference between glamorous and significant. Significance doesn't get the recognition that best serves real progress in a society that worships the superficial. When we find and highlight the “problem” with one hand, and preserve it with the other because it's the social currency of status and power, we drown in a parasitic cesspool of blame, becoming the authors of our own poverty.
Can we be better? Of course we can. Should we? Of course, but to cultivate our best it takes a certain recognition on a zeitgeist level that our common ground is Earth, that our growth, and mature conversion of opportunity to experience happens when we cultivate each other's potential, not when we feed on each other. When we see our common enemy as things like cancer and poverty and a failure to cultivate places where people can contribute effectively are fostered and strengthened, not weakened, when we see that the enemy is not each other, then we can realize our mature potential. Then we can know that revolution is not the same as resolution; that revolution merely shifts the role of oppressed and oppressor, and that resolution is built on having each other's backs to face our common enemy, not being on each other's backs, making ourselves the enemy. The third estate (we the commoners) need to resist being carved up on the table of the manipulator class that divides the population in service of their narrow ends. I could be missing something(s)
I think Paul Bloom has a fair point about the danger of falling into the trap of hyper empathetic identity and action.
https://www.youtube.com/watch?v=E6J9dI4QniY
Defeating the German far right
A very happy story via Andres Soolo.
The far-right group Wir für Deutschland (We for Germany), which has been marching in the capital since 2016, has just announced that it will no longer protest there. Explaining the decision in a frustration-filled statement on Facebook, Wir für Deutschland credited three factors in particular.
First, there was fatigue; it no longer wished to march round in circles, which had been a fitting metaphor for the spiralling hatred of their ethos. Second, there was discomfort: its members had grown tired of being screamed at by anti-fascists. Third, there was the nemesis whose compassion towards refugees had made them take to the streets in the first place. “[Chancellor Angela] Merkel has defeated us,” said Kay Hönicke and Enrico Stubbe, the organisers of the demonstrations. “We must accept that.” Their deflated tone suggests that, for now at least, hate is too much like hard work.
Of course, hope is hard work too. Wir für Deutschland’s efforts were vigorously resisted by several grassroots groups in the city, perhaps the most persistent of which is Berlin gegen Nazis (Berlin Against Nazis), whose members march at least once a week: it hailed the retreat of Wir für Deutschland as “a clear success”. Its achievement cannot be overstated: Wir für Deutschland, at its peak, could raise a crowd of 3,000 extremists to march through the heart of the city. Its final demonstration, by contrast, attracted barely 100.
In the last few months, Berlin has seen anti-racism marches of 70,000 and 240,000 people; and in Chemnitz, to counter a protest of 6,000 extremists, 65,000 attended an anti-racism concert. German social media marked the latter of these events with the defiant hashtag Wir Sind Mehr (We Are More).
https://www.theguardian.com/commentisfree/2018/nov/14/marching-together-far-right-fascist-wir-fur-deutschland
An explanation for why some written languages look similar
Possible psychological universals. A fascinating connection between visual and verbal pattern recognition.
Morin found, on average, that about 61% of lines across all scripts were either horizontal or vertical, higher than chance would predict. (That number rises to 70% for the Latin alphabet, in which English is written.) And vertically symmetrical characters accounted for 70% of all the symmetrical characters. Together, the findings suggest that humans are drawn to these characteristics in writing, Morin says.
But did written scripts evolve to have more of these features over time, as language users selected for certain shapes and orientations of a script? To find out, Morin looked at a subset of 93 scripts that descended from—or gave birth to—another script in the study. Morin found no evidence that scripts tend to become more horizontal or vertical over time, suggesting that the scribes who created them baked human preferences into the written word from the beginning, he reported last month in Cognitive Science.
https://www.sciencemag.org/news/2017/11/why-written-languages-look-alike-world-over
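The "higher than chance would predict" comparison is simple enough to sketch. This is a toy version of my own, with invented stroke angles rather than anything from Morin's dataset: count the fraction of strokes lying within some tolerance of horizontal or vertical, and compare it with the fraction of the 0-180 degree orientation range that window covers, which is what you would expect if strokes were oriented at random.

# Toy orientation count with invented stroke angles (not Morin's data).
# A stroke is "cardinal" if it lies within `tol` degrees of horizontal (0)
# or vertical (90); stroke orientations repeat every 180 degrees.

def cardinal_fraction(angles_deg, tol=5.0):
    def is_cardinal(a):
        a = a % 180
        return min(a, 180 - a) <= tol or abs(a - 90) <= tol
    return sum(is_cardinal(a) for a in angles_deg) / len(angles_deg)

tol = 5.0
chance = (4 * tol) / 180    # fraction expected under random orientation, ~0.11

strokes = [0, 0, 90, 90, 90, 45, 30, 88, 2, 135, 90, 0, 60, 91, 179]  # invented
print(f"observed cardinal fraction: {cardinal_fraction(strokes, tol):.2f}")
print(f"expected by chance:         {chance:.2f}")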
Evil collaborators
If at all possible, make it known to the relevant people when people act in this way. You don't have to publicly shame them on the internet; that probably wouldn't help anyway.
Originally shared by Joerg Fliege
On collaborating in academia. As noted by the OP, collaborations are like kissing. Which is a very apt metaphor, I think. Nobody is getting into anyone's panties (yet), but you don't just kiss a rando you meet at a conference. And you definitely don't kiss that arrogant, selfish, untrustworthy snake that you have there on campus.
The trick is to know who that arrogant, selfish, untrustworthy snake is. As it happens with kissing, you should know beforehand.
https://xykademiqz.com/2018/11/21/evil-collaborator/
Friday, 23 November 2018
Cat + ducklings = ....
The cutest thing you'll see this month, via Tim Stoev.
https://www.youtube.com/watch?v=570khFoaE4s
The differences between programmed instinct and true intelligence
For Lorenz there is an insurmountable difference in the type of instinctive behaviour that we see in simple insects and the type of intelligent behaviour that human beings display in learning; he is arguing that intelligence does not and cannot emerge out of instinct by any form of evolutionary process.
No example of this is more extreme than the European (common) cuckoo. The common cuckoo parasitizes hundreds of species of bird, of which the Eurasian reed warbler is amongst the smallest. A reed warbler weighs around 13g, but feeds up a cuckoo to a weight of around 130g without any indication of realisation that the cuckoo is not its own offspring. From our perspective, the reed warbler is clearly vastly smaller than the cuckoo, with different patterning and morphology. But the reed warbler is driven by its misdirected instinct to feed whatever chick is in the nest. For all the benefits of its instinctive behaviour being able to be driven by a small brain, the reed warbler accepts a cost from nest parasites.
von Frisch was working at the opposite end of the spectrum, showing how simple instincts can combine to yield complex behaviour in honey bees... Most astonishingly, in the waggle dance, a worker bee can pass on information about a food source to other workers by an intricate and enchanting display. von Frisch paved the way for subsequent research to manipulate a bee’s senses to trick recruited workers into flying the wrong way. For all the complexity, the instincts that underlie the behaviour are still fixed and incapable of intuitive or intelligent use of sensory information in behaviour.
A classic (but not always helpful) distinction, which can help to break down the various possible sources of behavioural software, is whether that software hails from genes or from the environment, as two ends of a broad spectrum. Instinct is toward one extreme of genetically-sourced behaviour, which is consistently and rigidly determined by fixed patterns. Human-like learning is closer to the opposite end of the spectrum, where behaviour is almost entirely determined by flexible rules that are learnt from environmental sources like experience, other organisms and stored information.
Both genetic and environmental sources could, in principle, be equivalently ‘intelligent’, but in general greater flexibility of behaviour has given rise to greater intelligence... Instead of learning a response per sentence, with flexible components or otherwise, it is staggeringly more efficient to learn the rules of language, which can equip you to respond to any question in an intelligent manner. For the most part, this is what intelligence has come to mean — the ability to adaptively respond to a stimulus without prior experience.
The ability to respond to a stimulus does not equate to intelligence. All living things respond to their environments, but only a select few are considered intelligent. The ability of a machine to change in response to the environment does not imply it is anything like biological learning... Machine learning is competence without comprehension, or simply responsiveness rather than learning, unless machines are given sentience. Sentience enables learning without a predefined utility function.
No matter how many signal-response pathways are coded into a computer, the machine would not display human-like intelligence. Human intelligence is reached at the extreme of environmental influence on behaviour, where individuals learn behaviour from self-discovery through direct mimicry to abstract rules. To reach this extreme of learning, a machine would need enormous freedom of expression which no machine is currently afforded because it is not known how to do it.
https://becominghuman.ai/from-instinct-to-intelligence-has-ai-taken-a-wrong-turning-e84582dc4600
Thursday, 22 November 2018
Potential problems with a journal dedicated to controversial results
Valid criticisms, I think. I suppose the intention is that peer review will be sufficient to weed out the crazies* (maybe the reviewers' names should be made public ?). I'd like to see what happens : maybe it will degenerate into farce, maybe it will produce something interesting. The journal itself is the experiment...
* Cough cough TIME TRAVELLING ALIEN OCTOPUS cough cough cough...
There is one major problem here though :
For the reader of a paper, attaching authors to papers is important to help them decide how seriously to take the results. Here the difference between anonymous and pseudonymous authorship becomes important: if an author uses the same pseudonym over a period of time, the academic community can begin to get a sense of how good their work is (consider the Bourbaki pseudonym, which has been in use long enough to get a track-record), but if a publication is anonymous, the audience must rely solely on the credibility of the publishing journal and its editors.
What about the content itself ? Judging the content by the author is something we'd do well to avoid. Maybe all papers should be anonymous for six months after publishing, or something. I dunno. Anyway, I'm curious to see what happens with this.
"The Journal of Controversial Ideas ...proposes to allow academics to publish papers on controversial topics under a pseudonym. The hope is that this will allow researchers to write freely on controversial topics without the danger of social disapproval or threats. Thus the journal removes the author’s motivations, conflicts of interests and worldview from the presentation of a potentially controversial idea. This proposal heralds the death of the academic author – and, unlike Barthes, we think believe this is a bad thing."
"Defenders of The Journal of Controversial Ideas see it as a forum for true academic freedom. While academic freedom is important, it is not an unlimited right. Freedom without responsibility is recklessness. It is a lack of regard for the danger or consequences of one’s ideas. True academic freedom does not mean that writers get to choose when to avoid controversy. The pseudonymous authorship proposal allows authors to manipulate the credit and blame systems of the academy in the name of academic freedom."
"When it is working well, academic inquiry is a conversation. Researchers make claims and counterclaims, exchange reasons, and work together to open up new fields of inquiry. A conversation needs speakers: we need to keep track of who is talking, what they have said before, and who they are talking to. Pseudonymous authorship is an opt-out from the conversation, and the academic community will be worse off if its members no longer want to engage in intellectual conversation."
http://theconversation.com/the-journal-of-controversial-ideas-its-academic-freedom-without-responsibility-and-thats-recklessness-107106?utm_medium=Social&utm_source=Facebook#Echobox=1542706990
Wednesday, 21 November 2018
An ion engine for aircraft
With no visible exhaust and no roaring jet or whirling propeller—no moving parts at all, in fact—the aircraft seemed silently animated by an ethereal source. “It was very exciting,” Barrett says. “Then it crashed into the wall, which wasn’t ideal.”
But Barrett and his team figured out three main things to make Version 2 work. The first was the ionic wind thruster design. Version 2’s thrusters consist of two rows of long metal strands draped under its sky blue wings. The front row conducts some 40,000 volts of electricity—166 times the voltage delivered to the average house, and enough energy to strip the electrons off ample nitrogen atoms hanging in the atmosphere.
When that happens, the nitrogen atoms turn into positively charged ions. Because the back row of metal filaments carries a negative charge, the ions careen toward it like magnetized billiard balls. “Along the way, there are millions of collisions between these ions and neutral air molecules,” Barrett notes. That shoves the air molecules toward the back of the plane, creating a wind that pushes the plane forward fast and hard enough to fly.
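For the curious, the usual back-of-the-envelope for this kind of "ionic wind" (electrohydrodynamic) thruster is thrust = current x gap / ion mobility. Here's a minimal sketch with made-up but plausible numbers: only the 40 kV figure comes from the article, the current and gap are my assumptions, and the formula ignores all losses, so it's an upper bound rather than a prediction.

# Idealised electrohydrodynamic (EHD) thrust estimate: T = I * d / mu.
# Only the 40 kV figure is from the article; the current and gap are
# assumed purely for illustration.
MU_AIR = 2.0e-4      # ion mobility in air, m^2/(V*s), approximate
voltage = 40_000.0   # V, as quoted in the article
current = 3.0e-3     # A, assumed corona current
gap = 0.2            # m, assumed emitter-to-collector spacing

thrust = current * gap / MU_AIR   # newtons (loss-free upper bound)
power = voltage * current         # watts of electrical input
print(f"thrust ~ {thrust:.1f} N, power ~ {power:.0f} W, "
      f"thrust-to-power ~ {1000 * thrust / power:.0f} N/kW")

# The "166 times household voltage" line also checks out:
print(f"40 kV / 166 ~ {voltage / 166:.0f} V, i.e. roughly UK mains")

Real thrusters do considerably worse than the ideal figure, but it shows why the high voltage and the long emitter-to-collector gap are the things that matter.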
https://www.scientificamerican.com/article/silent-and-simple-ion-engine-powers-a-plane-with-no-moving-parts/
Digital Rights Management taken to extremes
A chilling vision of things to come. With dishes.
To ensure food safety and the proper delivery of your Disher experience, your Speckless will not switch on if it detects unknown objects; only authorized Disher Kitchen Store products are certified for use with your Disher Speckless.
The trademarks and other intellectual property in the products sold by different Disher affiliated companies through the regional Kitchen Stores are licensed for use on a territory-by-territory basis. In many cases, different territorial licensors own the exclusive right to manufacture and distribute different brands in the Kitchen Store, and part of Disher's commitment to respecting international laws and intellectual property is our use of the sensors in Disher Speckless systems to optimize your Disher experience by ensuring that our devices do not violate these important contractual arrangements.
http://reason.com/archives/2018/11/17/sole-and-despotic-dominion
Tuesday, 20 November 2018
The weird connection between language and shape
Onomatopoeia is pretty weird.
Originally shared by Joe Carter
The way shapes influence the shape of sounds in language.
https://www.youtube.com/watch?v=rQX1ax96l7Y
I'm cautiously skeptical of a "replication crisis"
In astronomy we often have to do repeat observations of potential detections to confirm they're real. A good confirmation rate is about 50%. Much less than this and we'd be wasting telescope time, and we'd start to worry that some of the sources we thought were real might not be so secure. Conversely, a much higher fraction would also be a waste of time, and would imply that we hadn't been as careful in our search as we thought - there'd still be other interesting things hidden in the data that we hadn't seen.
I suggest that this is also true to some extent in psychology. There seems to be a science-wide call for more risky, controversial research. Well, risky, controversial research requires a certain failure rate : if every finding was replicated, that would suggest the research wasn't risky enough; if none of them were, that would imply lousy research practices. The actual replication rate turns out to be, by happy coincidence, about 50%.
But likewise, in astronomy we don't write a paper in which we consider sources we haven't confirmed yet (or at least it's a very bad idea to do so). We wait until we've got those repeat observations before drawing any conclusions. Risky, preliminary pilot studies ought to have a failure rate by definition, otherwise they wouldn't be risky at all. The big "end-result" studies, on the other hand (the ones that are actually used to draw secure conclusions and, in the case of psychology, to influence social policy), are a different matter : those are the ones whose basic results you'd want on a secure footing.
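To put some toy numbers on that hand-waving : the standard positive-predictive-value argument says the replication rate depends on how risky the hypotheses are (the prior chance they're true), the power of the original studies, and the false-positive rate. Here's a minimal sketch; all the values are my own assumptions, deliberately chosen so the answer lands near 50%, since that's the point being illustrated.

# Toy positive-predictive-value (PPV) calculation -- my illustration,
# not from the article. All numbers are assumptions.
prior_true = 0.10    # assumed fraction of tested hypotheses that are true ("riskiness")
alpha = 0.05         # false-positive rate of the original studies
power = 0.50         # assumed power of the original studies

true_pos = prior_true * power
false_pos = (1 - prior_true) * alpha
ppv = true_pos / (true_pos + false_pos)   # fraction of positive findings that are real

replication_power = 0.95                  # assume big, well-powered replications
expected_replication = ppv * replication_power + (1 - ppv) * alpha
print(f"PPV ~ {ppv:.2f}, expected replication rate ~ {expected_replication:.2f}")

With those inputs the expected replication rate comes out at roughly 0.52. Riskier hypotheses or weaker original studies push it down; safe, well-powered work pushes it up. Which is just the argument above in arithmetic form.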
The Many Labs 2 project was specifically designed to address these criticisms. With 15,305 participants in total, the new experiments had, on average, 60 times as many volunteers as the studies they were attempting to replicate. The researchers involved worked with the scientists behind the original studies to vet and check every detail of the experiments beforehand. And they repeated those experiments many times over, with volunteers from 36 different countries, to see if the studies would replicate in some cultures and contexts but not others.
Despite the large sample sizes and the blessings of the original teams, the team failed to replicate half of the studies it focused on. It couldn’t, for example, show that people subconsciously exposed to the concept of heat were more likely to believe in global warming, or that moral transgressions create a need for physical cleanliness in the style of Lady Macbeth, or that people who grow up with more siblings are more altruistic. And as in previous big projects, online bettors were surprisingly good at predicting beforehand which studies would ultimately replicate. Somehow, they could intuit which studies were reliable.
Maybe anecdotes are evidence, after all... :P
Many Labs 2 “was explicitly designed to examine how much effects varied from place to place, from culture to culture,” says Katie Corker, the chair of the Society for the Improvement of Psychological Science. “And here’s the surprising result: The results do not show much variability at all.” If one of the participating teams successfully replicated a study, others did, too. If a study failed to replicate, it tended to fail everywhere.
Many researchers have noted that volunteers from Western, educated, industrialized, rich, and democratic countries—weird nations—are an unusual slice of humanity who think differently than those from other parts of the world. In the majority of the Many Labs 2 experiments, the team found very few differences between weird volunteers and those from other countries. But Miyamoto notes that its analysis was a little crude—in considering “non-weird countries” together, it’s lumping together people from cultures as diverse as Mexico, Japan, and South Africa. “Cross-cultural research,” she writes, “must be informed with thorough analyses of each and all of the cultural contexts involved.”
Sanjay Srivastava from the University of Oregon says the lack of variation in Many Labs 2 is actually a positive thing. Sure, it suggests that the large number of failed replications really might be due to sloppy science. But it also hints that the fundamental business of psychology—creating careful lab experiments to study the tricky, slippery, complicated world of the human mind—works pretty well. “Outside the lab, real-world phenomena can and probably do vary by context,” he says. “But within our carefully designed studies and experiments, the results are not chaotic or unpredictable. That means we can do valid social-science research.”
Originally shared by Eli Fennell
Latest Big Social Psych Replication Program Reaffirms and Refines 'Crisis'
There has been much talk in recent years about a 'Replication Crisis' in social psychology, and, it should be added, in some of the other sciences as well (even medicine). Given, though, the greater lack of replicable foundational findings compared with most fields, and the real world policy implications of their alleged findings, such a crisis in social psychology is especially daunting.
Previous replication projects designed to assess the replicability of social psychology experiments have typically found a replication rate of about half, i.e. half of studies are successfully replicated. Previous projects, however, have had numerous limitations that undercut their basic assertions: low sample sizes (meaning insufficient power for replication), critical sampling differences (e.g. using a different sample type, but not using many different sample types that include the original type), imperfect replication protocols, few international replications, or the selection of weak or low prominence findings for testing.
The latest replication project, by a group called Many Labs 2, is helping resolve these issues by focusing on prominent, high impact findings; using samples generally larger than the original studies; carefully replicating the original studies, and even consulting with original researchers to ensure this; and testing many types of participant samples, including international samples.
Surprisingly, their results are in line with previous efforts: about half of all findings replicated successfully, making this an ironically very replicable result. Given the complexity of human behaviour, this is actually not as terrible as it sounds in a way, but is worse in another way since many of these are highly foundational and influential 'findings', such as the Marshmallow Effect.
On the bright side, their findings also suggest that sample differences may matter far less than supposed: if a result was replicated, it tended to be replicated across different samples, even in very different cultures, genders, and ages. The question of whether a given finding generalises beyond a given sample has long haunted the social sciences, especially with their reliance on undergraduate participant samples for much of their experimental research. In fact, those results which were replicated by Many Labs 2 tended to replicate even in groups where the original researchers specifically expected them not to.
They also replicated previous findings suggesting that online bettors are surprisingly accurate at predicting which results will be replicated in follow-up studies and which won't, suggesting that intuition may be able to play a more important role in designing and prioritising research than normally suspected, though with the caveat that those directly involved in research are least likely to themselves possess objective intuitions about their likelihood of success.
Moreover, it is worth remembering that the fact that an experiment replicated, does not mean it necessarily tells us anything about the real world, or tells us what it claims to tell us about it at any rate. As an example, experimental social psychologists consistently find that younger samples have better time management skills than elderly samples; yet studies of real world time management consistently show elderly samples miss fewer deadlines and appointments in practice, and easily so.
In my opinion, what all of this points to is that the experimental psychologies fell prey to a desire to quickly 'catch up' with very basic reductive operational sciences, like basic physics and chemistry, and were at times too willing to accept highly theatrical demonstrations of effects later shown to be overstated at best (e.g. the Stanford Prison Experiments) to get there. What they ignored is that, even in other operational sciences dealing with complex materials, such as biochemistry or quantum physics, replicability can be tricky.
And in the case of studying humans, controlling all variables is especially tricky, and there is little motivation for most researchers to pursue slow, methodical programs of study when what they want is to name a theory of love after themselves or discover a new therapy for ASD children.
It is important for the social sciences to continue to evolve into full fledged operational sciences, and to embrace reductive approaches wherever appropriate, but there has perhaps been a degree of patience, humility, and sweating-the-details absent from much of the research to date. Hence the replication crisis. Or, at least, I suspect this is part of the answer.
https://www.theatlantic.com/science/archive/2018/11/psychologys-replication-crisis-real/576223/
BFR has been given a new name and it's stupid
That's a terrible name. You can't call a spaceship that isn't a starship "Starship", because that's mad. Might as well name your cat "dog" or your rabbit "hare" or your Eurasian wolf "North American Timber Wolf".
Sheesh. Just call it Spacey McSpaceface like everyone else...
https://www.geekwire.com/2018/goodbye-bfr-hello-starship-elon-musk-gives-classic-name-mars-spaceship/
There'll be no accusations, just friendly molluscs...
"I had the privilege of visiting this incredible animal for almost a year. It totally trusted me, lost all fear, it would take me on hunting expeditions and let me into its secret world. Octopuses have different personalities, some are quite bold, others very shy, she was in between," Mr Foster told the BBC, describing how she would come over and greet him when she became accustomed to his visits.
"It is a great privilege to step into that world to learn - not like a mammal - but like a fellow spineless creature in her invertebrate world," he said.
Her den was mainly a hole she had dug in the ocean floor, which the diver described as a "proper home". "She hunts over 50 species but you can only find that out when you're allowed into her den and can pick up the bones of the animals she has eaten," he said, referring to the lobster and crab shells he saw. "You realise, my goodness, her life is so detailed and crazily connected to everything around her."
The diver has also had amazing encounters with great white sharks, possibly some of the ones that have been responsible for attacks on surfers and bathers in nearby False Bay over the years. Far from the aggressive hunters of human flesh they are often portrayed to be, the picture he paints is of a magnificent, serene animal.
"When the great white sees a human it scans us, its search image is picking up something that's not prey. They are not sure what we are, they may be curious but it's not something that's good for them to eat and they know that. They aren't animals that are after us, if they were, there would be attacks every day. If they see a seal, a fish or some of the other prey that's a different story but humans are not on their menu."
"The one attack a year is an aberration. There's something in that person they attack that's triggering a response in that shark, it's incredibly rare. Maybe it's the muscle tension that's high, maybe the shark is in a bad mood. I have made eye contact with them. I once had five great whites circling me in open water and I could see no aggression towards me whatsoever," Mr Foster said. "I've had a couple of close meetings with Tiger Sharks but they're also very gentle if you're relaxed. These animals are not the killers they're made out to be."
https://www.bbc.com/news/world-africa-45967535
"It is a great privilege to step into that world to learn - not like a mammal - but like a fellow spineless creature in her invertebrate world," he said.
Her den was mainly a hole she had dug in the ocean floor, which the diver described as a "proper home". "She hunts over 50 species but you can only find that out when you're allowed into her den and can pick up the bones of the animals she has eaten," he said, referring to the lobster and crab shells he saw. "You realise, my goodness, her life is so detailed and crazily connected to everything around her."
The diver has also had amazing encounters with great white sharks, possibly some of the ones that have been responsible for attacks on surfers and bathers on surfers and bathers in nearby False Bay over the years. Unlike the aggressive hunters of human flesh they are often portrayed to be, he paints a totally different picture of a magnificent serene animal.
"When the great white sees a human it scans us, its search image is picking up something that's not prey. They are not sure what we are, they may be curious but it's not something that's good for them to eat and they know that. They aren't animals that are after us, if they were, there would be attacks every day. If they see a seal, a fish or some of the other prey that's a different story but humans are not on their menu."
"The one attack a year is an aberration. There's something in that person they attack that's triggering a response in that shark, it's incredibly rare. Maybe it's the muscle tension that's high, maybe the shark is in a bad mood. I have made eye contact with them. I once had five great whites circling me in open water and I could see no aggression towards me whatsoever," Mr Foster said. "I've had a couple of close meetings with Tiger Sharks but they're also very gentle if you're relaxed. These animals are not the killers they're made out to be."
https://www.bbc.com/news/world-africa-45967535
A hotspot under Antarctica
A "hotspot" is melting the base of the Antarctic Ice Sheet at the South Pole. The area affected is three times that of Greater London. Scientists suspect a combination of unusually radioactive rocks and geothermal springs may be responsible.
The warm bedrock is removing some 6mm a year from the underside of the 3km-thick ice sheet, producing a mass of meltwater that then flows away through sub-glacial rivers and lakes towards the continent's coastline. The roughly 100km-by-50km hotspot came to light when researchers examined radar images of the ice sheet at 88 degrees South. This revealed a startling sagging in the ice layers directly above the hotspot.
Antarctica is in no danger of melting away as a result of this hotspot. In the grand scheme of things, the area affected and the amount of melting are simply too small to have a significant impact. But the knowledge adds to our understanding of the under-ice hydrology of the continent. There is a vast network of sub-glacial rivers and lakes in Antarctica and they influence the way the ice sheet moves above them.
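The quoted figures hang together, for what it's worth. A quick back-of-the-envelope check (my arithmetic, not the article's; the Greater London area of roughly 1,570 square km is my assumption):

# Back-of-the-envelope check on the quoted figures (my arithmetic, not the
# article's). Greater London's area is assumed to be roughly 1,570 km^2.
area_km2 = 100 * 50            # hotspot footprint, km^2
melt_m_per_yr = 0.006          # 6 mm of basal ice removed per year
ice_thickness_m = 3000.0       # 3 km-thick ice sheet
greater_london_km2 = 1570.0

print(f"hotspot ~ {area_km2 / greater_london_km2:.1f}x Greater London")
print(f"meltwater ~ {area_km2 * melt_m_per_yr / 1000:.2f} km^3 per year")
print(f"time to melt through at this rate ~ {ice_thickness_m / melt_m_per_yr:,.0f} years")

That's about 3.2 times Greater London, roughly 0.03 cubic km of meltwater a year, and around half a million years to burn through the ice even if nothing flowed in to replace it. Hence "no danger of melting away".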
https://www.bbc.com/news/science-environment-46202255
Barnard's Star has planets after all
I remember reading old popular books claiming that Barnard's Star has planets - way back before the modern exoplanet era. Turns out it does, but probably not the ones it was supposed to.
The planet's mass is thought to be 3.2 times that of our own, placing it in a category of world known as a "super-Earth". It orbits Barnard's star, which sits "just" six light-years away. The star is an extremely faint "red dwarf" that's about 3% as bright as the Sun.
Co-author Guillem Anglada Escudé said the newly discovered world was "possibly a mostly rocky planet with a massive atmosphere. It's probably very rich in volatiles like water, hydrogen, carbon dioxide - things like this. Many of them are frozen on the surface".
Dr Anglada Escudé, from Queen Mary University of London, added: "The closest analogue we may have in the Solar System might be the moon of Saturn called Titan, which also has a very thick atmosphere and is made of hydrocarbons. It has rain and lakes made of methane."
The planet, Barnard's Star b, is about as far away from its star as Mercury is from the Sun. On distance alone, it's estimated that temperatures would be about -150C on the planet's surface. However, a massive atmosphere could potentially warm the planet, making conditions more hospitable to life.
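A quick equilibrium-temperature estimate lands in the same ballpark as that -150C figure. A sketch with my own assumed inputs rather than the article's (bolometric luminosity about 0.0035 times the Sun's, orbital distance about 0.4 AU, zero albedo, no greenhouse warming):

# Back-of-the-envelope equilibrium temperature for Barnard's Star b.
# Assumptions (mine, not the article's): bolometric luminosity ~0.0035 L_sun,
# orbital distance ~0.4 AU, zero albedo, no atmospheric warming.
L_star = 0.0035     # stellar luminosity, solar units (assumed)
a_au = 0.4          # orbital distance, AU (assumed; "about as far as Mercury")
T_EARTH_EQ = 278.0  # K, Earth's zero-albedo equilibrium temperature

T_eq = T_EARTH_EQ * L_star ** 0.25 / a_au ** 0.5
print(f"T_eq ~ {T_eq:.0f} K ~ {T_eq - 273.15:.0f} C")   # ~ -166 C with these inputs

Any substantial atmosphere would push that number up, which is exactly the article's point about a massive atmosphere making conditions more hospitable.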
When the next generation of telescopes come online, scientists will be able to characterise the planet's properties. This will likely include a search for gases like oxygen and methane in the planet's atmosphere, which might be markers for biology. "The James Webb Space Telescope might not help in this case, because it was not designed for what's called high contrast imaging. But in the US, they are also developing WFirst - a small telescope that's also used for cosmology," said Dr Anglada Escudé.
WFIRST will have a mirror the same size as Hubble's (2.4 m diameter), designed for big cosmological projects, with a budget of $3.2 billion. Calling it a "small telescope that's also used for cosmology" is a bit like saying that coffee is a minor drink popular in some parts of the world and also used to keep people awake.
"If you take the specs of how it should perform, it should easily image this planet. When we have the image we can then start to do spectroscopy - looking at different wavelengths, in the optical, in the infrared, looking at whether light is absorbed at different colours, meaning there are different things in the atmosphere."
https://www.bbc.com/news/science-environment-46196279
Irn-Bru fans are seriously serious
Naveed Sattar, professor of Cardiology and Medical Sciences at the University of Glasgow, remembers growing up without fear of overdoing the sugar. UK obesity levels were 4%-7% in the 1980s when Sattar was a kid helping in his father’s corner shop. He often watched customers stock up on four litres of Irn-Bru a day... One Irn-Bru addict at the cardiovascular clinic Sattar runs was referred due to high blood fat levels and a recent heart attack. He regularly drank six litres a day, Sattar says, and even got up in the night for a sip. He lost eight kilos after Sattar treated him, and began enjoying proper food for the first time in years.
...Among its long list of solutions was the levy on sugary drinks. Teenagers in England, the biggest consumers of such drinks in Europe, could max out their recommended daily intake of sugar with a single 330 millilitre (ml) can. The time had come to tackle one of the underlying sources of the problem.
People might have expected drinks companies to erupt in outrage at the announcement and for the Treasury to sit in eager anticipation of a funding boost. But it was just the opposite. It was almost as if they had seen it coming. Soft drink manufacturers rapidly began looking at ways to reformulate their products around the new guidelines, often swapping sugar for artificial sweeteners. Supermarkets Tesco, Asda and drinks companies Fanta, Ribena, Britvic and Lucozade slashed the sugar content in their drinks, some by half or more.
Even with thousands of other sugary foods at their disposal, the more outspoken Irn-Bru fans think the tax has gone too far. Irn-Bru declined to comment when approached by BBC Capital. Allen followed up his cone stunt with a petition aimed at parent company AG Barr that received more than 50,000 signatures. Another man came up with his own Irn-Bru-flavoured concoction that he claims tastes just like the original. Others stockpiled the full-sugar version and now sell it illegally online.
Allen says he drinks four cans a week, which would cost him about £124 ($161) annually if he purchased each 300ml can individually. Had the makers refused to alter their recipe and passed the tax to Allen, his annual spend would have risen to about £140 ($182). “I would have rather paid the tax,” he says. “It’s brand-suicide and it’s a betrayal of the customers who have made it what it is.”
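Those figures are roughly self-consistent with the UK Soft Drinks Industry Levy's higher-band rate of 24p per litre. A quick check (my arithmetic, not the article's; the levy rate and the 330 ml can size from earlier in the piece are my inputs):

# Rough check on Allen's numbers (my arithmetic; the 24p/litre figure is the
# UK Soft Drinks Industry Levy's higher band, not taken from the article).
cans_per_week = 4
can_litres = 0.33              # 330 ml can, as quoted earlier in the article
levy_per_litre = 0.24          # GBP, higher band (>8 g sugar per 100 ml)
base_annual_cost = 124.0       # GBP per year, as quoted

cans_per_year = cans_per_week * 52
levy_per_year = cans_per_year * can_litres * levy_per_litre
print(f"levy ~ GBP {levy_per_year:.2f}/year, "
      f"total ~ GBP {base_annual_cost + levy_per_year:.0f} (article says ~GBP 140)")

That gives about 16 pounds of levy a year, taking the total to roughly 140 pounds, which matches the article's figure.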
Few can match Allen’s passion, but he’s not alone. According to Kiti Soininen, head of food, drink and foodservice research at Mintel, 64% of people say they drink the same amount they did a year ago and 10% say they drink more. The extra-hot summer weather likely affected consumption a bit, Soininen says, but it’s an indicator that the UK has a long way to go.
I wonder, if they'd just paid the tax at first and then gradually reduced the sugar content, would anyone really notice ? A friend of mine swears that Cadbury's Creme Eggs taste much worse than they used to but I can't tell the difference.
(I've tried the American version of Cadbury's chocolate though, and that's genuinely disgusting)
http://www.bbc.com/capital/story/20181113-bittersweet---whos-benefitting-from-the-sugar-tax
Hitting an asteroid with a katana
In a paper detailing early experiments, the team – including Genrokuro Matsunaga, a 70-year-old swordsmith and Takeo Watanabe at Kanagawa Institute of Technology – explain how they have made several rock corers with various metallic compositions. Four contain tamahagane, the traditional metal made from iron sand and charcoal that is used in Japanese swords. “To achieve the sharpness and plasticity demanded of the corer tip, we borrowed the techniques of traditional Japanese sword-smithing in fabricating the corer samples,” the authors write.
The resulting corers are small, cylindrical devices with a bladed edge angled inwards. Instead of swiping a katana sword at the asteroid – which would be cool but impractical – the idea is to launch the tamahagane-tipped corer at the space rock at great speed. In theory, it will dig into the asteroid and allow for a sample to be scooped up. A tether back to the mothership spacecraft could then reel the device and asteroid fragments in.
Yeah, you lost me at "swiping a katana sword at the asteroid". Everything from this point on is redundant and only for interest's sake.
But even getting surface or near-surface samples is difficult because of a basic property of asteroids: they’re small and don’t have much of a gravitational pull. Every time a rover or collecting device connects with or pushes down against an asteroid, it can easily bounce back off it again. “Anything that allows you to use less force to cut into the asteroid surface – like this tamahagane steel – has to help,” says Elvis.
So far, the Japanese team have tested some of their corers by dropping them down a long pipe towards a concrete slab at the bottom of a tall stairwell. The samplers successfully extracted some concrete, but occasionally dropped it when being retrieved. “A mechanism to prevent samples from falling off during the extraction and recovery phase needs to be devised,” the authors note.
Plus, the tamahagane corers themselves were not tested – because they were so expensive. Still, these are the first steps towards blasting a sword-inspired sampling technology into space.
http://www.bbc.com/future/story/20181113-a-samurai-swordsmith-is-designing-a-space-probe
A race to the bottom with only one competitor
It's like he's in a race to the bottom with himself.
35. "Would it have been nicer if we got Osama bin Laden a lot sooner than that, wouldn't it been nice? Living -- think of this, living in Pakistan, beautifully in Pakistan and what I guess in what they considered a nice mansion. I don't know, I've seen nicer."
Originally shared by Andres Soolo
File under #innuendo:
3. "She was with me for a long time, although I don't know her. She's really somebody I don't know very well. But we're going to move her around because she's got certain talents."
OK, so here's Trump on Mira Ricardel, the NSC staffer removed at the request of Melania Trump: 1) She was with him for a long time 2) He doesn't know her 3) He doesn't know her very well 4) She'll stay within the administration because she has "certain talents." We all clear on that? Good!
https://edition.cnn.com/2018/11/19/politics/donald-trump-chris-wallace-interview/index.html
Monday, 19 November 2018
The volcanic winter of 536 AD
A mysterious fog plunged Europe, the Middle East, and parts of Asia into darkness, day and night—for 18 months. "For the sun gave forth its light without brightness, like the moon, during the whole year," wrote Byzantine historian Procopius. Temperatures in the summer of 536 fell 1.5°C to 2.5°C, initiating the coldest decade in the past 2300 years. Snow fell that summer in China; crops failed; people starved. The Irish chronicles record "a failure of bread from the years 536–539." Then, in 541, bubonic plague struck the Roman port of Pelusium, in Egypt. What came to be called the Plague of Justinian spread rapidly, wiping out one-third to one-half of the population of the eastern Roman Empire and hastening its collapse, McCormick says.
But the eastern Empire survived near enough another millennium. It shrank quite a bit, but it survived.
Historians have long known that the middle of the sixth century was a dark hour in what used to be called the Dark Ages, but the source of the mysterious clouds has long been a puzzle. Now, an ultraprecise analysis of ice from a Swiss glacier by a team led by McCormick and glaciologist Paul Mayewski at the Climate Change Institute of The University of Maine (UM) in Orono has fingered a culprit.
SNIGGER !
At a workshop at Harvard this week, the team reported that a cataclysmic volcanic eruption in Iceland spewed ash across the Northern Hemisphere early in 536. Two other massive eruptions followed, in 540 and 547. The repeated blows, followed by plague, plunged Europe into economic stagnation that lasted until 640, when another signal in the ice—a spike in airborne lead—marks a resurgence of silver mining, as the team reports in Antiquity this week.
https://www.sciencemag.org/news/2018/11/why-536-was-worst-year-be-alive
NASA contemplates swapping SLS for BFR
An unexpectedly sensible move.
NASA hopes to test-launch the first Block 1 rocket in June 2020 on a flight called Exploration Mission-1 (EM-1). The mission aims to prove SLS is safe and reliable by sending an uncrewed Orion spacecraft around the moon and back to Earth. A crewed Exploration Mission-2 (EM-2) would follow several years later.
But so far NASA has spent about $11.9 billion on SLS, and the agency is projected to need $4-5 billion more than it has planned by 2021. Relatedly, the scheduled launch date for EM-1 in June 2020 is about 2.5 years behind schedule... Such issues have some experts estimating an average cost of $5 billion per launch of SLS, which is a single-use rocket. Presumably, SpaceX or Blue Origin could launch at a fraction of that price since their upcoming vehicles are reusable.
2.5 years is nothing for a large project like this, but the development cost is silly. The launch cost is offensive. I believe it's worth investing large amounts of money in manned space exploration... but c'mon, not that much. $5 billion per launch is nuts.
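For what it's worth, here's roughly where a figure like that comes from - purely illustrative, and the flight count and per-flight hardware cost are my own assumptions, not anything from the article :

# Illustrative only: amortising the development spend over a handful of flights.
# The per-flight marginal cost and the flight counts below are assumptions.
development_cost = 11.9e9 + 4.5e9   # $11.9bn spent so far plus the middle of the projected extra $4-5bn

def average_cost_per_launch(n_flights, marginal_cost_per_flight):
    """Average cost per launch if development is spread over n_flights."""
    return development_cost / n_flights + marginal_cost_per_flight

for flights in (4, 8, 16):
    # Hypothetical ~$1bn of hardware and operations per expendable flight.
    cost = average_cost_per_launch(flights, 1.0e9)
    print(f"{flights:>2} flights: about ${cost / 1e9:.1f}bn per launch")

Spread the development bill over only a few flights and you land right around the $5 billion mark; fly it often and the number drops quickly - which is exactly the argument for the reusable alternatives.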
But agency leaders are already contemplating the retirement of the Space Launch System (SLS), as the towering and yet-to-fly government rocket is called, and the Orion space capsule that'll ride on top. NASA is anticipating the emergence of two reusable, and presumably more affordable, mega-rockets that private aerospace companies are creating. Those systems are the Big Falcon Rocket (BFR), which is being built by Elon Musk's SpaceX; and the New Glenn, a launcher being built by Jeff Bezos' Blue Origin.
"I think our view is that if those commercial capabilities come online, we will eventually retire the government system, and just move to a buying launch capacity on those [rockets]," Stephen Jurczyk, NASA's associate administrator, told Business Insider at The Economist Space Summit on November 1.
https://www.businessinsider.com/nasa-sls-replacement-spacex-bfr-blue-origin-new-glenn-2018-11
Sunday, 18 November 2018
The problems with TED, according to TED
There are some genuinely excellent TED talks, of course. But overall I tend to find it a bit like a diet of pure chocolate... I suppose it depends on what the goal is. If it's edutainment, then it probably should stay the way it is. If it's actually about enacting change, then it shouldn't. For that you need historians, philosophers, politicians, psychologists, sociologists... and a lot more examination of the darker cynicism of human nature.
Snippets from the transcript :
But have you ever wondered why so little of the bright futures promised in TED talks actually come true? Is something wrong with the ideas? Or with the notion of what ideas can do all by themselves?
The first reason is over-simplification. Now, to be clear, I have nothing against the idea of interesting people who do smart things explaining their work in a way that everyone can understand. But TED goes way beyond that. This is not popularisation. This is taking something with substance and value and coring it out so that it can be swallowed without chewing. This is not how we'll confront our most frightening problems, this is one of our most frightening problems.
TED is perhaps a proposition, one that says if we talk about world-changing ideas enough, then the world will change. Well, this is not true either. And that's the second problem.
You see, when inspiration becomes manipulation, inspiration becomes obfuscation. And if you're not cynical, you should be skeptical. You should be as skeptical of placebo politics as you are of placebo medicine.
The future on offer is one in which everything can change, so long as everything stays the same. We'll have Google Glass, but we'll still have business casual. This timidity is not our path to the future. This is incredibly conservative. And more gigaflops won't inoculate us. Because, if a problem is endemic to a system, then the exponential effects of Moore's law also amplify what's broken. It's more computation along the wrong curve, and I hardly think this is a triumph of Reason.
Our problems are not "puzzles" to be solved. This metaphor implies that all the necessary pieces are already on the table and just need to be rearranged and reprogrammed. It's not true. "Innovation" defined as "puzzles", as rearranging pieces and adding more processing power, is not some Big Idea that's going to disrupt the broken status quo — that precisely is the broken status quo.
https://www.youtube.com/watch?v=Yo5cKRmJaf0
Rain in the desert can cause mass extinctions
Not terribly surprising, but still interesting.
When the rains came to one of the driest places on Earth, an unprecedented mass extinction ensued. The assumption was that this rainfall would turn this remote region of the Atacama Desert in Chile into a wondrous, floral haven — dormant seeds hidden in the parched landscape would suddenly awake, triggered by the “life-giving” substance they hadn’t seen for centuries — but it instead decimated over three quarters of the native bacterial life, microbes that shun water in favor of the nitrogen-rich compounds the region has locked in its dry soil. In other words, death fell from the skies.
“The hyperdry soils before the rains were inhabited by up to 16 different, ancient microbe species. After it rained, there were only two to four microbe species found in the lagoons,” he added in a statement. “The extinction event was massive.”
“Our results show for the first time that providing suddenly large amounts of water to microorganisms — exquisitely adapted to extract meager and elusive moisture from the most hyperdry environments — will kill them from osmotic shock,” said Fairen.
http://astroengine.com/2018/11/17/water-could-kill-life-on-mars/
The importance of fungi in Antarctica
No trees grow in Antarctica today, but the team speculated that Cadophora may be related to fungi that lived on the continent 200 million years ago, when forests and swamps dominated the landscape. Researchers took soil samples from areas where petrified wood is found in Antarctica’s Allan Hills and Mount Fleming in order to isolate fungi similar to those isolated from historic area sites. However, they have yet to complete a DNA analysis that would allow them to determine the relationship of currently found fungi in Antarctica to those observed in the fossil record.
Many of the Antarctic fungi are highly pigmented, which gives them UV protection and helps them absorb solar energy. (In fact, many conservationists mistook the fungi in the cabins for soot at first, said McDonald.) The majority are also cold-tolerant rather than cold-loving: cold-tolerant organisms are capable of growth at 0 degrees Celsius, but grow optimally above 15 degrees Celsius, while true psychrophiles grow best below 15 degrees Celsius.
“If we take these fungi and put them into the lab, under the best of conditions, they can degrade [wood] quite fast. They have the potential for rapid decay,” said Blanchette. They are only limited by the degree-days above zero Celsius.
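"Degree-days above zero" is just the accumulated warmth available for growth: add up how far the daily mean temperature sits above 0°C, with sub-zero days contributing nothing. A minimal sketch of my own, with the temperatures invented for illustration :

# Degree-days above 0 Celsius: a crude measure of how much usable warmth the
# fungi get over a season. Days below freezing contribute nothing.
def degree_days(daily_mean_temps_c, base=0.0):
    return sum(max(t - base, 0.0) for t in daily_mean_temps_c)

# Invented example: a short summer spell hovering around freezing.
january_week = [-2.0, -0.5, 0.5, 1.5, 2.0, 0.0, -1.0]
print(degree_days(january_week))   # 4.0 degree-days for that week

With only a handful of degree-days on offer each year, even a fungus that can rot wood quickly in the lab gets very little done in the field.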
https://medium.com/hhmi-science-media/fungi-live-large-at-the-poles-9b172f96b38c
The trees that live underground for millennia
With most of their branches safely underground and just their leaves and perhaps some twigs poking up above the surface, these subterranean versions of their above ground ancestors are close to indestructible. Some can live for more than 10,000 years.
Of course almost all trees have roots that burrow through the soil in search of nutrients and water, and so live underground to some extent. But the underground savannah trees, sometimes called geoxylic suffrutices, are different.
Beneath flimsy leafy shoots, some grow large woody structures as much as one metre wide, while others form branched networks of stems measuring up to 10 metres across. Their shoots are so small and thin that it makes little difference to the tree if it occasionally loses them to wildfire - they can quickly regrow.
There seems to have been a global trigger around 8 million years ago that led to an increase in wildfires, the spread of savannahs and the appearance of underground trees. Scientists know that seasonal variation became more pronounced at that time. The hotter, drier dry seasons would have made wildfires more frequent, and this may have been a factor.
While fire is something grasses that use C4 photosynthesis can live with because they quickly re-sprout in the aftermath, for most trees it spells death. As a result, savannahs can almost literally fuel their own spread.
These grasses thrive in the wet season, but when they shrivel up in the dry season, they can easily catch fire in the sun, generating wildfires that will destroy some of the neighbouring forest. Come the next wet season, it's the C4 grasses that are quickest to take advantage of the space formerly occupied by forest. The savannahs grow and the forests shrink.
http://www.bbc.com/earth/story/20141103-why-some-trees-live-underground
The fungal internet of plants
The "wood wide web" :
In mycorrhizal associations, plants provide fungi with food in the form of carbohydrates. In exchange, the fungi help the plants suck up water, and provide nutrients like phosphorus and nitrogen, via their mycelia. Fungal networks also boost their host plants' immune systems. That's because, when a fungus colonises the roots of a plant, it triggers the production of defense-related chemicals. These make later immune system responses quicker and more efficient, a phenomenon called "priming". Simply plugging in to mycelial networks makes plants more resistant to disease.
We now know that mycorrhizae also connect plants that may be widely separated. Suzanne Simard of the University of British Columbia in Vancouver found one of the first pieces of evidence. She showed that Douglas fir and paper birch trees can transfer carbon between them via mycelia. Others have since shown that plants can exchange nitrogen and phosphorus as well, by the same route.
... other researchers have found evidence that plants can go one better, and communicate through the mycelia. In 2010, Ren Sen Zeng of South China Agricultural University in Guangzhou found that when plants are attacked by harmful fungi, they release chemical signals into the mycelia that warn their neighbours... Johnson found that broad bean seedlings that were not themselves under attack by aphids, but were connected to those that were via fungal mycelia, activated their anti-aphid chemical defenses. Those without mycelia did not.
Animals might also exploit the fungal internet. Some plants produce compounds to attract friendly bacteria and fungi to their roots, but these signals can be picked up by insects and worms looking for tasty roots to eat. In 2012, Morris suggested that the movement of these signalling chemicals through fungal mycelia may inadvertently advertise the plants' presence to these animals. However, she says this has not been demonstrated in an experiment.
http://www.bbc.com/earth/story/20141111-plants-have-a-hidden-internet
Saturday, 17 November 2018
Just watch the damn video
Epic. Watch it on the biggest screen you can find at a volume level liable to induce involuntary bowel movements. Don't watch it on your mobile phone or I shall personally come and spit in your eye. A strong contender for the title of "best thing I've seen all year". It might not be to everyone's taste, but those people are objectively wrong and I don't like them.
Originally shared by Frank Summers
For the past 18 months, we have been working with composer / conductor Eric Whitacre on astronomy visuals to accompany his Hubble-inspired symphony “Deep Field”. The film was just released on YouTube today, and we are traveling to Kennedy Space Center for the VIP premiere tonight.
We hope you enjoy this artistic combination of evocative music and celestial wonders. More info at deepfieldfilm.com
https://www.youtube.com/playlist?list=PL3r-Yu9CBDbxPnzCLdXqkPkr4QJwV_FDk
http://deepfieldfilm.com
Radicalisation drives itself
Google is unlikely to be trying to radicalize YouTube viewers. I don’t believe Facebook was, either. Instead, the explanation is that by serving up more extreme content, you encourage people to keep clicking, keep watching, keep spending time on site. What’s the best way to do that? Serve them something more exciting or incendiary than what they started with... The most depressing thing about the 2016 election, to me personally, wasn’t the victor (or the defeated candidate). The most depressing thing about the 2016 election was the number of people who thought a badly shot, unsourced YouTube video from whatever meatsack they personally favored constituted proof of evil activities carried out by either Donald Drumpf or Hillary Clinton.
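The mechanism is easy to caricature. Here's a toy sketch - entirely my own invention, nothing like how any real recommender works - of a greedy system that only maximises expected watch time, under the assumption that more incendiary content holds attention slightly longer :

# Toy caricature of an engagement-maximising recommender (not any real system).
# Content has an "extremity" score from 0 to 9; the assumption below is that
# watch time rises slightly with extremity.
import random

random.seed(1)

def expected_watch_time(extremity):
    # Assumed relationship, for illustration only.
    return 1.0 + 0.3 * extremity + random.gauss(0, 0.05)

def recommend_next(current_extremity):
    # Candidates: the same level, or one or two notches more extreme
    # (a crude stand-in for "related videos").
    candidates = [min(current_extremity + step, 9) for step in (0, 1, 2)]
    return max(candidates, key=expected_watch_time)

position = 2   # the user starts on fairly mild content
for video in range(8):
    position = recommend_next(position)
    print(f"after video {video + 1}: extremity level {position}")

The level ratchets steadily towards the maximum without anyone ever deciding to radicalise the user; the drift simply falls out of the objective function.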
There is an argument that lifts the burden of blame from Facebook’s shoulders by arguing that these outcomes were unpredictable or unknown. This is untrue. The study of propaganda and disinformation campaigns and how they spread is decades old. While the amount of research focused on the intersection of social media and propaganda has exploded since 2016, there were forerunners, like the Computational Propaganda project, that set out to analyze how algorithms, automation, and computational propaganda impacted public life beginning in 2012.
Facebook had no way of knowing the exact particulars of what it might unleash — but more than enough information existed to show that the company ought to behave cautiously. It did not. It was easier to focus on pushing growth than to consider where and what that growth might be coming from.
By choosing to take no action at any point during its own early boom or replacement of much of the traditional news media, it chose to advance a set of values in which truthful, accurate reporting was easily replaced with flagrant displays of bullshit. The platform was candy to those seeking to earn a buck with no regard for the truth of the information they peddled.
https://www.extremetech.com/internet/280738-no-one-wants-to-talk-about-how-completely-we-were-lied-to
Tech leaders are as flawed as the rest of us
No-one should ever be worshipped for the simple reason that everybody poops. If I had to choose, I'd go for technology developers over airhead socialite celebrities any day of the week. But you're supposed to respect people because of what they've done, not respect the things they've done because of who they are. That might be a crude definition/explanation of fanboyism, I think... a sort of positive feedback loop that results in the subject of the devotion being able to do no wrong, a manifestation of absolutist thinking.
https://astrorhysy.blogspot.com/2016/05/politically-correct-or-politically.html
Think about that quintessential tech guru, Steve Jobs. Even after his death, articles come out praising his leadership skills, his visionary forward-thinking, and his business acumen. He’s worshipped like a folk god or a saint––and if you think I’m exaggerating, then you should know his belongings are being auctioned off like capitalist relics. Articles about him open with biblical verses. His deeds are chronicled in almost hagiographic reverence. But here’s the thing––he was darn close to sociopathic.
Max Weber helps us understand why that is. Scholar Eileen Barker has applied Weber’s concept of charismatic authority to new religious movements. In her analysis, charismatic authority is particularly powerful––and capricious––because it is not bound by tradition, law, or bureaucracy as the other types of authority are. Charismatic leaders, by virtue of their own mysterious authority-granting magical charisma, get to set the rules that everyone else plays by.
http://alltop.com/viral/time-stop-worshipping-tech-leaders
Fanworm eyes are frickin' weird
On the bizarre and wonderful complexity of the fanworm vision system. I bet none of you woke up thinking you'd read about that today.
The butt eyes – pygidial eyes, to be less crude and more precise – are quite simple, and their job seems to be to find the darkest spot to build a new tube if a worm has been unhoused. The thorax eyes seem to alert worms who’ve embarrassingly exposed themselves to clamber back down into the tube.
Together with the cerebral eyes – little more than simple pigment cups now, buried inside fanworm brains – all these eyes do little more than detect light. And they are structurally and biochemically similar to the eyes of other invertebrates.
The tentacular eyes, on the other hand, are cut from very different cloth. Photoreceptors can be constructed from two types of cellular protrusions: microvilli, and cilia. A microvillus is a little finger-like cell membrane projection. Our small intestines are lined with these (they help us absorb nutrients). A cilium is a beating hair.
What seems to have happened in fanworms is that a few simple, ciliated photoreceptors employed only in sensing ambient light happened to be hanging out in the vicinity of the radiolar tentacles, minding their own business, when the fanworm nabbed them and gave them a new job as lookouts... It is possible these eyes provide only shadow-detection. But it is also possible the compound eye versions may be capable of rudimentary motion detection and image formation, and that fanworms have something like “vision” as a result.
https://blogs.scientificamerican.com/artful-amoeba/fanworms-natures-eye-factories-stick-them-pretty-much-anywhere/
Booooo !
Booooo !
Originally shared by Eli Fennell
Artificial AI 'Fingerprints' Can Fool Biometric Scanners
Despite their popularity, and a wide perception of offering strong and reliable security, biometric security technologies which measure things like fingerprints, faces, eyes, etc... are not without their vulnerabilities.
One argument against being overly concerned about biometric hacking, for everyday users, is that it would be too much trouble for most malicious actors to 'hack' the biometric data of average people. A fingerprint could be faked, for example, but the difficulty of doing so would make it worthwhile only against prime targets, such as the wealthy and powerful, who presumably would also be likely to have additional forms of security to protect them.
Fingerprint scanners, in particular, are becoming a popular and highly touted security method, even becoming standard for high end phones and computing devices. That's why it is so disconcerting that a research team at New York University Tandon and Michigan State University have been able to create 'Skeleton Key' artificial fingerprints, called MasterPrints, able to fool them at alarming rates.
Using a type of AI tool called a Generative Adversarial Network (GAN), they were able to generate artificial fingerprints which contain features sufficiently common to many human fingerprints to be able to frequently trick the scanners. In their tests on a system with a supposed false match rate of 0.1%, these MasterPrints were accepted 22.5% of the time, or more than one time in five, rather than one time in a thousand.
This means that even average, everyday people who depend on fingerprint biometric security could become viable targets for various forms of hacking and identity theft, without the culprits ever needing their real fingerprint to do it (should nefarious agents gain access to similar technologies, of course).
You can add this to the long pile of reasons why I, personally, refuse to ever use any form of biometric security. They create an appearance of absolute security where none exists, and they're the only form of security that can be permanently hacked (since, after all, you can't change your fingerprints like a PIN Code).
#AI #ArtificialIntelligence #Biometrics
https://nakedsecurity.sophos.com/2018/11/16/ai-generated-skeleton-keys-fool-fingerprint-scanners/
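A quick sanity check of my own on what the jump from 0.1% to 22.5% means for an attacker who gets a few tries, assuming (simplistically) that each attempt is independent :

# Probability that at least one of k spoof attempts is accepted, given a
# per-attempt acceptance probability p. Purely illustrative arithmetic.
def p_breach(p_accept, attempts):
    return 1.0 - (1.0 - p_accept) ** attempts

for attempts in (1, 5, 10):
    nominal = p_breach(0.001, attempts)   # the scanner's nominal 0.1% false match rate
    master = p_breach(0.225, attempts)    # the reported MasterPrint acceptance rate
    print(f"{attempts:>2} tries: nominal {nominal:.1%} vs MasterPrint {master:.1%}")

Five attempts against the nominal rate gets an attacker essentially nowhere; five attempts with a MasterPrint succeeds more often than not.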
Friday, 16 November 2018
The complex problems of using inoculation to guard against fake news
The difficulty is that once doubt settles, it is hard to dislodge it. Van der Linden and his colleagues wondered what would happen if they reached people before the nay-sayers did. They dug up a real-life disinformation campaign: the so-called Oregon Petition, which in 2007 falsely claimed that over 31,000 American scientists rejected the position that humans caused climate change.
The team prepared three documents. First, they wrote a ‘truth brief’ explaining that 97% of climate scientists agree that humans are responsible for climate change. They also prepared a ‘counter-brief’ revealing the flaws in the Oregon Petition – for instance, that among the Petition’s 31,000 names are people like the deceased Charles Darwin and the Spice Girls, and that fewer than 1% of the signatories are climate scientists.
When participants first were asked about the scientific consensus on climate change, they calculated it to be around 72% on average. But they then changed their estimates based on what they read. When the scientists provided a group with the ‘truth brief’, the average rose to 90%. For those who only read the Oregon Petition, the average sank to 63%. When a third group read them both – first the ‘truth brief’ and then the petition – the average remained unchanged from participants’ original instincts: 72%.
Enter inoculation. When a group of participants read the ‘truth brief’ and also were told that politically motivated groups could try to mislead the public on topics like climate change – the ‘vaccine’ – the calculated average rose to almost 80%. Strikingly, this was true even after receiving the Oregon Petition.
There is one great weakness of this approach: it takes a lot of time and effort to go case by case, inoculating people... if you receive the counterarguments to climate denial, you might still be vulnerable to fake news on other topics.
Fake news epistemic bubbles already use a different but related sort of inoculation : they sow distrust of any source that goes against their 'alternative facts' bullshit. This method is a bit subtler and more sophisticated : the suggestion is that these people could be using these very tools to further their agenda. Unfortunately there's no particular reason to think that fake and real news are processed any differently by the brain, as the article says :
Before believing a piece of new information, most people scrutinise it in at least five ways, they found. We usually want to know if other people believe it, if there is evidence supporting this new claim, if it fits with our previous knowledge on the matter (hence the grey-haired man, who might fit your idea of a senior citizen), if the internal argument makes sense and whether the source is credible enough.
So there's no reason to suppose that fake news can't use this same inoculation technique. Indeed by discrediting personal motivations, it already does. The second proposal in the article (which I've not quoted) of using a game to describe the general methods of manipulation techniques is better, but still suffers from the difficulty of being able to reach large numbers of people.
The best way to immunise people is through better educational systems, as that's the only common theatre of knowledge. If people are sufficiently aware of the problem, clickbait headlines lose their appeal and therefore their advertising revenue. The second best way, given the existence of people who already believe ridiculous stuff and aren't in school any more, is regulation (either by direct action against the information itself or through more subtle financial or personal controls). I believe it is possible to do this without inducing any kind of backfire or Streisand effect and without impacting freedom of thought. And if I ever get the chance to finish writing it up, I shall tell you how.
http://www.bbc.com/future/story/20181114-could-this-game-be-a-vaccine-against-fake-news