Sister blog of Physicists of the Caribbean in which I babble about non-astronomy stuff, because everyone needs a hobby

Monday, 19 March 2018

Learning by pure correlation produces bizarre and inexplicable results

AI does not understand the difference between correlation and causation, or anything much at all, really.

This data-driven approach means they can make spectacular blunders, such as that time a neural network concluded a 3D-printed turtle was, in fact, a rifle. The programs can’t think conceptually, along the lines of “it has scales and a shell, so it could be a turtle”. Instead, they think in terms of patterns – in this case, visual patterns in pixels. Consequently, altering a single pixel in an image can tip the scales from a sensible answer to one that’s memorably weird.
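To make the single-pixel point concrete, here's a minimal sketch of a brute-force one-pixel attack. Everything in it is a stand-in: the "classifier" is a toy linear model with random weights rather than a real network, and the "images" are random noise. It's not the attack from the turtle/rifle research, just an illustration of the idea – for inputs sitting near the decision boundary, nudging a single pixel to an extreme value is enough to flip the label.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained network: a linear two-class scorer over a
# flattened 28x28 "image". The weights are random; nothing here is a real model.
n_pixels = 28 * 28
w = rng.normal(size=n_pixels)

def predict(image):
    """Classify a flat pixel vector in [0, 1] as class 0 or 1."""
    return int(image @ w > 0)

def one_pixel_flip(image):
    """Brute force: push each pixel to an extreme; return a flip if found."""
    label = predict(image)
    for i in range(n_pixels):
        for value in (0.0, 1.0):
            candidate = image.copy()
            candidate[i] = value
            if predict(candidate) != label:
                return i, value
    return None

# Count how many random images can be re-labelled by editing one pixel.
flipped = 0
trials = 200
for _ in range(trials):
    image = rng.uniform(size=n_pixels)
    if one_pixel_flip(image) is not None:
        flipped += 1

print(f"{flipped}/{trials} random images changed class after a 1-pixel edit")
```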

Neural networks don’t have language skills, so they can’t explain to you what they’re doing or why. And like all AI, they don’t have any common sense. A few decades ago, Rich Caruana applied a neural network to some medical data. It included things like symptoms and their outcomes, and the intention was to calculate each patient’s risk of dying on any given day, so that doctors could take preventative action. It seemed to work well, until one night a grad student at the University of Pittsburgh noticed something odd. He was crunching the same data with a simpler algorithm, whose decision-making logic he could read line by line. One of those lines read, roughly, “asthma is good for you if you have pneumonia”.

“We asked the doctors and they said ‘oh that’s bad, you want to fix that’,” says Caruana. Asthma is a serious risk factor for developing pneumonia, since both affect the lungs. They’ll never know for sure why the machine learnt this rule, but one theory is that when patients with a history of asthma begin to get pneumonia, they get to a doctor fast, and that may artificially bump up their survival rates in the data.
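That kind of confounding is easy to reproduce. Below is a toy reconstruction with invented numbers – nothing like Caruana's actual data, and assuming scikit-learn is available. Asthma flags patients for faster, more aggressive care; the care is what drives survival; and a readable depth-one decision tree, trained only on the asthma flag, duly "learns" that asthma predicts a lower death rate.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 10_000

# Invented population: 20% of patients have asthma.
asthma = rng.random(n) < 0.2

# Hidden confounder: asthmatics get rushed into aggressive care far more often.
aggressive_care = np.where(asthma, rng.random(n) < 0.9, rng.random(n) < 0.3)

# Care is what actually drives survival; asthma only acts through care.
death = np.where(aggressive_care, rng.random(n) < 0.05, rng.random(n) < 0.20)

# Train only on what the hospital recorded: the asthma flag itself.
X = asthma.reshape(-1, 1).astype(int)
tree = DecisionTreeClassifier(max_depth=1).fit(X, death)

# The model's "logic", readable line by line, says asthma lowers risk.
print(export_text(tree, feature_names=["asthma"], show_weights=True))
print("death rate, asthma   :", death[asthma].mean())
print("death rate, no asthma:", death[~asthma].mean())
```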
http://www.bbc.com/capital/story/20180316-why-a-robot-wont-steal-your-job-yet

2 comments:

  1. ... actually, we don't hear much about it these days, but frame-based AI handles causal connections and attribute-based reasoning. Frames look like semantic networks; in fact that's kinda where they got their start. (A sketch of the idea follows after the comments.)

  2. Ha, I didn't know that one of the core concepts of the solution I'm trying to design was developed so long ago... I'm definitely on the right track, at least if I manage to sort out the pointer-overload madness: it solves some issues, but requires radical instance management and a custom memory allocation layer.

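As a footnote to the first comment: here is a minimal sketch of what a frame looks like, with invented class and slot names rather than any particular historical frame system. Slots hold attributes, and an is-a link to a parent frame gives inheritance – the attribute-based reasoning the comment mentions. Unlike pixel-pattern matching, the answer to the turtle question comes with a traceable reason attached.

```python
class Frame:
    """A frame: named slots plus an optional parent for is-a inheritance."""

    def __init__(self, name, parent=None, **slots):
        self.name = name
        self.parent = parent
        self.slots = slots

    def get(self, slot):
        # Attribute-based reasoning: look locally, then follow the is-a link.
        if slot in self.slots:
            return self.slots[slot]
        if self.parent is not None:
            return self.parent.get(slot)
        raise KeyError(f"{self.name} has no slot {slot!r}")

# A tiny semantic network: turtle is-a reptile is-a animal.
animal = Frame("animal", covering="skin", can_move=True)
reptile = Frame("reptile", parent=animal, covering="scales")
turtle = Frame("turtle", parent=reptile, has_shell=True)

print(turtle.get("covering"))   # 'scales', inherited from reptile
print(turtle.get("can_move"))   # True, inherited from animal
print(turtle.get("has_shell"))  # True, stored on turtle itself
```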

