Sister blog of Physicists of the Caribbean in which I babble about non-astronomy stuff, because everyone needs a hobby

Friday, 3 November 2017

A school bus is not an ostrich

I particularly liked the subheadline on the main news page :

Tiny changes can make image recognition systems think a school bus is an ostrich, find scientists

But the examples in the actual article are just as weird :

Computers can be fooled into thinking a picture of a taxi is a dog just by changing one pixel, suggests research. The limitations emerged from Japanese work on ways to fool widely used AI-based image recognition systems. There is no quick and easy way to fix image recognition systems to stop them being fooled in this way, warn experts.

The researchers found that changing one pixel in about 74% of the test images made the neural nets wrongly label what they saw. Some errors were near misses, such as a cat being mistaken for a dog, but others, including labelling a stealth bomber a dog, were far wider of the mark.

"As far as we know, there is no data-set or network that is much more robust than others," said Mr Jiawei, from Kyushu, who led the research.

One example, made by Anish Athalye and colleagues at MIT, is a 3D-printed turtle that one image classification system insists on labelling a rifle.

One can only imagine the tragic-yet-hilarious battlefield disasters this would cause. "Sir, the computer says the enemy snipers are armed with turtles ! And the fleet of enemy aircraft are actually poodles !"
http://www.bbc.com/news/technology-41845878
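For the curious, the underlying idea is easy to sketch. The Kyushu study searched for the fooling pixel with differential evolution; the toy version below just tries random single-pixel changes against a generic classifier. The `model.predict` call and the image format are illustrative assumptions rather than any particular framework's API.

```python
# Rough sketch of a one-pixel attack by blind random search. The actual study
# used differential evolution to choose the pixel and colour; this is just the
# crudest possible version of the same idea.
# `model.predict` is an assumed interface, not any specific library's API.
import numpy as np

def one_pixel_attack(model, image, true_label, trials=1000, seed=None):
    """Try random single-pixel edits; return the first that changes the label."""
    rng = np.random.default_rng(seed)
    height, width, _ = image.shape               # assumes an H x W x 3 uint8 image
    for _ in range(trials):
        candidate = image.copy()
        x, y = rng.integers(0, width), rng.integers(0, height)
        candidate[y, x] = rng.integers(0, 256, size=3)     # new random RGB value
        probs = model.predict(candidate[None, ...])[0]     # assumed (1, n_classes) output
        if probs.argmax() != true_label:
            return candidate, (x, y), int(probs.argmax())  # fooled: return the evidence
    return None                                  # nothing found within the budget
```

A real attack is smarter about which pixel and colour to try, which is presumably how the researchers reached such a high success rate, but the principle is the same : a tiny, targeted change exploiting a brittle decision boundary rather than anything a human would ever notice.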

8 comments:

  1. I've not read the original work yet but I imagine it's not a random pixel that's changed but one in a specific region. I also suspect this applies to one frame of a video, so no temporal information is used.

  2. Oliver Hamilton I gave a lecture this week in which I showed that the addition of a single data point could completely overturn the conclusions on a fairly controversial problem in astrophysics. But that relies on a single well-chosen errant data point in a very small data set (~30 points).

    It seems that temporal information is not used, but the algorithm is very consistent in identifying the turtle as a rifle :
    youtube.com - Synthesizing Robust Adversarial Examples: Adversarial Turtle

  3. Interesting problem. Thankfully, humans can't be fooled by changing a single pixel. Still, this is a serious issue and needs to be fixed.

  4. Reminds me of an old (apocryphal?) story about using AI to detect camouflaged tanks hidden in trees. It got almost 100% success in tests, then failed miserably on real data. Eventually they realised that all the sample images containing hidden tanks had been taken on sunny days, while the tank-free samples had been taken on cloudy days. They'd spent millions training a computer to identify nice weather. (There's a toy sketch of this failure mode after the comment thread.)

  5. Oliver Hamilton Yeah. Thought so. That's why I had (apocryphal?) in my telling above :-)

  6. Udhay Shankar More fuel for the fire. ;-)

  7. 1. Neural networks are not AI.
    2. Neural networks overfit.
    3. Neural networks are black-box machines.

    Hype about neural networks is just that: hype.

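The "camouflaged tanks" anecdote above, apocryphal or not, is a nice illustration of a classifier latching onto a spurious cue. Below is a deliberately silly toy version with entirely synthetic, hypothetical data : every tank photo in the training set is sunny and every tank-free photo is cloudy, so a "classifier" that only looks at mean brightness aces the training set and then collapses on unbiased data.

```python
# Toy version of the "trained to recognise nice weather" failure mode.
# All data here is synthetic and hypothetical: brightness stands in for the
# weather, a grey rectangle stands in for the tank.
import numpy as np

rng = np.random.default_rng(0)

def make_image(tank, sunny):
    base = 0.8 if sunny else 0.3                      # brightness tracks the weather
    img = np.clip(base + 0.05 * rng.standard_normal((32, 32)), 0.0, 1.0)
    if tank:
        img[12:20, 10:22] = 0.5                       # crude "tank"
    return img

# Biased training set: every tank photographed on a sunny day.
train = [(make_image(tank=True,  sunny=True),  1) for _ in range(50)] + \
        [(make_image(tank=False, sunny=False), 0) for _ in range(50)]

predict = lambda img: int(img.mean() > 0.55)          # "bright means tank"

train_acc = np.mean([predict(img) == label for img, label in train])
print(f"training accuracy: {train_acc:.0%}")          # ~100%, purely from the weather

# Unbiased test set: tanks on cloudy days, empty scenes on sunny days.
test = [(make_image(tank=True,  sunny=False), 1) for _ in range(50)] + \
       [(make_image(tank=False, sunny=True),  0) for _ in range(50)]
test_acc = np.mean([predict(img) == label for img, label in test])
print(f"test accuracy: {test_acc:.0%}")               # ~0%, the spurious cue inverts
```

Whether or not the tank story ever actually happened, this is exactly the kind of shortcut a real network can learn whenever the training data carries a hidden correlation with the label.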

