I particularly liked the subheadline on the main news page:
Tiny changes can make image recognition systems think a school bus is an ostrich, find scientists
But the examples in the actual article are just as weird:
Computers can be fooled into thinking a picture of a taxi is a dog just by changing one pixel, suggests research. The limitations emerged from Japanese work on ways to fool widely used AI-based image recognition systems. There is no quick and easy way to fix image recognition systems to stop them being fooled in this way, warn experts.
The researchers found that changing one pixel in about 74% of the test images made the neural nets wrongly label what they saw. Some errors were near misses, such as a cat being mistaken for a dog, but others, including labelling a stealth bomber a dog, were far wider of the mark.
"As far as we know, there is no data-set or network that is much more robust than others," said Mr Jiawei, from Kyushu, who led the research.
One example made by Mr Athalye and his colleagues is a 3D printed turtle that one image classification system insists on labelling a rifle.
One can only imagine the tragic-yet-hilarious battlefield disasters that would result. "Sir, the computer says the enemy snipers are armed with turtles! And the fleet of enemy aircraft are actually poodles!"
http://www.bbc.com/news/technology-41845878
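To get a feel for the kind of search involved: the actual research attacks deep convolutional networks using differential evolution, but the core idea can be sketched much more crudely. Everything below is invented for illustration — a made-up random linear "classifier" on tiny 8x8 images, attacked by exhaustively trying every pixel at two extreme intensities — not the method from the paper.

```python
import numpy as np

# Toy stand-in for an image classifier: a fixed random linear model over
# a flattened 8x8 greyscale "image" with two classes. (The real work
# attacks deep CNNs with differential evolution; this exhaustive search
# is only a sketch of the same idea.)
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 64))  # hypothetical weights for classes 0 and 1

def classify(img):
    """Return the predicted class (0 or 1) for an 8x8 image."""
    return int(np.argmax(W @ img.flatten()))

def one_pixel_attack(img):
    """Try every pixel position at two extreme intensities, looking for
    a single-pixel change that flips the classifier's prediction."""
    original = classify(img)
    for i in range(8):
        for j in range(8):
            for value in (0.0, 1.0):
                adversarial = img.copy()
                adversarial[i, j] = value
                if classify(adversarial) != original:
                    return i, j, value  # one pixel was enough
    return None  # this image happens to resist one-pixel changes

img = rng.random((8, 8))
attack = one_pixel_attack(img)  # (row, col, value) if found, else None
```

For any given image the brute-force search may or may not find a flip; the striking result in the actual paper is that, against real networks, roughly 74% of test images could be flipped this way.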
I've not read the original work yet but I imagine it's not a random pixel that's changed but one in a specific region. I also suspect this applies to one frame of a video, so no temporal information is used.
Oliver Hamilton I gave a lecture this week in which I showed that the addition of a single data point could completely overturn the conclusion of a fairly controversial problem in astrophysics. But that relies on a single well-chosen errant point in a very small data set (~30 data points).
It seems that temporal information is not used, but the algorithm is very consistent in identifying the turtle as a rifle:
youtube.com - Synthesizing Robust Adversarial Examples: Adversarial Turtle
Interesting problem. Thankfully, humans can't be fooled by changing a single pixel. Still, this is a serious issue and needs to be fixed.
Reminds me of an old (apocryphal?) story about using AI to detect camouflaged tanks hidden in trees. After getting almost 100% success in tests, it failed miserably with real data. Then they realized all the sample hidden tanks had been photographed on sunny days and the sample false data on cloudy days. They'd spent millions training a computer to identify nice weather.
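Whether or not the tank story is true, the failure mode it describes — a label perfectly confounded with a nuisance feature in the training set — is easy to reproduce. This is a hypothetical toy sketch, not a reconstruction of any real system: the "tank" is a small bright blob, "sunny weather" is an overall brightness shift, and the learner is a simple nearest-centroid classifier.

```python
import numpy as np

# Toy illustration of dataset bias: in training, the label is perfectly
# correlated with a nuisance feature (overall brightness, standing in
# for sunny vs cloudy), so a simple learner latches onto brightness
# instead of the object. All names and numbers here are made up.
rng = np.random.default_rng(1)

def make_images(n, tank, bright):
    """8x8 images: 'tank' adds a fixed 2x2 blob; 'bright' shifts all pixels."""
    imgs = rng.random((n, 8, 8)) * 0.2
    if tank:
        imgs[:, 3:5, 3:5] += 0.5   # the actual object of interest
    if bright:
        imgs += 0.4                # the confound: sunny weather
    return imgs.reshape(n, -1)

# Training set: every tank photo is sunny, every non-tank photo is cloudy.
X_train = np.vstack([make_images(50, tank=True, bright=True),
                     make_images(50, tank=False, bright=False)])
y_train = np.array([1] * 50 + [0] * 50)

# Nearest-centroid classifier, about the simplest learner there is.
c1 = X_train[y_train == 1].mean(axis=0)
c0 = X_train[y_train == 0].mean(axis=0)

def predict(X):
    d1 = np.linalg.norm(X - c1, axis=1)
    d0 = np.linalg.norm(X - c0, axis=1)
    return (d1 < d0).astype(int)

# Break the correlation: tanks photographed on cloudy days.
X_test = make_images(50, tank=True, bright=False)
test_acc = predict(X_test).mean()   # fraction still recognised as tanks
```

The model scores essentially perfectly on the training set yet collapses on cloudy-day tanks, because the brightness shift dominates the distance calculation — it has learned the weather, not the tank.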
Al Hunt Although, I recently read this: gwern.net - The Neural Net Tank Urban Legend - Gwern.net
Oliver Hamilton Yeah. Thought so. That's why I had (apocryphal?) in my telling above :-)
Udhay Shankar More fuel for the fire. ;-)
1. Neural networks are not AI.
2. Neural networks overfit.
3. Neural networks are black-box machines.
The hype about NNs is just that: hype.