I particularly liked the subheadline on the main news page:
Tiny changes can make image recognition systems think a school bus is an ostrich, find scientists
But the examples in the actual article are just as weird:
Computers can be fooled into thinking a picture of a taxi is a dog just by changing one pixel, suggests research. The limitations emerged from Japanese work on ways to fool widely used AI-based image recognition systems. There is no quick and easy way to fix image recognition systems to stop them being fooled in this way, warn experts.
The researchers found that changing one pixel in about 74% of the test images made the neural nets wrongly label what they saw. Some errors were near misses, such as a cat being mistaken for a dog, but others, including labelling a stealth bomber a dog, were far wider of the mark.
"As far as we know, there is no data-set or network that is much more robust than others," said Mr Jiawei, from Kyushu, who led the research.
One example made by Mr Athalye and his colleagues is a 3D-printed turtle that one image classification system insists on labelling as a rifle.
One can only imagine the tragic-yet-hilarious disasters this would cause on the battlefield. "Sir, the computer says the enemy snipers are armed with turtles! And the fleet of enemy aircraft are actually poodles!"
http://www.bbc.com/news/technology-41845878
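For anyone wondering how you fool a network without seeing its internals: as I understand it, the Kyushu attack is a black-box search over which single pixel to change and what colour to give it, run with differential evolution. Here is a rough sketch of that idea, not the authors' actual code; classify() is a hypothetical stand-in for the real model, and the search parameters are illustrative.

```python
# Sketch of a one-pixel attack (illustrative, not the authors' actual code).
# classify(image) is assumed to return a vector of class probabilities for an
# RGB image given as a (height, width, 3) uint8 numpy array.
import numpy as np
from scipy.optimize import differential_evolution

def one_pixel_attack(image, true_label, classify):
    height, width, _ = image.shape

    def perturb(z):
        x, y, r, g, b = z
        modified = image.copy()
        modified[int(y), int(x)] = [int(r), int(g), int(b)]
        return modified

    # The attacker wants the confidence in the correct label to collapse,
    # so the objective to minimise is simply that confidence.
    def confidence_in_truth(z):
        return classify(perturb(z))[true_label]

    # Search over pixel position and colour with differential evolution:
    # no gradients, no access to the network's internals, just its outputs.
    bounds = [(0, width - 1), (0, height - 1), (0, 255), (0, 255), (0, 255)]
    result = differential_evolution(confidence_in_truth, bounds,
                                    maxiter=75, popsize=20, seed=0)

    adversarial = perturb(result.x)
    return adversarial, int(np.argmax(classify(adversarial)))
```

The notable part is that the search only ever queries the output probabilities; it never looks inside the network.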
I've not read the original work yet but I imagine it's not a random pixel that's changed but one in a specific region. I also suspect this applies to one frame of a video, so no temporal information is used.
Oliver Hamilton I gave a lecture this week in which I showed that the addition of a single data point could completely destroy a fairly controversial problem in astrophysics. But that relies on a single well-chosen errant pixel in a very small data set (~30 data points).
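As a toy version of that effect (invented numbers, roughly the same sample size), one well-placed outlier is enough to drag a least-squares slope a long way:

```python
# Toy illustration: in a sample of ~30 points, a single well-chosen errant
# point substantially shifts an ordinary least-squares fit. Data are invented.
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(0.0, 10.0, 30)
y = 2.0 * x + 1.0 + rng.normal(0.0, 1.0, size=30)   # underlying slope of 2

slope_clean, _ = np.polyfit(x, y, 1)

# Append one wildly discrepant point at the end of the x range.
x_bad = np.append(x, 10.0)
y_bad = np.append(y, -40.0)
slope_bad, _ = np.polyfit(x_bad, y_bad, 1)

print(f"slope without the outlier: {slope_clean:.2f}")   # close to 2
print(f"slope with one outlier:    {slope_bad:.2f}")     # roughly halved
```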
It seems that temporal information is not used, but the algorithm is very consistent in identifying the turtle as a rifle:
youtube.com - Synthesizing Robust Adversarial Examples: Adversarial Turtle
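That consistency across viewpoints is the whole trick in the Athalye et al. paper, which they call Expectation Over Transformation: instead of optimising the perturbation for one image, you optimise it for the average loss over a whole distribution of random transformations. A minimal sketch of the idea, assuming a hypothetical differentiable classifier called model and using 2D rotations as a stand-in for the paper's 3D rendering pipeline:

```python
# Sketch of Expectation Over Transformation (EOT): optimise a small perturbation
# so that the target class survives many random transformations, not just one view.
# `model` is a hypothetical differentiable classifier taking (B, C, H, W) tensors;
# real EOT uses a 3D renderer, here simple 2D rotations stand in for pose changes.
import torch
import torchvision.transforms.functional as TF

def eot_attack(model, image, target_class, steps=200, lr=0.01, eps=0.05, samples=8):
    delta = torch.zeros_like(image, requires_grad=True)
    optimiser = torch.optim.Adam([delta], lr=lr)
    target = torch.tensor([target_class])

    for _ in range(steps):
        losses = []
        for _ in range(samples):
            # Sample a random transformation and classify the transformed image.
            angle = float(torch.empty(1).uniform_(-30.0, 30.0))
            view = TF.rotate((image + delta).unsqueeze(0), angle)
            losses.append(torch.nn.functional.cross_entropy(model(view), target))

        # Minimise the expected loss over transformations, approximated by the mean.
        loss = torch.stack(losses).mean()
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()

        # Keep the perturbation small so the object still looks like a turtle.
        with torch.no_grad():
            delta.clamp_(-eps, eps)

    return (image + delta).detach()
```

As I understand it, folding the rendering into that expectation is why the printed turtle keeps reading as a rifle from almost any angle.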
Interesting problem. Thankfully, humans can't be fooled by changing a single pixel. Still, this is a serious issue and needs to be fixed.
Reminds me of an old (apocryphal?) story about using AI to detect camouflaged tanks hidden in trees. Getting almost 100% success in tests, it failed miserably on real data. Then they realized all the sample images of hidden tanks had been taken on sunny days and the tank-free samples on cloudy days. They'd spent millions training a computer to identify nice weather.
Al Hunt Although I recently read this: gwern.net - The Neural Net Tank Urban Legend
Oliver Hamilton Yeah. Thought so. That's why I had (apocryphal?) in my telling above :-)
Udhay Shankar More fuel for the fire. ;-)
1. Neural networks are not AI.
2. Neural networks overfit.
3. Neural networks are black box machines.
The hype about NNs is just that: hype.
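For what it's worth, the overfitting point is easy to demonstrate even without a neural network; here a high-degree polynomial stands in for an over-flexible model (invented data, purely illustrative):

```python
# Toy overfitting demonstration: an over-flexible model (a degree-12 polynomial,
# standing in for an over-parameterised network) fits the training noise and
# generalises worse than a straight line. Data are invented; the true relation
# is linear. NumPy may emit a RankWarning for the high degree, which is rather
# the point.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.sort(rng.uniform(0.0, 1.0, 15))
y_train = 2.0 * x_train + rng.normal(0.0, 0.1, 15)
x_test = np.sort(rng.uniform(0.0, 1.0, 200))
y_test = 2.0 * x_test + rng.normal(0.0, 0.1, 200)

for degree in (1, 12):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE {train_mse:.4f}, test MSE {test_mse:.4f}")
```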