According to the BBC, there is growing concern in the machine-learning community that, as its algorithms are deployed in the real world, they can be easily confused by knowledgeable attackers. Because these algorithms don't process information the way humans do, a small sticker placed strategically on a sign could render it invisible to a self-driving car.
The article points out that a sticker on a stop sign "is enough for the car to 'see' the stop sign as something completely different from a stop sign," and researchers have created an online collection of images that currently fool AI systems. "In one project, published in October, researchers at Carnegie Mellon University built a pair of glasses that can subtly mislead a facial recognition system — making the computer confuse actress Reese Witherspoon for Russell Crowe."
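The attacks described above all exploit the same weakness: because these models respond to raw pixel statistics rather than human-meaningful features, a tiny, carefully chosen change to every pixel can flip a classification. A minimal sketch of the idea, using a hypothetical linear "stop sign" detector (everything here is a toy illustration for intuition, not the actual systems the article describes):

```python
# Toy linear classifier: a positive score means "stop sign",
# a negative score means something else. The weights are hypothetical.
D = 100
w = [(-1) ** i for i in range(D)]  # toy model weights, alternating +/-1

def predict(x):
    """Dot product of weights and 'pixels'; the sign is the decision."""
    return sum(wi * xi for wi, xi in zip(w, x))

# A clean "image": pixel values near 0.5, weakly aligned with the model,
# so the detector confidently scores it positive.
x = [0.5 + 0.02 * wi for wi in w]

# Fast-gradient-sign-style attack: shift every pixel by only 3% of the
# 0..1 range, in the direction that most lowers the score. For a linear
# model that direction is simply the sign of each weight.
eps = 0.03
x_adv = [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

print(predict(x))      # 2.0  -> classified as a stop sign
print(predict(x_adv))  # -1.0 -> no longer seen as a stop sign
```

The per-pixel change (0.03) is far smaller than the pixel values themselves, yet the score flips sign, because the tiny shifts all push the dot product the same way and their effect accumulates across every pixel — the same leverage that lets a small sticker or a pair of glasses defeat a far larger vision model.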
One computer-science academic says that, unlike with a spam blocker, "if you're relying on the vision system in a self-driving car to know where to go and not crash into anything, then the stakes are much higher," adding ominously that "The only way to completely avoid this is to have a perfect model that is right all the time." Although on the plus side, "If you're some political dissident inside a repressive regime and you want to be able to conduct activities without being targeted, being able to avoid automated surveillance techniques based on machine learning would be a positive use."
Read more of this story at Slashdot.