Adversarial examples are like optical (or audio) illusions for AI. By altering a handful of pixels, a computer scientist can fool a machine learning classifier into thinking, say, a picture of a rifle is actually one of a helicopter. But to you or me, the image would still look like a gun—it almost seems like the algorithm is hallucinating. – Wired