What do you call it when researchers design a specific blur to layer on top of an image to fool an image classifier into classifying, e.g., a picture of a fox as a car?
This technique is commonly referred to as an adversarial example or adversarial attack. The deliberately designed "blur" layered onto the image is called an adversarial perturbation, and the overall attack you describe, where the altered image fools the classifier at prediction time, is an evasion attack, often informally called a fooling attack.
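Below is a minimal sketch of how such a perturbation can be crafted, using the Fast Gradient Sign Method (FGSM) in PyTorch. The choice of model, the epsilon value, and the `img`/`label` placeholders are illustrative assumptions rather than details from the question.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Assumed target classifier: any torchvision model would work the same way.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def fgsm_attack(image: torch.Tensor, true_label: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Return an adversarially perturbed copy of `image`.

    image:      input tensor of shape (1, 3, H, W) with pixel values in [0, 1]
                (ImageNet normalization omitted for brevity)
    true_label: tensor holding the correct class index
    epsilon:    maximum per-pixel change, i.e. the strength of the "blur" layer
    """
    image = image.clone().detach().requires_grad_(True)

    # Forward pass and loss with respect to the *correct* label.
    logits = model(image)
    loss = F.cross_entropy(logits, true_label)

    # Backward pass: gradient of the loss with respect to the input pixels.
    loss.backward()

    # Step each pixel in the direction that increases the loss,
    # then clamp back to a valid image range.
    perturbation = epsilon * image.grad.sign()
    adversarial = (image + perturbation).clamp(0.0, 1.0)
    return adversarial.detach()

# Usage (assumed: `img` is a preprocessed fox photo, `label` its true class index):
# adv_img = fgsm_attack(img, label)
# print(model(adv_img).argmax(dim=1))  # may now predict a different class
```

Note that FGSM as written is untargeted: it only pushes the prediction away from the true class. A targeted variant, which steers the output toward a specific class such as "car", would instead take a gradient step that minimizes the loss toward that target label.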
Besides a fooling attack, are there other kinds of attacks that can be used against image classifiers? Specifically with regard to self-driving cars, since the risks of failure are higher there. What can we do to prevent these attacks in the future?