Hello. I am looking for a possible defense against backdoor attacks. I've read this interesting and promising research, but I'm still confused about a few points:
Why can distillation with a pruned model as the teacher purify a poisoned model? Do you have more detailed insights?
Have you tried larger models and datasets?
There is an attack against pruning-based defenses (it prunes during the training period, which is unrealistic in the real world). What do you think of attacks that are specifically designed to defeat pruning?
Looking forward to your reply.
Hi, thanks for your interest in our work. Our responses to your questions are as follows:
First, we would like to point out that the teacher model used in NAD is not a pruned model (as described in your question); it is the backdoored model after fine-tuning (see Figure 1 in our paper). The effectiveness of NAD comes mainly from the regularization and integration of attention maps. We have provided an intuitive analysis (Section 4.3), experimental results comparing the defense effect on feature maps versus attention maps (Table 8), and a feature-visualization comparison across different attention operations (Figure 11). We also believe a thorough reading of the full paper would benefit your understanding of NAD.
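For concreteness, the attention-map distillation idea can be sketched as follows. This is a minimal NumPy illustration under the common attention-transfer formulation (collapse channels with an element-wise power, then match L2-normalized maps between teacher and student); it is not the authors' implementation, and the function names and the choice `p=2` are my own assumptions.

```python
import numpy as np

def attention_map(feat, p=2):
    """Collapse a (C, H, W) feature map into a normalized spatial attention vector.

    Channels are combined by summing |activation|**p, so spatial locations
    that many channels respond to strongly dominate the map.
    """
    amap = (np.abs(feat) ** p).sum(axis=0)  # (H, W) spatial attention
    v = amap.reshape(-1)
    return v / (np.linalg.norm(v) + 1e-8)   # L2-normalize for comparability

def nad_layer_loss(feat_teacher, feat_student, p=2):
    """Per-layer distillation loss: L2 distance between normalized attention maps."""
    return np.linalg.norm(
        attention_map(feat_teacher, p) - attention_map(feat_student, p)
    )
```

In training, a loss like this would be computed at several layers and added to the clean-data cross-entropy, pulling the student's attention toward the (fine-tuned) teacher's and away from trigger-related activations.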
Table 2 includes a variety of WRN architecture combinations for the teacher and student models. For other datasets, please check the results in our newly published paper on ABL (which also includes further results for NAD).
In my opinion, pruning-based defenses achieve promising results, as shown in the ANP paper. Designing an effective and efficient attack against pruning-based defenses therefore remains an open problem.
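For reference, the core step behind many activation-based pruning defenses (in the spirit of Fine-Pruning) can be sketched as: rank channels by their mean absolute activation on clean inputs and zero out the least-active ones, since backdoor triggers tend to rely on neurons that stay dormant on clean data. This is a hypothetical NumPy sketch; the function name, `prune_ratio`, and the channel layout are my assumptions, not code from ANP or this repository.

```python
import numpy as np

def prune_dormant_channels(weights, clean_activations, prune_ratio=0.2):
    """Zero out the output channels least activated by clean data.

    weights           : (C_out, C_in, kH, kW) conv kernel
    clean_activations : (N, C_out, H, W) activations recorded on clean inputs
    prune_ratio       : fraction of channels to prune
    """
    # Mean absolute activation per output channel, averaged over
    # the clean batch and all spatial positions.
    mean_act = np.abs(clean_activations).mean(axis=(0, 2, 3))
    k = int(len(mean_act) * prune_ratio)
    pruned = weights.copy()
    if k > 0:
        dormant = np.argsort(mean_act)[:k]  # least-activated channels
        pruned[dormant] = 0.0               # prune by zeroing their kernels
    return pruned
```

The attack the question mentions works precisely by forcing the trigger to use highly active (clean-behaving) neurons during training, which is why a ranking like the one above can be evaded.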