Does this DN4 network contain a pre-training stage? #18
Hello, this is great work; thank you for open-sourcing it.
I have noticed that many few-shot learning networks include a pre-training stage, and sometimes fine-tuning during classification. However, I have not found any code related to pre-training here. Does DN4 need pre-training, or is it trained from scratch on just a few labeled samples?

Comments
Thanks. Hope this can help you!
Thank you for your reply! May I ask why DN4 can achieve such a great result without pre-training? Are there features of DN4 that make it possible to learn quickly from a small number of samples without fine-tuning? Or did you find in your experiments that satisfactory classification performance can be obtained without pre-training?
You are welcome. As mentioned in our paper, one key reason is that we employ much richer, non-summarized local representations to represent both the query image and the support class. This can be seen as a natural form of data augmentation, which especially benefits the few-shot setting. On the other hand, the image-to-class measure can make full use of these local representations owing to the exchangeability of visual patterns. You can simply run our code, or use our latest implementation in ADM from https://github.com/WenbinLee/ADM.git.
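For illustration, here is a minimal sketch of the image-to-class measure described above: each local descriptor of the query is matched to its k nearest neighbours among the pooled local descriptors of a support class, and all matched similarities are summed. This is not the repository's exact code; the function name, tensor shapes, and the choice k=3 are assumptions.

```python
# A minimal sketch of the image-to-class (k-NN over local descriptors)
# measure; shapes and k are illustrative assumptions, not the repo's code.
import torch
import torch.nn.functional as F

def image_to_class_similarity(query_feat, support_feats, k=3):
    """query_feat: (C, H, W) feature map of one query image.
    support_feats: (N, C, H, W) feature maps of the N support images
    of one class. Returns a scalar image-to-class similarity."""
    C = query_feat.shape[0]
    # Flatten into local descriptors: (HW, C) for the query and a
    # pooled (N*HW, C) set for the whole support class.
    q = query_feat.flatten(1).t()                                  # (HW, C)
    s = support_feats.flatten(2).permute(0, 2, 1).reshape(-1, C)   # (N*HW, C)
    # Cosine similarity between every query descriptor and every
    # support descriptor of the class.
    q = F.normalize(q, dim=1)
    s = F.normalize(s, dim=1)
    sim = q @ s.t()                                                # (HW, N*HW)
    # Keep each query descriptor's k nearest neighbours in the support
    # pool, then Sum them all into one image-to-class score.
    topk, _ = sim.topk(k, dim=1)                                   # (HW, k)
    return topk.sum()
```

Computing this score against every candidate class and taking the argmax gives the prediction. Because a class is represented by a pool of local descriptors rather than a single summarized vector, each support image contributes many matching points, which is the "natural data augmentation" effect mentioned above.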
Thanks a lot for your answers! I've tried DN4 on my dataset and it achieves very promising results. Although DN4 performs very well on my dataset, I am wondering how I can further improve the performance. So far I have tried adding a Transformer block to adjust the feature maps returned by the feature extractor, but this Transformer block does not help the overall accuracy. I think maybe this specific Transformer block is ineffective. Could you please give me some suggestions on using Transformers to enhance the performance of DN4? Or could you share some recommended Transformer literature with potential for enhancing DN4? A sketch of my setup follows.
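For concreteness, a hypothetical sketch of the kind of block this comment describes: self-attention applied over the H*W local descriptors of a feature map before the image-to-class measure. The class name, layer count, and hyper-parameters (including dim=64, which assumes a Conv-64F-style backbone) are all illustrative assumptions, not the commenter's actual code.

```python
# A hypothetical Transformer block over the local descriptors of a
# feature map; all names and hyper-parameters here are assumptions.
import torch
import torch.nn as nn

class FeatureMapTransformer(nn.Module):
    """Self-attention over the H*W spatial locations of a feature map."""
    def __init__(self, dim=64, heads=4, layers=1):
        super().__init__()
        block = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=4 * dim,
            batch_first=True)
        self.encoder = nn.TransformerEncoder(block, num_layers=layers)

    def forward(self, x):                          # x: (B, C, H, W)
        B, C, H, W = x.shape
        tokens = x.flatten(2).permute(0, 2, 1)     # (B, HW, C): one token per location
        tokens = self.encoder(tokens)              # contextualize the local descriptors
        return tokens.permute(0, 2, 1).reshape(B, C, H, W)
```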
It's my pleasure. I am glad that DN4 works on your dataset! Hope this can help you.
Thank you so much for the previous support! One more question: I found that the image-to-class similarity scores a query gets for the different classes are very close to each other, i.e., the similarity list looks flat. Is this normal?
Yes, it's a normal situation. Because DN4 uses a Sum operation to aggregate all the local similarities for a query image, the similarity list becomes flat. Fortunately, the subsequent Softmax operation will make the similarity list somewhat sharper. Also, if you want to explicitly sharpen the similarity list, you may use a temperature or a mean/weighted-average operation. Hope this can help you.
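A minimal sketch of the sharpening option mentioned above: dividing the Sum-aggregated image-to-class scores by a temperature before the Softmax. The temperature value and the example scores are illustrative assumptions.

```python
# Temperature-scaled Softmax over Sum-aggregated image-to-class scores;
# the temperature value is an illustrative assumption.
import torch
import torch.nn.functional as F

def class_probabilities(similarities, temperature=0.1):
    """similarities: (num_classes,) Sum-aggregated image-to-class scores
    for one query. A temperature below 1 sharpens the distribution."""
    return F.softmax(similarities / temperature, dim=0)

# Example: a flat similarity list becomes a much sharper distribution.
scores = torch.tensor([10.2, 9.8, 10.5])
print(F.softmax(scores, dim=0))       # plain Softmax: still fairly flat
print(class_probabilities(scores))    # temperature-scaled: clearly peaked
```

A mean or weighted average over the local similarities plays a similar role, since it also rescales the scores that the Softmax sees.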