Official PyTorch implementation of the paper "Glasses: Enabling Fast Environment-aware Few-Shot Learning via Device-Cloud Collaboration".
🎉🎉 Congratulations! Our paper "Glasses: Enabling Fast Environment-aware Few-Shot Learning via Device-Cloud Collaboration" has been accepted to the ACM Web Conference 2026 (WWW '26)!
The paper introduces the Focura dataset, which is available at Focura. You will also need to download the ImageNet dataset and place it in the appropriate directory. Specify the paths to both datasets by setting `edge_data_path` and `data_path` in `lib.set_args`.
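As a minimal sketch of how those paths might be exposed (the function body below is an assumption for illustration; only `edge_data_path` and `data_path` come from the README), `lib.set_args` could look like:

```python
import argparse

def set_args(argv=None):
    """Hypothetical sketch of lib.set_args: collect the dataset paths.

    Only the two argument names come from the README; the defaults and
    structure here are illustrative, not the actual implementation.
    """
    parser = argparse.ArgumentParser(description="Glasses: device-cloud few-shot learning")
    # Path to the Focura (edge/device) dataset; point this at your local copy.
    parser.add_argument("--edge_data_path", type=str, default="./data/focura")
    # Path to the ImageNet dataset.
    parser.add_argument("--data_path", type=str, default="./data/imagenet")
    return parser.parse_args(argv)
```

For example, `set_args(["--data_path", "/mnt/imagenet"])` would override only the ImageNet path.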
For the model backbone, we support a variety of pre-trained models, including those hosted on Hugging Face. If a model is hosted on Hugging Face, the code downloads it automatically. For other models, such as DINO or iBOT, please download the checkpoints from their original releases (links below).
The following models are supported:

- `vit_base`: `google/vit-base-patch16-224`
- `vit_tiny`: `facebook/deit-tiny-patch16-224`
- `deit_base`: `facebook/deit-base-distilled-patch16-224`
- `deit_small`: `facebook/deit-small-distilled-patch16-224`
- `swin_tiny`: `microsoft/swin-tiny-patch4-window7-224`
- `swin_small`: `microsoft/swin-small-patch4-window7-224`
- `iBotvit_small`: https://lf3-nlp-opensource.bytetos.com/obj/nlp-opensource/archive/2022/ibot/vits_16/checkpoint.pth
- `Dinovit_small`: https://dl.fbaipublicfiles.com/dino/dino_deitsmall16_pretrain/dino_deitsmall16_pretrain_full_checkpoint.pth

Download and prepare the Focura and ImageNet datasets, and ensure they are placed in the correct locations on your system.
Once the datasets are ready, run the main script to train or deploy the model:

```shell
python main.py
```

This command runs model training or inference depending on the configuration settings.