{"payload":{"pageCount":1,"repositories":[{"type":"Public","name":"Expedit-DINO","owner":"Expedit-LargeScale-Vision-Transformer","isFork":false,"description":"[NeurIPS2022] This is the official implementation of the paper \"Expediting Large-Scale Vision Transformer for Dense Prediction without Fine-tuning\" on DINO and MaskDINO.","allTopics":[],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":0,"issueCount":0,"starsCount":8,"forksCount":1,"license":"Apache License 2.0","participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2023-10-29T03:04:29.532Z"}},{"type":"Public","name":"Expedit-SAM","owner":"Expedit-LargeScale-Vision-Transformer","isFork":false,"description":"[NeurIPS2022] This is the official implementation of the paper \"Expediting Large-Scale Vision Transformer for Dense Prediction without Fine-tuning\" for SAM.","allTopics":[],"primaryLanguage":{"name":"Jupyter Notebook","color":"#DA5B0B"},"pullRequestCount":0,"issueCount":0,"starsCount":81,"forksCount":8,"license":"Apache License 2.0","participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2023-10-29T03:03:47.265Z"}},{"type":"Public","name":".github-private","owner":"Expedit-LargeScale-Vision-Transformer","isFork":false,"description":"","allTopics":[],"primaryLanguage":null,"pullRequestCount":0,"issueCount":0,"starsCount":0,"forksCount":0,"license":null,"participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2023-10-23T13:14:31.859Z"}},{"type":"Public","name":".github","owner":"Expedit-LargeScale-Vision-Transformer","isFork":false,"description":"","allTopics":[],"primaryLanguage":null,"pullRequestCount":0,"issueCount":0,"starsCount":0,"forksCount":0,"license":null,"participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2023-09-04T04:26:26.421Z"}},{"type":"Public","name":"Expedit-Mask2Former","owner":"Expedit-LargeScale-Vision-Transformer","isFork":false,"description":"[NeurIPS2022] This is the official implementation of the paper \"Expediting Large-Scale Vision Transformer for Dense Prediction without Fine-tuning\" on Mask2Former.","allTopics":[],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":0,"issueCount":0,"starsCount":5,"forksCount":0,"license":"Other","participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2023-09-04T04:19:15.577Z"}},{"type":"Public","name":"Expedit-Video-Swin-Transformer","owner":"Expedit-LargeScale-Vision-Transformer","isFork":false,"description":"[NeurIPS2022] This is the official implementation of the paper \"Expediting Large-Scale Vision Transformer for Dense Prediction without Fine-tuning\" on Video Swin Transformer.","allTopics":[],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":0,"issueCount":0,"starsCount":2,"forksCount":0,"license":"Apache License 2.0","participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2023-09-02T13:55:22.934Z"}},{"type":"Public","name":"Expedit-OneFormer","owner":"Expedit-LargeScale-Vision-Transformer","isFork":false,"description":"[NeurIPS2022] This is the official implementation of the paper \"Expediting Large-Scale Vision Transformer for Dense Prediction without Fine-tuning\" on OneFormer.","allTopics":[],"primaryLanguage":{"name":"Jupyter Notebook","color":"#DA5B0B"},"pullRequestCount":0,"issueCount":0,"starsCount":1,"forksCount":0,"license":"MIT 
License","participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2023-09-02T11:57:24.208Z"}},{"type":"Public","name":"Expedit-Segmenter","owner":"Expedit-LargeScale-Vision-Transformer","isFork":false,"description":"[NeurIPS2022] This is the official implementation of the paper \"Expediting Large-Scale Vision Transformer for Dense Prediction without Fine-tuning\" on Segmenter.","allTopics":[],"primaryLanguage":{"name":"Jupyter Notebook","color":"#DA5B0B"},"pullRequestCount":0,"issueCount":1,"starsCount":5,"forksCount":0,"license":"Apache License 2.0","participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2023-08-23T16:51:02.144Z"}},{"type":"Public","name":"Expedit-SWAG","owner":"Expedit-LargeScale-Vision-Transformer","isFork":false,"description":"[NeurIPS2022] This is the official implementation of the paper \"Expediting Large-Scale Vision Transformer for Dense Prediction without Fine-tuning\" on SWAG.","allTopics":[],"primaryLanguage":{"name":"Jupyter Notebook","color":"#DA5B0B"},"pullRequestCount":0,"issueCount":0,"starsCount":3,"forksCount":0,"license":"Other","participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2023-03-23T16:04:16.861Z"}},{"type":"Public","name":"Expedit-DPT","owner":"Expedit-LargeScale-Vision-Transformer","isFork":false,"description":"[NeurIPS2022] This is the official implementation of the paper \"Expediting Large-Scale Vision Transformer for Dense Prediction without Fine-tuning\" on DPT.","allTopics":[],"primaryLanguage":{"name":"Python","color":"#3572A5"},"pullRequestCount":0,"issueCount":2,"starsCount":10,"forksCount":0,"license":"MIT License","participation":null,"lastUpdated":{"hasBeenPushedTo":true,"timestamp":"2023-03-23T16:01:53.781Z"}}],"repositoryCount":10,"userInfo":null,"searchable":true,"definitions":[],"typeFilters":[{"id":"all","text":"All"},{"id":"public","text":"Public"},{"id":"source","text":"Sources"},{"id":"fork","text":"Forks"},{"id":"archived","text":"Archived"},{"id":"template","text":"Templates"}],"compactMode":false},"title":"Expedit-LargeScale-Vision-Transformer repositories"}