Welcome to PaperHub! This is the algorithm-reproduction repository of the Lab4AI-Hub community, dedicated to providing high-quality reproductions of AI algorithms.

We warmly welcome everyone who is passionate about AI to join our contributors! We have designed a clear, standardized collaboration workflow and offer generous compute rewards.
1. **Required first step:** carefully read our detailed contributor guide. It covers the full workflow, the reproduction standards, and the reward rules.
2. Pick a topic from our list of papers awaiting reproduction, or propose a paper you are interested in. Either way, download and fill out the "2-Paper Screening Form".
3. Once the form is complete, go to our Issues section, choose the matching template, and submit your formal application to start your reproduction journey!
| Paper & Authors | Venue & Year | Paper Link | Try on Platform |
|---|---|---|---|
| Attention Is All You Need (Ashish Vaswani, et al.) | NeurIPS 2017 | 📄 arXiv | ➡️ Try it now |
| Can We Get Rid of Handcrafted Feature Extractors? (Lei Su, et al.) | AAAI 2025 | 📄 arXiv | ➡️ Try it now |
| MOMENT: A Family of Open Time-series Foundation Models (Mononito Goswami, et al.) | ICML 2025 | 📄 arXiv | ➡️ Try it now |
| Automatic Prompt Augmentation and Selection with Chain-of-Thought from Labeled Data (Kashun Shum, et al.) | EMNLP 2023 | 📄 arXiv | ➡️ Try it now |
| Chronos: Learning the Language of Time Series (Abdul Fatir Ansari, et al.) | Other 2024 | 📄 arXiv | ➡️ Try it now |
| Generative Photography: Scene-Consistent Camera Control for Realistic Text-to-Image Synthesis (Yu Yuan, et al.) | CVPR 2025 | 📄 arXiv | ➡️ Try it now |
| PhotoDoodle: Learning Artistic Image Editing from Few-Shot Pairwise Data (Shijie Huang, et al.) | ICCV 2025 | 📄 arXiv | ➡️ Try it now |
| Self-Instruct: Aligning Language Models with Self-Generated Instructions (Yizhong Wang, et al.) | ACL 2023 | 📄 arXiv | ➡️ Try it now |
| RobustSAM: Segment Anything Robustly on Degraded Images (Wei-Ting Chen, et al.) | CVPR 2024 | 📄 arXiv | ➡️ Try it now |
| Side Adapter Network for Open-Vocabulary Semantic Segmentation (Mengde Xu, et al.) | CVPR 2023 | 📄 arXiv | ➡️ Try it now |
| Improving day-ahead Solar Irradiance Time Series Forecasting by Leveraging Spatio-Temporal Context (Oussama Boussif, et al.) | NeurIPS 2023 | 📄 arXiv | ➡️ Try it now |
| Lag-Llama: Towards Foundation Models for Probabilistic Time Series Forecasting (Kashif Rasul, et al.) | Other 2023 | 📄 arXiv | ➡️ Try it now |
| CoTracker3: Simpler and Better Point Tracking by Pseudo-Labelling Real Videos (Nikita Karaev, et al.) | CVPR 2024 | 📄 arXiv | ➡️ Try it now |
| Unified Training of Universal Time Series Forecasting Transformers (Gerald Woo, et al.) | ICML 2024 | 📄 arXiv | ➡️ Try it now |
| A decoder-only foundation model for time-series forecasting (Abhimanyu Das, et al.) | ICML 2024 | 📄 arXiv | ➡️ Try it now |
| Timer: Generative Pre-trained Transformers Are Large Time Series Models (Yong Liu, et al.) | ICML 2024 | 📄 arXiv | ➡️ Try it now |
| LamRA: Large Multimodal Model as Your Advanced Retrieval Assistant (Yikun Liu, et al.) | Other 2024 | 📄 arXiv | ➡️ Try it now |
We have screened and compiled a detailed list of papers awaiting reproduction. It is not only our work plan, but also a blueprint we invite you to help build.

If a project on the roadmap interests you, or you would like to recommend a new paper, please start a discussion in our Issue list.