diff --git a/_includes/sections/about.html b/_includes/sections/about.html
index e2d66c9..810b270 100644
--- a/_includes/sections/about.html
+++ b/_includes/sections/about.html
@@ -113,8 +113,8 @@
Visit 30 states in the US (34/30)
❄️AK, 🌉CA, 🏂 CO, 📃 CT, 🐼DC, 1⃣ DE, 🍊 FL, 🍑 GA, 🌋HI, 💨IL, 🏁IN, 🚜IA, 🏇KY, 🔮MA, 🐢MD, 🚘MI, ♈ MO, 🌟MN, ✈ NC, 🐍 NH, 💡NJ, 🏜️NV, 🗽NY, 🏈 OH, 🌹 OR, 🔔 PA, 🌊 RI, 🌴 SC, 🎸 TN, 🗼TX, 🚬 VA, 🍺WI, ☔WA, 🗻 WV.
-Visit 30 National Parks in the US (12/30)
- Indiana Dunes NP, Yosemite NP, Mount Rainier NP, North Cascades NP, Olympic NP, Great Smoky Mountains NP, Gateway Arch NP, Rocky Mountain NP, Mammoth Cave NP, Congaree NP, New River Gorge NP, Shenandoah NP
+Visit 30 National Parks in the US (14/30)
+ Indiana Dunes NP, Yosemite NP, Mount Rainier NP, North Cascades NP, Olympic NP, Great Smoky Mountains NP, Gateway Arch NP, Rocky Mountain NP, Mammoth Cave NP, Congaree NP, New River Gorge NP, Shenandoah NP, Crater Lake NP, Redwood NSP
Visit 30 Other National Park Service Sites in the US (51/30)
Boston NHP, Manhattan Project NHP, Harpers Ferry NHP, First State NHP, Minute Man NHP, Independence NHP, Lewis & Clark NHP, San Francisco Maritime NHP, Hopewell Culture NHP, Golden Gate NRA, Ross Lake NRA, Lake Chelan NRA, Big South Fork NRNA, Gauley River NRA, Herbert Hoover NHS, Edgar Allan Poe NHS, Gloria Dei Church NHS, Lincoln Home NHS, Ulysses S. Grant NHS, Boston African American NHS, Fort Point NHS, Saugus Iron Works NHS, Salem Maritime NHS, Statue of Liberty NM, Fort Pulaski NM, Muir Woods NM, Fort McHenry NM, Florissant Fossil Beds NM, Lewis & Clark NHT, Washington-Rochambeau Revolutionary Route NHT, Star-Spangled Banner NHT, Juan Bautista de Anza NHT, Sleeping Bear Dunes NL, Ice Age NST, Appalachian NST, Point Reyes NS, Obed WSR, Bluestone NSR, Korean War Veterans Memorial, Lincoln Memorial, Pullman Memorial, Pearl Harbor Memorial, Vietnam Veterans Memorial, White House, Alcatraz Island, Presidio of San Francisco, Washington Monument, World War II Memorial,
Wing Luke Museum Affiliated Area, Blue Ridge Parkway, Baltimore-Washington Parkway
-Visit 30 Airports (42/30)
-KORD, KJFK, ZSHC, ZSPD, KATL, KDTW, ZYHB, ZBTJ, ZBAA, ZJSY, ZLXY, KLAX, KSFO, KLEX, KEWR, KLGA, VHHH, KMIA, KSJC, KMCO, KMSP, KLAN, KDFW, KDEN, ZSAM, KIAH, KAUS, KBWI, KDCA, KSEA, KCID, KSLC, KLAS, KHNL, KIAD, KBRL, ZGGG, ZGSZ, VVTS, ZSYT, KBOS, KBDL
+Visit 30 Airports (44/30)
+KORD, KJFK, ZSHC, ZSPD, KATL, KDTW, ZYHB, ZBTJ, ZBAA, ZJSY, ZLXY, KLAX, KSFO, KLEX, KEWR, KLGA, VHHH, KMIA, KSJC, KMCO, KMSP, KLAN, KDFW, KDEN, ZSAM, KIAH, KAUS, KBWI, KDCA, KSEA, KCID, KSLC, KLAS, KHNL, KIAD, KBRL, ZGGG, ZGSZ, VVTS, ZSYT, KBOS, KBDL, KPDX, CYVR
Take flights with 30 Airlines (18/30)
HU, CA, DL, AA, UA, CZ, MU, JD, CX, MF, KA, F9, NK, WN, AS, AC, 9K, NK
{{ publication.desc.detail }}
- {{ publication.date.detail }} -|
+
+ Haozheng Luo*, Chenghao Qiu*, Maojiang Su, Zhihan Zhou, Zoe Mehta, Guo Ye, Jerry Yao-Chieh Hu, Han Liu
+ International Conference on Machine Learning (ICML) 2025
+ paper / code / model
+ GERM is a genomic foundation model optimized for low-resource settings: by removing outliers, it improves low-rank adaptation and quantization, achieving up to 64.34% efficiency gains and 37.98% better fine-tuning performance over baseline models.
+
+ Jiahao Yu*, Haozheng Luo*, Jerry Yao-Chieh Hu, Yan Chen, Wenbo Guo, Han Liu, Xinyu Xing
+ USENIX Security Symposium (USENIX Security) 2025
+ paper / code
+ Mind the Inconspicuous shows that appending multiple end-of-sequence (EOS) tokens triggers context segmentation in aligned LLMs, shifting inputs toward refusal boundaries and enabling jailbreaks, with up to 16× higher attack success rates across 16 models and major APIs such as OpenAI and Anthropic.
+
+ Haozheng Luo*, Chenghao Qiu*, Yimin Wang, Shang Wu, Jiahao Yu, Han Liu, Binghui Wang, Yan Chen
+ Preprint, 2025
+ paper / code / datasets
+ GenoArmory is the first unified adversarial attack benchmark for Genomic Foundation Models (GFMs), offering a comprehensive framework and the GenoAdv dataset to evaluate model vulnerabilities across architectures, quantization, and tasks, revealing that classification GFMs are more robust than generative ones and that attacks often target biologically meaningful regions.
+
+ Haozheng Luo*, Jiahao Yu*, Wenxin Zhang*, Jialong Li, Jerry Yao-Chieh Hu, Yan Chen, Binghui Wang, Xinyu Xing, Han Liu
+ Workshop on MemFM @ ICML 2025
+ paper / code
+ We propose a low-resource method to align LLMs for safety by distilling alignment-relevant knowledge from well-aligned models and identifying essential components via delta debugging, enabling plug-and-play integration into unaligned LLMs.
+
+ Haoyu He*, Haozheng Luo*, Yan Chen, Qi R. Wang
+ Workshop on Efficient Systems for Foundation Models III @ ICML 2025
+ paper
+ RHYTHM is a framework that uses hierarchical temporal tokenization and frozen LLMs to efficiently model human mobility, achieving 2.4% higher accuracy (5.0% on weekends) and 24.6% faster training by capturing spatio-temporal dependencies with reduced sequence lengths and enriched prompt embeddings.
+
+ Yu Zhang*, Mei Di*, Haozheng Luo*, Chenwei Xu, Richard Tzong-Han Tsai
+ Information Systems, Volume 133, 2025
+ paper / code
+ SMUTF is a schema matching framework that combines rule-based features, pre-trained and generative LLMs with novel “generative tags” to enable effective cross-domain matching, achieving up to 11.84% F1 and 5.08% AUC gains over SOTA, with the new HDXSM dataset released to support large-scale open-domain schema matching.
+
+ Zhenyu Pan, Haozheng Luo, Manling Li, Han Liu
+ International Conference on Learning Representations (ICLR) 2025
+ paper / code
+ CoA is a Chain-of-Action framework for multimodal and retrieval-augmented QA that decomposes complex questions into reasoning steps with plug-and-play retrieval actions, reducing hallucinations and token usage while improving reasoning and factual accuracy across benchmarks and a Web3 case study.
+
+ Jerry Yao-Chieh Hu*, Pei-Hsuan Chang*, Haozheng Luo*, Hong-Yu Chen, Weijian Li, Wei-Po Wang, Han Liu
+ International Conference on Machine Learning (ICML) 2024
+ paper / code / model
+ We debut OutEffHop, an outlier-efficient modern Hopfield model derived from associative memory models, providing robust outlier reduction for large transformer-based models.
+
+ Shang Wu*, Yen-Ju Lu*, Haozheng Luo*, Jerry Yao-Chieh Hu, Jiayi Wang, Najim Dehak, Jesus Villalba, Han Liu
+ Workshop on Efficient Systems for Foundation Models II @ ICML 2024
+ paper
+ SpARQ is an outlier-free SpeechLM framework that replaces attention with a stabilized layer to mitigate performance drops from cross-modal low-rank adaptation and quantization, achieving 41% and 45% relative improvements respectively, plus 1.33× faster training on OPT-1.3B across ASR, TTS, and multi-modal tasks.
+
+ Haozheng Luo*, Ruiyang Qin*, Chenwei Xu, Guo Ye, Zening Luo
+ IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) 2023
+ paper / code
+ We introduce a robotic agent that combines video recognition and language models to assist users through language-based interactions in video scenes, showing improved human-robot interaction efficiency and achieving 2–3% gains over benchmark methods.
+
+ Haozheng Luo, Tianyi Wu, Feiyu Han, Zhijun Yan
+ IEEE International Conference on Machine Learning and Applications (ICMLA) 2022
+ paper / code
+ IGN is a distributional reinforcement learning model that integrates GAN-based quantile regression with IQN, achieving state-of-the-art performance and risk-sensitive policy optimization across 57 Atari games.
+
+ Haozheng Luo, Ningwei Liu, Charles Feng
+ Future of Information and Communication Conference (FICC) 2022
+ paper
+ We present a Deep Contextualized Transformer model that enhances QA classification by handling aberrant expressions, achieving up to 83.1% accuracy on the SQuAD and SwDA datasets and outperforming prior models for industry-level QA tasks.
+