M.S. Researcher · Vision & Learning Lab, UNIST · South Korea
3D Human Avatars · Gaussian Splatting · Vision–Language Reasoning
I'm a Master's researcher at the Vision & Learning Lab, UNIST, advised by Prof. Seungryul Baek and Prof. Binod Bhattarai. My research focuses on 3D human avatar reconstruction, 3D Gaussian Splatting, and vision–language reasoning for hand understanding.
My most recent first-author work, FlexiAvatar (ECCV 2026, under review), proposes a visibility-aware 3D Gaussian avatar framework that reconstructs animatable avatars from monocular video across full-body, upper-body, and head-only settings within a single unified pipeline. Other contributions span hand-pose benchmarking (CVPR 2026), hand–object mesh generation (CVPR Findings 2026), real-time two-hand manipulation (AAAI 2025), and generative replay for continual detection (CVPR 2024 Highlight).
📍 Based in Ulsan, South Korea · ✉️ Open to collaborations and PhD discussions.
| Research Area | Topics |
|---|---|
| 🧍 3D Human Avatars | 3D Gaussian Splatting, NeRF, animatable avatar reconstruction |
| 🎯 Visibility-Aware Optimization | Partial-view reconstruction, occlusion-robust modeling |
| 🦴 Parametric Body Models | SMPL-X, FLAME, differentiable rendering, multi-view geometry |
| ✋ Hand Understanding | Hand pose estimation, 3D hand–object interaction |
| 🎨 Generative Models | Diffusion models for texture completion, novel view synthesis |
| 👁️ Vision–Language Models | Spatial reasoning, multimodal benchmarks |
FlexiAvatar · First Author · ECCV 2026 (Under Review)
A visibility-aware optimization framework that restricts Gaussian splatting supervision to observed body regions, eliminating hallucinated geometry and texture drift under partial-view inputs. Achieves state-of-the-art results on NeuMan, ZJU-MoCap, TalkShow, and INSTA, with ~3% PSNR gain, ~50% memory reduction, and ~34% faster rendering for head-only avatars.
Highlights: Occlusion-robust SMPL-X tracking · Triplane-conditioned hybrid mesh–Gaussian representation · Part-specific residual MLPs for face/hands · Diffusion-based texture completion · Otsu-based adaptive visibility thresholding.
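The core idea behind the visibility-aware supervision above can be illustrated with a small sketch: compute an adaptive visibility threshold via Otsu's method, then restrict the photometric loss to pixels above it. This is a hypothetical NumPy illustration of the general technique, not the paper's implementation; the function names and the simple per-pixel L1 loss are my own assumptions.

```python
import numpy as np

def otsu_threshold(scores: np.ndarray, bins: int = 256) -> float:
    """Otsu's method: pick the threshold that maximizes between-class
    variance over a histogram of visibility scores in [0, 1]."""
    hist, edges = np.histogram(scores, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)                       # weight of low-visibility class
    w1 = 1.0 - w0                           # weight of high-visibility class
    cum_mean = np.cumsum(p * centers)
    mu_total = cum_mean[-1]
    mu0 = cum_mean / np.maximum(w0, 1e-12)  # mean of each class, guarded
    mu1 = (mu_total - cum_mean) / np.maximum(w1, 1e-12)
    between = w0 * w1 * (mu0 - mu1) ** 2    # between-class variance
    return float(centers[np.argmax(between)])

def visibility_masked_l1(rendered: np.ndarray,
                         target: np.ndarray,
                         visibility: np.ndarray) -> float:
    """Supervise only pixels whose visibility exceeds the adaptive
    threshold, so unobserved regions contribute no gradient."""
    tau = otsu_threshold(visibility.ravel())
    mask = visibility > tau
    if not mask.any():
        return 0.0
    return float(np.abs(rendered - target)[mask].mean())
```

In this sketch, pixels below the Otsu threshold (e.g. body parts cropped out of a head-only video) are simply excluded from the loss, which is one way to avoid supervising, and thus hallucinating, geometry that was never observed.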
* denotes equal contribution
| Year | Venue | Title | Role |
|---|---|---|---|
| 2026 | ECCV (under review) | Unified 3D Gaussian Human Avatars Under Arbitrary Body Visibility | First Author |
| 2026 | CVPR | HandVQA: Diagnosing Fine-Grained Spatial Reasoning Failures in VLMs via Hand Pose QA | Co-author |
| 2026 | CVPR Findings | THOM: Generating Physically Plausible Hand-Object Meshes From Text | Co-author |
| 2025 | AAAI | QORT-Former: Query-Optimized Real-time Transformer for Two-Hand Manipulation | Co-author |
| 2024 | CVPR 🏆 Highlight | SDDGR: Stable Diffusion-based Deep Generative Replay for Class Incremental Object Detection | Co-author |
| 2024 | ICASSP | Class-Wise Buffer Management for Incremental Object Detection | Co-author |
| 2023 | IEIE | Reducing Data Imbalance for Object Detection | Co-author |
🏆 CVPR 2024 Highlight — accepted in the top 2.8% of submissions (324 / 11,532).
🏛️ Ulsan National Institute of Science & Technology (UNIST) | Ulsan, South Korea
- 🎓 M.S. in Computer Science & Engineering — Sep 2024 – Present (Expected Aug 2026) · GPA: 4.1 / 4.3 (98/100)
- 🎓 B.S. in Computer Science & Engineering — Sep 2020 – Jun 2024 · Cum Laude
- 🥇 Korean Government Graduate School Scholarship — Full tuition + stipend (M.S.)
- 🥇 UNIST Dream Scholarship — Full tuition + stipend (B.S.)
- 🌟 CVPR 2024 Highlight Paper — Top 2.8% of all submissions (324 / 11,532)
Core ML & Programming
3D Vision & Rendering
Generative & Multimodal Models
Data & Tooling
- Teaching Assistant — Introduction to AI Programming in Python (Spring 2025, Fall 2025, Spring 2026)
- Teaching Assistant — Discrete Mathematics (Fall 2024, Fall 2025)
I'm always happy to discuss research, collaborations, or PhD opportunities in 3D vision, human avatars, and generative models.
- ✉️ Email: yihalemyimolal@unist.ac.kr
- 🌐 Homepage: yihalem1.github.io
- 💼 LinkedIn: Yihalem Yimolal Tiruneh
- 📚 Google Scholar: Publications
“Reconstructing the visible world, one Gaussian at a time.”