I'm an AI engineer and open-source enthusiast studying at Shanghai Jiao Tong University (SJTU), School of Artificial Intelligence. My core focus is Deep Learning, Recommendation Systems, and Full-Stack AI application engineering — building things that work end-to-end, not just in notebooks.
My guiding philosophy comes from a Chinese statesman's wisdom:
"功成不必在我，但功成必定有我" — Success need not be mine to claim, but success must carry my contribution.
I believe open-source is the lever that multiplies human creativity. Every commit, every PR, every well-documented repo is a brick in a cathedral someone else may finish — and that's enough reason to build carefully.
- 🧬 Deep Learning — model architecture, training dynamics, TF↔PyTorch migration, GPU-accelerated inference
- 🎯 Recommendation Systems — ranking models, embedding pipelines, and retrieval-augmented approaches
- 🌐 Full-Stack Engineering — production AI apps with Next.js, TypeScript, Docker, and cloud APIs
- 🎓 SJTU AI Coursework — sharing notes and implementations so the next person climbs faster
| Project | What it does | Stack |
|---|---|---|
| ai-virtual-tryon | Photorealistic virtual try-on in the browser via IDM-VTON + Replicate. Async polling, FSM state machine, Docker deploy. | Next.js 15 · TypeScript · Replicate API · Docker |
| bert-attention-tf2pytorch | Faithful PyTorch port of HuggingFace TFBertSelfAttention — shape semantics, masking conventions, and gradient preservation all verified. | PyTorch 2.0 · TensorFlow 2 · pytest |
| minibtc-from-scratch | Bitcoin consensus stack in pure Python: Secp256k1 ECC, ECDSA verify, double-SHA256 PoW mining. Zero runtime dependencies. | Python 3.9+ · stdlib only |
| pygcn-gpu-accelerated | Drop-in GPU replacement for tkipf/pygcn: >50× faster adjacency normalization, >4× faster inference, numerically equivalent. | PyTorch · CUDA · Sparse Tensor |
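The double-SHA256 proof-of-work loop at the heart of minibtc-from-scratch can be sketched in pure-stdlib Python. This is a minimal illustration, not the repo's actual API — the header bytes, 4-byte little-endian nonce, and leading-zero-bits difficulty encoding are simplifying assumptions:

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    # Bitcoin's hash function: SHA-256 applied twice
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header: bytes, difficulty_bits: int, max_nonce: int = 2**32):
    # Search for a nonce whose double-SHA256 digest, read big-endian,
    # falls below the target (i.e. has `difficulty_bits` leading zero bits).
    target = 1 << (256 - difficulty_bits)
    for nonce in range(max_nonce):
        digest = double_sha256(header + nonce.to_bytes(4, "little"))
        if int.from_bytes(digest, "big") < target:
            return nonce
    return None  # exhausted the nonce space

nonce = mine(b"example header", difficulty_bits=12)
```

Real Bitcoin headers encode the target as a compact 4-byte "bits" field rather than a zero-bit count, but the search loop is exactly this: hash, compare, increment.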
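The adjacency normalization that pygcn-gpu-accelerated speeds up is the GCN renormalization trick, Â = D̂^(-1/2)(A + I)D̂^(-1/2). A dense NumPy sketch of just the math (the function name and dense layout are illustrative — the repo itself operates on sparse CUDA tensors):

```python
import numpy as np

def normalize_adjacency(adj: np.ndarray) -> np.ndarray:
    # GCN renormalization: A_hat = D^{-1/2} (A + I) D^{-1/2}
    a_tilde = adj + np.eye(adj.shape[0])   # add self-loops
    deg = a_tilde.sum(axis=1)              # degrees (always >= 1 after self-loops)
    d_inv_sqrt = deg ** -0.5
    # scale rows and columns by D^{-1/2} via broadcasting
    return a_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
```

Expressing the same computation with sparse GPU operations avoids the O(n²) dense product on large graphs — broadly, that is where the headline speedup comes from.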
---
Sharing course materials from Shanghai Jiao Tong University's AI curriculum to help fellow learners.
Repos in this series cover: Deep Learning fundamentals · Computer Vision · NLP & Transformers · Graph Neural Networks · Distributed Systems
Feel free to ⭐ star, fork, and build on top of them — that's the whole point.