This is a PyTorch implementation of ALOHA proposed by our paper "Adaptive Location Hierarchy Learning for Long-Tailed Mobility Prediction".
Abstract: Human mobility prediction is crucial for applications ranging from location-based recommendations to urban planning, which aims to forecast users' next location visits based on historical trajectories. While existing mobility prediction models excel at capturing sequential patterns through diverse architectures for different scenarios, they are hindered by the long-tailed distribution of location visits, leading to biased predictions and limited applicability. This highlights the need for a solution that enhances the long-tailed prediction capabilities of these models with broad compatibility and efficiency across diverse architectures. To address this need, we propose the first architecture-agnostic plugin for long-tailed human mobility prediction, named Adaptive LOcation HierArchy learning (ALOHA). Inspired by Maslow's theory of human motivation, we exploit and explore common mobility knowledge of head and tail locations derived from human mobility trajectories to effectively mitigate long-tailed bias. Specifically, we introduce an automatic pipeline to construct city-tailored location hierarchies based on Large Language Models (LLMs) and Chain-of-Thought (CoT) prompts, capturing high-level mobility semantics with minimal human verification. We further design an Adaptive Hierarchical Loss (AHL) that rebalances learning through Gumbel disturbance and node-wise adaptive weighting, enabling both exploitation of multi-level signals and exploration within semantically related groups. Extensive experiments across multiple state-of-the-art models demonstrate that ALOHA consistently improves long-tailed mobility prediction performance by up to 16.59% while maintaining efficiency and robustness. Our code is at https://github.com/Star607/ALOHA.
The framework of our proposed ALOHA. (a) City-tailored Hierarchy Generation: hierarchies are automatically constructed for any city using LLMs with CoT prompts, requiring minimal human verification for reliability. (b) Adaptive Hierarchical Optimization: based on constructed hierarchies, logits from arbitrary prediction architectures are transformed into probability trees via Gumbel-Softmax, with node-wise adaptive weights yielding the Adaptive Hierarchical Loss.
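To make the Adaptive Hierarchical Optimization step concrete, below is a minimal, hedged sketch of what an Adaptive Hierarchical Loss could look like for a two-level hierarchy (leaf locations grouped under parent nodes). This is not the official ALOHA implementation: the function name, the two-level structure, and the per-node weight tensors are illustrative assumptions. It shows the two ingredients named above: a Gumbel disturbance on the backbone logits before the softmax, and node-wise adaptive weights applied to the negative log-likelihood at each hierarchy level.

```python
import torch
import torch.nn.functional as F

def adaptive_hierarchical_loss(logits, leaf_target, leaf_to_group,
                               leaf_weight, group_weight, tau=1.0):
    """Illustrative two-level AHL sketch (not the paper's exact formulation).

    logits:        (B, L) raw scores over leaf locations from any backbone
    leaf_target:   (B,)   ground-truth leaf-location indices
    leaf_to_group: (L,)   parent-group index of each leaf in the hierarchy
    leaf_weight:   (L,)   node-wise adaptive weights for leaf nodes
    group_weight:  (G,)   node-wise adaptive weights for group nodes
    """
    # Gumbel disturbance: perturb logits before the softmax so that
    # tail locations and their semantic siblings are also explored.
    gumbel = -torch.log(-torch.log(torch.rand_like(logits) + 1e-10) + 1e-10)
    p_leaf = F.softmax((logits + gumbel) / tau, dim=-1)              # (B, L)

    # Aggregate leaf probabilities up the tree to their parent groups.
    B, L = p_leaf.shape
    G = int(leaf_to_group.max().item()) + 1
    p_group = torch.zeros(B, G).index_add_(1, leaf_to_group, p_leaf)  # (B, G)

    group_target = leaf_to_group[leaf_target]                         # (B,)

    # Negative log-likelihood at each level, rebalanced by node-wise weights.
    nll_leaf = -torch.log(
        p_leaf.gather(1, leaf_target[:, None]).squeeze(1) + 1e-10)
    nll_group = -torch.log(
        p_group.gather(1, group_target[:, None]).squeeze(1) + 1e-10)
    loss = (leaf_weight[leaf_target] * nll_leaf
            + group_weight[group_target] * nll_group).mean()
    return loss
```

Because the loss only consumes logits and a leaf-to-node mapping, a sketch like this can be attached to any of the backbones below (Graph-Flashback, STHGCN, MCLP, Diff-POI) without touching their architectures.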
To ease environment configuration, we list the versions of our hardware and software:
- Hardware:
  - GPU: NVIDIA RTX A6000
  - CUDA: 11.7
  - Driver Version: 525.105.17
  - CPU: Intel(R) Xeon(R) Gold 5318Y
- Software:
  - Python: 3.9.7
  - PyTorch: 2.0.0+cu117
You can install the dependencies by running pip install -r requirements.txt.
You can reproduce all ALOHA experiments by running the following bash scripts:
# run on Graph-Flashback
cd ./Graph-Flashback
bash run_ALOHA.sh # test our method
bash run_baseline.sh # test long-tailed baselines
# run on STHGCN
cd ./STHGCN
bash run_ALOHA.sh # test our method
bash run_baseline.sh # test long-tailed baselines
# run on MCLP
cd ./MCLP/model
bash run_ALOHA.sh # test our method
bash run_baseline.sh # test long-tailed baselines
# run on Diff-POI
cd ./Diff-POI
bash run_ALOHA.sh # test our method
bash run_baseline.sh # test long-tailed baselines
@inproceedings{wang2026aloha,
  title={Adaptive Location Hierarchy Learning for Long-Tailed Mobility Prediction},
  author={Yu Wang and Junshu Dai and Yuchen Ying and Hanyang Yuan and Zunlei Feng and Tongya Zheng and Mingli Song},
  booktitle={Proceedings of the ACM Web Conference},
  year={2026}
}
We thank Yuxuan Liang, Longjiao Zhang and Bohao Wang for valuable discussions.
The implementation is based on Graph-Flashback, STHGCN, MCLP, and Diff-POI.

