
MARL4TS

This is the implementation of our paper "Error-Bounded Online Trajectory Simplification with Multi-agent Reinforcement Learning" (KDD 2021).

Requirements

  • Linux Ubuntu OS (16.04 is tested)
  • Python >= 3.5 (Anaconda3 is recommended and 3.6 is tested)
  • TensorFlow & Keras (TensorFlow 1.8.0 and Keras 2.2.0 are tested)

Please follow the recommended environment; we noticed that a GPU (with TensorFlow 2) may not handle the model training/testing. Please refer to the source code for any required packages that are not yet installed in your environment, such as matplotlib for visualization. You can install these packages with conda in a shell, e.g.,

conda install matplotlib

Dataset & Preprocessing

Download and unzip the Geolife dataset and put its folder into ./TrajData. Note that the input data generated by preprocess.py will also be stored in this folder.

python preprocess.py
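
For orientation, here is a minimal sketch of reading one Geolife .plt file into (lat, lon, t) tuples. This is an illustrative reader based on the public Geolife file format, not the repo's code; preprocess.py is the authoritative script.

# Illustrative sketch only; preprocess.py is the authoritative reader.
# Geolife .plt files carry 6 header lines, then records of the form:
# latitude, longitude, 0, altitude, days since 1899-12-30, date, time
def read_plt(path):
    points = []
    with open(path) as f:
        for line in list(f)[6:]:
            fields = line.strip().split(',')
            points.append((float(fields[0]), float(fields[1]), float(fields[4])))
    return points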

Running Procedures

Hyperparameters

There are several hyperparameters in rl_brain_multi_agent_with_constraint.py; you may tune these parameters for better performance during training, including units = 23 or 25 (for DeepQNetwork_OW or DeepQNetwork_CW, respectively), activation = tf.nn.tanh, learning_rate = 0.01, epsilon_decay = 0.99, and discount_rate = 0.99. A hypothetical tuning grid is sketched below.
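
As a starting point, one hypothetical search grid over these settings; the alternative values beyond those listed above are guesses, and the actual constants must be edited in rl_brain_multi_agent_with_constraint.py itself.

import tensorflow as tf

# Hypothetical tuning grid; edit the corresponding constants in
# rl_brain_multi_agent_with_constraint.py rather than passing this dict anywhere.
search_space = {
    'units': [23, 25],             # 23 for DeepQNetwork_OW, 25 for DeepQNetwork_CW
    'activation': [tf.nn.tanh],
    'learning_rate': [0.01, 0.005],
    'epsilon_decay': [0.99, 0.995],
    'discount_rate': [0.99],
}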

Training

Run rl_index_run_multi_agent_with_constraint.py. The generated models will be stored in the folder ./save automatically, and you can pick the model with the best performance on the validation data from them. You can set several parameters for training, including the skipping steps (J), the number of selected points in a window (k), the delay mechanism (D), and the constraint mechanism (C); they all provide a trade-off between effectiveness and efficiency. A configuration sketch is given after the command below.

python rl_index_run_multi_agent_with_constraint.py
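
For concreteness, a hypothetical way these knobs could be set; the variable names here are illustrative assumptions, and the actual definitions live in rl_index_run_multi_agent_with_constraint.py.

# Hypothetical sketch only: the real variables are defined inside
# rl_index_run_multi_agent_with_constraint.py and may be named differently.
J = 2        # skipping steps: larger J skips more points per decision
k = 5        # number of selected points in a window
D = True     # enable the delay mechanism
C = True     # enable the constraint mechanism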

Here, we provide an interface load(checkpoint) with which you can load an intermediate model and continue training from that checkpoint; this saves the effort of retraining from scratch after an unexpected exception. After your model is trained, we provide a fast interface called fast_online_act(state), which replaces the DL tool's inference call and implements the NN forward pass more efficiently. A minimal usage sketch follows.
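
A minimal usage sketch of these two interfaces; here 'agent' stands for the DQN object built in the training script, and the checkpoint path is illustrative (both are assumptions).

# Sketch only: 'agent' denotes the DQN object created by the training
# script, and the checkpoint path below is illustrative.
agent.load('./save/checkpoint')        # resume training from an intermediate model
action = agent.fast_online_act(state)  # fast forward pass, bypassing the DL tool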

Error Measurements

We implemented four mainstream error measurements of trajectory simplification in data_utils.py, including SED (sed_op, sed_error), PED (ped_op, ped_error), DAD (dad_op, dad_error), and SAD (speed_op, speed_error), where '_op' denotes the error on an anchor segment and '_error' denotes the error between the original trajectory and its simplified trajectory. More details can be found in the paper. The default error measurement is SED; if you want to test other measurements, simply replace the name in rl_states_drop_points_multi_agent_with_constraint.py.
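
For reference, a minimal sketch of the SED error on one anchor segment, an illustrative re-implementation rather than the repo's sed_op, assuming points are (x, y, t) triples: the synchronized position of an intermediate point is interpolated along the segment according to its timestamp, and the error is its Euclidean distance to that position.

import math

# Illustrative SED on an anchor segment (ps, pe) for an intermediate point p.
# Each point is (x, y, t); see data_utils.py for the repo's actual sed_op.
def sed(ps, pe, p):
    if pe[2] == ps[2]:                       # degenerate segment: same timestamp
        return math.hypot(p[0] - ps[0], p[1] - ps[1])
    r = (p[2] - ps[2]) / (pe[2] - ps[2])     # temporal ratio along the segment
    sx = ps[0] + r * (pe[0] - ps[0])         # synchronized x
    sy = ps[1] + r * (pe[1] - ps[1])         # synchronized y
    return math.hypot(p[0] - sx, p[1] - sy)  # distance to the synchronized point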

Visualization

We provide an interface data_utils.draw(ori_traj, sim_traj, label='sed') to visualize the simplified trajectory as vis.pdf. You can use it to observe the model's performance during training, or comment it out in the code if you do not need it. Note that this part relies on matplotlib in Python 3.6.

F.draw(self.ori_traj_set[episode], sim_traj)
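
If you prefer a standalone plot outside the provided interface, a minimal matplotlib sketch along the same lines, assuming trajectories are sequences of (x, y, ...) points; this is not the repo's draw.

import matplotlib.pyplot as plt

# Minimal sketch: overlay a simplified trajectory on the original one.
def draw(ori_traj, sim_traj, label='sed', out='vis.pdf'):
    plt.plot([p[0] for p in ori_traj], [p[1] for p in ori_traj],
             'b-', linewidth=0.8, label='original')
    plt.plot([p[0] for p in sim_traj], [p[1] for p in sim_traj],
             'r--o', markersize=3, label='simplified (' + label + ')')
    plt.legend()
    plt.savefig(out)   # e.g., vis.pdf
    plt.close()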

Evaluation

You can directly run rl_evaluate.py once you have obtained the trained model.

python rl_evaluate.py
