
IronMan+alpha: Graph Neural Network and Reinforcement Learning in High-Level Synthesis

Related Papers

IronMan (GLSVLSI 2021) and IronMan-Pro (IEEE TCAD 2022); BibTeX entries for both papers are provided in the Contact section below.

The proposed IronMan-Pro is a learning-based framework composed of CT (Code Transformer), GPP (GNN-based Performance Predictor), and RLMD (RL-based Multi-objective DSE). During training, IronMan-Pro takes HLS C/C++ code and IRs as inputs and uses the actual RTL performance (e.g., resource usage and timing) as the ground truth to train GPP and RLMD. During inference, the trained GPP provides graph embeddings and performance predictions to RLMD; the trained RLMD either finds optimized directives that satisfy user-specified design constraints (such as available resources) or generates Pareto solutions with various trade-offs between different resource types. A high-level sketch of this inference flow is given below.
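
The following Python sketch illustrates how the trained components could interact at inference time; the class names GPP and RLMD and their methods (initial_directives, embed, predict, satisfies, refine) are placeholders for illustration, not the repository's actual interfaces.

    # Hypothetical inference loop: the trained GPP scores candidate directives,
    # and the trained RLMD refines them until the user constraints are met.
    def explore_design(dfg, constraints, gpp, rlmd, max_steps=100):
        directives = rlmd.initial_directives(dfg)           # starting directive assignment
        for _ in range(max_steps):
            embedding = gpp.embed(dfg, directives)          # graph embedding from the GNN
            predicted = gpp.predict(dfg, directives)        # predicted DSP / LUT / CP timing
            if predicted.satisfies(constraints):            # e.g., fits the available resources
                return directives
            directives = rlmd.refine(embedding, predicted)  # RL agent updates the directives
        return directives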

Prerequisites

  • Python 3.6.x with TensorFlow 2.x
  • StellarGraph >= 1.2.0
  • Vitis HLS
  • Vivado

Dataset Generation

GPP: GNN-based Performance Predictor

  • Three GNN-based models predict DSP usage, LUT usage, and critical path (CP) timing (a minimal sketch of one such regressor follows this list).
  • Once GPP is well trained, move the proxy models and the embedding models to RLMD.
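
As a rough illustration, the sketch below builds one of the three regressors (e.g., the LUT predictor) with StellarGraph and TensorFlow, assuming the HLS data-flow graphs have already been converted into StellarGraph objects with per-node feature vectors; the names graphs and lut_labels are placeholders, not files in this repository.

    import numpy as np
    import tensorflow as tf
    from stellargraph.mapper import PaddedGraphGenerator
    from stellargraph.layer import GCNSupervisedGraphClassification

    def build_gpp_regressor(graphs, labels):
        # One generator batches the variable-sized data-flow graphs with padding.
        generator = PaddedGraphGenerator(graphs=graphs)

        # Two GCN layers followed by mean pooling yield a graph-level embedding;
        # a dense head regresses a single scalar (LUT count, DSP count, or CP timing).
        gcn = GCNSupervisedGraphClassification(
            layer_sizes=[64, 64],
            activations=["relu", "relu"],
            generator=generator,
            dropout=0.2,
        )
        x_inp, x_out = gcn.in_out_tensors()
        prediction = tf.keras.layers.Dense(1)(x_out)

        model = tf.keras.Model(inputs=x_inp, outputs=prediction)
        model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse", metrics=["mae"])

        train_flow = generator.flow(list(range(len(graphs))),
                                    targets=np.asarray(labels), batch_size=16)
        return model, train_flow

    # Example usage: model, flow = build_gpp_regressor(graphs, lut_labels); model.fit(flow, epochs=100)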

RLMD: Reinforcement Learning based Multi-Objective Design Space Exploration

  • Two RL methods are included in RLMD: actor-critic and policy gradient (a minimal policy-gradient sketch follows this list).
  • hls_env.py: the RL environment.
  • target_tuples.py: shuffles the tuples used for RLMD training.
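
The sketch below shows a REINFORCE-style policy-gradient loop in TensorFlow, assuming a Gym-like interface for the environment (reset() returning a state vector, step(action) returning (next_state, reward, done), with rewards derived from GPP predictions); this interface is an assumption for illustration, not the actual hls_env.py API.

    import numpy as np
    import tensorflow as tf

    def train_policy_gradient(env, state_dim, num_actions, episodes=500, gamma=0.99):
        # Small MLP policy over directive choices (e.g., pragma settings per node).
        policy = tf.keras.Sequential([
            tf.keras.layers.Dense(64, activation="relu", input_shape=(state_dim,)),
            tf.keras.layers.Dense(num_actions, activation="softmax"),
        ])
        optimizer = tf.keras.optimizers.Adam(1e-3)

        for _ in range(episodes):
            states, actions, rewards = [], [], []
            state, done = env.reset(), False
            while not done:                               # roll out one episode
                probs = policy(np.asarray([state], dtype=np.float32)).numpy()[0]
                action = int(np.random.choice(num_actions, p=probs))
                state_next, reward, done = env.step(action)
                states.append(state)
                actions.append(action)
                rewards.append(reward)
                state = state_next

            # Discounted returns, normalized for training stability.
            returns, g = np.zeros(len(rewards), dtype=np.float32), 0.0
            for t in reversed(range(len(rewards))):
                g = rewards[t] + gamma * g
                returns[t] = g
            returns = (returns - returns.mean()) / (returns.std() + 1e-8)

            with tf.GradientTape() as tape:
                probs = policy(np.asarray(states, dtype=np.float32))
                acts = tf.constant(actions, dtype=tf.int32)
                idx = tf.stack([tf.range(tf.shape(acts)[0]), acts], axis=1)
                log_probs = tf.math.log(tf.gather_nd(probs, idx) + 1e-8)
                loss = -tf.reduce_mean(log_probs * returns)   # REINFORCE objective
            grads = tape.gradient(loss, policy.trainable_variables)
            optimizer.apply_gradients(zip(grads, policy.trainable_variables))
        return policy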

Meta-heuristics

  • We include simulated annealing (SA), genetic algorithm (GA), particle swarm optimization (PSO), and ant colony optimization (ACO).
  • Note that these meta-heuristics also rely on GPP for performance prediction (see the simulated-annealing sketch after this list).
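
As an example of how such a baseline can be wired to GPP, the sketch below shows simulated annealing with a GNN-based cost function; predict_cost and random_neighbor are hypothetical helpers (e.g., wrapping the trained GPP models and directive mutations), not code from this repository.

    import math
    import random

    def simulated_annealing(init_solution, predict_cost, random_neighbor,
                            t_start=1.0, t_end=1e-3, alpha=0.95, steps_per_temp=20):
        current = best = init_solution
        current_cost = best_cost = predict_cost(current)     # cost from the GNN proxy, not a real HLS run
        t = t_start
        while t > t_end:
            for _ in range(steps_per_temp):
                candidate = random_neighbor(current)         # e.g., change one directive choice
                cost = predict_cost(candidate)
                # Always accept improvements; accept worse solutions with Boltzmann probability.
                if cost < current_cost or random.random() < math.exp((current_cost - cost) / t):
                    current, current_cost = candidate, cost
                    if cost < best_cost:
                        best, best_cost = candidate, cost
            t *= alpha                                       # geometric cooling schedule
        return best, best_cost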

Data Post-processing

  • data_postprocess.ipynb: converts generated solutions to JSON files.
  • Get_Perf.py: invokes Vitis HLS and Vivado to obtain post-implementation resource usage and timing, given the JSON files (solutions) and .cc files (designs in C++); a sketch of this flow follows the list.
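
A rough sketch of how such a script might drive Vitis HLS from Python is given below; the Tcl script name run_hls.tcl, the environment-variable hand-off, and the JSON schema are assumptions for illustration, not the actual Get_Perf.py implementation.

    import json
    import os
    import subprocess

    def run_hls_for_solution(solution_json, design_cc, tcl_script="run_hls.tcl"):
        # Load the directive solution produced during post-processing,
        # e.g. {"loop_1": {"unroll": 4}, ...} (schema assumed for illustration).
        with open(solution_json) as f:
            directives = json.load(f)

        # Hand the design and solution paths to a Tcl script via environment variables;
        # the Tcl side is expected to create the project, apply the directives, and run
        # synthesis/implementation to produce post-implementation resource and timing reports.
        env = dict(os.environ, DESIGN_CC=design_cc, SOLUTION_JSON=solution_json)
        subprocess.run(["vitis_hls", "-f", tcl_script], check=True, env=env)
        return directives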

Contact

  • If you have any questions, please email nanwu@ucsb.edu.
  • If you find IronMan useful, please cite our paper:
    @inproceedings{wu2021ironman,
      title={IronMan: GNN-assisted Design Space Exploration in High-Level Synthesis via Reinforcement Learning},
      author={Wu, Nan and Xie, Yuan and Hao, Cong},
      booktitle={Proceedings of the 2021 on Great Lakes Symposium on VLSI},
      pages={39--44},
      year={2021},
      organization={IEEE}
    }

    @ARTICLE{wu2022ironman,
      title={IronMan-Pro: Multi-objective Design Space Exploration in HLS via Reinforcement Learning and Graph Neural Network based Modeling},
      author={Wu, Nan and Xie, Yuan and Hao, Cong},
      journal={IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems},
      year={2022},
      doi={10.1109/TCAD.2022.3185540}
    }
