Related Papers
- [GLSVLSI'21] IronMan: GNN-assisted Design Space Exploration in High-Level Synthesis via Reinforcement Learning
- [TCAD'22] IronMan-Pro: Multi-objective Design Space Exploration in HLS via Reinforcement Learning and Graph Neural Network based Modeling
Dependencies
- TensorFlow 2.x
- StellarGraph >= 1.2.0
- Vitis HLS
- Vivado
Synthetic DFG Generation
- Synthetic DFG generation (a minimal illustrative sketch follows this list).
- After the DFGs are generated, move them to DATASET.
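The repository's own DFG generator lives in these scripts; the snippet below is only an illustrative sketch, under the assumption that a synthetic DFG is a random DAG of multiply/add nodes. All names here are hypothetical and not taken from the repo.

```python
import random

def random_dfg(num_nodes=10, edge_prob=0.3, seed=0):
    """Generate a toy dataflow graph: a random DAG of 'mul'/'add' nodes.

    Edges only run from a lower-indexed node to a higher-indexed one,
    which guarantees acyclicity.
    """
    rng = random.Random(seed)
    ops = {i: rng.choice(["mul", "add"]) for i in range(num_nodes)}
    edges = [(u, v)
             for u in range(num_nodes)
             for v in range(u + 1, num_nodes)
             if rng.random() < edge_prob]
    return ops, edges

if __name__ == "__main__":
    ops, edges = random_dfg()
    print(ops)    # {0: 'mul', 1: 'add', ...}
    print(edges)  # [(0, 2), (1, 3), ...]
```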
GPP: GNN-based Performance Predictors
- data_preprocess.py and generate_graph_datasets.py produce the necessary dataset files for GPP training.
- Three GNN-based models predict DSP usage, LUT usage, and critical path (CP) timing.
- Once GPP is trained, move the proxy models and the embedding models to RLMD (a minimal GNN-regression sketch follows this list).
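The actual GPP architectures and datasets are defined by the scripts above; the following is only a minimal sketch of graph-level regression with StellarGraph's Keras API. The toy graph, the 64-unit layer sizes, and the target value are illustrative assumptions, not the repository's settings.

```python
import numpy as np
import pandas as pd
from tensorflow.keras import Model, optimizers, losses
from tensorflow.keras.layers import Dense
from stellargraph import StellarGraph
from stellargraph.mapper import PaddedGraphGenerator
from stellargraph.layer import GCNSupervisedGraphClassification

# Toy DFG: node features are one-hot op types; edges are data dependencies.
nodes = pd.DataFrame({"mul": [1, 0, 0], "add": [0, 1, 1]}, index=["n0", "n1", "n2"])
edges = pd.DataFrame({"source": ["n0", "n1"], "target": ["n1", "n2"]})
graphs = [StellarGraph(nodes=nodes, edges=edges)]
targets = np.array([42.0])  # made-up label, e.g. DSP usage for this graph

generator = PaddedGraphGenerator(graphs=graphs)

# Two GCN layers followed by a dense head for scalar regression (DSP/LUT/CP).
gcn = GCNSupervisedGraphClassification(
    layer_sizes=[64, 64], activations=["relu", "relu"], generator=generator,
)
x_inp, x_out = gcn.in_out_tensors()
prediction = Dense(1)(x_out)

model = Model(inputs=x_inp, outputs=prediction)
model.compile(optimizer=optimizers.Adam(0.001), loss=losses.MeanSquaredError())

train_flow = generator.flow(list(range(len(graphs))), targets=targets, batch_size=1)
model.fit(train_flow, epochs=2, verbose=0)
```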
RLMD: RL-based Multi-objective Design Space Exploration
- Two RL methods are included in RLMD: actor-critic and policy gradient (a minimal policy-gradient sketch follows this list).
- hls_env.py: the RL environment.
- target_tuples.py: shuffles the target tuples for RLMD training.
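hls_env.py defines the actual environment and the RLMD agents live in the repository; the snippet below only sketches a vanilla policy-gradient (REINFORCE) update in TensorFlow against a hypothetical gym-style environment whose step() returns (next_state, reward, done). The state/action sizes and the network shape are assumptions.

```python
import numpy as np
import tensorflow as tf

# Hypothetical sizes; the real state/action spaces come from hls_env.py.
STATE_DIM, NUM_ACTIONS = 16, 8

policy = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(STATE_DIM,)),
    tf.keras.layers.Dense(NUM_ACTIONS, activation="softmax"),
])
optimizer = tf.keras.optimizers.Adam(1e-3)

def run_episode(env, max_steps=50):
    """Roll out one episode, recording states, actions, and rewards."""
    states, actions, rewards = [], [], []
    state = env.reset()
    for _ in range(max_steps):
        probs = policy(np.asarray(state, dtype=np.float32)[None, :]).numpy()[0]
        action = np.random.choice(NUM_ACTIONS, p=probs)
        next_state, reward, done = env.step(action)  # assumed gym-style interface
        states.append(state); actions.append(action); rewards.append(reward)
        state = next_state
        if done:
            break
    return states, actions, rewards

def reinforce_update(states, actions, rewards, gamma=0.99):
    """One REINFORCE step: maximize log pi(a|s) weighted by the discounted return."""
    returns, g = [], 0.0
    for r in reversed(rewards):          # discounted returns, computed backwards
        g = r + gamma * g
        returns.append(g)
    returns = tf.constant(list(reversed(returns)), dtype=tf.float32)
    states = tf.constant(np.asarray(states, dtype=np.float32))
    actions = tf.constant(actions, dtype=tf.int32)
    with tf.GradientTape() as tape:
        probs = policy(states)
        idx = tf.stack([tf.range(tf.shape(actions)[0]), actions], axis=1)
        log_probs = tf.math.log(tf.gather_nd(probs, idx) + 1e-8)
        loss = -tf.reduce_mean(log_probs * returns)
    grads = tape.gradient(loss, policy.trainable_variables)
    optimizer.apply_gradients(zip(grads, policy.trainable_variables))
```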
Meta-heuristics
- We include simulated annealing (SA), genetic algorithm (GA), particle swarm optimization (PSO), and ant colony optimization (ACO).
- Note that these meta-heuristics also rely on GPP as the performance proxy (see the simulated-annealing sketch after this list).
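To illustrate how a meta-heuristic can be driven by GPP predictions, here is a minimal, generic simulated-annealing loop; the cost function, neighbor move, and solution encoding are hypothetical placeholders, not the repository's implementation. In practice, cost_fn would call the trained GNN proxies to predict DSP/LUT/CP for a candidate solution and combine them into one objective.

```python
import math
import random

def simulated_annealing(init_solution, cost_fn, neighbor_fn,
                        t_start=1.0, t_end=1e-3, alpha=0.95, iters_per_t=20):
    """Generic SA loop; cost_fn would wrap the trained GPP proxy models."""
    current = best = init_solution
    current_cost = best_cost = cost_fn(current)
    t = t_start
    while t > t_end:
        for _ in range(iters_per_t):
            cand = neighbor_fn(current)            # small random perturbation
            cand_cost = cost_fn(cand)
            delta = cand_cost - current_cost
            # Accept improvements always, worse moves with Boltzmann probability.
            if delta < 0 or random.random() < math.exp(-delta / t):
                current, current_cost = cand, cand_cost
                if current_cost < best_cost:
                    best, best_cost = current, current_cost
        t *= alpha                                  # geometric cooling schedule
    return best, best_cost
```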
Evaluation
- data_postprocess.ipynb: converts the generated solutions to JSON files.
- Get_Perf.py: invokes Vitis HLS and Vivado to obtain post-implementation resource usage and timing, given the JSON files (solutions) and .cc files (designs in C++); a minimal invocation sketch follows this list.
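Get_Perf.py's actual flow is not reproduced here; the snippet below only sketches how a Python script can batch-run the two tools from the command line. The Tcl script names are placeholders, and the Tcl contents (project setup, csynth_design, implementation, report extraction) are left out.

```python
import subprocess

def run_hls_and_impl(hls_tcl="run_hls.tcl", impl_tcl="run_impl.tcl"):
    """Run Vitis HLS synthesis, then Vivado implementation, in batch mode."""
    # Vitis HLS executes the given Tcl script (open project, csynth, export RTL, ...).
    subprocess.run(["vitis_hls", "-f", hls_tcl], check=True)
    # Vivado runs in batch mode and sources the implementation/reporting script.
    subprocess.run(["vivado", "-mode", "batch", "-source", impl_tcl], check=True)

if __name__ == "__main__":
    run_hls_and_impl()
```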
Contact
- If you have any questions, please email nanwu@ucsb.edu.
Citation
- If you find IronMan useful, please cite our papers:
@inproceedings{wu2021ironman,
  title={IronMan: GNN-assisted Design Space Exploration in High-Level Synthesis via Reinforcement Learning},
  author={Wu, Nan and Xie, Yuan and Hao, Cong},
  booktitle={Proceedings of the 2021 on Great Lakes Symposium on VLSI},
  pages={39--44},
  year={2021},
  organization={IEEE}
}
@article{wu2022ironman,
  title={IronMan-Pro: Multi-objective Design Space Exploration in HLS via Reinforcement Learning and Graph Neural Network based Modeling},
  author={Wu, Nan and Xie, Yuan and Hao, Cong},
  journal={IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems},
  year={2022},
  doi={10.1109/TCAD.2022.3185540}
}