
Graph-Toolformer

Note

  • 8-bit tensor cores are not supported on CPUs. bitsandbytes requires hardware with 8-bit tensor core support, i.e., Turing-generation or newer GPUs (RTX 20/30/40 series, A40-A100, T4 and later).
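As a minimal, hedged sketch of what 8-bit loading looks like (assuming the Hugging Face transformers, accelerate, and bitsandbytes packages; this is illustrative only, not the exact loading code used by this project):

from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative 8-bit loading via bitsandbytes through transformers; it
# requires a GPU with 8-bit tensor core support, per the note above.
model_name = "EleutherAI/gpt-j-6B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    load_in_8bit=True,  # int8 quantization handled by bitsandbytes
    device_map="auto",  # place weights on the available GPU(s)
)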

[framework architecture figure]

Graph-ToolFormer: To Empower LLMs with Graph Reasoning Ability via Prompt Augmented by ChatGPT

Paper URL at IFMLab: http://www.ifmlab.org/files/paper/graph_toolformer.pdf

Paper URL at arxiv: https://arxiv.org/pdf/2304.11116.pdf

Paper description in Chinese: Chinese-language introduction to the paper

References

@article{Zhang2023GraphToolFormerTE,
  title={Graph-ToolFormer: To Empower LLMs with Graph Reasoning Ability via Prompt Augmented by ChatGPT},
  author={Jiawei Zhang},
  journal={ArXiv},
  year={2023},
  volume={abs/2304.11116}
}

Organization of the project source code, data and model checkpoints

Source code

The source code of this project is divided into two directories:

  • LLM_Tuning: the LLM fine-tuning code, together with the graph reasoning prompt datasets.
  • Graph_Toolformer_Package: the Graph-Toolformer reasoning demo code, which loads the fine-tuned LLMs (tuned with the LLM_Tuning code) and the pre-trained GNN models.

Dataset

The datasets used in this paper include both the generated graph reasoning prompt datasets and the raw graph benchmark datasets:

  • Prompt Datasets: the graph reasoning prompts created in this paper for LLM fine-tuning. The prompt datasets are already included in the LLM_Tuning directory.
  • Graph Raw Datasets: the 15 graph benchmark datasets used in this paper. The raw graph datasets (about 100 MB) should be downloaded from Google Drive.

Model checkpoints

The pre-trained/fine-tuned model checkpoints released by this project come in two parts:

  • Fine-tuned LLM Checkpoint: the checkpoint of the language model fine-tuned with the above LLM_Tuning code. Readers can either tune the LLMs themselves or use the checkpoint we provide; if you plan to use our released checkpoint, download it from Google Drive (the zip file is about 5 GB).
  • Pre-trained GNN Checkpoints: we use 5 different graph models in this project, and the checkpoints of the pre-trained GNNs are also provided. Readers can either pre-train their own graph models or use the pre-trained checkpoints released by us, which can be downloaded from Google Drive (the zip file is about 100 MB; the unzipped folder is about 5 GB).

How to play with the code?

Environment setup

First of all, as mentioned above, please set up the environment for running the code. We recommend creating the environment from the environment.yml file we share, using the following command:

conda env create -f environment.yml

For packages that cannot be installed with the above conda command, consider installing them via pip.

Play with LLM_Tuning code

After downloading the LLM_Tuning directory and installing the conda environment (see the environment setup above), you can run the following command to start fine-tuning the LLMs on the prompt datasets:

Note

1. To avoid OOM errors, please adjust the batch_size and max_length parameters according to your machine's GPU memory capacity.

2. For 8-bit models, some CPUs do not support the mixed-precision computation. We recommend running the code on a GPU rather than a CPU.

3. If you plan to use your own fine-tuned LLM checkpoint for graph reasoning, (1) replace the fine-tuned checkpoints in the downloaded Graph_Toolformer_Package/koala/language_models/gptj_8bit/local_data/finetuned_model/graph_toolformer_GPTJ directory, and (2) rename the checkpoint to graph_toolformer_GPTJ before pasting it into the koala folder, so the demo framework will load it (a copy-and-rename sketch follows the command below).

python3 gtoolformer_gptj_script.py
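A minimal copy-and-rename sketch for note 3 above, using Python's standard shutil; the source path is a placeholder for wherever your LLM_Tuning run saved the fine-tuned checkpoint, while the target path is the one named in the note:

import shutil

# Copy the fine-tuned checkpoint into the demo package under the name
# graph_toolformer_GPTJ, so the demo framework can find and load it.
src = "path/to/your/finetuned/checkpoint"  # placeholder for your LLM_Tuning output
dst = ("Graph_Toolformer_Package/koala/language_models/gptj_8bit/"
       "local_data/finetuned_model/graph_toolformer_GPTJ")
shutil.copytree(src, dst, dirs_exist_ok=True)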

For more information, you can also refer to the README.md provided in the LLM_Tuning directory.

Play with Graph_Toolformer Demo code

After downloading the Graph_Toolformer_Package directory, also downloading graph_datasets.zip, koala/graph_models.zip, and koala/language_models.zip, and installing the conda environment, you can run the following command to start the demo:

python3 ./src/Framework_Graph_Toolformer.py

You can type in inputs similar to the prompt inputs, and the model will carry out the reasoning task and return the outputs. The reasoning process calls both the LLMs and the GNN models, so generating the output will take some time. The GNN models are pre-trained according to the previous papers, and the LLM is fine-tuned with the code in the LLM_Tuning directory based on the prompt datasets.

By changing "if 0" to "if 1" in the bottom main function of Framework_Graph_Toolformer.py, you can try different reasoning tasks, as sketched below.
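A hedged sketch of that switch pattern; run_reasoning_task is a stand-in for the actual task launcher in Framework_Graph_Toolformer.py, and the exact names there may differ:

def run_reasoning_task(task_name):
    # Stand-in for the actual task launcher in Framework_Graph_Toolformer.py.
    print(f"running {task_name}")

# Flip a 0 to 1 to enable the corresponding reasoning task; only the
# enabled branches run.
if 1:
    run_reasoning_task("graph property reasoning")             # enabled
if 0:
    run_reasoning_task("bibliographic paper topic reasoning")  # disabled
if 0:
    run_reasoning_task("recommender system reasoning")         # disabled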

For more information, you can also refer to the README.md provided in the Graph_Toolformer_Package directory.



Tasks to be done

  • 🟢 Polish the framework: 7/7 done

    • 🟢 add working memory module
    • 🟢 add query parser module
    • 🟢 add query executor module
    • 🟢 add graph dataset hub
    • 🟢 add graph model hub
    • 🟢 add graph reasoning task hub
    • 🟢 add llm model hub
  • 🟢 Expand the framework: 3/3 done

    • 🟢 Include graph datasets: done
      • 🟢 graph property dataset
      • 🟢 bibliographic networks: cora, pubmed, citeseer
      • 🟢 molecular graphs: proteins, nci1, mutag, ptc
      • 🟢 social networks: twitter, foursquare
      • 🟢 recommender system: amazon, last.fm, movielens
      • 🟢 knowledge graphs: wordnet, freebase
    • 🟢 Add pre-trained graph models: done
      • 🟢 Toolx
      • 🟢 Graph-Bert
      • 🟢 SEG-Bert
      • 🟢 KMeans Clustering
      • 🟢 BPR
      • 🟢 TransE
    • 🟢 Include graph reasoning tasks: done
      • 🟢 graph property reasoning
      • 🟢 bibliographic paper topic reasoning
      • 🟢 molecular graph function reasoning
      • 🟢 social network community reasoning
      • 🟢 recommender system reasoning
      • 🟢 knowledge graph reasoning
  • 🟢 Polish and release the datasets: 4/4 released

    • 🟢 graph raw data
    • 🟢 graph reasoning prompt data
    • 🟢 pre-trained graph model checkpoints
    • 🟢 fine-tuned llm model checkpoints
  • 🟠 Add and test more LLMs: 1/5 done

    • 🟢 GPT-J
    • LLaMA
    • GPT-2
    • OPT
    • Bloom
  • 🟠 Release the framework and service: 0/5 done

    • 🟠 Implement the CLI for framework usage
    • 🟠 Provide the demo for graph reasoning
    • 🟠 Add API for customized graph reasoning
    • 🟠 Release GUI and web access/service
