dtim-upc/THOR

Repository files navigation

Text Homogenization from Oblivion to Reality (THOR)

Codebase for the ICDE 2024 paper:

"Mitigating Data Sparsity in Integrated Data through Text Conceptualization"


How to Run THOR:

  1. Open the THOR_Conceptualization.ipynb notebook and click the Open in Colab button.
  2. Run the notebook from Runtime -> Run All.
  3. The notebook automatically downloads our Disease A-Z dataset from the repository.
  4. At the bottom of the notebook, in the main function, you will be asked which EVALUATION split you want to use, or whether you only want to run INFERENCING.
    • EVALUATION: This runs the evaluation for the selected split and saves the results (two Excel files) in the "output" folder.
      • By default the threshold is set to T=0.80 (80%). To change it, edit this line in the main function:
        matcher = initiate_matcher(patterns=accu_data, threshold=80)
    • INFERENCING: To run inference on your own text:
      • Upload ONE text (.txt) document per run containing disease- and condition-related information from your device by clicking the Choose Files button.
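To give an intuition for what the 80% threshold means, here is a minimal sketch of threshold-based fuzzy string matching using only the standard library. This is an illustration, not the notebook's actual `initiate_matcher` implementation; the function name `fuzzy_match` is our own.

```python
# Illustrative only: shows what a similarity threshold of 80% means in
# principle. The THOR notebook's initiate_matcher may use a different
# similarity measure internally.
from difflib import SequenceMatcher

def fuzzy_match(candidate: str, pattern: str, threshold: int = 80) -> bool:
    """Return True if the two strings are at least `threshold` percent similar."""
    score = SequenceMatcher(None, candidate.lower(), pattern.lower()).ratio() * 100
    return score >= threshold

# With T=0.80, near-identical surface forms map to the same concept:
print(fuzzy_match("diabetes mellitus", "diabetes melitus"))   # True
print(fuzzy_match("diabetes mellitus", "hypertension"))       # False
```

Raising `threshold` toward 100 keeps only near-exact matches; lowering it conceptualizes more (and noisier) variants.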

How to Run Baseline:

How to Run LM-SD/LM-Human:

  • The LM-SD.ipynb and LM-Human.ipynb notebooks NEED a GPU in order to run.
  • Colab Free offers only a limited GPU option; we therefore assume you have access to either Colab Pro or a local GPU (at least 6 GB of VRAM).
  • Please follow the instructions inside the model_config folder.

How to Run UniversalNER:

  • The UniversalNER.ipynb also requires a large GPU (minimum 40 GB VRAM) and at least 32 GB of system RAM.
  • You need to UPLOAD the test data Masked_Text_Only_Test.json into the same directory as the code.
    • To run in Colab, upload it to the local cache directory: '/content/Masked_Text_Only_Test.json'
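A quick way to sanity-check the upload before launching inference is sketched below. The helper name `load_test_split` and the default path are our own assumptions, not part of the repository.

```python
# Minimal sketch: verify the test split is present and parseable before
# running UniversalNER. Assumes the Colab default upload location.
import json
from pathlib import Path

def load_test_split(path: str = "/content/Masked_Text_Only_Test.json"):
    """Load the masked test split, failing loudly if it was not uploaded yet."""
    p = Path(path)
    if not p.exists():
        raise FileNotFoundError(f"Upload {p.name} next to the notebook first")
    with p.open(encoding="utf-8") as f:
        return json.load(f)
```

Calling it once at the top of the notebook surfaces a missing or corrupt upload immediately instead of partway through a long GPU run.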

GPT-4:

  • Please follow the instructions in the GPT-4 folder to reproduce this experiment.

Generalizability Experiment:

NOTE: Running on Colab may take up to 3x the inference time due to Colab's slow I/O bandwidth.


EXPERIMENTAL RESULTS:

  • You can find all the evaluation scores (.xlsx) in the Results folder.
  • We follow the evaluation scheme proposed in SemEval 2013 for the entity recognition task (9.1).
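For readers unfamiliar with that scheme, the sketch below illustrates its strict criterion (span boundaries and entity type must both match exactly); the SemEval 2013 scheme also defines looser criteria, which this sketch does not cover. The function name and tuple encoding are our own, not taken from the Results files.

```python
# Illustrative sketch of strict-match scoring: entities are encoded as
# (start, end, type) tuples, and a prediction counts only if all three
# fields agree exactly with a gold entity.
def strict_scores(gold, pred):
    """Precision, recall, and F1 under strict (exact boundary + type) matching."""
    gold_set, pred_set = set(gold), set(pred)
    tp = len(gold_set & pred_set)
    precision = tp / len(pred_set) if pred_set else 0.0
    recall = tp / len(gold_set) if gold_set else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

gold = [(0, 8, "Disease"), (15, 22, "Disease")]
pred = [(0, 8, "Disease"), (30, 35, "Disease")]
print(strict_scores(gold, pred))  # (0.5, 0.5, 0.5)
```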