causality_LLMs

We play around with different tasks revolving around causality in LLMs such as GPT-3. Our goal is to measure the quality of their causal modelling capabilities on real-world tasks, toy problems, and adversarial examples.

The final report can be found on the Alignment Forum.

To reproduce the results:

  1. Set your OpenAI key as an environment variable by typing export OPENAI_KEY="<insert_your_key_here>" in your console before running your experiments (see the sketch after this list for how a script can pick it up).
  2. (Optional) Check the playground notebooks to get a better feeling for the experiments.
  3. Run all of the experiment.py scripts to produce the results (don't forget that running experiments costs money).
  4. Run the evaluation Jupyter notebooks.
  5. (Optional) Run the analysis for report.ipynb to reproduce the exact figures of the report.
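
For orientation, here is a minimal sketch of how an experiment script can read the exported key and query GPT-3 using the pre-v1 openai Python package. The model name and prompt below are illustrative placeholders, not taken from this repository; the actual experiment.py scripts may differ.

```python
import os

import openai

# Pick up the key exported in step 1 (OPENAI_KEY, as used in this repo's setup).
openai.api_key = os.environ["OPENAI_KEY"]

# Hypothetical causal-reasoning prompt; the real experiments define their own tasks.
prompt = "A window shatters right after a ball hits it. What caused the window to shatter? Answer:"

# Completion call against a GPT-3 model (this costs money, see step 3).
response = openai.Completion.create(
    model="text-davinci-002",
    prompt=prompt,
    max_tokens=64,
    temperature=0.0,
)

print(response["choices"][0]["text"].strip())
```
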

Please note that this is just a small side project and the code has not been optimized for efficiency or readability.
