- Experimenting GPT capability on Laboratory Automation.
- The core of the repository is the conversational text data stored in `/question_and_answer/*.txt`.
- Then, with `evaluate.ipynb`, you can create plots & tables.
- If you would like to add more conversational data, try running `call_multiple_model.py`, and the new conversations will be saved at `/question_and_answer/*.txt`.
- `config.py` - Prompts are defined here.
- `call_multiple_model.py` - Open and edit `prompt_ver='v1'` as you like, then run `python3 call_multiple_model.py`; it will save the new conversations with GPT to `/question_and_answer/*.txt`.
- `evaluate.ipynb` - Run the cells in the notebook and you will see figures and tables. The notebook runs the Python scripts extracted from the ChatGPT conversations with `opentrons_simulate`, stores the results in a pandas DataFrame, and plots the error count (or success) as a bar plot (see the sketch after this list for the general idea). Each conversation has a unique `uuid()` so that conversations can be linked later.
- `utils.py` - Utility functions for evaluating conversations with GPT using `opentrons_simulate`, etc.
- `/question_and_answer/*.txt` - Raw conversational text between user and GPT, separated by `prompt:*************************` and `answer:*************************` markers.
- `/question_and_answer/tmp/*.py` - Python scripts extracted from ChatGPT's answers.
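
The snippet below is a minimal sketch of how one conversation file could be parsed and its extracted script checked with `opentrons_simulate`. The separator strings and file locations follow the descriptions above; the helper name `evaluate_conversation`, the assumption that answers embed their protocol in a fenced ```` ```python ```` block, and the exact return fields are illustrative, not the repository's actual implementation (see `utils.py` for that).

```python
import re
import subprocess
from pathlib import Path

# Separators as described for /question_and_answer/*.txt above.
PROMPT_SEP = "prompt:*************************"
ANSWER_SEP = "answer:*************************"

def evaluate_conversation(txt_path: Path, tmp_dir: Path) -> dict:
    """Illustrative only: parse one conversation file, pull the Python code out of
    the answer, and run it through `opentrons_simulate` to record success/error."""
    text = txt_path.read_text()
    prompt_part, _, answer_part = text.partition(ANSWER_SEP)
    prompt = prompt_part.replace(PROMPT_SEP, "").strip()

    # Assumption: the answer embeds its protocol in a ```python ... ``` fenced block.
    match = re.search(r"```(?:python)?\n(.*?)```", answer_part, re.DOTALL)
    script = match.group(1) if match else answer_part

    # tmp_dir is assumed to be /question_and_answer/tmp/ and to already exist.
    script_path = tmp_dir / (txt_path.stem + ".py")
    script_path.write_text(script)

    # `opentrons_simulate <protocol.py>` is the Opentrons CLI simulator;
    # a non-zero return code is treated here as an error.
    result = subprocess.run(
        ["opentrons_simulate", str(script_path)],
        capture_output=True, text=True,
    )
    return {
        "file": txt_path.name,
        "prompt": prompt,
        "success": result.returncode == 0,
        "error": result.stderr.strip() if result.returncode != 0 else "",
    }
```

Collecting one such dictionary per conversation file into a `pandas.DataFrame` gives roughly the kind of table the notebook builds.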
- Open `evaluate.ipynb` and run the cells from top to bottom.
- A pandas dataframe `df_eval` will be populated with the prompts, answers, and the Python error (or success); a small sketch of how such a dataframe might be analyzed follows this list.
- You can use the dataframe to analyze and visualize the results.
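
As an illustration, error/success counts from such a dataframe could be plotted along these lines. The column names `prompt_ver`, `success`, and `error` are assumptions for this sketch; the actual `df_eval` schema is defined in the notebook.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical rows; in the notebook, df_eval is built from the simulated conversations.
df_eval = pd.DataFrame([
    {"prompt_ver": "v1", "success": True,  "error": ""},
    {"prompt_ver": "v1", "success": False, "error": "LabwareDefinitionError"},
    {"prompt_ver": "v2", "success": True,  "error": ""},
])

# Count successes and failures per prompt version and show them as a bar plot,
# similar in spirit to the figures produced by evaluate.ipynb.
counts = df_eval.groupby(["prompt_ver", "success"]).size().unstack(fill_value=0)
counts.plot(kind="bar")
plt.ylabel("number of conversations")
plt.tight_layout()
plt.show()
```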
- Add your OpenAI `API_SECRET` to `config.py`.
- Install dependencies by running `pip install -r requirements.txt`.
- Add your prompts to the dictionary `PROMPT_LIST` in `config.py` (a sketch of what this might look like follows this list).
- Edit the `main` function in `call_multiple_models.py` to specify the number of API calls (`n_calls`) and the prompt to use.
- Run `python3 call_multiple_models.py` and wait for the process to finish. It might take from a few minutes to a few hours, depending on `n_calls`.
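
For orientation, `config.py` might look roughly like the sketch below. Only the names `API_SECRET`, `PROMPT_LIST`, and `prompt_ver` come from the steps above; the dictionary keys and the prompt wording are made up for illustration.

```python
# config.py (illustrative sketch; the actual file in the repository will differ)

# OpenAI API key used when calling the model. Keep this out of version control.
API_SECRET = "sk-..."

# One entry per prompt version; prompt_ver (e.g. 'v1') selects which prompt is
# sent to GPT. The wording here is purely illustrative.
PROMPT_LIST = {
    "v1": "Write an Opentrons Python protocol that transfers 100 uL from well A1 to B1.",
    "v2": "Write an Opentrons Python protocol that serially dilutes a sample across a 96-well plate.",
}
```

Each run then repeats the chosen prompt for the specified number of API calls and, as described above, saves one conversation per call under `/question_and_answer/*.txt`.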