
GPT4 cutoff date is September 2021 - how did this impact evals? #50

Closed
qrdlgit opened this issue Jun 13, 2023 · 8 comments
Labels
apibench-data APIBench data

Comments

@qrdlgit

qrdlgit commented Jun 13, 2023

Any new API info would not be in GPT4 training.

How much impact do you think this has with respect to relative performance between GPT4 and Gorilla?

Did you do any eval on APIs that existed prior to 09/21 versus those after?

I reviewed the paper but could not find any discussion on this. https://arxiv.org/abs/2305.15334

To be clear, I am not saying this invalidates the ideas, which I think were a fantastic contribution to OS LLMs, but rather that it would be good to understand the precise reason for the superior performance.

qrdlgit added the apibench-data (APIBench data) label Jun 13, 2023
@fritzprix

@qrdlgit It's basically based on a RAG (retrieval-augmented generation) approach, so the APIs don't need to be contained within the training set.
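
For anyone unfamiliar, a minimal sketch of the retrieval-augmented pattern being described (hypothetical retriever and doc store, not Gorilla's actual pipeline): the relevant API documentation is fetched and prepended to the prompt at inference time, so the model does not have to have memorized the API.

```python
# Minimal RAG sketch (hypothetical names, not Gorilla's code). The point: the API doc
# is retrieved at inference time, so it need not appear in the model's pre-training data.

def retrieve_api_doc(query: str, doc_store: dict[str, str]) -> str:
    """Toy retriever: return the doc sharing the most words with the query."""
    query_words = set(query.lower().split())
    return max(doc_store.values(), key=lambda doc: len(query_words & set(doc.lower().split())))

def build_prompt(query: str, doc_store: dict[str, str]) -> str:
    """Prepend the retrieved API documentation to the user task."""
    api_doc = retrieve_api_doc(query, doc_store)
    return f"Use the following API documentation.\n### API doc:\n{api_doc}\n### Task:\n{query}\n"

doc_store = {
    "xclip": "microsoft/xclip-base-patch16-zero-shot: zero-shot video classification pipeline",
    "detr": "facebook/detr-resnet-50: object detection model",
}
print(build_prompt("classify a video with no labeled training data", doc_store))
```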

@qrdlgit
Author

qrdlgit commented Jun 14, 2023

Yes, but the paper claims superior performance to GPT4.

I have consistently found that GPT4 hallucinates less on data it has been trained on. When you add vector retrieval, it does an even better job.

For APIs that were added after the cutoff date, it wouldn't be surprising that GPT4 hallucinations would increase.

This might explain why Gorilla can outperform GPT4.

This is not a complaint. The Gorilla paper was really great and has lots of fantastic ideas.

I didn't see any discussion of this in the paper. If there was and I missed it, please let me know. I just want to understand.

If you compare performance between Gorilla and GPT4 on APIs that were added after the cutoff date versus ones that came before, what would it look like?

@fritzprix

They used APIs that have been quite stable for a while, and I believe not much has changed since the GPT4 pre-training cut-off, so the benchmark seems fair enough to me.

@qrdlgit
Author

qrdlgit commented Jun 15, 2023

Don't take this personally, but I'm not sure you are familiar with these details.

e.g., from https://github.com/ShishirPatil/gorilla/blob/main/data/apibench/huggingface_train.json

I found microsoft/xclip-base-patch16-zero-shot, which had its initial commit within the last 9 months.
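
One way to spot-check this kind of thing (a sketch assuming huggingface_hub's list_repo_commits API; the earliest commit date approximates when the model first appeared on the Hub):

```python
# Spot-check when a Hub model first appeared (assumes huggingface_hub is installed
# and exposes list_repo_commits; the earliest commit approximates the release date).
from huggingface_hub import HfApi

def initial_commit_date(repo_id: str):
    """Return the creation date of the oldest commit in the model repo."""
    commits = HfApi().list_repo_commits(repo_id)
    return min(c.created_at for c in commits)

print(initial_commit_date("microsoft/xclip-base-patch16-zero-shot"))
```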

@tianjunz
Collaborator

@qrdlgit Thank you for your comments! One thing we need to clarify:
We don't require GPT-4 to output exactly the same API here; as long as the API in GPT-4's output has the same functionality, we count it as correct. See the evaluation script here: https://github.com/ShishirPatil/gorilla/blob/main/eval/eval-scripts/ast_eval_hf.py. This has been consistent from the very beginning.
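
The gist of that AST-based check is something like the following (an illustrative sketch, not the actual ast_eval_hf.py logic): parse the generated code, extract the API call, and count it correct if it matches any reference API judged functionally equivalent.

```python
# Illustrative sketch of AST-based matching (not the actual ast_eval_hf.py script).
import ast

def extract_call_names(code: str) -> set[str]:
    """Collect dotted call names from the code, e.g. 'pipeline' or 'XCLIPModel.from_pretrained'."""
    names = set()
    for node in ast.walk(ast.parse(code)):
        if isinstance(node, ast.Call):
            names.add(ast.unparse(node.func))
    return names

def is_correct(generated_code: str, equivalent_apis: set[str]) -> bool:
    """Correct if any call in the generated code is among the functionally equivalent APIs."""
    try:
        return bool(extract_call_names(generated_code) & equivalent_apis)
    except SyntaxError:
        return False

# Hypothetical example: either call would be accepted for a zero-shot video task.
equivalent = {"pipeline", "XCLIPModel.from_pretrained"}
print(is_correct("clf = pipeline('zero-shot-video-classification')", equivalent))  # True
```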

@qrdlgit
Author

qrdlgit commented Jun 15, 2023

That answers the question, but not in the way you probably intended, i.e., the evals were not done with API dates in mind.

Again, Gorilla is still a great idea and paper. A lot of good takeaways, for sure.

However, in the future you probably want to be more careful about data leakage / data contamination issues. This is a problem I'm seeing in a lot of papers coming out recently.

One thing you might want to try is evaluating post-cutoff APIs alone. The lack of fine-tuning capability on GPT4 and its cutoff date is a significant Achilles heel, at least for the moment.

If the performance is even more SOTA, that would be a great example of how using OS LLMs can be superior for certain use cases. GPT4 really is an (obsolete) jack of all trades, master of none.

@ShishirPatil
Owner

Thank you for your question and insightful discussion @qrdlgit and @fritzprix! When it comes to the issue of data contamination, we are completely aligned. We have been cautious to ensure that Gorilla doesn't encounter any of the test set data during its training phase. However, we are unable to provide any comment on the training/test data for models that are closed-source.

Your point about splitting APIs before and after 09/2021 is well taken. As @fritzprix pointed out, we would ideally like to believe that an oracle retriever can address the issue concerning the cut-off date as effectively as possible.

To validate this hypothesis, you can conduct a straightforward experiment - given that our training and evaluation datasets are open-sourced, it should be relatively simple to filter out APIs published post-09/2021 and run the comparison. If you do end up doing it, please feel free to share the results. We would certainly appreciate such a contribution!
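
For reference, the suggested split-and-score experiment could be sketched like this (hypothetical field names and file layout; the actual APIBench JSON schema, and where each API's release date comes from, should be checked before relying on it):

```python
# Sketch of the suggested pre/post-cutoff split (hypothetical 'api_release_date' field;
# real APIBench entries may need dates looked up separately, e.g. from the Hub).
import json
from datetime import datetime

CUTOFF = datetime(2021, 9, 30)

def split_by_cutoff(path: str, date_key: str = "api_release_date"):
    """Partition eval examples into APIs released before vs. after the GPT-4 cutoff."""
    with open(path) as f:
        examples = [json.loads(line) for line in f if line.strip()]  # assumes JSON-lines
    pre, post = [], []
    for ex in examples:
        released = datetime.fromisoformat(ex[date_key])
        (pre if released <= CUTOFF else post).append(ex)
    return pre, post

def accuracy(examples, is_correct) -> float:
    """is_correct: a per-example checker, e.g. the AST-based matching sketched above."""
    return sum(is_correct(ex) for ex in examples) / max(len(examples), 1)

# pre, post = split_by_cutoff("huggingface_eval.json")
# print("pre-cutoff accuracy:", accuracy(pre, my_checker))
# print("post-cutoff accuracy:", accuracy(post, my_checker))
```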

@qrdlgit
Author

qrdlgit commented Jun 17, 2023

Heh! This is my contribution I'm afraid. g'luck.

qrdlgit closed this as completed Jun 17, 2023