Logging/Saving Settings and Instructions for Inference Jobs #1646
Comments
A lot of information is stored in the save directory in history.json for every inference job. It has everything you mentioned, though perhaps not in as much detail -- e.g. no input files, just whether there are input files. In addition, vLLM or TGI logging can also be adjusted.
I can't seem to find history.json under /save; is there perhaps a run option that I'm missing?
Great! It works now, though I'm missing the docs/chunks used in the job. I really think that needs to be added.
Hi, the API call and history.json do contain `save_dict['sources']` as the list of sources used.
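As a minimal sketch of how the `sources` list could be read back from a saved history.json: the `save_dict['sources']` key comes from the comment above, but the surrounding file schema (a JSON list of entries) and the file location are assumptions for illustration.

```python
import json
from pathlib import Path

def load_job_sources(history_path):
    """Return the sources recorded under save_dict['sources'] for each entry.
    The schema around the 'sources' key is illustrative, not the exact format."""
    entries = json.loads(Path(history_path).read_text())
    return [entry.get("save_dict", {}).get("sources", []) for entry in entries]

# Example: write a tiny history file and read the sources back.
sample = [{"save_dict": {"sources": ["doc1.pdf", "doc2.txt"]}}]
Path("history.json").write_text(json.dumps(sample))
print(load_job_sources("history.json"))  # [['doc1.pdf', 'doc2.txt']]
```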
I see, thanks!
There needs to be a way to log or save all settings and instructions provided for every inference job that the vLLM inference server receives. This would be useful for debugging purposes, as it would allow us to track and analyze the input data and configurations used for each job.
Proposed solution:
Implement a logging mechanism that captures and stores the settings and instructions provided for each inference job.
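The proposed mechanism could be sketched as follows. This is an illustrative standalone sketch, not the project's actual implementation: the log directory, record fields, and function name are all hypothetical.

```python
import json
import time
import uuid
from pathlib import Path

LOG_DIR = Path("inference_logs")  # hypothetical location, not an existing setting

def log_inference_job(instructions, settings, input_files):
    """Write one JSON record per inference job: the instructions/prompt,
    the generation settings, and the input files used. Field names are
    illustrative only."""
    LOG_DIR.mkdir(exist_ok=True)
    record = {
        "job_id": uuid.uuid4().hex,
        "timestamp": time.time(),
        "instructions": instructions,
        "settings": settings,
        "input_files": input_files,
    }
    path = LOG_DIR / f"{record['job_id']}.json"
    path.write_text(json.dumps(record, indent=2))
    return path

# Usage: record a job before sending it to the inference server.
p = log_inference_job(
    "Summarize the attached report.",
    {"temperature": 0.2, "max_tokens": 512},
    ["report.pdf"],
)
```

Writing one file per job keeps records append-only and easy to inspect for debugging, which is the stated goal above.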