Stream model deployer logs through CLI #557
Conversation
Found one tiny type annotation that can be improved (but I had to look for a long time ;)). Looks great otherwise!
AlexejPenner left a comment
Looks good to me! :)
schustmi left a comment
LGTM!
Describe changes
Implements a `zenml served-models logs` CLI command that streams the log contents of model servers through the CLI, for easy access to the back-end logs. The command supports "following" the remote logs, "tailing" the last X lines of logs, and also pretty-prints everything in the logs by default, so the result is an improved version of the raw remote logs. Large log files are also supported through the use of IO buffering, HTTP streaming and Python generators.
Example output: (screenshot omitted)
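The implementation itself is not shown in this excerpt, but as a rough illustration of how IO buffering, HTTP streaming and Python generators can combine to stream a large log file with constant memory use, here is a minimal sketch; the endpoint URL, function name and parameters are hypothetical and not taken from the PR:

```python
# Hypothetical sketch (not the actual ZenML implementation): stream a
# remote log file through an HTTP response and a Python generator, so
# that large files never have to be loaded into memory at once.
from collections import deque
from typing import Generator, Optional

import requests


def stream_logs(
    url: str, tail: Optional[int] = None
) -> Generator[str, None, None]:
    """Yield log lines from `url` one at a time.

    `stream=True` tells requests to keep the response body on the
    socket, and `iter_lines` reads it in buffered chunks, so memory
    use stays constant regardless of the size of the log file.
    """
    with requests.get(url, stream=True) as response:
        response.raise_for_status()
        lines = response.iter_lines(decode_unicode=True)
        if tail is not None:
            # Tail mode still reads the whole stream, but only the
            # last `tail` lines are kept in a bounded buffer.
            for line in deque(lines, maxlen=tail):
                yield line
        else:
            yield from lines


# Hypothetical usage: print the last 100 lines of a remote log.
for line in stream_logs("http://localhost:8000/logs", tail=100):
    print(line)
```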
Log streaming now works not only with the Seldon and MLflow model servers, but also with
every local daemon service. The base Service class now specifies this abstract method:
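The method itself is not reproduced in this excerpt. As a hedged sketch only, it could look roughly like this, assuming a `get_logs` name and `follow`/`tail` parameters that mirror the CLI options described above:

```python
# Hedged sketch of the abstract method described above; the name and
# signature are assumptions modeled on the CLI's follow/tail options,
# as the excerpt does not include the actual code.
from abc import ABC, abstractmethod
from typing import Generator, Optional


class BaseService(ABC):
    @abstractmethod
    def get_logs(
        self, follow: bool = False, tail: Optional[int] = None
    ) -> Generator[str, None, None]:
        """Retrieve the log messages for this service.

        Args:
            follow: if True, keep streaming new log lines as they
                are written.
            tail: if set, only return the last `tail` lines.

        Yields:
            The log lines, one at a time.
        """
```

Each concrete service (Seldon, MLflow, local daemon) would then implement this method against its own log source, which is what lets a single CLI command stream logs from all of them.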
TODO:
Pre-requisites
Please ensure you have done the following:
Types of changes