Add prediction logging to AWS Lambda deployment #790
Conversation
* make sagemaker docker image have same file structure
* use consistent file names
* move operator code out of `__init__` to avoid loading unused code in model server startup
* refactor deployment validator
* reorganize bento repository code
* deployment validator test & linting error fix
* more repository code cleanup
* renaming and adding inline comments
* move out lambda operator code to separate file
Hi @jackyzha0, thanks for the PR! Looks like the AWS Lambda related unit test is failing now; you can find more details by clicking the Travis CI link below: https://travis-ci.org/github/bentoml/BentoML/jobs/697719556 You can also run that test locally and see if you can reproduce the error:
```diff
-    return bento_service_api.handle_aws_lambda_event(event)
+    print(f'Got prediction request with body "{event["body"]}"')
+    prediction = bento_service_api.handle_aws_lambda_event(event)
+    print(
```
This is great.
We can take one step further and make this into JSON format with the input and output. Users can stream the logs out to ElasticSearch for searching and/or use the result for training later on.
You can check out the current prediction log format in the BentoAPIServer for reference:

`BentoML/bentoml/server/bento_api_server.py`, line 293 in a134e3a:

```python
request_log = {
```
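A structured prediction log along the lines the reviewer describes might look like the sketch below. This is illustrative only: the field names `request_id`, `request`, and `response` are assumptions, not BentoML's actual `request_log` schema, and `log_prediction` is a hypothetical helper.

```python
import json
import logging
import uuid

logger = logging.getLogger('bentoml.lambda_app')


def log_prediction(event_body, prediction):
    # Emit one JSON document per prediction so downstream tooling
    # (e.g. an ElasticSearch pipeline reading CloudWatch logs) can
    # index the input and output fields directly.
    record = {
        'request_id': str(uuid.uuid4()),  # illustrative correlation id
        'request': event_body,
        'response': prediction,
    }
    logger.info(json.dumps(record))
    return record
```

Because the record is a single JSON object per line, CloudWatch Logs Insights and similar tools can parse it without any custom grok patterns.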
Hello @jackyzha0, Thanks for updating this PR. There are currently no PEP 8 issues detected in this PR. Cheers! 🍻 Comment last updated at 2020-06-16 00:39:35 UTC
Hi @jackyzha0, looks like CI failed on the code formatting check, you will need to run
```diff
@@ -45,11 +49,11 @@
     if not os.path.exists(bento_bundle_path):
         bento_bundle_path = os.path.join('/tmp/requirements', bento_name)

-    print(f'Loading BentoService bundle from path: "{bento_bundle_path}"')
+    logger.info('Loading BentoService bundle from path: "%s"', bento_bundle_path)
```
How about changing all these to `logger.debug`, and only keeping the prediction log at `logger.info`?
```python
import logging

logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
```
After `import bentoml` below, the logging format and level might get reset, so you probably don't need this `setLevel` here. Could you move `logger = logging.getLogger(__name__)` to after the `from bentoml import load` line, and keep the logging messages in `download_extra_resources` just using `print`?

It is a bit tricky here because the `bentoml` package may not be available to import before the `download_extra_resources` call, as the package file might be stored in S3 and needs to be downloaded first.
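The ordering constraint described above can be sketched roughly as follows. This is not the real `lambda_app.py`: the `download_extra_resources` body is a placeholder, and the actual `from bentoml import load` line is shown commented out so the sketch stands alone.

```python
import logging
import os


def download_extra_resources():
    # Placeholder for the real downloader: it fetches the BentoML
    # package and bundle files (possibly from S3) into /tmp.  Because
    # bentoml may not be importable yet at this point, it can only
    # log with plain print().
    print('Downloading extra resources to /tmp/requirements ...')
    return True


# 1. Download first -- the bentoml package itself may live in the bundle.
downloaded = download_extra_resources()

# 2. Only now is it safe to import bentoml and create the module logger.
os.environ['BENTOML_HOME'] = '/tmp/bentoml/'
# from bentoml import load  # noqa  (the real code imports here)
logger = logging.getLogger(__name__)
```

The key point is simply that any logging done before step 2 cannot rely on BentoML's logging configuration existing yet.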
```diff
@@ -38,18 +40,21 @@
 os.environ['BENTOML_HOME'] = '/tmp/bentoml/'
 from bentoml import load  # noqa

+logger = logging.getLogger(__name__)
+logger.setLevel(logging.DEBUG)
```
I think we can remove this and hide debug logs by default; those logs are usually only useful for BentoML developers/contributors debugging the feature, not so much for BentoML users. For BentoML developers, we can always enable debug logging with a simple environment variable change: `BENTOML__LOGGING__LOGGING_LEVEL=debug`
What is the default logging level for BentoML? Is it INFO? When I don't explicitly set the logging level to DEBUG or INFO, my `logger.info()` statements don't show up.
The default is INFO: https://github.com/bentoml/BentoML/blob/master/bentoml/configuration/default_bentoml.cfg#L30

I just realized this will not be using the logger configured for the BentoML module, because `lambda_app.py` will be copied to a separate directory when deployed to AWS Lambda. To use BentoML's logging config here, you will need to use something like `logger = logging.getLogger('bentoml.lambda_app')`
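The naming trick suggested here relies on the standard library's logger hierarchy: a logger named `bentoml.lambda_app` inherits handlers and effective level from the `bentoml` parent logger, regardless of which file it is created in. A minimal sketch (the INFO level set on the parent stands in for whatever BentoML's real logging setup configures):

```python
import logging

# Stand-in for BentoML's own logging setup: configure the parent
# 'bentoml' logger (illustrative configuration, not BentoML's real one).
parent = logging.getLogger('bentoml')
parent.setLevel(logging.INFO)

# A child named under the 'bentoml.' prefix picks up that config,
# even though lambda_app.py lives outside the bentoml package.
logger = logging.getLogger('bentoml.lambda_app')

assert logger.parent is parent
assert logger.getEffectiveLevel() == logging.INFO
```

By contrast, `logging.getLogger(__name__)` in the copied `lambda_app.py` would resolve to a logger named `lambda_app` (or `__main__`), which sits outside the `bentoml` hierarchy and sees none of its configuration.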
Codecov Report
```
@@            Coverage Diff             @@
##           master     #790      +/-   ##
==========================================
+ Coverage   55.75%   55.89%   +0.14%
==========================================
  Files         114      114
  Lines        8346     8400      +54
==========================================
+ Hits         4653     4695      +42
- Misses       3693     3705      +12
```

Continue to review full report at Codecov.
```python
logger.info('Got prediction request with body "%s"', event["body"])
prediction = bento_service_api.handle_aws_lambda_event(event)

logger.debug(
```
Following the comment about logging level above, I think here it should be updated to `logger.info`, and all others can be set to either `logger.debug` or `logger.error`; that way the user can get a very clean and usable prediction log.
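The level split proposed here, with only the prediction record at INFO and everything else at DEBUG or ERROR, can be sketched as below. The handler setup is purely for illustration; in the deployed Lambda the output would go to CloudWatch instead of an in-memory stream.

```python
import io
import logging

logger = logging.getLogger('bentoml.lambda_app.demo')
logger.setLevel(logging.INFO)  # default level: DEBUG noise is filtered out
stream = io.StringIO()
logger.addHandler(logging.StreamHandler(stream))

logger.debug('Loading BentoService bundle ...')        # hidden at INFO
logger.info('{"request": "...", "response": "..."}')   # the prediction log
logger.error('prediction failed')                      # errors still surface

output = stream.getvalue()
```

With this split, a user tailing the function's logs sees one line per prediction plus any errors, and nothing else.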
Would line 76 and line 90 be `logger.debug` then?
Actually, it's redundant information, so I'll just remove those lines.
Thanks for updating the PR @jackyzha0, I ran the Lambda end-to-end tests on my end and verified the prediction JSON logs showed up on AWS CloudWatch, everything looks great! Merging now!
* Repository and Deployment refactor and cleanup (bentoml#771)
* make sagemaker docker image have same file structure
* use consistent file names
* move operator code out of `__init__` to avoid loading unused code in model server startup
* refactor deployment validator
* reorganize bento repository code
* deployment validator test & linting error fix
* more repository code cleanup
* renaming and adding inline comments
* move out lambda operator code to separate file
* fix some typos + adding logging to lambda callback
* fix some linting problems
* fix mock not returning proper form
* add trailing comma
* add better logging + docs
* whitespace fixes
* move logging and consolidate debug line
* update docs + fixed logging
* fix linting issues
* remove redundant logging and switch to bento logger
* update docs

Co-authored-by: Chaoyu <paranoyang@gmail.com>
Co-authored-by: cory <cory.massaro@gmail.com>
Description
Motivation and Context
How Has This Been Tested?
Types of changes
Components (if applicable)
Checklist:
`./dev/format.sh` and `./dev/lint.sh` scripts have passed (instructions).