
Error in Serverless deployment with AWS Lambda #273

Closed
ji-clara opened this issue Aug 28, 2019 · 18 comments
Labels
bug Something isn't working

Comments

@ji-clara

Describe the bug
While running:
!bentoml deploy ./model --platform aws-lambda --region us-west-2

this error appears:
[2019-08-27 17:25:40,854] INFO - Using user AWS region: us-west-2
[2019-08-27 17:25:40,855] INFO - Using AWS stage: dev
Encounter error when deploying to aws-lambda
Error: 'Service Information' is not in list

To Reproduce
sudo npm install serverless@1.49.0 --global (also tried serverless@1.50.0; it did not work for the example below either.)

  1. Go to BentoML/examples/deploy-with-serverless/deploy-with-serverless.ipynb
  2. Run all the cells until !bentoml deploy ./model --platform aws-lambda --region us-west-2
  3. See error

Expected behavior
Successful deployment

Environment:

  • MacOS 10.13.6
  • serverless 1.49.0
  • Python 3.7.3
  • BentoML 0.3.4
  • ipython 7.6.1
@ji-clara ji-clara added the bug Something isn't working label Aug 28, 2019
@yubozhao
Contributor

Hi @ji-clara

Thank you for reporting the issue and including detailed steps for reproduction!

From looking at the issue, I am fairly certain the cause is my over-processing of the serverless output. However, I am unable to reproduce it from your steps.

The error you see happens after we make the deployment request to Lambda and parse the returned response. If we parse the response incorrectly, it throws this error even when the deployment to AWS Lambda succeeded.

Can you log into your AWS console and navigate to the Lambda page to check whether the deployed function is up and running? That will help me figure out what went wrong on your system.

Thanks!
Bo

@ji-clara
Author

Hi @yubozhao,

Thanks for your help! It was not deployed successfully; the function does not appear on the AWS Lambda page.

Thanks,
Ji

@yubozhao
Contributor

@ji-clara

I am sorry the deployment wasn't successful. Since you attempted to deploy it, BentoML keeps a record of it and a snapshot in your local file system. You can find it at ~/bentoml/deployment-snapshots/aws-lambda/{Service name}/{Service version}/{Date-time}

Once you navigate there, you should see the generated files for AWS Lambda and a directory named after your service. Inside that directory, you can run the serverless deploy command to deploy directly to AWS Lambda. If you could post the logs from that command, that would really help me understand where the process went wrong.
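For reference, a rough sketch of those steps (the placeholders are the same ones as in the snapshot path above; substitute your actual service name, version, and timestamp):

# placeholders as in the snapshot path above; the inner directory carries your service name
cd ~/bentoml/deployment-snapshots/aws-lambda/{Service name}/{Service version}/{Date-time}/{Service name}
serverless deploy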

Thank you so much for reporting this and helping to improve the project.

Bo

@ji-clara
Author

@yubozhao

I ran it as you suggested above. Please see the error message below and let me know if you need anything else. Thanks.

Error --------------------------------------------------

STDOUT:

STDERR: ERROR: Double requirement given: bentoml==v0.3.4 (from -r /var/task/requirements.txt (line 2)) (already in bentoml (from -r /var/task/requirements.txt (line 1)), name='bentoml')

@yubozhao
Contributor

@ji-clara I see where the problem is. Our example notebook has not been kept up to date as the project improved.

In the example notebook directory, remove the bentoml line from requirements.txt and everything should work. If you can run it again and let me know what you end up with, that would be great.

I will update the example notebook to make sure this doesn't happen anymore.
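To illustrate, a hypothetical requirements.txt that triggers pip's "Double requirement given" error above, and the shape of the fix (your file's exact contents may differ):

# before: bentoml is listed twice, once unpinned and once pinned
bentoml
bentoml==v0.3.4

# after: keep a single bentoml entry
bentoml==v0.3.4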

@ji-clara
Author

@yubozhao

The same error occurred after I removed bentoml from requirements.txt in the notebook directory. But after I removed bentoml from requirements.txt in ~/bentoml/deployment-snapshots/aws-lambda/{Service name}/{Service version}/{Date-time}, I was able to use serverless deploy and deploy the model successfully.

So updating the example notebook may not be sufficient.

I look forward to a solution on your end. Thanks again for your help!

@yubozhao
Contributor

Good job @ji-clara! Glad you figured out a solution.

This issue was caused by our example notebooks not being updated to keep up with the project's improvements. We are going to be more vigilant about this and will go through the examples to bring them up to date.

I am going to close this issue for now. Feel free to reopen it.

@ji-clara
Author

ji-clara commented Sep 3, 2019

@yubozhao Although I deployed the model using serverless deploy, I couldn't get all the way through. Continuing to run deploy-with-serverless.ipynb, I got an API Gateway error. So I would prefer to reopen this ticket until deploy-with-serverless.ipynb is fixed so that it runs end to end. Thanks!

@yubozhao
Contributor

yubozhao commented Sep 3, 2019

@ji-clara Sounds good, let's reopen it and continue the discussion here.

Could you provide me the logs of the API Gateway error? I am struggling to reproduce the issue with your suggested method. I will work with your logs for now while trying to reproduce it on my system.

Thank you for following up!

@yubozhao yubozhao reopened this Sep 3, 2019
@yubozhao yubozhao self-assigned this Sep 3, 2019
@ji-clara
Author

ji-clara commented Sep 4, 2019

@yubozhao I'll reproduce it later.

I think there might be a simple way to resolve this. The call
!bentoml deploy ./model --platform aws-lambda --region us-west-2 generates a requirements.txt file, which contains bentoml. I guess that not generating bentoml in requirements.txt might resolve the issue.

Please note that even if bentoml is removed from requirements.txt in the example folder, it will still be generated during the deploy call.

Just a thought.

@yubozhao
Contributor

yubozhao commented Sep 4, 2019

Adding bentoml to requirements.txt is expected behavior. We need BentoML to load the service and its artifacts.

I am very curious about the API Gateway error you mentioned. Previously we needed to include an additional serverless plugin to allow images to come through in base64 format for AWS Lambda, since serverless did not support that natively. That might be one possible cause of an API Gateway error.

I think since serverless version 1.45.0, or a version close to that, there is official support for API Gateway and its media types. I want to come back, look into this part, and improve the experience.
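For reference, the plugin-based workaround usually looked roughly like the serverless.yml sketch below. This is only an illustration: serverless-apigw-binary is one commonly used plugin for this, and the exact keys can differ between plugin versions.

# serverless.yml (sketch): let API Gateway pass binary payloads such as base64-encoded images
plugins:
  - serverless-apigw-binary

custom:
  apigwBinary:
    types:
      - 'image/jpeg'
      - 'application/octet-stream'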

@ji-clara
Author

@yubozhao
Sorry for the delay. I can no longer run serverless deploy successfully, so I couldn't reproduce the earlier error. The new error log is below. I hope we are not in a loop. Just curious: are you able to run the deploy-with-serverless example successfully? Thanks.

Serverless: Adding Python requirements helper...
Serverless: Generated requirements from /Users/jili/bentoml/deployment-snapshots/aws-lambda/SentimentLRModel/2019_08_30_43ba2696/2019-08-30T11:53:15.855362/requirements.txt in /Users/jili/bentoml/deployment-snapshots/aws-lambda/SentimentLRModel/2019_08_30_43ba2696/2019-08-30T11:53:15.855362/.serverless/requirements.txt...
Serverless: Using static cache of requirements found at /Users/jili/Library/Caches/serverless-python-requirements/d0f91a2aa7a8154072d8e504a6cb33fff7a32c883a75cf27cb8fdddb467dffd6_slspyc ...
Serverless: Zipping required Python packages...
Serverless: Packaging service...
Serverless: Excluding development dependencies...
Serverless: Removing Python requirements helper...
Serverless: Injecting required Python packages to package...
Serverless: Uploading CloudFormation file to S3...
Serverless: Uploading artifacts...
Serverless: Uploading service SentimentLRModel.zip file to S3 (71.99 MB)...
Serverless: Validating template...
Serverless: Updating Stack...
Serverless: Checking Stack update progress...
....
Serverless: Operation failed!
Serverless: View the full error output: https://us-west-2.console.aws.amazon.com/cloudformation/home?region=us-west-2#/stack/detail?stackId=arn%3Aaws%3Acloudformation%3Aus-west-2%3A837815079154%3Astack%2FSentimentLRModel-dev%2F07f16820-cb59-11e9-be7b-065114511ac0
 
  Serverless Error ---------------------------------------
 
  An error occurred: PredictLambdaFunction - Function not found: arn:aws:lambda:us-west-2:837815079154:function:SentimentLRModel-dev-predict (Service: AWSLambdaInternal; Status Code: 404; Error Code: ResourceNotFoundException; Request ID: 7024a9c7-ad1a-4aed-b413-e2e58a2c5f93).
 
  Get Support --------------------------------------------
     Docs:          docs.serverless.com
     Bugs:          github.com/serverless/serverless/issues
     Issues:        forum.serverless.com
 
  Your Environment Information ---------------------------
     Operating System:          darwin
     Node Version:              10.15.3
     Serverless Version:        1.49.0
     Enterprise Plugin Version: 1.3.10
     Platform SDK Version:      2.1.0

@ji-clara
Author

Hi @yubozhao,

Thanks for your recent updates.

I ended up running
!bentoml deployment create sentiment-serverless --bento {bento_tag} --platform aws-lambda --region us-west-2 and got the first error message below.
After I switched to apply, I received the second error message at the bottom.

Do you have any idea how to proceed?

Thanks!

  1. !bentoml deployment create sentiment-serverless --bento {bento_tag} --platform aws-lambda --region us-west-2
    Traceback (most recent call last):
      File "/deploy-with-serverless/.serverless_env/bin/bentoml", line 12, in <module>
        sys.exit(cli())
      File "/deploy-with-serverless/.serverless_env/lib/python3.7/site-packages/click/core.py", line 764, in __call__
        return self.main(*args, **kwargs)
      File "/deploy-with-serverless/.serverless_env/lib/python3.7/site-packages/click/core.py", line 717, in main
        rv = self.invoke(ctx)
      File "/deploy-with-serverless/.serverless_env/lib/python3.7/site-packages/click/core.py", line 1137, in invoke
        return _process_result(sub_ctx.command.invoke(sub_ctx))
      File "/deploy-with-serverless/.serverless_env/lib/python3.7/site-packages/click/core.py", line 1137, in invoke
        return _process_result(sub_ctx.command.invoke(sub_ctx))
      File "/deploy-with-serverless/.serverless_env/lib/python3.7/site-packages/click/core.py", line 956, in invoke
        return ctx.invoke(self.callback, **ctx.params)
      File "/deploy-with-serverless/.serverless_env/lib/python3.7/site-packages/click/core.py", line 555, in invoke
        return callback(*args, **kwargs)
      File "/deploy-with-serverless/.serverless_env/lib/python3.7/site-packages/bentoml/cli/deployment.py", line 194, in create
        ' instead'.format(name=name)
    bentoml.exceptions.BentoMLDeploymentException: Deployment sentiment-serverless already existed, please use update or apply command instead

  2. !bentoml deployment apply sentiment-serverless
    Traceback (most recent call last):
      File "/deploy-with-serverless/.serverless_env/bin/bentoml", line 12, in <module>
        sys.exit(cli())
      File "/deploy-with-serverless/.serverless_env/lib/python3.7/site-packages/click/core.py", line 764, in __call__
        return self.main(*args, **kwargs)
      File "/deploy-with-serverless/.serverless_env/lib/python3.7/site-packages/click/core.py", line 717, in main
        rv = self.invoke(ctx)
      File "/deploy-with-serverless/.serverless_env/lib/python3.7/site-packages/click/core.py", line 1137, in invoke
        return _process_result(sub_ctx.command.invoke(sub_ctx))
      File "/deploy-with-serverless/.serverless_env/lib/python3.7/site-packages/click/core.py", line 1135, in invoke
        sub_ctx = cmd.make_context(cmd_name, args, parent=ctx)
      File "/deploy-with-serverless/.serverless_env/lib/python3.7/site-packages/click/core.py", line 641, in make_context
        self.parse_args(ctx, args)
      File "/deploy-with-serverless/.serverless_env/lib/python3.7/site-packages/click/core.py", line 940, in parse_args
        value, args = param.handle_parse_result(ctx, opts, args)
      File "/deploy-with-serverless/.serverless_env/lib/python3.7/site-packages/click/core.py", line 1477, in handle_parse_result
        self.callback, ctx, self, value)
      File "/deploy-with-serverless/.serverless_env/lib/python3.7/site-packages/click/core.py", line 96, in invoke_param_callback
        return callback(ctx, param, value)
      File "/deploy-with-serverless/.serverless_env/lib/python3.7/site-packages/bentoml/cli/click_utils.py", line 131, in parse_yaml_file_callback
        yml_content = value.read()
    AttributeError: 'NoneType' object has no attribute 'read'

@yubozhao
Contributor

@ji-clara Thank you for continuing to work with me on this!

Let's clean up your current Yatai server (the BentoML deployment server) first.
These commands are very useful to me when I am debugging and building services.

bentoml deployment delete DEPLOYMENT_NAME --force will remove the deployment record regardless of any errors from the cloud provider. With this command, you can create the deployment again without changing the deployment name.

Then you can use the bentoml deployment create command without running into the error you reported in the first item.

For the bentoml deployment apply command, it is my bad that I didn't include a more helpful message, and I should also mark the --file option as required. You will need to construct a YAML file like this:

name: DEPLOYMENT_NAME
spec:
  bento_name: BENTO_NAME
  bento_version: BENTO_VERSION
  operator: aws-lambda
  aws_lambda_operator_config:
    region: us-west-2
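Assuming the --file option lands as described, the apply call would then look roughly like this (deployment.yaml is just whatever name you save the file above under):

# save the YAML above as deployment.yaml, then:
bentoml deployment apply --file deployment.yaml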

For now, I think the best course forward for you is:

  1. Use the delete command to remove the deployment record.
  2. Log into the AWS console and:
    1. remove the serverless deployment bucket, and
    2. delete the CloudFormation stack for the deployment inside CloudFormation (see the sketch after this list).

These steps will ensure you have a 'clean' environment to start again.
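A rough sketch of that cleanup with the AWS CLI, in case you prefer it over the console (the bucket name is a placeholder; the stack name SentimentLRModel-dev is taken from your earlier log and may differ for you):

# substitute your actual serverless deployment bucket and stack name
aws s3 rb s3://<your-serverless-deployment-bucket> --force
aws cloudformation delete-stack --stack-name SentimentLRModel-dev --region us-west-2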

Now you can run bentoml deployment create ... again.

@ji-clara
Author

@yubozhao Sure.

After running !bentoml deployment delete sentiment-serverless --force, I got
Error: no such option: --force.
Note: I pulled the latest repo.

@yubozhao
Contributor

Oh man, my bad. I totally forgot that the PR hasn't been merged yet.

If you only have one deployment record, you can go into ~/bentoml and delete storage.db. It acts similarly to delete --force.
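In other words, roughly (assuming the default BentoML home directory, and that you have no other deployment records you want to keep):

# removes the local deployment record database
rm ~/bentoml/storage.db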

@ji-clara
Author

@yubozhao

Everything finally works now. It may help others if the instructions you gave me were added to the repo. Feel free to close this. Thank you for helping all the way through!

@yubozhao
Contributor

@ji-clara Awesome! I am glad it worked for you.

Yes, you are right that we need to document all of these instructions. That's a top priority for us after the current release of the deployment server.
