Failure of bento serve in production with AnyIO error #2271
Comments
Is this a duplicate of #2271? Could you try it again with
The setup is similar to #2270, which was indeed solved by running
Issue solved by downgrading bentoml to v1.0.0a2
Can you try to install bentoml with
Thanks @aarnphm, but it does not work. The only solution was to downgrade bentoml.
After sending multiple curl requests, I see the following error when running the bento service in production mode:
Basically, only the first request works fine. Starting with the second request I get the asyncio errors above. The curl request is successful, but I suspect the errors could affect the performance of the service. Any idea why these errors happen? I'm using 1.0.0a2.
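For anyone trying to reproduce, a minimal sketch of the kind of repeated requests described above, assuming the quickstart's `/classify` endpoint on the default port 3000 (endpoint name, port, and payload shape are assumptions, not taken from this thread):

```shell
# Send several consecutive requests to the running bento service;
# in the report above, the error appears from the second request on.
for i in 1 2 3; do
  curl -s -X POST http://127.0.0.1:3000/classify \
       -H 'Content-Type: application/json' \
       -d '[[5.1, 3.5, 1.4, 0.2]]'
  echo
done
```

With the bug present, each request after the first should still return a prediction while the server log prints the asyncio/AnyIO traceback.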
I believe that the new
When upgrading to
@andreea-anghel could you share your service/runner definition code?
@andreea-anghel are you running the example from the gallery project? I can't seem to reproduce it on my end. Could you share the detailed error log? |
Here is the service definition:
and here is the client:
When I run the client I get the expected predictions. However, on the server side I see errors.
This is how I start the server:
This error does not show up in the
If you cannot reproduce it, could you please share your env details (the list of Python packages and their versions)? Thanks!
Everything works on my end with 1.0.0a4, on both conda 3.9.7 and pyenv 3.9.7. Also works with
Can you send the version of anyio:
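A quick way to report an installed distribution's version, as requested above, is a stdlib-only sketch (the `anyio` name here is just the distribution being asked about; any installed package name works):

```python
from importlib.metadata import version, PackageNotFoundError
from typing import Optional

def dist_version(name: str) -> Optional[str]:
    """Return the installed version of a distribution, or None if absent."""
    try:
        return version(name)
    except PackageNotFoundError:
        return None

# Example: report the anyio version (prints None if anyio is not installed).
print("anyio:", dist_version("anyio"))
```

This avoids importing the package itself, so it works even when the import is broken.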
Thanks for checking @aarnphm, and sorry for my late reply. I'm using
can you try with the latest releases? |
@andreea-anghel I tested your code on a clean Ubuntu 20 machine in AWS with the exact same Python and anyio versions. I did not see the issue with the latest build or with a4. How long do you have to wait before that error log message appears? What environment are you running this in? You can definitely send your pip dependencies over, but maybe it's something about the way you're running it.
@timliubentoml thank you for looking into this issue. The problem shows up very quickly, after the second HTTP request is received. Here is a list of the Python packages installed in my environment: pythonenv.txt
@andreea-anghel Just to confirm, you're running on Ubuntu 20.04 right? |
@timliubentoml yes, that's correct |
@andreea-anghel I tried again and couldn't reproduce your issue. I took another look at your pythonenv.txt, and you are running a very old version (at this point) of bentoml and its dependencies. It looks like you're at a2; we are now at a7. I would really recommend that you update your bentoml to the latest version by doing this:
If you need the code for the quickstart without having to create it again yourself, you can also get it from here along with instructions:
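The upgrade suggested above can be sketched as a one-liner (the `--pre` flag is needed while 1.0 is still in pre-release; exact resulting version depends on when you run it):

```shell
# Upgrade bentoml to the latest pre-release, then confirm what got installed.
pip install -U --pre bentoml
pip show bentoml
```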
I believe it was fixed here #2223. |
@timliubentoml Thanks for trying to reproduce the error. Have you tried running the code on an s390x system?
@andreea-anghel Do you have any suggestions for running bentoml on an s390x system? I tried using the docker container here: https://hub.docker.com/r/s390x/ubuntu/, but ran into a few issues getting the necessary libraries to build. Any idea how I could try to reproduce this? If you've already got it set up and the issue is reproducible on your side, I would say that you should try the latest bentoml version; we upgraded a bunch of things with regard to anyio.
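For anyone without s390x hardware, a rough sketch of running the image mentioned above under QEMU emulation (this assumes a Docker host with binfmt/QEMU support and is not something verified in this thread):

```shell
# One-time: register QEMU handlers for s390x binaries on the host.
docker run --privileged --rm tonistiigi/binfmt --install s390x

# Then run the s390x Ubuntu image interactively under emulation.
docker run --rm -it --platform linux/s390x s390x/ubuntu:20.04 bash
```

Emulated builds are slow and may not surface timing-sensitive bugs the same way real hardware does.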
@timliubentoml I've installed bentoml v1.0.0a7 and reinstalled my python env (see the attached pip freeze output). By the way, it would be great to have a docker image for s390x. |
@andreea-anghel That's awesome! Good to hear. Let me talk about the docker image for s390x. Are you using docker right now for prod deployment? |
@timliubentoml Unfortunately I'm not using Docker at the moment. Otherwise, I would have been happy to contribute.
I am experiencing the exact same bug with
Environment:
Addendum: after containerizing, running the bento as a Docker container works.
Describe the bug
The sklearn example available here https://docs.bentoml.org/en/latest/quickstart.html#installation fails at inference time with an AnyIO error. The bento service is deployed in production mode with `bentoml serve iris_classifier:latest --production`. When deployed in development mode, the inference works as expected.

To Reproduce
Steps to reproduce the issue:
1. `pip install bentoml --pre`
2. `bentoml build`
3. `bentoml serve iris_classifier:latest --production`
Expected behavior
The response should be the classification result, namely 1.
Screenshots/Logs
The error generated by the server is the following:
Environment:
Additional context