Ensemble model cannot be inferenced by clients without clear error log to debug. #70
Comments
Hi @Edwardmark! Thank you for the extensive description of the problem. I suspect your issue might be connected to the GPU input. Should you like to verify that it's about the GPU input, please update your pipeline.
@szalpal I changed the dali_det_post pipeline as follows:
But I met the same error:
In addition, my first preprocess model is defined as follows:
Any advice to make it work, please? Thanks. @szalpal
@szalpal I changed the version to 21.04 and changed all inputs to CPU, but still no error log is shown and I get the same log as below. What is your advice? Thanks.
It's possible that, even though you changed the ExternalSource to CPU, … Could you try it out and verify whether the GPU input solves your problem, or whether we need to dig deeper? The instructions on how to build it are …
@szalpal It works, thank you very much.
@szalpal How can I build the Docker image without downloading the git repositories? I mean, if I download the related git repos beforehand, what changes should I make to the CMakeLists in dali_backend? When building the Docker image, the following errors occur, which look like network errors:
As far as I know, unfortunately cloning git repos is inherent to building backends in Triton. Is there a particular reason you would like to clone the repos beforehand? If you want to use the latest tritonserver version (21.05), I merged the PR that applies it today: #68. So you can clone the upstream dali_backend.
@szalpal Because the network is not always good, I want to clone the repos beforehand and then use them to make the build process quicker.
I see. It would be possible to tweak the root CMakeLists.txt. IMPORTANT: this is a dirty explanation of a workaround, and we certainly do not support, nor plan to support, this way of building in the foreseeable future. We also highly discourage changing this build procedure for production environments. The point is that there are three repos that need to be acquired to properly build any backend: see lines 54 to 71 in bb9204c.
Should you like to change them to be acquired from your disk, first clone all three repos you need; then you can switch from fetching content from a git repository to fetching it from a disk location by changing the GIT, GIT_SHALLOW and GIT_REPOSITORY subcommands.
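As an unsupported illustration only (the dependency name and local path are assumptions), switching one FetchContent entry from a git fetch to a pre-cloned local directory might look like:

```cmake
# Hypothetical, unsupported sketch: acquire a dependency from a local,
# pre-cloned directory instead of a remote git repository.
include(FetchContent)
FetchContent_Declare(
  repo-core
  SOURCE_DIR /opt/pre-cloned/core  # replaces the GIT_REPOSITORY/GIT_TAG pair
)
FetchContent_MakeAvailable(repo-core)
```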
@szalpal Thank you very much.
@szalpal Could you please give me more hints on how to change the GIT, GIT_SHALLOW and GIT_REPOSITORY subcommands? Thanks. I changed the lines as follows:
Is that right?
I built the Docker image successfully.
What is the problem you are facing?
So how should I deal with that?
As I mentioned above, we do not support, nor plan to support, this kind of build procedure, so unfortunately I won't be able to answer all the questions, simply because I haven't tried or tested it. The error you're facing appears because the server verifies the API version the backend has been built with. Be sure to build against headers with the proper API version:
#define TRITONBACKEND_API_VERSION_MAJOR 1
#define TRITONBACKEND_API_VERSION_MINOR 0
I checked out the 21.05 branch, and the problem is solved. Thank you very much, @szalpal.
@szalpal Do I have to install nvidia-dali-nightly?
@szalpal Thanks.
Not necessarily. We recommend using the latest DALI release.
If I use DALI 1.2, will dali_backend support GPU input?
@Edwardmark, yes. Although we don't guarantee backwards compatibility; therefore, only the latest DALI version is properly tested and maintained.
Description
I run an ensemble model containing three models executed sequentially, one by one. I checked each model individually, and each one is OK; I also checked a two-model ensemble, and that is OK too. But when I connect all three together and run the gRPC client, the server crashes without meaningful error logs, as follows:
Triton Information
21.03 docker container
To Reproduce
The ensemble config.pbtxt is as follows:
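For context, an ensemble config.pbtxt chaining three models is generally shaped like the sketch below (all model, tensor, and input/output names here are hypothetical, not the original configuration):

```protobuf
name: "ensemble_model"
platform: "ensemble"
max_batch_size: 32
input [ { name: "INPUT" data_type: TYPE_UINT8 dims: [ -1 ] } ]
output [ { name: "OUTPUT" data_type: TYPE_FP32 dims: [ -1, 4 ] } ]
ensemble_scheduling {
  step [
    {
      model_name: "dali_preprocess"
      model_version: -1
      input_map { key: "DALI_INPUT" value: "INPUT" }
      output_map { key: "DALI_OUTPUT" value: "preprocessed" }
    },
    {
      model_name: "detector"
      model_version: -1
      input_map { key: "images" value: "preprocessed" }
      output_map { key: "raw_boxes" value: "detections" }
    },
    {
      model_name: "dali_det_post"
      model_version: -1
      input_map { key: "DALI_POST_INPUT" value: "detections" }
      output_map { key: "POST_OUTPUT" value: "OUTPUT" }
    }
  ]
}
```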
The client is as follows:
Expected behavior
Results should be obtained without error, but instead the server just crashes.
@deadeyegoodwin Looking forward to your reply.
The dali_det_post model config.pbtxt is as follows:
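For reference, a config.pbtxt for a DALI backend model is generally shaped like this sketch (names, dims, and data types here are hypothetical):

```protobuf
name: "dali_det_post"
backend: "dali"
max_batch_size: 32
input [
  {
    name: "DALI_POST_INPUT"
    data_type: TYPE_FP32
    dims: [ -1, 4 ]
  }
]
output [
  {
    name: "POST_OUTPUT"
    data_type: TYPE_FP32
    dims: [ -1, 4 ]
  }
]
```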
the above pipeline is generated using the following code:
The above dali_det_post model can run correctly by itself, but connecting it to the first two models causes crashes in the server.
Replacing the above post-processing model with a Python backend model as follows runs without error:
Python config.pbtxt is as follows:
Any suggestions please? @deadeyegoodwin Thanks in advance.