I have successfully used mint to slim the Docker images of my AI project, but inference time has nearly doubled for almost all of the images, without any obvious explanation. The non-slim and slim images behave identically and produce the same results, yet inference takes roughly twice as long.

Why is this? Does mint add anything extra, or perhaps remove something essential for running AI model inference? I read somewhere that extra security hardening can add some overhead, but a 2x slowdown still doesn't make sense to me. Has anyone else run into this? I couldn't find a solution anywhere, so perhaps the maintainers can shed some light on the issue.
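For context, here is the kind of timing harness I run inside each container to compare the images. It is a minimal sketch: `infer` is a placeholder standing in for my model's actual inference call, not real project code.

```python
import statistics
import time


def infer(batch):
    # Placeholder for the real model call (e.g. model.predict(batch)).
    return [x * 2 for x in batch]


def benchmark(fn, batch, warmup=3, runs=20):
    """Return the median wall-clock latency of fn(batch) over several runs."""
    # Warm-up runs so caches, JITs, and lazy imports don't skew the numbers.
    for _ in range(warmup):
        fn(batch)
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        fn(batch)
        timings.append(time.perf_counter() - start)
    return statistics.median(timings)


if __name__ == "__main__":
    latency = benchmark(infer, list(range(1024)))
    print(f"median inference latency: {latency * 1000:.3f} ms")
```

Running the same script in the original and the slimmed image is how I observed the roughly 2x difference.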