Working on Model Inference Servers @aws
- Amazon Web Services
- San Francisco, CA
- https://www.linkedin.com/in/aaquib/
Pinned
- pytorch/serve: Serve, optimize and scale PyTorch models in production
- deepjavalibrary/djl-serving: A universal scalable machine learning model deployment solution
- triton-inference-server/server: The Triton Inference Server provides an optimized cloud and edge inferencing solution
- awslabs/multi-model-server: Multi Model Server is a tool for serving neural net models for inference
- aws/deep-learning-containers: AWS Deep Learning Containers are pre-built Docker images that make it easier to run popular deep learning frameworks and tools on AWS
- aws/sagemaker-inference-toolkit: Serve machine learning models within a 🐳 Docker container using 🧠 Amazon SageMaker