triton resnet example #3431

Closed · wants to merge 2 commits

Conversation

@asaiacai (Contributor) commented Apr 8, 2024

Requested by #3347. This shows how to run a torch model server with NVIDIA Triton. Requires the torch model to be exportable with torch.jit.
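
For context, a minimal sketch of preparing such a model (not the PR's actual export script; the ResNet-50 choice, the model-repository path, and a recent torchvision are assumptions):

```python
import os

import torch
import torchvision

# Load a pretrained ResNet-50 and put it in inference mode.
model = torchvision.models.resnet50(weights="IMAGENET1K_V2").eval()

# Trace with a representative input; Triton's PyTorch (libtorch) backend
# loads TorchScript, so the model must survive torch.jit.trace/script.
example_input = torch.randn(1, 3, 224, 224)
traced = torch.jit.trace(model, example_input)

# Save into a (hypothetical) Triton model-repository layout:
#   model_repository/resnet50/1/model.pt
os.makedirs("model_repository/resnet50/1", exist_ok=True)
traced.save("model_repository/resnet50/1/model.pt")
```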

To run the example:

sky launch -c triton tritonserver.yaml
pip install tritonclient[http] numpy
export TRITONSERVER=$(sky status --ip triton)
python triton_client.py
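
For reference, a minimal sketch of what a client along the lines of triton_client.py could look like (the model name `resnet50` and the `INPUT__0`/`OUTPUT__0` tensor names are assumptions; the real names and shapes come from the deployed model's config.pbtxt):

```python
import os

import numpy as np
import tritonclient.http as httpclient

# Server address comes from `sky status --ip triton` (see the commands above);
# 8000 is Triton's default HTTP port.
server_ip = os.environ["TRITONSERVER"]
client = httpclient.InferenceServerClient(url=f"{server_ip}:8000")

# Build a dummy batch; a real client would preprocess an actual image.
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)

infer_input = httpclient.InferInput("INPUT__0", list(batch.shape), "FP32")
infer_input.set_data_from_numpy(batch)
requested_output = httpclient.InferRequestedOutput("OUTPUT__0")

response = client.infer("resnet50", inputs=[infer_input], outputs=[requested_output])
logits = response.as_numpy("OUTPUT__0")
print("Predicted class:", int(np.argmax(logits, axis=1)[0]))
```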

@Michaelvll (Collaborator) left a comment

Thanks for adding the example for Triton @asaiacai! This is awesome. I am wondering if we can add a README alongside the example describing its usage, which could simply be a copy of the comments at the top of the YAML.

I will try to test this soon.

Just curious, will it be possible to get the Triton server example to work with our SkyPilot serve, so it can be easily scaled up?

Another piece of future work would be getting it to work with some popular LLMs, and we could consider adding both the image model and the LLM to our AI gallery: https://skypilot.readthedocs.io/en/latest/gallery/index.html

github-actions bot commented Aug 8, 2024

This PR is stale because it has been open 120 days with no activity. Remove stale label or comment or this will be closed in 10 days.

github-actions bot added the Stale label Aug 8, 2024

@asaiacai (Contributor, Author)

I'm going to close this since I don't really have time to see it through. The last blocker I ran into for the image inference example was that sky serve was giving me different outputs than a normal sky launch, which suggests something odd may be happening to the payload in transit between the sky serve load balancer and the replica.

I think the original issue was to get an example going that compares vLLM and TRT-LLM. A good starting point: the vLLM project runs nightly benchmarks against both, so porting those to YAMLs is probably sufficient if someone else would like to pick this up.

@asaiacai closed this Aug 14, 2024