
Conversation

@cota (Collaborator) commented on Jan 26, 2024

With this we also allow picking backends that we ignored until now, namely eager and openxla{,_eval}+lazytensor.

While at it, shorten the printed names of "PytorchXLA" to "XLA" and "WorkloadNumber" to "Workload". This paves the way for an upcoming "tabulate" mode.
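A minimal sketch of the renaming this describes; only the label strings ("PytorchXLA" to "XLA", "WorkloadNumber" to "Workload") come from the PR, while the mapping and function names here are hypothetical:

```python
# Hypothetical sketch; the two renames come from the PR description,
# everything else (names, structure) is illustrative.
_SHORT_NAMES = {
    "PytorchXLA": "XLA",
    "WorkloadNumber": "Workload",
}

def short_name(label: str) -> str:
    """Return the shortened display name for a label, falling back to itself."""
    return _SHORT_NAMES.get(label, label)
```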

@frgossen (Collaborator) left a comment


Thank you!

@cota (Collaborator, Author) commented on Jan 26, 2024

I just removed openxla_eval+lazytensor, which makes no sense (openxla_eval only works with dynamo).
I've also added sanity checks for the values passed to the --backends flag.
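Roughly, a sanity check like the one mentioned might look as follows; this is a sketch assuming an argparse-based CLI, and everything except the --backends flag name and the backend names quoted in this thread is an assumption:

```python
# Sketch only: assumes an argparse CLI; the real script's flag handling may
# differ. The valid names below are the ones mentioned in this thread.
import argparse

_VALID_BACKENDS = {"eager", "openxla", "openxla_eval", "openxla+lazytensor"}

parser = argparse.ArgumentParser()
parser.add_argument("--backends", nargs="+", default=["openxla"])
args = parser.parse_args()

for backend in args.backends:
    if backend not in _VALID_BACKENDS:
        # openxla_eval only works with dynamo, so openxla_eval+lazytensor
        # is rejected by construction: it is simply not in the valid set.
        parser.error(f"unknown backend: {backend!r}")
```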

With this we also allow picking backends that we ignored until now, namely eager and openxla+lazytensor.

While at it, shorten the printed names of "PytorchXLA" to "XLA" and "WorkloadNumber" to "Workload". This paves the way for an upcoming "tabulate" mode.
@cota merged commit 2b7f435 into pytorch:master on Jan 29, 2024
@cota deleted the backends branch on Jan 29, 2024
cota added a commit that referenced this pull request Feb 13, 2024
The existing check became obsolete once we added the --backends flag in PR #6392; in particular, len(speedups) could be 1 when a single backend is passed to --backends. Fix it and add a test to make sure we emit no output to stdout (note that the warning message goes to stderr).
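The test described might look roughly like this pytest sketch; emit_warning() is a stand-in for the real code path, and only the stdout/stderr split comes from the commit message:

```python
# Sketch of the described test; emit_warning() stands in for the code path
# that fires when --backends selects a single backend.
import sys

def emit_warning() -> None:
    # Per the commit message, the warning goes to stderr, not stdout.
    print("warning: single backend selected; nothing to compare", file=sys.stderr)

def test_no_stdout_output(capsys):
    emit_warning()
    captured = capsys.readouterr()
    assert captured.out == ""          # stdout must stay clean
    assert "warning" in captured.err   # the warning lands on stderr
```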
amithrm pushed a commit to amithrm/xla that referenced this pull request Mar 1, 2024
bhavya01 pushed a commit that referenced this pull request Apr 22, 2024
bhavya01 pushed a commit that referenced this pull request Apr 22, 2024
