
[Relay] Fix CUDA batchmatmul strategy to allow mixed precision#9540

Closed
AndrewZhaoLuo wants to merge 1 commit into apache:main from AndrewZhaoLuo:aluo/fix-batchmatmul-outdtype

Conversation

@AndrewZhaoLuo
Contributor

In the past, mixed precision workloads for the CUDA batch_matmul strategy were disallowed for some reason. This change allows them again. Honestly, I don't know why it was this way, but we need this for #9186.
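To illustrate the pattern this PR re-enables: a mixed-precision batch_matmul takes float16 operands but accumulates and returns in a wider dtype (what Relay expresses via `out_dtype`). The following is a minimal NumPy sketch of that numeric behavior, not TVM code; the transposed-second-operand layout mirrors batch_matmul's default convention.

```python
import numpy as np

def batch_matmul_mixed(x, y):
    """Mixed-precision batch matmul sketch: fp16 inputs, fp32 accumulation.

    x: (batch, M, K) float16
    y: (batch, N, K) float16 -- second operand transposed on its last two
       axes, matching batch_matmul's default convention.
    Returns a (batch, M, N) float32 array.
    """
    assert x.dtype == np.float16 and y.dtype == np.float16
    # Cast up before multiplying so accumulation happens in float32,
    # avoiding fp16 overflow and precision loss in long reductions.
    return np.matmul(x.astype(np.float32),
                     np.swapaxes(y, 1, 2).astype(np.float32))

x = np.ones((2, 3, 4), dtype=np.float16)
y = np.ones((2, 5, 4), dtype=np.float16)
out = batch_matmul_mixed(x, y)
assert out.dtype == np.float32
assert out.shape == (2, 3, 5)
```

Without a strategy entry accepting a wider out_dtype, a CUDA compile of the equivalent Relay graph would fail to find a matching implementation, which is what this fix addresses.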

@AndrewZhaoLuo
Contributor Author

Superseded by #9186
