bugfix with float type casting in pytorch backend #58

Merged
2 commits merged into flaport:master on Nov 27, 2022

Conversation

hajanssen
Contributor

Hello,

I had an issue using the CUDA backend.
The type casting with bd.float() in grid.py doesn't work when bd.float is torch.float64:
unlike NumPy's np.float64(), torch.float64 can't be called directly, to my knowledge.

A torch equivalent may be the following:
pytorchValue = torch.tensor(someValue, dtype=torch.float64)
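
For concreteness, a minimal standalone sketch of the difference (someValue is just a placeholder float here, nothing taken from grid.py):

```python
import numpy as np
import torch

someValue = 3.0

# NumPy dtypes are callable, so this style of cast works:
np_value = np.float64(someValue)               # -> numpy.float64(3.0)

# torch.float64 is a torch.dtype object, not a constructor,
# so calling it the same way raises a TypeError:
try:
    torch.float64(someValue)
except TypeError as err:
    print("calling torch.float64 fails:", err)

# One torch equivalent is to pass the dtype to torch.tensor:
torch_value = torch.tensor(someValue, dtype=torch.float64)
print(torch_value.dtype)                       # torch.float64
```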

I have proposed some changes that did the trick for me.
I hope this is an OK approach; if not, I'm happy to hear a critique.

Thanks for this nice package, I enjoy it a lot :)
Greetings, Hauke

@flaport
Owner

flaport commented Nov 27, 2022

Thanks for your contribution, @hajanssen,

However, I think it might be better not to use bd.float directly for type casting after all. Your solution would work in this specific case, but it would break using bd.float as the dtype argument for other functions like bd.array and so on.

Instead, I propose using bd.array(..., dtype=bd.float) directly rather than bd.float(...).
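
Roughly, the trade-off looks like this (a small sketch using torch directly as a stand-in for the backend object; wrapped_float is just a hypothetical name for what a callable bd.float would become):

```python
import torch

someValue = 3.0

# If bd.float were turned into a callable wrapper so that bd.float(value)
# works, it would no longer be a plain torch.dtype:
def wrapped_float(x):
    return torch.tensor(x, dtype=torch.float64)

wrapped_float(someValue)  # casting itself works ...

try:
    # ... but passing it where a dtype is expected now fails:
    torch.tensor(someValue, dtype=wrapped_float)
except TypeError as err:
    print("dtype=bd.float style breaks:", err)

# Keeping bd.float as the plain dtype torch.float64 keeps the dtype role
# intact, and casting uses the bd.array(..., dtype=bd.float) pattern:
casted = torch.tensor(someValue, dtype=torch.float64)
print(casted.dtype)  # torch.float64
```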

I updated the PR as such.

Thanks again for your input!

@flaport flaport merged commit 644478a into flaport:master Nov 27, 2022
@hajanssen
Contributor Author

Ah OK, good foresight with that change, and thanks for the feedback and the quick update!

Have a good day!
