
Some questions about the future plans of pytorch-fortran #1

Closed
luc99hen opened this issue Dec 1, 2021 · 2 comments
luc99hen commented Dec 1, 2021

Hi @alexeedm, I am LuChen, a postgraduate student majoring in Software Engineering at Tongji University, China, and my current research interests revolve around AI for climate science. Since I couldn't find your contact information, I'm creating an issue here.

As you might expect, we also ran into the lack of an AI ecosystem for Fortran during our research. So over the past few months I developed a tool, Fortran-Torch-Adapter, from scratch and used it in my research. (🤣 Yes, it's based on exactly the same idea as pytorch-fortran: calling a TorchScript model directly from Fortran through C++/Fortran interoperability.) I was also working on a paper introducing this new tool when I found your repo yesterday. It seems Nvidia had been working on this even earlier. What a coincidence! 😂😂

Given that, I'd like to know what the future plans for pytorch-fortran are. On the project side, Fortran-Torch-Adapter is still in its infancy, and I would love to see a more powerful and better-organized tool like pytorch-fortran take it over; maybe I could also make some small contributions to this wonderful project. On the paper side, does Nvidia have any plans to file a patent, or perhaps publish a paper, on this? Since I am currently preparing a paper, you are very welcome to join as a co-author or in any other capacity if you are interested.

Everything is open at this point; I just want to hear your thoughts.

alexeedm (Owner) commented Dec 8, 2021

Hey @luc99hen, nice to hear from you, and thanks for your interest in this project! I see your code is indeed pretty similar to what I have, down to the same resnet example.

I'm definitely interested in collaboration; that would be wonderful. I'm preparing code to do complex training through zero-copy calls directly into a Python script, which would allow an arbitrary training pipeline to run from within Fortran. And of course better docs are also incoming.
What could be of great help is if you try out the bindings and leave your feedback: whether it works at all :), whether the API is adequate and fast, and so on. If you find something you'd like to improve, let me know or submit a PR.

I'd be glad if you use the bindings and reference the project in your paper, but I don't believe this project deserves a paper of its own (at least not just yet :)

luc99hen (Author) commented Dec 11, 2021

Thank you for your reply!

> I'm preparing code to do complex training through zero-copy calls directly into a Python script, which would allow an arbitrary training pipeline to run from within Fortran.

This sounds like an amazing and ambitious project! My current understanding is that it will build a complete Fortran API on top of the libtorch library for AI training (if I have misunderstood, please point it out :>). Actually, supporting training in a Fortran environment was also part of my future plans, so I'm glad to see pytorch-fortran making progress on this. It would be even better if I could be part of this great plan.
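To check my understanding of the zero-copy part, here is a tiny sketch of the idea in Python/numpy (the ctypes buffer just stands in for Fortran-owned memory; this is my guess at the mechanism, not your actual design):

```python
import ctypes
import numpy as np

# "Zero-copy" means the Python side sees the caller's buffer directly
# instead of receiving a copy. A raw ctypes buffer stands in here for
# an array allocated on the Fortran side.
buf = (ctypes.c_float * 6)(*range(6))        # pretend Fortran owns this

# np.frombuffer wraps the existing memory without copying it...
arr = np.frombuffer(buf, dtype=np.float32).reshape(2, 3)

arr[0, 0] = 42.0        # ...so writes from Python are visible to the caller
assert buf[0] == 42.0   # the underlying buffer changed: no copy was made
```

If the bindings work like this, a Python training step could update weights or read batches straight out of Fortran arrays with no serialization overhead.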

> What could be of great help is if you try out the bindings and leave your feedback: whether it works at all :).

Since I don't have much development experience in the HPC field, I have learned a lot from reading your implementation (thank you, Alex 😉). One point about the forward API confused me: I noticed the reversed input dimensions in the resnet example. I understand this comes from the different array memory layouts of Fortran (column-major) and C++ (row-major). I was wondering whether it would be better for the binding to handle this inconsistency, like this, rather than making users reverse their input dimensions manually.
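To illustrate what I mean, here is a small numpy sketch of the layout issue as I understand it (just my own illustration, not your binding code):

```python
import numpy as np

# A Fortran array dimensioned (4, 3, 2) is stored column-major: the
# first index varies fastest in memory. The same bytes, read by a
# row-major (C/C++) library such as libtorch, describe a (2, 3, 4)
# tensor -- the dimensions appear in reverse order.
fortran_style = np.zeros((4, 3, 2), order="F")   # column-major buffer

# Reinterpreting the identical memory with C (row-major) ordering is
# just a transpose, which in numpy is a zero-copy view:
c_view = fortran_style.T
assert c_view.flags["C_CONTIGUOUS"]
assert c_view.shape == (2, 3, 4)      # dimensions reversed, no data moved
assert c_view.base is fortran_style   # shares the same buffer
```

So the binding could, in principle, perform this reversal internally (it is only a reinterpretation of strides, not a data copy) and let users keep their natural Fortran dimension order.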

> I'd be glad if you use the bindings and reference the project in your paper, but I don't believe this project deserves a paper of its own (at least not just yet :)

I agree that the adapter alone is too simple to carry a whole paper. That's why I was planning to introduce it in the context of neural Earth system modelling, with some well-crafted sample use cases. Currently, researchers are desperate for a simpler and more efficient way to incorporate AI models into Fortran environments, and there have already been many attempts, such as FKB, to solve this. I believe our solution will make a difference in this field. (BTW, I would also like to know whether there have been any successful applications of pytorch-fortran in more complex scenarios. Thanks in advance!)
