
Using pyEIT for larger models #22

Open
jareer22 opened this issue May 27, 2021 · 8 comments
@jareer22

Hi @liubenyuan, I'm an undergraduate biomedical student, new to EIT, and interested in the pyEIT project. I have looked into all the given examples. However, I'm curious whether pyEIT can be applied to larger models like human lung datasets, or whether it is only coded for small prototypes. If it can be applied to larger models, may I know how that is done? Thank you.

@liubenyuan
Collaborator

liubenyuan commented Feb 12, 2022

You mean running pyEIT with a realistic mesh of the thorax or brain, with millions of tetrahedrons? pyEIT has not yet been optimized for models this large. I started coding on the NY-head model years ago but stopped working on it due to some personal issues.

Is your model the same size as that one? If so, we can start tweaking it.

Sorry for the late reply; I was working on other projects last year.

@liubenyuan liubenyuan self-assigned this Feb 15, 2022
@liubenyuan
Collaborator

#46 is working on a demo using the NY-HEAD phantom (I do not know whether EITForward will fit in memory).

@ChabaneAmaury
Contributor

It would be possible to run the calculations I implemented with NumPy on the GPU, if one is detected, but that would mean installing and importing a new package (e.g. TensorFlow, CuPy, etc.). It is not impossible, and could even be done easily with TensorFlow, since they have added an experimental NumPy API (see the TensorFlow docs).
However, I am a bit reluctant to use it, as it can slow down the overall program (at least at startup) and/or make it memory-hungry and inefficient if not used correctly.
CuPy, on the other hand, might be better suited for this use case (see the CuPy docs), but I have not used it before.
It is also possible to switch between CuPy and NumPy at startup, by checking whether a GPU is detected (TensorFlow handles this switch automatically itself).

I am not sure if this answers the original question, but it is a lead I can look into if needed.
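A minimal sketch of that startup backend switch, assuming CuPy is only optionally installed (the `xp` alias and `sum_of_squares` helper are illustrative, not pyEIT API):

```python
import numpy as np

# Pick the array backend once at startup: CuPy when a CUDA GPU is
# usable, otherwise plain NumPy. Code written against `xp` runs on either.
try:
    import cupy as cp
    cp.cuda.runtime.getDeviceCount()  # raises if no CUDA device is present
    xp = cp
except Exception:
    xp = np

def sum_of_squares(a):
    """Backend-agnostic helper: accepts NumPy or CuPy array-likes."""
    return xp.sum(xp.asarray(a) ** 2)
```

Code written this way only has to be careful at the boundaries, e.g. converting results back to NumPy (`cp.asnumpy`) before handing them to plotting or SciPy routines.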

@liubenyuan
Collaborator

Hi, CuPy offloads the computation to the GPU, which might be the better choice. I have collected some articles/projects on precise 3D EIT simulation, though my current priority is to implement a complete electrode model (CEM) in the existing pyEIT. The CEM would make the simulation much more accurate.

@ChabaneAmaury
Contributor

I am currently trying to implement CuPy support, though it requires a heavy installation on the CUDA side. Unfortunately, that cannot be fully automated during package import. I am also trying to implement a fallback method for when the installation is incomplete.
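One way such a fallback could be made robust, sketched with a hypothetical `cupy_is_usable` helper: an incomplete CUDA installation can let `import cupy` succeed but fail on first use, so probing with a tiny computation is safer than checking the import alone:

```python
def cupy_is_usable() -> bool:
    """Return True only if CuPy can actually run a kernel on this machine."""
    try:
        import cupy as cp
        # A trivial computation forces a real kernel launch; a broken or
        # partial CUDA installation fails here rather than at import time.
        return int((cp.arange(4) * 2).sum().item()) == 12
    except Exception:
        return False
```

The import-time logic then reduces to a single `if cupy_is_usable():` branch instead of a bare `try: import cupy`.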

@liubenyuan
Collaborator

Are you trying to use big models like NY-HEAD or openSAHE? If you need any help or there is anything I can do, please PM me.

@ChabaneAmaury
Contributor

Not right now. First I want to set up a working installation as easily as possible (since it must be set up manually by the user). Once that is done and everything can be performed on the GPU, I will take a look at this.

@ChabaneAmaury
Contributor

Update: it seems the GPU implementation requires reworking the module entirely, and restricts it to Linux users (specifically because of the scipy.spatial.Delaunay import, at least). The GPU may, in theory, be a good idea for accelerating calculations, but my findings so far are as follows:

  • The available memory is drastically reduced, being limited to GPU memory (VRAM)
  • The installation process is heavier and longer, and needs some adjustments from the user if the module is not in a Conda env
  • Some necessary dependencies cannot implicitly work with both NumPy and CuPy, meaning we need to adjust each use of them in pyEIT (not only the imports, of course)
  • It works only with Nvidia GPUs
  • Partial use of the GPU (possible with only minimal code adjustments) makes the program slower, since data must constantly move between GPU and RAM

I will try and take a look at the problem itself with a big model like NY-HEAD, find the bottleneck and assess the possibility of improving it.
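The last point above (partial GPU use forcing constant transfers) can be sketched as follows; `cpu_only_step` stands in for a dependency such as scipy.spatial.Delaunay that has no CuPy equivalent, and all names here are illustrative, not pyEIT API:

```python
import numpy as np

def cpu_only_step(a: np.ndarray) -> np.ndarray:
    # Placeholder for a CPU-only dependency with no CuPy equivalent.
    return a + 1.0

def mixed_pipeline(xp, a):
    """Run one CPU-only step inside an otherwise GPU-resident pipeline.

    With xp = cupy this incurs a device->host copy before the step and a
    host->device copy after it -- the per-call overhead described above.
    """
    host = a.get() if hasattr(a, "get") else np.asarray(a)  # device -> host
    host = cpu_only_step(host)                              # CPU-only work
    return xp.asarray(host)                                 # host -> device
```

With `xp = numpy` the conversions are cheap no-ops, which is why the slowdown only appears once the rest of the pipeline lives on the GPU.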
