
feature request: limit memory usage when handling messages with datasets #517

Closed
r1b opened this issue Jul 25, 2020 · 3 comments

@r1b
Contributor

r1b commented Jul 25, 2020

Is your feature request related to a problem? Please describe.
I work on a "DICOM router" type of solution with multiple tenants and potentially large datasets. It's important to me to limit the amount of memory pynetdicom uses on both the send and receive sides. I am particularly interested in the scaling properties of sending and receiving C-STORE messages.

Describe the solution you'd like
I would like to be able to read datasets from disk when sending them and write datasets to disk when receiving them. From poking around the code it seems like this is feasible. I'd just like to hear your thoughts and gauge your interest in supporting this before I dive in.
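To make the send side concrete: the idea is to stream the encoded dataset from disk in bounded chunks instead of holding the whole thing in memory. A minimal stdlib sketch, where the function name and chunk size are illustrative and not part of pynetdicom's API:

```python
from typing import Iterator

def iter_dataset_chunks(path: str, chunk_size: int = 64 * 1024) -> Iterator[bytes]:
    """Yield the encoded dataset from disk in fixed-size chunks.

    Peak memory stays bounded by chunk_size regardless of how large
    the dataset file is, which is the whole point for a multi-tenant
    router handling arbitrarily big instances.
    """
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            yield chunk
```

Each chunk could then be handed to the transport layer as a P-DATA fragment; the sender never needs the full dataset in memory at once.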

Describe alternatives you've considered
From reading through the pydicom issue tracker it seems like mmap is the go-to solution for limiting memory usage. I think this is appropriate for some applications, e.g. viewers, but doesn't work well in a multi-tenant application.

@scaramallion
Member

I'm potentially in favour, although I'm not really sure what you intend. What do you mean when you say you'd like to read from disk on send and write to disk on receive, and how would that differ from what currently happens in Association.send_c_store() and the EVT_C_STORE handler?

One change I've been meaning to make would be to support file paths with Association.send_c_store()'s dataset parameter.

@scaramallion
Member

I think I can see what you mean for sending: pass a file path to send_c_store, then during message encoding read it in chunks and send those. Although I'm not sure how you'd get the SOP Instance UID and SOP Class UID; maybe by decoding just the file meta?
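In practice pydicom's `read_file_meta_info()` (or `dcmread()` stopped early) would be the tool for this, but to illustrate that only the file meta group needs decoding, here is a simplified stdlib-only sketch. It builds a toy file (preamble, "DICM" magic, two file meta elements with made-up UID values) and pulls out the SOP Class/Instance UIDs without ever reading a main dataset. It handles only short-form VRs like UI; real file meta also contains OB elements with 4-byte lengths, which a real implementation would need to cover:

```python
import struct

def encode_ui(group: int, elem: int, value: bytes) -> bytes:
    """Encode one explicit-VR little-endian element with VR 'UI'."""
    if len(value) % 2:  # DICOM element values must have even length
        value += b"\x00"
    return struct.pack("<HH2sH", group, elem, b"UI", len(value)) + value

# A toy "file": 128-byte preamble, magic, and two file meta elements.
toy = (
    b"\x00" * 128 + b"DICM"
    + encode_ui(0x0002, 0x0002, b"1.2.840.10008.5.1.4.1.1.7")  # Media Storage SOP Class UID
    + encode_ui(0x0002, 0x0003, b"1.2.3.4.5")                  # Media Storage SOP Instance UID (made up)
)

def peek_sop_uids(buf: bytes) -> dict:
    """Decode only the group 0x0002 (file meta) elements to find the
    SOP Class/Instance UIDs; the potentially huge dataset that would
    follow is never touched."""
    assert buf[128:132] == b"DICM"
    pos, uids = 132, {}
    while pos + 8 <= len(buf):
        group, elem, vr, length = struct.unpack("<HH2sH", buf[pos:pos + 8])
        pos += 8
        if group != 0x0002:  # past the file meta group: stop reading
            break
        value = buf[pos:pos + length]
        pos += length
        if (group, elem) == (0x0002, 0x0002):
            uids["sop_class_uid"] = value.rstrip(b"\x00").decode()
        elif (group, elem) == (0x0002, 0x0003):
            uids["sop_instance_uid"] = value.rstrip(b"\x00").decode()
    return uids
```

A streaming send_c_store could do this peek once, then fall back to chunked reads for the rest of the file.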

@r1b
Contributor Author

r1b commented Jul 26, 2020

Yep, I think it all boils down to having the ability to use an io.FileIO for the data_set attribute on DIMSEMessage. On the read side it is as you say: you can just use the file you provided to send_c_store (with the caveats about encoding you raised). On the write side it is a little trickier:

  1. Where does the io.FileIO come from? I was imagining you could write the datasets you receive to either a temporary directory or some user-specified path. I'm not sure how this would work concretely in the API; I need to poke around some more.
  2. What is the lifetime of the io.FileIO after you receive it?
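One possible answer to both questions can be sketched as follows: the receive side creates a named temporary file per message, streams the incoming fragments into it, and hands ownership of the path to the application, which then controls the file's lifetime. This is a hypothetical pattern, not pynetdicom's actual API; the function name and parameters are made up:

```python
import os
import tempfile
from typing import Iterable, Optional

def write_incoming_dataset(fragments: Iterable[bytes],
                           out_dir: Optional[str] = None) -> str:
    """Stream received data fragments straight to disk.

    Returns the file path. The caller owns the file and decides when to
    delete it, which answers the lifetime question: the library never
    holds the whole dataset in memory and keeps no handle to the file
    after handing it off.
    """
    fd, path = tempfile.mkstemp(suffix=".dcm", dir=out_dir)
    with os.fdopen(fd, "wb") as f:
        for fragment in fragments:
            f.write(fragment)
    return path
```

Passing out_dir lets the user pick the destination (the "user-specified path" case), while leaving it as None falls back to the system temporary directory.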

Thanks for your help!
