AttributeError pathml.core #402

Open
tdenize opened this issue Dec 13, 2023 · 8 comments

tdenize commented Dec 13, 2023

Hello,
I am trying to replicate the first steps of the tutorial, but I run into an error very early on.

I installed PathML following the installation instructions, launched Spyder in the pathml environment, and set up OpenSlide. Then I ran:

from pathml.core import HESlide, CODEXSlide, VectraSlide, SlideData, types

wsi = HESlide("AG_1.svs")

and got the following error:

Traceback (most recent call last):

  Cell In[7], line 1
    wsi = HESlide("AG_1.svs")

  File D:\Files\Pycharm\Test\pathml\lib\site-packages\pathml\core\slide_data.py:512 in __init__
    super().__init__(*args, **kwargs)

  File D:\Files\Pycharm\Test\pathml\lib\site-packages\pathml\core\slide_data.py:201 in __init__
    self.h5manager = pathml.core.h5managers.h5pathManager(slidedata=self)

AttributeError: module 'pathml.core' has no attribute 'h5managers'

This prevents me from opening the slide.
Do you know what could be going wrong?

Thank you for your help,
Thomas

sreekarreddydfci (Collaborator) commented:

Hi Thomas,

Could you please try to replicate this error in a Jupyter notebook and let me know? I haven't encountered any issues opening H&E slides before.

Thank you.
Sreekar.

tdenize commented Dec 16, 2023

Hi Sreekar,
I have the exact same issue with Jupyter Notebooks (screenshot attached).
I would really appreciate any input.
Thanks,
Thomas
[Screenshot attached: 2023-12-16 092858]

sreekarreddydfci (Collaborator) commented:

We can try two approaches:

  1. Setting up the environment from scratch.
  2. Installing the Docker image as mentioned in this link.

The best option would be to use the Docker image. Let me know if that resolves the issue.

Also, could you tell me which version of PathML is installed?

import pathml
pathml.__version__

Thanks,
Sreekar.

tdenize commented Dec 16, 2023

Thanks!

What do you mean by setting up the environment from scratch?

I tried with Docker. It doesn't give me the h5manager error, which is nice, but it looks like it can't find my file. I tried various locations and I can't make it work.

The version of PathML I have installed is '2.1.1'.

I really appreciate your help.
Thomas
[Screenshot attached: 2023-12-16 144538]

sreekarreddydfci (Collaborator) commented:

I mean, reinstalling the conda environment and PathML as mentioned here.

To load the slide, you can upload the file to JupyterLab. Alternatively, you can mount a local directory (or file) when running the Docker image.

Here is the command to mount the slide file into the container:

docker run -it -p 8888:8888 -v E:/test.svs:/home/pathml/test.svs pathml/pathml

and run the code below to load the slide:

from pathml.core import HESlide
image = HESlide('/home/pathml/test.svs')
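
If the file still isn't found after mounting, a quick sanity check in plain Python (nothing PathML-specific; the path matches the mount command above, so adjust it if you mounted elsewhere) confirms whether the file is actually visible inside the container:

from pathlib import Path

# Path used in the docker run command above; adjust if you mounted elsewhere.
slide_path = Path("/home/pathml/test.svs")

if slide_path.exists():
    print(f"Found {slide_path} ({slide_path.stat().st_size / 1e6:.1f} MB)")
else:
    print(f"{slide_path} is not visible in the container; check the -v mount path")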

Let me know if that works.

tdenize commented Dec 18, 2023

Hey,
It looks like it works.
My workers die every time I try to run the first pipeline on a relatively small WSI (115 MB), both in Docker and on Colab (I even upgraded to the paid tier to see if that helped; it didn't).
Am I doing something wrong?
Thanks again for your help.

jamesgwen (Collaborator) commented Dec 20, 2023

Regarding the AttributeError: module 'pathml.core' has no attribute 'h5managers', I have adjusted the import statements in pathml/pathml/core/slide_data.py.

Adjusted import statements in 4961630
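
For reference, the gist of the change (a sketch of the pattern, not the exact diff in that commit) is to import the needed names explicitly instead of relying on attribute access on the pathml.core package, which fails when the subpackage has not been imported yet; the same pattern applies to any other pathml.core.<submodule>.<name> references:

# Before: attribute-style access, which raises AttributeError if
# pathml.core.h5managers has not been imported anywhere yet.
# self.h5manager = pathml.core.h5managers.h5pathManager(slidedata=self)

# After: explicit import at the top of slide_data.py, then direct use.
from pathml.core.h5managers import h5pathManager

self.h5manager = h5pathManager(slidedata=self)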

Regarding the workers dying, is there an output message when they die?
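
If it turns out to be memory pressure (Colab and Docker workers are often killed once they exceed available RAM), one thing worth trying is a smaller, memory-capped Dask cluster passed to run(). A minimal sketch, assuming the LocalCluster/Client pattern from the PathML preprocessing tutorial (the transform, paths, and tile size here are only illustrative):

from dask.distributed import Client, LocalCluster

from pathml.core import HESlide
from pathml.preprocessing import Pipeline, TissueDetectionHE

# Fewer workers with an explicit memory limit, so that tile processing
# cannot exhaust the machine's RAM and take down the whole cluster.
cluster = LocalCluster(n_workers=2, threads_per_worker=1, memory_limit="4GB")
client = Client(cluster)

wsi = HESlide("/home/pathml/test.svs")
pipeline = Pipeline([TissueDetectionHE(mask_name="tissue")])

# distributed=True sends tiles to the Dask workers; as a fallback,
# distributed=False runs everything in the main process.
wsi.run(pipeline, distributed=True, client=client, tile_size=500)

client.close()
cluster.close()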

tdenize commented Dec 20, 2023

Thank you for the modification of the import statements.
I did get a subsequent error, though (sorry :( ):

wsi = HESlide(r"C:\Users\deniz\OneDrive\Bureau\python_work\AG_1.svs")
Traceback (most recent call last):

  Cell In[18], line 1
    wsi = HESlide(r"C:\Users\deniz\OneDrive\Bureau\python_work\AG_1.svs")

  File D:\Files\Pycharm\Test\pathml\lib\site-packages\pathml\core\slide_data.py:513 in __init__
    super().__init__(*args, **kwargs)

  File D:\Files\Pycharm\Test\pathml\lib\site-packages\pathml\core\slide_data.py:202 in __init__
    self.h5manager = h5pathManager(slidedata=self)

  File D:\Files\Pycharm\Test\pathml\lib\site-packages\pathml\core\h5managers.py:84 in __init__
    self.slide_type = pathml.core.slide_types.SlideType(**slide_type_dict)

AttributeError: module 'pathml' has no attribute 'core'

Regarding the workers, on Colab I get the following messages:

INFO:distributed.http.proxy:To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
INFO:distributed.scheduler:State start
INFO:distributed.scheduler:  Scheduler at:     tcp://127.0.0.1:39663
INFO:distributed.scheduler:  dashboard at:  http://127.0.0.1:8787/status
INFO:distributed.nanny:        Start Nanny at: 'tcp://127.0.0.1:33993'
INFO:distributed.nanny:        Start Nanny at: 'tcp://127.0.0.1:42997'
INFO:distributed.nanny:        Start Nanny at: 'tcp://127.0.0.1:33561'
INFO:distributed.nanny:        Start Nanny at: 'tcp://127.0.0.1:35047'
INFO:distributed.scheduler:Register worker <WorkerState 'tcp://127.0.0.1:43365', name: 1, status: init, memory: 0, processing: 0>
INFO:distributed.scheduler:Starting worker compute stream, tcp://127.0.0.1:43365
INFO:distributed.core:Starting established connection to tcp://127.0.0.1:60068
INFO:distributed.scheduler:Register worker <WorkerState 'tcp://127.0.0.1:35483', name: 0, status: init, memory: 0, processing: 0>
INFO:distributed.scheduler:Starting worker compute stream, tcp://127.0.0.1:35483
INFO:distributed.core:Starting established connection to tcp://127.0.0.1:60040
INFO:distributed.scheduler:Register worker <WorkerState 'tcp://127.0.0.1:40517', name: 3, status: init, memory: 0, processing: 0>
INFO:distributed.scheduler:Starting worker compute stream, tcp://127.0.0.1:40517
INFO:distributed.core:Starting established connection to tcp://127.0.0.1:60076
INFO:distributed.scheduler:Register worker <WorkerState 'tcp://127.0.0.1:38553', name: 2, status: init, memory: 0, processing: 0>
INFO:distributed.scheduler:Starting worker compute stream, tcp://127.0.0.1:38553
INFO:distributed.core:Starting established connection to tcp://127.0.0.1:60052
INFO:distributed.scheduler:Receive client connection: Client-b2c608d7-9f88-11ee-9209-0242ac1c000c
INFO:distributed.core:Starting established connection to tcp://127.0.0.1:40474
INFO:distributed.core:Event loop was unresponsive in Scheduler for 3.39s.  This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 3.39s.  This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 3.40s.  This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 3.40s.  This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 3.40s.  This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 37.13s.  This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 37.14s.  This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Scheduler for 37.20s.  This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 37.12s.  This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 37.12s.  This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.nanny:Closing Nanny at 'tcp://127.0.0.1:33993'. Reason: nanny-close
INFO:distributed.nanny:Nanny asking worker to close. Reason: nanny-close
INFO:distributed.nanny:Closing Nanny at 'tcp://127.0.0.1:42997'. Reason: nanny-close
INFO:distributed.nanny:Nanny asking worker to close. Reason: nanny-close
INFO:distributed.nanny:Closing Nanny at 'tcp://127.0.0.1:33561'. Reason: nanny-close
INFO:distributed.nanny:Nanny asking worker to close. Reason: nanny-close
INFO:distributed.nanny:Closing Nanny at 'tcp://127.0.0.1:35047'. Reason: nanny-close
INFO:distributed.nanny:Nanny asking worker to close. Reason: nanny-close
INFO:distributed.core:Received 'close-stream' from tcp://127.0.0.1:60040; closing.
INFO:distributed.core:Received 'close-stream' from tcp://127.0.0.1:60068; closing.
INFO:distributed.core:Received 'close-stream' from tcp://127.0.0.1:60052; closing.
INFO:distributed.scheduler:Remove worker <WorkerState 'tcp://127.0.0.1:35483', name: 0, status: closing, memory: 6336, processing: 0> (stimulus_id='handle-worker-cleanup-1703112777.2456512')
INFO:distributed.scheduler:Remove worker <WorkerState 'tcp://127.0.0.1:43365', name: 1, status: closing, memory: 6337, processing: 0> (stimulus_id='handle-worker-cleanup-1703112777.4646034')
INFO:distributed.scheduler:Remove worker <WorkerState 'tcp://127.0.0.1:38553', name: 2, status: closing, memory: 6336, processing: 0> (stimulus_id='handle-worker-cleanup-1703112777.6592765')
INFO:distributed.core:Received 'close-stream' from tcp://127.0.0.1:60076; closing.
INFO:distributed.scheduler:Remove worker <WorkerState 'tcp://127.0.0.1:40517', name: 3, status: closing, memory: 6336, processing: 0> (stimulus_id='handle-worker-cleanup-1703112777.9168477')
INFO:distributed.scheduler:Lost all workers
INFO:distributed.scheduler:Scheduler closing due to unknown reason...
INFO:distributed.scheduler:Scheduler closing all comms

Thanks again for your help!
