Module import takes too long #648

Open

bw4sz opened this issue Apr 1, 2024 · 0 comments
Labels
good first issue Good for newcomers

Comments

bw4sz (Collaborator) commented Apr 1, 2024

Describe the bug
import deepforest takes a very long time in some instances, 10+ seconds. Sometimes it feels as if it is related to GPU error checking.

To Reproduce

import deepforest
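
A hedged way to quantify the reproduction (timing fresh interpreters, so the measurement is not skewed by modules already sitting in sys.modules):

# Minimal timing sketch, assuming deepforest is importable in the environment.
# Each run spawns a clean interpreter; comparing the two runs also shows whether
# a warm filesystem cache alone makes the second import faster.
import subprocess, sys, time

for run in (1, 2):
    t0 = time.perf_counter()
    subprocess.run([sys.executable, "-c", "import deepforest"], check=True)
    print(f"run {run}: import deepforest took {time.perf_counter() - t0:.1f} s")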

Environment (please complete the following information):

  • OS: Linux
  • Python version and environment: Python 3.10.13 (conda-forge), per the session below

Screenshots

The naive thing I did was just hit ctrl-C during the import and look at where we were, which is inside of torch/onnx/_internal. What is strange is that if you import torch on its own, that is fast. Clearly we are loading something differently.

(DeepForest) [b.weinstein@c0308a-s25 DeepForest]$ python
Python 3.10.13 | packaged by conda-forge | (main, Oct 26 2023, 18:07:37) [GCC 12.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pandas as pd
>>> import deepforest.main as m
^CTraceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/b.weinstein/DeepForest/deepforest/main.py", line 10, in <module>
    import pytorch_lightning as pl
  File "/orange/ewhite/b.weinstein/miniconda3/envs/DeepForest/lib/python3.10/site-packages/pytorch_lightning/__init__.py", line 27, in <module>
    from pytorch_lightning.callbacks import Callback  # noqa: E402
  File "/orange/ewhite/b.weinstein/miniconda3/envs/DeepForest/lib/python3.10/site-packages/pytorch_lightning/callbacks/__init__.py", line 14, in <module>
    from pytorch_lightning.callbacks.batch_size_finder import BatchSizeFinder
  File "/orange/ewhite/b.weinstein/miniconda3/envs/DeepForest/lib/python3.10/site-packages/pytorch_lightning/callbacks/batch_size_finder.py", line 24, in <module>
    from pytorch_lightning.callbacks.callback import Callback
  File "/orange/ewhite/b.weinstein/miniconda3/envs/DeepForest/lib/python3.10/site-packages/pytorch_lightning/callbacks/callback.py", line 22, in <module>
    from pytorch_lightning.utilities.types import STEP_OUTPUT
  File "/orange/ewhite/b.weinstein/miniconda3/envs/DeepForest/lib/python3.10/site-packages/pytorch_lightning/utilities/types.py", line 40, in <module>
    from torchmetrics import Metric
  File "/orange/ewhite/b.weinstein/miniconda3/envs/DeepForest/lib/python3.10/site-packages/torchmetrics/__init__.py", line 14, in <module>
    from torchmetrics import functional  # noqa: E402
  File "/orange/ewhite/b.weinstein/miniconda3/envs/DeepForest/lib/python3.10/site-packages/torchmetrics/functional/__init__.py", line 14, in <module>
    from torchmetrics.functional.audio._deprecated import _permutation_invariant_training as permutation_invariant_training
  File "/orange/ewhite/b.weinstein/miniconda3/envs/DeepForest/lib/python3.10/site-packages/torchmetrics/functional/audio/__init__.py", line 14, in <module>
    from torchmetrics.functional.audio.pit import permutation_invariant_training, pit_permutate
  File "/orange/ewhite/b.weinstein/miniconda3/envs/DeepForest/lib/python3.10/site-packages/torchmetrics/functional/audio/pit.py", line 22, in <module>
    from torchmetrics.utilities import rank_zero_warn
  File "/orange/ewhite/b.weinstein/miniconda3/envs/DeepForest/lib/python3.10/site-packages/torchmetrics/utilities/__init__.py", line 14, in <module>
    from torchmetrics.utilities.checks import check_forward_full_state_property
  File "/orange/ewhite/b.weinstein/miniconda3/envs/DeepForest/lib/python3.10/site-packages/torchmetrics/utilities/checks.py", line 25, in <module>
    from torchmetrics.metric import Metric
  File "/orange/ewhite/b.weinstein/miniconda3/envs/DeepForest/lib/python3.10/site-packages/torchmetrics/metric.py", line 30, in <module>
    from torchmetrics.utilities.data import (
  File "/orange/ewhite/b.weinstein/miniconda3/envs/DeepForest/lib/python3.10/site-packages/torchmetrics/utilities/data.py", line 22, in <module>
    from torchmetrics.utilities.imports import _TORCH_GREATER_EQUAL_1_12, _XLA_AVAILABLE
  File "/orange/ewhite/b.weinstein/miniconda3/envs/DeepForest/lib/python3.10/site-packages/torchmetrics/utilities/imports.py", line 41, in <module>
    _TORCHVISION_GREATER_EQUAL_0_8: Optional[bool] = compare_version("torchvision", operator.ge, "0.8.0")
  File "/orange/ewhite/b.weinstein/miniconda3/envs/DeepForest/lib/python3.10/site-packages/lightning_utilities/core/imports.py", line 73, in compare_version
    pkg = importlib.import_module(package)
  File "/orange/ewhite/b.weinstein/miniconda3/envs/DeepForest/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "/orange/ewhite/b.weinstein/miniconda3/envs/DeepForest/lib/python3.10/site-packages/torchvision/__init__.py", line 6, in <module>
    from torchvision import _meta_registrations, datasets, io, models, ops, transforms, utils
  File "/orange/ewhite/b.weinstein/miniconda3/envs/DeepForest/lib/python3.10/site-packages/torchvision/models/__init__.py", line 2, in <module>
    from .convnext import *
  File "/orange/ewhite/b.weinstein/miniconda3/envs/DeepForest/lib/python3.10/site-packages/torchvision/models/convnext.py", line 8, in <module>
    from ..ops.misc import Conv2dNormActivation, Permute
  File "/orange/ewhite/b.weinstein/miniconda3/envs/DeepForest/lib/python3.10/site-packages/torchvision/ops/__init__.py", line 1, in <module>
    from ._register_onnx_ops import _register_custom_op
  File "/orange/ewhite/b.weinstein/miniconda3/envs/DeepForest/lib/python3.10/site-packages/torchvision/ops/_register_onnx_ops.py", line 5, in <module>
    from torch.onnx import symbolic_opset11 as opset11
  File "/orange/ewhite/b.weinstein/miniconda3/envs/DeepForest/lib/python3.10/site-packages/torch/onnx/__init__.py", line 10, in <module>
    from . import (  # usort:skip. Keep the order instead of sorting lexicographically
  File "/orange/ewhite/b.weinstein/miniconda3/envs/DeepForest/lib/python3.10/site-packages/torch/onnx/errors.py", line 9, in <module>
    from torch.onnx._internal import diagnostics
  File "/orange/ewhite/b.weinstein/miniconda3/envs/DeepForest/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/__init__.py", line 1, in <module>
    from ._diagnostic import (
  File "/orange/ewhite/b.weinstein/miniconda3/envs/DeepForest/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/_diagnostic.py", line 11, in <module>
    from torch.onnx._internal.diagnostics import infra
  File "/orange/ewhite/b.weinstein/miniconda3/envs/DeepForest/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/__init__.py", line 1, in <module>
    from ._infra import (
  File "/orange/ewhite/b.weinstein/miniconda3/envs/DeepForest/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/_infra.py", line 10, in <module>
    from torch.onnx._internal.diagnostics.infra import formatter, sarif
  File "/orange/ewhite/b.weinstein/miniconda3/envs/DeepForest/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/formatter.py", line 11, in <module>
    from torch.onnx._internal.diagnostics.infra import sarif
  File "/orange/ewhite/b.weinstein/miniconda3/envs/DeepForest/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/sarif/__init__.py", line 5, in <module>
    from torch.onnx._internal.diagnostics.infra.sarif._artifact import Artifact
  File "/orange/ewhite/b.weinstein/miniconda3/envs/DeepForest/lib/python3.10/site-packages/torch/onnx/_internal/diagnostics/infra/sarif/_artifact.py", line 9, in <module>
    from torch.onnx._internal.diagnostics.infra.sarif import (
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 879, in exec_module
  File "<frozen importlib._bootstrap_external>", line 975, in get_code
  File "<frozen importlib._bootstrap_external>", line 1074, in get_data
KeyboardInterrupt
>>> import torch
>>> 
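
A lighter-weight way to see where the time actually goes, rather than interrupting the import, is CPython's built-in import profiler (python -X importtime, available since 3.7). A minimal sketch, assuming deepforest is importable in the environment; if the torch.onnx / torchvision chain in the traceback above really is the bottleneck, it should surface near the top:

# Sketch: run a fresh interpreter with -X importtime and print the modules with
# the largest cumulative import time. The flag writes one
# "import time: self | cumulative | module" line per module to stderr.
import subprocess, sys

result = subprocess.run(
    [sys.executable, "-X", "importtime", "-c", "import deepforest"],
    capture_output=True, text=True, check=True,
)
rows = []
for line in result.stderr.splitlines():
    if line.startswith("import time:") and "cumulative" not in line:
        _self_us, cum_us, name = line[len("import time:"):].split("|")
        rows.append((int(cum_us), name.strip()))
for cum_us, name in sorted(rows, reverse=True)[:20]:
    print(f"{cum_us / 1e6:8.3f} s  {name}")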

Additional context
I was certain we already had an issue for this, and I went to paste in this recent example but did not see one. Close if one already exists.

May be related: pytorch/pytorch#44269

Next steps

  • Profile the import in both CPU and GPU contexts
  • Determine whether caching plays a role, for example, is the import faster the second time it is run? (the timing sketch under "To Reproduce" runs it twice for this reason)
  • Identify the torch bottlenecks
  • Remedy the issue; the hang is clearly somewhere in the torch.onnx diagnostics imports (see the lazy-import sketch below)
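
As one possible remedy, sketched here only as an illustration and not as a decided approach: the heavy pytorch_lightning import (which is what drags in torchmetrics, torchvision, and the torch.onnx diagnostics machinery above) could be deferred until it is first needed, so that import deepforest itself stays cheap. The helper names below are hypothetical.

# Hypothetical lazy-import sketch, not an existing deepforest API.
import importlib

_pl = None

def _lightning():
    """Import pytorch_lightning the first time it is needed, not at module import time."""
    global _pl
    if _pl is None:
        _pl = importlib.import_module("pytorch_lightning")
    return _pl

def create_trainer(**kwargs):
    # Hypothetical helper; the expensive import only happens here.
    return _lightning().Trainer(**kwargs)
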
bw4sz added the good first issue label on Apr 1, 2024