
apex uninstall #1612

Open
694344851 opened this issue Mar 14, 2023 · 0 comments

694344851 commented Mar 14, 2023

WARNING: Implying --no-binary=:all: due to the presence of --build-option / --global-option / --install-option. Consider using --config-settings for more flexibility.
DEPRECATION: --no-binary currently disables reading from the cache of locally built wheels. In the future --no-binary will not influence the wheel cache. pip 23.1 will enforce this behaviour change. A possible replacement is to use the --no-cache-dir option. You can use the flag --use-feature=no-binary-enable-wheel-cache to test the upcoming behaviour. Discussion can be found at https://github.com/pypa/pip/issues/11453
Looking in indexes: https://repo.huaweicloud.com/repository/pypi/simple
Processing /root/autodl-tmp/apex
  Running command python setup.py egg_info


  torch.__version__  = 1.7.0


  running egg_info
  creating /tmp/pip-pip-egg-info-lni098z_/apex.egg-info
  writing /tmp/pip-pip-egg-info-lni098z_/apex.egg-info/PKG-INFO
  writing dependency_links to /tmp/pip-pip-egg-info-lni098z_/apex.egg-info/dependency_links.txt
  writing requirements to /tmp/pip-pip-egg-info-lni098z_/apex.egg-info/requires.txt
  writing top-level names to /tmp/pip-pip-egg-info-lni098z_/apex.egg-info/top_level.txt
  writing manifest file '/tmp/pip-pip-egg-info-lni098z_/apex.egg-info/SOURCES.txt'
  reading manifest file '/tmp/pip-pip-egg-info-lni098z_/apex.egg-info/SOURCES.txt'
  adding license file 'LICENSE'
  writing manifest file '/tmp/pip-pip-egg-info-lni098z_/apex.egg-info/SOURCES.txt'
  Preparing metadata (setup.py) ... done
Requirement already satisfied: packaging>20.6 in /root/miniconda3/envs/glm3.8/lib/python3.8/site-packages (from apex==0.1) (23.0)
Installing collected packages: apex
  DEPRECATION: apex is being installed using the legacy 'setup.py install' method, because the '--no-binary' option was enabled for it and this currently disables local wheel building for projects that don't have a 'pyproject.toml' file. pip 23.1 will enforce this behaviour change. A possible replacement is to enable the '--use-pep517' option. Discussion can be found at https://github.com/pypa/pip/issues/11451
  Running command Running setup.py install for apex


  torch.__version__  = 1.7.0



  Compiling cuda extensions with
  nvcc: NVIDIA (R) Cuda compiler driver
  Copyright (c) 2005-2021 NVIDIA Corporation
  Built on Mon_May__3_19:15:13_PDT_2021
  Cuda compilation tools, release 11.3, V11.3.109
  Build cuda_11.3.r11.3/compiler.29920130_0
  from /usr/local/cuda/bin

  Traceback (most recent call last):
    File "<string>", line 2, in <module>
    File "<pip-setuptools-caller>", line 34, in <module>
    File "/root/autodl-tmp/apex/setup.py", line 171, in <module>
      check_cuda_torch_binary_vs_bare_metal(CUDA_HOME)
    File "/root/autodl-tmp/apex/setup.py", line 33, in check_cuda_torch_binary_vs_bare_metal
      raise RuntimeError(
  RuntimeError: Cuda extensions are being compiled with a version of Cuda that does not match the version used to compile Pytorch binaries.  Pytorch binaries were compiled with Cuda 10.2.
  In some cases, a minor-version mismatch will not cause later errors:  https://github.com/NVIDIA/apex/pull/323#discussion_r287021798.  You can try commenting out this check (at your own risk).
  error: subprocess-exited-with-error

  × Running setup.py install for apex did not run successfully.
  │ exit code: 1
  ╰─> See above for output.

  note: This error originates from a subprocess, and is likely not a problem with pip.
  full command: /root/miniconda3/envs/glm3.8/bin/python -u -c '
  exec(compile('"'"''"'"''"'"'
  # This is <pip-setuptools-caller> -- a caller that pip uses to run setup.py
  #
  # - It imports setuptools before invoking setup.py, to enable projects that directly
  #   import from `distutils.core` to work with newer packaging standards.
  # - It provides a clear error message when setuptools is not installed.
  # - It sets `sys.argv[0]` to the underlying `setup.py`, when invoking `setup.py` so
  #   setuptools doesn'"'"'t think the script is `-c`. This avoids the following warning:
  #     manifest_maker: standard file '"'"'-c'"'"' not found".
  # - It generates a shim setup.py, for handling setup.cfg-only projects.
  import os, sys, tokenize

  try:
      import setuptools
  except ImportError as error:
      print(
          "ERROR: Can not execute `setup.py` since setuptools is not available in "
          "the build environment.",
          file=sys.stderr,
      )
      sys.exit(1)

  __file__ = %r
  sys.argv[0] = __file__

  if os.path.exists(__file__):
      filename = __file__
      with tokenize.open(__file__) as f:
          setup_py_code = f.read()
  else:
      filename = "<auto-generated setuptools caller>"
      setup_py_code = "from setuptools import setup; setup()"

  exec(compile(setup_py_code, filename, "exec"))
  '"'"''"'"''"'"' % ('"'"'/root/autodl-tmp/apex/setup.py'"'"',), "<pip-setuptools-caller>", "exec"))' --cpp_ext --cuda_ext install --record /tmp/pip-record-b0rwtz2a/install-record.txt --single-version-externally-managed --compile --install-headers /root/miniconda3/envs/glm3.8/include/python3.8/apex
  cwd: /root/autodl-tmp/apex/
  Running setup.py install for apex ... error
error: legacy-install-failure

× Encountered error while trying to install package.
╰─> apex

note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure

How should the CUDA and PyTorch versions match?
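For context, the check that raised the error above compares the CUDA version PyTorch was compiled against (here 10.2) with the bare-metal `nvcc` version on the PATH (here 11.3), and fails when they differ. A minimal sketch of that kind of check, with the version strings from this log hard-coded for illustration (the helper names `major_minor` and `check_match` are hypothetical, not apex's actual API):

```python
# Sketch of a CUDA-version compatibility check, modeled on the
# RuntimeError in the log above. In a real environment the first
# value would come from torch.version.cuda and the second from
# parsing `nvcc --version` output.

def major_minor(version: str) -> tuple:
    """Return (major, minor) ints from a version string like '11.3'."""
    major, minor = version.split(".")[:2]
    return int(major), int(minor)

def check_match(torch_cuda: str, nvcc_cuda: str) -> None:
    """Raise if the two CUDA versions differ, mirroring apex's check."""
    if major_minor(torch_cuda) != major_minor(nvcc_cuda):
        raise RuntimeError(
            f"CUDA mismatch: PyTorch binaries were compiled with CUDA "
            f"{torch_cuda}, but nvcc on PATH is CUDA {nvcc_cuda}."
        )

# The combination from this log fails:
try:
    check_match("10.2", "11.3")
except RuntimeError as e:
    print(e)

# Matching versions pass silently:
check_match("10.2", "10.2")
```

In practice this means either installing a PyTorch build that targets your local CUDA toolkit (e.g. a cu113 wheel for CUDA 11.3) or installing a CUDA toolkit matching the version PyTorch was built with; as the error message notes, a minor-version mismatch is sometimes harmless and the check can be commented out at your own risk.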
