diff --git a/Acknowledgements.txt b/Acknowledgements.txt index d7ce384347c..5ab2b5ce7a8 100644 --- a/Acknowledgements.txt +++ b/Acknowledgements.txt @@ -4285,4 +4285,128 @@ JAX distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and - limitations under the License. \ No newline at end of file + limitations under the License. + +============================================================================== +Python + +A. HISTORY OF THE SOFTWARE +========================== + +Python was created in the early 1990s by Guido van Rossum at Stichting +Mathematisch Centrum (CWI, see https://www.cwi.nl) in the Netherlands +as a successor of a language called ABC. Guido remains Python's +principal author, although it includes many contributions from others. + +In 1995, Guido continued his work on Python at the Corporation for +National Research Initiatives (CNRI, see https://www.cnri.reston.va.us) +in Reston, Virginia where he released several versions of the +software. + +In May 2000, Guido and the Python core development team moved to +BeOpen.com to form the BeOpen PythonLabs team. In October of the same +year, the PythonLabs team moved to Digital Creations, which became +Zope Corporation. In 2001, the Python Software Foundation (PSF, see +https://www.python.org/psf/) was formed, a non-profit organization +created specifically to own Python-related Intellectual Property. +Zope Corporation was a sponsoring member of the PSF. + +All Python releases are Open Source (see https://opensource.org for +the Open Source Definition). Historically, most, but not all, Python +releases have also been GPL-compatible; the table below summarizes +the various releases. + + Release Derived Year Owner GPL- + from compatible? 
(1) + + 0.9.0 thru 1.2 1991-1995 CWI yes + 1.3 thru 1.5.2 1.2 1995-1999 CNRI yes + 1.6 1.5.2 2000 CNRI no + 2.0 1.6 2000 BeOpen.com no + 1.6.1 1.6 2001 CNRI yes (2) + 2.1 2.0+1.6.1 2001 PSF no + 2.0.1 2.0+1.6.1 2001 PSF yes + 2.1.1 2.1+2.0.1 2001 PSF yes + 2.1.2 2.1.1 2002 PSF yes + 2.1.3 2.1.2 2002 PSF yes + 2.2 and above 2.1.1 2001-now PSF yes + +Footnotes: + +(1) GPL-compatible doesn't mean that we're distributing Python under + the GPL. All Python licenses, unlike the GPL, let you distribute + a modified version without making your changes open source. The + GPL-compatible licenses make it possible to combine Python with + other software that is released under the GPL; the others don't. + +(2) According to Richard Stallman, 1.6.1 is not GPL-compatible, + because its license has a choice of law clause. According to + CNRI, however, Stallman's lawyer has told CNRI's lawyer that 1.6.1 + is "not incompatible" with the GPL. + +Thanks to the many outside volunteers who have worked under Guido's +direction to make these releases possible. + + +B. TERMS AND CONDITIONS FOR ACCESSING OR OTHERWISE USING PYTHON +=============================================================== + +Python software and documentation are licensed under the +Python Software Foundation License Version 2. + +Starting with Python 3.8.6, examples, recipes, and other code in +the documentation are dual licensed under the PSF License Version 2 +and the Zero-Clause BSD license. + +Some software incorporated into Python is under different licenses. +The licenses are listed with code falling under that license. + + +PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2 +-------------------------------------------- + +1. This LICENSE AGREEMENT is between the Python Software Foundation +("PSF"), and the Individual or Organization ("Licensee") accessing and +otherwise using this software ("Python") in source or binary form and +its associated documentation. + +2. 
Subject to the terms and conditions of this License Agreement, PSF hereby +grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce, +analyze, test, perform and/or display publicly, prepare derivative works, +distribute, and otherwise use Python alone or in any derivative version, +provided, however, that PSF's License Agreement and PSF's notice of copyright, +i.e., "Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, +2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020, 2021, 2022, 2023 Python Software Foundation; +All Rights Reserved" are retained in Python alone or in any derivative version +prepared by Licensee. + +3. In the event Licensee prepares a derivative work that is based on +or incorporates Python or any part thereof, and wants to make +the derivative work available to others as provided herein, then +Licensee hereby agrees to include in any such work a brief summary of +the changes made to Python. + +4. PSF is making Python available to Licensee on an "AS IS" +basis. PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR +IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND +DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS +FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON WILL NOT +INFRINGE ANY THIRD PARTY RIGHTS. + +5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON +FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS +A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON, +OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF. + +6. This License Agreement will automatically terminate upon a material +breach of its terms and conditions. + +7. Nothing in this License Agreement shall be deemed to create any +relationship of agency, partnership, or joint venture between PSF and +Licensee. 
This License Agreement does not grant permission to use PSF +trademarks or trade name in a trademark sense to endorse or promote +products or services of Licensee, or any third party. + +8. By copying, installing or otherwise using Python, Licensee +agrees to be bound by the terms and conditions of this License +Agreement. diff --git a/dali/python/nvidia/dali/_autograph/core/converter_testing.py b/dali/python/nvidia/dali/_autograph/core/converter_testing.py index 9e3421f3d1a..bf7b4118dac 100644 --- a/dali/python/nvidia/dali/_autograph/core/converter_testing.py +++ b/dali/python/nvidia/dali/_autograph/core/converter_testing.py @@ -15,7 +15,7 @@ """Base class for tests in this module.""" import contextlib -import imp +import types import inspect import sys @@ -32,7 +32,7 @@ def allowlist(f): """Helper that marks a callable as allowlisted.""" if "allowlisted_module_for_testing" not in sys.modules: - allowlisted_mod = imp.new_module("allowlisted_module_for_testing") + allowlisted_mod = types.ModuleType("allowlisted_module_for_testing") sys.modules["allowlisted_module_for_testing"] = allowlisted_mod config.CONVERSION_RULES = ( config.DoNotConvert("allowlisted_module_for_testing"), @@ -76,7 +76,7 @@ def __init__(self, converters, ag_overrides, operator_overload=hooks.OperatorBas def get_extra_locals(self): retval = super(TestingTranspiler, self).get_extra_locals() if self._ag_overrides: - modified_ag = imp.new_module("fake_autograph") + modified_ag = types.ModuleType("fake_autograph") modified_ag.__dict__.update(retval["ag__"].__dict__) modified_ag.__dict__.update(self._ag_overrides) retval["ag__"] = modified_ag diff --git a/dali/python/nvidia/dali/backend.py b/dali/python/nvidia/dali/backend.py index f8238298a4d..41e40ec94a4 100644 --- a/dali/python/nvidia/dali/backend.py +++ b/dali/python/nvidia/dali/backend.py @@ -41,14 +41,6 @@ def deprecation_warning(what): Init(OpSpec("CPUAllocator"), OpSpec("PinnedCPUAllocator"), OpSpec("GPUAllocator")) initialized = True - # 
py3.12 warning - if sys.version_info[0] == 3 and sys.version_info[1] >= 12: - deprecation_warning( - "DALI support for Python {0}.{1} is experimental and some " - "functionalities may not work." - "".format(sys.version_info[0], sys.version_info[1]) - ) - # py3.6 warning if sys.version_info[0] == 3 and sys.version_info[1] == 6: deprecation_warning( diff --git a/dali/python/nvidia/dali/reducers.py b/dali/python/nvidia/dali/reducers.py index a9561fbd388..db195a38bc9 100644 --- a/dali/python/nvidia/dali/reducers.py +++ b/dali/python/nvidia/dali/reducers.py @@ -20,8 +20,10 @@ import importlib -def dummy_lambda(): - pass +# Don't allow any reformatters turning it into regular `def` function. +# The whole point of this object is to have +# properties (name) specific to lambda. +dummy_lambda = lambda: 0 # noqa: E731 # unfortunately inspect.getclosurevars does not yield global names referenced by @@ -121,9 +123,13 @@ def reducer_override(self, obj): try: pickle.dumps(obj) except AttributeError as e: - if "Can't pickle local object" in str(e): + str_e = str(e) + # For Python <3.12.5 and 3.12.5 respectively. + if "Can't pickle local object" in str_e or "Can't get local object" in str_e: return function_by_value_reducer(obj) except pickle.PicklingError as e: - if "it's not the same object as" in str(e): + str_e = str(e) + # For jupyter notebook issues and Python 3.12.5+ respectively + if "it's not the same object as" in str_e or "Can't pickle local object" in str_e: return function_by_value_reducer(obj) return NotImplemented diff --git a/dali/python/setup.py.in b/dali/python/setup.py.in index 1f5035f928e..5014584b599 100644 --- a/dali/python/setup.py.in +++ b/dali/python/setup.py.in @@ -83,6 +83,9 @@ For more details please check the # Currently supported range of versions. 
'astunparse >= 1.6.0', 'gast >= 0.3.3', + # the latest astunparse (1.6.3) doesn't work with any other six than + # 1.16 on python 3.12 due to import six.moves + 'six >= 1.16', 'dm-tree', @DALI_INSTALL_REQUIRES_NVIMGCODEC@ ], diff --git a/dali/test/python/autograph/converters/test_call_trees.py b/dali/test/python/autograph/converters/test_call_trees.py index cbfd0b6efd5..b03c9c0a38a 100644 --- a/dali/test/python/autograph/converters/test_call_trees.py +++ b/dali/test/python/autograph/converters/test_call_trees.py @@ -14,7 +14,7 @@ # ============================================================================== """Tests for call_trees module.""" -import imp +import types from nvidia.dali._autograph.converters import call_trees from nvidia.dali._autograph.converters import functions @@ -193,7 +193,7 @@ def f(h, g, a, *args): def test_debugger_set_trace(self): tracking_list = [] - pdb = imp.new_module("fake_pdb") + pdb = types.ModuleType("fake_pdb") pdb.set_trace = lambda: tracking_list.append(1) def f(): diff --git a/dali/test/python/autograph/core/test_converter.py b/dali/test/python/autograph/core/test_converter.py index 0292f334ecc..01b4efe469c 100644 --- a/dali/test/python/autograph/core/test_converter.py +++ b/dali/test/python/autograph/core/test_converter.py @@ -14,7 +14,7 @@ # ============================================================================== """Tests for converter module.""" -import imp +import types from nvidia.dali._autograph.core import converter from nvidia.dali._autograph.core import converter_testing @@ -40,7 +40,7 @@ def f(): opts_packed = templates.replace(template, opts_ast=opts_ast) reparsed, _, _ = loader.load_ast(opts_packed) - fake_ag = imp.new_module("fake_ag") + fake_ag = types.ModuleType("fake_ag") fake_ag.ConversionOptions = converter.ConversionOptions fake_ag.Feature = converter.Feature reparsed.ag__ = fake_ag diff --git a/dali/test/python/autograph/impl/test_api.py b/dali/test/python/autograph/impl/test_api.py index 
56246c8eb20..3ef74ac65fe 100644 --- a/dali/test/python/autograph/impl/test_api.py +++ b/dali/test/python/autograph/impl/test_api.py @@ -19,7 +19,6 @@ import contextlib import functools import gc -import imp import inspect import os import re @@ -593,7 +592,7 @@ def test_converted_call_tf_op_forced(self): self.assertAllEqual(self.evaluate(x), 2) def test_converted_call_exec_generated_code(self): - temp_mod = imp.new_module("test_module") + temp_mod = types.ModuleType("test_module") dynamic_code = """ def foo(x): return x + 1 diff --git a/dali/test/python/autograph/impl/test_conversion.py b/dali/test/python/autograph/impl/test_conversion.py index 44b952061ac..16b877dfef8 100644 --- a/dali/test/python/autograph/impl/test_conversion.py +++ b/dali/test/python/autograph/impl/test_conversion.py @@ -14,7 +14,7 @@ # ============================================================================== """Tests for conversion module.""" -import imp +import types import sys import unittest @@ -40,7 +40,7 @@ def test_fn(): self.assertFalse(conversion.is_allowlisted(test_fn)) def test_is_allowlisted_callable_allowlisted_call(self): - allowlisted_mod = imp.new_module("test_allowlisted_call") + allowlisted_mod = types.ModuleType("test_allowlisted_call") sys.modules["test_allowlisted_call"] = allowlisted_mod config.CONVERSION_RULES = ( config.DoNotConvert("test_allowlisted_call"), diff --git a/dali/test/python/autograph/pyct/test_inspect_utils.py b/dali/test/python/autograph/pyct/test_inspect_utils.py index e5ecac6915e..3f4b8351ac8 100644 --- a/dali/test/python/autograph/pyct/test_inspect_utils.py +++ b/dali/test/python/autograph/pyct/test_inspect_utils.py @@ -17,7 +17,7 @@ import abc import collections import functools -import imp +import types import textwrap import unittest @@ -292,8 +292,8 @@ def local_fn(): def test_getqualifiedname(self): foo = object() - qux = imp.new_module("quxmodule") - bar = imp.new_module("barmodule") + qux = types.ModuleType("quxmodule") + bar = 
types.ModuleType("barmodule") baz = object() bar.baz = baz @@ -321,7 +321,7 @@ def test_getqualifiedname_efficiency(self): current_level = [] for j in range(10): mod_name = "mod_{}_{}".format(i, j) - mod = imp.new_module(mod_name) + mod = types.ModuleType(mod_name) current_level.append(mod) if i == 9 and j == 9: mod.foo = foo @@ -347,7 +347,7 @@ def test_getqualifiedname_cycles(self): ns = {} mods = [] for i in range(10): - mod = imp.new_module("mod_{}".format(i)) + mod = types.ModuleType("mod_{}".format(i)) if i == 9: mod.foo = foo # Module i refers to module i+1 diff --git a/dali/test/python/autograph/pyct/test_loader.py b/dali/test/python/autograph/pyct/test_loader.py index a45529ab5de..088e966fa3c 100644 --- a/dali/test/python/autograph/pyct/test_loader.py +++ b/dali/test/python/autograph/pyct/test_loader.py @@ -21,8 +21,10 @@ import unittest import gast +from distutils.version import LooseVersion from nvidia.dali._autograph.pyct import ast_util +from nvidia.dali._autograph.pyct import gast_util from nvidia.dali._autograph.pyct import loader from nvidia.dali._autograph.pyct import parser from nvidia.dali._autograph.pyct import pretty_printer @@ -77,6 +79,7 @@ def test_load_ast(self): decorator_list=[], returns=None, type_comment=None, + **{"type_params": []} if gast_util.get_gast_version() >= LooseVersion("0.5.5") else {}, ) module, source, _ = loader.load_ast(node) diff --git a/dali/test/python/autograph/pyct/test_templates.py b/dali/test/python/autograph/pyct/test_templates.py index 70897a4bef5..2dca416c70b 100644 --- a/dali/test/python/autograph/pyct/test_templates.py +++ b/dali/test/python/autograph/pyct/test_templates.py @@ -14,7 +14,7 @@ # ============================================================================== """Tests for templates module.""" -import imp +import types import unittest import gast @@ -132,7 +132,7 @@ def test_fn(a): node = templates.replace(template, foo="b")[0] result, _, _ = loader.load_ast(node) - mod = imp.new_module("test") + 
mod = types.ModuleType("test") mod.b = 3 self.assertEqual(3, result.test_fn(mod)) diff --git a/dali/test/python/checkpointing/test_dali_checkpointing.py b/dali/test/python/checkpointing/test_dali_checkpointing.py index 5804db1fa6e..925236fb828 100644 --- a/dali/test/python/checkpointing/test_dali_checkpointing.py +++ b/dali/test/python/checkpointing/test_dali_checkpointing.py @@ -27,7 +27,7 @@ get_dali_extra_path, module_functions, ) -from nose_utils import assert_warns +from nose_utils import assert_warns, assert_raises from nose2.tools import params, cartesian_params from dataclasses import dataclass from nvidia.dali import tfrecord as tfrec @@ -36,7 +36,6 @@ from nvidia.dali.auto_aug import trivial_augment as ta from reader.test_numpy import is_gds_supported from nose.plugins.attrib import attr -from nose_utils import assert_raises reader_signed_off = create_sign_off_decorator() diff --git a/dali/test/python/checkpointing/test_dali_stateless_operators.py b/dali/test/python/checkpointing/test_dali_stateless_operators.py index de16f9bc25a..e42d3756e1b 100644 --- a/dali/test/python/checkpointing/test_dali_stateless_operators.py +++ b/dali/test/python/checkpointing/test_dali_stateless_operators.py @@ -14,7 +14,6 @@ import os import glob -import nose import numpy as np import itertools import nvidia.dali as dali @@ -28,7 +27,7 @@ restrict_platform, ) from nose2.tools import params, cartesian_params -from nose_utils import assert_raises +from nose_utils import assert_raises, SkipTest from nose.plugins.attrib import attr # Test configuration @@ -584,7 +583,7 @@ def test_optical_flow_stateless(): from test_optical_flow import is_of_supported if not is_of_supported(): - raise nose.SkipTest("Optical Flow is not supported on this platform") + raise SkipTest("Optical Flow is not supported on this platform") check_single_sequence_input(fn.optical_flow, "gpu") diff --git a/dali/test/python/decoder/test_jpeg_scan_limit.py b/dali/test/python/decoder/test_jpeg_scan_limit.py 
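[Note on the hunks above] The recurring change here is mechanical: the long-deprecated `imp` module was removed in Python 3.12, and `types.ModuleType` is the documented replacement for `imp.new_module` — both return a fresh, empty module object with the given name. A minimal sketch of the pattern these tests rely on (the module name `fake_pdb` mirrors the test above; the rest is illustrative):

```python
import sys
import types

# types.ModuleType("name") is equivalent to the removed imp.new_module("name"):
# it returns an empty module object that is not yet registered anywhere.
mod = types.ModuleType("fake_pdb")
mod.set_trace = lambda: None  # attach attributes like on any module

# Registering it in sys.modules makes it importable by name,
# exactly as the allowlist tests above do.
sys.modules["fake_pdb"] = mod

import fake_pdb  # resolved straight from sys.modules, no finder involved

assert fake_pdb is mod
assert fake_pdb.__name__ == "fake_pdb"
```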
index 6f0a9b07c5f..2776a18515f 100644 --- a/dali/test/python/decoder/test_jpeg_scan_limit.py +++ b/dali/test/python/decoder/test_jpeg_scan_limit.py @@ -18,9 +18,9 @@ import unittest from nvidia.dali import pipeline_def +from nose_utils import assert_raises from nose.plugins.attrib import attr from nose2.tools import cartesian_params -from nose_utils import assert_raises class ProgressiveJpeg(unittest.TestCase): diff --git a/dali/test/python/nose_utils.py b/dali/test/python/nose_utils.py index 4db818cae60..3cbb1574153 100644 --- a/dali/test/python/nose_utils.py +++ b/dali/test/python/nose_utils.py @@ -13,11 +13,94 @@ # limitations under the License. import sys import collections + +if sys.version_info >= (3, 12): + # to make sure we can import anything from nose + from importlib import machinery, util + from importlib._bootstrap import _exec, _load + import modulefinder + import types + import unittest + + # the below are based on https://github.com/python/cpython/blob/3.11/Lib/imp.py + # based on PSF license + def find_module(name, path): + return modulefinder.ModuleFinder(path).find_module(name, path) + + def load_module(name, file, filename, details): + PY_SOURCE = 1 + PY_COMPILED = 2 + + class _HackedGetData: + """Compatibility support for 'file' arguments of various load_*() + functions.""" + + def __init__(self, fullname, path, file=None): + super().__init__(fullname, path) + self.file = file + + def get_data(self, path): + """Gross hack to contort loader to deal w/ load_*()'s bad API.""" + if self.file and path == self.path: + # The contract of get_data() requires us to return bytes. Reopen the + # file in binary mode if needed. 
+ file = None + if not self.file.closed: + file = self.file + if "b" not in file.mode: + file.close() + if self.file.closed: + self.file = file = open(self.path, "rb") + + with file: + return file.read() + else: + return super().get_data(path) + + class _LoadSourceCompatibility(_HackedGetData, machinery.SourceFileLoader): + """Compatibility support for implementing load_source().""" + + _, mode, type_ = details + if mode and (not mode.startswith("r") or "+" in mode): + raise ValueError("invalid file open mode {!r}".format(mode)) + elif file is None and type_ in {PY_SOURCE, PY_COMPILED}: + msg = "file object required for import (type code {})".format(type_) + raise ValueError(msg) + assert type_ == PY_SOURCE, "load_module replacement supports only PY_SOURCE file type" + loader = _LoadSourceCompatibility(name, filename, file) + spec = util.spec_from_file_location(name, filename, loader=loader) + if name in sys.modules: + module = _exec(spec, sys.modules[name]) + else: + module = _load(spec) + # To allow reloading to potentially work, use a non-hacked loader which + # won't rely on a now-closed file object. 
+ module.__loader__ = machinery.SourceFileLoader(name, filename) + module.__spec__.loader = module.__loader__ + return module + + def acquire_lock(): + pass + + def release_lock(): + pass + + context = { + "find_module": find_module, + "load_module": load_module, + "acquire_lock": acquire_lock, + "release_lock": release_lock, + } + imp_module = types.ModuleType("imp", "Mimics old imp module") + imp_module.__dict__.update(context) + sys.modules["imp"] = imp_module + unittest._TextTestResult = unittest.TextTestResult import nose.case import nose.inspector import nose.loader import nose.suite import nose.plugins.attrib +from nose import SkipTest, with_setup # noqa: F401 if sys.version_info >= (3, 10) and not hasattr(collections, "Callable"): nose.case.collections = collections.abc @@ -29,6 +112,17 @@ import nose.tools as tools import re import fnmatch +import unittest + + +class empty_case(unittest.TestCase): + def nop(): + pass + + +def assert_equals(x, y): + foo = empty_case() + foo.assertEqual(x, y) def glob_to_regex(glob): diff --git a/dali/test/python/operator_1/test_arithmetic_ops.py b/dali/test/python/operator_1/test_arithmetic_ops.py index 55a99fb8786..57373eaff98 100644 --- a/dali/test/python/operator_1/test_arithmetic_ops.py +++ b/dali/test/python/operator_1/test_arithmetic_ops.py @@ -19,13 +19,12 @@ import nvidia.dali.math as math from nvidia.dali.tensors import TensorListGPU import numpy as np -from nose.tools import assert_equals from nose.plugins.attrib import attr from nose2.tools import params import itertools from test_utils import np_type_to_dali -from nose_utils import raises, assert_raises +from nose_utils import raises, assert_raises, assert_equals def list_product(*args): diff --git a/dali/test/python/operator_1/test_audio_resample.py b/dali/test/python/operator_1/test_audio_resample.py index 62fd68c87f4..9239e8ad708 100644 --- a/dali/test/python/operator_1/test_audio_resample.py +++ b/dali/test/python/operator_1/test_audio_resample.py @@ -96,7 
+96,7 @@ def test_pipe(device): print(out_arr.dtype, out_arr.shape) print("Reference: ", ref) print(ref.dtype, ref.shape) - print("Diff: ", out_arr.astype(np.float) - ref) + print("Diff: ", out_arr.astype(float) - ref) assert np.allclose(out_arr, ref, 1e-6, eps) diff --git a/dali/test/python/operator_1/test_normalize.py b/dali/test/python/operator_1/test_normalize.py index 461e6934112..13db1283560 100644 --- a/dali/test/python/operator_1/test_normalize.py +++ b/dali/test/python/operator_1/test_normalize.py @@ -34,7 +34,7 @@ def normalize(x, axes=None, mean=None, stddev=None, ddof=0, eps=0): if stddev is None: factor = num_reduced - ddof - sqr = (x - mean).astype(np.float) ** 2 + sqr = (x - mean).astype(float) ** 2 var = np.sum(sqr, axis=axes, keepdims=True) if factor > 0: var /= factor @@ -198,7 +198,7 @@ def normalize_list(whole_batch, data_batch, axes=None, mean=None, stddev=None, d if type(stddev) is not list: stddev = [stddev] * len(data_batch) return [ - normalize(data_batch[i].astype(np.float), axes, mean[i], stddev[i], ddof, eps) + normalize(data_batch[i].astype(float), axes, mean[i], stddev[i], ddof, eps) for i in range(len(data_batch)) ] diff --git a/dali/test/python/operator_1/test_numba_func.py b/dali/test/python/operator_1/test_numba_func.py index 8e97b2db262..ea966b3da18 100644 --- a/dali/test/python/operator_1/test_numba_func.py +++ b/dali/test/python/operator_1/test_numba_func.py @@ -19,7 +19,7 @@ import nvidia.dali as dali import nvidia.dali.fn as fn import nvidia.dali.types as dali_types -from nose import with_setup +from nose_utils import with_setup from test_utils import ( get_dali_extra_path, to_array, diff --git a/dali/test/python/operator_1/test_ops_expression_internals.py b/dali/test/python/operator_1/test_ops_expression_internals.py index 7c01246b1a7..38c68471ca6 100644 --- a/dali/test/python/operator_1/test_ops_expression_internals.py +++ b/dali/test/python/operator_1/test_ops_expression_internals.py @@ -14,9 +14,7 @@ import nvidia.dali.ops 
as ops from nvidia.dali.types import Constant -import numpy as np -from nose.tools import assert_equal -from nose_utils import assert_raises +from nose_utils import assert_equals, assert_raises def test_group_inputs(): @@ -24,30 +22,30 @@ def test_group_inputs(): e1 = ops._DataNode("op1", "cpu") inputs = [e0, e1, 10.0, Constant(0).uint8(), 42] cat_idx, edges, integers, reals = ops._group_inputs(inputs) - assert_equal([("edge", 0), ("edge", 1), ("real", 0), ("integer", 0), ("integer", 1)], cat_idx) - assert_equal([e0, e1], edges) - assert_equal([Constant(0).uint8(), 42], integers) - assert_equal([10.0], reals) + assert_equals([("edge", 0), ("edge", 1), ("real", 0), ("integer", 0), ("integer", 1)], cat_idx) + assert_equals([e0, e1], edges) + assert_equals([Constant(0).uint8(), 42], integers) + assert_equals([10.0], reals) assert_raises( TypeError, ops._group_inputs, - [np.complex()], + [complex()], glob="Expected scalar value of type 'bool', 'int' or 'float', got *.", ) _, _, _, none_reals = ops._group_inputs([e0, 10]) - assert_equal(None, none_reals) + assert_equals(None, none_reals) def test_generate_input_desc(): desc0 = ops._generate_input_desc([("edge", 0)], [], []) desc1 = ops._generate_input_desc([("edge", 0), ("edge", 1), ("edge", 2)], [], []) - assert_equal("&0", desc0) - assert_equal("&0 &1 &2", desc1) + assert_equals("&0", desc0) + assert_equals("&0 &1 &2", desc1) desc2 = ops._generate_input_desc( [("integer", 1), ("integer", 0), ("edge", 0)], [Constant(42).uint8(), 42], [] ) - assert_equal("$1:int32 $0:uint8 &0", desc2) + assert_equals("$1:int32 $0:uint8 &0", desc2) c = Constant(42) desc3 = ops._generate_input_desc( @@ -75,7 +73,7 @@ def test_generate_input_desc(): ], [], ) - assert_equal( + assert_equals( "$0:int32 $1:uint8 $2:uint16 $3:uint32 $4:uint64 $5:int8 $6:int16 $7:int32 $8:int64", desc3 ) @@ -84,4 +82,4 @@ def test_generate_input_desc(): [], [float(), c.float16(), c.float32(), c.float64()], ) - assert_equal("$0:float32 $1:float16 $2:float32 
$3:float64", desc4) + assert_equals("$0:float32 $1:float16 $2:float32 $3:float64", desc4) diff --git a/dali/test/python/operator_2/test_segmentation_select_masks.py b/dali/test/python/operator_2/test_segmentation_select_masks.py index 2707a68be74..8cecf9b7a4d 100644 --- a/dali/test/python/operator_2/test_segmentation_select_masks.py +++ b/dali/test/python/operator_2/test_segmentation_select_masks.py @@ -93,7 +93,7 @@ def test_select_masks(): nvertices_range = (3, 40) for batch_size in [1, 3]: for vertex_ndim in [2, 3, 6]: - for vertex_dtype in [np.float, random.choice([np.int8, np.int16, np.int32, np.int64])]: + for vertex_dtype in [float, random.choice([np.int8, np.int16, np.int32, np.int64])]: reindex_masks = random.choice([False, True]) yield ( check_select_masks, diff --git a/dali/test/python/reader/test_numpy.py b/dali/test/python/reader/test_numpy.py index aca56269197..a32380c3102 100644 --- a/dali/test/python/reader/test_numpy.py +++ b/dali/test/python/reader/test_numpy.py @@ -150,7 +150,7 @@ def NumpyReaderPipeline( np.float_, np.complex64, np.complex128, - np.complex_, + complex, ] ) unsupported_numpy_types = set( @@ -161,7 +161,7 @@ def NumpyReaderPipeline( np.complex64, np.complex128, np.longdouble, - np.complex_, + complex, ] ) rng = np.random.RandomState(12345) diff --git a/dali/test/python/reader/test_webdataset_corner.py b/dali/test/python/reader/test_webdataset_corner.py index 78b27e7b160..afeb452ed2d 100644 --- a/dali/test/python/reader/test_webdataset_corner.py +++ b/dali/test/python/reader/test_webdataset_corner.py @@ -17,7 +17,7 @@ import os import tempfile from glob import glob -from nose.tools import assert_equal +from nose_utils import assert_equals import webdataset_base as base from nose_utils import assert_raises @@ -112,7 +112,7 @@ def test_single_sample(): num_threads=1, ) wds_pipeline.build() - assert_equal(list(wds_pipeline.epoch_size().values())[0], num_samples) + assert_equals(list(wds_pipeline.epoch_size().values())[0], num_samples) 
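[Note on the `assert_equal` → `assert_equals` hunks] `nose.tools.assert_equal` stops being importable once `imp` is gone, so the tests switch to the `assert_equals` helper that `nose_utils` builds on a throwaway `unittest.TestCase`. A sketch of why that trick works — a bare `TestCase` instance gives access to unittest's rich assertions (with readable diffs) outside any test class (names here mirror the `nose_utils` hunk; the failure-handling part is illustrative):

```python
import unittest


class _Empty(unittest.TestCase):
    # A do-nothing test method so the case can be instantiated by name.
    def nop(self):
        pass


def assert_equals(x, y):
    # Delegate to unittest's assertEqual for type-aware failure messages.
    _Empty("nop").assertEqual(x, y)


assert_equals([1, 2], [1, 2])  # passes silently
try:
    assert_equals({"a": 1}, {"a": 2})
except AssertionError as e:
    # unittest produces a "left != right" diff, unlike a bare `assert`.
    assert "!=" in str(e)
```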
def test_single_sample_and_junk(): @@ -148,7 +148,7 @@ def test_single_sample_and_junk(): num_threads=1, ) wds_pipeline.build() - assert_equal(list(wds_pipeline.epoch_size().values())[0], num_samples) + assert_equals(list(wds_pipeline.epoch_size().values())[0], num_samples) def test_wide_sample(): @@ -189,7 +189,7 @@ def test_wide_sample(): num_threads=1, ) wds_pipeline.build() - assert_equal(list(wds_pipeline.epoch_size().values())[0], num_samples) + assert_equals(list(wds_pipeline.epoch_size().values())[0], num_samples) def test_argument_errors(): diff --git a/dali/test/python/reader/test_webdataset_requirements.py b/dali/test/python/reader/test_webdataset_requirements.py index 65958f70cd1..44cdfc86916 100644 --- a/dali/test/python/reader/test_webdataset_requirements.py +++ b/dali/test/python/reader/test_webdataset_requirements.py @@ -18,7 +18,7 @@ import nvidia.dali as dali from test_utils import compare_pipelines, get_dali_extra_path from nose_utils import assert_raises -from nose.tools import assert_equal +from nose_utils import assert_equals from webdataset_base import ( generate_temp_extract, generate_temp_index_file, @@ -99,7 +99,7 @@ def test_skip_sample(): num_threads=1, ) wds_pipeline.build() - assert_equal(list(wds_pipeline.epoch_size().values())[0], num_samples) + assert_equals(list(wds_pipeline.epoch_size().values())[0], num_samples) def test_raise_error_on_missing(): diff --git a/dali/test/python/test_backend_impl.py b/dali/test/python/test_backend_impl.py index 11143a0dba6..ff32880a391 100644 --- a/dali/test/python/test_backend_impl.py +++ b/dali/test/python/test_backend_impl.py @@ -171,7 +171,7 @@ def test_array_interface_types(): np.float32, np.float16, np.short, - np.long, + int, np.longlong, np.ushort, np.ulonglong, diff --git a/dali/test/python/test_dali_cpu_only_utils.py b/dali/test/python/test_dali_cpu_only_utils.py index 99e191e8ccf..8ed8b2747a9 100644 --- a/dali/test/python/test_dali_cpu_only_utils.py +++ 
b/dali/test/python/test_dali_cpu_only_utils.py @@ -22,7 +22,6 @@ import nvidia.dali.fn as fn import nvidia.dali.math as dmath from nvidia.dali.pipeline import pipeline_def -from test_audio_decoder_utils import generate_waveforms def setup_test_nemo_asr_reader_cpu(): @@ -55,6 +54,8 @@ def create_manifest_file(manifest_file, names, lengths, rates, texts): lengths = [10000, 54321, 12345] def create_ref(): + from test_audio_decoder_utils import generate_waveforms + ref = [] for i in range(len(names)): wave = generate_waveforms(lengths[i], freqs[i]) diff --git a/dali/test/python/test_dali_tf_conditionals.py b/dali/test/python/test_dali_tf_conditionals.py index f3ffc67de2a..6f8fa40fbcc 100644 --- a/dali/test/python/test_dali_tf_conditionals.py +++ b/dali/test/python/test_dali_tf_conditionals.py @@ -18,8 +18,7 @@ import nvidia.dali.fn as fn import nvidia.dali.types as types import nvidia.dali.plugin.tf as dali_tf - -from nose.tools import with_setup +from nose_utils import with_setup from test_utils_tensorflow import skip_inputs_for_incompatible_tf diff --git a/dali/test/python/test_dali_tf_dataset_eager.py b/dali/test/python/test_dali_tf_dataset_eager.py index 31f20118a23..ed59e229707 100644 --- a/dali/test/python/test_dali_tf_dataset_eager.py +++ b/dali/test/python/test_dali_tf_dataset_eager.py @@ -18,7 +18,7 @@ import nvidia.dali.plugin.tf as dali_tf from nvidia.dali.plugin.tf.experimental import Input from nvidia.dali import fn -from nose.tools import with_setup +from nose_utils import with_setup from test_dali_tf_dataset_pipelines import ( FixedSampleIterator, RandomSampleIterator, diff --git a/dali/test/python/test_dali_tf_dataset_graph.py b/dali/test/python/test_dali_tf_dataset_graph.py index a15085d4c9f..2ceb89921fb 100644 --- a/dali/test/python/test_dali_tf_dataset_graph.py +++ b/dali/test/python/test_dali_tf_dataset_graph.py @@ -16,7 +16,7 @@ import numpy as np import random as random import tensorflow as tf -from nose.tools import with_setup +from nose_utils 
import with_setup from nose_utils import raises from test_dali_tf_dataset_pipelines import ( diff --git a/dali/test/python/test_dali_tf_dataset_mnist_eager.py b/dali/test/python/test_dali_tf_dataset_mnist_eager.py index 2d54b319e90..d84054a62b7 100644 --- a/dali/test/python/test_dali_tf_dataset_mnist_eager.py +++ b/dali/test/python/test_dali_tf_dataset_mnist_eager.py @@ -13,7 +13,7 @@ # limitations under the License. import tensorflow as tf -from nose.tools import with_setup +from nose_utils import with_setup import test_dali_tf_dataset_mnist as mnist from test_utils_tensorflow import skip_for_incompatible_tf, available_gpus diff --git a/dali/test/python/test_dali_tf_dataset_mnist_graph.py b/dali/test/python/test_dali_tf_dataset_mnist_graph.py index fd4bce2c9f2..88bbe297912 100644 --- a/dali/test/python/test_dali_tf_dataset_mnist_graph.py +++ b/dali/test/python/test_dali_tf_dataset_mnist_graph.py @@ -14,10 +14,8 @@ import tensorflow as tf import tensorflow.compat.v1 as tf_v1 -from nose import with_setup, SkipTest - +from nose_utils import with_setup, SkipTest, raises import test_dali_tf_dataset_mnist as mnist -from nose_utils import raises from distutils.version import StrictVersion mnist.tf.compat.v1.disable_eager_execution() diff --git a/dali/test/python/test_dali_tf_dataset_shape.py b/dali/test/python/test_dali_tf_dataset_shape.py index facc5de67fd..b8b61f861be 100644 --- a/dali/test/python/test_dali_tf_dataset_shape.py +++ b/dali/test/python/test_dali_tf_dataset_shape.py @@ -20,8 +20,7 @@ from test_utils_tensorflow import skip_for_incompatible_tf import os -from nose.tools import assert_equals -from nose_utils import raises +from nose_utils import raises, assert_equals import itertools import warnings diff --git a/dali/test/python/test_dali_variable_batch_size.py b/dali/test/python/test_dali_variable_batch_size.py index 740a154572b..2b159dc3d1d 100644 --- a/dali/test/python/test_dali_variable_batch_size.py +++ b/dali/test/python/test_dali_variable_batch_size.py 
@@ -13,7 +13,6 @@ # limitations under the License. import inspect -import nose import numpy as np import nvidia.dali.fn as fn import nvidia.dali.math as dmath @@ -22,6 +21,7 @@ import random import re from functools import partial +from nose_utils import SkipTest from nose.plugins.attrib import attr from nose.tools import nottest from nvidia.dali.pipeline import Pipeline, pipeline_def @@ -1263,7 +1263,7 @@ def pipe(max_batch_size, input_data, device): def test_optical_flow(): if not is_of_supported(): - raise nose.SkipTest("Optical Flow is not supported on this platform") + raise SkipTest("Optical Flow is not supported on this platform") def pipe(max_batch_size, input_data, device, input_layout=None): pipe = Pipeline(batch_size=max_batch_size, num_threads=4, device_id=0) diff --git a/dali/test/python/test_external_source_impl_utils.py b/dali/test/python/test_external_source_impl_utils.py index 48bd5a0d7b6..5ccb5e64e7b 100644 --- a/dali/test/python/test_external_source_impl_utils.py +++ b/dali/test/python/test_external_source_impl_utils.py @@ -15,8 +15,7 @@ from nvidia.dali._utils import external_source_impl from nvidia.dali import tensors, pipeline_def import nvidia.dali.fn as fn -from nose.tools import assert_equals -from nose_utils import raises +from nose_utils import raises, assert_equals from nose.plugins.attrib import attr import numpy as np diff --git a/dali/test/python/test_external_source_parallel.py b/dali/test/python/test_external_source_parallel.py index 276294acfca..f74346a1b9f 100644 --- a/dali/test/python/test_external_source_parallel.py +++ b/dali/test/python/test_external_source_parallel.py @@ -14,11 +14,9 @@ import numpy as np import nvidia.dali as dali -from nose.tools import with_setup from nvidia.dali.types import SampleInfo, BatchInfo - import test_external_source_parallel_utils as utils -from nose_utils import raises +from nose_utils import raises, with_setup def no_arg_fun(): @@ -272,7 +270,7 @@ def test_num_outputs(): 
utils.ExtCallbackMultipleOutputs, utils.ExtCallbackMultipleOutputs, num_outputs=2, - dtypes=[np.uint8, np.float], + dtypes=[np.uint8, float], ) diff --git a/dali/test/python/test_external_source_parallel_large_sample.py b/dali/test/python/test_external_source_parallel_large_sample.py index 68f08a5ef2e..e88b4fabf86 100644 --- a/dali/test/python/test_external_source_parallel_large_sample.py +++ b/dali/test/python/test_external_source_parallel_large_sample.py @@ -13,11 +13,9 @@ # limitations under the License. import numpy as np -from nose.tools import with_setup - +from nose_utils import with_setup from nvidia.dali import pipeline_def import nvidia.dali.fn as fn - from test_external_source_parallel_utils import setup_function, teardown_function, capture_processes diff --git a/dali/test/python/test_external_source_parallel_mxnet.py b/dali/test/python/test_external_source_parallel_mxnet.py index 4bff3bc314f..7ea2abe965c 100644 --- a/dali/test/python/test_external_source_parallel_mxnet.py +++ b/dali/test/python/test_external_source_parallel_mxnet.py @@ -19,8 +19,7 @@ # to switch between the default numpy and cupy import mxnet as mx -from nose import with_setup -from nose_utils import raises +from nose_utils import raises, with_setup from test_pool_utils import setup_function from test_external_source_parallel_utils import ( diff --git a/dali/test/python/test_external_source_parallel_shared_batch.py b/dali/test/python/test_external_source_parallel_shared_batch.py index 01791170cf7..aee5fb5e2da 100644 --- a/dali/test/python/test_external_source_parallel_shared_batch.py +++ b/dali/test/python/test_external_source_parallel_shared_batch.py @@ -53,13 +53,13 @@ def test_serialize_deserialize(): [(2, 3, 4), (2, 3, 5), (3, 4, 5)], [], ]: - for dtype in [np.int8, np.float, np.int32]: + for dtype in [np.int8, float, np.int32]: yield check_serialize_deserialize, [np.full(s, 42, dtype=dtype) for s in shapes] def test_serialize_deserialize_random(): for max_shape in [(12, 200, 100, 
3), (200, 300, 3), (300, 2)]: - for dtype in [np.uint8, np.float]: + for dtype in [np.uint8, float]: rsdi = RandomlyShapedDataIterator(10, max_shape=max_shape, dtype=dtype) for i, batch in enumerate(rsdi): if i == 10: diff --git a/dali/test/python/test_external_source_parallel_utils.py b/dali/test/python/test_external_source_parallel_utils.py index bf1a3f7d1e7..700937cdbee 100644 --- a/dali/test/python/test_external_source_parallel_utils.py +++ b/dali/test/python/test_external_source_parallel_utils.py @@ -14,8 +14,7 @@ import numpy as np import nvidia.dali as dali -from nose.tools import with_setup - +from nose_utils import with_setup from test_pool_utils import capture_processes, teardown_function, setup_function from test_utils import ( compare_pipelines, diff --git a/dali/test/python/test_fw_iterators.py b/dali/test/python/test_fw_iterators.py index ab8bb273291..73960ebbb78 100644 --- a/dali/test/python/test_fw_iterators.py +++ b/dali/test/python/test_fw_iterators.py @@ -220,7 +220,7 @@ def test_mxnet_iterator_empty_array(): np.float32, np.float16, np.short, - np.long, + int, np.longlong, np.ushort, np.ulonglong, @@ -2716,7 +2716,7 @@ def test_jax_prepare_first_batch(): @pipeline_def def feed_ndarray_test_pipeline(): - return np.array([1], dtype=np.float) + return np.array([1], dtype=float) @attr("mxnet") diff --git a/dali/test/python/test_pipeline.py b/dali/test/python/test_pipeline.py index dc1fcb14a86..43b0b05d8a6 100644 --- a/dali/test/python/test_pipeline.py +++ b/dali/test/python/test_pipeline.py @@ -824,7 +824,7 @@ def __iter__(self): def __next__(self): batch = [] if self.i < self.n: - batch.append(np.arange(0, 1, dtype=np.float)) + batch.append(np.arange(0, 1, dtype=float)) self.i += 1 return batch else: @@ -1250,7 +1250,7 @@ def iter_setup(self): np.float32, np.float16, np.short, - np.long, + int, np.longlong, np.ushort, np.ulonglong, diff --git a/dali/test/python/test_pool.py b/dali/test/python/test_pool.py index 4a7fd8a449c..6f37d98c72f 100644 --- 
a/dali/test/python/test_pool.py +++ b/dali/test/python/test_pool.py @@ -20,8 +20,7 @@ from functools import wraps import numpy as np import os -from nose.tools import with_setup -from nose_utils import raises +from nose_utils import raises, with_setup from test_pool_utils import capture_processes, setup_function, teardown_function diff --git a/dali/test/python/test_utils.py b/dali/test/python/test_utils.py index 2db5b5426f6..2603c875b32 100644 --- a/dali/test/python/test_utils.py +++ b/dali/test/python/test_utils.py @@ -13,7 +13,7 @@ # limitations under the License. import nvidia.dali as dali -import nvidia.dali.types as types +import nvidia.dali.types as dali_types from nvidia.dali.backend_impl import TensorListGPU, TensorGPU, TensorListCPU from nvidia.dali import plugin_manager @@ -26,9 +26,8 @@ import subprocess import sys import tempfile -from nose import SkipTest - from distutils.version import LooseVersion +from nose_utils import SkipTest def get_arch(device_id=0): @@ -557,21 +556,21 @@ def dali_type(t): if t is None: return None if t is np.float16: - return types.FLOAT16 + return dali_types.FLOAT16 if t is np.float32: - return types.FLOAT + return dali_types.FLOAT if t is np.uint8: - return types.UINT8 + return dali_types.UINT8 if t is np.int8: - return types.INT8 + return dali_types.INT8 if t is np.uint16: - return types.UINT16 + return dali_types.UINT16 if t is np.int16: - return types.INT16 + return dali_types.INT16 if t is np.uint32: - return types.UINT32 + return dali_types.UINT32 if t is np.int32: - return types.INT32 + return dali_types.INT32 raise TypeError("Unsupported type: " + str(t)) @@ -633,18 +632,18 @@ def dali_type_to_np(type): import_numpy() dali_types_to_np_dict = { - types.BOOL: np.bool_, - types.INT8: np.int8, - types.INT16: np.int16, - types.INT32: np.int32, - types.INT64: np.int64, - types.UINT8: np.uint8, - types.UINT16: np.uint16, - types.UINT32: np.uint32, - types.UINT64: np.uint64, - types.FLOAT16: np.float16, - types.FLOAT: 
np.float32, - types.FLOAT64: np.float64, + dali_types.BOOL: np.bool_, + dali_types.INT8: np.int8, + dali_types.INT16: np.int16, + dali_types.INT32: np.int32, + dali_types.INT64: np.int64, + dali_types.UINT8: np.uint8, + dali_types.UINT16: np.uint16, + dali_types.UINT32: np.uint32, + dali_types.UINT64: np.uint64, + dali_types.FLOAT16: np.float16, + dali_types.FLOAT: np.float32, + dali_types.FLOAT64: np.float64, } return dali_types_to_np_dict[type] @@ -653,20 +652,20 @@ def np_type_to_dali(type): import_numpy() np_types_to_dali_dict = { - np.bool_: types.BOOL, - np.int8: types.INT8, - np.int16: types.INT16, - np.int32: types.INT32, - np.int64: types.INT64, - np.uint8: types.UINT8, - np.uint16: types.UINT16, - np.uint32: types.UINT32, - np.uint64: types.UINT64, - np.float16: types.FLOAT16, - np.float32: types.FLOAT, - np.float64: types.FLOAT64, - np.longlong: types.INT64, - np.ulonglong: types.UINT64, + np.bool_: dali_types.BOOL, + np.int8: dali_types.INT8, + np.int16: dali_types.INT16, + np.int32: dali_types.INT32, + np.int64: dali_types.INT64, + np.uint8: dali_types.UINT8, + np.uint16: dali_types.UINT16, + np.uint32: dali_types.UINT32, + np.uint64: dali_types.UINT64, + np.float16: dali_types.FLOAT16, + np.float32: dali_types.FLOAT, + np.float64: dali_types.FLOAT64, + np.longlong: dali_types.INT64, + np.ulonglong: dali_types.UINT64, } return np_types_to_dali_dict[type] @@ -939,7 +938,6 @@ def dummy_case(*args, **kwargs): def check_numba_compatibility_cpu(if_skip=True): import numba - from nose import SkipTest # There's a bug in LLVM JIT linker that makes the tests fail # randomly on 64-bit ARM platform for some NUMBA versions. 
@@ -959,7 +957,6 @@ def check_numba_compatibility_cpu(if_skip=True): def check_numba_compatibility_gpu(if_skip=True): - from nose import SkipTest import nvidia.dali.plugin.numba.experimental as ex if not ex.NumbaFunction._check_minimal_numba_version( diff --git a/dali/test/python/webdataset_base.py b/dali/test/python/webdataset_base.py index f18ec215fda..67014b74c6b 100644 --- a/dali/test/python/webdataset_base.py +++ b/dali/test/python/webdataset_base.py @@ -14,7 +14,7 @@ from nvidia.dali import pipeline_def from nvidia.dali.fn import readers -from nose.tools import assert_equal +from nose_utils import assert_equals import tempfile from subprocess import call import os @@ -107,7 +107,7 @@ def file_reader_pipeline( def generate_temp_index_file(tar_file_path): global wds2idx_script temp_index_file = tempfile.NamedTemporaryFile() - assert_equal( + assert_equals( call([wds2idx_script, tar_file_path, temp_index_file.name], stdout=open(os.devnull, "wb")), 0, ) diff --git a/qa/TL0_multigpu/test_pytorch.sh b/qa/TL0_multigpu/test_pytorch.sh index 45ee5dbca1a..a5d3c38eca9 100755 --- a/qa/TL0_multigpu/test_pytorch.sh +++ b/qa/TL0_multigpu/test_pytorch.sh @@ -1,6 +1,6 @@ #!/bin/bash -e # used pip packages -pip_packages='${python_test_runner_package} torch' +pip_packages='${python_test_runner_package} torch numpy' target_dir=./dali/test/python diff --git a/qa/TL0_python-self-test-base-cuda/test.sh b/qa/TL0_python-self-test-base-cuda/test.sh index 752595bb602..969a5e754e8 100644 --- a/qa/TL0_python-self-test-base-cuda/test.sh +++ b/qa/TL0_python-self-test-base-cuda/test.sh @@ -13,6 +13,7 @@ version_eq "$DALI_CUDA_MAJOR_VERSION" "12" && \ version_ge "$DALI_CUDA_MAJOR_VERSION" "11" && \ pip uninstall -y `pip list | grep nvidia-cufft | cut -d " " -f1` \ `pip list | grep nvidia-nvjpeg | cut -d " " -f1` \ + `pip list | grep nvidia-nvjpeg2k | cut -d " " -f1` \ `pip list | grep nvidia-npp | cut -d " " -f1` \ || true @@ -41,4 +42,5 @@ version_ge "$DALI_CUDA_MAJOR_VERSION" "11" && \ pip 
install nvidia-cufft-cu${DALI_CUDA_MAJOR_VERSION} \ nvidia-npp-cu${DALI_CUDA_MAJOR_VERSION} \ nvidia-nvjpeg-cu${DALI_CUDA_MAJOR_VERSION} \ + nvidia-nvjpeg2k-cu${DALI_CUDA_MAJOR_VERSION} \ || true diff --git a/qa/TL1_jupyter_plugins/test_pytorch.sh b/qa/TL1_jupyter_plugins/test_pytorch.sh index 0584d37d066..3a3cbc74eee 100755 --- a/qa/TL1_jupyter_plugins/test_pytorch.sh +++ b/qa/TL1_jupyter_plugins/test_pytorch.sh @@ -1,9 +1,7 @@ #!/bin/bash -e # used pip packages -# nvidia-index provides a stub for tensorboard which collides with one required by pytorch-lightning -# pin version which is not replaced -pip_packages='pillow jupyter matplotlib<3.5.3 torchvision torch fsspec==2023.1.0 pytorch-lightning tensorboard==2.2.2' +pip_packages='pillow jupyter matplotlib<3.5.3 torchvision torch fsspec==2023.1.0 pytorch-lightning tensorboard' target_dir=./docs/examples/ do_once() { diff --git a/qa/TL1_tensorflow-dali_test/test.sh b/qa/TL1_tensorflow-dali_test/test.sh index 328e9555764..12d9b1959e9 100644 --- a/qa/TL1_tensorflow-dali_test/test.sh +++ b/qa/TL1_tensorflow-dali_test/test.sh @@ -73,6 +73,8 @@ do_once() { } test_body() { + # for the compatibility with the old keras API + export TF_USE_LEGACY_KERAS=1 # test code mpiexec --allow-run-as-root --bind-to none -np ${NUM_GPUS} \ python -u resnet.py \ diff --git a/qa/nose_wrapper/__main__.py b/qa/nose_wrapper/__main__.py index 1e9132a8a20..1d5eb1977f0 100644 --- a/qa/nose_wrapper/__main__.py +++ b/qa/nose_wrapper/__main__.py @@ -1,20 +1,10 @@ import sys + +# before running the test we add dali/test/python to the python path +import nose_utils # noqa:F401 - for Python 3.10 from nose.core import run_exit -import collections -import nose.case -import nose.inspector -import nose.loader -import nose.suite -import nose.plugins.attrib import inspect -if sys.version_info >= (3, 10) and not hasattr(collections, "Callable"): - nose.case.collections = collections.abc - nose.inspector.collections = collections.abc - nose.loader.collections 
= collections.abc - nose.suite.collections = collections.abc - nose.plugins.attrib.collections = collections.abc - if sys.version_info >= (3, 11): def legacy_getargspec(fun): diff --git a/qa/setup_packages.py b/qa/setup_packages.py index d01a1cd7363..a30f6d98a17 100755 --- a/qa/setup_packages.py +++ b/qa/setup_packages.py @@ -476,11 +476,17 @@ def get_pyvers_name(self, url, cuda_version): all_packages = [ - PlainPackage("numpy", [">=1.17,<1.24"]), - PlainPackage("opencv-python", [PckgVer("4.8.1.78", dependencies=["numpy<1.24"])]), + PlainPackage( + "numpy", + [ + PckgVer(">=1.17,<1.24", python_min_ver="3.8", python_max_ver="3.11"), + PckgVer(">=1.17,<2", python_min_ver="3.12", python_max_ver="3.12"), + ], + ), + PlainPackage("opencv-python", [PckgVer("4.8.1.78", dependencies=["numpy<2"])]), CudaPackage( "cupy", - {"118": [PckgVer("12.2.0", python_min_ver="3.8", dependencies=["numpy<1.24"])]}, + {"118": [PckgVer("12.3.0", python_min_ver="3.8")]}, "cupy-cuda11x", ), CudaPackage( @@ -494,68 +500,58 @@ def get_pyvers_name(self, url, cuda_version): alias="tensorflow", dependencies=[ "protobuf<4", - "numpy<1.24", "urllib3<2.0", ], ), PckgVer( "2.14.1", python_min_ver="3.9", + python_max_ver="3.11", alias="tensorflow", dependencies=[ "protobuf<4", - "numpy<1.24", "urllib3<2.0", ], ), ], "120": [ PckgVer( - "2.14.1", + "2.16.2", python_min_ver="3.9", alias="tensorflow", - dependencies=[ - "protobuf<4", - "numpy<1.24", - "urllib3<2.0", - ], - ), - PckgVer( - "2.15.1", - python_min_ver="3.9", - alias="tensorflow", - dependencies=[ - "protobuf<4", - "numpy<1.24", - "urllib3<2.0", - ], + dependencies=["protobuf<4", "urllib3<2.0", "tf_keras==2.16"], ), PckgVer( - "2.16.1", + "2.17.0", python_min_ver="3.9", alias="tensorflow", - dependencies=[ - "protobuf<4", - "numpy<1.24", - "urllib3<2.0", - ], + dependencies=["protobuf<4", "urllib3<2.0", "tf_keras==2.17"], ), ], }, ), CudaPackageExtraIndex( "torch", - {"118": [PckgVer("2.1.0", python_min_ver="3.8", 
dependencies=["numpy<1.24"])]}, + {"118": [PckgVer("2.2.0", python_min_ver="3.8", python_max_ver="3.12")]}, extra_index="https://download.pytorch.org/whl/cu{cuda_v}/", ), CudaPackageExtraIndex( "torchvision", - {"118": [PckgVer("0.16.0", python_min_ver="3.8", dependencies=["numpy<1.24"])]}, + {"118": [PckgVer("0.17.0", python_min_ver="3.8")]}, extra_index="https://download.pytorch.org/whl/cu{cuda_v}/", ), CudaPackageExtraIndex( "paddlepaddle-gpu", - {"110": [PckgVer("2.5.2.post117", dependencies=["protobuf<4", "numpy<1.24"])]}, + { + "110": [ + PckgVer( + "2.6.0.post117", + dependencies=["protobuf<4", "numpy<2"], + python_min_ver="3.8", + python_max_ver="3.12", + ) + ] + }, links_index="https://www.paddlepaddle.org.cn/" "whl/linux/mkl/avx/stable.html", ), CudaPackageExtraIndex( @@ -567,7 +563,9 @@ def get_pyvers_name(self, url, cuda_version): ), # dax.fn.jax_function requires at least 0.4.16 which is the first one supporting # `__dlpack__` method, while 0.4.13 is the last one supported with Python3.8 - PckgVer("0.4.16", python_min_ver="3.9", dependencies=["jaxlib"]), + PckgVer( + "0.4.16", python_min_ver="3.9", python_max_ver="3.11", dependencies=["jaxlib"] + ), ] }, # name used during installation @@ -585,7 +583,7 @@ def get_pyvers_name(self, url, cuda_version): python_max_ver="3.8", dependencies=["numpy<1.24"], ), - PckgVer("0.59.1", python_min_ver="3.9", dependencies=["numpy<1.24"]), + PckgVer("0.59.1", python_min_ver="3.9", dependencies=["numpy<2"]), ] }, ), diff --git a/qa/test_template_impl.sh b/qa/test_template_impl.sh index 547ab354fef..ca49f4733db 100755 --- a/qa/test_template_impl.sh +++ b/qa/test_template_impl.sh @@ -12,7 +12,7 @@ topdir=$(cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null && pwd )/.. 
source $topdir/qa/setup_test_common.sh # Set runner for python tests -export PYTHONPATH=${PYTHONPATH}:$topdir/qa +export PYTHONPATH=${PYTHONPATH}:$topdir/qa:$topdir/dali/test/python python_test_runner_package="nose nose2 nose-timer nose2-test-timer" # use DALI nose wrapper to patch nose to support Python 3.10 python_test_runner="python -m nose_wrapper" @@ -159,6 +159,7 @@ do NPP_VERSION=$(if [[ $DALI_CUDA_MAJOR_VERSION == "12" ]]; then echo "==12.2.5.30"; else echo ""; fi) install_pip_pkg "pip install --upgrade nvidia-npp-cu${DALI_CUDA_MAJOR_VERSION}${NPP_VERSION} \ nvidia-nvjpeg-cu${DALI_CUDA_MAJOR_VERSION} \ + nvidia-nvjpeg2k-cu${DALI_CUDA_MAJOR_VERSION} \ nvidia-cufft-cu${DALI_CUDA_MAJOR_VERSION} \ -f /pip-packages" fi
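Two mechanical patterns account for most of the hunks above: the deprecated NumPy scalar aliases (`np.float`, `np.long`) are replaced with the builtins they aliased (`float`, `int`), and `nose`/`nose.tools` imports are routed through the project's `nose_utils` shim so the suite keeps working on Python 3.10+. A minimal sketch of why the alias substitution is a drop-in replacement (illustrative only, not DALI code — it just exercises the same dtype arguments the diff swaps in):

```python
import numpy as np

# np.float and np.long were removed in NumPy 1.24; the builtins they
# aliased remain valid dtype arguments, which is the substitution this
# diff applies throughout the test suite.
a = np.arange(3, dtype=float)  # builtin float maps to float64
b = np.arange(3, dtype=int)    # builtin int maps to the platform default integer

assert a.dtype == np.float64
assert np.issubdtype(b.dtype, np.integer)
```

The behavior is unchanged because the old aliases were literally the builtins (`np.float is float` held before removal), so existing arrays keep their dtypes and no test expectations shift.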