
Array objects of arbitrary rank are infeasible - require a reasonable range of ranks instead #479

Closed
Zac-HD opened this issue Sep 20, 2022 · 19 comments · Fixed by #702
Labels: Narrative Content (Narrative documentation content.)
@Zac-HD

Zac-HD commented Sep 20, 2022

https://github.com/data-apis/array-api/blob/main/spec/API_specification/array_object.rst#17 says:

Furthermore, a conforming implementation of the array API standard must support array objects of arbitrary rank N (i.e., number of dimensions), where N is greater than or equal to zero.

Unfortunately this is infeasible: on any computer there's going to be a finite maximum dimensionality, and in practice it's usually much smaller - e.g. numpy.array_api is limited to 32-dimensional arrays. When it's impossible to comply with the standard, implementations will quite reasonably choose whatever noncompliant option makes sense for them.
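For example, the limit can be probed empirically (a small sketch; `max_supported_rank` is a hypothetical helper, and the observed ceiling depends on the NumPy version — 32 before NumPy 2.0, 64 after):

```python
import numpy as np

# Empirically probe the maximum rank this NumPy build accepts.
# (Hypothetical helper for illustration only.)
def max_supported_rank(limit=128):
    rank = 0
    for n in range(limit + 1):
        try:
            np.zeros((1,) * n)  # a size-1 array of rank n
            rank = n
        except ValueError:
            break
    return rank

print(max_supported_rank())  # 32 or 64, depending on the NumPy version
```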

I'd therefore propose that this should instead require implementations to support arrays with between zero and 32 dimensions inclusive (or another reasonable constant TBD), and permit implementations to support a larger number of dimensions if they wish.

@tylerjereddy

From the pykokkos side, I asked @crtrott on Kokkos team, since we ultimately depend on the compiled C++ Kokkos bindings under the hood.

Kokkos right now supports rank 8.

mdspan supports arbitrary rank.

In practice your compiler will hit recursion limits at between rank 100 and 200 or so.

But mdspan is the multidimensional array which comes the closest to truly arbitrary as far as I know.

@rgommers rgommers added the Narrative Content Narrative documentation content. label Sep 20, 2022
@rgommers
Member

That seems like a reasonable change to make. I think the absolute minimum number of dimensions needed to support the features used in the standard (e.g., linear algebra routines with a batch dimension) is 3. In practice, more than 3 dimensions are used, so it'd be good to set the number higher.

Is there any library that does not support 8 dimensions?
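The batch-dimension point above can be sketched with NumPy standing in for any conforming implementation: rank 3 is the minimum at which batched linear algebra even works.

```python
import numpy as np

# The canonical rank-3 use case: batched linear algebra, here a stack of
# ten 4x5 matrices multiplied by a stack of ten 5x6 matrices.
a = np.ones((10, 4, 5))
b = np.ones((10, 5, 6))
c = a @ b
print(c.shape)  # (10, 4, 6)
```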

@leofang
Contributor

leofang commented Sep 20, 2022

8 is way too small for some APIs like einsum. We've hit several issues with cuQuantum in which NumPy/CuPy only support up to 32 ndim. cuTENSOR has supported up to 64 (and will further relax in a future release) and PyTorch has no limitation.

@kgryte
Contributor

kgryte commented Sep 20, 2022

@Zac-HD What is motivating this request? Am I correct that this stems from a concern in Hypothesis?

Without further context, I am leery about imposing any explicit limits on array rank in the specification, as it does not make sense to me to arbitrarily impose constraints based on present practice, when we don't have a crystal ball for the future.

While not constrained in Python, ndim is likely constrained in practice (e.g., either int32 or int64). In which case, I'd be fine saying that the number of dimensions must fit within an int32.

@kgryte
Contributor

kgryte commented Sep 20, 2022

@Zac-HD Is your proposal that the spec modify its language something to the effect of

Furthermore, a conforming implementation of the array API standard must support array objects of rank N (i.e., number of dimensions), where N is on the interval [0, 32] and where the upper bound is the minimum upper bound (i.e., a conforming implementation must support arrays having at least 32 dimensions and may support arrays having more than 32 dimensions).

@asmeurer
Member

8 is way too small for some APIs like einsum. We've hit several issues with cuQuantum in which NumPy/CuPy only support up to 32 ndim. cuTENSOR has supported up to 64 (and will further relax in a future release) and PyTorch has no limitation.

Presumably many of the dimensions in these cases are size 1? A rank $k$ array without trivial dimensions has at least $2^k$ entries. I'm assuming that's where NumPy's 32 limit comes from, since intp can be int32.
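The arithmetic behind that guess is simple: with every axis of size at least 2, element counts double per dimension, so 32-bit element offsets run out right around rank 31-32.

```python
# A rank-k array with no size-1 axes has at least 2**k elements.
k = 32
min_elements = 2 ** k          # 4,294,967,296 elements at minimum
int32_max = 2 ** 31 - 1        # largest 32-bit signed offset
print(min_elements > int32_max)  # True: such an array overflows int32 indexing
```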

@Zac-HD
Author

Zac-HD commented Sep 20, 2022

I ran into this when thinking about how to generate validly-shaped arrays for Xarray (pydata/xarray#6908 (comment)), and obviously it does have implications for our Array-API array_shapes() strategy. My objection is that the current specification is impossible to implement, and if the spec is inconsistent with any implementation then the spec should probably change.

I'm certainly not proposing that the specification should cap the maximum array rank. It does specify that rank-zero arrays must be supported, and I think that's worth keeping. I'm also dubious about requiring all implementations to support some specific rank - in practice arrays are widely used up to at least rank 4, but the comments above arguing that rank 8 would be too little seem more domain-specific to me. To propose some specific language:

Furthermore, a conforming implementation of the array API standard must support array objects of rank N (i.e., number of dimensions), where N is greater than or equal to zero, and document their maximum supported rank N.

Depending on the library this might be a deliberate choice like NumPy's rank-32 limit or Kokkos's rank-8, or it might be implementation-defined like "your compiler will likely crash somewhere in the hundreds of dimensions".

In summary: if literally nobody complies with the spec, the spec should change.

@leofang
Contributor

leofang commented Sep 21, 2022

Presumably many of the dimensions in these cases are size 1?

Precisely. In many cases we encounter tensors with many axes of extent 1, so ndim can easily go beyond 32 while the whole array still fits on a single GPU and still makes sense.
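For illustration, a shape like this keeps the element count tiny despite a rank well beyond 32:

```python
import math

# A rank-40 shape where most axes have extent 1: ndim is far beyond 32,
# yet the array holds only 2 * 3 * 4 = 24 elements.
shape = (2, 3, 4) + (1,) * 37
print(len(shape), math.prod(shape))  # 40 24
```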

My objection is that the current specification is impossible to implement, and if the spec is inconsistent with any implementation then the spec should probably change.

As mentioned above, this is not true. PyTorch does not have this limitation, so you get at least 1 compliant library 🙂 (Update: CuPy can easily go beyond ndim=32 too)

@leofang
Contributor

leofang commented Sep 21, 2022

Furthermore, a conforming implementation of the array API standard must support array objects of rank N (i.e., number of dimensions), where N is greater than or equal to zero, and document their maximum supported rank N.

I would think this is a safe statement.

@Zac-HD
Author

Zac-HD commented Sep 21, 2022

As mentioned above, this is not true. PyTorch does not have this limitation, so you get at least 1 compliant library 🙂 (Update: CuPy can easily go beyond ndim=32 too)

In [6]: torch.zeros((1,)*2000)
Out[6]: ---------------------------------------------------------------------------
RecursionError                            Traceback (most recent call last)
...
RecursionError: maximum recursion depth exceeded while calling a Python object

Still not arbitrarily large! (I'm impressed that it handles rank-1000 though 🤯)

@kgryte
Contributor

kgryte commented Sep 21, 2022

My sense is that we'd still want a minimum upper bound. Otherwise, users lack portability guarantees. E.g., one may be able to perform a batch operation on a rank-10 array in library X, but then encounter an error in library Y. It would be beneficial if spec-compliant array libraries all had a minimum level of support.

@leofang
Contributor

leofang commented Sep 21, 2022

Right, I think Zac was suggesting the same thing too. It would also be nice to learn about the limitation in distributed array libraries. I just checked cuNumeric: they offer prebuilt binaries up to ndim=4, and building from source supports up to ndim=9. Ping @jakirkham for Dask.

@kgryte
Contributor

kgryte commented Sep 21, 2022

@leofang I'd want something a bit stronger than the following, however:

Furthermore, a conforming implementation of the array API standard must support array objects of rank N (i.e., number of dimensions), where N is greater than or equal to zero, and document their maximum supported rank N.

@tylerjereddy

I believe the pykokkos-base templated C++ bindings built through CMake also experience a massive increase in compile time as you increase the max supported ranks from 4 to 7 or so. IIRC you can be talking about more than an hour vs. a minute and a half or so. Not sure if other libs experience that as well.

@leofang
Contributor

leofang commented Sep 21, 2022

If Kokkos uses MAX_NDIM as a template parameter, that would be expected. That's what cuNumeric does, too. More template instantiations to do...

For NumPy/CuPy, things are easier because MAX_NDIM is only a macro that controls the lengths of a dozen data structures. NumPy then internally loops up to MAX_NDIM, and CuPy has a JIT compiler (plus such a loop in some places), so bumping it and rebuilding the library would just work.

mdspan is not affected because MAX_NDIM is not used as a template parameter IIRC.

I wish I knew how PyTorch pulls it off 😄

@seberg
Contributor

seberg commented Sep 21, 2022

I don't really have an opinion on this. Two thoughts, though:

  1. We could prescribe an information dict (or so) so implementors must/can report their maximum dimensions. Note that "maximum" is not always clear though: some NumPy functions may temporarily double the dimensions, so they may start failing at 17 rather than 32 dimensions.
    • maybe having the information available will help Hypothesis generalize a bit better.
  2. If desired, the spec could give recommendations or just list 3 examples so someone starting from scratch knows that 32 seems plenty for most but not quite all users (e.g. quantum people).
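The information-dict idea from point 1 might look like the following sketch (hypothetical names and values; not an API the standard defined at the time of this thread):

```python
# Sketch of an implementation-reported capability dict. The keys and the
# distinction between a hard limit and a "safe" limit (accounting for
# internal dimension doubling) are assumptions for illustration.
def capabilities():
    return {
        "max dimensions": 64,       # documented hard limit
        "max safe dimensions": 32,  # limit once internal doubling is considered
    }

caps = capabilities()
print(caps["max safe dimensions"])  # what Hypothesis-style tooling could use
```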

@rgommers
Member

Given that there are two libraries that only support 4 dimensions (at least by default), that should be the max we can require. Linear algebra functionality requires at least 2 dimensions to even make sense. Lots of applications do require 3 or 4 dimensions, so stating that 4 is the minimum number of dimensions supported for compliance did sound reasonable to folks in the call today.

So it looks like we can adopt this phrasing after replacing "zero" by "four":

Furthermore, a conforming implementation of the array API standard must support array objects of rank N (i.e., number of dimensions), where N is greater than or equal to zero, and document their maximum supported rank N.

@leofang
Contributor

leofang commented Sep 22, 2022

IIRC we explicitly mentioned "zero" out of concern that 0-D arrays are excluded by certain array libraries. We should just spell it out?

Furthermore, a conforming implementation of the array API standard must support array objects of rank N (i.e., number of dimensions), including N=0, 1, 2, 3 and 4, and document their maximum supported rank N.
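A minimal check of that wording, with NumPy standing in for any conforming implementation:

```python
import numpy as np

# Under the proposed wording, ranks 0 through 4 must all be supported.
for ndim in range(5):
    a = np.zeros((2,) * ndim)
    assert a.ndim == ndim
print("ranks 0-4 supported")
```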

tylerjereddy added a commit to kokkos/pykokkos-base that referenced this issue Oct 9, 2022
* reduce compiled view ranks from 5 to 4 in attempt
to alleviate compilation memory issue

* the reason for going above 3 is that 4 is the current
preferred minimum for the Python array API standard
per this discussion:
data-apis/array-api#479
NaderAlAwar pushed a commit to NaderAlAwar/pykokkos-base that referenced this issue Oct 11, 2022
NaderAlAwar added a commit to kokkos/pykokkos-base that referenced this issue Oct 12, 2022
* Sync pykokkos-base with kokkos version 3.7.00 commit d19aab9

* MAINT: PR 39 revisions


* Tests: add kokkos.finalize() to tearDownClass()

* CI: disable multiple cores when building pykokkos-base in python-build action to avoid running out of memory

* CI: bind kokkos.is_finalized() and guard finalize() call in tearDownClass

* formatting

Co-authored-by: Tyler Reddy <tyler.je.reddy@gmail.com>
@rgommers rgommers self-assigned this Apr 20, 2023
@kgryte kgryte added this to the v2023 milestone Jun 29, 2023
@leofang
Contributor

leofang commented Sep 25, 2023

FWIW, Python buffer protocol specifies the upper limit for exchange to be 64:
https://docs.python.org/3/c-api/buffer.html#c.PyBUF_MAX_NDIM
