
Add 64 bits support to Array underlying storage #23132

Open
GPSnoopy opened this issue Mar 8, 2019 · 22 comments

@GPSnoopy commented Mar 8, 2019

While the System.Array API supports LongLength and the this[long i] indexer, the CLR does not allow arrays to be allocated with more than 2^31-1 elements (int.MaxValue).

This limitation has become a daily annoyance when working with HPC or big data. We frequently hit this limit.

Why this matters

  • .NET arrays are the de facto storage unit of many APIs and collections.
  • Currently one has to allocate native memory instead, which does not interoperate with managed code (Span could have helped, but it is limited to Int32 as well).
  • HPC frameworks expect data to be contiguous in memory (i.e. the data cannot be split into separate arrays), e.g. BLAS libraries.
  • It requires applications to be coded and designed differently depending on whether they handle fewer or more than 2G elements; i.e. it does not scale.
  • It's an arbitrary limitation with little value to the user and application:
    • On a desktop with 64GB of RAM, why can one only allocate 3.1% of its memory in one go?
    • On a data centre machine with 1,536GB of RAM, why can one only allocate 0.1% of its memory in one go?

In C++ this is solved with std::size_t (whose underlying type changes depending on the target platform). Ideally, .NET would have taken the same route when designing System.Array. Why it didn't is a mystery, given that AMD64 and the .NET Framework appeared around the same time.

Proposal
I suggest that when the CLR/JIT runs a .NET application in x64, it allows the long Array constructor to allocate more than int.MaxValue items:

  • Indexing the array with operator this[long i] should work as expected and give access to the entire array.
  • Indexing the array with operator this[int i] should work as expected but implicitly limit the access to only the first int.MaxValue elements.
  • The LongLength property should return the total number of elements in the array.
  • The Length property should return the total number of elements in the array, or throw OverflowException if there are more than int.MaxValue elements (this matches the current behaviour of multi-dimensional arrays with more than int.MaxValue elements).

I naively believe that the above should not break any existing application.

Bonus points for extending 64-bit support to Span and ReadOnlySpan.
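
For illustration, here is a minimal sketch of the status quo versus the proposed semantics; the proposed behaviour is only described in comments, since it cannot run on today's runtime, and the 3,000,000,000 figure is just an example:

using System;

class LargeAllocationToday
{
    static void Main()
    {
        long count = 3_000_000_000;   // > int.MaxValue

        try
        {
            // Today this fails even on a 64-bit machine with plenty of RAM;
            // the exact exception depends on runtime, bitness and configuration
            // (OverflowException or OutOfMemoryException).
            byte[] data = new byte[count];
            Console.WriteLine(data.LongLength);
        }
        catch (Exception e)
        {
            Console.WriteLine(e.GetType().Name);
        }

        // Under the proposal, on x64 the allocation above would succeed,
        // data[2_999_999_999L] would be reachable via this[long i],
        // LongLength would return 3,000,000,000, and Length would throw
        // OverflowException.
    }
}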

@tannergooding (Member) commented Mar 8, 2019

I naively believe that the above should not break any existing application.

One of the simplest breaks is any program that does the following:

for (int i = 0; i < array.Length; i++)
{
}

This works fine for any existing program, since CoreCLR currently enforces a limit that is just under int.MaxValue. Allowing lengths greater than int.MaxValue would cause the counter to overflow to -2147483648 and either cause unexpected behaviour or cause an IndexOutOfRangeException to be thrown.
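
For reference, a minimal snippet of the wrap-around behaviour being described (C# integer arithmetic is unchecked by default):

using System;

class CounterOverflow
{
    static void Main()
    {
        int i = int.MaxValue;
        i++;                     // unchecked by default: wraps instead of throwing
        Console.WriteLine(i);    // -2147483648; indexing an array with it would throw
    }
}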

@giuliojiang commented Mar 8, 2019

I frequently run into this limit when processing large chunks of data from network streams into byte arrays, and always need to implement chunking logic in order to process the data.
It would be great to be able to use long indexes on arrays.

@GPSnoopy (Author) commented Mar 8, 2019

@tannergooding That's why I proposed above to keep the existing behaviour of throwing OverflowException on Length when there are more than int.MaxValue elements. Nothing changes there.

I'm suggesting that changing the CLR implementation as proposed above would let the applications and people who want to use large arrays do so without breaking existing applications. You are right that simply passing a large array into a library that does not support it will break, but at least this would give us the choice. We need to start somewhere, and .NET cannot keep ignoring this problem.

@philjdf commented Mar 8, 2019

Yesterday I happily created an array in Python which contained more than two billion elements. When can I do the same in .NET?

Currently we get an exception if we try to construct an array with more than 2B elements. What's wrong with deferring that exception until something calls the Length property which can no longer return a valid int? @tannergooding's example wouldn't cause problems. Are there other examples which break?

@bartonjs bartonjs added this to the Future milestone Mar 12, 2019
@GrabYourPitchforks (Member) commented Mar 21, 2019

This is an interesting idea, but I wonder if it would be better to have a LargeArray<T> or similar class that has these semantics rather than try to shoehorn it into the existing Array class. The reason I suggest this course of action is that the GC currently has a hard dependency on the element count of any variable-length managed object fitting into a 32-bit signed integer. Changing the normal Array class to have a 64-bit length property would at minimum also affect the String type, and it may have other unintended consequences throughout the GC that would hinder its efficiency, even when collecting normal fixed-size objects. Additionally, accessing the existing Array.Length property would no longer be a simple one-instruction dereference; it'd now be an overflow check with associated branching.

If we had a theoretical LargeArray<T> class, it could be created from the beginning with a "correct" API surface, including even using nuint instead of long for the indexer. If it allowed T to be a reference type, we could also eliminate the weird pseudo-covariance / contravariance behavior that existing Array instances have, which would make writing to a LargeArray<T> potentially cheaper than writing to a normal Array.
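
Purely as an illustration of that API shape (nuint length and indexer, no covariance), here is a sketch that runs today by faking the storage with chunked managed arrays; a real LargeArray<T> would presumably be a single contiguous GC allocation, and nothing here is a proposed implementation:

using System;

public sealed class LargeArray<T>
{
    private const int ChunkSize = 1 << 20;   // elements per backing array (arbitrary)
    private readonly T[][] _chunks;

    public nuint Length { get; }

    public LargeArray(nuint length)
    {
        Length = length;
        nuint chunkCount = (length + (nuint)ChunkSize - 1) / (nuint)ChunkSize;
        _chunks = new T[chunkCount][];
        for (nuint c = 0; c < chunkCount; c++)
        {
            nuint remaining = length - c * (nuint)ChunkSize;
            nuint size = remaining < (nuint)ChunkSize ? remaining : (nuint)ChunkSize;
            _chunks[c] = new T[size];
        }
    }

    // Sealed class + generic element type: no pseudo-covariant write checks needed.
    public ref T this[nuint index] =>
        ref _chunks[index / (nuint)ChunkSize][index % (nuint)ChunkSize];
}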

@GPSnoopy changed the title from "Add 64-bit support to Array underlying storage" to "Add 64 bits support to Array underlying storage" Apr 4, 2019
@GPSnoopy (Author) commented Apr 4, 2019

@GrabYourPitchforks interesting facts about the GC. I wasn't aware of such limitations.

A LargeArray<T> class could be a temporary solution. My main concern is that it would stay a very niche class with no interoperability with the rest of the .NET ecosystem, and ultimately would be an evolutionary dead end. I do like the idea of nuint/nint though.

My gut feeling is that Array, String and the GC are ripe for a 64-bit overhaul. We should bite the bullet and do it. So far I've been quite impressed by the .NET Core team's willingness to revisit old decisions and areas that Microsoft had kept shut in the past (e.g. SIMD/platform-specific instructions, float.ToString() round-trip fixes, bug fixes that change backward compatibility, etc.).

@juliusfriedman (Collaborator) commented Apr 5, 2019

I guess I could stomach a LargeArray, but if that's going to be implemented then I hope it doesn't create a new type which is not actually an Array. IMHO it would have been much easier to address this if we had created another subtype of Array internally instead of inventing Span; however, Span also solves other problems...

@GSPP commented Apr 22, 2019

These days 2GB arrays are barely enough for many applications to run reliably. RAM prices have stagnated for a few years now, but surely the industry will resolve that sooner or later. As RAM amounts resume increasing at a Moore's-law rate, this 2GB array limit will become a very common problem.

A LargeArray<T> type might be a good medium-term solution. But won't arrays larger than 2GB be commonplace 10 years from now? Do we then want to litter application code and API surfaces with LargeArray<T>? It would often be a hard choice whether to go for LargeArray<T> or T[].

Thinking in the very long term it seems far better to find a way to fix T[].

@GSPP commented Apr 22, 2019

If 64-bit support is implemented, there could be a tool that analyzes your code for legacy patterns (e.g. usage of the int Length property or the typical for (int i = 0; i < array.Length; i++) loop). The tool should then be able to mass-upgrade the source code. This could be a Roslyn analyzer.
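
A sketch of the before/after shapes such a tool might target, using only members that already exist on Array today:

using System;

class LoopUpgradeExample
{
    static void Main()
    {
        byte[] array = new byte[100];

        // Legacy pattern the analyzer would flag: the int counter cannot cover
        // an array whose length exceeds int.MaxValue.
        for (int i = 0; i < array.Length; i++)
        {
            array[i] = 1;
        }

        // One possible mechanical rewrite using the existing 64-bit members:
        for (long i = 0; i < array.LongLength; i++)
        {
            array[i] = 2;
        }
    }
}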

@GrabYourPitchforks (Member) commented Apr 24, 2019

Since this would be such a massive ecosystem-breaking change, one other thing you'd probably have to do is analyze the entire graph of all dependencies your application consumes. If the change were made to T[] directly (rather than LargeArray<T> or similar), assemblies would need to mark themselves with something indicating "I support this concept!", and the loader would probably want to block / warn when such assemblies are loaded. Otherwise you could end up in a scenario where two different assemblies loaded into the same application have different views of what an array is, which would result in a never-ending bug farm.

@juliusfriedman (Collaborator) commented Apr 26, 2019

Not if LargeArray were an Array, i.e. derived from it (if only internally, perhaps), like I suggested back in the Span threads.

@GrabYourPitchforks (Member) commented Apr 27, 2019

If large array is a glorified array, then you could pass a large array into an existing API that accepts a normal array, and you'd end up right back with the problems as originally described in this thread.

@GrabYourPitchforks (Member) commented Apr 27, 2019

Furthermore, I'm not sure I buy the argument that adding large array would bifurcate the ecosystem. The scenario for large array is that you're operating with enormous data sets (potentially over 2bn elements). By definition you wouldn't be passing this data to legacy APIs anyway since those APIs wouldn't know what to do with that amount of data. Since this scenario is so specialized it almost assumes that you've already accepted that you're limited to calling APIs which have been enlightened.

@juliusfriedman (Collaborator) commented Apr 27, 2019

You have LongLength on Array.

The only fundamental difference is that one is on the LOH and one is not.

By the same token, Span wouldn't be able to hold more than that either, so a large span would be needed as well...

@GPSnoopy (Author) commented Oct 2, 2019

From what I can gather and summarise from the above, there are two pieces of work.

  1. Update the CLR so that Array can work with 64-bit lengths and indices. This includes changes to the Array implementation itself but, as comments above have pointed out, also to System.String and the garbage collector. It is likely to be relatively easy to come up with a fork of coreclr that achieves this as a proof of concept, with no regard for backward compatibility.

  2. Find a realistic way to achieve backward compatibility. This is the hard part. I think it is unlikely to succeed without compromising some aspect of the CLR, whether that is Length throwing on overflow or awkwardly introducing new specific classes like LargeArray.

But the more I think about it, the more I think this issue is missing the point and, ultimately, the real problem with .NET as it stands. Even if the initial proposal were implemented, it would only fix the immediate 64-bit issue with Array and still leave collections and Span with the same indexing and length limitations.

I've started reading The Rust Programming Language (it felt kind of overdue) and it struck me that Rust also mimics C++'s size_t and ssize_t with usize and isize. C#, on the other hand, somehow decided not to expose this CPU architectural detail and forces everyone to the lowest common denominator for most of its API: a 32-bit CPU with 32-bit addressing.

I'd like to emphasise that the 32-bit limitation is purely arbitrary from a user's point of view. There is no such thing as a small array and a big array; an image application should not have to be implemented differently depending on whether it works with 2,147,483,647 pixels or 2,147,483,648 pixels. Especially when it is data driven and the application has little control over what the user is up to. It is even more frustrating when the hardware has long been capable of it. If you do not believe me or think I'm talking nonsense, I invite you to learn how to program for 16-bit MS-DOS with NEAR and FAR pointers (hint: there is a reason why Doom required a 32-bit 386 CPU).

Instead of tinkering around the edges, what is the general appetite for a more ambitious approach to fix this limitation?

Here is a controversial suggestion, a bit of a kick in the hornet's nest:

  • Take on @GrabYourPitchforks' idea and introduce nint and nuint (I also like size and ssize, but I can imagine a lot of clashes with existing code).
  • Allow implicit conversion from int to nint (and uint to nuint) but not the reverse (see the sketch after this list).
  • Change all Length properties on Array, String, Span and collections to return nint (leaving aside the debate of signed vs unsigned and keeping the .NET convention of signed lengths).
  • Change all indexing operators on Array, String, Span and collections to only take nint.
  • Remove the LongLength properties and the old indexing operators.
  • Take a page from the Nullable Reference Types book and make this an optional compilation feature (this is where it hurts).
  • Only allow a nint assembly to depend on another assembly if that assembly also uses the new length types.
  • But allow a global or per-reference "I know what I'm doing" override, in which case calling the old int32 Length property on an Array is undefined (or just wraps the 64-bit value).
  • Spend a decade refactoring for loops and var i = 0.
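
A small sketch of the conversion rule in the second bullet, using the nint/nuint syntax that later shipped in C# 9 (where these conversions behave as proposed: widening is implicit, narrowing is explicit):

using System;

class NativeIntConversions
{
    static void Main()
    {
        int i = 42;
        nint n = i;              // implicit int -> nint: always representable

        // int back = n;         // does not compile: narrowing must be explicit
        int back = (int)n;       // explicit; on 64-bit this truncates if the value doesn't fit

        Console.WriteLine($"{n} {back}");
    }
}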

I understand this is far from ideal and can create uncomfortable ecosystem situations (Python 2 vs Python 3, anyone?). I'm open to suggestions on how to introduce size types in .NET in a way that doesn't leave .NET and C# more irrelevant on modern hardware each year.

@MichalStrehovsky (Member) commented Oct 2, 2019

If we can solve the issue of the GC not tolerating variable-length objects bigger than 2 GB, here are a couple of things that might make a LargeArray<T> with a native-word-sized Length more palatable:

  • Array length is already represented as a pointer-sized integer in the memory layout of arrays in .NET. On 64-bit platforms, the extra bytes serve as padding and are always zero.
  • If we were to introduce a LargeArray<T>, we can give it the same memory layout as existing arrays. The only difference is that the bytes that serve as padding for normal arrays would have a meaning for LargeArray<T>.
  • If the memory layout is the same, code that operates on LargeArray<T> can also operate on normal arrays (and for a smaller LargeArray<T>, vice-versa)
  • We can enable casting between LargeArray<T> and normal arrays
  • Casting from an array to LargeArray<T> is easy - we just follow the normal array casting rules (we could get rid of the covariance though, as @GrabYourPitchforks calls out). If we know the element types, it's basically a no-op.
  • When casting from LargeArray<T> to normal arrays, we would additionally check the size and throw InvalidCastException if the LargeArray<T> instance is too long.
  • Similar rules would apply when casting to collection types (e.g. you can cast LargeArray<T> to ICollection<T> only if the element count is less than MaxInt).
  • Existing code operating on arrays doesn't have to worry about Length being longer than MaxInt. Casting rules guarantee this can't happen.
  • LargeArray<T> would not derive from System.Array, but one could explicitly cast to it (the length-check throwing InvalidCastException would happen in that case). The same for non-generic collection types (e.g. casting to ICollection that has the Int32 Count property would only be allowed for a small LargeArray<T>).

This scheme would allow LargeArray<T> and normal arrays to co-exist pretty seamlessly.
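
A sketch of the narrowing rule only, reusing the illustrative LargeArray<T> from the earlier comment; it is written as a copying helper because user code cannot add runtime casts, whereas the scheme described above would reinterpret the same memory:

using System;

public static class LargeArrayInterop
{
    // Mirrors the proposed rule: converting back to T[] succeeds only when the
    // element count fits in Int32, otherwise it throws InvalidCastException.
    public static T[] ToArray<T>(LargeArray<T> source)
    {
        if (source.Length > (nuint)int.MaxValue)
            throw new InvalidCastException("LargeArray<T> is too long for a T[].");

        var result = new T[(int)source.Length];
        for (nuint i = 0; i < source.Length; i++)
            result[i] = source[i];
        return result;
    }
}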

@kasthack commented Oct 2, 2019

@MichalStrehovsky

Similar rules would apply when casting to collection types (e.g. you can cast LargeArray to ICollection only if the element count is less than MaxInt).

This would make LargeArray<T> incompatible with a lot of older code in these cases:

  • Methods that currently accept a more specific type than they actually need (like ICollection<T> instead of IEnumerable<T>, while the code just enumerates the values).

  • Probably some cases where ICollection<T> isn't used directly but inherited from. For instance, methods that accept ISet/IList/IDictionary<...> (I assume those interfaces and their implementations would eventually be updated for 64-bit lengths as well), which inherit from ICollection<T>.

I would go with overflow checks when .Count is called to keep compatibility, and add a .LongCount property with a default implementation to the old interfaces.
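
For the LongCount part, a sketch using C# 8 default interface members; ILargeCollection<T> is a hypothetical name, since retrofitting ICollection<T> itself would be the real (much harder) change:

using System.Collections.Generic;

public interface ILargeCollection<T> : ICollection<T>
{
    // Default implementation falls back to the 32-bit Count, so existing
    // implementers pick up LongCount without any code changes.
    long LongCount => Count;
}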

@MichalStrehovsky (Member) commented Oct 2, 2019

@kasthack I was trying to come up with a compatible solution where one never has to worry about getting an OverflowException in the middle of a NuGet package that one doesn't have the source for. Allowing a cast to ICollection<T> to succeed no matter the size is really no different from just allowing arrays to be longer than 2 GB (no need for a LargeArray<T> type): some code will work, some won't.

With explicit casting, it's clear that we're crossing a boundary into "legacy land" and we need to make sure "legacy land" can do everything it would reasonably be able to do with a normal ICollection<T> before we do the cast.

methods that accept ISet/IList/IDictionary<...>

Arrays don't implement ISet/IDictionary so a cast to these would never succeed. For IList, the same rules would apply as for ICollection (ICollection was just an example above).

@philjdf commented Oct 3, 2019

@GPSnoopy's post makes me wonder whether the following variation might make sense:

  1. Introduce new nint and nuint types, but don't change the signatures of anything to use them. Nothing breaks.
  2. Introduce new array types (with fixed covariance), new span types, etc., which use nint and nuint. Keep the old ones and don't touch them. Make it fast and easy to convert between old and new versions of these types (with an exception if your 64-bit value is too big to fit into the 32-bit counterpart), but conversion should be explicit. Nothing breaks, type safety and all that.
  3. Add a C# compiler switch /HeyEveryoneIts2019 which, when you write double[], gives you the new type of array instead of the old one, makes everything nint and nuint, and has the compiler add conservative/obvious conversions to/from old-style arrays when you call outside assemblies that want old-style arrays. This way, if it gets through the conversion without an exception, you won't break any old referenced code.

@GSPP commented Oct 3, 2019

It has been proposed that we could make Array.Length and array indices native-sized (nint or IntPtr).

This would be a portability issue. Code would need to be tested on both bitnesses, which is currently rarely required for most codebases. Code on the internet would be subtly broken all the time because developers would only test their own bitness.

Likely, there would be language-level awkwardness when nint and int come into contact. This awkwardness is a main reason unsigned types are not generally used.

In C languages the zoo of integer types with their loose size guarantees is a pain point.

I don't think we want to routinely use variable-size types in normal code. If nint is introduced as a type, it should be for special situations. Likely, it is most useful as a performance optimization or when interoperating with native code.


All arrays should transparently support large sizes and large indexing. There should be no LargeArray<T> and no LargeSpan<T>, so that we don't bifurcate the type system; that would entail an enormous duplication of APIs that operate on arrays and spans.

If the object-size increase on 32-bit is considered a problem (it might well be), this could be behind a config switch.


Code that cares about large arrays needs to switch to long.

Likely, it will be fine even in the very long term to keep most code on int. In my experience, over all the code I have ever worked on, it is quite rare to have large collections. Most collections are somehow related to a concept that is inherently fairly limited. For example, a list of customers will not have billions of items in it unless you work for one of 10 companies in the entire world; they can use long. Luckily for us, our reality is structured so that most types of objects do not exist in the billions.

I see no realistic way to upgrade the existing collection system to 64-bit indexes. It would create unacceptable compatibility issues. For example, if ICollection<T>.Count became 64-bit, all calling code would be broken (all arithmetic, but also storing indexes anywhere). This must be opt-in.

It would be nicer to have a more fundamental and more elegant solution. But I think this is the best tradeoff that we can achieve.

@FraserWaters-GR commented Oct 8, 2019

Just note there is already a case today where Length can throw: multi-dimensional arrays can be large enough that the total number of elements is greater than int.MaxValue. I think this is why LongLength was originally added.

@huoyaoyuan (Contributor) commented Oct 8, 2019

Note: currently, when you write foreach over an array, the C# (and also VB) compiler actually generates a for loop and stores the index in 32 bits. This means that existing code would break with arrays of more than 2G elements, or at least a recompile would be required.
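
Roughly, the lowering looks like this (a hand-written sketch of what the compiler generates for arrays, not its literal output):

using System;

class ForeachLoweringExample
{
    static void Main()
    {
        byte[] data = new byte[10];

        // What you write:
        foreach (byte b in data)
        {
            Console.Write(b);
        }

        // Approximately what the compiler emits for arrays — note the int index,
        // which is why arrays of more than int.MaxValue elements would break
        // existing binaries without at least a recompile:
        for (int i = 0; i < data.Length; i++)
        {
            byte b = data[i];
            Console.Write(b);
        }
    }
}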

I really hope all size-like parameters across the ecosystem end up using nuint (avoiding the check for >= 0).
There could be a [SizeAttribute] on all such parameters, and the JIT could generate the positive guard and widening needed to allow existing int-size-compiled assemblies to run against a native-sized corelib.
