[API Proposal]: Future of Numerics and AI - Provide a downlevel System.Numerics.Tensors.TensorPrimitives class #89639
Comments
Tagging subscribers to this area: @dotnet/area-system-numerics
Author: tannergooding
Assignees: -
Milestone: -
What is going to happen with the existing content of the `System.Numerics.Tensors` package?
They will be removed and exist purely in source control history for the near term. The general design of the types in the existing preview package isn't incorrect. However, it was done as a prototype, and done several releases ago at this point, so it doesn't take into account newer features. What we're doing here is providing the foundations on which the package can be stabilized and shipped properly, with the initial stabilization being done for .NET Standard to answer some core needs.
Can we rename the
For the tensor operations, should there be
Re:
We could, but we already define other APIs named this way. This is also common in other libraries/spaces.
It would have to be named differently; we could also expose both.
Same here as with "Euclidean Distance". Many libraries simply expose the core name. There's a balance between clarity and matching existing industry standards that we want to strike.
I agree on the distance part; keep `Distance`. For the sigmoid part I disagree. There are too many sigmoid functions available and in use, each with somewhat different characteristics.
Well... if other libraries have it that way, it doesn't mean it's correct and that .NET has to follow along... it could be done better 😉 (I hope you get how I mean this)
The consideration is that while it is technically an entire class of functions, there is effectively a domain standard that "sigmoid" by itself means the logistic function. This is true in Intel MKL, Apple Accelerate, Wolfram Alpha, and many other libraries oriented towards linear algebra (BLAS/LAPACK), as well as machine learning and math reference sources. It is the de-facto response when you search "sigmoid function programming", the reference image used in almost any explanation of what a sigmoid is, and so on. Being different in this case is likely to hurt more than it would help, particularly for the domain of users that are most likely to use these types, and especially if they have ever done similar work in another language or domain. Notably, it is also what is done on many scientific or graphing calculators which support it.
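For reference, the "sigmoid" being discussed is the logistic function, 1 / (1 + e^(-x)), applied element-wise. A minimal scalar sketch of those semantics, mirroring the span-based shape from the proposal (not the vectorized implementation):

```csharp
// Naive, non-vectorized sketch of an element-wise logistic sigmoid.
// Computes destination[i] = 1 / (1 + e^(-x[i])) for every element.
static void Sigmoid(ReadOnlySpan<float> x, Span<float> destination)
{
    for (int i = 0; i < x.Length; i++)
    {
        destination[i] = 1f / (1f + MathF.Exp(-x[i]));
    }
}
```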
// A mathematical operation that takes a vector and returns a unit vector in the same direction.
// It is widely used in linear algebra and machine learning.
public static float Normalize(ReadOnlySpan<float> x); // BLAS1: nrm2

The description is misleading, since the method presumably returns the scalar representing the Euclidean norm rather than a unit vector.
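To make the point concrete: a `float`-returning norm (BLAS `nrm2`) computes the scalar Euclidean (L2) norm of the input, it does not normalize it. A naive sketch of those semantics, using the `Norm` name from the proposal:

```csharp
// Naive sketch: returns the scalar L2 norm, sqrt(x[0]^2 + x[1]^2 + ...).
// The input span is not modified; no unit vector is produced.
static float Norm(ReadOnlySpan<float> x)
{
    float sumOfSquares = 0f;
    for (int i = 0; i < x.Length; i++)
    {
        sumOfSquares += x[i] * x[i];
    }
    return MathF.Sqrt(sumOfSquares);
}
```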
Is the System.Numerics.Tensors library an attempt to imitate classical tensors from tensor calculus? If so, it looks like it would need to implement the main concepts from tensor calculus and differential geometry, like co-/contravariance, contraction, differentiation, Christoffel symbols, and so on. But if it is intended only for ML/AI, that is not obligatory. Who is the expected client of this library?
namespace System.Numerics.Tensors;
public static partial class TensorPrimitives
{
public static void Abs(ReadOnlySpan<float> x, Span<float> destination);
public static void Add(ReadOnlySpan<float> x, ReadOnlySpan<float> y, Span<float> destination);
public static void Add(ReadOnlySpan<float> x, float y, Span<float> destination);
public static void AddMultiply(ReadOnlySpan<float> x, ReadOnlySpan<float> y, ReadOnlySpan<float> multiplier, Span<float> destination);
public static void AddMultiply(ReadOnlySpan<float> x, ReadOnlySpan<float> y, float multiplier, Span<float> destination);
public static void AddMultiply(ReadOnlySpan<float> x, float y, ReadOnlySpan<float> multiplier, Span<float> destination);
public static void Cosh(ReadOnlySpan<float> x, Span<float> destination);
public static float CosineSimilarity(ReadOnlySpan<float> x, ReadOnlySpan<float> y);
public static float Distance(ReadOnlySpan<float> x, ReadOnlySpan<float> y);
public static void Divide(ReadOnlySpan<float> x, ReadOnlySpan<float> y, Span<float> destination);
public static void Divide(ReadOnlySpan<float> x, float y, Span<float> destination);
public static float Dot(ReadOnlySpan<float> x, ReadOnlySpan<float> y);
public static void Exp(ReadOnlySpan<float> x, Span<float> destination);
public static int IndexOfMax(ReadOnlySpan<float> value);
public static int IndexOfMin(ReadOnlySpan<float> value);
public static int IndexOfMaxMagnitude(ReadOnlySpan<float> value);
public static int IndexOfMinMagnitude(ReadOnlySpan<float> value);
public static float Norm(ReadOnlySpan<float> x);
public static void Log(ReadOnlySpan<float> x, Span<float> destination);
public static void Log2(ReadOnlySpan<float> x, Span<float> destination);
public static float Max(ReadOnlySpan<float> value);
public static void Max(ReadOnlySpan<float> x, ReadOnlySpan<float> y, Span<float> destination);
public static float MaxMagnitude(ReadOnlySpan<float> value);
public static void MaxMagnitude(ReadOnlySpan<float> x, ReadOnlySpan<float> y, Span<float> destination);
public static float Min(ReadOnlySpan<float> value);
public static void Min(ReadOnlySpan<float> x, ReadOnlySpan<float> y, Span<float> destination);
public static float MinMagnitude(ReadOnlySpan<float> value);
public static void MinMagnitude(ReadOnlySpan<float> x, ReadOnlySpan<float> y, Span<float> destination);
public static void Multiply(ReadOnlySpan<float> x, ReadOnlySpan<float> y, Span<float> destination);
public static void Multiply(ReadOnlySpan<float> x, float y, Span<float> destination);
public static void MultiplyAdd(ReadOnlySpan<float> x, ReadOnlySpan<float> y, ReadOnlySpan<float> addend, Span<float> destination);
public static void MultiplyAdd(ReadOnlySpan<float> x, float y, ReadOnlySpan<float> addend, Span<float> destination);
public static void MultiplyAdd(ReadOnlySpan<float> x, ReadOnlySpan<float> y, float addend, Span<float> destination);
public static void Negate(ReadOnlySpan<float> values, Span<float> destination);
public static float Product(ReadOnlySpan<float> value);
public static float ProductOfSums(ReadOnlySpan<float> x, ReadOnlySpan<float> y);
public static float ProductOfDifferences(ReadOnlySpan<float> x, ReadOnlySpan<float> y);
public static void Sigmoid(ReadOnlySpan<float> x, Span<float> destination);
public static void Sinh(ReadOnlySpan<float> x, Span<float> destination);
public static float SoftMax(ReadOnlySpan<float> x);
public static void Subtract(ReadOnlySpan<float> x, ReadOnlySpan<float> y, Span<float> destination);
public static void Subtract(ReadOnlySpan<float> x, float y, Span<float> destination);
public static float Sum(ReadOnlySpan<float> value);
public static float SumOfMagnitudes(ReadOnlySpan<float> value);
public static float SumOfSquares(ReadOnlySpan<float> value);
public static void Tanh(ReadOnlySpan<float> x, Span<float> destination);
// .NET 7+
public static void ConvertToHalf(ReadOnlySpan<float> source, Span<Half> destination);
public static void ConvertToSingle(ReadOnlySpan<Half> source, Span<float> destination);
}
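To illustrate the semantics of one of the reductions above: `CosineSimilarity` is dot(x, y) / (|x| · |y|). A naive scalar sketch; the shipped implementation is expected to be vectorized, so this is illustrative only:

```csharp
// Scalar reference sketch of CosineSimilarity: dot(x, y) / (|x| * |y|).
// Shows semantics only; the real implementation would be SIMD-accelerated.
static float CosineSimilarity(ReadOnlySpan<float> x, ReadOnlySpan<float> y)
{
    if (x.Length != y.Length)
        throw new ArgumentException("Inputs must have the same length.");

    float dot = 0f, xSumOfSquares = 0f, ySumOfSquares = 0f;
    for (int i = 0; i < x.Length; i++)
    {
        dot += x[i] * y[i];
        xSumOfSquares += x[i] * x[i];
        ySumOfSquares += y[i] * y[i];
    }
    return dot / (MathF.Sqrt(xSumOfSquares) * MathF.Sqrt(ySumOfSquares));
}
```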
Looks like this signature should be:

public static Span<float> Sigmoid(ReadOnlySpan<float> x, Span<float> destination);
Fixed. |
There's already a tensor library in the .NET space, namely https://github.com/dotnet/TorchSharp. I wonder what the benefit is of putting thousands of working hours into reinventing the wheel, rather than just spending a bit of time improving the quality and stability of the existing one?
Strictly speaking, TorchSharp isn't in the .NET space. The implementation of tensor storage and operations is in native code. I believe this proposal is more about tensor operations in managed memory.
And this doesn't make sense either - using managed memory and the CPU rather than video memory and the GPU is at least 10x slower, so nobody will use it for real training.
TorchSharp is geared towards Python developers who want to use .NET, and its APIs and naming conventions are not idiomatic. Moreover, it is a wrapper over a C++ implementation.
The purpose of this isn't to compete with or replace something like TorchSharp/Tensorflow/etc. It's to provide a core set of primitive linear algebra APIs that are currently "missing" from .NET. That core set is effectively ensuring we have BLAS/LAPACK-like functionality provided out of the box, and will entail these core "workhorse" algorithms and a friendly `Tensor<T>` type.

This will be beneficial for prototyping and potentially various types of interchange as well. It also doesn't exclude the ability for some other tooling to come in and allow execution of such code on the GPU (see ComputeSharp, which does this for regular C# code), etc.

Really, this can be viewed as simply the "next step" on top of the existing math and numerics "primitives" we already provide, expanding them to include some additional APIs/algorithms that have been the de-facto industry standard for nearly 50 years, with this proposal covering the bare minimum support and a future proposal extending that to the rest of the relevant surface area.
Right, so I tried to question the fact that linear algebra APIs are really "missing" :) If you look at the description of torch (which is what gets wrapped by TorchSharp), it covers exactly what you describe as "tensors and workhorse algorithms", plus some utilities, and this is a huge amount of thorough work.

In the API above there is a series of choices that are highly opinionated and are better made in a separate NuGet library rather than in .NET itself.
These APIs are not in the current BCL, but I'm not sure that equates to missing. Based on the above queries, I think we should understand who we believe the users of these will be, as opposed to people consuming the existing high quality NuGet packages. What makes the existing .NET solutions, TorchSharp/Tensorflow/etc., need in-box replacements?
In some cases it is the platform itself needing the API, e.g. various places where we hand-roll vectorized implementations, such as with Sum/Average/Min/Max/etc. in LINQ. There are also places where we'd use the optimized functionality if it were available, but where it's not worth that code maintaining its own optimized implementation, so it doesn't. APIs in the platform can't depend on external sources.

In other cases, whether we like it or not, there are consumers that are very limited in what non-platform dependencies they're able to take. We know of a non-trivial number in this camp; I'd be happy to chat offline, @AaronRobinsonMSFT, if you'd like.

In other cases, consumers can take a dependency, but don't want to force a large external dependency purely for the one or two functions they need. If they're in the platform, they'd use them; otherwise, they end up rolling their own.

We do want developers reaching for and using NuGet. We will still push developers to solutions like TorchSharp. This is not a replacement or competitor for that. This is about building some widely used, core routines into the core libraries, so that they're available to the breadth of applications that need them.
The above three statements are a good insight into the purpose of these APIs, thanks. I agree with this intent and I think it aligns with what was mentioned in #89639 (comment).

Given the above statement, I think some confusion is being introduced with the name of the class (that is, `Tensor`). Given the queries we've seen here and in some other issues, the community is expressing a misalignment between the aforementioned intent and potential expectations, and that, to me, is a red flag for the name. For example, if the class were named differently, some of that confusion might go away.

@stephentoub I'll reach out tomorrow.
Many names/terms in programming come from mathematics, and they are almost all molded to fit into some similar but not quite identical abstraction.

This proposal is starting the addition of a set of linear algebra APIs that have a 45+ year history and which have broad baseline support in multiple languages/ecosystems. Developers who need more advanced or complete functionality should opt to use more advanced libraries.
I would strongly disagree. What's being provided here is core functionality that several other ecosystems provide and which higher level libraries build on top of in those ecosystems. There are many benefits to providing such baseline functionality in-box, and it is not a far step from what we're already providing; it's simply expanding that support from scalars and fixed-length vectors/matrices to arbitrary-length vectors/matrices.

It allows .NET developers to achieve more with .NET, to get the expected behavior/performance across all platforms we support (Arm64, x64, etc.), and ensures that they don't require large (and sometimes limited -- many of the most prominent packages for BLAS/LAPACK-like functionality are hardware or vendor specific and may not work or behave well on other platforms/hardware) external dependencies for what is effectively baseline functionality. They only need the additional packages when going beyond the basics, which is the same experience they'd get in other prominent languages/ecosystems.
There are users, as with any larger proposal, who are engaging in conversation on the topic and direction. As is typical, users who are happy with the direction have less reason to engage directly and are simply leaving thumbs ups or other positive reactions (or no reaction). Users who are unhappy are engaging more directly with questions/concerns. Listening to and responding to those questions/concerns is important; but it doesn't mean there is now a big red flag or that this is the wrong direction. There will always be a subset of the community that is unhappy with any feature provided (BCL, Language, Runtime, etc). There will always be a subset of the community that believes a given feature isn't worthwhile, that it is the wrong direction, etc. So the feedback has to be taken and weighed against what domain experts are asking for, what the area owners believe the correct direction is, what other language ecosystems are doing, etc. We also got a lot of feedback on generic math that we should be exposing concepts like rings, groups, and fields. We got feedback that having concrete interfaces for nominal typing was the wrong direction and that instead it should be a more structurally typed based approach. Such considerations were discussed at length and a decision was made that benefited the entirety of .NET. |
Are there examples in other ecosystems where basic algebra operations like these are exposed under a tensor-flavored name?
The namespace here includes `Tensors`.
I can of course accept that.
@jkotas I found the following:

Julia has a `LinearAlgebra` standard library module. Python has NumPy, and that has a `linalg` submodule. Rust has a myriad of linear algebra crates; most are clear about "linear algebra". It also has a "standard" tensor crate that isn't generic - food for thought with respect to the one we have been working on. The generic aspect also aligns with question (1) from #89639 (comment).

Based on this (happy to see counterexamples), I think a rename is reasonable given the stated intent of this API as found in #89639 (comment).
We have been explicitly moving away from that kind of naming. It will also hinder the ability to design and expose a tensor interchange type/interfaces that will allow larger libraries to have a common interface and expand the functionality themselves.

I think people are seeing the name `Tensor` and missing that this static class here is just the first part; it's designed and intended to be paired with a `Tensor<T>` type.
We could potentially split it out into a separate class.
How would this impact future features? Right now the functions are all specifically using `float`. Note that I am focused on the stated reason for this work in #89639 (comment). Providing for future types and scenarios that have yet to be reviewed or that are planned seems premature when the industry seems to have no confusion with the existing languages in the space (that is, Python). I don't see how placing these linear algebra APIs within a tensor-named class helps.
This assumes that we won't need to end up introducing a non-generic Tensor to deliver a usable set of exchange types. I am not convinced that it is a safe assumption to make. If we need to introduce a non-generic Tensor, we won't have a good option to go with. In other words, this proposal is burning the good name that we may want to use in the future for something else.
The other name will already be burned by having a generic `Tensor<T>`. A non-generic tensor would effectively end up like the old non-generic collection types.
We'd end up, once again, with APIs in two distinct locations, and having to face the problems around overload resolution and discoverability. It's the same problem we had for years elsewhere in the libraries.
We aren't diverging, IMO. We're exposing the same BLAS/LAPACK APIs in a way that allows developers to have type safety, use operators, etc.; in a way that fits in with the broader design goal and with the way .NET itself has been moving towards doing these things; in a way that makes them friendlier and easier to use. If we didn't need to support .NET Standard at all, we might consider not exposing these driver APIs at all and only exposing `Tensor<T>`.
Part of any API proposal needs to take into account past, current, and future directions, to try and help ensure we don't end up with something that cannot evolve, that we don't end up with work that will just be thrown away, etc. We have an overall design of where this is supposed to end up, and we have a more immediate goal of needing to provide a .NET Standard OOB for a subset of that functionality. This is simply taking that subset and exposing it.

What is effectively being suggested as an alternative is that we take what's proposed here and expose it under a different, linear-algebra-oriented name. This ultimately splits things apart and makes it more nuanced when we decide to expose a particular API that isn't quite linear algebra but which does fit into the broader tensor support. I think that will just hurt things longer term, and we're ultimately better off having them in one place.
(Issue retitled from "System.Numerics.Tensor class" to "System.Numerics.Tensors.TensorPrimitives class")
There was an offline discussion and then a confirmation with API review that we will go with `System.Numerics.Tensors.TensorPrimitives`.
Wouldn't using that name be confusing?
We didn't believe so. |
Taking into account that none of those operations have anything to do with tensors or AI, but rather with spans, would it make more sense to name it after spans instead?
The APIs here are the primitive driver/workhorse APIs and therefore take a span representing the backing memory for a tensor. They are still "over tensors" even if there isn't a concrete type for them. As more of BLAS/LAPACK is added, that will also include APIs supporting sparse data and matrix data. Concrete tensor types will come in a future proposal.
What about vector math for other operations? I have a vector math project (that I always intended to open source but never got around to) that did some of these math operations, which we used for signal processing.
For Log and Exp we took inspiration from here: https://github.com/dotnet/runtime/blob/c1ba5c50b8ae47dda48a239be6dd9c7b6acc5c48/src/tests/JIT/Performance/CodeQuality/HWIntrinsic/X86/PacketTracer/VectorMath.cs

For anyone interested, we took some Python code and turned it into C# and made the code 44400 times faster: from 222 milliseconds to 5 microseconds 😅

The reason our Pow API exposes a length was that the spans might be bigger than the range we were computing on, or that different ranges were using different exponents for the pow.
This proposal was only about the "core" surface area needed for the .NET Standard target. APIs like Pow can be considered in a future proposal.
I have prototyped a tensor library earlier this year (not OSS yet), and discovered that in order to have a meaningful system that can plug into heterogeneous computation (CPU, GPU, tensor cores, etc.) and symbolic calculation (for computing derivatives and backward propagation), the tensor types have to be built around a computation graph implicitly built from operations (operations are nodes while tensors live on the edges), and that computation is completely async and deferred to a computation engine that might execute things on the CPU or GPU, etc. You also need all of these operations on tensors to be pluggable (e.g. a new kind of execution engine, or dispatch between CPU/GPU workloads). Anything designed around a straight implementation (e.g. like Vector3/Vector4) would be quite limited in its usage/usefulness.
Future of Numerics and AI
.NET provides a broad range of support for various development domains, ranging from the creation of performance-oriented framework code to the rapid development of cloud native services, and beyond. In recent years, especially with the rise of AI and machine learning, there has been a prevalent push towards improving numerical support and allowing developers to more easily write and consume general purpose and reusable algorithms that work with a range of types and scenarios. While .NET's support for scalar algorithms and fixed-sized vectors/matrices is strong and continues to grow, its built-in library support for other concepts, such as tensors and arbitrary length vectors/matrices, is behind the times. Developers writing .NET applications, services, and libraries currently need to seek external dependencies in order to utilize functionality that is considered core or built-in to other ecosystems. In particular, for developers incorporating AI and copilots into their existing .NET applications and services, we strive to ensure that the core numerics support necessary to be successful is available and efficient, and that .NET developers are not forced to seek out non-.NET solutions in order for their .NET projects to be successful.
.NET Framework Compatible APIs
While there are many reasons for developers to migrate to modern .NET, and it is the go-to target for many new libraries or applications, there remain many large repos that have been around for years where migration can be non-trivial. And while there are many new language and runtime features which cannot ever work on .NET Framework, there still remains a subset of APIs that would be beneficial to provide and which can help those existing codebases bridge the gap until they are able to successfully complete migration.
It is therefore proposed that an out-of-band `System.Numerics.Tensors` package be provided which provides a core set of APIs that are compatible with and appropriate for use on .NET Standard 2.0. There are also two types that would be beneficial to polyfill downlevel as part of this process, `System.Half` and `System.MathF`, which will significantly improve the usability of the libraries for common scenarios.

Provide the `Tensor` class

Note:
The following is an extreme simplification meant to give the general premise behind the APIs being exposed. It is intentionally skimming over several concepts and deeper domain specific terminology that is not critical to cover for the design proposal to be reviewed.
At a very high overview, you have `scalars`, `vectors`, and `matrices`. A `scalar` is largely just a single value `T`, a `vector` is a set of `scalars`, and a `matrix` is a set of `vectors`. All of these can be loosely thought of as types of `tensors`, and they have various relationships and can be used to build up and represent higher types. That is, you could consider that `T` is a `scalar`, or a `vector of length 1`, or a `1x1` matrix. Likewise, you can then consider that `T[]` is a vector of unknown length and that `T[,]` represents a 2-dimensional matrix, and so on. All of these can be defined as types of tensors. -- As a note, this terminology is partially where the name for `std::vector<T>` in C++ (and similar in other languages) comes from. This is also where the general considerations of "vectorization" when writing SIMD or SIMT code arise, although they are typically over fixed-length, rather than arbitrary-length, data.

From the mathematical perspective, many of the operations you can do on scalars also apply to the higher types. There can sometimes be limitations or other considerations that can specialize or restrict these operations, but in principle they tend to exist and work mostly the same way. It is therefore generally desirable to support many such operations more broadly. This is particularly prevalent for "element-wise" operations, which could be thought of as effectively doing `array.Select((x) => M(x))` and where an explicit API can often provide significant performance improvements over the naive approach.

The core set of APIs described below cover the main arithmetic and conversion operators provided for `T` as well as element-wise operations for the functionality exposed by `System.Math`/`System.MathF`. Some design notes include:

- Where a scalar API `Method` would return a `bool`, the vector API has two overloads, `MethodAny` and `MethodAll`.
- Developers are expected to use `Span<T>` to access these APIs; overloads taking `T[]` are not provided.
- Rather than a count of elements written, APIs return a `Span<T>` instead. This would simply be `destination` sliced to the appropriate length. This still provides the required information on number of elements written, but gives the additional advantage that the result can be immediately passed into the next user of the algorithm without requiring the user to slice or do length checks themselves.

For targeting modern .NET, there will be a separate future proposal detailing a `Tensor<T>` type. This then matches a similar split we have for other generic types, such as `Vector<T>` and `Vector`. The non-generic `Tensor` class holds extension methods, APIs that are only meant to support a particular set of `T`, and static APIs, while `Tensor<T>` will hold operators, instance methods, and core properties. The APIs defined for use on .NET Framework are effectively the "workhorse" APIs that `Tensor<T>` would then delegate to. They more closely resemble the signatures from the BLAS and LAPACK libraries, which are the industry standard baseline for linear algebra operations, and allow tensor-like functionality to be supported for arbitrary memory while allowing modern .NET to provide a type-safe and friendlier way to work with such functionality that can simultaneously take advantage of newer language/runtime features, such as static virtuals in interfaces or generic math.

Polyfill `System.Half` in `Microsoft.Bcl.Half`

`System.Half` is a core interchange type for AI scenarios, often being used to minify the storage impact for the tens of thousands to millions of data points that need to be stored. It is not, however, as frequently used for computation. We initially exposed this type in .NET 5 purely as an interchange type and it is therefore lacking the arithmetic operators. These operators were later added in .NET 6/7 as a part of the generic math initiative. For .NET Framework, the interchange surface area should be sufficient and will follow the general guidance required for polyfills: that they match the initial shape we shipped, even if the version it shipped in is no longer in support (that is, the .NET Standard 2.0 surface area needs to remain compatible with .NET 5, even though .NET 5 is out of support).

Polyfill `System.MathF` in `Microsoft.Bcl.MathF`

`System.MathF` was added in .NET Core 2.0 to provide `float` support that had parity with the `double` support in `System.Math`. Given that `float` is the core computational type used in AI scenarios, many downlevel libraries currently provide their own internal wrappers around `System.Math`. .NET ships several such shims for its own scenarios and the proposed `System.Numerics` library would be no exception. As such, it would be beneficial to simply provide this functionality officially and allow such targets to remove their shims. This simplifies their experience and may give additional performance or correctness over the naive approach.
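The element-wise performance point can be illustrated with the existing `Vector<T>` support already in `System.Numerics`. A hedged sketch of how an element-wise `Add` over spans might be vectorized (illustrative only; the actual implementation may differ):

```csharp
using System;
using System.Numerics;

// Sketch of a vectorized element-wise Add using the existing Vector<T> type.
// Assumes x, y, and destination all have the same length.
static void Add(ReadOnlySpan<float> x, ReadOnlySpan<float> y, Span<float> destination)
{
    int i = 0;

    // Process Vector<float>.Count elements at a time using SIMD.
    for (; i <= x.Length - Vector<float>.Count; i += Vector<float>.Count)
    {
        (new Vector<float>(x.Slice(i)) + new Vector<float>(y.Slice(i)))
            .CopyTo(destination.Slice(i));
    }

    // Scalar tail loop for any remaining elements.
    for (; i < x.Length; i++)
    {
        destination[i] = x[i] + y[i];
    }
}
```

This is the kind of hand-rolled loop that an in-box `TensorPrimitives.Add` would let callers delete, while also handling alignment, tail processing, and platform differences centrally.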