
proposal: spec: add support for int128 and uint128 #9455

Open

runner-mei opened this issue Dec 27, 2014 · 87 comments

@runner-mei commented Dec 27, 2014

No description provided.

@mikioh changed the title from "can add int128 and uint128 support" to "spec: add support for int128 and uint128" on Dec 27, 2014
@ianlancetaylor (Contributor) commented Dec 27, 2014

Can you provide a real-life use case?

@twoleds commented Feb 12, 2015

It's useful for UUIDs, IPv6, hashing (MD5), etc. We could store an IPv6 address in a uint128 instead of a byte slice and do some arithmetic with subnetworks, such as checking whether an IP address falls within a range.
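
For illustration, a minimal sketch of this range check in today's Go, using a two-uint64 struct (Uint128, Less, and InRange are names made up for this example, not an existing API):

```go
package main

import "fmt"

// Uint128 is a hypothetical stand-in for a builtin uint128,
// holding an IPv6 address as two 64-bit halves.
type Uint128 struct {
	Hi, Lo uint64
}

// Less reports whether a < b, comparing the high halves first.
func (a Uint128) Less(b Uint128) bool {
	return a.Hi < b.Hi || (a.Hi == b.Hi && a.Lo < b.Lo)
}

// InRange reports whether ip lies within [lo, hi].
func InRange(ip, lo, hi Uint128) bool {
	return !ip.Less(lo) && !hi.Less(ip)
}

func main() {
	lo := Uint128{Hi: 0x20010db800000000, Lo: 0}          // 2001:db8::
	hi := Uint128{Hi: 0x20010db8ffffffff, Lo: ^uint64(0)} // end of 2001:db8::/32
	ip := Uint128{Hi: 0x20010db812345678, Lo: 42}
	fmt.Println(InRange(ip, lo, hi)) // true
}
```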

@minux (Member) commented Feb 12, 2015

These use cases are not strong enough to justify adding 128-bit types,
which would be a big task to emulate on all targets.

  1. MD5 is no longer secure, so there is little benefit in adding types to
    store its result.
  2. How often do you need to manipulate a UUID as a number rather than as a
    byte slice (or a string)?
  3. The other use cases can be done with math/big just as easily.

Also note that GCC doesn't support __int128 on 32-bit targets, and Go does
want consistent language features across all supported architectures.

@twoleds commented Feb 13, 2015

I agree with you that there aren't many benefits to int128/uint128. Perhaps slightly better performance for comparing and hashing in maps when using uint128 to store a UUID or IPv6 address, since byte slices and strings require loops and extra memory, but I don't think that's important.

@runner-mei (Author) commented Jan 18, 2017

My use case: tallying the total traffic of all interfaces of a device over one day.

@rsc changed the title from "spec: add support for int128 and uint128" to "proposal: spec: add support for int128 and uint128" on Jun 20, 2017
@rsc added the Go2 label on Jun 20, 2017
@the80srobot commented Jul 12, 2017

In addition to crypto, UUID and IPv6, int128 would be enormously helpful for volatile memory analysis, by giving you a safe uintptr diff type.

@iMartyn commented Oct 2, 2017

It also makes code that much more readable when you have to deal with large IDs, e.g. those you get back from the Google Directory API, among others (effectively they're UUIDs encoded as uint128).
Obviously you can use math/big, but that makes the code much harder to reason about, because you have to mentally parse the API calls before you can read the logic.

@ericlagergren (Contributor) commented Dec 22, 2017

Adding a data point: I ran into a situation in a current project where I need to compute (x * y) % m, where x*y can overflow and would require a 128-bit integer. Doing the modulus by hand on the high and low halves is needlessly complicated.
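
A sketch of how this can be emulated today with math/bits (Go 1.12+); the mulMod helper is a made-up name, not a stdlib function:

```go
package main

import (
	"fmt"
	"math/bits"
)

// mulMod computes (x*y) % m without overflow by keeping the full
// 128-bit product. Reducing the high half first keeps bits.Div64
// from panicking on quotient overflow, since
// (hi*2^64 + lo) mod m == ((hi mod m)*2^64 + lo) mod m.
func mulMod(x, y, m uint64) uint64 {
	hi, lo := bits.Mul64(x, y)
	_, rem := bits.Div64(hi%m, lo, m)
	return rem
}

func main() {
	fmt.Println(mulMod(1<<63, 3, 1000000007))
}
```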

@jfesler commented Jan 6, 2018

Another +1 for both IPv6 and UUID cases.

@ianlancetaylor (Contributor) commented Jan 9, 2018

The examples of UUID and IPv6 are not convincing to me. Those types can be done as a struct just as easily.

It's not clear that this is worth doing if processors do not have hardware support for the type; are there processors with 128-bit integer multiply and divide instructions?

See also #19623.

@ericlagergren (Contributor) commented Jan 10, 2018

@ianlancetaylor I do not think so. GCC seems to use the obvious 6 instructions for mul, 4 for add and sub, and a more involved routine for quo. I'm not sure how anybody could emulate mul, add, or sub that precisely (in Go) without assembly, but that prohibits inlining and adds function-call overhead.

@ianlancetaylor (Contributor) commented Jan 10, 2018

The fact that the current tools can't yet inline asm code is not in itself an argument for changing the language. We would additionally need to see a significant need for efficient int128 arithmetic.

If there were hardware support, that in itself would suggest a need, since presumably the processor manufacturers would only add such instructions if people wanted them.

@ericlagergren (Contributor) commented Jan 10, 2018

> If there were hardware support, that in itself would suggest a need

A need that, presumably, compilers couldn't meet by adding their own 128-bit types (which they have done). I mean, for all but division it's a couple of extra instructions. For most cases that's been sufficient.

I confess I'm not an expert on CPU characteristics, but my understanding is that much of the driving force behind adding larger register sizes was the ability to address more memory. That makes me think general 128-bit hardware support is rather unlikely.

Yet major compilers have added support (GCC, Clang, ICC, ...) for C and C++. Rust has them because of LLVM. Julia has them as well.

Other languages and compilers having support isn't sufficient reason to make a language change, sure. But it's evidence there exists a need other than simply UUIDs.

Their domain seems to lie in cryptography and arbitrary-precision calculations, for now.

@FlorianUekermann (Contributor) commented Jan 11, 2018

Additional use cases are timestamps, cryptographic nonces, and database keys.

Examples like database keys, nonces, and UUIDs represent a pretty large class of applications where keys/handles can never be reused or number ranges must not overlap.

@ianlancetaylor (Contributor) commented Jan 11, 2018

@FlorianUekermann People keep saying UUID, but I see no reason that a UUID could not be implemented using a struct. It's not like people use arithmetic on a UUID once it has been created. The only reason to add int128 to the language is if people are going to use arithmetic on values of that type.

@FlorianUekermann (Contributor) commented Jan 11, 2018

> It's not like people use arithmetic on a UUID once it has been created

They do. UUIDs don't have to be random. Sequential UUIDs are common in databases, for example. Combine sequential UUIDs with some range partitioning and you'll wish for integer ops in practice.

Still, timestamps seem like the most obvious example to me, where 64 bits are not sufficient and the full range of arithmetic operations is obviously meaningful. Had the type been available, I would expect the time package to contain some examples.

How big of an undertaking is the implementation of div? The rest seems rather straightforward.

@ericlagergren (Contributor) commented Jan 11, 2018

> How big of an undertaking is the implementation of div?

The code for naïve 128-bit division exists in the stdlib already (math/big). The PowerPC Compiler Writer’s Guide has a 32-bit implementation of 64-bit division (https://cr.yp.to/2005-590/powerpc-cwg.pdf, page 82) that can be translated upwards.
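
For the 128-by-64 case, each schoolbook step is a single call with math/bits; a sketch under that assumption (div128by64 is a made-up name, and the general 128-by-128 case still needs the longer normalize-and-correct routine the guide describes):

```go
package main

import (
	"fmt"
	"math/bits"
)

// div128by64 divides the 128-bit value hi:lo by d, returning the
// 128-bit quotient qhi:qlo and the remainder. Like integer division,
// it panics if d == 0. bits.Div64 cannot overflow here, because
// hi%d < d guarantees the partial quotient fits in 64 bits.
func div128by64(hi, lo, d uint64) (qhi, qlo, rem uint64) {
	qhi = hi / d
	qlo, rem = bits.Div64(hi%d, lo, d)
	return
}

func main() {
	// 2^64 / 3 = 6148914691236517205 remainder 1
	fmt.Println(div128by64(1, 0, 3))
}
```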

@josharian (Contributor) commented Jan 11, 2018

Use case: [u]int128 can be used to check for overflow of [u]int64 operations in a natural way. Yes, this could make you want int256, but since int64 is the word size of many machines, this particular overflow matters a lot. See e.g. #21588. Other obvious options to address this use case are math/bits and #19623.

Somewhat related use case: #21835 (comment).
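
To make the math/bits option concrete, a sketch of a checked 64-bit multiply as it can be written today (mul64Checked is a made-up name):

```go
package main

import (
	"fmt"
	"math/bits"
)

// mul64Checked multiplies a and b, reporting overflow by inspecting
// the high half of the full 128-bit product from bits.Mul64.
func mul64Checked(a, b uint64) (prod uint64, overflow bool) {
	hi, lo := bits.Mul64(a, b)
	return lo, hi != 0
}

func main() {
	fmt.Println(mul64Checked(1<<32, 1<<32)) // 0 true: the product needs 65 bits
}
```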

@FlorianUekermann (Contributor) commented Jan 29, 2021

I feel like the goalposts keep moving here. First it was use cases, then it was hardware support, and now the use cases provided years ago, plenty of which require arithmetic, aren't convincing enough.

@ianlancetaylor: What exactly do you find missing or doubtful in the use cases, the x86 CPU instructions, and the performance and readability benefits presented over the last few years? Maybe you could be more specific.
There is plenty of precedent in other languages, compiler intrinsics, etc., so this isn't exactly uncharted territory either. You have much more experience with compiler and language implementations than most, so maybe there is some issue you know of from those projects that isn't obvious to others in this thread.

In general, I feel the case for (u)int128 has been made and the discussion has run its course. If the case is not convincing and the objections cannot be substantiated further, maybe it is time to close the issue.

@the80srobot commented Jan 29, 2021

Florian brings up a good point. What exactly is the standard here? It seems like one of three things is true:

  1. We're waiting to see if Go adds arbitrary width ints. If so maybe block this issue behind that one?
  2. There is no case convincing enough to add another primitive data type. If so, maybe close this?
  3. There is some list of things that must be true for this support to be added. If so, can they be stated?

@ianlancetaylor (Contributor) commented Jan 29, 2021

I apologize if it seems like the goalposts keep moving. The truth is that there are no goalposts.

I honestly haven't found any of the use cases given above to be particularly convincing. Sorry if I'm missing something obvious.

If there were an obvious and essential need for int128, we would have done it already. So I guess that what I am doing is looking for that obvious and essential need. Not "might be nice," or "we would use it," but more like "our Go program today can only be written with an int128 package, and it would be better if we could just use a built-in int128 type instead."

@chrispassas commented Jan 29, 2021

@ianlancetaylor Thank you for explaining. I understand you read the use cases and probably feel someone could write their own code with two uint64s in a struct and not really need uint128 support.

This probably isn't an issue of "I can't do X in Go". It's more an issue of "if Go had uint128, X, Y, and Z would be easier for me to do".

Not everyone is of a caliber to code around the lack of uint128 support, so having it would let less experienced developers solve their problems better.

We are just advocating that the Go team prioritize this feature request. I do think that if it does not meet the bar it should be closed. While I wish the feature were there, I recognize your team has to spend its time wisely, and this might not be the best use of that time.

As a community member I don't feel I have the ability to add this feature to the language myself. I would just use it if it were there.

@jwatte commented Jan 29, 2021

@the80srobot commented Jan 29, 2021

If I understand the standard Ian proposes, it's similar to asking what can be done only with exactly 128 bits, as opposed to 256, 512, and so on. Cryptographic functions will continue to have larger images, and timestamps and currency can require storing arbitrarily large numbers. It doesn't seem to matter much whether the line between primitive and BigInt sits at 64 bits or 128 bits.

No one would argue that Go shouldn't support 64-bit numbers: you need them to address memory efficiently, and they translate to efficient code. The only reason I can think of to support specifically int128 is memory addressing. Subtracting two uintptr values can produce a negative number; neither uint64 nor int64 is safe to hold the result of such an operation, but an int128 would be.

This is less esoteric than it might sound, mind you. Memory analysis, forensics, and some related hardware domains do use those operations, and programmers writing that code get them wrong.

As a side note, Go has primitive complex number types, which I have never seen used in a real codebase. I imagine this is the source of some of the reluctance to add new primitives. Fair enough; maybe an efficient BigInt implementation is just as good.

@jwatte commented Jan 29, 2021

@the80srobot commented Jan 29, 2021

Some of the things you're asking for, like being able to use %d to print a big.Int, and using native operators, seem to me to be more a matter of preference. It's not like you can write generic code, or compare an int64 with an int32 without casting anyway, so I don't know why x.Less(y) is substantially worse than x < y.

I also don't think it's true that big.Int requires a heap allocation. Internally it's represented as a slice of machine words. I'm not an expert on Go's escape analysis, but I think it should be possible for it to remain on the stack. In any case, Go doesn't specify what ends up on the stack and what on the heap, even for things where you might think it's a sure thing.

You can use big.Int.Bytes() as a map key, although I agree it's unfortunate that you have to copy it into either a string or a byte array first. (This seems like a problem with Go maps in general: the only variable size type they can use as key is string.)

As I understand it, native int128 wouldn't necessarily have some of the other properties you want, like being guaranteed a certain representation in memory.
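
To make the map-key point concrete, a minimal sketch (bigKey is a made-up helper; note that big.Int.Bytes ignores the sign, so this only suits non-negative values):

```go
package main

import (
	"fmt"
	"math/big"
)

// bigKey copies x's magnitude into a string so it can be used as a
// map key; big.Int itself is not comparable.
func bigKey(x *big.Int) string {
	return string(x.Bytes())
}

func main() {
	counts := map[string]int{}
	x := new(big.Int).Lsh(big.NewInt(1), 100) // 2^100
	counts[bigKey(x)]++
	fmt.Println(counts[bigKey(x)]) // 1
}
```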

@jwatte commented Jan 29, 2021

@josharian (Contributor) commented Jan 30, 2021

@ianlancetaylor FWIW I believe the IPv6 use case described by @danderson above fits your criteria pretty well.

@stevenj commented Jan 31, 2021

Big ints are not equivalent to fixed-size 128-bit or 256-bit integers.

Fixed-size integers have the property that any addition or subtraction near the extremes of their range wraps. This is desirable in many situations. Arbitrary-size integers do not wrap; they just grow to accommodate the result. In every circumstance where I use fixed-size integers I absolutely want to rely on the fact that 0xFFFFFFFF_FFFFFFFF + 1 = 0x00000000_00000000 and NOT 0x1_00000000_00000000.
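
The difference in a few lines, shown with uint64 for brevity (the same contrast would apply to a fixed-size uint128):

```go
package main

import (
	"fmt"
	"math/big"
)

func main() {
	var x uint64 = ^uint64(0) // 0xFFFFFFFFFFFFFFFF
	fmt.Println(x + 1)        // 0: fixed-size arithmetic wraps

	b := new(big.Int).SetUint64(^uint64(0))
	b.Add(b, big.NewInt(1))
	fmt.Println(b) // 18446744073709551616: big.Int grows instead
}
```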

Further, fixed-size integers will have large speed advantages over any arbitrary-size number scheme. Some say "the speed difference isn't important"; I am not all-knowing enough to say what's important to someone else's project. But I have been involved with many projects where the speed difference could make all the difference between success and failure. Should we be placing unnecessary barriers in the way?

It has also been said that there is no hardware support for 128-bit integers in 64-bit CPUs. Patently untrue. x86-64 can multiply two 64-bit values and produce a 128-bit result in a single instruction. ARM's A64 instruction set also has specific support for 128-bit results from 64-bit × 64-bit multiplies (in two instructions). That is 128-bit support. It certainly isn't feature-complete, but it is still support. The fact that the result is split over multiple registers is of no consequence; that would be like saying the 8088 could not do 16-bit math because it put the result in AH and AL (two registers). No sane person would make that argument.

But all of that aside, the sheer number of comments on this proposal indicates that a significant number of people consider this enough of an issue to take time out of their day to support it. They obviously feel they have a need for this feature that is not currently being met and cannot easily be replaced with a library.

@ethindp commented Jan 31, 2021

@josharian (Contributor) commented Jan 31, 2021

In an attempt to refocus the discussion: Arguments that Go needs 128 bit ints do not move the conversation forward. Concrete use cases do. Please share those. Thanks.

@phuclv90 commented Feb 1, 2021

> I have to add my support to this proposal too. x86 has support for 128-bit and 256-bit integers via SSE and AVX. ARM has something similar.

@ethindp Please read my comment to @martisch above (#9455 (comment)): there is zero native 128-bit integer support in any current architecture. SSE, AVX, Neon, SVE, etc. are all SIMD extensions, intended for operations on multiple small integers at a time. They have no way to treat the register as a single 128-bit integer. All 128-bit arithmetic is done in the normal GPRs.

@jwatte commented Feb 1, 2021

@as (Contributor) commented Feb 1, 2021

I don't think IPv6 is a compelling use-case for int128 at all.

Specifically, its address space is larger than usual to allow hardware to route it without treating the addresses as numbers. Go doesn't even use an int32 type for IPv4 addresses; doing so would require handling byte ordering, which is a common source of bugs in custom networking code.

@stevenj commented Feb 2, 2021

> > I have to add my support to this proposal too. x86 has support for 128-bit and 256-bit integers via SSE and AVX. ARM has something similar.
>
> @ethindp Please read my comment to @martisch above (#9455 (comment)): there is zero native 128-bit integer support in any current architecture. SSE, AVX, Neon, SVE, etc. are all SIMD extensions, intended for operations on multiple small integers at a time. They have no way to treat the register as a single 128-bit integer. All 128-bit arithmetic is done in the normal GPRs.

@phuclv90 This is completely untrue. Both the x86-64 and A64 architectures support 128-bit integer results from 64-bit × 64-bit multiplication; that is native support for 128-bit integers (using the normal integer registers, not the extended ones). Both architectures have carry flags, which are specifically designed to allow integer operations wider than the base register. A cursory glance at the instruction sets shows that both are specifically designed to implement integer math in multiples of the base register size, with a number of features that promote this and make it easier. Your metric for "unsupported" seems to be "the base register needs to be at least this big". If that is the case, then we should be discussing removing 64-bit integers from 32-bit Go.

In line with RISC philosophy, processor designers see no need to extend the register size when two instructions across two registers give you what you need. Their thinking goes: "128-bit integer support is trivial and fast as is; why double the register size, with all the complications that entails, for the tiny speed improvement it would yield?" Your definition of "native 128-bit integer support" is very narrow.

@phuclv90 commented Feb 2, 2021

@stevenj Obviously I know that you can use multiple registers to do 128-bit arithmetic. But the people I'm addressing above claim that you can use a single SIMD register (SSE, AVX, Neon, ...) to store a 128-bit integer and do math on it, which is completely false. I said that 128-bit operations must be done in the GPRs. Did you even read my comment carefully?

@ethindp commented Feb 2, 2021

@DmitriyMV commented Feb 2, 2021

@ethindp That still doesn't answer questions about how int128/uint128 would be supported on x86-32 and ARMv7, since those platforms are officially supported by Go. IIRC even LLVM has trouble with u128 on ARMv7.

@martisch (Contributor) commented Feb 2, 2021

@phuclv90

> @stevenj Obviously I know that you can use multiple registers to do 128-bit arithmetic. But the people I'm addressing above claim that you can use a single SIMD register (SSE, AVX, Neon, ...) to store a 128-bit integer and do math on it, which is completely false. I said that 128-bit operations must be done in the GPRs. Did you even read my comment carefully?

As far as I remember, I never claimed one can do math (e.g. an add in a single instruction) on 128-bit integers. I commented on loads, stores, and compares of 128-bit registers:

> I think uint128 is useful on its own for e.g. optimizations, as it corresponds nicely to 128-bit registers on amd64. The compiler can easily and efficiently generate single instructions for compares and loads/stores on amd64, allowing high-level Go code to utilize register widths larger than 64 bits.

Full-width 128-bit comparisons exist (not all of them, but some):

> You can do multiple small comparisons in an SSE/AVX register but not a 128-bit comparison on them.

A 128-bit compare can be done using PTEST (https://www.felixcloutier.com/x86/ptest), so there is at least one single instruction that compares a 128-bit register as a whole; it can be used to check whether the register is, e.g., zero. More complex compares, e.g. equality, require an additional operation (PXOR) on the whole 128-bit register. If my comment was read as saying all comparisons are possible on 128 bits in a single instruction, that is not what I wanted to imply.

Note I'm not implying uint128 should always be modeled in a 128-bit register. But it can be when, e.g., only equality tests and moves to and from memory are involved.

@stevenj commented Feb 2, 2021

> I don't think IPv6 is a compelling use-case for int128 at all.
>
> Specifically, its address space is larger than usual to allow hardware to route it without treating the addresses as numbers. Go doesn't even use an int32 type for IPv4 addresses; doing so would require handling byte ordering, which is a common source of bugs in custom networking code.

@as Storage in memory and serialization for transmission on the wire are two completely different problems. If Go is forgoing the obvious advantages of masking and comparing 32-bit quantities as 32-bit values in 32-bit registers, then that speaks to the implementation, not to the usefulness of the approach.

The whole "common source of bugs" thow away argument isn't very likely given that byte ordering being wrong isn't a subtle problem its immediately broken. So, sure, someone in their first pass might get it wrong, it quickly works itself out though. I would love a reference to this "custom networking code" with "common byte-ordering bugs". Having written and worked with plenty of custom networking code, i haven't seen it be "common". Have you?

@jwatte commented Feb 3, 2021

@Bjohnson131 commented Feb 17, 2021

> In an attempt to refocus the discussion: Arguments that Go needs 128 bit ints do not move the conversation forward. Concrete use cases do. Please share those. Thanks.

IMO, looking for 'use cases' to justify implementation is not a good way to go about this.

Demand should drive implementation, not personal judgement.

@ethindp commented Feb 17, 2021

I'll just say this: it is not personal judgement that determines whether a type should be in a language, if the language is not maintained by a single individual. If it's maintained by a community, it is up to the community to decide what types and functionality are in the language, not up to a handful of people, which seems to be the case here ("We won't add these types because we don't think they would be beneficial to the language"). Maybe I don't understand the Go community all that well, but I'm curious who put all the authority for that kind of thing into a small group of people.

It doesn't matter whether there are use cases for 128-bit integers or not. There is high demand because there may be use cases in the future, if there aren't already, that justify the addition. You could make the leap of logic here and say "Well, by that logic we should add every type in existence", and that would be a valid rebuttal, but I'm specifically talking about types and functionality that are in high demand, as in this issue. People have explained rationales for 128-bit integers, and a lot of people would find it easier to have them because they would make tasks easier.

Finally, something like 128-bit integers is a trivial task. To be honest, I'm puzzled that most of this debate has concerned architectural issues, given that LLVM and GCC have already solved this problem. The logical thing, I imagine, would be to build on existing knowledge instead of reinventing the wheel. It's not as though 128-bit integers will make the Go language any more complex than it already is. It's an extremely minor thing to add. If we were talking about something like a way to use Go on bare-metal systems (which, given Go's design, is very difficult at the moment, if my knowledge is correct), I would understand. But we aren't talking about that here. So I'm really confused why this debate has gone on as long as it has.

I might have repeated myself a few times in this comment. But the stalling of this issue has started to get kind of annoying, to me at least, purely because we keep getting tied up either in (1) someone's "personal judgement" preventing us from adding it to the language or (2) issues that have already been solved by compilers far more complicated than the Go toolchain.

Just my two cents. I apologize if I've been harsh in this comment; that wasn't my intent, but I'm getting kind of frustrated.

@zx2c4 (Contributor) commented Feb 18, 2021

@josharian

> In an attempt to refocus the discussion: Arguments that Go needs 128 bit ints do not move the conversation forward. Concrete use cases do. Please share those. Thanks.

Concrete use cases I'm aware of:

  • IPv6 (as discussed above by others)
  • Crypto

It's this last point that's most interesting. Some architectures offer a 64×64⇒128 multiplication with integer instructions, making it efficient to use 128-bit types in fast generic C implementations. curve25519-donna comes to mind, but the more interesting rendition would be porting Hacl-Star's formally verified, donna-inspired implementation, which uses 128-bit types, to Go. Similarly, Poly1305 is very naturally implemented with 128-bit types to store the multiplication results, and that winds up being the most efficient implementation strategy using generic integer code on 64-bit machines. Of course clever-enough compilers can recognize idioms on 64-bit types, but that's not nearly as pleasant to write or optimize for. Having clearer and faster generic Go implementations of code that's easy to screw up seems like a good thing.
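
A sketch of that accumulation pattern in today's Go with math/bits; mulAcc128 is a made-up name, and this is not taken from any real Poly1305 implementation:

```go
package main

import (
	"fmt"
	"math/bits"
)

// mulAcc128 adds the 128-bit product x*y into the accumulator
// accHi:accLo, the building block of Poly1305-style field arithmetic.
// Carry out of the top 128 bits is ignored here; real code reduces
// modulo the field prime before that can happen.
func mulAcc128(accHi, accLo, x, y uint64) (hi, lo uint64) {
	pHi, pLo := bits.Mul64(x, y)
	lo, carry := bits.Add64(accLo, pLo, 0)
	hi, _ = bits.Add64(accHi, pHi, carry)
	return hi, lo
}

func main() {
	fmt.Println(mulAcc128(0, 1, ^uint64(0), 2)) // 1 18446744073709551615
}
```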

@josharian (Contributor) commented Feb 18, 2021

> Of course clever-enough compilers can recognize idioms on 64bit types, but it's not nearly as pleasant to write or optimize for.

The standard library and compiler do provide idiom-free access to most of these instructions via math/bits (e.g. bits.Mul64), and netaddr uses these as appropriate in its nascent uint128 implementation, which we are discussing splitting off from networking-world. (I also have plans to add idiom recognition to the compiler this cycle for implementing 128-bit shifts.)

But the point about readability and clarity stands. I'd like to be able to check for overflow with normal code like x := uint128(a) + uint128(b); if uint128(uint64(x)) != x { // overflow } instead of bits.Add64.
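
For comparison, a sketch of the closest current equivalent using the explicit carry-out from bits.Add64 (addWouldOverflow is a made-up name):

```go
package main

import (
	"fmt"
	"math/bits"
)

// addWouldOverflow reports whether a+b overflows uint64, using the
// explicit carry-out from bits.Add64.
func addWouldOverflow(a, b uint64) bool {
	_, carry := bits.Add64(a, b, 0)
	return carry != 0
}

func main() {
	fmt.Println(addWouldOverflow(^uint64(0), 1)) // true
}
```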

On the crypto front, if we also added uint256, we could use it for curve25519 keys. :P

@josharian (Contributor) commented Feb 18, 2021

For those expressing impatience, I also really want 128 bit integers in the language. But Go language changes always happen slowly. (#395 took a decade.)

Go's commitment to backwards compatibility requires treading very carefully. And I believe the Go language and compiler team is rather absorbed by generics at the moment.

Also of note: in reality this isn't "just" adding uint128 and int128 types. The difficulty here is not in the compiler; it's in the spec, in carefully working through all the consequences of this change, and in being convinced that it is worth the costs.

As just one example, consider strconv.ParseUint. Its signature is:

func ParseUint(s string, base int, bitSize int) (uint64, error)

This works because uint64 is the widest integer type. We can't change it to return a uint128, so we probably need a new ParseUint128 function. That's a bit unfortunate. And should it still take a bitSize argument, or should we assume that for smaller bit sizes the caller can use ParseUint?

Before you rush to provide answers about what we should do about package strconv, consider that there are many such functions. It took me 60 seconds to find another: reflect.Value.OverflowUint.

One thing that might (might) help move this proposal along is a thorough, detailed design doc that identifies all the changes that would go into this. An example of such a document is https://go.googlesource.com/proposal/+/master/design/19308-number-literals.md.

@ianlancetaylor (Contributor) commented Feb 18, 2021

@ethindp Go is not a language that decides what features to add based solely on demand. Of course demand plays a role, but it is not the determining factor.
