
Bitwise operation primops and types #4252

Closed
kozross opened this issue Nov 29, 2021 · 25 comments

Comments

@kozross
Contributor

kozross commented Nov 29, 2021

Describe the feature you'd like

A collection of primitives for bit manipulation, including, but not necessarily limited to, the operations described in the sections below.

While providing these operations over Integer is theoretically possible already, having them available over BuiltinByteString would also be helpful. Furthermore, having some fixed-width, binary-only on-chain types (similar to Word* in GHC) would be helpful for a range of applications.

Describe alternatives you've considered

This, to some degree, is an extension and generalization of #4168, and is needed for features such as #4233. Furthermore, bitwise primitives are useful in a range of applications, especially where efficiency is important. While they can sort of be defined already over Integer, the resulting implementations are brittle, easy to get wrong, and not very efficient. Furthermore, many problem domains that want bitwise operations need to operate on byte sequences whose semantics have nothing to do with numbers: BuiltinByteString is arguably the better type to work over here, but at present, these operations are not definable over it. Lastly, especially for cryptographic operations, fixed-width byte blobs are often the right data type: at present, such types can only be faked by convention.

In general, the only solutions that exist right now are either slow and complex Integer-based workarounds, or nothing at all.

Describe the approach you would take to implement this

I would first add primitive types and operations, plus relevant functionality in the Plutus libraries and core. Specifically:

  • Bits8, Bits16, Bits32, Bits64, Bits128, Bits256, Bits512
  • A BooleanAlgebra type class, containing the necessary logical operations, either as class methods or as functions implemented in terms of them (a rough sketch appears below)
  • Instances of BooleanAlgebra for Bits*
  • If possible, instances of BooleanAlgebra for Integer and BuiltinByteString, or similar functionality as separate functions
  • Hardcore size-based benchmarking and optimization, using techniques from plutus-size-check

Shifts, rotates, population counting and find-first-set don't quite fit into this scheme, but they would be needed somehow too. Following this, I would comb through existing Plutus functionality that these operations and types could improve or replace. Lastly, I'd want to add support (or rather, wrappers) around common bit twiddling techniques, so that other developers can use them for efficient implementations of higher-level functionality, without needing to know the relevant bit twiddling folklore.
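
Purely to make the shape of this concrete, here is a minimal Haskell sketch of what such a BooleanAlgebra class might look like; the method names and the derived xor are illustrative assumptions, not a settled design:

```haskell
-- Illustrative sketch only: the proposal names a BooleanAlgebra class, but
-- these method names and the default for xor are assumptions, not a fixed API.
class BooleanAlgebra a where
  andBits        :: a -> a -> a
  orBits         :: a -> a -> a
  complementBits :: a -> a

  -- Exclusive or can be derived from the other operations, so it could be a
  -- default method rather than a primitive.
  xorBits :: a -> a -> a
  xorBits x y = (x `orBits` y) `andBits` complementBits (x `andBits` y)
```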

@michaelpj
Contributor

Historically we've deliberately not wanted to have fixed-size numeric types to avoid overflow-related issues. I agree that if we were going to do this we would want fixed-width byte-based types.

However we have also historically not wanted to encourage people to implement such operations in user code, and rather preferred to add builtins for it. It's not clear if this is a sustainable policy, however.

@kozross
Contributor Author

kozross commented Nov 29, 2021

@michaelpj I agree: fixed-size numerical types yield many surprises, usually of the sort that nobody wants to deal with. Overflow is just the tip of the iceberg here. The idea I'm putting forward is for bitwise-only types: these should not support arithmetic operations at all. That wasn't something I stated clearly, and I should have.

As my writeup above mentions, I don't think it's even possible to implement this without builtins. You can get a crude approximation via the arithmetic operations on Integer, but this is extremely difficult and probably very slow, to say nothing of the likely unreasonable on-chain script size.
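
For illustration, a crude Integer-only encoding of bit testing and bitwise AND might look like the sketch below. The names are hypothetical and it only handles non-negative values; even off-chain it works one bit at a time, which is part of why on-chain versions end up slow and large.

```haskell
-- Hypothetical sketch: bit operations faked with Integer arithmetic only,
-- roughly what one would have to write on-chain today. Non-negative inputs only.
testBitInteger :: Integer -> Integer -> Bool
testBitInteger n i = (n `div` (2 ^ i)) `mod` 2 == 1

-- Bitwise AND of two non-negative Integers, computed one bit at a time.
andInteger :: Integer -> Integer -> Integer
andInteger = go 1
  where
    go _     0 _ = 0
    go _     _ 0 = 0
    go place x y =
      let bit = if odd x && odd y then place else 0
      in bit + go (place * 2) (x `div` 2) (y `div` 2)
```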

@Benjmhart

+1 on this - it's pretty critical for performing zero-knowledge proof work efficiently.

@L-as
Contributor

L-as commented Dec 1, 2021

@michaelpj

However we have also historically not wanted to encourage people to implement such operations in user code, and rather preferred to add builtins for it. It's not clear if this is a sustainable policy, however.

I don't think this is sustainable. I know you're against "rolling your own crypto", however, I argue that this is an exception. There are many many different reasons to roll your own crypto with Plutus. What if I want to do ZKPs on Cardano? Should I implement all the specific algorithms I'm using in the interpreter?

@michaelpj
Contributor

What if I want to do ZKPs on Cardano? Should I implement all the specific algorithms I'm using in the interpreter?

The set of builtins is easily extensible for this reason. We probably will add ZKP primitives in due course!

@L-as
Contributor

L-as commented Dec 1, 2021

I think that you will bottleneck Cardano dapps if you require that every advanced algorithm be implemented in the interpreter.

@kozross
Contributor Author

kozross commented Dec 1, 2021

I'd also add that bitwise primitives give support for things that aren't cryptographic in nature. Open any book or paper on data structures and algorithms, and bitwise operations are front and centre almost everywhere. Not having support for them severely limits what can be written on-chain at all, let alone efficiently.

@kozross
Contributor Author

kozross commented Dec 15, 2021

@michaelpj Any thoughts on my comment above regarding bitwise primitives for non-cryptographic tasks on-chain?

@michaelpj
Contributor

This needs to be a more fully thought-out proposal. What's in your issue won't fly: adding 7 new builtin types, and primitive operations on all of them (so 7*whatever) would be a lot of work. All of those need to be costed.

I think we'd be much more likely to implement a small set of primitives that cover most of the interesting usecases. For example, adding the most useful functions over Integer and ByteString as primitives.

@kozross
Contributor Author

kozross commented Feb 7, 2022

@michaelpj Thanks for your feedback. In light of this, would you be OK with the following, reduced proposal? This still contains 12 primitives overall, but it is significantly more focused, and would still give us most of what we want.

Version Two

We add the following primitive operations over BuiltinByteString:

  • andByteString :: BuiltinByteString -> BuiltinByteString -> BuiltinByteString, which would calculate the bitwise AND of the two arguments, truncating to the shorter one.
  • iorByteString :: BuiltinByteString -> BuiltinByteString -> BuiltinByteString, which is the same as the above, but for bitwise inclusive OR.
  • xorByteString :: BuiltinByteString -> BuiltinByteString -> BuiltinByteString, which is the same as the above, but for bitwise exclusive OR.
  • complementByteString :: BuiltinByteString -> BuiltinByteString, which performs the bitwise complement of all the bytes in its argument. The length of the argument is the same as the length of the result.
  • shiftByteString :: BuiltinByteString -> BuiltinInteger -> BuiltinByteString, such that shiftByteString bs i performs a right shift of bs by i bits if i is positive, a left shift of bs by abs i bits if i is negative, and leaves bs unchanged if i is zero. Any 'gaps' created at either end are filled with zero bits.
  • rotateByteString :: BuiltinByteString -> BuiltinInteger -> BuiltinByteString, which works as above, except it performs a rotation instead of a shift.
  • popCountByteString :: BuiltinByteString -> BuiltinInteger, which returns the number of 1 bits in its argument.
  • bitAtByteString :: BuiltinByteString -> BuiltinInteger -> BuiltinBool, such that bitAtByteString bs ix is True if the bit at position ix in bs is 1, and False if it is 0. We consider indexing to be left-to-right, starting at 0 for the first bit. We error if ix is 'out of bounds'; that is, if it is either negative, or greater than or equal to the number of bits in bs.
  • findFirstZeroByteString :: BuiltinByteString -> BuiltinInteger. findFirstZeroByteString bs is lengthOfByteString bs * 8 if bs is made entirely of ones (that is, every byte is 0xff), and otherwise is the unique index ix with 0 <= ix < lengthOfByteString bs * 8 such that:
    • For any 0 <= ix' < ix, bitAtByteString bs ix' = True; and
    • bitAtByteString bs ix = False.
  • findFirstOneByteString :: BuiltinByteString -> BuiltinInteger. findFirstOneByteString bs is lengthOfByteString bs * 8 if bs is made entirely of zeroes (that is, every byte is 0x00), and otherwise is the unique index ix with 0 <= ix < lengthOfByteString bs * 8 such that:
    • For any 0 <= ix' < ix, bitAtByteString bs ix' = False; and
    • bitAtByteString bs ix = True.

Additionally, we add the following primitive operations over BuiltinIntegers:

  • integerToByteString :: BuiltinInteger -> BuiltinByteString, which converts a BuiltinInteger into its bytes.
  • byteStringToInteger :: BuiltinByteString -> BuiltinInteger, which reinterprets a BuiltinByteString as a BuiltinInteger.
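
As a concrete reading of those two conversions, here is an off-chain model assuming an unsigned, big-endian encoding. The function names are mine, and the proposal text does not pin the encoding down (the next comment raises exactly this), so treat this as one candidate interpretation only.

```haskell
import qualified Data.ByteString as BS

-- Off-chain models of the proposed conversions, under the assumption of an
-- unsigned, big-endian encoding. Illustrative only; not the proposal's text.
integerToBytes :: Integer -> BS.ByteString
integerToBytes n
  | n < 0     = error "integerToBytes: negative input not handled here"
  | n == 0    = BS.singleton 0
  | otherwise = BS.pack (reverse (go n))
  where
    go 0 = []
    go m = fromIntegral (m `mod` 256) : go (m `div` 256)

bytesToInteger :: BS.ByteString -> Integer
bytesToInteger = BS.foldl' (\acc w -> acc * 256 + fromIntegral w) 0
```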

@L-as
Contributor

L-as commented Feb 7, 2022

We need to decide on the encoding for integerToByteString

@kozross
Contributor Author

kozross commented Feb 7, 2022

@L-as I assume this is regarding endianness? Since BuiltinInteger wraps gmp bigints, this is the only thing that could vary.

@L-as
Contributor

L-as commented Feb 8, 2022 via email

@michaelpj
Contributor

This stuff always makes my head spin, but let's try and pin it down as carefully as we can.

  • I suggest where possible we follow Data.Bits.
  • I don't think either rotateByteString or complementByteString is well-defined for arbitrary-length bytestrings. You could make them well-defined by giving a length. Indeed, perhaps it would make the API simpler and safer (at the cost of being more expensive) to make every operation take a length.
  • I think bitAtByteString should count from the right, since Data.Bits counts from the LSB and generally assumes MSB first (on the left).
  • I think several of these force us to decide on an endianness for bytestrings. I don't know that there's an obvious choice for this, but it should match the endianness used by integerToByteString.
  • Why do you need findFirstZeroByteString and findFirstOneByteString? Could we use some other operations that appear in the primitive set of Data.Bits? (The example after this list shows the relevant ones.)
  • Generally I'd like to see an argument for why this set of primitives is chosen.
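
For reference, the Data.Bits conventions being pointed to look like this; popCount and countTrailingZeros are the standard library's analogues of population count and find-first-set for fixed-width types.

```haskell
import Data.Bits (testBit, popCount, countTrailingZeros)
import Data.Word (Word8)

-- In Data.Bits, bit 0 is the least-significant bit.
main :: IO ()
main = do
  print (testBit (0x01 :: Word8) 0)           -- True: LSB of 0x01 is set
  print (testBit (0x80 :: Word8) 7)           -- True: MSB of 0x80 is set
  print (popCount (0xF0 :: Word8))            -- 4: four bits set
  print (countTrailingZeros (0x08 :: Word8))  -- 3: index of the lowest set bit
```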

We error if ix is 'out of bounds'; that is, if it is either negative, or greater than or equal to the number of bits in bs.

I'm not sure you can check "greater than or equal to the number of bits in bs". It can't just be "greater than the index of the highest 1", since otherwise you can't test bits in the zero bytestring. I think you just have to accept that you can test bits that are out of your implied range.

@kozross
Contributor Author

kozross commented Feb 15, 2022

@michaelpj - I've substantially formalized and rewritten my proposal, with the intended semantics, and attached a PDF describing it (LaTeX makes mathy notation so much easier than GHFM). I believe the document addresses your concerns.

@michaelpj
Contributor

(I realised I was confused in my comment above: for future readers, bytestrings do have a length, which removes a few of my concerns.)

Thanks, this is generally great!

Comments:

  • I'm very on board with section 3.2!
  • How is a bit sequence turned into a bytestring? You don't say: it could be left-to-right through the bit sequence, or following the numbered indices of the bit sequence. I think you meant it to be left-to-right, but I would like it to be painfully explicit.
    • I think what you want to say is something like: "The encoding is big-endian at both the byte and the bit level; bits (and hence bytes) are arranged from most-significant to least-significant. Bits are addressed from least-significant to most-significant, that is, from the right." (A small sketch of this addressing convention follows this list.)
  • You claim you don't want partiality, which presumably justifies your semantics where and/or etc. use the minimum length of the inputs. However, IMO silently unexpected behaviour is pretty much as bad as partiality. Good luck tracking down the bug due to accidentally using a bytestring that's too short and truncating everything. Worse still, such an error could potentially propagate the whole way through to conversion to an integer or something. I think it would be better for these operations to just fail fast.
    • Also for bitAtByteString.
  • You should specify the direction of shiftByteString/rotateByteString (how else is one to know how to use it?). A high-level way would be to say that a positive number shifts towards the most-significant bit.
  • The high-level descriptions of the functions still make various references to "first", "rightmost" etc. This is IMO confusing; either we should describe them in a consistent way (and pin down the endianness), or say instead that they start with the most/least-significant bit.
  • findFirstOneByteString is definable in terms of findFirstZeroByteString and complementByteString; do we need it?
  • I'm still missing the rationale for why these particular operations were chosen. That is, why this proposal achieves goal 2.1.
    • I assume we want a functionally complete set of binary operations: but we have and, ior, xor, and not, which is an odd set.
    • Of the others, the ones I'm least clear on are the findFirst* functions.
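
To make the suggested addressing convention concrete, here is a small off-chain sketch of bit testing under it: bytes and bits laid out most-significant first, with bit indices counted from the right (index 0 is the least-significant bit of the whole bytestring). This is purely illustrative; the proposal's final convention was still under discussion, and no bounds checking is done here.

```haskell
import qualified Data.ByteString as BS
import           Data.Bits       (testBit)

-- Sketch of "big-endian layout, bits addressed from the right".
-- Assumes 0 <= ix < 8 * BS.length bs; out-of-range indices are not handled.
bitAtFromRight :: BS.ByteString -> Int -> Bool
bitAtFromRight bs ix =
  let totalBits       = BS.length bs * 8
      fromLeft        = totalBits - 1 - ix   -- position counted from the left end
      (byteIx, bitIx) = fromLeft `divMod` 8
  in testBit (BS.index bs byteIx) (7 - bitIx)
```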

@michaelpj
Contributor

I'd also love to put this up as a CIP. I can see why you prefer the LaTeX version, so perhaps the following compromise would work:

  • Put the justificatory text in the main CIP body
  • Leave the semantics defined in the PDF and include a rendered copy as an attachment to the CIP.

@kozross
Contributor Author

kozross commented Feb 16, 2022

@michaelpj A CIP is definitely the goal: I think your suggestion works, as the semantics section is a 'last resort of clarity', so can be kept as a separate addendum.

@kozross
Contributor Author

kozross commented Feb 17, 2022

@michaelpj I've drafted a new version of the proposal, aiming to address all of your comments. I've also added a section about costing, and tried to clarify as much as I can.

I'll also respond to a few of your points here, for added clarity.

You claim you don't want partiality, which presumably justifies your semantics where and/or etc. use the minimum length of the inputs. However, IMO silently unexpected behaviour is pretty much as bad as partiality. Good luck tracking down the bug due to accidentally using a bytestring that's too short and truncating everything. Worse still, such an error could potentially propagate the whole way through to conversion to an integer or something. I think it would be better for these operations to just fail fast.

As mentioned in the proposal, 'failing fast' can be just as hard to track down! At MLabs specifically, we have had severe issues tracking down division-by-zero, which also 'fails fast', leading to literal days of lost productivity. The original idea of 'truncation semantics' was based on zipWith, which all of andByteString, iorByteString and xorByteString essentially followed in the prior version. Your point about loss of information is well-made though, and I believe that the approach in the current version (essentially 'monoidal padding') gives us the best of both worlds: we still have totality, but now don't silently lose information.
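
One concrete reading of 'monoidal padding' (my interpretation, not wording taken from the proposal PDF): pad the shorter argument with the identity byte of the operation (0xFF for AND, 0x00 for OR and XOR) before zipping, so the longer argument is never truncated. An off-chain sketch of the AND case, with padding on the right as a further assumption:

```haskell
import qualified Data.ByteString as BS
import           Data.Bits       ((.&.))
import           Data.Word       (Word8)

-- Illustrative model of padding semantics for andByteString: the shorter
-- input is padded with 0xFF (the identity for AND), so no bytes of the
-- longer input are discarded. Not the proposal's exact text.
andPadded :: BS.ByteString -> BS.ByteString -> BS.ByteString
andPadded x y = BS.pack (BS.zipWith (.&.) (pad x) (pad y))
  where
    n     = max (BS.length x) (BS.length y)
    pad b = b <> BS.replicate (n - BS.length b) (0xFF :: Word8)
```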

Of the others, the ones I'm least clear on are the findFirst* functions.

This arguably reveals a personal bias of mine - I happen to love rank-select dictionary-based structures, for which these operations are extremely important. However, given that I've not been able to easily think of another place where they are critical, they have now been dropped from the proposal.

@michaelpj
Contributor

As mentioned in the proposal, 'failing fast' can be just as hard to track down! At MLabs specifically, we have had severe issues tracking down division-by-zero, which also 'fails fast', leading to literal days of lost productivity. The original idea of 'truncation semantics' was based on zipWith, which all of andByteString, iorByteString and xorByteString essentially followed in the prior version. Your point about loss of information is well-made though, and I believe that the approach in the current version (essentially 'monoidal padding') gives us the best of both worlds: we still have totality, but now don't silently lose information.

I still disagree with this pretty fundamentally. I really dislike the truncating zip* functions.

I don't think your variant is better. I think I was wrong when I said the problem was throwing away information. The problem is that you can get a result which is not what the user intended without them noticing. That relies on two claims:

  • Nobody ever wants the behaviour when the inputs have mismatched sizes. Of course, if we implemented it then people might come to rely on it, but a priori these things only make sense for inputs of matched sizes.
  • Getting anything other than a failure if you have given the wrong inputs is harder to track down than a failure, because it can just keep progressing with the wrong value.
    • I also note that your recent PR for the SECP256k1 support has made me realise that we can make more use of logging than we currently do, to give more information about failing builtins.

isBitSetByteString provides a simple example: it also exhibits the "monoidal padding" behaviour in that it assumes that bits beyond the length are not set. If a user attempts to check the last bit of their bytestring but has an off-by-one error, they will just get "not set"! And so they can continue on indefinitely with a dangerous logic error.

Putting it another way: since we care even more than usual about our users producing correct programs, it is better for us to rule out wrong programs aggressively, even if that is less pleasant for our users during development.

This arguably reveals a personal bias of mine - I happen to love rank-select dictionary-based structures, for which these operations are extremely important. However, given that I've not been able to easily think of another place where they are critical, they have now been dropped from the proposal.

I think this points to a weakness of the current proposal: examples! There is now some reasoning about why these operations are useful, but if we had some motivating examples it would be much easier to think about whether we have an adequate (or excessive) set.

Having both isBitSetByteString and isBitClearByteString is arguably somewhat redundant

I do think we should just remove one.

These are the typical bitwise logical operations provided in hardware, and in most programming languages, leading to most algorithm descriptions relying on bitwise implementations assuming them as primitive.

Citation? I feel like usually I see either

  • We assume you have all binary logical operations somehow, not our problem; or
  • Of course we can do this with just NAND so we'll do that.

@L-as
Contributor

L-as commented Feb 20, 2022

An on-chain Bloom filter might be useful.
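
As a toy illustration of that idea, the membership check of a Bloom filter is essentially a handful of bit tests over a bytestring. The off-chain sketch below assumes MSB-first bit layout within each byte and a non-empty filter, and `indices` stands in for the results of hashing the element with k hash functions; all of this is hypothetical, not part of the proposal.

```haskell
import qualified Data.ByteString as BS
import           Data.Bits       (testBit)

-- Toy model of a Bloom-filter membership check: the element is considered
-- possibly present iff every hashed bit position is set in the filter.
memberOf :: BS.ByteString -> [Int] -> Bool
memberOf filterBits indices = all bitSet indices
  where
    totalBits = BS.length filterBits * 8
    bitSet ix =
      let i               = ix `mod` totalBits
          (byteIx, bitIx) = i `divMod` 8
      in testBit (BS.index filterBits byteIx) (7 - bitIx)
```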

@kozross
Contributor Author

kozross commented Feb 24, 2022

@michaelpj I've tried to address all of your concerns in yet another version. In summary:

  • I think I've come around to the idea of 'failing fast', but believe it is essential that the resulting issues be as easy to track down as possible. In light of that, I have replaced the 'no partials' requirement with 'all partials must clearly signal their errors', and specified exactly what we mean by this.
  • I've provided three motivating examples of what would be achievable with bitwise primitives, in ways that are significant to the on-chain setting (especially with regard to size).
  • An expanded justification section in light of the above examples.
  • Reintroduction of a 'find-first-set' operation (again, in light of the above examples).
  • Added a writeBitByteString instruction, which sets a bit at an index. I'm not sure why that wasn't in there from the get-go to be honest.
  • The two 'bit testing' operations have been merged into one.

@kozross
Contributor Author

kozross commented Mar 22, 2022

@michaelpj Any chance you could review the proposal? We have just today had a task at MLabs where my very first motivating example would have been the ideal solution.

@michaelpj
Contributor

I think this is pretty good. Would you like to turn it into a CIP as we discussed?

  • The bit ordering is still ambiguous. "We intend that BuiltinByteStrings represent byte sequences, with the sequence of bits being exactly as the description above." Which ordering? The ordering of the indices or the left-to-right ordering? I think you mean the ordering of the indices, but please say so. I am truly still unclear about which one you mean.
  • I would still appreciate a call-out to standard integer encoding terms. I think you are proposing little-endian, least-significant-bit first encoding (although I'm still not sure! see above).
  • I'm surprised by the direction of shift and rotate. In Haskell at least, Data.Bits.shift shifts towards higher numbers, i.e. away from the least-significant bit, which is the opposite of what you say. I haven't surveyed other languages for this, but that was my general expectation (which I think is based on shift x y = x * 2^y; see the quick check after this list). I also think it would be very logical for "positive" shifts to move towards higher indices.
  • In findFirstSetByteString "Given the argument s, if for any j ∈ {0, 1, . . . , n}", I read the "for any" as an existential quantifier when I think you meant a universal. I'd use "for all".
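
For reference on the shift-direction point, a quick check of the Data.Bits convention (this is just standard base behaviour, not anything from the proposal):

```haskell
import Data.Bits (shiftL, shiftR)

-- In Data.Bits, a positive (left) shift moves towards the most-significant
-- end, i.e. multiplies by a power of two.
main :: IO ()
main = do
  print (shiftL (1 :: Integer) 3)  -- 8
  print (shiftR (8 :: Integer) 3)  -- 1
```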

@effectfully
Contributor

Since there's now a CIP for this discussion, I'm closing the issue. Do feel free to reopen if you think we should keep the issue open.
