London Bitcoin Devs
Bitcoin Script to Miniscript
As Michael mentioned I’ve got like two hours. Hopefully I won’t get close to that but you guys are welcome to interrupt and ask questions. I’ve given a couple of talks about Miniscript, usually in 20 or 30 minute time slots, where I try to give a high level overview of what this project is, why you should use it and why it makes Bitcoin script accessible in ways that it hasn’t been. In particular you can do all sorts of cool things that in principle Bitcoin script can do but in practice you cannot, for various algorithmic and practical reasons. Bitcoin script is just not easy to reason about, it is not easy to develop with, and it has a lot of sharp edges; you will feel very fragile trying to use Bitcoin script. Since I’ve got a whole pile of time and a reasonably technical audience I am going to try to do something different here. I’m going to talk in a fair bit of detail about some of the issues with Bitcoin script and some of the motivations for Miniscript. Then I’m going to try to build up some of the detailed design of Miniscript, which is essentially a reinterpretation of a subset of Bitcoin script that is structured in a way that lets you do all sorts of analysis. We’re going to try to build up the design and I’m going to talk about some of the open problems with Miniscript. There’s maybe an irony here that there are so many difficult things with Miniscript given how much simpler it is than script. It highlights how impossible it is to work with script to begin with. Even after all of this work we’ve put in to structuring things and getting all sorts of nice easy algorithms, there are still a lot of natural things to do that are quite difficult, in particular around dealing with fee estimation and with malleability. So let’s get started.
First I’m going to spend a few slides giving an overview of what Bitcoin script is and how it is structured. As many of you probably know Bitcoin has a scripting system that allows you to express complicated predicates. You can have coins controlled not only by single keys but by multiple keys. You can have multisignatures, threshold signatures, hash preimage type things, you can have timelocks. This is how Lightning HTLCs are created. You can do weird things like have bounties for finding hash collisions. Peter Todd put a few of these up I think in 2013. There are a whole bunch of Bitcoins you can take if you find a hash collision. The one for SHA-1 has been taken but there are ones up for the other Bitcoin hashes, for SHA-2 and the combo hashes that you can still take if you can find a collision. The way you can do this is with a thing called Bitcoin script. Bitcoin script is a stack based assembly language that is very similar to Forth if any of you are 70 years old you might have used Forth before. It is a very primitive, very frustrating language. Bitcoin took a bunch of the specific frustrating parts of it and not the rest. A quick overview. Every Bitcoin script opcode is a single byte. There are 256 of them, they are allocated as follows. You have 78 for pushing data of arbitrary size. All your stack elements are these byte vectors. The first 76 opcodes are just “push this many bytes onto the stack.” There are a couple of more general ones. We also have a bunch of special case opcodes for pushing small numbers. In particular, -1 through 16 have their own opcodes. Then for future extensibility we have a bunch of no-ops. For historical reasons we have 75 opcodes that are synonyms for what we call OP_RETURN, just fail the script immediately. For even sillier historical reasons we have 17 opcodes that will fail your script even if you don’t execute them. 
You can have IF branches that aren’t executed, but if certain opcodes are in there it will fail your script. That’s probably a historical mistake in the Bitcoin Core script interpreter that we can trace back to 2010-2012, which is when most of this came to be. Finally we have 57 “real” opcodes. These are the core of Bitcoin script.
Of these real opcodes we have a lot of things you might expect. We have 4 control-flow ops. Those are IF, ELSE, ENDIF and we also have a NOTIF which will save you a byte in some cases. Basically it is the same as IF except it will execute if you pass in a zero rather than executing if you pass in a non-zero. We have a few oddball opcodes, let me quickly cover those. We have OP_VERIFY which will interpret the data as a boolean. If it is true then it passes, that’s great. If it is false it fails the script. IFDUP, if you pass it a true it will duplicate it, if you pass it a false it will just eat it. DEPTH and SIZE, those are interesting. DEPTH will tell you the current number of elements on the stack, SIZE will tell you the number of bytes in the most recent element. These are interesting, I’ll explain in the next slide, mainly because they make analysis very difficult. They put constraints on aspects of your stack that otherwise would not be constrained. When you are trying to do general purpose script analysis these opcodes will get you in trouble. We have OP_CODESEPARATOR, it is just weird. It was there originally to enable a form of delegation where after the fact you could decide on a different public key that you wanted to allow spending with. That never worked. CODESEPARATOR does a very technical thing of unclear utility. We have 15 stack manipulation opcodes. These basically rearrange things on your stack. You’ve got a bunch of items, there are a whole bunch of things that will reorder them, duplicate certain ones or duplicate certain combinations of them or swap them, all sorts of crazy stuff like that. We also have a couple of altstack opcodes. We have one that will move the top stack element to the alternate stack, one that will bring stuff back. Those of you who are familiar with the charlatan Craig Wright may be aware that you can design a Turing machine using a two stack finite state automaton or something like that.
It is plausible that the alternative stack in Bitcoin was inspired by this but actually this provides you no extra power. The only things you can do with the altstack in Bitcoin are move things on to it and move things off of it. Sometimes this lets you manipulate the stack in a slightly more efficient way. But all that is is a staging ground. You can’t do computations there, you can’t manipulate the altstack directly, you can’t do anything fun. We have 6 different opcodes that compare equality. I’ll talk a bit about those in the next slide. The reason that there are 6 of them is to make analysis hard. We have 16 numerical opcodes. These do things like addition, subtraction, boolean comparisons which is kind of weird. There are special purpose opcodes for incrementing and for decrementing. There used to be opcodes for multiplication, division and concatenation and a bunch of other stuff. Those have turned into the “fail even if not executed” opcodes. Those were disabled many years ago. The way that they were disabled causes them to have this weird failing behavior. There are 5 hashes in Bitcoin, which are RIPEMD160, SHA1, SHA2 and various combinations of these. There is HASH256 which means do SHA2 twice, there is HASH160 which means SHA2 and then RIPEMD160. Finally we have the CHECKSIG opcodes which are probably the most well known and most important ones and also the most complicated ones. We have CHECKSIG which checks a single signature on the transaction, we have CHECKMULTISIG which checks potentially many signatures on the current transaction.
Some miscellaneous information about script, some of which I’ve hinted at. We have a whole pile of limits which I’ll talk about in a second. Stack elements are just raw byte strings, there’s no typing here, there’s nothing interesting. They are just byte strings that you can create in various ways. The maximum size is 520 bytes. This was a limit set by Satoshi, we think because you can put a 4000 bit RSA key into 520 bytes. Bitcoin does not and never has supported RSA keys but Satoshi did take a lot of things from the OpenSSL API and we think this might have been one of them. As far as I’m aware there’s no use for stack elements that are larger… I guess DER signatures are 75 bytes but you can go up to 520. There are many interpretations of these raw byte strings though. I said we have no types but actually every opcode has a notion of type-ness in it. A lot of the hash ops and the stack manipulation things just treat them as raw byte strings, that’s great. But all the numeric opcodes treat their arguments as numbers. In Bitcoin this means specifically up to 32 bit signed magnitude numbers. Signed magnitude means the top bit is a sign bit: if it is 1 you have a negative number, if it is 0 you have a positive number. This means you have zero and negative zero for example, two distinct things. You also have a whole bunch of non-canonical encodings. You can put a bunch of zero padding into all your numbers and those will continue to be valid. But if you exceed 4 bytes that is not a number. For example you can add sufficiently large numbers and you will get something that is too big and so it is no longer a number. If you keep using OP_ADD and you’re not careful about it then you might overflow and then the next OP_ADD is going to fail your script. If you’re trying to reason about script you need to know these exact rules. Some things (the IF statements, OP_VERIFY, a few others, but oddly not the boolean AND and OR) will interpret these things as booleans.
Booleans are similar to numbers. If you encode zero or negative zero that counts as false. If you encode anything else that is considered true. The difference between booleans and numbers is that booleans can be arbitrarily sized. You can make a negative zero that is like 1000 bytes long. This will not be interpreted as zero by any of the numeric opcodes, it will fail your script, but it will be interpreted as false by the boolean interpretations. Something to be aware of. If you are trying to reason about script you need to know these rules. Then finally we have the CHECKSIG and CHECKMULTISIG and those are the closest to doing something sane. They interpret their arguments as public keys or signatures. The public keys are supposed to be well formed, the signatures are supposed to be either well formed or the empty string, which is how you indicate no signature. These are not consensus rules, you can actually put whatever garbage you want in there and the opcode will simply fail. There is a comparison to C if any of you guys have done a lot of C programming and tried to reason about it or tried to convince the C compiler to reason about your code instead of doing insane and wrong things. You may notice that the literal zero, if you type this into C source code, is a valid value for literally every single built in type. Zero is an integer of all sizes, zero is a floating point, zero is a pointer to any arbitrary object, zero is an enum, zero is everything. There is no way to tell the compiler not to interpret zero that way. Of course the C standard library and POSIX and everything else use a constant zero as an error return code about 40% of the time so you need to deal with this fact. This infuriating aspect of C was carried over to Bitcoin script in the form of the empty string being a valid instance of every single different way of interpreting script opcodes. That’s just the script semantics.
Those are a bunch of weird edge cases and difficulties in reasoning about the script semantics and trying to convince yourself that a given arbitrary script is doing what you expect.
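To make the encoding rules above concrete, here is a small Python sketch of script-number encoding and the boolean interpretation. This is a model of the behavior described in the talk, not Bitcoin Core's actual code; function names are mine.

```python
def encode_num(n: int) -> bytes:
    """Minimal script-number encoding: little-endian signed magnitude."""
    if n == 0:
        return b""                         # zero is the empty byte string
    neg, mag = n < 0, abs(n)
    out = bytearray()
    while mag:
        out.append(mag & 0xFF)
        mag >>= 8
    if out[-1] & 0x80:                     # top bit already used by the magnitude:
        out.append(0x80 if neg else 0x00)  # add a padding byte to carry the sign
    elif neg:
        out[-1] |= 0x80                    # otherwise set the sign bit directly
    return bytes(out)

def decode_num(b: bytes, max_size: int = 4) -> int:
    """Numeric opcodes only accept up to 4 bytes; larger inputs fail the script."""
    if len(b) > max_size:
        raise ValueError("overflow: not a number")
    if not b:
        return 0
    mag = int.from_bytes(b, "little") & ~(0x80 << (8 * (len(b) - 1)))
    return -mag if b[-1] & 0x80 else mag

def as_bool(b: bytes) -> bool:
    """Booleans may be any length: all-zero bytes, optionally with a lone
    sign bit on the top byte (negative zero), count as false."""
    for i, c in enumerate(b):
        if c != 0:
            return not (i == len(b) - 1 and c == 0x80)
    return False
```

For example `decode_num(b"\x00\x80")` is a two-byte negative zero and decodes to 0, `as_bool(b"\x00" * 999 + b"\x80")` is a 1000 byte false, and `decode_num(b"\x00" * 5)` raises, matching the "too big is not a number" rule.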
Additionally there are a whole pile of independent and arbitrary limits in Bitcoin script. As I mentioned stack elements can only be 520 bytes. Your total script size can only be 10,000 bytes. You can only have 201 non-push opcodes. A non-push opcode is something a little bit weird. A push opcode includes all the things you expect to be pushes like pushing arbitrary data onto the stack, pushing a number onto the stack. It also includes a couple of opcodes that are not pushes but they sort of got caught in the range check, just comparing the byte values. You need to know which ones those are. Also in CHECKMULTISIG… This 201 opcode limit actually applies to the whole script regardless of what gets executed. You can have all sorts of IF branches and you count all the opcodes. Except if you execute a CHECKMULTISIG, then all of the pubkeys in that CHECKMULTISIG count as non-push opcodes even though they are pushes, but only if it is executed. There’s a very technical rule that is maximally difficult to verify and that as far as I know nobody was aware of until we did this Miniscript development and found it lurking in the Satoshi script interpreter. In our Miniscript compiler we have to consider that; it is a resource limit we hit sometimes in trying to work with Miniscript. There is a sigops limit. This is something a lot of you are familiar with. This is what restricts your ability to do enormous CHECKMULTISIGs and other interesting things. There are a couple of different sigops limits. I think the only consensus one is the one that limits a given block to have 80,000 sigops in it. But there are policy limits for how many sigops you can have in a given transaction. These are different for raw script and for P2SH and for SegWit. Also you are only allowed to have a thousand elements on the stack. It is basically impossible to get a thousand elements on the stack so it doesn’t really matter.
But just to make sure your optimization is constrained in as many dimensions as possible. That’s an additional thing. For what it is worth, in BIP 342 which is Tapscript we fix a whole pile of these. In particular because having so many independent limits is making a lot of our Miniscript work more difficult. We clean this whole mess up. We kept the 1000 stack element limit because it provides some useful denial of service protection and it is so far away from anything we know how to hit without doing really contrived stuff. I think we got rid of the script size limit because there are already limits on transactions and blocks which is great. We kept the stack element size limit just because that’s not doing anything, all of our stack elements are limited in practice anyway. We combined the opcode limit and the sigop limit. We got rid of CHECKMULTISIG. The most difficult thing to keep in mind when you’re constraining your scripts is now a single one dimensional thing. Now absent extreme things that you might be doing that are all outside the scope of Miniscript, you actually just have a one dimensional limit. If you are trying to work within these limits it is much easier in Tapscript which will be SegWit v1 I hope, knock knock on wood.
A couple of particular opcodes that I hate. Everyone who works with script has their own set of opcodes that they hate; these are the ones that I hate. PICK and ROLL, first of all. What these opcodes do is take a number from the top of the stack; one of them will copy the element that many positions back to the top, the other one will move it to the top. Now you are taking one of these arbitrarily typed objects, who knows where you got it, and interpreting it as an index into your stack. If you are trying to reason about things now you have stack indices as a new kind of thing that is happening in your script. So when you’re trying to constrain what possible things might happen that is a whole other dimension rolled up in there. Numerics overflowing I mentioned. The CHECKMULTISIG business I mentioned. DEPTH and SIZE I mentioned. These are simple opcodes but they are just extra things to think about. If you are trying to reason about what the script interpreter is doing it would be nice if as many of the internals of that interpreter as possible could just be taken for granted and weren’t available for introspection to your users. But they are. These opcodes make those available for introspection so now they are mixed in with your reasoning about what values might be possible. IFDUP does a similar annoying thing. You can use OP_DUP and OP_IF together to get an IFDUP but this opcode just happens to be the only one that will conditionally change the depth of your stack depending on whether what you give it is considered true or false. When you’re trying to keep track of your stack depth when you’re reasoning about scripts it gets in your way.
Q - What are these used for?
A - We use IFDUP in Miniscript quite a bit. Do I have an internet connection on this computer? Later in the talk I might go to the Miniscript website which has some detailed justification for some of these things. IFDUP gets used in Miniscript. None of these other things are useful for Miniscript. Let me give a couple of justifications. PICK and ROLL I have used before, only in demonstration of weird script things I guess. I can imagine cases where you want to use PICK and ROLL to manipulate your stack. If you have a whole bunch of stack elements in a row and say you are trying to verify a signature that’s really deep in your stack. You can use OP_PICK to bring that public key and signature to the top of your stack, verify them and then move on with your life without trying to swap them out a whole bunch of times. OP_DEPTH and OP_SIZE do actually have a bunch of uses. OP_DEPTH is kind of cool. We use this in Liquid as a way of distinguishing between two multisig branches. We have the standard 11-of-15 functionary quorum check and we also have a 2-of-3 emergency withdrawal check. You’re allowed to use either one of these. The emergency withdrawal has a timelock on it. This is how Liquid coins are protected. We’re using OP_DEPTH to count the number of signatures provided. If you give the script 11 signatures it knows this is a functionary signature, let’s use that branch. If you only give it 2 signatures it knows to use the other branch. That is an example of what OP_DEPTH is for. OP_SIZE has a bunch of cool applications. One is if you use OP_SIZE on a ECDSA signature to constrain the size to be very small, this forces the user to use a specific nonce that is known to have a bunch of zero bytes in it that has a known discrete log, which is a multiplicative inverse of 2 in your secret key field. That means that by signing in such a way, by producing a small enough signature you are revealing your secret key. So you can use OP_SIZE to force revelation of a secret key. 
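The OP_DEPTH branch selection just described can be modeled in a few lines of Python. This is a toy sketch of the `OP_DEPTH <n> OP_EQUAL OP_IF … OP_ELSE … OP_ENDIF` pattern, with the branch counts taken from the talk; the function name is mine.

```python
def liquid_branch(witness_stack: list) -> str:
    """Pick a spending branch by counting witness elements, as OP_DEPTH does."""
    # OP_DEPTH pushes the current stack size; <11> OP_EQUAL compares against it
    if len(witness_stack) == 11:
        return "functionary 11-of-15 branch"
    # otherwise fall through to the timelocked emergency branch
    return "emergency 2-of-3 branch (after the timelock)"
```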
You can use OP_SIZE to very efficiently force a value to be either zero or one through a cool trick we discovered in developing Miniscript. If you use OP_SIZE and then OP_EQUALVERIFY in a row, that will pass on zero or one but it will fail your script otherwise. Because the size of zero is zero, that’s the empty string, and the size of one is one, but no other value is equal to its own size and so the EQUALVERIFY will abort. In an early iteration of Miniscript, before we realized that we had to depend on a lot of policy rules for non-malleability, we were using SIZE EQUALVERIFY before every one of our IF statements. Because otherwise a third party could change one of our TRUEs from one to some arbitrarily large TRUE value, change the size of our witnesses and wreck our fee rate. We don’t do that now because we’ve had to change strategies because there are other places where we had similar malleability. Ultimately we weren’t able to efficiently deal with it in script. But if you really want consensus guaranteed non-malleability, not just policy level guarantees on malleability, OP_SIZE is your friend. You are going to use that a lot. One more thing OP_SIZE is for.
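The SIZE EQUALVERIFY trick can be checked by brute force. A small Python model (using a minimal script-number encoding, since OP_EQUAL compares raw byte strings; names are mine, not Bitcoin Core's):

```python
def minimal_num(n: int) -> bytes:
    """Minimal script-number encoding of a small non-negative value."""
    return b"" if n == 0 else n.to_bytes((n.bit_length() + 8) // 8, "little")

def size_equalverify(elem: bytes) -> bool:
    """OP_SIZE pushes len(elem); OP_EQUALVERIFY aborts unless it equals elem."""
    return minimal_num(len(elem)) == elem

# Only the canonical 0 (the empty string) and 1 survive; everything else,
# including a padded zero like b"\x00", fails the script.
survivors = [e for e in [b""] + [bytes([i]) for i in range(256)]
             if size_equalverify(e)]
```

Running this, `survivors` comes out as `[b"", b"\x01"]`, matching the claim that only zero and one pass.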
Q - They are used in Lightning scripts.
A - Yes exactly. In Lightning and in other atomic swap protocols you want your hash preimages to be constrained to 32 bytes. The reason being, this is kind of a footgun reason, basically if you are interoperating with other blockchains, or you want to interoperate with future versions of Bitcoin or whatever, a concern is that Bitcoin allows hash preimages or anything up to 520 bytes on your stack. But if you are trying to interoperate with another system that doesn’t have such a limit, say whatever your off-chain Lightning protocol is, maybe some bad guy creates a hash preimage that is larger than 520 bytes; it passes all the off-chain checks but then you try to use it on the blockchain and you get rejected because it is too large. Similarly if you are trying to do a cross-chain atomic swap and one of the chains has a different limit than the other. Then you can do the same thing. You can create a hash preimage that works on one chain but doesn’t work on the other. Now you can break the atomic swap protocol. You want to constrain your hash preimages to being 32 bytes. Everybody supports 32 byte preimages, and that’s enough entropy that you’re safe for whatever entropy requirements you have. And OP_SIZE is a way to enforce this. We’ll see this later if I can get to the Miniscript site and look at some specific fragments. We have an OP_SIZE <32> EQUALVERIFY confirming that we have 32 byte things.
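The hashlock fragment just described (roughly `OP_SIZE <32> OP_EQUALVERIFY OP_SHA256 <h> OP_EQUAL`) has simple semantics, sketched here in Python; the function name is mine.

```python
import hashlib

def hashlock_satisfied(preimage: bytes, h: bytes) -> bool:
    """Model of SIZE <32> EQUALVERIFY SHA256 <h> EQUAL."""
    if len(preimage) != 32:                # SIZE <32> EQUALVERIFY
        return False                       # the VERIFY aborts the script
    return hashlib.sha256(preimage).digest() == h   # SHA256 <h> EQUAL
```

An oversized preimage that passes an off-chain check with no length limit is rejected here, which is exactly the cross-system footgun the size check closes off.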
Q - I’m assuming if it had no use you could disable it?
A - Right. The only thing here which we disabled in Tapscript is CHECKMULTISIG ironically and that sounds like the most useful but in some sense has the worst complexity to value ratio. There are other ways to get what CHECKMULTISIG gives you that doesn’t have the limitations and doesn’t have the inefficiency of validation that CHECKMULTISIG does. I will get into that a bit later on.
Any more questions about this slide? I have one more slide whining about script and then I’ll get into the real talk. I’ve already spent 20 minutes complaining about script. I’m sorry for boring you guys but this helps me.
Another random list of complaints about script and surprising things. As I mentioned, BOOLAND and BOOLOR despite having bool in the name do not take booleans, they take numbers. If you put something larger than 4 bytes into these opcodes they will fail your script. Pieter Wuille claimed he didn’t know this until I pointed it out to him. No matter how long you’ve been doing this you get burned by weird, surprising stuff like this. I’ve been saying numbers are maximum 4 bytes. Actually CHECKSEQUENCEVERIFY and CHECKLOCKTIMEVERIFY have their own distinct numeric type that can be up to 5 bytes, because 32 bit signed magnitude numbers can only go up to 2^31 which doesn’t cover very many dates. I think they burn out in 2038 as many of us know for other reasons. In order to get some more range we had to allow an extra bit which meant allowing an extra byte. These values can be larger than any other number. So there are actually two distinct numeric types as well as the boolean. I mentioned the CHECKMULTISIG opcode counting weirdness. One thing that burns you if you don’t unit test your stuff is that the numeric values of the push number opcodes are really weird. It is hex 50 plus whatever value you’re pushing, for negative 1 through 16. Except zero: zero is not 0x50. If you use 0x50 that will fail your script. Zero is 0x00. You need to special case zero. You might naively think you can just take your number and add decimal 80, 0x50, to it. Say you were trying to support SegWit: in the wild the only version number being used is zero, but every other version number has this 0x50 added to it. This is a very easy mistake to make that you wouldn’t notice with real world data. But there we go, that’s life. Zero is zero, everything else is what you expect plus 0x50.
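The push-number opcode layout just described fits in a few lines. A sketch (the opcode values follow the encoding described above; the function name is mine):

```python
OP_0 = 0x00          # zero is special-cased: the empty push
OP_RESERVED = 0x50   # where you would naively expect zero; fails the script

def smallnum_opcode(n: int) -> int:
    """Single-byte opcode that pushes n, for -1 <= n <= 16."""
    if not -1 <= n <= 16:
        raise ValueError("no single-opcode push for this value")
    if n == 0:
        return OP_0          # NOT 0x50: that byte is OP_RESERVED
    return 0x50 + n          # OP_1NEGATE = 0x4f, OP_1..OP_16 = 0x51..0x60
```

The naive `0x50 + n` rule works for every value except zero, which is exactly the mistake that real-world data (where version zero dominates) would never surface.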
Q - What opcode is 50?
A - 50 is OP_RESERVED. I think it was always called OP_RESERVED, unlike a lot of the other reserved opcodes that didn’t use to have a name and then had to be disabled because they triggered undefined behavior. I think OP_RESERVED has always been reserved. It has always been reserved because you would expect a zero to go there. If you weren’t going to put zero where it should be then the safest thing to put there would be “fail the script”.
Q - What is the purpose of 0x50 element in SegWit version 1 if you are putting into the witness?
A - In SegWit your witness programs have a numeric opcode, a numeric push indicating your version number followed by your 32 byte program.
Q - In Taproot you can have in the witness the byte 50 that is some annex or something? There is a description, you shouldn’t use it, it is for future use. What future use are you planning for that?
A - The question is about the annex field in SegWit v1, Tapscript signatures. Let me see if I can remember the justification… Let me just give a general answer because I’m forgetting exactly how it works. The idea is that the annex is covered by the signature: you can add arbitrary extra data that will be covered by the signature. In some future version this could be useful for something locktime related, for adding new opcodes that would conditionally constrain your signatures in some way. As of now there is no use for it. It is basically this extra field that gets covered by the signature and that may in future do something interesting for other future opcodes that don’t exist yet. We are putting it in there now so that we won’t require a hard fork to do it later. Right now it is constrained by policy to always be empty because there is no use for it. I wonder if the number 0x50 is used there. I think that might just be a coincidence, maybe not. It doesn’t sound like a coincidence.
Q - It is to avoid some collision with a SegWit version number or something like this?
A - Yeah it is not a coincidence but the exact means of where the collision was imagined to be I don’t remember. I’d have to read the BIP to remember. As far as the general complaint about complexity it is getting worse but very slowly. Overall it is getting better I think. There is a lot of crap that we cleared out with Tapscript and Taproot but the annex is one place where you can still see some historic baggage carried forward.
Script - High level problems
So on a high level, let me just summarize my complaints about script and then we’re going to go into what Miniscript is and how Miniscript tries to solve these things. It is difficult to argue correctness because every single opcode has insane semantics. I talked about some of them, I didn’t even talk about the bad ones. I just talked about the simple ones that are obviously crazy. It is difficult to argue security, meaning: is there some weird, secret way you can satisfy a script? It is hard to reason about all the different script paths that might be possible. It is hard to argue malleability freeness. There are weird surprising things, like you can often replace booleans with larger booleans or you can replace numbers with non-canonical numbers, stuff like this which is forbidden by policy. Most of this class of attacks will be prevented on the network but in theory miners could try to execute something like this. There are so many surprising corners, so many surprising ways in which things can be malleated, that this is something difficult to reason about. Assuming you have a script that you are comfortable with from a correctness and security perspective it is difficult to estimate your satisfaction cost. If you’ve got a script that can only be satisfied one way or a couple of ways you can say it is this many signatures and this many bytes. You manually count them up and you put that in a vault somewhere. In general if you are given an arbitrary script and you are trying to figure out what’s the worst case satisfaction cost for this, it is very difficult to do. The irony of this is that over in Ethereum land they have the same problem and this results in coins getting stuck all the time. As Bitcoin developers this should be something that we can laugh at them for because Bitcoin script is designed so that you can reason about it and won’t have this problem. But in fact you do.
We don’t have Turing completeness, we don’t have arbitrary loops, we don’t have all these complicated things that make it intractable on Ethereum, but it is intractable anyway for no good reason. So Miniscript, as we’ll see, solves this. It is difficult to determine how to work with an arbitrary script. You don’t know which signatures, hashes and timelocks or whatever you might need to satisfy an arbitrary script. You basically can only support scripts that your software explicitly knows how to recognize by template matching or whatever. Then even if you do know what you need to satisfy the script, actually assembling it in the right order, putting all the signatures in the right place and putting the 1s and 0s for booleans and all that kind of stuff also requires a bunch of ad hoc code.
Fundamentally the issue here is that there is a disconnect between the way that script works and the way that we reason about it. Script gives you this stack machine with a bunch of weird opcodes that are mostly copied from Forth, some of them mangled in various ways and some of them mangled in other ways. But the way that we reason about what script does is that you are putting spending conditions on coins. You’ve got a coin, you want to say whoever has this key can spend it, or maybe two of these three people can spend it, or maybe it can be spent by revealing a hash preimage, or after a certain amount of time has gone by there is an emergency key or something. Something like this. Our job as developers for Bitcoin wallet software or Lightning software or whatever is to somehow translate this high level user model into the low level stack machine model. Along the way, as soon as you stick your hand into the stack machine it is going to eat your hand. Hopefully you get more than half your work done because you’ve only got one more hand. Then you’ve got to convince the rest of the world that what you did is actually correct. It is just very difficult and as a result we have very few instances of creative script usage. The biggest one is Lightning, which has a tremendous amount of developer power pointed at it. Much more than should be necessary just for the purpose of verifying that the scripts do what you expect. Nonetheless what Lightning needs, fortunately, is what it has. But if you are some loner trying to create your own system of similar script complexity to Lightning you’re going to be in a lot of trouble. Actually one thing I did while working with Miniscript was look for instances of people doing creative script things, and I found security issues in real life uses of script out there. I don’t even know how to track down the original people, so I’m not going to say any more detail about that. The fact is this is difficult, there’s not a lot of review and the review is very hard for all of these uses.
I have three more slides of script whining. These are just a couple of fun questions, I’ll be quick. Here’s a fun one. We know that zero and negative zero can both be interpreted as false as booleans. Is it possible to create a signature that will pass but that is zero or negative zero? This might be surprising because we use the canonical value zero to signal no signature, in the case of CHECKMULTISIG where you’ve got multiple signatures and you want to say this key doesn’t have a signature. Is it possible that you could check the signature, get false, but secretly it is a valid signature? The answer is no for ECDSA because the DER encoding requires that it starts with the digit 2 which is not zero or negative zero, so we’re safe by accident. With Schnorr signatures you also can’t do zero or negative zero. The reason being that in order to do so you would have to set your nonce to be some R coordinate that is basically nothing but zeros and maybe a 1 bit. Mathematically there is no way to do this; the Schnorr signature algorithm that we chose doesn’t allow you to do this without breaking your hash functions in ways we assume are intractable. Another brainteaser. Can you get a transaction hash onto the stack the way you might want to for covenants, or the way you might want to with something like OP_CTV behavior, CHECKTEMPLATEVERIFY? That’s Jeremy Rubin’s special purpose covenant thing. You can actually do this with ECDSA. It turns out not in a useful way. You can write a script that requires a signature of a specific form, so maybe one where your s value is all zeros and your R value is some specific point and the public key is free. The person satisfying the script is actually able to produce a public key that will validate given a specific transaction and that specific signature that is fixed in the script. That public key you compute by running the ECDSA verification equation in reverse is actually a hash run through a bunch of extra EC ops.
It is a hash of your transaction, it is a hash of what gets signed. In theory you could do a really limited form of OP_CTV just with this mechanism. It turns out you can’t by accident because we don’t have SIGHASH_NOINPUT so every single sighash mode covers your previous outpoint, which in turn commits to the script containing the signature you’re trying to match, which means you cannot precompute this. I apologize, I would need a whiteboard to really go through this. Basically there is an accidental design feature of Bitcoin that is several layers deep that prevents you from using this to get covenants in Bitcoin. That’s the story of Bitcoin script. There are always really far reaching remote interactions. If you are trying to reason about Bitcoin script you have to know this.
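The “verification equation in reverse” step can be sketched with standard ECDSA notation (this algebra is my reconstruction, not from the talk). Verification of a signature $(r, s)$ on message hash $z$ against public key $Q$ checks that $u_1 G + u_2 Q$ is a point $R$ with $x$-coordinate $r$, where $u_1 = z s^{-1}$ and $u_2 = r s^{-1}$. Fixing the signature in the script and solving for the key:

```latex
u_1 G + u_2 Q = R
\quad\Longrightarrow\quad
Q \;=\; u_2^{-1}\left(R - u_1 G\right)
  \;=\; \frac{s}{r}\,R \;-\; \frac{z}{r}\,G
```

With $R$ and $s$ pinned by the script, the only free input is $z$, the transaction sighash, so the “public key” the spender supplies is effectively determined by the transaction hash, which is the property the talk is gesturing at.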
Here’s another fun one which I hinted at earlier. What is the largest CHECKMULTISIG, what’s the largest multisignature you can do? I think CHECKMULTISIG with P2SH limits you to only putting 15 keys on because there is a sigops limit there. CHECKMULTISIG with SegWit I think limits you to 20 for other reasons that I don’t quite remember. It turns out that you don’t need to use CHECKMULTISIG at all to do threshold signatures. You can just use CHECKSIG, you do CHECKSIG on a specific signature, if it is a valid signature it will output one, if it is not it will output zero. You can take that zero and you can move it out of the way onto the altstack or something, do another signature check and then bring your zero or one back and you keep adding all these together. You can get a sum of all your individual signature checks and if that sum is greater than your threshold, which is easy to check with script, then that’s the exact equivalent semantics of CHECKMULTISIG except it is a little bit more efficient for the verifier. If you don’t use CHECKMULTISIG then you’re only limited by the amount of crap you can put on the stack based on the other limits. You can wind up having multisignatures that are 67? I think 67 keys is right. You can get 67 keys and 67 signatures onto your stack using this which is a cool way to bypass all these extra limits. This is also the justification for removing CHECKMULTISIG in Tapscript by the way. We have a new opcode whose current name I forget that does this all in one. CHECKSIGADD? Good, we had a bunch of weird extra letters there before. It does a signature check. Rather than outputting a zero or one it takes a zero or one and adds it to an accumulator that’s already in the right place. So you can do this directly rather than needing 3 or 4 extra opcodes, I think you need two opcodes per signature check. Now you can eliminate all the weird complexity of reasoning about CHECKMULTISIG and using it, it is now much more straightforward.
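As a sketch, the CHECKSIG-and-ADD threshold described above can be written as a little template generator. The helper name is hypothetical, and the final comparison follows the talk’s “sum at least k” phrasing; actual Miniscript thresh() fragments end in `<k> EQUAL` instead.

```python
def thresh_script(k, pubkeys):
    """Sketch of a k-of-n threshold built from individual CHECKSIGs.

    Each CHECKSIG leaves a 0/1 on the stack; OP_SWAP moves the running
    sum out of the way so the next signature can be checked, and OP_ADD
    folds the new 0/1 into the accumulator."""
    ops = [f"<{pubkeys[0]}>", "OP_CHECKSIG"]
    for key in pubkeys[1:]:
        ops += ["OP_SWAP", f"<{key}>", "OP_CHECKSIG", "OP_ADD"]
    # The talk phrases the final check as "sum >= threshold"; real
    # Miniscript uses <k> EQUAL, requiring exactly k valid signatures.
    ops += [f"<{k}>", "OP_GREATERTHANOREQUAL"]
    return ops
```

For a 2-of-3 this yields three CHECKSIGs and two ADDs, i.e. roughly the two-opcodes-per-check overhead mentioned above.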
Onto Miniscript. There we go. Forty minutes on why I hate script. I will actually try not to use the full two hours even though I have two hours of slides.
Q - Last question on script. It is very hard to judge Satoshi ten years on. There’s a lot of subtle complexity here. Do you honestly think back in Satoshi’s time, 2009 that you’d have had the perspective to design a better language?
A - The question is in 2009 could I have done better? Or could we have expected Satoshi to have done better? Essentially no. There are a couple of minor things that I think could’ve been done better with a bit of forethought. For some historical context the original Bitcoin script was never used. It was this idea for this smart contracting system and smart contracting was something in blog posts by Wei Dai, Nick Szabo and Hal and a few other people. It was this vague idea that you could have programmable money. Script was created with the intention that you be able to do that. Basically it was a copy of Forth, it actually had way more of the Forth language than we have today. We had all the mathematical opcodes and we had a bunch of other weird stuff. There was also this half baked idea for delegation where the idea was that your script is a script you execute but also the witness is a script that you execute. The way that you verify is you run both scripts in a row and if the final result is true you’re good, if the final result is false…. The idea is that your scriptPubKey which is committed to the coins should check that the earlier script didn’t do anything bad. There were a couple of serious issues with these, two that I will highlight. One was there was an opcode called OP_VER, OP version. I can see some grimaces. It would push the client version onto the stack. This meant that when you upgraded Bitcoin say from 0.1 to 0.2, that’s a hard fork. Now script will execute OP_VER and push 0.1 onto the stack for some people and 0.2 onto the stack for other people. You’ve forked your chain. Fortunately nobody ever used this opcode which is good. Another issue was the original OP_RETURN. Rather than failing the script like it does now it would pass the script. Because we had this two script execution model you could stick an OP_RETURN in what’s called your script signature. It would run and you wouldn’t even get to the scriptPubKey.
It would just immediately pass. You could take any coins whatsoever just by sticking an OP_RETURN into your scriptSig. Bitcoin was launched in the beginning of 2009. This was reported privately to Satoshi and Gavin in the summer of 2010. This was 18 months of you being able to take every single coin from Bitcoin. In a sketchy commit that was labelled as being like a Makefile cleanup or something, Satoshi completely overhauled the script system. He did a couple of interesting things here. One was he fixed the OP_RETURN bug. I think he got rid of OP_VER a little bit earlier than that. Another was he added all the NO_OP opcodes. Around this time if you guys are BitcoinTalk archivists you’ll notice that talk of soft forks and hard forks first appeared around this time. It would appear forensically that around this script update, the one that fixed OP_RETURN was the first time that people really thought in terms of what changes would cause the network to fork. Before that nobody was really thinking about this as an attack vector, the idea that different client versions might fork off each other. Either explicitly or because there is different script behavior. And so the NOP opcodes or the NO_OPs were added as a way to add updates later in the form of a soft fork. The fact that this happened at the same time as the OP_RETURN fix is I think historically very interesting. I think it reflects a big leap forward in our understanding of how to develop consensus systems. Knowing that historic context the original question was in 2009 could we have done better? The answer is no basically. Our ideas about what script systems should look like and what blockchains should look like and the difficulty of consensus systems, nobody had any comprehension of this whatsoever. The fact that there were bugs that let you steal all the coins for over a year tells you that no one was even using this, no one was even experimenting. 
It was just a weird, obscure thing that Satoshi put in there based on some Nick Szabo blog posts and nobody had really tried to see if it would fulfill that vision or not.
Q - Did he literally copy something from Forth or did he reproduce it selectively or precisely from a manual?
A - The question is did he literally copy from Forth. Or did he use a manual? I don’t believe there is any actual code copying from any Forth compiler or something like that. The reason I say that everything is copied from Forth is a couple of specific things. The specific things are the stack manipulation opcodes like OP_SWAP and OP_ROT, which rotates in a specific direction that isn’t useful for Bitcoin but is probably useful in Forth. All of the stack manipulation semantics seem to have come from Forth. These are just Forth opcodes just reinterpreted in a Bitcoin context. Beyond that I don’t know Forth well enough to say. There are a couple of specific things that are really coincidental that suggest that he was using the Forth language as a blueprint.
Q - He either copied something or he made something up. In the latter case he must have thought about it?
A - The statement is either he copied something or made something up. If he made something up he must have thought about it. I don’t think that’s true. I think he made up a lot of stuff without thinking about it.
Q - It accidentally worked?
A - That’s a very good point. Someone said it accidentally worked. There are a lot of things in Bitcoin that accidentally work. There’s pretty strong evidence that some amount of thought went into all of this weird stuff. There are a lot of accidentally works here. There are a lot of subtle bugs that turned out not to be serious but by all rights they should’ve been. I don’t know what evidence to draw from that. One idea is that Bitcoin was designed by a number of people bouncing ideas off each other but the code has a pretty consistent style or lack of style. It is all lost to time now.
Let me move onto Miniscript. Are there any more questions about script or its historical oddities? Cool. In practice what script is actually used for are signature checks, hashlock checks and timelocks. This is what Lightning uses, this is what atomic swaps use, this is what escrow things use like Bitrated. This is what split custody wallets use, this is what split custody exchanges like Arwen use in Boston. This is what Liquid uses. Anybody doing anything interesting with Bitcoin script, the chances are you’ve got something like this: maybe some timelock which is just a couple of lawyers with Yubikeys. All of these things fit into this paradigm of signature checks, hashlocks, timelocks and then arbitrary monotone functions of these. A monotone function just means that you’ve got ANDs and ORs and thresholds. You’ve got this and this or that and 3 of 5 of these different things. That’s all you have. An idea for a version of script that didn’t have all of these sharp edges and that allowed analysis is what if we just created templates of all these things? What if you as a user would say “I want to check signatures with these keys and I want these hashlocks and I want these timelocks and I want them in this particular order. I want these keys or this key and a timelock or this key and a hash preimage.” That’s what I want. That would be my script and I will just literally have a program that is a DAG or a tree of all these checks and various ways of combining them. Now you can reason very easily about the semantics of that. You just go through the tree, you traverse through the tree and check that every branch has the properties that you expect. If there was a way that we could create script fragments or script templates for these three primitives, these particular checks, and also create script fragments that represent ANDs and ORs and thresholds.
If we could do this in a way that you could just plug them into each other then we would have a composable script subset that was actually usable on the blockchain and we could reason about in a very generic way. That’s the idea behind Miniscript. As we’ll see there are a number of design constraints. The biggest one though is that we wanted something that was reasonably efficient. There is a naive way to do this where you take a bunch of CHECKSIGs and hash preimage checks and stuff. Then you write different ways of composing these and the result is that for every single combination you’re wasting 5 or 6 bytes. Just doing a bunch of opcodes to really make this composable to make sure no matter how your different subsets are shaped you can still do this reasonably. We didn’t want that. If you are wasting 5 or 6 bytes in every single one of your UTXOs that is going to add up in terms of fees. Even if it doesn’t add up someone is going to say that it adds up. You couldn’t do something in Lightning and gratuitously waste several bytes on every single UTXO just because it gives us this general tooling. Similarly in any other specific application you are not going to gratuitously waste these extra bytes because what do you get for this? If anyone was using this you’d get interoperability and you’d get standard tooling and all that good stuff. But no one is going to go first because it is less efficient than what they have. We really have to match or beat the efficiency of stuff that is deployed. As I said in another of my more public facing talks we did actually accomplish that really well.
Let’s go through all the problems that we encounter with this approach. Here is an example of our original vision for Miniscript. You see we’ve got an AND and some pubkey checks and we’ve got some ORs and there is a timelock thing going on. If you wrote this out as a tree you could clearly see where everything is. When there are a bunch of parentheses it looks like Lisp and it is hard to tell what matches what. You would take this thing and it would just map directly to script. One problem with this approach is that there are many different ways to write each fragment. There are a number of ways to do a key check. You can use a CHECKSIG opcode, you can use a CHECKSIGVERIFY opcode, you could use CHECKMULTISIG with one key or something weird. There are a couple of different ways to do your timelock checks and then there are many different ways to do ANDs and ORs and thresholds in Bitcoin script. There are a whole bunch of ways and I’ll talk about this in a little bit. Maybe that is not so bad though. Maybe we have 5 different ANDs and so instead of writing AND at the top line of that slide I have an AND and a tag like AND_1, AND_2, AND_3. Each one represents a different one. That’s not so elegant but it is still human readable, it is still very easy to reason about. It still gets you all these nice benefits of Miniscript. A related problem is that these different fragments might not be composable in a generic way. If I’ve got a CHECKSIG that is going to output zero or one depending on whether the signature is correct. If I’ve got a CHECKSIGVERIFY that’s going to output nothing or it is going to abort my script. There’s no AND construction that you can just plug either of those into. Your AND construction needs to know what is going to happen if your thing passes. You might have an AND that expects both of your things to either pass or abort the script. You can do an AND that way just by literally running each check in a row. If the first one fails your script aborts. 
If the second one fails your script aborts. Otherwise it just runs through. If your opcodes are leaving extra ones and zeros on the stack you can’t just compose them. You’d run the first one, deal with the one that is out there, maybe OP_SWAP, swap it out of the way or maybe move it to the altstack. Then run the other one and do an OP_BOOLAND or something to check that both are ok. That’s also fine, again we can just label things. Another problem though is when deciding between all these different ways to write ANDs and ORs there are trade-offs to make. Some of these have a larger scriptPubKey. They are larger to express. Some of them have larger satisfaction sizes. Maybe you have to push zeros or ones because they have IF statements in them. Some of them have larger dissatisfaction sizes. For some of them maybe you can just push a zero which is an empty stack element and it will just skip over the entire fragment. You don’t have to think about it. For other ones you’ve got to recurse into it and zero out every single public key. Your choice of which different OR to use… In an OR your choice of which fragments to use for the children of the OR depends very much on the probability that certain branches will be taken in real life. You have a branch that is very unlikely, you want to make its script size very small, you want to make its dissatisfaction size if that’s relevant very small and your satisfaction size you can make as large as you want because the idea is that you will never use it. If you are using it you are in emergency mode and maybe you’re willing to waste a bit of money. But now this means your branches need to somehow label their probability. Now the mapping between this nice elegant thing at the top, maybe with tags, in Bitcoin script is no longer two way.
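The probability-weighted trade-off can be sketched numerically. All sizes below are made-up illustrative byte counts, not real fragment sizes: the cost of a candidate compilation is its script size plus the expected witness size, weighted by how often each branch is taken.

```python
# Expected total cost of one candidate OR compilation:
# script size plus the probability-weighted satisfaction size.
def expected_cost(script_size, branch_costs, branch_probs):
    assert abs(sum(branch_probs) - 1.0) < 1e-9
    witness = sum(p * c for p, c in zip(branch_probs, branch_costs))
    return script_size + witness

# Two made-up candidate compilations of the same OR: one with a small
# script and a pricey "emergency" branch, one with balanced branches.
candidates = {
    "or_skewed":   expected_cost(35, [73, 140], [0.99, 0.01]),
    "or_balanced": expected_cost(40, [105, 105], [0.99, 0.01]),
}
best = min(candidates, key=candidates.get)
```

With a rarely taken branch, the skewed construction wins even though its emergency satisfaction is larger, which is the intuition behind attaching probabilities to branches.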
I remember in the very early days of Miniscript, I think in the first 36 hours we ran into this problem and Pieter was telling me that we need to have a separate language that we can compile to Miniscript. We’d end up with two languages that kind of look the same and I put my head in my hands and I said “No this is ruined. This is not what I want it to be.” And I had a bit of a breakdown over this because what an elegant idea that you could just have this language, you see the policy and that directly maps to script. You could pull these policies out of the script and now all of a sudden there is this weird indirect thing and Pieter for the entire time was trying to write this optimal compiler. I did not care about the compiler at all. I just wanted a nice cleaner way to write script and he was turning the compiler into this first class thing. Anyway… This is in fact where we landed. You’ll see on the next slide that the results are reasonably elegant. One final problem, there are a few more on the next slide, but the last problem on this slide is that when actually figuring out the optimal compilation from this abstract thing at the top to what we represent in script, there are optimizations that Miniscript, by design necessity, can’t handle. In particular if you reuse keys, the same key multiple times in some weird script. Maybe it appears as a check in one branch but then you have the same check in another branch. Maybe there is a way to rearrange your tree, to rearrange your branches so you get logically the same function but what you express in script is slightly different. Maybe the key appears once instead of twice or twice instead of once or something like that. There are two problems with this. One is that verifying that the result of these transformations is equivalent is very difficult. Given two programs or two logical predicates, prove that they are equal on all inputs. I believe that that is NP complete.
The only reason that it might not be is because of restrictions of monotone functions but I don’t actually think that changes anything. I think that’s actually program equivalence. Maybe it is halting complete. It is definitely not halting complete for Bitcoin script because you don’t have loops. It is hard.
Q - Isn’t the motivation for Miniscript to make that possible?
A - Yeah the original Miniscript motivation was to make this kind of analysis possible. So what we said was basically don’t reuse keys. We cannot do what we call common subexpression elimination in Miniscript. Basically if we wanted to do these kinds of advanced optimizations we would lose the simplicity and analyzability of Miniscript. As far as we’re aware nobody is really champing at the bit to do this. All of our Blockstream projects work with this current paradigm. So does Lightning, so do the atomic swap schemes that people are doing. So do escrow schemes, so do various split custody schemes. Everything that we’re aware of people doing will work in this paradigm. If our goal is to get enough uptake so that there will be standard tooling… Ultimately the goal is to have standard tooling because that is the benefit of Miniscript: you write a script in Miniscript, you grab something off the shelf and it will deal with your script. It will give you fee estimations, it will create your witnesses for you, you can use it in PSBT with other things, you can coinjoin with other things, you don’t need to write a whole bunch of script munging code for it. These optimizations would in theory get you stuff but it would be very difficult and ad hoc to implement and it would break these properties that we think are important for adoption. I threw it in there but we’re actually not going to fix it. Maybe there will be a Miniscript 2 that would not really resemble Miniscript that could do this.
Q - I saw a conversation on Reddit between you, Pieter and I think Greg as well that said the current Lightning script had no analogue in Miniscript. Something about a HTLC switching on a pubkey hash or something like this? Is that still relevant?
A - This is really cool. The question is about Lightning HTLCs not having any analogue in Miniscript. So in Lightning HTLCs you have a pubkey hash construction and if you reveal the preimage to a certain hash, this needs to be a public key and the signature check is done with that public key. If you don’t reveal a preimage to that hash then you switch and do something different instead. In the original version of Miniscript we had no notion of pubkey hashes so you couldn’t do this. Then in May of last year Sanket Kanjalkar joined us at Blockstream as an intern and he said “What if we do a pubkey hash construction?”. There were a lot of head-in-my-hands reasons not to do this because of complexity. But Sanket showed that if we add this we would need to change up our type system a little bit but then we would get a Miniscript expressible version of Lightning HTLCs. There are two, one you keep on the local side and one that’s remote. On one of them we saved 7 bytes and on the other we saved 17 bytes versus the deployed Lightning HTLC. We actually went from being unable to express what Lightning does without unrolling it into a very inefficient thing, to having a more efficient way to express what Lightning does in Miniscript. It is still incompatible with what Lightning actually did. One goal of ours was that you could take the HTLCs that are defined in BOLT 3 and just interpret those as Miniscripts. We can’t do that because of this switch behavior. Because Miniscript doesn’t really have a way to say if a preimage is present use it as a public key otherwise use it as a reason to do something else. I think adding that to Miniscript would be technically possible but it would add complexity to Miniscript and all we would gain is the ability to interpret HTLCs as Miniscripts. It is unclear where the value is to either Miniscript or Lightning because Lightning already has a bunch of special purpose tooling.
The idea behind Miniscript is that you are trying to express a condition in which coins can be spent. Whereas the idea behind a Lightning HTLC is you are trying to use the blockchain to constrain an online interactive protocol. Those are conceptually different things. It probably doesn’t make sense to use Miniscript inside of Lightning unless it was really easy to do. You would save some code deduplication but you need so much extra code to handle Lightning properly anyway that the script manipulation stuff is kind of a minor thing.
Q - It might be useful if one side of the channel is a multisig set up and doesn’t want to bother the other side with changes to the multisig set up.
A - The core developer in the back points out that it would be useful if you could have a Lightning HTLC where the individual public keys were instead complicated policies. That might be multiple public keys or a CHECKMULTISIG or something and that’s true. If you used Miniscript with Lightning HTLCs then you could have two parties open a channel where one person proposes a HTLC that isn’t actually the standard HTLC template. It is a HTLC template where the keys are replaced by more interesting policies and then your counterparty would be able to verify that. That’s true. That would be a benefit of using Miniscript with Lightning. There is a trade-off between having that ability and having to add special purpose Lightning things into Miniscript that would complicate the system. Maybe we made the wrong choice on that trade-off and maybe we want to extend Miniscript.
Q - It depends if the scripts are going to be regularly redesigned or if there are going to be different alternative paths or if there are contracts on top of Lightning. I know Z-man has talked about arbitrary contracts where there potentially could be scripts or Miniscripts on top of Lightning.
Q - I think you can use scriptless scripts to express everything you want on top of Lightning.
A - There is another thing: Pedro, Guido, Aniket and I have this scriptless script work that I think is public now or will be soon where you can do all sorts of really complicated policies with scriptless scripts. If we are going to overhaul Lightning anyway, what if we move to scriptless scripts instead of Miniscript? That’s kind of moving in the opposite direction, towards more complicated, interactive things rather than off the shelf static analysis things. That’s another observation. The impression I get from the Lightning community is that there is more excitement about the scriptless script stuff, which has a whole pile of other privacy and scalability benefits, than there is excitement about Miniscript. I think this maybe comes down to Miniscript being targeted at coins that mostly sit still. Maybe. Or maybe they don’t mostly sit still, but Miniscript is about spending policies rather than constraining a complicated protocol, is my view of the way different people think about this.
Q - I did see that Conner and the lnd team had used Miniscript to optimize some new Lightning scripts. Did you see that?
A - Cool, I did not. The claim is that Conner used Pieter’s website or something to find a more optimal Miniscript.
Q - They saved some bytes on a new Lightning script.
A - I’ve heard this from a couple of people. Apparently Arwen which does a non-custodial exchange had a similar thing where they encoded their contract as a policy and then ran it through Pieter’s website and they saved a byte. It has happened to us at Blockstream for Liquid. We had a hand optimized script in Liquid that is deployed on the network now and Miniscript beat it which is kind of embarrassing for all of us because we’re supposed to be the script experts aren’t we? Where we landed is we don’t have explicit Lightning support. I could be convinced to add it, it is just a pain in the ass. It will lengthen the website by like 20 percent. It has a trade-off in uptake and complexity and so forth but we could be convinced to add it. We’re also not going to solve this common subexpression thing. That’s beyond adding a few things and that’s really changing how we think about this.
Another category of problems here that I’m going to call malleability. As a quick bit of background, malleability is the ability for a third party to change the witness that you put on a transaction. Prior to SegWit if somebody did this it would change your transaction ID and completely break your chain of transactions and it was just a huge mess. This was very bad. After SegWit the consequences were less severe. If somebody is able to replace your witness with another valid witness they may be able to mess up transaction propagation or block propagation because it will interfere with compact blocks. It may also change the weight of your transaction in a way that your transaction is larger than you expected so the fee rate that you put on it is going to be higher than the fee rate that the network sees. You’re going to wind up underpaying because some third party added crap to your witness. We really want things that Miniscript produces to be non-malleable. An interesting observation here is that malleability is not really a property of your script as much as it is a property of your script plus your witness. The reason being when you create a full transaction with witnesses and stuff the question you ask is can somebody replace this witness with a different one? It turns out that for many scripts there are some witnesses you can choose for which the answer is yes but there are other equivalent witnesses that you can choose for which the answer is no. That’s something where there’s a lot of complexity involved that we didn’t expect going into this. That’s kind of a scary thing because when there is complexity in satisfying scripts that we didn’t even think about before Miniscript, this means that this has been hiding in Bitcoin script all along and every usage of script has had to accidentally deal with this without being able to think clearly. There’s some complexity about what malleability means.
One thing is that the ability of third parties to change things depends a bit on how much they know. Do they know the preimage to some hash that is out there for example and can they stick that on the blockchain? If you have a timelock somewhere in your script and this is covered by a signature then the adversary can’t change it because timelocks are encoded in your sequence or locktime field. Those are signed. If it is not covered by signatures then maybe somebody can change that and then change branches. It is actually a little bit difficult…
Q - You don’t have a sighash to not sign timelocks?
A - That’s correct. We don’t have a sighash that won’t sign timelocks but you can create Miniscripts that have no signatures and then there is a form of malleability that appears.
This is a little bit difficult to reason about although eventually Pieter and I settled on something that is fairly coherent. A long argument I had with Pieter was about whether it counted as malleability if other participants in your transaction are able to sign things. I wanted them to mark certain keys as being untrusted when we consider signatures appearing out of nowhere. I think Pieter won that, I’m not going to rehash that entire argument. I think there’s maybe some simplicity of reasoning where you can say “This key is untrusted” and then use all your standard malleability analysis to decide what’s the worst that can happen to your transaction that might be useful in some cases. But I think that’s a technical thing. Pieter argued that that’s an abuse of the word malleability. Maybe I can share algorithms but I shouldn’t be arguing this in public so I won’t.
Miniscript and Spending Policies
This covers some simple things. To make our task tractable here we are going to assume standard policy rules. I talked about SIZE EQUALVERIFY earlier. There’s a policy rule in Bitcoin, meaning a standardness rule that affects transactions on the network, called minimal IF. That means that what you put as input to an IF or a NOTIF has to be zero or one. So you do not need SIZE EQUALVERIFY, the nodes will enforce that SIZE EQUALVERIFY is done. We’re going to assume that standardness. We’re going to say that if something is malleable but you have to violate standardness to malleate it then that doesn’t count for our concerns about transaction propagation. If your only potential attackers are miners, miners can already mess up transaction propagation by creating blocks full of secret transactions and stuff like that. We’re not going to deal with that, and that will save us a whole bunch of bytes in almost every fragment. In particular enforcing that signatures either have to be valid or they have to be empty is very complicated and wasteful to do directly in script. But there’s a standardness rule that requires signatures to be either empty or valid. As I mentioned we don’t have common subexpression elimination. It would conflict with this model where you have a tree that you can iterate over, where clearly your script corresponds to some tree and vice versa. To retain that we can’t be doing complicated transformations. We also assume that no key reuse occurs, for malleability reasons. Here’s something where you have two branches and the same key is there: if you provide a signature for one branch then, absent using OP_CODESEPARATOR say, which is also expensive, somebody could take that signature from one branch and reuse it in another branch. Now there’s potentially a malleability vector where somebody could switch branches where an OR statement is being used.
I think we have to assume no key reuse because in general, if you can have arbitrary sets of keys that intersect in arbitrary ways in conjunction with thresholds, it seems intractable. It doesn’t feel NP-hard but we don’t know how to do it in subexponential time so it is possible that it is. Those are our three ground rules for Miniscript.
We are also going to separate Miniscript from the policy language. Miniscript directly corresponds to script; it is just a mapping between script and something human readable. On top of that we have this policy language. The policy language basically has probabilities that different branches are taken. Miniscript has weird tags like AND_1, AND_2, AND_3 and so on. They have better names than that. The policy language has probabilities. Given a policy you can run it through a compiler that will produce a Miniscript. It will look at your probabilities, it will decide what the optimal choice of a specific Miniscript fragment is, and it will output a Miniscript which is clearly semantically identical to the original. This is very easy to check by the way. You take your policy and delete the probabilities. You take your Miniscript and delete all the tags and those will be equal. That’s how you check that your compiler output is correct. That’s a really powerful feature. You can take stuff off the blockchain, decode it into a Miniscript, lift it into a policy by deleting all the noise and then you can see immediately what it is doing. But Miniscript is where all the technical complexity is. We have all these different tags that need to compose in certain ways. This is what I’m going to get into that I haven’t really gotten into in any of my public talks. Here’s a policy; rather than writing it in the Lisp format I drew this pretty picture like a year ago that I’ve kept carrying around. Here’s the same thing compiled to Miniscript. At the bottom of the slide I’ve written the actual script out in opcodes, but you can see the difference between the two. There are two differences. One is that I’ve attached all these tags to the ANDs and ORs. There’s an AND_v and an OR_c, which I think are the current names for them. The other interesting thing is the pk on the left, checking pubkey 1: it added a c there. That’s saying there’s actually a public key there, but also a CHECKSIG operator stuck after it.
Then there’s jv: the v means there is an OP_VERIFY at the end of this hash check, and the j means there is some complicated check that basically skips over the whole thing if you give it an empty string, meaning no preimage, and otherwise does the check. That j is needed for anti-malleability protection. Then we’ve got these b’s and v’s and so on which I will explain in the next slide. These are types. Miniscript, unlike script, has a type system. To have a valid Miniscript you need these types and you need to do checks.
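The lifting check described above — delete the probabilities from the policy, delete the tags and wrappers from the Miniscript, and compare — can be sketched in a few lines. This is a toy illustration only: the tuple tree encoding and the fragment spellings are invented for the example, not the real library’s data structures.

```python
# Toy sketch: check a compiler's output by "lifting" both sides.
# Lifting a miniscript deletes the wrapper/variant tags; lifting a
# policy deletes the branch probabilities. If the compiler is correct,
# the two lifted trees are equal.

def lift_miniscript(node):
    """Strip tags: ('and_v', a, b) -> ('and', ...); 'c:pk(K)' -> 'pk(K)'."""
    if isinstance(node, str):
        # drop wrapper prefixes like "c:" or "jv:" from leaf fragments
        return node.split(":")[-1]
    op = node[0].split("_")[0]          # and_v -> and, or_c -> or
    return (op,) + tuple(lift_miniscript(c) for c in node[1:])

def lift_policy(node):
    """Delete probabilities: ('or', (9, a), (1, b)) -> ('or', a, b)."""
    if isinstance(node, str):
        return node
    if node[0] == "or":
        return ("or",) + tuple(lift_policy(sub) for _, sub in node[1:])
    return (node[0],) + tuple(lift_policy(c) for c in node[1:])

# A policy with an OR where one branch is 9x likelier than the other
policy = ("or", (9, ("and", "pk(1)", "sha256(h)")), (1, "pk(2)"))
# What a compiler might emit: same shape, with tags and wrappers added
miniscript = ("or_c", ("and_v", "c:pk(1)", "jv:sha256(h)"), "c:pk(2)")

assert lift_policy(policy) == lift_miniscript(miniscript)
```

The point of the sketch is that correctness of the compiler reduces to a trivial structural comparison, exactly as described in the talk.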
Miniscript Types (Correctness)
Let me justify that real quick. As I mentioned you’ve got some opcodes like CHECKSIG that push zero on the stack on failure and one on success. You have other ones like CHECKSIGVERIFY which push nothing on success and abort your script on failure. Then you have all these wrapped expressions. There are the composition ones, the ANDs, ORs and thresholds, but there are also these j’s and c’s and v’s, all these little wrapper things that manipulate your types, and they might behave differently depending on whether you’ve given them a CHECKSIG or a CHECKSIGVERIFY, or whether you’ve given them something that can accept a zero as its input or can’t ever accept zero as an input, or something like that. Basically we have these four base types, B, V, K and W. B is just base, that is what we call it. This is something that has a canonical satisfaction that will result in pushing a nonzero value on the stack. If it can be dissatisfied it will push a zero on the stack. Then we have this V which has a canonical satisfaction that will push nothing on the stack. There are no rules beyond that. If you do something wrong it will abort. Basically all these things have this caveat: if you do something wrong it will abort. There is this weird one K. Whether you satisfy or dissatisfy it, it is going to push a key onto the stack. Whether you satisfied or dissatisfied will propagate in an abstract way up to your CHECKSIG operator, which will turn that key into either a zero or a one. Then you’ve got this weird one W which is a so called wrapped base. This is used for when you are doing an AND or an OR and you’ve got a whole bunch of expressions in a row. Basically what a wrapped expression does is take the top element of your stack, which it expects to be some sort of accumulator, 1, 2, 3, 4, 5, some counter or whatever.
It will move the accumulator out of the way, execute the fragment, bring the accumulator back and somehow combine the result of your subexpression with the accumulator. Typically it does this by moving the accumulator to the altstack, moving it back when you’re done and combining them using BOOLAND, BOOLOR or OP_ADD in some cases. Then there are these minor type modifiers. We’ve got five of them. I was going to go to the website and read through all the detailed justifications for these but I think I’m not going to do that. They are minor things; for example the o means “one”: this is a fragment that takes exactly one thing from the stack. This affects the behavior of certain wrappers. The idea behind all of this is that if you have a Miniscript program you can run it through a type checker that assigns all of these tags and type modifiers to your program. If your top level program, if this top AND here, is a B you have a valid Miniscript. If it is not a B it is an invalid Miniscript, all bets are off, probably it can’t be satisfied. This type check has very simple rules. There are linear time, very straightforward rules for propagating these types up. There is a giant table on Pieter’s website which describes exactly what these mean and what the rules for propagation are. You as a user don’t really need to care about that. All you need to know is that when you run the type checker, if you get a B at the top you’re golden; if you don’t, you’re not, throw it out. If you get your Miniscript by running our compiler then it will always type check because our compiler enforces that type checking happens. All of these complicated questions of correctness are now wrapped up entirely in this type system. You have four base types and five modifiers.
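As a rough illustration of what such a type checker looks like, here is a toy sketch covering a handful of fragments. The propagation rules shown (and_v, or_b, and the c/v/a wrappers) are simplified from the real table on the website, and the tree encoding is invented for the example.

```python
# Toy correctness type checker for a tiny miniscript-like tree.
# Base types: B (base), V (verify), K (key), W (wrapped base).
LEAF_TYPES = {"pk": "K", "sha256": "B", "older": "B"}

def type_of(node):
    """Return the base type (B/V/K/W) of a node, or raise if invalid."""
    if isinstance(node, tuple):
        op, children = node[0], [type_of(c) for c in node[1:]]
        if op == "and_v":             # VERIFY-style AND: type of the second arg
            assert children[0] == "V"
            return children[1]
        if op == "or_b":              # B plus a wrapped B, combined with BOOLOR
            assert children == ["B", "W"]
            return "B"
        raise ValueError(f"unknown combinator {op}")
    # leaf with wrappers applied right to left, e.g. "vc:pk(key)" = v(c(pk(key)))
    wrappers, _, frag = node.rpartition(":")
    t = LEAF_TYPES[frag.split("(")[0]]
    for w in reversed(wrappers):
        if w == "c":                  # append CHECKSIG: K -> B
            assert t == "K"
            t = "B"
        elif w == "v":                # append VERIFY: B -> V
            assert t == "B"
            t = "V"
        elif w == "a":                # TOALTSTACK ... FROMALTSTACK: B -> W
            assert t == "B"
            t = "W"
        else:
            raise ValueError(f"unknown wrapper {w}")
    return t

# A valid miniscript type checks to B at the top level.
ms = ("and_v", "vc:pk(key1)", ("or_b", "sha256(h)", "ac:pk(key2)"))
assert type_of(ms) == "B"
```

This is the whole user-facing contract described above: run the checker, and if a B comes out at the top the script is valid.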
Miniscript Types (Malleability)
We’ve got another type system in parallel for malleability. This also was something where we got pretty far into the project before we realized that we needed to separate these things out conceptually. Malleability, as I mentioned, is the ability for a third party to replace a valid witness with another valid witness. One complicated thing that we didn’t realize until later is that we have a notion of canonical witnesses that we expect honest signers to use. Say you have an OR, two branches: either satisfy this subscript or this other subscript. There’s some form of OR where you try to run both of them and then you check that one of them succeeded. Any honest user is just going to pick one or the other, satisfy it and dissatisfy the other one. Somebody being dishonest could satisfy both and this would still pass the rules of Bitcoin script. When we are thinking about malleability we have to consider dissatisfactions like that. That was annoying. There was suddenly this whole wide set of possible behaviors in Miniscript that we had to consider when doing malleability reasoning. The bad things about malleability: it affects transaction propagation and it affects fee estimation.
What questions can we ask? Can we ensure that all witnesses to a script are non-malleable? This is the wrong question to ask, it turns out. I mentioned that malleability is a property of witnesses, not scripts, but even so it is not necessary that every possible witness be non-malleable. What we want is this second thing in red here. Basically no matter how you want to satisfy a script, no matter which set of public keys or hash preimages or whatever, you want there to be some witness you can create, even if it is a bit expensive, that will be non-malleable. Our compiler will ensure that this is true. If you give it a policy it will make sure every single possible branch of that policy has a non-malleable witness that can be created. This was quite difficult and non-trivial. This was the source of a lot of security bugs that I found when auditing random crap that I found on the blockchain. As it turns out, curiously, the Tier Nolan atomic swap protocol from 2012, 2013 is actually ok and Lightning is actually ok. That was surprising. I think the Lightning folks got lucky in some ways, though there was a lot of thought that went into Lightning. The way that we are thinking about this here is a very structured way where you can even reason about this in an automated fashion. It is great that we can do that now. It provides a lot more safety and assurance. What I found very frequently when trying to use Miniscript internally at Blockstream to optimize Liquid and optimize various things I was doing: I was like, I have Miniscript, I can do all these powerful things, now I can optimize the hell out of my script and fearlessly do things. Every time I would find an optimization my analysis tools and the compiler would reject it. I’d be like, this is broken, another bug in Miniscript. Pretty much every time it was not a bug in Miniscript. I would chase through the logic; this is a difficult problem of how to get good error messages.
When I’d trace through the logic of why it was failing it would come down to me failing one of these malleability properties. I would find some attack that was actually being prevented by the Miniscript analyzer warning me: “Hey, you’ve got a bare hash preimage sitting over here. If that hash becomes public someone can throw away the rest of the witness and use that, for example.” We have these three extra type modifiers. There are four base types and five modifiers that represent correctness. We have three more for malleability. s means there is a signature involved somewhere in the satisfaction. It used to stand for strong, then it stood for safe, all these weird notions. Now it is just: there is a signature, and that captures the properties that we need. There’s this weird thing called f, for forced. What that means is there is no way to dissatisfy it, at least not without a signature. No third party can dissatisfy something that is an f. Then there is an e which is in some abstract sense the opposite of f. With e you have a unique dissatisfaction that has no signature in it. A third party can use that unique dissatisfaction, but if you dissatisfy it then that’s the only dissatisfaction. If you are not satisfying this then a third party can’t do anything about it: they see the dissatisfaction, they have no other choices.
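To make the s property concrete, here is a toy sketch of one plausible propagation rule: an AND needs only one signed half to pin the whole witness, while an OR only counts as signed if every branch is, since a third party may pick whichever branch it likes. This is a simplification of the real rules in the table on the website, and the tree encoding is invented for illustration.

```python
def has_sig(node):
    """Does every way of satisfying this toy policy node require a signature?"""
    if isinstance(node, str):
        return node.startswith("pk")   # pk(...) leaves need a signature
    op, subs = node[0], node[1:]
    sub_sigs = [has_sig(c) for c in subs]
    if op == "and":
        return any(sub_sigs)   # one signed half pins the whole witness
    if op == "or":
        return all(sub_sigs)   # an attacker may pick either branch
    raise ValueError(op)

# Timelock-or-timelock: neither branch is signed, so the OR is not either.
assert not has_sig(("or", "older(100)", "older(200)"))
# An AND with a key check is fine: the signature covers the choice.
assert has_sig(("and", "pk(A)", ("or", "older(100)", "older(200)")))
```

The first assertion is exactly the malleable pattern in the timelock discussion that follows: an OR where no branch carries a signature.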
I’m going to bump over to the website in a second. It has got a table of what these rules mean, how they propagate, what the justifications for them are, and it also has a summary of this slide here, which is how we produce a signature. I’ve been hinting at various malleability vectors. There is a question. Suppose you have a Miniscript that is something like “take either this timelock or this timelock and combine it with a public key.” Is this malleable? It is actually not because the signature is going to cover the specific… this is complicated. I should have chosen a simpler example. The signature will cover the locktime but if the locktime is high enough that it actually satisfies both branches then if you’ve got like an OP_IF…
Q - If it is an already spent timelock or a timelock in the past?
A - Yes. If you try to use the later timelock then you put a timelock that will be greater than both restrictions. Now if you’re switching between those using the OP_IF or something a third party can change your zero to a one and swap which branch of the OP_IF you’re using and that’s a malleability vector.
Q - The transaction is going to fail on the locktime check in the script evaluation?
A - In this example I’m assuming that the locktime is satisfied. There exists a branch in your script where, if somebody honestly tries to take it, the result will be a malleable witness. You’re trying to take the later locktime and some third party can change it so you’re taking the earlier locktime. We spent a lot of time iterating to try to prevent this from happening. It is actually quite difficult to reason through why exactly this is happening because, as the Lightning developer said, you do have a specific locktime in the transaction that is covered by the signature, so where does the malleability come from? The answer in Miniscript parlance is that you have an IF statement where neither of the two branches has this s property, a signature, on them. If you ever have an IF statement or a threshold where there are multiple possible paths that don’t have the s on them, that’s illegal and you have to dissatisfy it. You aren’t allowed to take either timelock because if you do the result is going to be malleable. Then if you have to dissatisfy it and you don’t have the e property, probably your whole script is going to be malleable, because even after dissatisfying, missing e means that somebody else can dissatisfy some other way. So actually all of these rules have simple propagation rules that are very difficult to convince yourself are correct but that catch all this complicated stuff. So at signing time how do we avoid this? How do we say “I’ve got this weird branch here where maybe it is a timelock or a hash preimage, and now I need to think: I’m only allowed to take the hash preimage if the timelock has not yet expired”? If I try to use a hash preimage after the timelock has expired, a third party can delete the hash preimage, take the timelock branch, and there’s a malleability vector. So how does my signing algorithm reason about this? How can I actually produce a valid signature and not get any malleability under these constraints?
The answer is basically you recursively go through your Miniscript trying to satisfy every subexpression. Then you try to satisfy the AND or the OR, however these are combined and you propagate all the way to the top. Whenever you have a choice of satisfactions you look at what your choices are. If they all require a signature that is great, we don’t have to think about malleability because nobody can change those, they would need a signature to change things out. We assume no signature reuse, we assume that third parties don’t know your secret keys and so on.
Q - No sighash?
A - In Miniscript we only use SIGHASH_ALL. That’s a good point from the audience. Different sighash modes might complicate this but in Miniscript we only use SIGHASH_ALL.
If they all require signatures, that’s great, you just take the cheapest one. If there is exactly one possibility that doesn’t require a signature, now you need to be careful because a third party can go ahead and take that. Now you can’t take any of the signature ones, you have to take the one that the third party can take. The reason being that you have to assume a third party is going to do that to you, so you have to get in front of them. You just don’t publish any of the signature requiring ones. This may actually result in you creating a more expensive satisfaction than you would if you didn’t care about malleability. There may be some context where this is safe and you want to use a malleable signer, maybe you could save a few bytes, but you really need to be careful doing that. Finally if there are multiple choices that don’t have a signature then you’re stuck, because no matter what you choose a third party is going to have some option. That’s all there is to this. You have these rules for e, f and s that propagate through. Using these rules at signing time you apply these three points and you can recursively generate a signature. That’s pretty great. This was a really complicated thing to think about and to reason through, and it is really cool that we got such a straightforward algorithm that will handle it for you. When I talk about our analysis tools catching me doing dumb stuff this is what I mean. I mean I would try to sign and it would refuse to sign. I would trace through and I’d be like “What’s wrong? Did I give it the wrong keys? What is going on?” Eventually I would find that somewhere I would have a choice between two things that didn’t have signatures and it was barfing on that. That’s really cool, something that has actually saved me in real life. Only saving me from wielding the power Miniscript gives me in unsafe ways. That is a good thing.
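The three signing-time rules just described can be sketched directly. This toy function looks at the candidate satisfactions of a single choice point as (weight, needs_signature) pairs; the names and encoding are hypothetical, not the real signer’s API.

```python
# Sketch of the signing-time rule from the talk, for one OR/threshold:
#  - all candidates require a signature: take the cheapest;
#  - exactly one needs no signature: take it, a third party could
#    swap to it anyway, even if it costs more;
#  - several need no signature: refuse to sign, every choice is malleable.

def pick_satisfaction(candidates):
    """candidates: (weight, needs_sig) pairs for one choice point."""
    no_sig = [c for c in candidates if not c[1]]
    if not no_sig:
        return min(candidates)   # all pinned by signatures: cheapest wins
    if len(no_sig) == 1:
        return no_sig[0]         # forced to take it, even if more expensive
    raise ValueError("malleable: multiple signature-free satisfactions")

# All choices signed: take the cheapest.
assert pick_satisfaction([(73, True), (105, True)]) == (73, True)
# One unsigned choice: must take it, getting in front of any third party.
assert pick_satisfaction([(73, True), (33, False)]) == (33, False)
```

The "refuse to sign" branch is exactly the behavior described below: the tools barf when there is a choice between two satisfactions that carry no signature.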
Q - Does the tooling have a malleable mode? If I wanted to make it malleable because I’m a miner…
A - The question is does the tooling have a malleable mode. I think Pieter’s code does. We might have an open issue on rust-miniscript to add this. It is a little bit more complicated than just disabling the checks, because you might also want to take some of these non-canonical satisfactions in that case, which you can imagine being a little bit cheaper. I think Pieter implemented it but Sanket and I didn’t. There will be one; it is a design requirement for our finished Miniscript library that you can enable malleability if you want.
I will link to the website. I was going to go through this in more detail but I can see that I’m burning everyone out already.
Q - Shall I get the website up?
A - Let me run through, I’ve only got a couple more slides. Let me run through my last subject here which is fee estimation which I should’ve moved to sooner. Malleability was the most taxing. Let’s think about fee estimation. I will be a bit more high level and a bit more friendly for these last few slides.
Miniscript: Fee Estimation
Fee estimation requires similar reasoning to malleability. You’ve got to go through your transaction and figure out the largest possible witness size that might exist on these coins. That largest possible witness size is what I have to pay for. If I’m in control of all the keys I can do better. I can say “I’m not going to screw myself” so I should actually find the smallest satisfaction that uses the keys that I have. But if there are other third parties who might be contributing I have to consider the worst case of what they do. They might be able to sign, they might not. Maybe that affects how efficiently I can sign something. Regardless, there is a simple linear time algorithm I can go through where I just tell it which keys are available and which aren’t. I think I have to assume each key is either available or not before it gets complicated. Simple linear time algorithm, it will output a maximum. I know the maximum size of my witness and that is how much transaction weight I should pay for when I’m doing my fee estimation. This is great by the way. This was the worst thing about Liquid. We had half of our development effort going into figuring out fee estimation. In Liquid we have the ability to swap out quorums. We have coins in Liquid, some of them controlled by different scripts than others. We have this joint multisig wallet trying to do fee estimation and cross-checking each other’s fee estimation on coins with complicated scripts that potentially have different sizes of witnesses. With Miniscript we just call a function, we get the maximum witness size and it does it for us. Miniscript was really just Pieter and I going off on a bender, but we really wouldn’t have been able to do Liquid without it. We got lucky there.
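A toy version of that linear time walk, with placeholder sizes (roughly 73 bytes for a signature, 33 for a hash preimage push, 1 for a small branch selector) and an invented tree encoding, might look like this. Real code charges exact bytes per fragment and per satisfaction shape.

```python
# Rough placeholder satisfaction sizes per leaf fragment, in bytes.
SAT = {"pk": 73, "sha256": 33, "older": 1}

def max_sat(node):
    """Largest satisfaction cost, assuming every key might end up being used."""
    if isinstance(node, str):
        return SAT[node.split("(")[0]]
    op, subs = node[0], node[1:]
    if op == "and":
        return sum(max_sat(c) for c in subs)
    if op == "or":
        # IF-style OR: one byte to pick a branch, worst branch assumed
        return 1 + max(max_sat(c) for c in subs)
    raise ValueError(op)

# A toy atomic-swap-like policy: cooperative spend, preimage spend, timeout.
swap = ("or", ("and", "pk(A)", "pk(B)"),
              ("or", ("and", "pk(A)", "sha256(H)"),
                     ("and", "pk(B)", "older(144)")))

# Worst case is the two-signature cooperative branch.
assert max_sat(swap) == 147
```

The output is the overestimate you budget for; as the audience exchange below notes, overshooting is fine, undershooting can stall propagation.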
Q - They don’t need to try to maximize it [witness size], they just need to make it bigger than the honest signers’?
A - That’s correct. You need an overestimate, that’s what you need. You can overshoot that is perfectly fine but if you undershoot you might be in trouble because the transaction might not propagate.
If you run through your script and do your type inference and your top level thing has an s on it… If you run through the type checker and it says it is non-malleable, you don’t have to think about malleability, that’s awesome. If you’re being weird and you want to have potentially malleable scripts then that will complicate your fee estimation, but again it is doable, you can run through this in linear time. Let me talk a bit about how malleability and fee estimation interact. This is one security vulnerability I found on the blockchain. You could argue that this is irresponsible disclosure but I’m saying it is on the blockchain and giving a script that is not even the exact script. If you guys can find this, you could have found it without me. Suppose you have this atomic swap thing. You’ve got these coins that can either be spent by two parties cooperating, or spent by one party by revealing a hash preimage, or after a timeout the second party can use a timelock. The issue here is that Party B is not required for the preimage branch at all. Suppose the timelock expires. Party B creates a transaction using the timelock case and publishes it to the network. Party A sees this, eclipse attacks Party B, and replaces the witness that was using the timelock with one that uses the hash preimage. Party A is actually following the atomic swap protocol by using the hash preimage, but at this point Party B has given up; they are no longer waiting, they’re trying to back out with the timelock. Party A uses the hash preimage to increase the weight of the transaction, causing it to no longer propagate and no longer get into the next block. Then Party A can wait for the timelock on the other side and steal the coins. There are two solutions to this using Miniscript. One is that Party B, when doing fee estimation, should consider that A’s key is not under his control and expect worst case behavior from A.
That will do the right thing because Party B will wind up paying a higher fee but they’re paying a higher fee to prevent this attack.
Q - The real attack is tracking the transaction in the mempool. I can always drop the signature for the witness?
A - The real attack here is that I’ve gotten the transaction in the mempool with a specific transaction ID but with a lower fee rate.
Q - You don’t track by witness transaction ID right now?
A - Yes that’s correct and I don’t think we want to. If the network was looking at wtxid’s that would also prevent this kind of thing.
Another solution is when creating the transaction Party B could go over it and ask the question “Is my signature required on every branch of this?” In the standard Tier Nolan atomic swap the answer to that question should be yes. The party who has a timelock should have to sign off on all branches. That is very easy to check with Miniscript. In linear time you just scan through the script, look at every branch and ask “Is my key in every branch?” Party B does this and if Party A had proposed this broken thing, B would reject it out of hand. There are a number of ways that we can address this concern and Miniscript gives you the tools to do them. That’s good. This was something that I hadn’t really considered. It is a little bit contrived but not a lot contrived. You can only decrease the fee rate by a little bit, and your goal is to decrease it by enough that you stall the transaction for so long that a different timelock expires, while the other party doesn’t notice that you put a hash preimage they could use to take coins in the mempool. It is a little bit contrived, but you can imagine it reflecting a wider issue where you make incorrect assumptions about how your witness size might grow because other parties are not cooperating with you. Miniscript gives you the tools to reason about that.
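The “is my key on every branch” scan is easy to sketch: my key is on every path of an AND if it is in either half, and on every path of an OR only if it is on every branch. The tree encoding and names are invented for illustration; the broken tree below mirrors the swap from the previous paragraph.

```python
def key_on_every_path(node, key):
    """Does pk(key) appear on every spending path of this toy policy tree?"""
    if isinstance(node, str):
        return node == f"pk({key})"
    op, subs = node[0], node[1:]
    if op == "and":
        return any(key_on_every_path(c, key) for c in subs)
    if op == "or":
        return all(key_on_every_path(c, key) for c in subs)
    raise ValueError(op)

# Swap-like policy where the preimage branch does not require B's key.
broken = ("or", ("and", "pk(A)", "pk(B)"),
                ("or", ("and", "pk(A)", "sha256(H)"),
                       ("and", "pk(B)", "older(144)")))
# Repaired version: B must also sign on the preimage branch.
fixed = ("or", ("and", "pk(A)", "pk(B)"),
               ("or", ("and", ("and", "pk(A)", "sha256(H)"), "pk(B)"),
                      ("and", "pk(B)", "older(144)")))

assert not key_on_every_path(broken, "B")  # B should reject this out of hand
assert key_on_every_path(fixed, "B")
```

The same linear scan, done against the real Miniscript tree, is the check the talk describes.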
So I’ve been talking about keys being trusted or untrusted up to this point. If a key is yours you can assume that you are always going to do the best possible thing. If the witness is smaller by signing with it you’ll sign with it. If the witness is smaller by not signing with it you won’t sign with it. Then you have these untrusted keys where you’ve got to assume they’ll do the worst thing when doing fee estimation. There is another dimension here actually, about availability. If a key is definitely available, this is very similar to being trusted. If you trust a key and it is available you can just do your fee estimation assuming it will be used in the most optimal way. If the key might not be available then you can’t. If it is on a hardware wallet that your end user might not have access to or something like that, then you’ve got to assume that the signature won’t be there, you’ve got to treat the worst case. If a key is definitely unavailable that is kind of cool. If a key is untrusted but you know it is definitely unavailable, because you know it is locked in a safe somewhere on another continent, then you can treat it as trusted. Unfortunately these notions of trust and availability don’t overlap entirely.
It turns out that if you try to do fee estimation the way I described, it looks like it might be intractable. We’ve got an open problem. This is frustrating. This is why rust-miniscript is not done, by the way. Every time I go to the issue tracker to close all of my remaining things I jump onto this, and then I don’t make any progress for many hours or days and then I have to go back to my real job. It is very surprising that this is open. We thought that we had a solution at the Core Dev meeting in Amsterdam last year, Pieter and I. I think Sanket and Pieter broke it the next day. It seems like we have this ability to estimate transaction weight in a really optimal way that takes into account all of the surrounding context, but then it seems to be intractable. I tried to write at a high level what you have to do and even in my high level description I was able to find bugs. Miniscript gives you the tools to get an upper bound on the worst case transaction size and that’s what you need. What I’m saying here is that we could do even better than that by giving the Miniscript tooling some more information. I don’t know how to write the algorithm to do that. This is a plea for anybody who is interested. I think you could get a Master’s thesis out of that. You won’t be able to get a PhD thesis. If anyone here is a Master’s student I think this is definitely a thesis worthy thing to do. Unless it turns out to be trivial, but I’m pretty sure it is not. Pieter and I tried for a while and Pieter has two PhDs.
Miniscript and PSBT
Last slide. I nailed the two hour time limit. Let me cover the interaction between PSBT and Miniscript. This ends the technical part of the talk. This is really just engineering, something we need to grind through; there are a couple of minor things that need to be done. PSBT is of course Andrew Chow’s brainchild. It is a transfer protocol for unsigned transactions that lets you tack things onto your inputs and outputs. If you are a wallet trying to sign a transaction in some complicated multisig thing you don’t need to think about that. You take the transaction, you sign it, you attach the signature to it and pass it on to the next guy. You don’t need to worry about putting the signature in the right place and leaving room for other people to fit theirs in, or whatever other weird stuff you might otherwise have to do. You just stick it onto the PSBT. So PSBT has a whole bunch of different roles. The most interesting role is the so called finalizer. That’s the person who takes the PSBT, takes all the signatures that are attached to it, all the hash preimages that are attached to it and so on, and actually assembles a witness. Up until now the only way to write a finalizer is to do template matching: “This is a normal CHECKMULTISIG, I know I need to look for these keys and put them in this order.” With Miniscript you can write a finalizer that can handle any kind of policy and any script that you throw at it. It will figure out whether it has enough data to actually satisfy the input and actually finalize the transaction. Given enough data it can even optimize this.
It can say “If I use these signatures instead of that signature I can come up with a smaller witness.” Your finalizer now has the ability to support pretty much anything that you throw at it. For any protocol that anyone is using today you can just write an off the shelf finalizer, maybe with some special purpose HTLC code, that will work, and the output will be more optimal than what would otherwise be done. Maybe a dumb example of this: I have a simple finalizer that doesn’t use PSBT but implements the finalizer logic in Liquid, where typically we need 11 signatures on a given transaction and typically we get 15 because all of the signers are actually online. My code will go through and find the 11 shortest ones by byte length. We save a few bytes in Liquid on every transaction. We waste a tremendous amount using CHECKMULTISIG and explicit keys and all this, but we save a few bytes that way. The reason we can do that is that I’m not writing this insane micro-optimization code just for Liquid. I’m writing it in a Miniscript context where it can be vetted and reused. It is actually worthwhile to do these sorts of things. That’s really cool. Miniscript was designed to work with PSBT in this way. There are a couple of minor extensions to PSBT that we need. There has been some mailing list activity in the last couple of months but it is not there yet.
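The Liquid trick of keeping only the cheapest quorum of signatures is easy to sketch. The byte strings below are placeholders, not real DER signatures; the point is just selecting by encoded length while preserving key order, which CHECKMULTISIG requires.

```python
def cheapest_quorum(sigs, k):
    """Keep the k shortest signatures by byte length, preserving key order."""
    keep = set(sorted(range(len(sigs)), key=lambda i: len(sigs[i]))[:k])
    # CHECKMULTISIG needs the survivors in the same order as their keys
    return [s for i, s in enumerate(sigs) if i in keep]

# Four placeholder "signatures" of 73, 71, 72 and 73 bytes; keep three.
sigs = [b"\x30" * 73, b"\x30" * 71, b"\x30" * 72, b"\x30" * 73]
assert [len(s) for s in cheapest_quorum(sigs, 3)] == [73, 71, 72]
```

With 15 signatures and an 11-of-15 quorum the same selection drops the four longest encodings, which is where the few bytes per transaction come from.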
Q - All your suggestions have gone nowhere.
A - I did a bad thing and I wrote a bunch of suggestions in an English language email to the mailing list and wrote no code and never followed up so of course nothing has happened. That is on me, I should have written code, I should follow up and do that. And I will when I find the time.
Q - On the last slide, around roles. It means that you don’t have to allocate the finalizer or the finalizer can be transferred at a later date? What’s that final point?
A - The roles are actually different participants. If you are writing code for a hardware wallet, say, the chances are you only care about the signer role, which means validating the transaction and just producing a signature. You tack that onto the PSBT and then it is somebody else’s problem to put that into the transaction. There is another role, the updater, where you add extra information. You say “This public key is mine” or “I know a BIP32 path for that.” If you’re writing host software maybe you care about being an updater. Right now to use PSBT at all you have to write finalizer code because every individual participant needs to know how to assemble a final transaction. But the goal of Miniscript is that we can write one finalizer to rule them all. There will be a standard Miniscript finalizer and then nobody else in the ecosystem is going to have to deal with that again. As long as they can fit their project into Miniscript they can just use the standard finalizer. That is great because it removes by far the most complicated part of producing a Bitcoin transaction. It also gets you better flexibility and interoperability. The different roles are actually different participants.
As I mentioned finalizing optimally is similar to fee estimation. If you actually take all the information that is potentially available then it becomes intractable even though it feels like it should be tractable. That’s a bit frustrating. We can definitely finalize in an optimal way given information about what data is actually available in the final state. You give it a PSBT with enough signatures and hash preimages and whatever and we can write very straightforward linear time code that will find the optimal non-malleable witness using that data which is a really cool thing. Although you need to be a tiny bit careful. The finalizer has access to extra signatures and stuff that in principle the finalizer could use to malleate. If you have an untrusted finalizer then that is a different trust model and you need to think about that.
Q - Not necessarily an untrusted finalizer. Anyone who has the full PSBT?
A - Yes. This is a point of minor contention between Pieter and me, the question of how you think about distrust from your peers when you’re doing a multisignature thing. Where we landed for the purposes of Miniscript is that it does not count as malleability. That’s a separate analysis that I haven’t fully fleshed out, to be honest. I don’t know the best way to think about it if you actually distrust your peers and you think they might malleate your transactions. I’m not sure in general what, if anything, you can do.
Q - You don’t give them all of the PSBT?
A - It is not just not giving all the PSBT. They can replace their own signatures.
Q - They can ransom change?
A - Yeah. There is no straightforward thing. I can find other attacks for most of the straightforward things.
This is actually the end. Thank you all for listening for almost two hours. I had a lot of fun doing this. I hope you guys got something out of it. A real quick summary. There is an issue tracker on rust-miniscript that has all of the open problems that I mentioned. It has bugs and also some janitorial stuff, in particular writing the PSBT finalizer. There are a couple of open bugs that are actually bugs that I need to fix. There is also work to do here. There is a reasonable number of straightforward things that need to be done and there’s a small number, like two, of hard things that might be thesis-level work. It would be awesome if someone did the thesis stuff because then I would definitely finish the other stuff. Thank you all for listening. It has been an hour and 45 minutes. We have 15 minutes for questions if people are still awake.
Q - You want the website up?
A - Yeah can I put the website up?
Q - What’s Pieter’s repo?
A - I don’t know that Pieter has a public repo yet because his code is all patched into Bitcoin Core.
Q - There’s a PR for that?
A - So then Pieter’s repo is the PR. I think he has a separate repo where he compiled the web JS or something for the website but I think that’s a private repo. Is it public now? Ok I should add the link. There you go, you can see the code that I’m hovering over. That one is Pieter’s website, there’s the code that produced it, there’s Pieter’s PR. There’s Sanket’s and my code. We have a Policy to Miniscript compiler. Up top here’s a policy in the high level language. I click compile there and you can see it output the Miniscript. You can see the Miniscript and the policy look the same except that the Miniscript has tags on the ANDs and ORs and it has these extra wrappers stuck in there. It also gives you a bit of information about script spending cost. You can see that I put this 9 here saying that this branch of the OR is 9 times as likely as the other one to be taken. It gave me an average expected weight for spending it. It also wrote out the script here which is cool. This is Bitcoin script and Miniscript. Those are exactly equivalent; there is a straightforward linear time translation between them. It is literally a parser that you can write with one-token lookahead. It is linear time. If you have a Miniscript you can run it through this analysis tool and it gives you some cool stuff. Then here is the Miniscript reference that I mentioned. Here are all the different Miniscript fragments and the Bitcoin script that they correspond to. Here is the type system that I mentioned: you have these four base types, B, V, K and W. This is what they mean. The five type modifiers, this is what they mean. Here is how they propagate forward. You can see all of the leaf fragments here like pubkey checks and timelocks and stuff have certain properties, the o, n, d, u here. If you have combinators like ANDs and ORs and stuff then the properties of those propagate from the properties of the children. You can convince yourself that all of these rules are correct. There are a lot of them.
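The "9 times as likely" annotation and the average expected weight figure mentioned above come from a cost model that weights each OR branch by its relative probability. A toy version of that calculation, with made-up weights:

```python
# Toy expected-weight calculation for a probability-tagged OR, as in
# or(9@left, right) where the left branch is nine times as likely.
# The branch weights below are illustrative numbers, not real script sizes.

def expected_or_weight(p_left, w_left, w_right):
    """p_left is the relative likelihood of the left branch (right = 1)."""
    total = p_left + 1
    return (p_left * w_left + 1 * w_right) / total

# left branch: a plain signature check; right branch: a heavier path
avg = expected_or_weight(9, 73.0, 106.0)
```

A compiler can use this expected weight to decide which branch to place in the cheaper position of the script, which is why the probability tag changes the compiled output.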
You can go through all of those rules if you want to not trust, and verify. You can check that these are what you expect. They are very straightforward to implement. James wrote all of this in Python for the Bitcoin Core Python test framework and Sanket looked over it.
Q - I just use this website?
A - Yeah. You can just copy this stuff right off the website. It is really nice how this turned out. You guys should’ve seen some of the earlier versions of this website.
There is some banter about the resource limitations, which matter if you are writing a compiler but otherwise don’t matter. Some discussion about security. This is about how to satisfy and dissatisfy things: the different ORs and ANDs, the different ways of being satisfied or dissatisfied. This is where the different trade-offs in terms of weights come from. There is a discussion about malleability. Here is an overview of the algorithm that I described. This is more talk about malleability. This is the non-malleable satisfaction algorithm that I mentioned. You can see that it is a little bit more technical than I said but it is actually not a lot. I used to think Pieter overcomplicated this. He didn’t. Every time I complained about stuff he had some reason that he had written it the way that he did.
Q - He is very precise.
A - He is very precise, yes.
Then here are the three properties for non-malleability and here is how they propagate. It is basically the same as the last table. There you go. This is Miniscript. All of Miniscript is on this website with the entire contents of this talk, the justification for everything and the detailed rules for doing type checking and malleability checking and for producing non-malleable optimal signatures. It is all here on this page. It is a lot of data but you can see that I can scroll through it in a couple of seconds. Compare the reference book for any programming language out there to this web page that is like three pages long. It is actually very small and straightforward. Ordinary people can get through this and convince themselves that all the design decisions make sense. This gets us to the point where we really can laugh at Ethereum for having a broken script system. Until now we couldn’t. Bitcoin script pointlessly had all of the problems of Ethereum and more. But here we are, we’re living the dream.
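The propagation idea behind these tables, for both the base types and the non-malleability properties, can be sketched in drastically simplified form: leaves carry a type, and each combinator derives its type from its children by a fixed rule. Only two rules are shown here, mirroring but not reproducing the real tables on the reference page.

```python
# Drastically simplified sketch of type propagation. A full checker would
# also track the z/o/n/d/u/e properties; here we only propagate the base
# type (B, V, K, W) through two combinators.

RULES = {
    # and_v(X, Y): X must be type V; the result takes Y's type
    "and_v": lambda x, y: y if x == "V" and y in ("B", "K", "V") else None,
    # and_b(X, Y): X must be B and Y must be W; the result is B
    "and_b": lambda x, y: "B" if (x, y) == ("B", "W") else None,
}

def type_of(expr):
    """expr is either a base-type string (a leaf) or (combinator, L, R)."""
    if isinstance(expr, str):
        return expr
    comb, left, right = expr
    lt, rt = type_of(left), type_of(right)
    if lt is None or rt is None:
        return None  # an ill-typed subtree poisons the whole expression
    return RULES[comb](lt, rt)

ok = type_of(("and_v", "V", "B"))   # well-typed, yields "B"
bad = type_of(("and_b", "V", "W"))  # ill-typed, yields None
```

The real tables have many more rules, but they all have this shape, which is why the whole checker is a straightforward recursion.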
Q - When will this get a BIP number?
A - That’s a good question. I guess it should have a BIP number. A lot of things that are not really consensus do: BIP 32 has a number, 49 does, PSBT has a number. Our thought was to wait until we solve these open problems, which we keep thinking are easier than they turn out to be. I don’t have a plan for asking for a number. My feeling is that I want to finish the Rust implementation and I want to have something better to say than “this is open” regarding optimal fee estimation. I would like to have a finalizer that somebody out there is using. Maybe Miniscript should have its own number and the finalizer should have its own number, I don’t know. I don’t have specific plans for asking for numbers. I agree that it should have a number. It feels like the kind of thing that should.
Q - It feels that there should at least be some other documentation than Pieter’s website that may or may not randomly disappear.
A - That’s fair. Right now the other documentation is the implementations. You’re right, there should be something in the canonical repo and the BIP repo is a good candidate for that.
Q - Now that you’ve gone through all this pain, are you going to come up with the new Tapscript version? I guess it is versioned Tapscript so you could come up with a new one that only has the opcodes that makes life good and maybe completely different opcodes?
A - The question is whether we are going to optimize Tapscript for Miniscript. The short answer is no. When we start to do this we wind up getting ratholed pretty quickly. There are a lot of aspects of Miniscript that are kind of specific to the design of Bitcoin script. You can see how many different fragments we have here. If we were to create a new script system that directly and efficiently implemented the Miniscript semantics we would actually want to make a lot of changes and it would result in a simpler and different Miniscript that we haven’t designed. What we landed on was just getting rid of CHECKMULTISIG because that was a particularly egregious thing, and we made no other changes because there wasn’t any clear Schelling point where we could stop.
Q - Could you get rid of negative zero?
A - We didn’t even get rid of negative zero, no. That’s an interesting one.
Q - And the infinite length zero?
A - We might have made MINIMALIF a consensus rule, I think we did. That covers the issue with giant booleans.
Q - What should be in Core and what shouldn’t be in Core? There are two PRs currently open, Pieter’s and James’ for the TestFramework. I’m assuming there is going to be more functionality in your Rust library than will get into Core. How much?
A - The question is what should be in Core and what shouldn’t be in Core. Core should have the ability to sign for Miniscripts. There is really not a lot to signing. You don’t need to understand the script to sign it. Core should have the ability I think to recognize Miniscripts it has control over.
Q - Core should be able to finalize?
A - You could go either way but I also think Core should be able to finalize as Andrew says. That means that they have a RPC or something where you give it a PSBT with all the relevant data and Core will be able to output a complete transaction. You could argue that that is a separate thing from Core’s primary job but it is a really fundamental thing to create transactions. If we are going to have the createrawtransaction, signrawtransaction API a very natural successor to that would be a general PSBT finalizer.
Q - Part of it is that there is a plan to have the Core wallet store Miniscripts which directly implies Core has to be able to finalize those and sign them.
A - Andrew is saying that there are plans for Core to support Miniscript in the descriptor wallet, to have arbitrary Miniscripts. That would require it to be able to finalize in order to sign for its own outputs. So what should not be in Core? The compiler here, that should not be in Core. That is a separate beast of a thing that doesn’t even necessarily have a canonical design. There is some interesting fee estimation and analysis tooling that I have been hinting at. That probably doesn’t belong in Core. It probably belongs in its own library or its own RPC tool or something like that because it is only relevant…
Q - We should get all of the fee estimation out of Core. Fee estimation should be its own process or its own interface.
A - Core should be able to estimate fees for its own stuff. But in Core should you be able to say “These keys are available, these keys are not. These are trusted and these are not” and so on and so forth. You can get into really complicated hypothetical scenarios that Miniscript can answer for but I don’t think it is Core’s job to answer that. I think Pieter would agree. Pieter continues to disagree that my scenarios are even sensible.
Q - When you were giving up this original plan of making Miniscript a preprocessor of script was there a certain detail that was convincing you or was it the sum of all the complications?
A - The question is when I had to give up my original dream of Miniscript and the Policy language being the same. I think it was Pieter explaining the way that he had separated the Policy language from Miniscript in his head. I didn’t have that separation in my model of things. He explained conceptually that first of all the Policy language had this information about branch probabilities that could not be expressed in script. There was just no way I could preserve that and go into Bitcoin script. I think that was what did it. Then I realized there was no way I was going to be able to unify these things. He had another point, which was that once you are in Miniscript everything is deterministic. Miniscript has a deterministic transformation to script and back, but the compilation step from the Policy language to Miniscript involves doing all sorts of optimizations. There is a lot of potential there to discover new optimizations or give more useful advice or something. That’s a part that shouldn’t be a component of any protocols. At least if you had a compiler that was part of some protocol it would need to be constrained in some way, and we didn’t want to constrain that early on. We had this clear separation: the Policy language having extra data that we couldn’t preserve, and also the Policy language being a target for a lot of design iteration on optimization kind of stuff. Miniscript alone was a pure thing, a re-encoding of Bitcoin script for which all the manipulations we wanted to do clearly had deterministic results. I could have my beautiful little Miniscript world where everything was deterministic and there was only one thing. Basically I could treat the Policy language as not existing, just some ugly thing that I need to use to get Miniscript sometimes. That difference in opinion between Pieter and me persists to this day. When we’re developing stuff he puts all his time into the compiler and I put no time into my compiler.
Eventually Sanket rewrote the entire compiler for the Rust implementation because mine didn’t even work by the time we were done changing the language. Once Pieter described this separation I realized that this is actually a sensible separation and that is something we should move forwards with and my dream of a simple Miniscript could still be preserved.
Q - Would the Policy language be supported in Core?
A - The question is will the Policy language be supported in Core. Andrew says probably not. I think that is probably true. That would be up to Pieter. If Pieter were here he could give you some answer. I think it comes down to Pieter’s opinion. I guess Pieter needs at least one other opinion to get stuff merged but my guess would be no. The compiler is a pretty complicated piece of software that’s doing this one-shot task of figuring out an optimal script, and it doesn’t really integrate with descriptors in an obvious and natural way. It also has some more design iteration that we want to do. For example, if you have a threshold of 3-of-5 things, maybe you can get a more optimal script by reordering the components of the threshold. That is something that is not implemented and is probably intractable in general, so we will have to devise some heuristics for it. There is design iteration on stuff like that that still needs to be done.
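The reordering question above can be made concrete with a brute-force toy: try every ordering of the threshold's components under some cost model and keep the cheapest. Brute force is factorial in the number of components, which is exactly why the general problem needs heuristics. The cost model here is entirely hypothetical.

```python
# Toy brute-force search over threshold component orderings.
# The position-dependent cost model is a made-up assumption: each later
# position carries a 10% overhead multiplier (think extra wrapper bytes),
# so expensive components prefer to come first.
from itertools import permutations

def script_cost(ordering, elem_cost):
    return sum(elem_cost[e] * (1 + 0.1 * i)
               for i, e in enumerate(ordering))

elements = ["pk_A", "pk_B", "older_144"]
cost = {"pk_A": 34, "pk_B": 34, "older_144": 4}  # illustrative sizes

best = min(permutations(elements), key=lambda o: script_cost(o, cost))
```

Under this model the cheap timelock component ends up last; with a realistic cost model and 5+ components, enumerating all orderings quickly becomes infeasible, hence the heuristics.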
Q - It may also get stuck in review from everyone not understanding.
A - Andrew says it might get stuck in review, which I certainly believe. I haven’t looked at Pieter’s compiler but on the Rust side the compiler is literally half the entire library. It is much more complicated. Everything else is really straightforward, all the same algorithm: iterate over everything, recurse in these specific ways, and you can review it in a modular way. The compiler is this incredible thing, propagating information upward and downward, going through different things, caching different values and ordering them in certain weird ways and reasoning about malleability at the same time. It is quite a beast to review. I can’t see it getting through the Core review process just because it is such a complicated thing.
Q - A really high level question. Craig Wright wants to apparently secure about 6000 patents on blockchain technology which will obviously affect Bitcoin and the Ethereum communities and every other blockchain. When do you think he’s going to jail?
A - Craig Wright as I understand it is a UK citizen. UK law is different to US law and different to Canadian law in ways that benefit him. For example I understand in the UK that the truth is not a defense to a defamation suit. Ignoring all the perjury and so forth, he came to some US court and he got in a whole bunch of trouble for contempt of court, for throwing papers around and crying and so forth. But independent of that and independent of the patent trolling which unfortunately I don’t think violates any laws and he can just do, the other legal thing he is in the middle of are these defamation suits and these counter defamation suits. I’m not a lawyer of course but my understanding is that because he is doing most of this crap in the UK and UK law is a bit more favorable to this kind of trolling than it is in other jurisdictions. In the US he probably would have had no ability to do a lot of the stuff he is doing.
Q - Some cases have already been dismissed here in the UK. They don’t have jurisdiction over those kind of things.
A - I’m hearing that some cases have been dismissed. I know your question was a joke but it is a good question. I don’t know. I wish he would go to jail.
Q - If we have OP_CAT to force nonce reuse to do this kind of protocol could you catch this with Miniscript or a Miniscript extension?
A - The question is what if we added OP_CAT to script. There are a bunch of benefits like forcing nonce reuse and doing other things. The short answer is yes, we would extend Miniscript to support the new capabilities of OP_CAT but what specifically I don’t know. We haven’t really thought through how we would want to do that. The way it would look is that we would add more fragments to this website basically. It would have to be things that were composable. I don’t have specific plans but definitely I’m sure there would be some things you can do with OP_CAT that we would just add to Miniscript.
Q - …. preimages and keys. These are all things that you can do with scriptless scripts. How long do you see Miniscript being used up until we only use scriptless scripts?
A - With scriptless scripts you cannot do timelocks. Scriptless scripts are interactive and require online keys. I think Miniscript will forever be used for cold storage. My feeling is that we’re going to tend towards a world where cold storage keys use Miniscript as a means of having complicated redemption and recovery policies, and the rest of the world doing cool fast paced interactive things will move to scriptless scripts. But I also think there will be a long period of research before people are comfortable deploying scriptless scripts in practice because they are complicated interactive crypto protocols.
Q - We may have hardware wallet support with the same assumptions as a cold wallet, not a deep cold wallet, just a cold wallet, we may have this kind of stuff for Lightning.
A - Support for Lightning?
Q - Support for Lightning and scriptless scripts.
A - Yeah when we get to that point I imagine people will just transition to scriptless scripts for hot stuff. I don’t know how quickly. It seems to me that we’ve got a little while, probably another year or two, maybe five. In all seriousness these are complicated interactive protocols. But in the far future I think Miniscript is for cold storage and scriptless scripts is for hot stuff.
Q - What would Simplicity be for, this new language that has been developed?
A - That’s not a question I can answer in 30 seconds. Miniscript can only do timelocks, hashlocks and signature checks. Simplicity can do anything with the same kind of assurances. Because Simplicity does significantly more stuff the kind of analysis you can do is much more powerful, because it is defined in the Coq theorem proving system, but also much more difficult. If you have a policy that you can encode in Miniscript you probably want to use Miniscript. In Simplicity you can implement all of the Miniscript combinators so you can just compile Miniscript policies directly to Simplicity, which is kind of a nice thing. The short answer is Simplicity lets you do all sorts of stuff that you cannot do with Miniscript. It will let you do covenants, it will let you interact with confidential transactions by opening up Pedersen commitments and stuff like this. It will let you create algorithmic limit orders and put those on a blockchain and have coins that can only be spent if they are being transferred into a different asset under certain terms. You can do vaults where you have coins that can only be moved to a staging area, from where they can only be moved back or have to sit for a day. A lot of what I just said Miniscript will never be able to do because it doesn’t fit into this model of being a tree of spending conditions. Simplicity is just infinitely powerful. You can verify the execution of any Turing complete program with Simplicity.