Implement transaction prologue script #14
Regarding global inputs, based on the block header format described in #17, I'm now thinking that global inputs could be just a hash of the last known block. Then, as part of the prologue, this can be "unhashed" into block number, chain root, state root etc. And then the state root can be "unhashed" into account DB root, note DB root, nullifier DB root etc.
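To illustrate the "unhashing" idea, here is a hypothetical Python sketch. SHA-256 stands in for the VM's native hash, and the field names and structure are assumptions for illustration only:

```python
import hashlib

def hash_words(*words: bytes) -> bytes:
    # Stand-in for the VM's native hash function; SHA-256 here.
    h = hashlib.sha256()
    for w in words:
        h.update(w)
    return h.digest()

# Hypothetical leaf commitments: the state root commits to the three DB roots.
account_db_root = hash_words(b"account-db")
note_db_root = hash_words(b"note-db")
nullifier_db_root = hash_words(b"nullifier-db")
state_root = hash_words(account_db_root, note_db_root, nullifier_db_root)

# The block hash commits to block number, chain root, and state root.
block_num = (42).to_bytes(8, "little")
chain_root = hash_words(b"chain")
block_hash = hash_words(block_num, chain_root, state_root)

# "Unhashing" = the advice provider supplies the preimage; the kernel re-hashes
# it and checks the result against the single hash provided on the stack.
def unhash(commitment: bytes, preimage: list) -> list:
    assert hash_words(*preimage) == commitment, "preimage does not match"
    return preimage

fields = unhash(block_hash, [block_num, chain_root, state_root])
```

The same `unhash` step would then be applied recursively: `fields[2]` (the state root) can be unhashed into the three DB roots.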
tasks:
notes:
These can be summarized into these procedures:
Great summary! A few comments on the above.
The stack will contain only the hash of the last known block. So, we still need to use the advice provider to "unhash" it into the block header. We'll also need to "unhash" the commitment to the note DB to check whether consumed notes were in it - but this can be done later when we have MMRs implemented.
We actually don't have to have SMT finished for this. Copying of account vault is just copying of the root of the SMT (i.e., copy a word from one address to another).
Should be "add each asset to the tx vault".
This requires MMR to be implemented, not SMT. Also, this point seems to be repeated twice.
@bobbinth why are the global inputs required for the transaction kernel? I would have thought that validation of inputs is performed by the os kernel during block construction? Do you intend to use it for a different purpose? Is there potential for global information to become stale?
I was going to propose that we increase the note limit above 1024, however after some simple analytics I've concluded that it's probably not necessary. The max number of trades executed on uniswap in a single Ethereum block is 420 - see dune query
The main reason is so that we can prove that notes consumed in a transaction were present in the Note DB (the hash of the last known block will contain a commitment to the Note DB). Another reason is to allow account/note code to access such variables as block height. Access to such variables is very useful for smart contracts. It is important to note that we can't guarantee that the block height at which a transaction is executed will be the same as the block height at which the transaction is included into a block, but we can guarantee that the transaction's block height will always be less than the height of the block which includes it - and I think this is already super useful. Another reason is that, in theory, this will allow accessing the state of any account. As you've said, we can't guarantee that the state is not stale - but in some cases this may be OK (e.g., if an account cannot be updated more frequently than once an hour, any transaction which reads the account state shortly after it is updated can be sure that the state is not stale).
There have been some changes to the specification for the transaction prologue. Below I specify only those components that I believe have changed. If a component is not referenced, the reader can assume the OP above is still relevant.
Prologue Inputs
Account data section
AccountId has been changed to a single element and as such this must be considered here.
Consumed notes data section
This comment will contain a running thread of questions that arise during the implementation process:
Yes, we could do that - but I wanted to make this as "stateless" as possible. So, for example, if we pass the account commitment via the stack, a wallet device would not need to maintain any information about the account DB.
I was trying to make the nonce more easily accessible (we have specialized instructions for reading/writing the first element of a word). But I think the overhead is probably negligible and it is better to keep the layout the same everywhere. One correction: account nonce is currently a single field element and I'm not sure we'll end up changing that.
OK, that makes sense. Having said that, we have a dependency on inclusion proofs for notes, so in any case the client will have to do lookups. Having them do the account DB lookups as well may be worth considering, as it reduces the overhead of the batch / os kernel. What do you think?
Thanks, I have updated my comment.
I think this is somewhat different. Inclusion proofs for notes could be anchored on some old block because we just need to prove that the note was created at some point. Inclusion proof for the account, would need to be done against the last block to prove that our starting point is the latest account state. Or I guess we could do it against the old block, but then when we aggregate the proofs into a block, we'd need to prove that the state of the account didn't change between the block used in the transaction and the last block. This is not difficult, but I think it adds more complexity than providing the expected initial state via the stack.
Thank you! There is still a tiny typo: the address of the nonce should be
Ah yes good point! This makes sense.
Whoops, I'll fix that now.
When ingesting assets from the advice provider for consumed notes, it would be useful if the asset data is padded to be a multiple of the rate width. This would involve padding with an additional word when we are ingesting an odd number of assets.
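The padding could be sketched as follows (hypothetical Python; it assumes a hasher rate of 2 words and represents a word as a 4-element tuple - both are assumptions of this sketch, not the actual kernel):

```python
RATE_WORDS = 2  # assumed hasher rate: 2 words absorbed per permutation

ZERO_WORD = (0, 0, 0, 0)

def pad_assets(assets: list) -> list:
    """Pad the asset list with zero-words so its length is a multiple of the rate."""
    remainder = len(assets) % RATE_WORDS
    if remainder:
        assets = assets + [ZERO_WORD] * (RATE_WORDS - remainder)
    return assets

# An odd number of assets gets one extra padding word.
padded = pad_assets([(1, 0, 0, 100), (2, 0, 0, 50), (3, 0, 0, 7)])
assert len(padded) == 4
```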
Agreed. Let's create an issue for this.
Observation: Each note can contain a maximum of 1018 assets.
I'm actually thinking we should probably reduce this to something like 256 (see #10). |
Why do we have these limits, though? We need a counter for all these values anyway - number of consumed/produced notes, number of assets in the note's vault, and so on. These counters in the VM have to be either a Felt or a u32, so why not make that the limit instead? And we could have an even tinier representation outside of the VM if we use variable-length encoding for counters and lay the data out sequentially without padding.
My motivation for smaller rather than larger limits is that it reduces the amount of edge cases (and potential attack vectors) to worry about. Like we won't need to think what would happen if someone creates a note with 1M assets (i.e., do we need to have a separate code path to handle it or do we have a single code path which can handle notes of arbitrary size?).
AFAIU we are setting the maximum, not the exact size, so we need code to handle arbitrary sizes anyway, no?
Different sizes, yes - but not arbitrary. If we can assume that a transaction will never be bigger than, say, 100KB, we can write code in a certain way. But if a transaction could potentially be 100MB, then the code would need to be written in a very different way.
Transaction prologue is a program which is executed at the beginning of a transaction (before note scripts and tx script are executed). The prologue needs to accomplish the following tasks:
Laying out inputs in memory
Before a transaction is executed, the stack is initialized with all inputs required to execute a transaction. I'm thinking these inputs could be arranged like so (from the top of the stack):
a. Last block number (1 element) - a unique sequential number of the last known block.
b. Note DB commitment (4 elements) - commitment to the note database from the last known block.
The shape of global inputs still requires more thought - so, I'll skip it for now - but how the rest works is fairly clear. Specifically, we need to read the data for account and notes from the advice provider, write it to memory, hash it, and verify that the resulting hash matches the commitments provided via the stack.
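The general pattern - read from the advice provider, write to memory, hash the region, compare against the commitment from the stack - might look like this Python sketch (names and the SHA-256 stand-in hash are assumptions; the real kernel would use the VM's native hash and memory instructions):

```python
import hashlib

def hash_region(memory: dict, start: int, num_words: int) -> bytes:
    """Hash num_words consecutive memory words starting at start."""
    h = hashlib.sha256()  # stand-in for the VM's native hash
    for addr in range(start, start + num_words):
        h.update(memory.get(addr, b"\x00" * 32))
    return h.digest()

def load_and_verify(memory, advice_words, start_addr, expected_commitment):
    # 1. Read data from the advice provider and write it to memory.
    for i, word in enumerate(advice_words):
        memory[start_addr + i] = word
    # 2. Hash the region and verify it matches the commitment from the stack.
    if hash_region(memory, start_addr, len(advice_words)) != expected_commitment:
        raise ValueError("advice data does not match commitment")

memory = {}
advice = [b"\x01" * 32, b"\x02" * 32]
commitment = hashlib.sha256(advice[0] + advice[1]).digest()
load_and_verify(memory, advice, 100, commitment)
```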
Overall, the layout of root context's memory could look as follows:
Bookkeeping section
The bookkeeping section is needed to keep track of variables which are used internally by the transaction kernel. This section could look as follows:
tx_vault_root
num_executed_notes
num_created_notes
There will probably be other variables which we need to keep track of, but I'll leave them for the future.
Global inputs section
As mentioned above, I'm skipping this for now.
Account data section
This section will contain account details. Assuming $a$ is the memory offset of the account data section, these can be laid out as follows:
acct_hash
acct_id
acct_code_root
acct_store_root
acct_vault_root
acct_nonce
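With $a$ as the section offset, the fields above occupy consecutive word addresses, and the kernel can recompute acct_hash from the remaining fields. A Python sketch (the offsets and the SHA-256 stand-in hash are assumptions of this sketch):

```python
import hashlib

# Assumed word addresses within the account data section (relative to offset a).
ACCT_HASH_OFFSET = 0
ACCT_ID_OFFSET = 1
ACCT_CODE_ROOT_OFFSET = 2
ACCT_STORE_ROOT_OFFSET = 3
ACCT_VAULT_ROOT_OFFSET = 4
ACCT_NONCE_OFFSET = 5

def compute_acct_hash(memory: dict, a: int) -> bytes:
    """Recompute the account hash from the fields laid out in memory."""
    h = hashlib.sha256()  # stand-in for the VM's native hash
    for off in (ACCT_ID_OFFSET, ACCT_CODE_ROOT_OFFSET, ACCT_STORE_ROOT_OFFSET,
                ACCT_VAULT_ROOT_OFFSET, ACCT_NONCE_OFFSET):
        h.update(memory[a + off])
    return h.digest()

def verify_account_section(memory: dict, a: int) -> None:
    """Check that the stored acct_hash matches the rest of the section."""
    if compute_acct_hash(memory, a) != memory[a + ACCT_HASH_OFFSET]:
        raise ValueError("account data does not match acct_hash")
```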
Consumed notes data section
This section will contain details of all notes to be consumed. The layout of this section could look as follows:
Assuming $c$ is the memory offset of the consumed notes section, the meaning of the above is as follows:
num_notes
nullifiers
notes
Here, the nullifier for note0 is at memory address $c + 1$, the nullifier for note1 is at memory address $c + 2$ etc. The assumption is that a single transaction can consume at most $1023$ notes. The choice of this number is somewhat arbitrary.
At address $c + 1024$, the actual note detail section starts. In this section we lay out all details of individual notes one after another in $1024$ address intervals. That is, details of note0 start at address $c + 1024$, details of note1 start at address $c + 2048$ etc.
Assuming $n$ is the memory offset of the $n$-th note, the layout of note details could look as follows:
note_hash
serial_num
script_hash
input_hash
vault_hash
num_assets
assets
Here, each asset occupies 1 word, and thus the first asset in the note will be at address $n + 6$, the second asset will be at address $n + 7$ etc.
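The addressing described above can be captured in a few helper functions. This is a Python sketch of the arithmetic only; the constants mirror the layout described in this thread:

```python
MAX_NOTES = 1023           # a transaction can consume at most 1023 notes
NOTE_INTERVAL = 1024       # each note's details occupy a 1024-address interval
FIRST_ASSET_OFFSET = 6     # note_hash..num_assets occupy offsets 0..5

def nullifier_addr(c: int, i: int) -> int:
    """Address of the nullifier of the i-th consumed note (c = section offset)."""
    return c + 1 + i

def note_addr(c: int, i: int) -> int:
    """Base address of the i-th note's detail block."""
    return c + NOTE_INTERVAL * (i + 1)

def asset_addr(c: int, i: int, j: int) -> int:
    """Address of the j-th asset of the i-th note."""
    return note_addr(c, i) + FIRST_ASSET_OFFSET + j

# note0's details start at c + 1024; its first asset sits 6 words further in.
assert note_addr(0, 0) == 1024
assert asset_addr(0, 0, 0) == 1030
```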
We do not "unhash" inputs because they are needed only when we start executing a given note - so, we can unhash them then.
Created notes data section
This section will contain data of notes created during execution of a transaction. It is not affected by transaction prologue - so, I'll skip it for now.
Building
tx vault
To build a unified transaction vault we can do the following:
To implement this, we need to have a compact Sparse Merkle tree implemented in Miden assembly.
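Conceptually, building the unified vault means iterating over the account's assets and each consumed note's assets and accumulating them in one place. In this Python sketch, a dictionary keyed by asset id stands in for the compact SMT, and fungible assets are assumed to be (asset_id, amount) pairs - both are simplifications for illustration:

```python
def build_tx_vault(account_assets: list, consumed_notes: list) -> dict:
    """Accumulate all input assets into a single tx vault.

    A dict keyed by asset id stands in for the compact SMT; in the real
    kernel each addition would be an SMT update against tx_vault_root.
    """
    vault = {}
    for asset_id, amount in account_assets:
        vault[asset_id] = vault.get(asset_id, 0) + amount
    for note_assets in consumed_notes:
        for asset_id, amount in note_assets:
            vault[asset_id] = vault.get(asset_id, 0) + amount
    return vault

# Account holds 100 of asset 1; one consumed note adds 25 of asset 1 and 7 of asset 2.
vault = build_tx_vault([(1, 100)], [[(1, 25), (2, 7)]])
assert vault == {1: 125, 2: 7}
```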
Verify that notes are in Notes DB
One of the inputs into transaction kernel will be a commitment to notes DB. To verify that notes are in the notes DB we'd need to do the following:
To do this, we need to have Merkle Mountain Range implemented in Miden assembly, and also better define the shape of global inputs.
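The inclusion check itself is a standard Merkle path verification. A Python sketch with SHA-256 as a stand-in hash (the real kernel would verify a path within a Merkle Mountain Range against the note DB commitment):

```python
import hashlib

def hash_pair(left: bytes, right: bytes) -> bytes:
    # Stand-in for the VM's native 2-to-1 hash.
    return hashlib.sha256(left + right).digest()

def verify_inclusion(leaf: bytes, index: int, path: list, root: bytes) -> bool:
    """Verify that leaf sits at the given index under root, using sibling path."""
    node = leaf
    for sibling in path:
        if index % 2 == 0:
            node = hash_pair(node, sibling)
        else:
            node = hash_pair(sibling, node)
        index //= 2
    return node == root

# Tiny 2-leaf tree example.
a = hashlib.sha256(b"note-a").digest()
b = hashlib.sha256(b"note-b").digest()
root = hash_pair(a, b)
assert verify_inclusion(a, 0, [b], root)
assert verify_inclusion(b, 1, [a], root)
```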