Reviews code, fixes signature verification, reviews tutorial 3rd part, adds an introductory section on PVM debugging #3
Conversation
…erations into a list of signed operations
- renamed `token-ledger-v1` to `token-ledger-service-v1`, to maintain the parallel with v2
- in the Tutorial, updated an example command since `cargo run` no longer requires explicit `-i` or `-o` options. However, there are some more changes in this command, and the documentation will all have to be reviewed later.
- added a paragraph describing the purpose of a new binary to convert user-friendly Json operations into fully-specified ones
- some changes to the justfile to enable it to be invoked from any location, and not only from that of the `justfile`. The commands build-service, create-service, query-service and submit-file, and the functions to get and save the last service id, have been successfully tested
- added a command to connect to an RPC node, for the moment still unhandled. Ensures that we must have either this or an output specified.
- extracts some functions in main.rs to make the code more readable
… WorkPackage directly to it. WiP: still needs some refactoring to reduce the size of long functions.
Also, the builder has new features:
- ability to receive a user-friendly JSON file without signatures and valid AccountIds (i.e. public keys)
- ability to connect to an RPC node and submit a Work Package directly without having to encode it first
Creates a Work Package builder able to submit encoded payloads directly to an RPC node. Also:
- rewrites and expands the 3rd part of the tutorial
- improves the experience with the justfile by allowing commands to be called from different directories
- extensive clippy and formatting changes
Moves the basic structs and functions to token-ledger-common.
cheme
left a comment
A few notes on a quick reading of the changes of part 3 🙏
> ```
> 37063: trap
> ```
>
> Debugging the PVM code is complex and way beyond the scope of this tutorial. If you need to debug at this level, be prepared to invest some serious time in understanding the PVM and the various outputs.
```suggestion
Debugging the PVM code is complex and way beyond the scope of this tutorial. If you need to debug at this level, be prepared to invest some time in understanding the PVM and the various outputs.
```
> You can also try to compare this trace to the service's disassembled PVM code.
> To do that, first locate the `.polkavm` file (TODO: currently, this seems to be exported to a temporary folder in order to create the .jam file, which is then copied to the current working directory. If there is a way to preserve the `.polkavm` file, I still haven't found it out. For now, I modified the builder logs to also copy the .polkavm together with the .jam file) and disassemble it with
Note that polkavm currently just prepends a few encoded metadata bytes. I know there is a command to add them (seen in polkaport issues), but likely not one to remove them; not sure.
To be fair, I did not try very hard to reason about the trace and the disassembly. But my reading of the logs is that whatever is in square brackets is a sequential identifier for the program's instructions, while the number after it is the memory address of this instruction in the bytecode, that is, the address pointed to by the Program Counter.
Which metadata do you mean? Also, I would not like to remove these details, as they're useful for understanding the code.
Just checked: it is

```shell
jam-blob set-meta \
  --name Doom \
  --version 0.1 \
  --license GPLv2 \
  --author 'TODO' \
  $@.tmp
```
to go from a polkavm to a corevm, but yes, it is not from ELF to polkavm, my mistake 🙈 Clearly, going back from polkavm to a RISC-V ELF is not something that should be done.
> ```
> node2: 2026-04-09 16:16:33 tokio-runtime-worker DEBUG polkavm::api Location: #112765: u64 [sp + 0x28] = a0
> node2: 2026-04-09 16:16:33 tokio-runtime-worker DEBUG polkavm::api Trapped when trying to access address: 0xfefdddc0-0xfefdddc8
> node2: 2026-04-09 16:16:33 tokio-runtime-worker DEBUG polkavm::api Current stack range: 0xfefde000-0xfefe0000
> ```
Yes, I remember now having address errors (from some address-protection code; increasing the stack fixes them). It was when running some Zig code, with very little heap usage (no heap usage at all, actually). I really wondered at the time how it is defined, but did not investigate more (another issue spawned next).
> We can't use Json directly as payload, so we encode it first in binary. The encoded payload run contains both the input operations and the state witness necessary for refinement to verify the correctness of the state transition.
>
> 1. Create a list of operations without cryptographic material (no signatures nor account IDs): <unsigned_ops.json>
> 2. From `token-ledger-builder-v2`, ionvert this list to a Json file with full information in Json with
```suggestion
2. From `token-ledger-builder-v2`, convert this list to a Json file with full information in Json with
```
> At this point we have described the state model (full external state + partial witness + on-chain commitment). The next sections discuss a different axis: how this same payload is delivered to refinement (directly, via preimage, or via segments).
>
> Full state serializing is done after all state transition (usually done by the command line producing the jam workitme), with no recovery of errors.
```suggestion
Full state serializing is done after any state transition (usually done by the command line producing the jam workitem), with no recovery of errors.
```
> - the accessed key/value pair (when present), so refinement can reconstruct the touched leaves;
> - the sibling hashes along the leaf-to-root path in the balances Merkle tree.
>
> The witness format is therefore access-based (all keys touched by transition logic), not strictly "all keys that ended up changed". In practice this can include values that were only read for validation (for example checking balances before a transfer).
```suggestion
The witness format is therefore access-based (all keys touched by transition logic), not strictly "all keys that ended up changed". In practice this must include values that were read for validation (for example checking balances before a transfer).
```
Only deleted or overwritten values do not have to be included (their value hash and all the tree-node sibling hashes are enough info), but I'm really not sure if this is properly implemented.
Thanks. I must confess I still do not fully understand the Merkle and the state code, which I am almost treating like a black box at the moment. So there are some parts in this section I've kept more or less as they were, trying to smooth the English without changing the meaning.
Np; if you go into trying to understand the code, please do not hesitate to give me feedback, as I wanted this code to be easy to understand from reading (it likely is not, but I'm not sure which parts would be best to simplify).
> - designing a client external state: accounts are not stored on jam state, only the state merkle root.
> - discuss cost of such design.
> - have a minimal external client state code example for educational purpose.
>
> Note that refinement, by itself, can ascertain the validity of operations only in relation to the partial state communicated with them. Where we have several clients, it may happen that one of them submits a partial state that is not in sync with what the others have submitted to the service, and so a state transition may be considered valid in refinement while corresponding to a global state that is no longer up to date. For this reason, accumulation ensures that the state only changes if its current value corresponds to the initial state confirmed by the Witness, that is, if the batch of operations was applied to the service's current state.