SIP-015: Stacks Upgrade of Proof-of-Transfer and Clarity #95
Conversation
Title: Stacks Upgrade of Proof-of-Transfer and Clarity
Authors:
The authors list can absolutely be expanded. You don't have to be a developer to contribute to a SIP's activation, especially one as broad in scope as this one! I know I've missed people who deserve to be here, so please speak up and get credit where credit is due :)
For example, @Hero-Gamer should definitely be on this list, but I need an email address to put down :)
came here because of @Hero-Gamer so +1 to this suggestion
@niclaz do I know you?? What name are you on Discord or Twitter lol.. thanks for your vouch! Lots of great and interesting things happen here, and there's a lot of knowledge on GitHub - def a great place to explore Stacks!
Not sure if I should be on the list, but if you guys feel it's appropriate, my email is: herogamerthesht572@gmail.com
Through Twitter @Hero-Gamer, you posted the link to this SIP and it inspired me to read it and comment - I'm @niclaztweets on that platform
I'd love to see Jason @whoabuddy on; I made my comment in the other tab, reasons stated there: https://github.com/stacksgov/sips/pull/95/files#r1016619773
Yup, added!
contracts. However, it will no longer be possible to call public functions in
`pox`. All future Stacking operations must be done through `pox-2`.

# Activation
Tagging @Hero-Gamer, @jennymith, and @radicleart to make sure this section accurately describes what we've discussed in the past.
Pretty important dates - might be worth having them at the very top as a 'proposed activation window'.
Would also call for a table here - with a column for each actor in the process (Stacker, non-Stacker, miner) and rows for the actions they need to take and the dates they need to be completed by.
This may be best placed in the 'Activation Status' section that follows this - I leave it to your best judgement.
Good idea! Will copy it to the Abstract.
I was going to create a flowchart instead of a table. Just haven't gotten around to it yet.
Tagging @obycode as a reviewer (it's not letting me do it in the "reviewers" section for some reason).
non-Stacker vote criteria, and 900 of his STX will have counted towards "no" for
the Stacker vote criteria.

## Activation Status
linking my comment above
Added comments to improve legibility and non-technical user understanding of the SIP
some typos and some recommendations
Without changes
- to how delegators can increase their amount while their STX are locked, and
- to delegation via burn-chain txs,
this SIP is not acceptable for me.
A nice-to-have would be a way for Stacking delegators to indicate how to receive their rewards, similar to solo Stackers - even more so if the delegators use burn-chain txs for delegation.
the given stacker (and remains authorized until the end of the
locked-for period), that the increased amount remains less than the
delegation's `amount-ustx` field, that the user is currently locked,
and that the user has enough unlocked funds to cover the increase.
We need to change the behaviour of `delegate-stx` as well so that users can increase their allowance for the operator. Currently, users can only change their allowance when their STX amount is unlocked. Without a change to `delegate-stx`, this new method is useless.
Ideally, the `delegate-stack-increase` function for delegators would allow specifying a new reward address as well.
I will look into how feasible this is and get back to you early next week.
Any updates here? @jcnelson
Actually, this is already supported. Looking at the code for `pox-2`, the user can already call `revoke-delegate-stx` and then `delegate-stx` to increase their lockup allowance. After this, the pool operator can call `delegate-stack-increase` with the new allowance. I forgot to add this to the SIP -- `delegate-stx` has already been modified in `pox-2` so that the user can call it while their STX are locked.
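The sequence above can be sketched as contract calls. This is an illustrative sketch only: the function names are the `pox-2` names from this SIP, but the argument lists are abbreviated and the principals/addresses shown are placeholders, so check the `pox-2` contract source for the exact signatures.

```clarity
;; Illustrative sketch only -- addresses are placeholders, not real accounts.

;; 1. While locked, the user revokes the old delegation...
(contract-call? 'SP000000000000000000002Q6VF78.pox-2 revoke-delegate-stx)

;; 2. ...then re-delegates with a larger allowance (delegate-stx is callable
;;    while the user's STX are locked, per the comment above):
(contract-call? 'SP000000000000000000002Q6VF78.pox-2 delegate-stx
  u200000000000   ;; new, larger allowance in uSTX
  'SP...POOL      ;; pool operator (placeholder principal)
  none            ;; no expiry burn height
  none)           ;; operator picks the PoX reward address

;; 3. The pool operator locks the additional STX up to the new allowance:
(contract-call? 'SP000000000000000000002Q6VF78.pox-2 delegate-stack-increase
  'SP...USER                                                ;; delegating stacker (placeholder)
  { version: 0x00,
    hashbytes: 0x0000000000000000000000000000000000000000 } ;; pool PoX reward address (placeholder)
  u100000000000)                                            ;; uSTX to add to the lock
```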
Note that, just as with the existing `delegate-stack-stx` function,
this method locks *but does not commit* the user's STX. The delegation
operator must invoke `stack-aggregation-commit` to set a reward address
Is the removal of the minimum for `stack-aggregation-commit` a breaking change that needs to be listed in this SIP?
There is still a minimum Stacking threshold, but once that threshold has been met, the user can increase the amount of STX locked by as little as 1 uSTX with `stack-increase` or `delegate-stack-increase`.
My comment was about `min_increment_ustx`, which is enforced on `stack-aggregation-commit`.
`stack-aggregation-commit` still uses `can-stack-stx` internally, which verifies that at least `(get-stacking-minimum)` uSTX are stacked by the pool operator for the given `reward-cycle`.
This section is only saying that `delegate-stack-extend` is added so the user can commit their STX to later reward cycles while their STX are locked. The number of STX they have locked does not change. The pool operator must call `stack-aggregation-commit` to commit the extension in the new reward cycles in which the STX are now committed.
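As a hedged sketch of that two-step flow (function names from this SIP; argument lists are abbreviated and the principals/addresses are placeholders, so check the `pox-2` source):

```clarity
;; Illustrative only -- addresses are placeholders.

;; The operator extends a member's existing lock by 6 more reward cycles;
;; no new STX are locked by this call:
(contract-call? 'SP000000000000000000002Q6VF78.pox-2 delegate-stack-extend
  'SP...USER                                                ;; stacker (placeholder)
  { version: 0x00,
    hashbytes: 0x0000000000000000000000000000000000000000 } ;; pool reward address (placeholder)
  u6)                                                       ;; extend-count

;; The operator then commits the pool's total for each newly covered cycle:
(contract-call? 'SP000000000000000000002Q6VF78.pox-2 stack-aggregation-commit
  { version: 0x00,
    hashbytes: 0x0000000000000000000000000000000000000000 }
  u51)                                                      ;; reward cycle to commit
```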
I had hoped that pool operators could add more pool members after a first call to `stack-aggregation-commit`.
I understand your comment that pool members can't increase by 1 uSTX - even though some of their STX are already above the threshold as part of the pool - if the sum of increases by all pool members is below the threshold.
I had hoped that pool operators could add more pool members after a first call to `stack-aggregation-commit`.
Ah, I see. Let me see if we can address this in the code. I think that once the given PoX address has cleared the minimum Stacking threshold, it should be safe to increment the stacked amount by 1 uSTX.
Tracking this in an issue: stacks-network/stacks-core#3360
PR opened
The `slice` function attempts to return a sub-sequence of `sequence_A` that starts
at `left-position` (inclusive), and ends at `right-position`
(non-inclusive).
For parameters to `slice` there is an alternative to indices that is a better fit for Clarity: make the first number the offset (how many items to skip/drop) and the second number how many items to take/keep for the result sequence:

```clarity
(slice "blockstack" u5 u5)     ;; Returns (some "stack")
(slice (list 1 2 3 4 5) u3 u1) ;; Returns (some (4))
```

Unlike with indices, if the second number is a constant, the max length of the result sequence can be trivially determined even when the first numerical argument is a variable. This is a significant benefit for cost estimation in Clarity.
The SIP should consider this parameterization as an alternative to indices.
The cost estimation would be the same either way. Either approach requires the implementation to do some bounds-checking to determine whether or not to evaluate to `none` or `some`, and in the latter case, the implementation must calculate the range of elements in the subsequence to consider.
The SIP should consider this parameterization as an alternative to indices.
I think that regardless of which parameterization is used, the user could easily create a private function that implements the other parameterization if so desired.
The cost estimation would be the same either way.
No. Using `offset` and `length` as parameters makes it trivial to determine the max range of the result sequence:

```clarity
(slice "abcdefgh" i u3) ;; max length of the return sequence can be estimated to be 3
```

Compare to the equivalent call using indices, where it is not trivial to determine the length of the return sequence:

```clarity
(slice "abcdefgh" i (+ i u3)) ;; max length of the return sequence can be estimated to be 8
```

Thus, `offset` and `length` are a better parameter choice.
In both interfaces, the Clarity VM does not know in advance what the length of the resulting slice will be, because both arguments may be expressions whose evaluated values will not be known until runtime. The sequence that `slice` evaluates to has the same max-length type as `sequence_A` in both cases; and in both cases, the caller will need to use `as-max-len?` to assert a smaller length if that is desired.
the Clarity VM does not know in advance what the length of the resulting slice will be
That's a failure in the implementation. If the `slice` function takes `offset` and `length` as parameters instead of indices, the Clarity VM should be able to determine in advance that the result of

```clarity
(slice "abcdefgh" i u3)
```

is at most 3 characters.
Consider the case where the length parameter is the last 16 bits of the last block's VRF seed.
The premise is that with offset and length as arguments, if the second number (the length) is a constant, the max length of the result sequence can be trivially determined also when the first numerical argument (the offset) is a variable.
A version of `slice` that must take a literal to represent the range of data to be extracted is a non-starter for me. What good is a `slice` function that can only take literals, when more often than not, the reason for using `slice` is to decode something? How would `slice` be useful in this context when fields to be decoded can be variable-length?
A version of `slice` that must take a literal
This is not suggested.
The premise is that with offset and length as arguments, if the second number (the length) is a constant
A version of slice that must take a literal
This is not suggested.
Either the argument for `length` is permitted to be derived from information that is only known at runtime, or it isn't. If it isn't, then `slice` cannot be used for its intended use-case (i.e. decoding data uploaded to a smart contract, which may require extracting variable-length fields). If it's the former, then there's no cost difference between an offset/length approach and a start/end approach. Please help me understand.
I had to find extra time to think it through, and in this case I have to agree with @jcnelson.
It doesn't really matter whether we'll have `(slice sequenceX start-pos end-pos)` or `(slice sequenceX offset length)`, as the type of its output must always be `(optional sequenceX)`.
It means that we don't calculate the max length of the output at all. It is always the same as the length of the input sequence.
At first glance it feels counterintuitive, but we have to apply to this function exactly the same rule that is applied to all other built-in and user-defined functions:
Each function must return a value of a known and static data type.
One of the best examples of this rule is `concat`. It takes 2 arbitrarily long sequences of the same type and returns a sequence of exactly the same type as its inputs, with a max length that is the sum of the max lengths of the input sequences.
This simplifies contract validation, as it can be performed only once (during deployment).
Example:

```clarity
;; returns (buff 9) because 3 + 3 + 3 = 9
(define-read-only (concat3 (x (buff 3)) (y (buff 3)) (z (buff 3)))
  (concat (concat x y) z)
)

(define-private (test1 (x (buff 20)))
  (print x)
)
(define-private (test2 (x (buff 9)))
  (print x)
)
(define-private (test3 (x (buff 8)))
  (print x)
)

(test1 (concat3 0x 0x 0x)) ;; (buff 9) fits into (buff 20), therefore it prints 0x
(test2 (concat3 0x 0x 0x)) ;; (buff 9) = (buff 9), so it prints 0x
(test3 (concat3 0x 0x 0x)) ;; despite the fact that concat3 returns 0x, the type of this
                           ;; value is (buff 9). (buff 9) is longer than (buff 8),
                           ;; therefore we'll get an error at this line
```

And the same example but for slice:

```clarity
;; returns (optional (buff 300))
(define-read-only (cut (input (buff 300)) (x uint) (y uint))
  (slice input x y)
)

(define-private (test1 (x (optional (buff 400))))
  (print x)
)
(define-private (test2 (x (optional (buff 300))))
  (print x)
)
(define-private (test3 (x (optional (buff 100))))
  (print x)
)

(test1 (cut 0x00010203 u0 u2)) ;; (buff 300) fits into (buff 400), so we can print (some 0x0001)
(test2 (cut 0x00010203 u0 u2)) ;; (buff 300) = (buff 300) - we can print (some 0x0001)
(test3 (cut 0x00010203 u0 u2)) ;; cut returns (some 0x0001) but the type of this value is
                               ;; (optional (buff 300)). Because of that we'll get an error
                               ;; at this line
```
The reason for proposing this as a breaking change is to de-risk the possibility
that the new lexer and parser are not "bug-for-bug" compatible with the current
parser. Tying the release of this new lexer and parser with a breaking change,
while keeping the old parser for processing all blocks prior to the change,
ensures that any new behaviors introduced will only apply to transactions after
the change takes effect.
Can you list all the new behaviors this change introduces, as well as all the old behaviors that have been dropped, with the rationale behind these decisions?
Given the current testing regimen, I tentatively believe (but cannot prove) that the new implementation behaves the same as the old implementation. However, the fact that I cannot prove that they are equivalent makes shipping the new parser by way of anything other than a hard fork risky -- if it turns out that my belief was wrong, and someone is able to produce a Clarity contract that parses differently with these two versions, then we risk a chain split. Old nodes with the old parser will not treat the transaction the same way as new nodes with the new parser.
This text is saying that by shipping the new parser as part of a breaking change, we are de-risking this possibility of an accidental chain split.
EDIT: the new parser actually does fix at least one bug, which makes it incompatible with the old parser (per @njordhov's comment below).
if it turns out that my belief was wrong, and someone is able to produce a Clarity contract that parses differently with these two versions, then we risk a chain split.
To support the point @jcnelson is making, there is indeed Clarity code that parses in the original parser but shouldn't parse in the upgrade, such as:

```clarity
(list "\\" x")
```
The current V1 parser is around 700 LOC (ignoring unit tests), while the lexer and parser for Stacks 2.1 is almost 2000 LOC (without unit tests) for V2, plus 700 LOC for the included V1 parser.
For all the extra complexity of the parser upgrade, there had better be other advances than the new parser being "3x faster".
Can you list all the new behaviors this change introduces, as well as all the old behaviors that have been dropped, with the rationale behind these decisions?
Thanks @njordhov for pointing out this bug fix.
In interpreted languages, the parser is the cornerstone of everything, and even a small change can cause a cascade of unknown behaviors/quirks. That is why I wanted to have a list of what has been fixed, added, removed, or modified.
For all the extra complexity with the parser upgrade, there better be other advances than that the new parser is "3x faster".
@obycode can speak to this better than I can (he wrote the new parser), but from memory, the new parser:
- Separates the task of lexing the source code into tokens from the task of parsing the tokens into symbolic expressions. This makes it possible to rigorously test both passes in isolation.
- Continues parsing the program even after errors are encountered, so the full list of errors can be reported to the developer (the v1 parser only reports the first error encountered).
- Parses comments as first-class tokens, instead of ignoring them. This empowers tools like Clarinet to use comments as a way of embedding extra metadata in Clarity code, such as for auto-generating documentation or performing additional static analysis passes.
Yes, @jcnelson's points above are all additional benefits of this change. The new parser is actually 8x faster than the old parser. The biggest contributor to the higher line count is also a massive improvement to the readability of the code: the move away from regular expressions. The regular expressions in the old parser make the code much more difficult to reason about, in addition to making it much slower and more prone to unexpected behavior.
@obycode has anything changed with regards to:
- handling whitespace (in V1 it was mandatory in like 98% of cases)
- handling text encoding (can we write contracts in any other encoding than UTF-8? what about line endings?)
- handling keywords (in V1 we can easily overwrite multiple keywords and change their behavior/value)
- handling optional traits?
Yes, thanks @LNow.
- handling whitespace (in V1 it was mandatory in like 98% of cases)
  - There are some small changes to whitespace, but the previous behavior is largely left as-is
  - `{a:1}` previously gave a "whitespace expected..." error, but is now legal
- handling text encoding (can we write contracts in any other encoding than UTF-8? what about line endings?)
  - It is still the case that only ASCII characters are allowed in the contract
- handling keywords (in V1 we can easily overwrite multiple keywords and change their behavior/value)
  - This is handled in the type-checker, not in the parser, but no changes were made; developers can still overwrite some keywords, same as before
- handling optional traits
  - No changes in the parser for this, but see the other comments in this PR about improvements to traits

Additionally:
- Fixes an issue where a tuple in a comment shows up in the AST (stacks-network/stacks-core#3124)
- Fixes an issue where an escaped backslash in strings may fail parsing (stacks-network/stacks-core#3123)
* **Input Signature:** `(get-burn-block-info? (prop-name BurnBlockPropertyName) (block-height uint))`
* **Output Signature:** `(optional buff) | (optional (tuple (addrs (list 2 (tuple (hashbytes (buff 32)) (version (buff 1))))) (payout uint)))`
|
It is problematic that the `get-burn-block-info?` function returns conflicting output types depending on the property name. It should preferably only take `block-height` as an argument and return both the header-hash and the pox-address as a composite value, such as:

```clarity
(get-burn-block-info? u677050)
;; =>
;; (some {header-hash: 0xe67141016c88a7f1203eca0b4312f2ed141531f59303a1c267d7d83ab6b977d8,
;;        addrs: (list {hashbytes: 0x395f3643cea07ec4eec73b4d9a973dcce56b9bf1, version: 0x00}
;;                     {hashbytes: 0x7c6775e20e3e938d2d7e9d79ac310108ba501ddb, version: 0x01}),
;;        payout: u123})
```
I don't think so. The property name informs the VM of which type signature to use; the fact that the property name is distinct can really be thought of as just syntactic sugar.
`get-block-info?` already behaves this way. Having `get-burn-block-info?` follow this precedent was considered by the authors to be the least-surprising behavior.
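To illustrate the precedent being discussed, here is a sketch of both forms. The property names `header-hash` and `pox-addrs` match the SIP's description; the block height is arbitrary and concrete return values are omitted.

```clarity
;; One special function, two property names; the property name selects
;; the return type at type-check time.
(get-burn-block-info? header-hash u677050)
;; => (optional buff) -- the burn chain block header hash

(get-burn-block-info? pox-addrs u677050)
;; => (optional { addrs: (list 2 { version: (buff 1), hashbytes: (buff 32) }),
;;                payout: uint })
```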
Alternatively, follow the single responsibility principle with a separate function for each purpose:

```clarity
(burn-block-header-hash (block-height uint))
;; => (optional buff)

(burn-block-pox-addrs (block-height uint))
;; => (optional (tuple (addrs (list 2 (tuple (hashbytes (buff 32)) (version (buff 1))))) (payout uint)))
```
These also have the benefit of being pure functions rather than adding a new special function.
These also have the benefit of being pure functions rather than adding a new special function.
Why is that a concern? Clarity already uses non-pure functions everywhere, and that hasn't been an inhibitor to its comprehensibility and applicability. Developers don't seem to be upset about this; what does make them upset is how difficult it currently is to read and write to Bitcoin (and the Clarity changes in this SIP are meant to make progress on this front).
Overall looks good - left a few small notes on formatting.
### New method: `principal-construct`

* **Input Signature:** `(principal-construct (buff 1) (buff 20) [(string-ascii 40)])`,
Should this show the optional argument?

Suggested change:
- `(principal-construct (buff 1) (buff 20) [(string-ascii 40)])`
+ `(principal-construct (buff 1) (buff 20) (optional (string-ascii 40)))`
No; the `principal-construct` function takes either 2 or 3 arguments. I'll list both signatures.
If it takes 2 or 3 arguments then it should be split into two separate functions, `construct-standard-principal` and `construct-contract-principal`, or follow @whoabuddy's suggestion and make the last argument optional.
Syntax should be strict and consistent.
There is no point in introducing overloaded functions if you don't plan to allow users to construct one themselves.
Do any other Clarity functions work like this? It feels like a new pattern but I could be missing something.
Yes -- the `+`, `-`, `*`, `/`, `and`, `or`, `let`, `map`, `list`, `begin`, `tuple`, `contract-call?`, `define-public`, `define-read-only`, `define-private`, `define-trait`, and `is-eq` built-ins all take a variable number of arguments.
Generally speaking, there's already precedent in the Clarity language for using one function with multiple type signatures (`get-block-info?` being the most prominent example). The reason for this is the same as the reason for having a single `principal-construct`: it reduces the number of Clarity built-ins the developer must be aware of in order to solve the task at hand. The Clarity VM doesn't care either way as long as the program type-checks successfully.
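A sketch of the two arities under discussion. The version byte `0x16` is the mainnet single-sig prefix; the hash bytes and contract name shown are arbitrary illustrations, and return values are omitted since the exact response shape is specified in the SIP text.

```clarity
;; 2-argument form: a standard principal from a version byte and hash bytes
(principal-construct 0x16 0xfa6bf38ed557fe417333710d6033e9419391a320)

;; 3-argument form: the same, plus a contract name, yields a contract principal
(principal-construct 0x16 0xfa6bf38ed557fe417333710d6033e9419391a320 "my-contract")
```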
### New method: `principal-destruct`

* **Input Signature:** `(principal-destruct (principal-address principal))`
* **Output Signature:** `(response { hash-bytes: (buff 20), name: (optional (string-ascii 40)), version: (buff 1) } { hash-bytes: (buff 20), name: (optional (string-ascii 40)), version: (buff 1) })`
According to the description this function serves two purposes:
- decompose a principal into its parts
- validate whether its version byte matches the network
Every function should have only one responsibility. Therefore it would make more sense to split this one into two separate functions:

`principal-destruct`
- Input Signature: `(principal-destruct (principal-address principal))`
- Output Signature: `{ hash-bytes: (buff 20), name: (optional (string-ascii 40)), version: (buff 1) }`

`is-network-principal`
- Input Signature: `(is-network-principal (principal-address principal))`
- Output Signature: `bool`
First, `is-standard` validates the version byte independently.
Second, the reason `principal-destruct` and `principal-construct` return an `(err ..)` if the version byte is incompatible with the network is to set developers up for success, and in doing so, reduce the chance of harm being brought to users. If the history of smart contract bugs is any indication, developers can't be relied upon to do a good job of validating untrusted input (in which case, users stand to suffer tremendously from the resulting loss or theft of funds). By making these two functions perform validation automatically, we make it so that the developer cannot avoid validating the input.
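As a sketch of the behavior described above (the exact error payload is specified in the SIP text; this example uses `tx-sender` so it is valid on any network):

```clarity
;; Decomposition and network validation in one call:
;; - (ok { hash-bytes: ..., name: ..., version: ... }) when the version byte
;;   matches the current network;
;; - (err ..) carrying the same tuple when it does not, so the developer
;;   cannot skip the validation step.
(principal-destruct tx-sender)
```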
`is-standard` does more than that 😉 #95 (comment)

If you want to keep developers safe, then a new data type (i.e. `generic-principal` or `stx-principal` or `valid-principal`) that represents a principal which is valid and belongs to any network should be introduced, plus an extra conversion function (i.e. `valid-principal-to-principal`) that returns `(optional principal)` or `(response principal uint)`. That way you'll enforce proper rules at the language level and make it impossible (or almost impossible) to deploy a contract that breaks these rules.

By introducing a new data type, developers won't be able to shoot themselves in the foot, because you'll take the gun out of their hands. By wrapping something with `(err ..)` you're just telling them: "Hey! Be careful. You might hurt yourself with that."

On a side note, I thought that it was impossible to pass a principal from a different network, for example to send STX from mainnet to a testnet address, or to store a testnet address in a mainnet contract variable. I thought such a transaction would be considered invalid and fail or be un-mineable.
The goals are to keep users safe and developers happy. If they are in conflict, the latter can be sacrificed to preserve the former.

Introducing a bunch of new types doesn't keep users any safer than this function's current behavior, but it would make developers miserable. Not only would they have to reason about more types, but it would also de facto prevent Clarity 2 contracts from calling into Clarity 1 contracts and vice versa, because Clarity 1 doesn't know about these new types. So as far as I'm concerned, adding new types to deal with this problem is a non-starter.

> On a side note, I thought that it was impossible to pass a principal from a different network, for example to send STX from mainnet to a testnet address, or to store a testnet address in a mainnet contract variable. I thought such a transaction would be considered invalid and fail or be un-mineable.

You cannot instantiate a smart contract with a non-standard address, but you can send to a non-standard address. Making this function return an error when the address is non-standard preserves compatibility with the current system while also de-risking the chance that developers permit it by mistake.
> Introducing a bunch of new types doesn't keep users any safer than this function's current behavior, but it would make developers miserable. Not only would they have to reason about more types, but also it would de facto prevent Clarity 2 contracts from calling into Clarity 1 contracts and vice versa, because Clarity 1 doesn't know about these new types. So as far as I'm concerned, adding new types to deal with this problem is a non-starter.

I strongly disagree with all your arguments.

Why would it make developers miserable? Because they need to familiarize themselves with one extra type that would make their code safer? Why would it prevent calling V2 from V1 and vice versa?

Small example: let's assume for a moment that you introduced a new type `stx-principal` and a function `to-principal` with the following signature: `(to-principal stx-principal (response principal uint))`, and that `principal-construct` returns `(response stx-principal uint)`.

Can we write code that is fully secured by the language? I think we can:
```clarity
(define-public (test (version (buff 1)) (pub-key-hash (buff 20)) (contract-name (optional (string-ascii 40))))
  (match (principal-construct version pub-key-hash contract-name)
    ;; we were able to construct a valid principal, so we "store it" in a local variable of stx-principal type
    target-stx-principal
    (match (to-principal target-stx-principal)
      ;; we managed to successfully convert stx-principal to a principal that is valid within our network
      ;; let's store it in another local variable and then use it
      target-principal
      ;; so we can execute the transfer knowing that `target-principal` is an address that matches our network
      (stx-transfer? u100 tx-sender target-principal)
      ;; unfortunately we were not able to convert stx-principal to principal, so we store the error in a variable
      err-value
      ;; and return our own custom error
      (err "invalid principal")
    )
    ;; failed to construct principal
    (err "invalid data")
  )
)
```
Can we utilize `stx-principal` and somehow call functions in a V1 contract? I think so:
```clarity
(define-public (test (version (buff 1)) (pub-key-hash (buff 20)) (contract-name (optional (string-ascii 40))))
  (match (principal-construct version pub-key-hash contract-name)
    ;; we were able to construct a valid principal, so we "store it" in a local variable of stx-principal type
    target-stx-principal
    (match (to-principal target-stx-principal)
      ;; we managed to successfully convert stx-principal to a principal that is valid within our network
      target-principal
      ;; and now we can call the V1 contract knowing that the principal we are passing is 100% valid
      (contract-call? .clarity-v1-contract my-function target-principal)
      ;; unfortunately we were not able to convert stx-principal to principal, so we store the error in a variable
      err-value
      ;; and return our own custom error
      (err "invalid principal")
    )
    ;; failed to construct principal
    (err "invalid data")
  )
)
```
Can we call a function defined in a V2 contract from a V1 contract? Why would anyone think that it would be impossible? Both versions recognize the `principal` type, so it will work the same way as it is working right now.
```clarity
;; v1-contract
(define-trait bla-trait
  ((call-me (principal uint) (response bool uint)))
)

(define-public (test (tgt <bla-trait>) (who principal) (x uint))
  (contract-call? tgt call-me who x)
)

;; v2-contract
(define-public (call-me (who principal) (x uint))
  (begin
    (asserts! (> x u0) (err u123))
    (print who)
    (print x)
    (ok true)
  )
)

;; execution:
(contract-call? .v1-contract test .v2-contract tx-sender u200)
```
What if you want to use `principal-destruct` in a V2 contract, but you want to call it from V1? According to my suggestion, `principal-destruct` should accept only a `stx-principal` type argument, right? And V1 doesn't know anything about `stx-principal`. What can we do?

The same thing we do with almost any conversion. If we can convert X to Y, then we should be able (with some exceptions) to convert Y back to X. You can introduce a `to-stx-principal` function with the following signature and the problem will be solved: `(to-stx-principal principal principal)`

Add something like `(is-network-principal principal bool)` that is able to tell whether the principal provided is valid within your network. And enforce that in V2 all functions like `stx-transfer/burn?`, `ft-transfer/mint/burn`, `nft-transfer/mint/burn`, when called, will accept only principals valid within your network.

I believe that if you do something similar to what is described above, you will give yourself a chance to make it virtually impossible to create a contract that will allow users to send funds to an address that belongs to a different network. And I think it would make both developers and users happy.
> In your proposal, developers would always need to remember to (1) check that a principal is valid on the network after calling principal-construct, and (2) remember to convert stx-principal to and from Clarity 1's principal type when calling into Clarity 1.

In reality, tools will inform developers that they are trying to pass a value with an incorrect type, the same way they inform us right now when we try to pass a `trait-ref` instead of a `principal`, a `uint` instead of an `int`, a raw value instead of a response, etc. The type system will take care of that.

> Also, since it seems `stx-principal` and `principal` are distinct types in Clarity 2 that can receive assets (I'm not sure what the real difference is?)

No, those are two different types, just like `principal` and `trait-ref`. `stx-principal` represents any syntax-wise valid principal (regardless of which network it is associated with), and it can never receive assets. While `principal` is any syntax-wise valid principal with a version byte that matches the network on which the contract is executed, and therefore it can safely receive and spend assets.

In other words, you can say that `stx-principal` is like a string with built-in validation that verifies whether or not it matches a certain pattern. And `principal` is a class/object/struct that internally holds a `stx-principal`, is able to tell whether it is valid within a particular network, and can send and receive assets.
> In reality tools will inform developers that they are trying to pass value with incorrect type. The same way they are informing us right now when we try to pass trait-ref instead of principal, uint instead of int, raw value instead of response etc. Type system will take care of that.

Since `principal-construct` and `principal-destruct` return a `response` instead of a `principal`, developers can be assured that they will be informed by the tooling if they do not unwrap the `response` to get at the inner data. That's the reason this API is written the way it is. So, it seems our two approaches achieve the same end. However, I continue to believe that mine is superior because it does not introduce any new types or extensions to the type system to achieve its goal.

Generally speaking, blockchain code is append-only. There is no way to delete previous Clarity functionality, because the implementation requires it to validate previous blocks. Therefore, we should always prioritize API designs that minimize the amount of new functionality added to the VM, since it must be maintained forever.
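A sketch of the point about `response` forcing validation (hypothetical contract; the function name `send-to` and error code `u1` are invented for illustration): because `principal-construct` returns a `response`, the type checker will not let the caller use the principal without first handling the mismatch branch:

```clarity
(define-public (send-to (version (buff 1)) (hash (buff 20)))
  (match (principal-construct version hash)
    ;; ok branch: the constructed principal is valid for this network
    new-principal (stx-transfer? u100 tx-sender new-principal)
    ;; err branch: the { error_code, value } tuple must be handled explicitly,
    ;; so silently transferring to a wrong-network address is a type error
    err-data (err u1)))
```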
You want this and a few other functions to work with all instances of the Stacks network, and I get that, but your approach will force users to write very weird code. For me your reasoning is convoluted, and it will make the language convoluted and ambiguous.

Instead of having clearly defined functions that do one thing and one thing only, we'll have functions that do two things and have three different results:
- success
- semi-error (we managed to de-structure the address, but it is from a different network)
- error (the address can't be de-structured)

The reasoning you have for this and `principal-construct` is not aligned with other functions. It is not even aligned with the new `is-standard` function. Here is why: if I want to find out whether a principal from any network is standard or not, I can't use `is-standard` to do that. I have to use `principal-destruct`.
> So, it seems our two approaches achieve the same end.

I want the problem to be eliminated at the language level. Make it impossible to send funds to an address from a different network. You want to make it preventable and still fully rely on developers and users. One approach is a permanent, probably foolproof fix; the other is just a band-aid.

> Generally speaking, blockchain code is append-only. There is no way to delete previous Clarity functionality, because the implementation requires it to validate previous blocks. Therefore, we should always prioritize API designs that minimize the amount of new functionality added to the VM, since it must be maintained forever.

I'm not asking you to delete any previous Clarity functionality. In V1, `principal-of?` has a bug that will be fixed from 2.1 onward. You're not going to delete it and implement something brand new. And I believe you can do exactly the same with how `principal` is handled.
> Make it impossible to send funds to an address from a different network.

If this is your goal all along, then this would be better achieved by making it so all of the `-mint?` and `-transfer?` functions simply returned an error when given an unsupported address. This wouldn't require introducing any new types either. This wasn't proposed in this particular SIP, but I think the community would be receptive to such a proposal. There's even an issue already tracking it: stacks-network/stacks-core#3118.

> I'm not asking you to delete any previous Clarity functionalities.

I realize that. I'm instead arguing that a simpler solution that does not add new types creates a lower maintenance burden. Adding new types would not only require more code, but also make calls between Clarity 1 and Clarity 2 much more difficult to do correctly.
Until a few days ago I wasn't aware that Clarity has such a massive flaw and allows people to send funds to addresses that belong to a different network. And it makes me wonder how on earth it has not been communicated anywhere. Why is there not a single word about it in the documentation? In short: WTF?

I suggested a new type when I found out why you decided to give this function two distinct responsibilities, and to show you that there is a much cleaner solution that will provide the same if not greater safety. A solution that does not require returning semi-errors.

What is my primary goal?
1. Fix all `*-mint?` and `*-transfer?` functions. If that is not doable right now, add a proper warning to the documentation using a big, bold, red font and then fix it ASAP.
2. Remove the version byte check from `principal-destruct`, `principal-construct`, and `is-standard`, and make all three of them do exactly what their names imply and nothing else.
3. Add a new function that validates the version byte.

That way, if developers want to destruct or construct a principal, or check whether it is a standard one or not, they use functions that do what they need. If they want to verify whether a `principal` belongs to the network, they use the function introduced in point 3.

Everything is fully composable. There is no need to handle errors differently based on whether we are or aren't interested in the network check. If we want to check whether a principal is standard or not, regardless of the network it belongs to, then there is no need to use `principal-destruct`; you simply use `is-standard`, plain and simple.

If you document these functions (`principal-destruct`, `principal-construct`, `is-standard`, and the new one from point 3) properly, then developers will quickly learn how to use them and how to do it securely (nobody needs another: stacks-network/docs#1249 (comment)).
### New method: `principal-construct`

* **Input Signature:** `(principal-construct (buff 1) (buff 20) [(string-ascii 40)])`
* **Output Signature:** `(response principal { error_code: uint, value: (optional principal) })`
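A short sketch of the signature above (the byte values are illustrative; on mainnet, version byte `0x16` denotes a standard single-signature address):

```clarity
;; Standard principal from a version byte and a 20-byte public key hash:
(principal-construct 0x16 0xfa6bf38ed557fe417333710d6033e9419391a320)

;; Contract principal: same parts plus the optional contract name:
(principal-construct 0x16 0xfa6bf38ed557fe417333710d6033e9419391a320 "foo")
```

Per the output signature, both calls return a `response`: an `(ok principal)` when the version byte matches the network, or an error tuple otherwise.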
Same as with `principal-destruct`, we have one function that does two things. And again I will recommend splitting it into two separate functions:

`principal-construct`
- Input Signature: `(principal-construct (buff 1) (buff 20) (optional (string-ascii 40)))`
- Output Signature: `(response principal uint)`

`is-network-principal`
- Input Signature: `(is-network-principal (principal-address principal))`
- Output Signature: `bool`
See #95 (comment)
### New method: `string-to-int`

* **Input Signature:** `(string-to-int (input (string-ascii|string-utf8)))`
* **Output Signature:** `(optional int)`
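A quick sketch of the expected behavior (based on the signature above; unparsable input yields `none` rather than a runtime error):

```clarity
(string-to-int "1")    ;; (some 1)
(string-to-int "-1")   ;; (some -1)
(string-to-int "abc")  ;; none
```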
I'm going to comment on only this function, but it applies to all 4 (`string-to-int`, `string-to-uint`, `int-to-ascii`, `int-to-utf8`).

If the target data type is specific, then the source data type should be specific as well, and instead of 4 we should have 8 functions:
- `ascii-to-int`
- `utf8-to-int`
- `ascii-to-uint`
- `utf8-to-uint`
- `int-to-ascii`
- `int-to-utf8`
- `uint-to-ascii`
- `uint-to-utf8`

Especially when the names of `int-to-utf8` and `int-to-ascii` suggest that the input data type is `int`, while in fact they accept both `uint` and `int`.
> If target data type is specific, then source data type should be specific as well

Why?
To make it explicit what the data type is from which you are converting. It will make code analysis much simpler.

If I squint hard enough, I would probably accept `string-to-X`, but `int-to-X` suggests that the input type should be `int` while in fact it accepts both `uint` and `int`, so `number-to-X` would fit better.

Also, it would be great to add the max length of the input of `string-to-X` and the output of `number-to-X`, as they are known and should be enforced and respected at the contract validation level.
This exact topic came up at one of the blockchain meetings back when this was being implemented (I don't remember who brought it up). The group considered `number`, but thought that developers might think that included floating/fixed point numbers; we considered `integer`, but thought that just made it longer to type and didn't really clarify anything. Personally, I would be okay with `number`.
Co-authored-by: Matthew Little <zone117x@gmail.com>
The value for `v1_unlock_height` will be determined before this SIP activates.
It will fall as close to the start of cycle `N` as possible in order to give
users the maximum amount of time to re-lock their now-unlocked STX into PoX-2,
if they so desire.
Can we clarify "close to the start" here? Is `v1_unlock_height` intended to be a part of N (one of the first reward phase blocks), or a part of N-1 (one of the last prepare phase blocks)?
In practice, yes, but it doesn't matter if it's later. I was planning on just updating these values once we know the target activation block heights.
and beyond is unlocked at the end of period 2a, once the burnchain block
height `v1_unlock_height` passes.
- Accounts locked in PoX-2 will remain locked. It will be possible to do this
with unlocked STX at the start of period 2a.
> Accounts locked in PoX-2 will remain locked. It will be possible to do this with unlocked STX at the start of period 2a.

This still trips me up a bit. Could it be rephrased? What action is meant by "do this"?
Added some comments that may have been addressed in some discussions, but would be nice to have as part of the SIP text.
Co-authored-by: janniks <6362150+janniks@users.noreply.github.com>
This was discussed in the comments, and updated in the implementation, but never made it into the actual SIP.
fix: updated `?` suffixes
Advance to activation-in-progress now that voting has begun
It looks like the stacking amount was removed in PoX-2. Is that a bug or a feature that should be mentioned here? stacks-network/stacks-core#3436
fix: fix typo in `stx-transfer-memo?` example
It contains a length-prefixed name and a length-prefixed code body. See
SIP-005 for details.

## New Burnchain Transaction: `delegate-stx`
There should be a `revoke-delegate-stx` burnchain transaction as well.
@jcnelson without a revoke function, users won't be able to change the pool.
Just an update on this -- we will merge this once 2.1 activates on mainnet.
Hi SC members, @jcnelson @MarvinJanssen @GinaAbrams Same goes for SIP-020, SIP-018, SIP-019
This PR supersedes #53 and contains the SIP for activating Stacks 2.1.