[Feature Request] Dispatchable Fungible Asset Standard #12490
Comments
@runtian-zhou Generally I think overloading is a useful extension that will come in handy for folks trying to implement royalties, gamification, etc., and as long as it is optional, I generally support efforts for this kind of feature. My major concern stems from what could be a misinterpretation of the following:
Basically, if an issuer decides to overload, does this mean that the overloading logic gets injected into every transfer for the affected asset? Or just for those who call through the overload wrappers in
I am not sure how to reconcile the two, as it seems like a trade-off between a) breaking DeFi (or making it prohibitively complex to trade overloaded assets), and b) allowing circumvention of overloading. Please advise. cc @lightmark
I think it's the former. The issuer gets to decide if they want to insert logic into transfer. I would like to understand how this could affect the DeFi space a bit more. My thinking is that we will be able to provide the same API as the regular fungible_asset.move. So a function for
Yes I understand that this will apply for the new wrapper, but will DeFi protocols still be able to use the non-overloaded functions in the existing FA source code? If not, and if they are forced to use overloaded functions, this is where the accounting disaster comes in:
In the general case, supporting arbitrary, Turing-complete callback extensions to simple transfers is a nightmare for a DeFi protocol designer.

An example of a simple yet nondeterministic overload: say someone implements a lottery royalty system as a transfer extension, where whenever a transfer happens, a random amount between 0.1% and 0.5% gets taken as commission. Then a DeFi protocol with a pool holding the asset has no way of calculating the effective amount of collateral in the pool, because it is impossible to predict how much will be taken as commission on any given swap. That means the protocol has to bake in some kind of stochastic prediction model for collateralization based on the expected commission per transfer.

Another example, with a complex but deterministic overload: 1% of token x gets taken as commission if the recipient holds a different token y, but 2% if they don't. Does this mean that the matching engine for an x/USDC order book also has to monitor the y holdings of every limit order holder in order to calculate effective limit prices?

@MoonShiesty you might be interested in this
Thanks for the examples here! My question is: how is that going to be different from what Ethereum already has in its DeFi ecosystem? The coin contract only provides the interface for transferring some value from one account to another; it's up to the coin contract to decide what actually needs to be done during this transfer. In that case, the semantics of transfer are also not predictable. I'm not completely sure anyone would do that in reality, but it is indeed implementable. Commission prediction is indeed an interesting topic. With an overloadable withdraw, it is indeed impossible to predict what the side effects of a given transfer could be.
Also, in the currently proposed API, the commission fee is actually a bit interesting. In the overloaded function, the signature made sure that you only have an
@runtian-zhou thanks just now for the call to chat through this live. Summary of main points:
Wish we had spoken in person. I think the above request isn't really practical. What you're effectively asking for is an additional set of native calls and more logic on the overloaded contracts for reasons that aren't super clear to me. It also implies that certain assets have to conform deterministically, which isn't a requirement at all. In fact, this model allows for arbitrary behavior. So I think there are the following cases:
I think the concern you're sharing is that with an overloaded asset, a user could specify

What I'd like to propose is that, at a minimum, a withdraw cannot withdraw more than the user indicated. Specifically, we check the balance before and after the withdraw and ensure the difference is exactly the amount requested. That eliminates a lot of quirks where a protocol could be excessively arbitrary. It should also help ensure that a pool is able to adequately track fund movements out of it. In terms of deposit, we already don't know how much was removed by taxing during the withdraw, so taxing on deposit has no additional implications; this must be monitored outside the dapp. Thoughts @alnoki?
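A minimal sketch of the proposed balance-delta guard, assuming hypothetical helper names (`balance`, `dispatchable_withdraw`, and the error constant are illustrative, not the actual framework API):

```move
// Hypothetical sketch of the proposed guard; names and signatures are
// illustrative, not the shipped API.
public fun checked_withdraw(store: address, amount: u64): FungibleAsset {
    let before = balance(store);
    // Issuer-defined withdraw logic runs here and may do arbitrary work.
    let fa = dispatchable_withdraw(store, amount);
    let after = balance(store);
    // Abort unless exactly `amount` left the store.
    assert!(before - after == amount, EAMOUNT_MISMATCH);
    fa
}
```

This bounds what an overloaded withdraw can do to the caller's store, while leaving the issuer's internal logic arbitrary.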
@davidiw the fixed send/receive amounts are to make DeFi accounting pragmatic and user-friendly.

Consider the case where there is no fixed receive, and a user wants to deposit 1000 X into a pool: the pool will abort unless it actually receives 1000 X. The basic "provide liquidity" function will simply call a 1000 X transfer, but if there is some kind of tax on it, then the transaction aborts. Even if the tax is deterministic, someone has to write an API that makes the deposit work smoothly by specifying some amount above 1000 X. A fixed receive function solves this.

On the converse side, once the assets are in the pool, the user who wants to withdraw specifies how much comes out of the pool, rather than how much they get, for the same reason: the pool accounting must not be altered by transfer tax effects. Hence a fixed send function.
So you are already enforcing that
The further we go down this path, the more complicated and restrictive the API becomes. The intent here is to allow for rather arbitrary asset types. This conversation belongs in the AIP more than in code, though; we most certainly will not ship painful-to-use code, so let's treat this as a higher-level discussion first and foremost :). If we compare to ERC-20 (see https://eips.ethereum.org/EIPS/eip-20), we are already more explicit, as our withdraws must explicitly extract the expected amount. If we want relatively arbitrary behavior, there's no way to also give the guarantees you seek. Instead we either need a relatively fixed set of operations that can be used independent of the token withdraw/deposit, or we increase the number of functions that need to be dispatched. DeFi solutions already have verified and unverified pools, which I hope would ensure that assets that play by the natural rules are authorized, and that those that do not get appropriate harnesses around them. I worry that trying to be exhaustive will increase development time and have lesser outcomes. What happens if/when Aptos offers broader dynamic dispatch? Will exposure to framework and core library functions require similar considerations? Here are a set of operations that could exist:
Maybe we could have an additional two overridden functions:
or alternatively we could have
These can be enforced in the respective wrappers for withdraw and deposit. However, each additional override costs more, computationally and cognitively. I'm personally not aware of how this is handled on ETH, and it seems like Solana's Token-2022 has somewhat arbitrary behaviors too. I'll look at Solana more carefully.
Hmm... now if you just want a transfer function that has these asserts in place for convenience, I think we could do that. However, if we want the flexibility of learning how much to move around to reach a certain value, it becomes more and more expensive and limiting. These are operations that may be better suited for off-chain computation.
That would resolve most of the issues
I'd rather rely on purely onchain to eliminate trust requirements
Yes please, this should do it
I think what makes sense here is:
🚀 Feature Request: Dispatchable Fungible Asset Standard
Summary
Right now the Aptos Framework defines a single module, `fungible_asset.move`, as our fungible asset standard, making it hard for other developers to customize the logic they need. With this AIP, we hope that developers can define their own custom ways of withdrawing and depositing their fungible assets, allowing for a much more extensible Aptos Framework. This proposal will be submitted as an AIP; we are using this issue to track and start the discussion.
Goals
The goal is to allow for third party developers to inject their custom logic during fungible asset deposit and withdraw. This would allow for use cases such as:
Note that all the logic mentioned above can be developed and extended by any developer on Aptos! This would greatly increase the extensibility of our framework.
Out of Scope
We will not be modifying any core Move VM/file format logic. We will use this AIP as predecessor work for the dynamic/static dispatch we are planning to support in future Move versions.
The AIP here could potentially be applied to our NFT standard as well. However, we are not going to worry about that use case in the scope of this AIP.
Motivation
Right now the Aptos Framework governs the whole logic of what a fungible asset means, and every DeFi module needs to be statically linked against that module. We probably won't be able to meet the various functionality needs coming from all of our developers, so an extensible fungible asset standard is a must on our network.
Impact
We want to offer token developers the flexibility to inject customized logic during token withdraw and deposit. This will have some downstream impact on our DeFi developers as well.
Alternative solutions
We are using this AIP as precursor work for future dispatch support in Move on Aptos. We will therefore implement a limited-scope dispatch function via a native function, instead of a full set of changes to the Move compiler and VM, so that we have more time to assess the security implications of dispatching logic in Move.
For the proposed `overloaded_fungible_asset.move`, an alternative solution would be to add the dispatch functionality directly to the existing `fungible_asset.move`. However, that would be pretty unusable right out of the box under the proposed runtime rule: for such a dispatch function to be usable, we would need an exception to the runtime safety rule so that re-entrancy into `fungible_asset.move` is allowed. This would require the framework developers to be particularly cautious about potential re-entrancy problems.

Specification
We will be adding two modules to the Aptos Framework:

`function_info.move` simulates a runtime function pointer that can be used for dispatching.

`overloaded_fungible_asset.move` serves as the new entry point for fungible assets. It wraps our existing `fungible_asset.move` and has a similar API. The reason we need an extra module, instead of adding the dispatch logic to `fungible_asset.move`, is the runtime rule mentioned below.

There will also be a new runtime check in the Move VM.
This runtime check is needed because of the possible re-entrancy problem that this AIP could enable. The check will not fail on any existing Move program on chain. See the security discussion for why we need such a runtime check.
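As a rough illustration of the two proposed modules, a hedged sketch (all struct fields, signatures, and the hook-registration shape here are assumptions, not the final API; only the module names come from the proposal):

```move
// Sketch only: field names and signatures are illustrative assumptions.
module aptos_framework::function_info {
    use std::string::String;

    // Simulates a runtime function pointer by naming a function.
    struct FunctionInfo has copy, drop, store {
        module_address: address,
        module_name: String,
        function_name: String,
    }

    public fun new_function_info(
        module_address: address,
        module_name: String,
        function_name: String,
    ): FunctionInfo {
        FunctionInfo { module_address, module_name, function_name }
    }
}

module aptos_framework::overloaded_fungible_asset {
    use aptos_framework::function_info::FunctionInfo;

    // Issuer-registered withdraw hook, stored on the asset's metadata.
    struct WithdrawHook has key {
        callback: FunctionInfo,
    }

    // Wrapper with an API similar to fungible_asset::withdraw; a native
    // function would dispatch into the registered callback, if any.
    public fun withdraw(_owner: &signer, _metadata: address, _amount: u64) {
        // Dispatch logic elided in this sketch.
    }
}
```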
Reference Implementation
Not implemented yet.
Risks and Drawbacks
The biggest risk here is the potential re-entrancy problem that could be introduced by the dispatching logic. See security consideration section for details.
Security Considerations
Current State of Move's Re-entrancy and Reference safety
The biggest security concern is how this could change the re-entrancy and reference safety story of Move. Before we jump into the problem, let's take a look at a couple of Move design goals:
Note that these two properties are enforced by the Move bytecode verifier, a static analysis performed at module publishing time, so any module that violates them is rejected right away when it is published. The question is: how do we actually reason about the two safety properties in the Move bytecode verifier?
The first important assumption made by the Move bytecode verifier is that the dependency graph of any Move program has to be acyclic, meaning that two modules cannot mutually depend on each other, directly or transitively. There is a specific check for this property when a module is published. This leads to an important observation: if a module A invokes a function defined in another module B, that function has no way of invoking any function defined in module A, because of the acyclic property. So consider the following program:
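The original code listing did not survive; the following is a hedged reconstruction based on the names in the surrounding prose (`Lending`, `supply`, `another_module::bar`, `acquires_t2`, `test1`–`test3`), with illustrative addresses and bodies:

```move
// Hedged reconstruction; addresses and exact bodies are assumptions.
module example::lending {
    use example::another_module;

    struct Lending has key { supply: u64 }
    struct T2 has key { value: u64 }

    fun acquires_t2(addr: address): u64 acquires T2 {
        borrow_global<T2>(addr).value
    }

    // Allowed: the mutable reference is dropped before acquires_t2 runs.
    fun test1(addr: address) acquires T2 {
        let t2 = borrow_global_mut<T2>(addr);
        t2.value = t2.value + 1;          // last use: reference is dropped
        let _ = acquires_t2(addr);
    }

    // Rejected at publish time: acquires_t2 would create a second
    // reference into T2 while the mutable reference is still live.
    fun test2(addr: address) acquires T2 {
        let t2 = borrow_global_mut<T2>(addr);
        let _ = acquires_t2(addr);        // bytecode verifier error
        t2.value = t2.value + 1;
    }

    // Allowed: another_module::bar cannot call back into this module
    // (dependencies are acyclic), so the verifier treats it as a no-op.
    fun test3(addr: address) acquires Lending {
        let lending = borrow_global_mut<Lending>(addr);
        another_module::bar();
        lending.supply = lending.supply + 1;
    }
}
```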
With the acyclic property, the Move bytecode verifier knows that `another_module::bar()` has no way of invoking other functions that could mutate the `supply` field in the `Lending` resource. Thus, the only way of mutating the `Lending` resource is to invoke functions defined in your own module, and the Move bytecode verifier performs a static analysis to make sure that there won't be two mutable references. Specifically, we can look at the following examples.

In all the test functions mentioned above, once a mutable reference has been borrowed, the bytecode verifier makes sure that a subsequent reference can be borrowed only after the first mutable reference has been dropped. In `test1`, calling into `acquires_t2` is allowed because the mutable reference has already been dropped. In `test2`, however, calling into `acquires_t2` is strictly forbidden, and a module containing such code won't be publishable, because the mutable reference is still held when `acquires_t2` tries to get another reference. In `test3`,
however, because of the acyclic property of Move dependencies mentioned above, the Move bytecode verifier can statically assume that this function call will not be able to invoke functions that could generate references to the state you are currently holding. Thus the bytecode verifier simply treats this call as a no-op during the static analysis.

How would dispatching logic change the story here?
The biggest assumption that dispatching would break is that the Move bytecode verifier can no longer assume a function only invokes functions that have already been published. As a result, the acyclic property that is crucial to Move's reference safety and re-entrancy properties would be broken. Consider the following example:
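A hedged reconstruction of the lost example, using the names from the prose (`supply_1`, `supply_2`, `dispatchable_withdraw`); the `withdraw` call signature is a placeholder:

```move
// Hedged reconstruction; signatures are placeholders.
module example::my_token {
    use aptos_framework::overloadable_fungible_asset;

    struct Supply has key { value: u64 }

    fun caller(addr: address) acquires Supply {
        let supply_1 = borrow_global_mut<Supply>(addr);
        // Statically this looks like a call into another module, so the
        // verifier assumes it cannot touch Supply...
        overloadable_fungible_asset::withdraw(addr, 100);
        // ...but at runtime it dispatches into dispatchable_withdraw below,
        // while supply_1 is still a live mutable reference.
        supply_1.value = supply_1.value - 100;
    }

    // Registered as this asset's custom withdraw logic.
    public fun dispatchable_withdraw(addr: address, amount: u64) acquires Supply {
        // supply_2 is a second mutable reference to the same global value.
        let supply_2 = borrow_global_mut<Supply>(addr);
        supply_2.value = supply_2.value - amount;
    }
}
```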
In this example, the bytecode verifier has no idea that the call into `aptos_framework::overloadable_fungible_asset::withdraw()` will go back into the `dispatchable_withdraw` function defined in the same module. Thus it has no idea that when `supply_2` is borrowed, there is already an existing mutable reference in `supply_1`, which effectively breaks the reference safety assumption of Move.

Here's another slightly problematic example about re-entrancy:
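A hedged reconstruction of the re-entrancy example (resource and function names from the prose; the `withdraw` call signature is a placeholder):

```move
// Hedged reconstruction; the withdraw call signature is a placeholder.
module example::lending_pool {
    use aptos_framework::overloadable_fungible_asset;

    struct Lending has key { supply: u64 }

    fun supply(addr: address): u64 acquires Lending {
        borrow_global<Lending>(addr).supply   // reference dropped on return
    }

    fun rebalance(addr: address) acquires Lending {
        let cached = supply(addr);
        // No reference is live here, so reference safety holds. But with
        // dispatch, this call can re-enter this module and mutate Lending,
        // silently invalidating `cached`.
        overloadable_fungible_asset::withdraw(addr, 100);
        let _stale = cached;   // accounting based on `cached` is now suspect
    }
}
```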
In this case, the reference safety property of Move holds, because the reference to `Lending` is already destroyed after the `supply()` call. However, the code is still problematic. In the current Move setup, calling into functions defined in another module has no way of mutating the state you care about, so you only need to reason about the local functions that can mutate that state. This assumption no longer holds with the introduction of dispatching, which could add huge overhead for smart contract developers reasoning about their code's re-entrancy properties.

Proposed solution: new runtime checks for cyclic dependencies
In the analysis above, we demonstrated how the acyclic assumption plays an important role in Move's static reference safety analysis and re-entrancy property. In the worst case, developers would be able to create multiple mutable references to the same global value without any complaint from Move's bytecode verifier. As a mitigation, we suggest enforcing the property at runtime: a function call may not form a back edge in the call dependency graph. In the re-entrancy problem example, the call stack will look like the following:
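The call-stack figure is missing here; it would look roughly like this (the exact frame names are an assumption, following the prose):

```
some_module::caller
  -> overloadable_fungible_asset::withdraw
    -> some_module::dispatchable_withdraw   <- back edge into some_module: abort
```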
The runtime rule will cause the program to abort when `dispatchable_withdraw` is pushed onto the call stack. Other blockchain systems have similar runtime checks for module-level re-entrancy problems. One thing to note is that this check can never fail on any of our existing Move code, because the acyclic property is already checked when a module is published; the check can only fail with the introduction of the dispatching mechanism.

A downside of such a runtime check is that it makes it very hard to integrate the dispatch function directly into `fungible_asset.move`. The reason is that the dispatched withdraw function might need to invoke functions defined in `fungible_asset.move`; in the deflation token example, the developer will most likely need to call the split and burn functions in `fungible_asset.move`. If the `withdraw` API were added to `fungible_asset.move`, the dispatched call would re-enter `fungible_asset.move`, an immediate violation of the runtime check rule proposed above. To mitigate this issue, I would propose to move the dispatch entrypoint to a new module
`overloadable_fungible_asset.move` instead of the existing `withdraw` API in `fungible_asset.move`. Another alternative is to exempt `fungible_asset.move` from this check, making a one-off exception. That would force framework developers to reason about the re-entrancy properties of `fungible_asset.move`, which wasn't a problem previously.

Future Potential
We will use the lessons learned from this AIP to help implement the higher-order function system in Move, as suggested in the future of Move on Aptos.
Timeline
Suggested implementation timeline
We are planning to implement the feature in the upcoming release.
Suggested developer platform support timeline
N/A
Suggested deployment timeline
We would want to implement it in 1.11 release.
...
Open Questions (Optional)
We need feedback on the public interface of the modules, in particular `overloaded_fungible_asset.move`, and on how this should work with `fungible_asset.move` properly.