Accommodating existing SLP use cases and other metadata support #50

Closed
zander opened this issue Jul 27, 2022 · 25 comments

@zander
Contributor

zander commented Jul 27, 2022

CashTokens are shaping up to be awesome and will likely enable a large number of very interesting use cases that are, frankly, unique among decentralized systems.

The reason I'm writing here is to also take a look at the use cases that SLP currently serves very successfully, and to make sure those are not forgotten and that support for them is the best it can be.

To take one great example: olicrypto. This is an SLP token which is openly traded, and the technicals are simple: she pays dividends to investors (example). Those technicals are trivially supported in CashTokens. Except that the token also has a user-visible name, a URL, and some other details that are important for the usability of the token and for trading it in a decentralized manner.

"Simple" traded tokens may be a good way to make the point, but even more advanced tokens will benefit from meta-data like which date the token was started, what the maximum number of fungible tokens are and more details.

Genesis transaction

All the above examples revolve around information that is (or can be) stored on the genesis transaction: the start of one specific CashToken.

So, let's look at how anyone looking at any CashToken transaction can get hold of that genesis transaction. This starts with the fact that the token has a 'category', which is intentionally a copy of a TXID. So we can start with that transaction.

Unfortunately, by looking up that transaction we don't actually find the genesis. The genesis is the transaction that spends from the "category = txid" transaction, and on the blockchain that link is not stored. Additionally, there is no limit to how old that transaction can be when the genesis is created. It could be from years before the actual token is minted.

We could use an indexer server, but that makes things much more complex and costly, and we learned from SLP that it's easy to assume indexers will keep being maintained while reality is much harsher and bleaker. An alternative path to our genesis tx is BIP 37, which uses bloom filters and merkleblock messages. The problem with that is we would need to ask a full node to filter every single block in sequence, which at best severely limits the full nodes from serving better-deserving SPV wallets. It's also slow and thus not very UX friendly.

Rationale

So, I understand why the current design was picked. It is the simplest solution that gets the best of both worlds: a TXID as a unique ID for a category (avoiding any risk of duplicates), but not the TXID of the same transaction, so that the setup script for a token can actually include the category in its script. You can't include your own TXID in the same transaction, so a prevout TXID it became.

So, we have today:

Tx-k: the txid setting transaction.
Tx-g: a coin genesis transaction.

With the problem that there is no link from the first to the second.

Suggestion

In a sentence: to merge those two into one, at least for many simpler use cases.

So, for the simple use case you just end up with:

Tx-g: a coin genesis transaction. Its txid is the coin category.

This would be enough to do all the things that SLP supports today. It can have an OP_RETURN output that is specified much like SLP does today. And most importantly, every transaction that ever moves this token will refer to its TXID, and most full nodes will happily serve a single transaction based on its TXID.
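
To illustrate, here is a minimal sketch of how a wallet might decode such an OP_RETURN. The field order (ticker, name, document URL, document hash, decimals) is a hypothetical layout loosely modeled on SLP's GENESIS fields, not something specified anywhere; only the push parsing follows standard Bitcoin Script encoding.

```typescript
// Hypothetical genesis metadata fields, loosely modeled on SLP's GENESIS record.
interface GenesisMetadata {
  ticker: string;
  name: string;
  documentUrl: string;
  documentHash: Uint8Array;
  decimals: number;
}

// Split an OP_RETURN locking bytecode (0x6a ...) into its pushed data fields.
const parsePushes = (bytecode: Uint8Array): Uint8Array[] => {
  if (bytecode[0] !== 0x6a) throw new Error('not an OP_RETURN output');
  const pushes: Uint8Array[] = [];
  let i = 1;
  while (i < bytecode.length) {
    const op = bytecode[i++];
    let length: number;
    if (op >= 0x01 && op <= 0x4b) length = op; // direct push
    else if (op === 0x4c) { length = bytecode[i]; i += 1; } // OP_PUSHDATA1
    else if (op === 0x4d) { length = bytecode[i] | (bytecode[i + 1] << 8); i += 2; } // OP_PUSHDATA2
    else throw new Error(`unsupported opcode 0x${op.toString(16)}`);
    pushes.push(bytecode.slice(i, i + length));
    i += length;
  }
  return pushes;
};

const decodeGenesisMetadata = (opReturnBytecode: Uint8Array): GenesisMetadata => {
  const pushes = parsePushes(opReturnBytecode);
  if (pushes.length < 5) throw new Error('unexpected genesis OP_RETURN layout');
  const [ticker, name, documentUrl, documentHash, decimals] = pushes;
  const utf8 = new TextDecoder();
  return {
    ticker: utf8.decode(ticker),
    name: utf8.decode(name),
    documentUrl: utf8.decode(documentUrl),
    documentHash,
    decimals: decimals[0] ?? 0,
  };
};
```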

But you'll find there is a need to do more things, because CashTokens are just too cool (read: powerful). How would that work?

A setup where multiple tokens work together is best served by supplying their details in different outputs of the same transaction. For that, you'd need something like this:

Tx-g1: genesis transaction for token 1. Output script is a simple p2pkh. (mutable enabled!)
Tx-gn: genesis transaction for token n. Output script is a simple p2pkh.

Tx-start: transaction that spends all genesis tokens and initializes them in its outputs.

Those could all be broadcast at the same time, no problem there. In practically all respects this is just as flexible as the current design; it really is only a small refactor that makes the worst case slightly more expensive, in exchange for allowing a whole range of features that SLP users are used to.

One detail worth mentioning: likely the easiest way to differentiate a genesis transaction is to have it list its own category as all zeros. This is cheap to check during validation, really at worst just as expensive as the check we need now.

That would be a really strong advantage for those currently on SLP.

@bitjson
Member

bitjson commented Jul 27, 2022

Thanks for opening an issue! Yes, we should definitely have the basics figured out for existing SLP use cases.

For storing immutable/genesis data about a token: could you place the OP_RETURN data in output 1 of the transaction used for the category ID, i.e. the "pre-genesis transaction"?

In a sense, that transaction can be seen as the real "genesis", since it produces the one-time-use output (output 0) with the ability to create the token category for that TXID. Downstream, consumers can know for certain that fungible tokens of that category were created by spending that output.

On that note, to enforce particular token creation schemes:

  • limiting the fungible token supply below the maximum limit,
  • ensuring that NFT schemes include no mutable/minting tokens,
  • ensuring that a produced minting token is only given to a known covenant,
  • ensuring that the next transaction produces a properly-constructed Jedex using the token category,
  • etc.

Output 0 can be given to a covenant enforcing whatever rules are needed.

With this setup, downstream consumers can immediately verify the integrity of highly-complex contract systems for a particular token category by only validating the contents of the pre-genesis transaction.

To make it even more efficient, you can make it possible for consumers to check only for well-known, static "minting P2SH contracts" without baked-in public keys – they just have to be designed to be controlled by a sibling output, e.g. output 2. See depository covenants here. Then, for example, validating that a particular token category offers a non-corrupted instance of a well-reviewed DEX design is as simple as confirming that pre-genesis output 0 has the correct 35-byte P2SH32 locking bytecode. (And if you have an indexer, you can see all the DEX instances of this type by looking at that "address". With a little more work, you can automatically find the highest-liquidity DEX for any given token.)
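
A minimal sketch of that last check, assuming the verifier already knows the 32-byte hash of the audited contract's redeem bytecode (the `expectedRedeemScriptHash` parameter here is hypothetical); it only verifies the standard P2SH32 pattern OP_HASH256 <32-byte hash> OP_EQUAL.

```typescript
// Check that a pre-genesis output 0 commits to a known, well-reviewed covenant
// via the 35-byte P2SH32 pattern: OP_HASH256 <32-byte hash> OP_EQUAL.
const matchesKnownMintingCovenant = (
  lockingBytecode: Uint8Array,
  expectedRedeemScriptHash: Uint8Array, // 32-byte double-SHA256 of the audited redeem bytecode
): boolean => {
  if (expectedRedeemScriptHash.length !== 32) return false;
  if (lockingBytecode.length !== 35) return false;
  if (lockingBytecode[0] !== 0xaa) return false; // OP_HASH256
  if (lockingBytecode[1] !== 0x20) return false; // push 32 bytes
  if (lockingBytecode[34] !== 0x87) return false; // OP_EQUAL
  const committedHash = lockingBytecode.slice(2, 34);
  return committedHash.every((byte, i) => byte === expectedRedeemScriptHash[i]);
};
```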

@bitjson changed the title from "proposal for category" to "Accommodating existing SLP use cases" Jul 27, 2022
@A60AB5450353F40E
Contributor

I don't understand why it would be a problem to find the genesis TX from the pre-genesis TX. We don't need any new kinds of indexers; any block explorer, even a non-upgraded one, will do the job. Users can just:

  1. Inspect category_id of the UTXO their wallet has received.
  2. Go to a block explorer, search for a transaction by ID and type in the category_id.
  3. Click on the 1st output. Done.
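
The same lookup can be automated. A minimal sketch over the Electrum protocol's blockchain.transaction.get method follows; the WebSocket endpoint is a placeholder, and verbose mode is requested so the outputs can be inspected directly.

```typescript
// Fetch the pre-genesis transaction directly by category ID over the Electrum
// protocol. The server URL below is hypothetical; any Electrum-compatible
// (ElectrumX/Fulcrum) endpoint exposing blockchain.transaction.get would do.
const fetchPreGenesisTransaction = async (categoryId: string): Promise<unknown> =>
  new Promise((resolve, reject) => {
    const socket = new WebSocket('wss://example-electrum-server.invalid:50004'); // placeholder endpoint
    socket.onopen = () =>
      socket.send(JSON.stringify({
        id: 1,
        method: 'blockchain.transaction.get',
        params: [categoryId, true], // category ID == TXID of the pre-genesis TX; verbose = true
      }));
    socket.onmessage = (event) => {
      const response = JSON.parse(event.data as string);
      response.error ? reject(response.error) : resolve(response.result);
      socket.close();
    };
    socket.onerror = reject;
  });
```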

@bitjson
Member

bitjson commented Jul 27, 2022

Related to this topic, I think it's unwise for wallets to place much trust in this sort of pre-genesis data (posted in CashToken Devs):

Using genesis date as a proxy for proper verification would encourage squatting on many similar names/images/other metadata that fallible humans will be using to attempt to select the "real" tokens from imposters. Minting tokens isn't costly, so it seems very risky for a wallet to rely on this sort of validation. In other token ecosystems, wallets tend to use lists of validated/known token IDs, where the issuer of the list ensures that the items on the list can't be confused with each other (and many other parties sign that they have validated the contents, so everyone can be sure that the list doesn't include impostors). See for example: https://tokenlists.org/

But it's still important that we have an answer here. 👍

@zander
Contributor Author

zander commented Jul 27, 2022

hi Jason,

thanks for thinking along.

For storing immutable/genesis data about a token: could you place the OP_RETURN data in output 1 of the transaction used for the category ID, i.e. the "pre-genesis transaction"?

That would be yet another option, one that feels really quite hacky. An SLP info output without any actual token data in the same transaction – doesn't that feel hacky to you too?

On that note [big snip].

Not sure if that was meant for this topic, it didn't feel like it was at all commenting on the proposal.

I hope you can write something about the proposal, so far it still is the only one that solves the issues listed.

@A60AB5450353F40E
Contributor

An SLP info without any actual token data in the same transaction, doesn't that feel hacky to you too?

It's more than that, because with a covenant placed on output 0 it will not be possible to create the token in any other way than according to the data revealed in the pre-genesis TX, so that data is as good as data from the actual genesis TX that writes the initial UTXO state of the token.

Imagine a "pre-genesis" TX with these 2 outputs:

  • output 0: has some P2SH locking script
  • output 1: OP_RETURN that reveals the P2SH redeem script, and it would be a contract that enforces the exact initial setup of the token to be created by consuming index 0 output.

Or, it can be the input that reveals the data and validates the exact script that will create a token. If the input is a public P2SH covenant that validates token setup according to some standard template and parameters, then that input's script will tell you all about the token to be created.

Then, your users interested in the token would:

  1. fetch the TXID == categoryID transaction
  2. inspect input 0; if it spends the known standardization P2SH then it's "vetted" to be an SLP token
  3. inspect input 0 and extract the token metadata from its unlocking data, knowing for certain that the next TX can only have created exactly that, because
  4. output 0's P2SH hash matches the declared token metadata and setup parameters

This would all, of course, be automated. The user would just see the token's data loaded automatically, without any indexer required.

@zander changed the title from "Accommodating existing SLP use cases" to "Accommodating existing SLP use cases and other metadata support" Jul 27, 2022
@bitjson
Member

bitjson commented Jul 30, 2022

@zander:

That would be yet another option, one that feels really quite hacky. An SLP info without any actual token data in the same transaction, doesn't that feel hacky to you too?

I'm not sure I follow – could you describe how the two options differ in practice?

@zander
Contributor Author

zander commented Jul 30, 2022

how the two options differ in practice?

Well, first of all, it would not actually solve all the issues this issue was created for. Your idea to stick the SLP metadata OP_RETURN on a random transaction is a bit weird, as you suggest splitting the genesis data over two transactions that may be mined months or years apart. It is quite the opposite of an elegant solution.

Using your suggested alternative, clients will still not be able to find all the metadata, as the initial issue reports, and as a result I don't think it is a very interesting solution. It doesn't actually solve the issue. You still don't have access to basic info like date-of-genesis.

I haven't heard anyone give any reason to dislike the suggestion this issue makes, even though it's been well over a week since I first posted it on Telegram (and 2 years since I first suggested it in general). I'll go ahead and make a pull request while people check whether there are any downsides to it. Thanks!

@A60AB5450353F40E
Contributor

A60AB5450353F40E commented Jul 30, 2022

Your problem statement fails to address why it's an important enough problem to warrant tailoring consensus for it, when the problem is trivially solved by existing infrastructure (querying any block explorer or SPV server, even non-upgraded ones can do the job), and if that is not satisfactory then it CAN be solved by the proposal as it is, using the covenant approach, without having to rely on any infrastructure other than your node.

Querying an SPV server is something already being done by most wallets to get user balances; they can just add the address of the categoryID's output 0 to the list of addresses they watch.

Users who insist on not having to query any server and who may care about this feature of learning everything in a single hop CAN have the feature if the token creator wants them to have it.

Clients will still not be able to find all the metadata information, as the initial issue reports, using your suggested alternative and as a result I don't think it is a very interesting solution. It doesn't actually solve the issue. You still don't have access to basic info like date-of-genesis.

If we place a covenant on the index-0 output that requires the next TX to have the exact token initialization + reveal the P2SH redeem script and whatever metadata somewhere in that same TX, then THAT is effectively THE genesis transaction, and the date of genesis is the date of creation of the output 0 with such a covenant – call it an SLP-genesis covenant. You don't need the next TX, because THIS TX pre-commits to the entire next TX. When the index-0 output is mined, there will be no other way to create the first UTXOs with that categoryID than to respect the covenant that specifies the initial supply etc. The fact that you're holding a descendant token UTXO of the category is proof enough that the post-genesis TX was mined according to the token constructor; you don't have to look it up to be assured of that.

TL;DR: the existing system lets you implement exactly what you're asking for (annotating an output as a category constructor), but using Script instead of consensus: a special P2SH (a script solution possible with the currently proposed consensus spec) vs. a special categoryID=0 (the alternative consensus spec).

I haven't heard anyone give any reason to dislike the suggestion this issue makes, even since its been well over a week since I first posted it on telegram (and 2 years since I first suggested it in general).

The alternative approach would reduce flexibility of the system by limiting the number of new categories to exactly 1 per TX.

@bitjson
Member

bitjson commented Aug 2, 2022

@zander:

Your idea to stick the SLP metadata op-return on a random transaction is a bit weird as you suggest splitting the genesis data over two transactions that may be mined months or years apart.

What data would go in the second transaction?

If I understand the use case, we can put everything in the first transaction and treat the second just like the third, fourth, etc. No one needs to care what happens after that first transaction – if the network approved it, we know it didn't violate any of our expectations.

You still don't have access to basic info like date-of-genesis.

It seems the date of genesis in this case would be the MTP (median time past) of the block in which the first transaction is mined, since that's the transaction that produces the "category genesis output". I'll also suggest this is better behavior security-wise than considering the next transaction to be the date of genesis – systems that judge token legitimacy based on creation date are easily defrauded. E.g. "The first token category to claim the name BigCorpBucks is the legitimate one."

Wallets/services attempting to rely on the date of the second transaction would create a perverse incentive for rent seekers to squat on token metadata, and they'd also present new opportunities for fraud during metadata updates like mergers, acquisitions, rebranding, etc. It wasn't a design goal, but I'd consider it a feature that date of genesis is more naturally the first transaction than the second.

@zander
Contributor Author

zander commented Aug 2, 2022

What data would go in the second transaction?

Your doc describes it as the genesis transaction: the one that spends the first output of a random transaction and then adopts the TXID of the random tx it spends that output from.

If I understand the use case, we can put everything in the first transaction and treat the second just like the third, fourth, etc. No one needs to care what happens after that first transaction – if the network approved it, we know it didn't violate any of our expectations.

This sounds like a new proposal.
In the current doc, the transaction that you inherit the TXID from is the first one, and it has no special behavior or requirements – just that you need to have access to its first output. It can certainly precede the May 2023 activation time of CashTokens.

It seems the date of genesis in this case would be MTP of the block in which the first transaction is mined, since that's the transaction that produces the "category genesis output".

Ok, so you now have two "genesis" concepts: the one that creates an output, and the one that actually mints the tokens, defines whether it's an NFT or an FT, etc. Spreading things out like that does not make it easier! (And why use Median Time Past here?) Why add all that complexity? KISS, please.

I'll also suggest this is a better behavior security-wise than considering the next transaction to be the date of genesis

The difference is being able to get the genesis transaction (as opposed to what you now call the genesis output), or not. Being able to access the metadata of the entire token, yes or no. It's not limited to 'date'; it also covers details like whether there are any NFTs, or how many FTs. Token genesis data is useful, and it obviously doesn't remove security to have access to more details. Uniqueness of the ID is and stays protected by the wider Bitcoin system.

Systems that judge token legitimacy based on creation date are easily defrauded

The basic concept is that token metadata ADDS information to the security assessment of said token. More verified information benefits security. This is such an obvious statement that it's going to be extremely hard to argue against. More detail is always better for people to judge by. The counter-argument is that it's enough for people to recognize a 256-bit-long number, but a long categoryID is absolutely non-trivial to recognize.

It wasn't a design goal, but I'd consider it a feature that date of genesis is more naturally the first transaction than the second.

What you are effectively saying here is that you prefer there to be two genesis points, a 'genesis-output' that has no special meaning nor any indication of there being anything genesis-y to it, and a genesis-transaction that actually initializes the token with details like how many FTs there can be, etc. And that the confusion of this being spread over potentially years of time is actually a benefit. (?)

Any claim that more metadata is a bad thing for users to get hold of essentially goes against years of experience with SLP and other tokens. Piling on and saying it's actually a feature to intentionally muddy the metadata of a token is weird.

Has SLP treated you so badly that you want to make sure there is no migration path from SLP to CashTokens? How do you think SLP people will feel about your stance?

@bitjson
Member

bitjson commented Aug 2, 2022

Has SLP treated you so bad [...] How do you think SLP people will feel about your stance?

Please keep discussion focused on ideas rather than individuals.

Our goal is to produce a token specification that is as minimal as possible without sacrificing utility. If we can demonstrate that a change would enable new use cases or significantly optimize existing use cases, it's easy to justify applying the change. We're getting closer in this issue, but we still haven't produced the rationale required to justify additional complexity in this part of the design.

there is no migration path from SLP to cashtokens

This is not accurate. There are simple, efficient, user-friendly migration paths for all SLP use cases we've seen, including both trusted and trustless migration strategies. (Of course, if those can be further improved, please contribute!)


Thank you for your replies here so far, I appreciate you working on this part of the design. My concern with this specific proposal remains that we've not fully described a use case.

If I understand, the issue is:

The genesis process in the current CHIP only allows easy inspection of the pre-genesis transaction rather than the genesis transaction itself. This offers similar functionality – arbitrary data can be included in the pre-genesis transaction, and pre-genesis transaction covenants can be used to enforce genesis behavior – however in practice, there are use cases for light validation of the genesis transaction itself that cannot reasonably be accomplished using pre-genesis covenants. E.g. a genesis transaction with thousands of "trading card" NFT outputs where each NFT has a unique commitment of the form <card_index> <ipfs_address>. (This example would require higher VM limits to ensure via covenant, and it would require looping operations to be byte-efficient.)
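
For concreteness, a minimal sketch of what such a commitment might look like; the 2-byte little-endian index and raw 34-byte CIDv0 multihash layout are assumptions for illustration, not part of the CHIP.

```typescript
// Hypothetical commitment layout for the "trading card" example:
// <card_index (2 bytes, little-endian)> <ipfs_address (34-byte CIDv0 multihash)>.
// Field widths and ordering are illustrative only.
const buildCardCommitment = (cardIndex: number, ipfsMultihash: Uint8Array): Uint8Array => {
  if (cardIndex < 0 || cardIndex > 0xffff) throw new Error('card index must fit in 2 bytes');
  if (ipfsMultihash.length !== 34) throw new Error('expected a 34-byte CIDv0 multihash');
  const commitment = new Uint8Array(2 + ipfsMultihash.length);
  commitment[0] = cardIndex & 0xff;
  commitment[1] = (cardIndex >> 8) & 0xff;
  commitment.set(ipfsMultihash, 2);
  return commitment; // 36 bytes total, within the 40-byte commitment limit
};
```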

Am I understanding the motivation? Are there additional examples we could review?

@zander
Contributor Author

zander commented Aug 4, 2022

there is no migration path from SLP to cashtokens

This is not accurate. There are simple, efficient, user-friendly migration paths for all SLP use cases we've seen

One of the most used features in SLP is adding an icon and a name.

We have seen that basic features like "is this an NFT-only or a Fungible Token" makes a lot of sense for a wallet to fetch and display.

These are currently impossible without some indexer. You have suggested storing some SLP-style metadata on an output of the pre-genesis transaction (as opposed to the genesis transaction itself), which solves only half the problem, and I want to add that putting token metadata on a transaction which itself creates or spends no tokens is weird.

As such, the question: you say that migration is possible, while the sole reason for this issue is that it currently is not possible to store many of these details on-chain. Please explain how this kind of migration would happen if I can't find any place this information can be stored in CashTokens at all.

How would that migration look? Where does this metadata get stored in your current version of cash-tokens?

@zander
Contributor Author

zander commented Aug 4, 2022

On telegram Jason wrote:

If you're talking about "genesis metadata", would you mind offering an example? I've actually struggled to come up with example use cases that seem plausible

I've given several before, so apologies if I repeat myself.

Your personal name is Jason; it is not unique or secure enough to identify you by, yet in my address book it is still super useful. This is the number one use case: allow the creator of a token to give it a name. It doesn't have to be unique, just like Jason isn't unique, but it's a lot better than a 32-byte hash which nobody can remember.

The genesis transaction also implicitly exposes the total number of fungible tokens allowed by this token.
A wallet that gets this information would be able to simply check if that number is zero and show the token in a different tab or with an icon.
Ditto for non-fungible tokens, also equally available under the token genesis transaction for free.

I can imagine much more useful metadata that can be added. For instance "associated categories" that allows a wallet to treat 3 tokens as one in the user interface.

All of these use cases come down to the same thing: the wallet needs to fetch this data, and the trick of using the category ID to fetch the transaction with the same TXID seems the easiest way to make this work with full nodes.
It does require the user-specified metadata to be on the genesis transaction (and the category ID to be redefined to refer to the genesis transaction) to keep this trustless and cheap.

@bitjson
Member

bitjson commented Aug 4, 2022

How would that migration look?

There's some discussion here:

There are lots of good strategies for migrating existing tokens to v2, including for tokens which don't have a central issuer. Some tokens would probably remain on v1, and other tokens could migrate over slowly or at a precise window.

It's even easy to do migrations trustlessly: an "SLP migration covenant" can be created with a snapshot of SLP ownership at some height (easily verifiable by everyone). Over the following months/years, holders can prove they were a holder in the snapshot and be allowed to withdraw migrated tokens. (Can even be done by SPV wallets.) I imagine a standard would also be developed to make migration easier (specifying clear rules around snapshot timing and public review).

And of course, centralized issuers could always maintain support for existing tokens while issuing new tokens with later standards.

There are a lot of options – if you're interested in putting something together, it would be great to get started on a "v2" SLP spec that standardizes several migration options + new SLP token creation/handling. I'd be happy to help!

Where does this metadata get stored in your current version of cash-tokens?

In all of the above examples, the wallet already requires real-world verification of the token category. E.g. there could be thousands of scammer-issued "BigCorpBucks" token categories on-chain, and on-chain metadata doesn't help wallets/users distinguish between them. A proper verification strategy like Token Lists or Bitauth Key-Value Protocol is still needed.

So there's little value in requiring wallets to make another on-chain lookup – they might as well have that info included directly in the verified token list, Bitauth identity metadata publication, verified SLP migration snapshot, etc. (With current network limits, they'd only be looking up <220 bytes of OP_RETURN data, and that data may be misleading and needs to be separately verified anyways.)


A wallet that gets this information would be able to simply check if that number is zero and show the token in a different tab or with an icon. Ditto for non-fungible tokens, also equally available under the token genesis transaction for free.

The genesis transaction can't really give us much information on the current state of a category – fungible tokens can be burned in later transactions, and new non-fungible tokens can be created.

Also, wouldn't a wallet prefer to tailor the UI based on what a user holds, rather than on the token category? If the "trading card" NFT category described above also issued a bunch of fungible tokens, but the user doesn't hold any, the user's trading cards should probably still be displayed as if it's an NFT-only category? E.g. most wallets that support multiple coins don't display a giant list of the 100s of coins where the user has a balance of 0; instead they just show the coins where the user has a balance.

I can imagine much more useful metadata that can be added. For instance "associated categories" that allows a wallet to treat 3 tokens as one in the user interface.

This would be great to support! (That example is also mentioned here) But if you accomplish it with genesis data, how do you support migrations where additional tokens need to be issued? E.g. a stable coin company didn't plan far enough ahead and needs to add a new category – do they need to have every user trade in their tokens and start over? Wouldn't it be more useful if our solution gave them a non-disruptive option? And wouldn't it save every wallet a lookup to simply include that list of category IDs in the token's validation information? (From the token list, Bitauth identity, SLP migration, etc.)


So examples we've collected to review so far:

  1. Validating genesis transaction of thousands of collectable NFTs
  2. Adding a permanent icon and name for the token
  3. Checking the maximum possible supply of a token category (excluding later burns)

Are there other examples we should look at?

@zander
Contributor Author

zander commented Aug 4, 2022

The genesis transaction can't really give us much information on the current state of a category

And the genesis transaction can in actual fact give a lot of useful basic info about the type of token. Your fact doesn't counter my fact, does it?

This would be great to support! But if you accomplish it with genesis data, how do you support migrations where additional tokens need to be issued?

Any solution that stores data on the blockchain has the downside of being immutable. This is a huge benefit to most use cases, though. So your corner case doesn't seem to me to be a show-stopper, as any solution that takes it into account will by necessity be much more complex and expensive.
As they say: perfect is the enemy of good enough.

@zander
Contributor Author

zander commented Aug 4, 2022

Thanks everyone for reading,

it's clear that there is not going to be a happy conclusion here, so I'm closing.

@zander closed this as completed Aug 4, 2022
@bitjson
Member

bitjson commented Aug 4, 2022

Thanks again for sticking with this @zander! This is an important topic, and we either need to 1) change the CHIP or 2) have a good rationale for not. Just going to leave this open until that's done.

@bitjson reopened this Aug 4, 2022
@A60AB5450353F40E
Contributor

Here's a proof of concept for an "SLPv1 constructor" covenant: https://alpha.ide.bitauth.com/import-gist/c70e1dc5bc2ead37a90eb0ef7fb553ec

To create a token simply make a TX that spends 1 input and has 2 outputs:

  • 0- sent to p2sh 0xa91431ad13c79fbc8dddd80612c2075d2ddcf1fe42c987
  • 1- OP_RETURN according to specification, it will serialize:
    • "SLPv1" magic bytes
    • First owner (locking script of the output that will take the 1st output of the category)
    • Initial supply (fungible token amount of the 1st output of the category)
    • Metadata: decimals, ticker, name, document url, document hash

The next TX spends output 0, and its input unlocking script contains the whole "pre-genesis" TX. The covenant can only be spent if the provided TX validates against the parent's TXID and if the outputs being created are as specified in the parent TX's OP_RETURN; it also verifies that the metadata is properly formatted.

Therefore, the spender of the "pre-genesis" covenant ("constructor") can only initialize the category according to the definition in the OP_RETURN. It doesn't even matter who posts the genesis TX (anyone can do it), since the covenant enforces that the 1st owner is the one specified in the parent's OP_RETURN.
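
A rough sketch of assembling the OP_RETURN described above; the push-based layout and the 8-byte little-endian supply encoding are assumptions for illustration – the PoC contract linked above defines the real encoding.

```typescript
// Assemble the constructor OP_RETURN as a sequence of data pushes:
// "SLPv1" magic, first-owner locking script, 8-byte initial supply, then metadata.
const push = (data: Uint8Array): Uint8Array => {
  if (data.length > 75) throw new Error('OP_PUSHDATA encoding not handled in this sketch');
  return Uint8Array.from([data.length, ...data]);
};

const utf8 = (text: string): Uint8Array => new TextEncoder().encode(text);

const uint64le = (value: bigint): Uint8Array => {
  const bytes = new Uint8Array(8);
  new DataView(bytes.buffer).setBigUint64(0, value, true);
  return bytes;
};

const buildConstructorOpReturn = (params: {
  firstOwnerLockingBytecode: Uint8Array;
  initialSupply: bigint;
  decimals: number;
  ticker: string;
  name: string;
  documentUrl: string;
  documentHash: Uint8Array;
}): Uint8Array =>
  Uint8Array.from([
    0x6a, // OP_RETURN
    ...push(utf8('SLPv1')),
    ...push(params.firstOwnerLockingBytecode),
    ...push(uint64le(params.initialSupply)),
    ...push(Uint8Array.of(params.decimals)),
    ...push(utf8(params.ticker)),
    ...push(utf8(params.name)),
    ...push(utf8(params.documentUrl)),
    ...push(params.documentHash),
  ]);
```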

So... wallets can automatically do both create&spend and the creator will just see his "baton" p2pkh output show up in the wallet.

Everyone else who receives a token of the category can look up the txid = categoryID to find the pre-genesis tx, verify the standard P2SH address (it will be the same for ALL SLPv1 tokens) and read the token info from the OP_RETURN, knowing it's valid without needing the next TX.

This is also convenient for automatic list making of ALL compliant tokens - just monitor the p2sh address and you'll know whenever a new standard token has been created.

Note: I had to do some hacks to make OP_VERIFY checks pass in Bitauth IDE, so this can't actually be used yet; I'll build it with proper bytecodes when we get testnet4 :)

@A60AB5450353F40E
Contributor

A60AB5450353F40E commented Sep 8, 2022

Proof of concept for a "one token standard" baton covenant with updateable metadata and lockable metadata fields: https://alpha.ide.bitauth.com/import-gist/0f0d440ba4101ed9ab4a0a364b2f8704

The latest metadata hash is stored in the NFT commitment field of the baton UTXO, so it can be updated while the P2SH address remains constant.
The constructor is designed to validate baton initialization in the genesis TX and to salt the baton contract with the token's categoryID, so it will be a per-category address and wallets can subscribe only to metadata for selected tokens.

A token list could be made by subscribing to the "constructor" address and collecting metadata of all tokens that are standardized this way.
From that TX, a standard-aware wallet can generate the tracker address by appending the TXID to the invariant part of the baton contract, then add the tracker address to the token.

Pre-genesis TX, the TX that creates the "constructor" output, and whose TXID will set the categoryID in the next TX: https://testnet4.imaginary.cash/tx/5e2d8ac65a091836a6f13ad115130c9b363b85c67440246851b56fdb8effa507

Genesis TX, the TX that decides initial supply, first owner, and initializes the metadata: https://testnet4.imaginary.cash/tx/1e556738029bb7bc945c45efa4d959e9cdfa24ec2f755309c8fe83f1cd138a8e

First baton spend, this updates the metadata (I changed the decimals and one text field) and also releases some tokens (to my p2pkh address, 3rd output): https://testnet4.imaginary.cash/tx/a64e2737553ad0f9762030b1d1fff28af607af90330f30388aa3f57bd0f3321b

btw the "owner" could be a recursive covenant requiring a baton NFT of some other category, making this category owned by some parent category

Procedure for a light wallet interested in a single category's metadata:

  1. Jump to TXID=categoryID transaction
  2. Verify constructor address, if match continue, else flag as "unknown token" or check against some other known standard constructor
  3. Generate the baton address, which is category-specific, i.e. lockingScript = 0xa914 + HASH160(baton_invariant + txid + 0x0187) + 0x87 (see the sketch after this list).
  4. Ask an Electrum server for that address's UTXOs; if multiple are returned, pick the one that has the NFT – that will be the authentic and up-to-date metadata.
  5. Optional: ask an Electrum server for that address's history and remove any fakes that don't have the NFT (remember, anyone can dust any address); the result of the query will be the metadata revision history.
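
A minimal sketch of the address derivation in step 3, using Node's crypto for the HASH160 (sha256 then ripemd160; substitute another RIPEMD-160 implementation if your OpenSSL build lacks it). `batonInvariant` stands for the fixed prefix bytecode of the PoC baton contract, which is not reproduced here.

```typescript
import { createHash } from 'node:crypto';

// HASH160 = RIPEMD-160(SHA-256(data)), as used for 20-byte P2SH.
const hash160 = (data: Uint8Array): Uint8Array =>
  createHash('ripemd160').update(createHash('sha256').update(data).digest()).digest();

// Derive the category-specific baton locking script by salting the invariant
// part of the baton contract with the category's TXID, then wrapping in P2SH:
// OP_HASH160 <20-byte hash> OP_EQUAL (0xa9 0x14 ... 0x87).
const batonLockingBytecode = (batonInvariant: Uint8Array, categoryTxid: Uint8Array): Uint8Array => {
  const trailingBytes = Uint8Array.of(0x01, 0x87); // the literal 0x0187 from the formula in step 3
  const redeemScript = Uint8Array.from([...batonInvariant, ...categoryTxid, ...trailingBytes]);
  return Uint8Array.from([0xa9, 0x14, ...hash160(redeemScript), 0x87]);
};
```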

Procedure for some token database builder:

  1. Create a constructor address filter (fixed to 1 element)
  2. Create another, empty, baton addresses filter
  3. Go to CT activation block
  4. Scan the block for any "constructor" outputs
    • on hit, construct a baton address for the TXID and add it to the filter
  5. Rescan the block using the filter and add data to some tokenDB
  6. Repeat 4 and 5 until the blockchain tip is reached, then continue monitoring new blocks using the 2 filters (constructor and baton set)

The tokenDB could keep the full history for each baton, or just the tips (the set of latest baton UTXO TXes for all standard tokens) plus their genesis TXes. That's all that's needed to validate the current metadata: the genesis TX + the current UTXO TX. The chain of correct updates respecting the locks is proven by induction.

Metadata blob format (NFT commitment stores a hash of it):

  • Supply lock - 1 byte, 0x00 (closed) or 0x01 (open) (can't be opened once closed, means no more taking FTs from the pool or putting them back in)
  • Genesis supply - 8 bytes (can't be changed once set in genesis TX, updates just preserve the information)
  • Genesis supply policy - variable length byte array (len, array), this will be set as locking script of the 1st owner (can't be changed once set in genesis TX, updates just preserve the information)
  • Decimals - 1 byte
  • Ticker - variable length byte array (len, array)
  • Name - variable length byte array (len, array)
  • Document hash - variable length byte array (len, array)
  • Document - variable length byte array (len, array)
  • Back-validation API - variable length byte array (len, array)
  • Metadata locks - 1 byte, one flag for each of the 6 fields - 0x00 (all closed) up to 0x3f (all open) (can't be opened once closed, if a field is locked, TX wanting to change it will fail Script validation)

Note: the supply lock will prevent the spender from taking out any more FTs from the baton UTXO. The "baton" uses both the NFT and FT CashToken primitives, and the FT amount stores the reserve supply. Once locked, it will just preserve the information and let people calculate the emitted supply from "circulating supply = (genesis supply) - (FT amount in the baton NFT)".
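
A sketch of serializing the blob described above, assuming a single length-prefix byte per variable-length field and a little-endian 8-byte genesis supply; neither detail is pinned down in this comment, so treat both as assumptions.

```typescript
// Serialize the metadata blob in the field order listed above; the baton NFT
// commitment would then store a hash of the resulting bytes.
interface MetadataBlob {
  supplyLock: 0x00 | 0x01;
  genesisSupply: bigint;           // 8 bytes, assumed little-endian
  genesisSupplyPolicy: Uint8Array; // locking script of the 1st owner
  decimals: number;
  ticker: Uint8Array;
  name: Uint8Array;
  documentHash: Uint8Array;
  document: Uint8Array;
  backValidationApi: Uint8Array;
  metadataLocks: number;           // 0x00..0x3f, one flag per lockable field
}

const lengthPrefixed = (data: Uint8Array): number[] => [data.length, ...data]; // assumes length <= 255

const serializeMetadataBlob = (blob: MetadataBlob): Uint8Array => {
  const supply = new Uint8Array(8);
  new DataView(supply.buffer).setBigUint64(0, blob.genesisSupply, true);
  return Uint8Array.from([
    blob.supplyLock,
    ...supply,
    ...lengthPrefixed(blob.genesisSupplyPolicy),
    blob.decimals,
    ...lengthPrefixed(blob.ticker),
    ...lengthPrefixed(blob.name),
    ...lengthPrefixed(blob.documentHash),
    ...lengthPrefixed(blob.document),
    ...lengthPrefixed(blob.backValidationApi),
    blob.metadataLocks,
  ]);
};
```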

@bitjson
Member

bitjson commented Sep 8, 2022

I'm still working on an "SLP2023" proposal that I hope will offer a clearer solution to this issue (details in CashToken Devs). Hoping to have time to finish that by the end of September.

Until then, I just wanted to mention again: any system in which wallets "trust" data committed in genesis transactions will be vulnerable to impersonation. E.g. an attacker can create several "token genesis impersonations" for every token genesis transaction. The impersonations could share any/all of the features of the "real" token genesis transactions: same "genesis data", same structure, and mined in the same block (the only difference would be that the attacker controls them). That means an attacker can make "genesis metadata" protocols practically useless for all BCH users with only a few dollars a day in transaction fees. (See further discussion in my comments above.)

There are plenty of strategies for broadcasting authenticated data about a token category on chain (NFT commitments, OP_RETURNs, dropped input bytecode pushes, etc.) – I have not yet found a use case where looking up fixed data blobs in genesis transactions seems better than one of those other strategies.

@A60AB5450353F40E
Contributor

A60AB5450353F40E commented Sep 8, 2022

E.g. an attacker can create several "token genesis impersonations" for every token genesis transaction.

Yes, that's why I had the idea of a back-validation API. You write some URL into the blob, and after wallets read and parse the metadata they go to that URL and submit the categoryID; the site returns true/false depending on whether the category belongs to it, confirming the association.
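
A sketch of that check; the GET query format and JSON boolean response are assumptions – the comment above only specifies "submit the categoryID, get back true/false".

```typescript
// Submit the category ID to the back-validation URL from the metadata blob and
// expect a boolean answer confirming (or denying) the association.
const backValidate = async (backValidationUrl: string, categoryId: string): Promise<boolean> => {
  const response = await fetch(`${backValidationUrl}?category=${encodeURIComponent(categoryId)}`);
  if (!response.ok) return false; // treat unreachable/failing endpoints as unconfirmed
  const result = await response.json();
  return result === true;
};
```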

This latest contract I posted is updateable metadata, where the issuer commits to respecting the standard format and updating rules by instantiating the correct "baton" contract at genesis, which will both carry the reserve FT supply and track metadata updates. The hash of the old/new blob lives in the "baton" NFT's prevout/output commitment, and updating is done by pushing both the old and new blobs as input script, which then verifies that it's updated correctly – respecting the format and any locks self-imposed in past revisions. The constructor and baton contracts have been designed so they can leverage existing Electrum server infrastructure, and rely on address queries to get the data.

@bitjson
Member

bitjson commented Sep 30, 2022

Progress update: I've finally been able to close the other remaining issues in this repo, so I'll be focusing on this issue 100% over the next few weeks.

@bitjson
Member

bitjson commented Sep 30, 2022

I created a topic for this issue on the Bitcoin Cash Research forum too:
Higher-level token standards using CashTokens

@bitjson
Member

bitjson commented Oct 31, 2022

Hi all! I just published a draft proposal for a new application-layer standard:

Bitcoin Cash Metadata Registries (BCMRs) share metadata between Bitcoin Cash wallets, allowing user-recognizable names, descriptions, icons, ticker symbols, etc. to be associated with on-chain artifacts like identities, tokens, and contract systems.

Registries can be found via DNS and updated using on-chain transactions, offering strong censorship resistance and the same security available to funds and tokens: multisignature wallets, offline signers, time-delayed vaults, bounties/honeypots, and more.

On-chain identities are represented by chains of transactions, so their history and broadcasts can be verified by light clients with tiny proofs (a few KBs). Think software update hashes, warrant canaries, tamper-evident logs, reusable payment addresses, .onion addresses, etc.

I'm also going to host a livestreamed tech talk + Q&A on Bitcoin Cash Metadata Registries on November 1, 2022 at 13 https://www.youtube.com/watch?v=gPOCj0KulNg

@bitjson
Member

bitjson commented Nov 2, 2022

Video is now available!

video

Since we now have a fairly complete proposal for the application-layer work to be done, I think we can close this thread and recommend that further discussion move to higher-level standards. @zander and I are starting to break out individual issues here; new contributors welcome! https://github.com/bitjson/chip-bcmr/issues

I'm going to close this for now, but if anyone knows of other places where related, higher-level token standards are being discussed, please add a link below.
