RFC: ESIP-8 - Single Ethscription Creation through Blob Transactions #16
Nice! I agree that we want a "normal" calldata ethscription that points to the blob. Such a "pointer" ethscription could even be reused to point to non-blob things! Here's my idea:
Or with the JSON splayed out:

```json
{
  "location": "blobs://self",
  "type": "image/png",
  "length": 1234,
  "encoding": "...optional eg deflate...",
  "sha256": "abcde",
  "extra-metadata": {
    "description": "optional metadata"
  }
}
```
Indexers should have a separate field for the content the ethscription is pointing to and the data URI content of the ethscription itself. The rule that all ethscription content must be a data URI persists. Likewise, uniqueness / ESIP-6 are applied to the data URI as normal. The presence of the content sha will ensure correct behavior. Thoughts?
Questions:
- How long must indexers store the blob data? 18 days?
- Can you reference blobs in other transactions from your transaction? Not sure why you'd want to do that exactly; maybe to add commentary.
- Should you be able to reference other ethscription content in your ethscription? Can you reference something that references something else? I'd prefer not to open a whole recursion can of worms as part of the blob change.
From the end to the top.
Yes, I think so. On the JS side, we create the blob with toBlobs, which returns the hex of the data, padded; then we can use blobsToCommitments, which returns the KZG commitment, and from there we can generate the blob hash. All of that happens before sending the transaction, so yeah.
I like
Not sure. The thing is that there are three components: the indexer, the DB, and the API. In theory, if they are separate, the indexer doesn't need to know and store the actual data; it can ask the DB through the API. If they are not, then yes, the indexer will/should store it permanently.
Don't think so.
Where from? The metadata, or the blob's content? Either way, I don't think we need any special/internal recursion mechanism. Recursion is most easily done with standardized API endpoints, and through HTML+JS, so you don't need anything special.
I agree that we need metadata in the calldata. BUT, I don't agree with most of the points there. It's a waste of money and of precious, more expensive calldata space, because most of these things can be inferred. It should be as simple as possible, and light enough that an indexer can choose to implement and support it.
What's the point of that? It will always be
We don't. Let's be strict and have 90% of cases handled; the rest can be added through other ESIPs if such a need ever comes up at all (e.g. the docx example). It's also safer to support only a limited set of content types.
We can do that too. It's not that hard. For most types you can see where the content ends. I did it; it may not be guaranteed, but it should work in most cases, especially if we support only a limited set of content types. That's one of the two things I can agree to include.
Useless. Compression support for different things should be accepted through other ESIPs anyway. If it's not a "required" field, okay. We can actually move it to the
Agree. When people try to create an ethscription from some content, it allows checking whether that content has ever existed; if it has, the creation won't be a valid one, despite the fact that the original data no longer exists. The thing is, they will always exist. That's the point. So there's no such possibility, and we may not even need this sha as a requirement.
Why? You're decoding the whole thing into a "proper content_uri" anyway, so you'll have the actual content; compute the sha, check it against the DB to see whether it exists, and reject the creation if it already does. In general, it turns out we don't need most of this stuff. I can agree for
It will be for the ethscription data, but you will need to check whether the blob's content exists or not.
What about this: the calldata won't be "ethscription", but
It's optimal and it's enough. Also.. killing the
Glyph has a similar calldata format. This lets me decrease the cost of creating and transferring tokens 2-10x (since it's primarily calldata-based tokens) compared to ethscriptions and even Facet. Calldata is so cool... it's mind-blowing. A lot of people don't realize how powerful it is, because in reality the whole network is driven by this single thing LOL. Edit: in the case of more than one blob, the format will be mostly the same.
We should permanently store blob data in the indexer, right? User data is precious.
Ethscriptions.com surely will, but it will be indexer by indexer. I think of it as "enhanced IPFS" in a way.
Let the hype begin.
I'll update it later today. We talked a lot today in #general, in #technical, and in DMs, but... yeah. Gotta get some sleep and formulate it.
Premise
Make it possible to create ethscriptions through Blob Transactions.
If multiple blobs are found on a single EIP-4844 transaction, they are combined to form a single ethscription. The current max, for some reason, is around 240-250kb (roughly 2 blobs). This might be because others are also filling the block... which, again for some reason, has a max of 6 blobs per block, not 16 as supposed (probably temporary). The question is how to know how many blobs there are going to be in a block before submitting the creation transaction, LOL... so, anyway.
About the default uniqueness requirement
The blob hashes already support a uniqueness check by default: if the content is the same, the blob_versioned_hash is the same too. This "blob hash", for short, is calculated by hashing the kzg_commitment with SHA256, removing the first 2 hex characters, and prefixing with 0x01. Voila, you have the blob hash. Now, how do you get commitments? Easy: you can derive them from the data/content/hex submitted by the user's transaction. Anyway, I'll post some more later.

moar on blobs
The splitting into blobs is done automatically. E.g. if the file is 200kb, it will make 2 blobs of 128kb; the last one will have empty bytes at the end, and you have to be careful to strip those in order to recover a correct and proper content_uri. Wasted a few hours on that yesterday :D I'll share everything. Or it's somewhere in the Discord.

more later
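The padding pitfall described above can be sketched like this (a simplified illustration: real EIP-4844 blobs also impose 32-byte field-element constraints, which helpers like viem's toBlobs handle; this only shows the zero-padding round trip):

```javascript
const BLOB_SIZE = 131072; // 128 KiB per blob

// Split raw bytes into fixed-size, zero-padded blobs (simplified: ignores
// the field-element encoding that real 4844 blobs require).
function splitIntoBlobs(data) {
  const blobs = [];
  for (let i = 0; i < data.length; i += BLOB_SIZE) {
    const blob = Buffer.alloc(BLOB_SIZE); // zero-filled by default
    data.copy(blob, 0, i, Math.min(i + BLOB_SIZE, data.length));
    blobs.push(blob);
  }
  return blobs;
}

// Rejoin and strip the trailing zero padding. Caveat: content that
// legitimately ends in 0x00 bytes would be truncated too, which is why
// robust encodings carry an explicit length or terminator instead.
function joinBlobs(blobs) {
  const joined = Buffer.concat(blobs);
  let end = joined.length;
  while (end > 0 && joined[end - 1] === 0) end--;
  return joined.subarray(0, end);
}

// A 200 KB payload becomes 2 blobs and round-trips cleanly.
const data = Buffer.alloc(200 * 1024, 0xab);
const blobs = splitIntoBlobs(data);
console.log(blobs.length); // 2
console.log(joinBlobs(blobs).equals(data)); // true
```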
these are some "Blobbed Ethscriptions" I tracked while I was talking the whole day ;d
Also made a basic viewer https://blobs.wgw.lol