feat: w3name #648
Conversation
Nice, a lot of great things here! 🚀 ⚡
throw new DBError(error)
}

return data.length ? data[0].record : undefined
Why not use `.single()` in the supabase client request?
I believe it throws if not found (and I couldn't be bothered to figure out the particular error code it uses) and I want it to return `undefined` if not found.
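To make the trade-off concrete, here is a minimal sketch of the pattern in question (the table and column names are illustrative, and `DBError` is the wrapper seen in the diff above):

```js
// Sketch: look up a name record and return undefined when no row matches,
// instead of using .single() (which errors when zero rows come back).
async function resolveNameRecord (db, key) {
  const { data, error } = await db
    .from('name')          // illustrative table name
    .select('record')
    .eq('key', key)

  if (error) {
    throw new DBError(error) // wrap the raw DB error, as in the diff above
  }

  // No rows => not found => undefined, mirroring the Map/Set idiom.
  return data.length ? data[0].record : undefined
}
```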
Yeah, it will return an error and throw with that error on line 750. If this is not what we want, we should rethink some other methods like `getKey`, `getUpload`.
In general I'm anti throw when things are not found at the data storage layer. In JS it's idiomatic for things to return `undefined`, like with `Map`, `Set`, `Array` etc., and also with regular `Object`s.
I also wanted to return a better error message than what we'd get from the DB.
Also, we should do this for the other methods, because I just saw this in Sentry:
Yes, that's the `getKey` I was mentioning. Can you open an issue for us to change it? I also prefer to return `undefined` here.
Hey, I agree. While working on the pinning APIs I encountered the same problem.
I think in general we should look into abstracting the logic that interprets different DB errors into smaller, more defined ones (i.e. not found, foreign key violation, connection error, etc.), because, as you mentioned, at the moment everything is caught, and at the API level we would want to send different errors based on the actual returns.
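As a rough illustration of that kind of abstraction (the class names and code checks below are assumptions, not anything already in the codebase):

```js
// Sketch: map raw DB errors onto a small set of well-defined error types so the
// API layer can decide what response to send, instead of inspecting raw
// Postgres errors in every handler.
class ForeignKeyViolationError extends Error {}
class UniqueViolationError extends Error {}
class DBError extends Error {}

function toAppError (error) {
  switch (error.code) {
    case '23503': // standard SQLSTATE: foreign_key_violation
      return new ForeignKeyViolationError(error.message)
    case '23505': // standard SQLSTATE: unique_violation
      return new UniqueViolationError(error.message)
    default: // anything else stays a generic DB error
      return new DBError(error.message)
  }
}
```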
-- https://github.com/ipfs/go-ipns/blob/a8379aa25ef287ffab7c5b89bfaad622da7e976d/ipns.go#L325
((data ->> 'has_v2_sig')::BOOLEAN = TRUE AND name.has_v2_sig = FALSE) OR
((data ->> 'has_v2_sig')::BOOLEAN = name.has_v2_sig AND (data ->> 'seqno')::BIGINT > name.seqno) OR
((data ->> 'has_v2_sig')::BOOLEAN = name.has_v2_sig AND (data ->> 'seqno')::BIGINT = name.seqno AND (data ->> 'validity')::BIGINT > name.validity) OR
No new records should be created with the same seqno. Only if a bug exists, right?
Not sure I follow. In https://github.com/ipfs/go-ipns/blob/a8379aa25ef287ffab7c5b89bfaad622da7e976d/ipns.go#L325 a new record will be used if the seqnos are the same and the sigs are the same but the `validity` of the new record is greater than the existing one.
That is odd. No new records should be created with the same seq number, as far as I understand.
I would like to know why this is a possibility, but I guess we can follow what Go does for now.
You implemented the same here: https://github.com/ipfs/js-ipns/blob/master/src/index.js#L481-L503
I think it's OK to be stricter here and disallow overwriting the same revision even if IPNS allows you to do that, unless there is a good reason to do so.
From the discussion above it appears we do not have a clear understanding of the motivation here, so I suggest we disallow it, because the client intentionally does the increment.
Or identify a clear reason why we support this and:
- Add a comment stating that reason.
- Consider supporting this from the client.
Otherwise I fear we are misleading users by providing a client API that gives the impression of needing to increment but can then catch them by surprise with a forced update.
P.S.: If I had to guess, overwrite is inspired by `git push --force`.
If we're accepting IPNS records, then our rules to update our cache should be the same as in IPFS because:
- There's nothing stopping people from crafting their own IPNS records and using validity time, not sequence number, to manage updates. So our server code needs to be prepared for that... or we need to lock the server down further to only accept records that increment the sequence number, but...
- My prediction is that these records might not always come through the Web3.Storage API. If we start accepting record updates over pubsub, then I want the behaviour to be the same as in IPFS.
I want Web3.Storage to have the same behaviour as IPFS. I don't want to find a bug in the far future where we're not updating our cache because we decided not to do the same thing as IPFS. One of the biggest parts of this is that it's compatible with IPFS; when we deviate from that it starts to look like a second-class citizen. I'm not trying to build a better IPNS, I'm just trying to make a bridge to what's already there that people can use.
Those are fair points! Yet, I would still invite designing a practical and intuitive system that can help us improve the impractical one, instead of constraining the design by it.
I consider IPNS impractical because, as far as I know, no one has adopted it due to its impracticality (or at least that is my impression).
It could be (and is likely) that there are good reasons for IPNS to do things this way and its impracticality has other causes. If so, we should probably better reflect that in the client API as well. In my experience it is worst when an API imposes a set of invariants, but then you discover that somehow those invariants weren't upheld (reminds me of the Fauna unique index).
((data ->> 'has_v2_sig')::BOOLEAN = TRUE AND name.has_v2_sig = FALSE) OR
((data ->> 'has_v2_sig')::BOOLEAN = name.has_v2_sig AND (data ->> 'seqno')::BIGINT > name.seqno) OR
((data ->> 'has_v2_sig')::BOOLEAN = name.has_v2_sig AND (data ->> 'seqno')::BIGINT = name.seqno AND (data ->> 'validity')::BIGINT > name.validity) OR
((data ->> 'has_v2_sig')::BOOLEAN = name.has_v2_sig AND (data ->> 'seqno')::BIGINT = name.seqno AND (data ->> 'validity')::BIGINT = name.validity AND DECODE(data ->> 'record', 'base64') > DECODE(name.record, 'base64'));
I could not understand why this is here. Can you let me know the reason for it?
This function determines if a new record is newer than another: https://github.com/ipfs/go-ipns/blob/a8379aa25ef287ffab7c5b89bfaad622da7e976d/ipns.go#L325
The code here attempts to replicate that in SQL.
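For readers of the SQL, the precedence it encodes is roughly this (a JavaScript paraphrase of the linked Go comparison, simplified, not the actual implementation):

```js
// Returns true if candidate record `a` should replace existing record `b`:
//   1. a record with a V2 signature beats one without
//   2. otherwise the higher seqno wins
//   3. equal seqno: the later validity (EOL) wins
//   4. all equal: fall back to comparing the raw record bytes
function isNewer (a, b) {
  if (a.hasV2Sig !== b.hasV2Sig) return a.hasV2Sig
  if (a.seqno !== b.seqno) return a.seqno > b.seqno
  if (a.validity !== b.validity) return a.validity > b.validity
  return Buffer.compare(a.recordBytes, b.recordBytes) > 0
}
```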
I mean this bit: `DECODE(data ->> 'record', 'base64') > DECODE(name.record, 'base64')`
I see, it is not in that code, so perhaps it's the responsibility of the caller. I don't know what the reason for that is; we don't do this at all in JS AFAIK, nor is it in the spec.
Yes, it's the responsibility of the caller: https://github.com/ipfs/go-ipns/blob/5976a80227cc5199414119585cca347bc814647a/record.go#L113-L118
I couldn't find it in JS either; maybe it's an oversight, or it's new code on the Go side?
I think it would be a good idea to figure out the reason and add a code comment stating it. Even better if the same could be done in the Go code and spec, otherwise over time intentions can get muddy as code bases change.
@@ -37,6 +37,40 @@ for (const file of files) {
}
```

### Mutability

Mutability in Web3.Storage is maintained through IPNS records.
On line 14 we use `web3.storage`; we should be consistent and use the same everywhere.
I can lowercase it here for consistency, but I think we decided that when written out it should be Web3.Storage.
Mind checking and changing the other ones in this file if we want to go with the Web3.Storage naming?
Mutability in Web3.Storage is maintained through IPNS records.

⚠️ Name records are not _yet_ published to or updated from the IPFS network. Working with name records simply updates the Web3.Storage cache of data.
Worth pointing this to an issue where people can subscribe to developments.
console.log('✅ Done\n')

console.log('⏳ Resolving current value...')
const { value: curValue, record: curRecord } = await Name.resolve(client, id)
I like having `Name` exported as a util, but I would expect `resolve` and `publish` to be part of the main client API. What are your thoughts here? I have mixed feelings about this.
- (+) It is good that it is isolated, because people will need to use `Name.keypair()` and `Name.create()` to publish.
- (+) We can probably isolate it in the context of a shared dotstorage lib, but we should have a design session on how we want to structure that lib.
- (-) Passing the client into `Name.resolve()` and `Name.publish()` is strange IMHO.
- (-) Folks may resolve by receiving an ID from a third party, and would have to import `Name` for it.
I'd be happy to add `publish` and `resolve` to the main client as sugar for calling `Name.publish` and `Name.resolve`.
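Hypothetically, the sugar could be as thin as this (method names and placement are a guess, not a commitment):

```js
// Hypothetical wrapper methods on the main client, delegating to the Name module.
import * as Name from './name.js'

class Web3Storage {
  // ...existing client methods elided...

  publishName (record) {
    return Name.publish(this, record) // pass the client through, as in the example above
  }

  resolveName (id) {
    return Name.resolve(this, id)
  }
}
```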
I think it would really help to refine the terms. Here is what would make most sense to me:
// Creates new writable name.
declare function create():Promise<WritableName>
// Creates writable name from keypair.
declare function from(keypair:KeyPair):Promise<WritableName>
// Parses string name to readable name
declare function parse(name:string):Name
export interface Name {
// binary representation of the identifier (hash digest of the public key)
bytes: Uint8Array
// returns string representation of the name
toString(base?:BaseEncoder):string
}
// Writable name is just a name with a signing key attached so it could be
// used for publishing
export interface WritableName extends Name {
key: SigningKey
}
export interface SigningKey {
sign(input:Uint8Array):Promise<Uint8Array>
}
// Creates an initial revision for the given name. Note this does not require a
// signing key because it is ok to create a revision without it; publishing it
// would require the key however.
declare function v0(name:Name, value:string):Revision
// Creates an incremented revision
declare function increment(revision:Revision, value:string):Revision
interface Revision {
name: Name
sequence: number
value: string
}
// Publishes given revision to a service. Requires signing key
declare function publish(service:Service, revision:Revision, key:SigningKey):Promise<Revision>
// Resolves last published revision from the service
declare function resolve(service:Service, name:Name):Promise<Revision>
I've implemented this as designed above. I think it's a great idea to separate the name from the record. I've also separated this from the main bundle, so you use it like `import * as Name from 'web3.storage/name'`. This allows us to publish it as is and try it out without affecting the main library. Once we're happy with it and we've removed `libp2p-crypto` to minimise the bundle size, we can add sugar to the main library.
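For anyone following along, end-to-end usage would look roughly like this (a sketch based on the proposed declarations above; `service` stands for the configured service/client and the `/ipfs/...` values are placeholders):

```js
import * as Name from 'web3.storage/name'

// Create a new writable name (signing key generated under the hood).
const name = await Name.create()
console.log('Name:', name.toString())

// Create and publish the initial revision.
const initial = Name.v0(name, '/ipfs/<some-cid>')
await Name.publish(service, initial, name.key)

// Later (possibly in another session): resolve the latest revision and
// publish an incremented one pointing at new content.
const current = await Name.resolve(service, name)
const next = Name.increment(current, '/ipfs/<another-cid>')
await Name.publish(service, next, name.key)
```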
Co-authored-by: Vasco Santos <santos.vasco10@gmail.com>
Force-pushed from c50d856 to 11690bd.
I made a bunch of suggestions; take them or leave them, it's up to you. I do think `Name` could and probably should be a standalone library pulled in here. That would encourage a "naming service" similar to the "pinning service", as opposed to making it *.storage specific.
@@ -43,6 +44,9 @@ router.put('/car/:cid', mode['📝'](auth['🔒'](carPut)))
router.post('/upload', mode['📝'](auth['🔒'](uploadPost)))
router.get('/user/uploads', mode['👀'](auth['🔒'](userUploadsGet)))

router.get('/name/:key', mode['👀'](auth['🤲'](nameGet)))
❓ Do we need auth here? Seems like this feature would be useful in so many contexts without it.
It's not strictly needed; in the spec we're creating, it'll be optional. However, using the auth key allows us to rate limit and optionally associate records with users (which we don't do in this MVP) for metrics.
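For illustration, a read with a token might look like this (the base URL, token variable, and response handling are placeholders, not the documented API):

```js
// Illustrative only: fetching the current record for a key over HTTP,
// following the /name/:key route registered above.
const res = await fetch(`https://api.example-web3.storage/name/${key}`, {
  headers: { Authorization: `Bearer ${API_TOKEN}` } // token enables rate limiting
})
if (!res.ok) throw new Error(`unexpected status: ${res.status}`)
const body = await res.json() // response shape depends on the handler, not shown here
```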
(data ->> 'has_v2_sig')::BOOLEAN,
(data ->> 'seqno')::BIGINT,
(data ->> 'validity')::BIGINT)
ON CONFLICT (key) DO UPDATE
I wonder if it would be a better idea to have a name table that retains all revisions, with a materialized view for the current revisions, instead. That is really helpful in other systems and I imagine it would be here as well.
It would also probably give better write speed (at the expense of reads).
Yes probably, we should move to something like that if this becomes successful.
import { identity } from 'multiformats/hashes/identity'
import { base36 } from 'multiformats/bases/base36'
import { CID } from 'multiformats/cid'
import { keys } from 'libp2p-crypto'
Do we really want to bring all this along? Unless things have improved, this was one of the largest pieces of js-ipfs.
It's on the list of things: #659
@alanshaw one other thing that had been on my mind that I have not captured anywhere: I think it would be a good idea to have an example illustrating how to use this across multiple sessions, because:
I'm going to get this merged now. I believe I've addressed the majority of the feedback. If there's anything missed then please shout.
This PR implements w3name as specified in https://github.com/alanshaw/w3name with some changes to the DB schema because it turns out IPNS record newness isn't solely determined by sequence number.
Demo: