
Explanation on the DID Methods in the registries' document #83

Open
iherman opened this issue Jul 10, 2020 · 59 comments

@iherman
Member

iherman commented Jul 10, 2020

At present, §6 of the document is clearly different from the others. I presume the process described in §3 is not directly relevant for the methods; the table contains a column ("Status") whose meaning is not clear; and there is no further explanation. It is good to have this registry here, and I know it has a different origin than the other sections, but I believe it needs some urgent editorial care...

@OR13
Contributor

OR13 commented Aug 7, 2020

@iherman can you rephrase this as a directive for me or @msporny ?

happy to take a stab at a PR, if I can figure out what to do.

OR13 added the did method label Aug 7, 2020
@iherman
Member Author

iherman commented Aug 11, 2020

Let me try to be more specific.

  • §3 describes the registration process, but that section is relevant for properties only. It does not seem to apply to parameters (i.e., §5) or to DID methods (i.e., §6). The process must be clearly defined for those two categories, too.
  • The tables in §4 all have three headings: Normative definition, JSON-LD, and CBOR. It is not entirely clear from the description what the CBOR column means (it is relatively obvious that the second column refers to the relevant JSON-LD context).
    • By the way, there are a number of properties, like §4.4.1 and others, where what seems to be a reference to the JSON-LD context appears under the CBOR heading; I guess that is a bug.
    • The text should also make it clear how JSON comes into the picture. I presume it is simple, i.e., the name used in JSON-LD is, verbatim, the key name for JSON as well, but this should be stated.
  • In contrast to §6, there is no trace of the origin of a property or parameter: who submitted it, and why?
  • §6 uses a different structure. As I said, the registration process in §3 does not tell me how these are registered; i.e., there is no explanation of what the 'Status' and 'Link' columns mean, and "DLT or Network" might not be clear for everyone. I presume the last column is a link to the description (this is equivalent to the 'normative definition' in the previous headers, right? Can we use the same terminology?). I would expect the description of the registration process to shed some light on these.
  • See my remark above on authors and the difference between §6 and the other sections.
  • There is an editorial inconsistency. In §4 and §5 each submitted property has its own subsection, with a small table referring to the normative definition, the JSON-LD, and the CBOR references. §6 does not follow the same structure; it instead lumps all the methods into one giant table, without any usage example (although it may not be clear what one would put there as an example). I am not sure which approach is better but, I must admit, this inconsistency bothers me.

I hope this helps.


As an aside, I wonder whether the registration process for DID methods should not be more demanding. I just glanced at some descriptions and, I must admit, I simply do not see what makes them interesting or useful, or why they are there. In some cases the only information I really get is "it is a DID implementation on the XYZ blockchain". This is not very helpful. I believe we should require a 1-2 paragraph description for each of the methods that would describe why that DID method is interesting, unique in some way, etc.

@OR13
Contributor

OR13 commented Aug 14, 2020

@iherman I have tried to address some of your concerns here. https://github.com/w3c/did-spec-registries/pull/115/files

@iherman
Member Author

iherman commented Aug 15, 2020

@iherman I have tried to address some of your concerns here. https://github.com/w3c/did-spec-registries/pull/115/files

Ack.

@brentzundel
Member

The PR that addressed this issue was closed. We still need a PR for this. @peacekeeper will take a look.

@msporny
Member

msporny commented Aug 3, 2021

This was not resolved; the PR noted above was never merged. Some text might have made it into DID Core to address the issue.

@iherman
Member Author

iherman commented Aug 3, 2021

The issue was discussed in a meeting on 2021-08-03

  • no resolutions were taken

5.1. Explanation on the DID Methods in the registries' document (issue did-spec-registries#83)

See github issue did-spec-registries#83.

Brent Zundel: explanation on did methods in registries document, raised by ivan

Ivan Herman: more than a year ago
… the last comment is from Orie saying he had tried to address it, and I acknowledged it... seems like this issue should have been closed a long time ago

Manu Sporny: the PR, there was a massive permathread in it and it got closed, never went in

Ivan Herman: I vaguely remember that, when I raised it, registration of terms and methods looked different from one another, but I don't know what happened since then

Brent Zundel: The PR that tried to address the issue was closed rather than merged

Manu Sporny: this was orie and markus going back and forth over normative language in did core...
… my expectation is something got into did core and it was potentially resolved

Markus Sabadello: I can't check right now but will look later

@peacekeeper
Contributor

I think some of this has been resolved (e.g. the CBOR column has been removed, and there is also some language now on how DID methods will get accepted into the table). But some other issues here are probably still open, e.g. about the structure and contents of the table.

A few weeks ago there was an idea that the "Status" field in the table could contain the value "tested" or "implemented", if an implementation of the DID method was submitted to the test suite.

Probably need to discuss this topic again on a WG call with @iherman who raised this issue, to see how much of it still needs to be addressed.

@peacekeeper
Contributor

Related issue is #174, which also discusses tracking contact information for DID methods and other additions to the registry.

@talltree

See also the suggestion I just made in #265 .

@iherman
Member Author

iherman commented Sep 14, 2021

The issue was discussed in a meeting on 2021-09-14

  • no resolutions were taken

7. Explanation on the DID Methods in the registries' document (issue did-spec-registries#83)

See github issue did-spec-registries#83.

Brent Zundel: issue has been around a while. DID method section has a status column. 99% says provisional for status.
… what do we want that column to say? Do we want to explain provisional, etc.?

Drummond Reed: Added a reference to where I had put another comment. Original suggestion was to create a new table that lists methods where authors have upgraded their methods to match the Recommendation.
… old table stays as is, but provisional changes to "upgraded" for such methods and people should look at new table.
… it will help us call out name squatting
… quality of the did method specs varies. This will leave us with methods that meet our now higher bar in the main table.

Joe Andrieu: worried about how you proposed it. Worried about new name squatting.
… we do need consensus on the legitimate values for status and how they're determined.
… when we first added provisional, it was to deal with methods that might not be consistent with current spec draft.
… need to figure out states, what they mean, and how they are assigned.

Ted Thibodeau Jr.: it seems that members of the new list should either be removed from the old (which is thus not static), or included on both with current status shown (and the old is again not static)

Ivan Herman: since we want this to become a formal registry, the doc itself has a registration process and that process says nothing about registering a new method. There is just a table, but no registration process. We need a clear policy for how things get into the table.
… we made a decision it would become a registry, but many details remain

Drummond Reed: Agreed, Ted. The proposal I made is that the only change to any methods listed in the Old table is that their status column value is changed if that method becomes listed in the New table.

Brent Zundel: we should use provisional (written before there was a spec), v1.0 compliant (submitted after the Recommendation), and deactivated (for no longer in use).

Justin Richer: what if the states are "Pre-1.0", "1.0", and "Deprecated"?
Justin Richer: basically what Brent Said
Justin Richer: or burn. Somebody said it.
Justin Richer: but basically call it "version" instead of "status" might help, too -- but that's a different argument

Ted Thibodeau Jr.: implementations submitted before DID Core 1.0 CR/PR could be listed as such, or de-listed for registry purposes

Drummond Reed: if we keep the current table, prefer what justin typed above
… no matter how we do it, need a clear policy on how to get the status changed.

Brent Zundel: next step should be a pull request proposing new language

Joe Andrieu: whoever the owner is of an entry, they should be able to self-assert which version of spec they claim to be compliant with
… since these will live for years and we need to plan for the future

Drummond Reed: +1 to being able to continuous upgrade the status values for future versions

Brent Zundel: any volunteers to write a PR?

Ryan Grant: I'll volunteer

Brent Zundel: reminder for Imp. Guide PR review and request for other PRs, we will keep you informed of the progress of the spec
… thanks for remaining professional.


@talltree

@rxgrant and @jricher: at the end of the DID WG call last week, both of you had specific suggestions for the values of the Status tag in the DID method table and the rules that the Registry editors should follow to assign those values. Could one of you submit a PR?

@rxgrant

rxgrant commented Sep 21, 2021

@rxgrant and @jricher: at the end of the DID WG call last week, both of you had specific suggestions for the values of the Status tag in the DID method table and the rules that the Registry editors should follow to assign those values.

My read of the end of the conversation was that there was general approval to add a (blank) column to the table of DID Methods which would link to their updated-for-1.0 DID Method spec, that the (generally) "Status: PROVISIONAL" column should be removed, that old links should be labeled as pre-1.0 versions, and that since DID Method authors should self-certify, the registry should not attempt to declare their status. I will submit a pull request with these changes and briefly describe the change above the table.

@rxgrant

rxgrant commented Oct 5, 2021

As part of this work, I've reviewed all the existing DID Method Specifications and noticed that several do not resolve to existing web pages. I believe that #83, as it's currently scoped, does not cover editorial judgement on changing the status of these DID Methods, but I will point out that we need a process that has certain minimum standards.

rxgrant added a commit to rxgrant/did-spec-registries that referenced this issue Oct 12, 2021
The following changes are for [issue 83](/w3c/issues/83#issuecomment-924061109):

- add a (blank) column to the table of DID Methods which would link to their updated-for-1.0 DID Method spec
- remove "Status: PROVISIONAL" column
- label old links as pre-1.0 versions
- add notes column for author-submitted status changes
- rename WITHDRAWN status to DEPRECATED, per spec
@rxgrant

rxgrant commented Oct 12, 2021

See pull request #341

@talltree

Per a request from @OR13 in PR #341, and in light of the feedback received in the formal objections to the DID 1.0 spec, for the third time I will put forth the proposal that we split the DID method registry table into two tables:

  1. A new table for all v1.0-compliant methods (listed FIRST).
  2. The existing table for all current provisional registrations (listed SECOND).

Proposed rules for these two tables (see the entry sketch after this list)

  1. All new registrations MUST be v1.0-compliant and MUST go into the new table—the old table is locked.
  2. All existing registrants who submit a new v1.0-compliant version MUST be added to the new table and MUST be removed from the old table.
  3. Both tables SHOULD have the same set of columns:
    1. Method Name
    2. Status
    3. Spec Link
    4. Author Link(s)
    5. Verifiable Data Registry
  4. Status values for the old table:
    1. Provisional
    2. Deprecated
  5. Status values for the new table:
    1. v1.0-compliant
    2. In production
    3. Test suite available
    4. Approved standard
    5. Deprecated
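
For illustration only (an editorial sketch, not part of the proposal), the proposed columns and status values could be captured in a typed record along these lines; every field name below is hypothetical:

```typescript
// Hypothetical shape for a single registry entry, mirroring the columns
// proposed above; field names are illustrative, not an agreed schema.
type OldTableStatus = "Provisional" | "Deprecated";
type NewTableStatus =
  | "v1.0-compliant"
  | "In production"
  | "Test suite available"
  | "Approved standard"
  | "Deprecated";

interface RegistryEntry<Status> {
  methodName: string;             // e.g. "example" for did:example
  status: Status;
  specLink: string;               // URL of the DID method specification
  authorLinks: string[];          // one or more author links
  verifiableDataRegistry: string; // the DLT or network the method uses
}

// Both tables share the same columns; only the status vocabulary differs.
const oldTableEntry: RegistryEntry<OldTableStatus> = {
  methodName: "example",
  status: "Provisional",
  specLink: "https://example.org/did-method-example",
  authorLinks: ["https://github.com/example-author"],
  verifiableDataRegistry: "Example Ledger",
};
```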

Rationale

Besides giving greater visibility to v1.0-compliant DID method specifications, the two-table approach would enable us to put an explanatory paragraph before each table that should reduce confusion, not increase it.

The para before the first table can explain that these are DID method specifications submitted AFTER the DID 1.0 spec reached PR and that meet all the requirements of a compliant DID method.

The para before the second table can explain that these were all DID method specifications submitted prior to completion of the DID 1.0 spec, and thus are all provisional until they submit a v1.0-compliant DID method specification.

This way it becomes much easier for implementers to "separate the wheat from the chaff".

@rxgrant

rxgrant commented Oct 18, 2021

proposal that we split the DID method registry table into two tables

I'd be happy to implement this in the existing pull request. Any objections?

@talltree

I'd be happy to implement this in the existing pull request. Any objections?

@rxgrant Not from me! I suggest we see if there are any objections or modifications on tomorrow's DID WG call. Then let's go for it.

@iherman
Member Author

iherman commented Oct 19, 2021

The issue was discussed in a meeting on 2021-10-19

  • no resolutions were taken

4.1. change registry columns per issue #83 (pr did-spec-registries#341)

Orie Steele: PR reviews: #341

See github pull request did-spec-registries#341.

See github issue did-spec-registries#83.

Daniel Burnett: framing questions: what's necessary to continue the work? can everything else work on github as issues?

Orie Steele: i want to thank ryan for an issue-first, PR-second approach that resolves many registry problems
… we haven't always been timely about the registry, so plz plz review those PRs, it helps us with many of our core issues as a WG
… i won't summarize ryan's very broad PR because it covers a lot of ground but review it soon, particularly if you have a did method that might get booted by its being merged!

Drummond Reed: i think this PR is urgent vis-a-vis the formal objections!
… I shared a link to an alternative solution opened in another issue

Manu Sporny: since we're on that issue (pr 341), my only suggestion is to replace "non-compliant" with "provisional"
… or rather, NOT to replace it-- we will look bad if we overnight switch most of our registry to "non-compliant"

Drummond Reed: +1 to not using "non-compliant". But #83 proposes a more comprehensive solution.

Ted Thibodeau Jr.: "experimental"?

Ted Thibodeau Jr.: "beta-compliant"?

Manu Sporny: replace "non-compliant" with "provisional" in the PR, i mean

Drummond Reed: +1 to "trolling the DID method spec authors"

Drummond Reed: Comment being discussed: #83 (comment)

Ryan Grant: I was trolling, it's true, or trying to put a little fire under them. I would support drummond's solution and I think it addresses manu's objection

Orie Steele: I will happily review a PR drummond, you are welcome to open one.

Manu Sporny: maybe we are not thinking enough about ungenerous readings-- we don't want people marked as "noncompliant" for having been compliant and having passed a test suite before breaking changes

Michael Prorock: +1 manu - wording and appearances are very important right now

Manu Sporny: and we also don't want to hand a "gotcha" opportunity to those who will comb through our github looking for evidence that we aren't running a proper WG here
… or that we've wasted effort

Daniel Burnett: +1 manu

Drummond Reed: I put a link to a sidestepping solution-- a 1.0 compliant table distinct from the existing table that includes all the provisionals as-is
… as long as there is some contextualizing explanation above both

Orie Steele: basically, we need PRs... there are already enough issues....

Drummond Reed: I will work with Ryan on doing this in PRs
… if the group supports it

Ryan Grant: First of all, Manu thanks for correcting the record on the amount of interop that these specs have already achieved
… I wasn't trolling to be annoying, I was hoping to avoid value judgments or partisanship in the editing of this registry
… just to explain the choice of words, even if i support solutions using a diff word

@talltree

@rxgrant We didn't get any objections in the DID WG meeting today, but we didn't get any strong reactions in general. So here's my proposal: if you're willing to update your PR, let me know if you want me to draft text for the intro paragraphs for each of the two tables. Or alternately just go ahead and update your PR and I can comment on that. Whichever you prefer.

@kdenhartog
Member

kdenhartog commented Oct 19, 2021

I agree in principle with Drummond's proposal and I think it gets us mostly there. Some further refinements I'd suggest:

  1. change the SHOULD to a MUST. I don't see any reason that we shouldn't include those details.
  2. New statuses need clear definitions of what they mean.
    2a. what constitutes "production" readiness?
    2b. What constitutes an acceptable test suite?
    2c. How do we define "approved" standard for status 4? E.g. a spec that contains a single sentence for a security/privacy considerations section isn't worth "approving". A spec that doesn't have normative statements isn't worth "approving". A spec that has a strong dependency to a particular implementation isn't worth "approving". (my opinion here - curious if WG consensus agrees)

So at a high level I'm a major +1 to this proposal (and have been for a while now - thanks for re-proposing it for the 3rd time @talltree), and I think that with a few more specifics about points 2a through 2c in a follow-up PR to flesh out the details of this registry, we can make this work. Would others here prefer I open a separate issue to discuss the requirements, or do we want to consider that here, if people agree this is necessary?

@rxgrant

rxgrant commented Oct 19, 2021

Here are the methods that IMHO don't have a reasonable spec at a reasonable URL that, at minimum, addresses how to read a DID Document from the VDR:

some variant of a 404

  • did:twit (Twit DID method) has a GitHub link that works, but it's in the wrong column, while the spec column is a GitHub link that redirects to a 404.
  • did:op (Ocean Protocol) manages to 404 at a Github link
  • did:dom (Dominode) has no link posted at all

didn't bother posting a DID Method spec that describes how to read a DID Document from the VDR

  • did:did (DID Identity DID Method) does not point to any spec, rather marketing material
  • did:ion (ION DID Method) points to documentation about libraries that incorporate the method, not a DID Method specification

posted a DID Method specification that takes a form too confusing for the author of this comment to figure out how to retrieve the DID Document

  • did:ala offers resolution instructions that include "Create the DID Document"
  • did:elem, under its subsection for reads, refers to the sidechain spec, but it's a dead link (404).
  • did:gatc
  • did:dual

@rxgrant

rxgrant commented Oct 19, 2021

Earlier I made a comment about a DID Method with a very short name. But they're building stuff, so the comment wasn't appropriate.

@mprorock
Contributor

So how do we proceed with this as one or more PRs?

open on this - possibly one PR to get the JSON format established, and then a second that updates the ReSpec from that as part of the build process on commit. @OR13 any thoughts?

@msporny
Member

msporny commented Oct 20, 2021

If we're going the JSON file route, please don't dump everything into a single JSON file (we repeatedly have merge conflicts or have to teach people how to rebase when we do that). Rather, each DID Method gets its own JSON file, put 'em all in a subdirectory, please.
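
For illustration, a single per-method file (say, methods/example.json; the file name and every field here are hypothetical, not an agreed schema) might look something like:

```json
{
  "name": "example",
  "status": "provisional",
  "specification": "https://example.org/did-method-example",
  "contact": "https://github.com/example-author",
  "verifiableDataRegistry": "Example Ledger"
}
```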

@OR13
Contributor

OR13 commented Oct 20, 2021

@mprorock yep, I would build a directory of json files, and a dynamic index built from parsing it.
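
A minimal sketch of that idea in Node-flavored TypeScript, assuming the hypothetical methods/*.json layout and fields from the example above:

```typescript
import { readdirSync, readFileSync } from "fs";
import { join } from "path";

// Assumed entry shape, matching the hypothetical JSON file sketched above.
interface MethodEntry {
  name: string;
  status: string;
  specification: string;
}

// Read every methods/*.json file and build an index keyed by method name.
function buildIndex(dir = "methods"): Map<string, MethodEntry> {
  const index = new Map<string, MethodEntry>();
  for (const file of readdirSync(dir).filter((f) => f.endsWith(".json"))) {
    const entry = JSON.parse(readFileSync(join(dir, file), "utf8")) as MethodEntry;
    index.set(entry.name, entry);
  }
  return index;
}
```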

@OR13
Contributor

OR13 commented Oct 20, 2021

@rxgrant

rxgrant commented Oct 20, 2021

I can implement either the div elements or a list of (over one hundred) subdirectories. However, I am worried about obfuscating the build process and thus requiring that people learn ReSpec build intricacies in order to keep this running.

I think we'd need excellent documentation on either process. Who's willing to write that up? Which one is simpler, yet will still result in non-conflicting merges?

@rxgrant

rxgrant commented Oct 20, 2021

Approved open standard

I don't know what this means or how to write my code in order to pass this test.

@msporny
Member

msporny commented Oct 20, 2021

I can implement either the div elements or a list of (over one hundred) subdirectories.

I meant 112 JSON documents in a single subdirectory labeled "didMethods" or something like that. :)

There is no build process w/ ReSpec, but someone will have to extend ReSpec to pull all 112 files in at page load time and translate that to HTML (which is what ReSpec does in realtime). Exceedingly bad examples on how to do that here:

https://github.com/w3c-ccg/vc-api/blob/main/common.js#L404-L422

and invoked here:

https://github.com/w3c-ccg/vc-api/blob/main/index.html#L70

with target markup here:

https://github.com/w3c-ccg/vc-api/blob/main/index.html#L343-L344

That is almost certainly a hacky way to do it, but ya gotta start somewhere, right?! :)

I agree that we shouldn't need an external build process to do this (or we've failed).
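
The general shape of that load-time approach, stripped of ReSpec-specific plumbing (the element id, file names, and fields are assumptions; a real version would hook into ReSpec's lifecycle the way the vc-api files linked above do):

```typescript
// Fetch each per-method JSON file at page load and append table rows to a
// placeholder <tbody>. Plain fetch/DOM only; this is not ReSpec's actual API.
async function renderMethodTable(files: string[]): Promise<void> {
  const tbody = document.querySelector("#did-methods tbody");
  if (!tbody) return;
  for (const file of files) {
    const entry = await (await fetch(`methods/${file}`)).json();
    const row = document.createElement("tr");
    row.innerHTML =
      `<td>did:${entry.name}</td>` +
      `<td><a href="${entry.specification}">specification</a></td>` +
      `<td>${entry.status}</td>`;
    tbody.appendChild(row);
  }
}
```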

@talltree

I can't help with the coding process, but I'm assuming that if we go this way (which again I favor), we will still need to publish in the DID Methods section of the document a description of the registration process and the requirements that have to be met, yes?

If so, I'm willing to help work on that. But it sounds like we need a reset on what the registration properties are and what is required for each property.

@kdenhartog
Member

kdenhartog commented Oct 21, 2021

I hate to be the voice of dissent on details that affect the job of did method reviewers, especially when there's shared enthusiasm for a data-driven approach. Bear with me, because I'm airing some controversial opinions here, but I think they need to be said.

Right now we've got a lot of dog**** methods that are accepted here because there's little measure of quality that's being set. My hope in setting some ground rules is to thread a fine line between IANA processes I've encountered which feel like a wizard's ritual that only the blessed can perform and the open floodgates approach that we have today.

The fact of the matter is that expert review takes time and includes implicit bias, but what we have today and what's being proposed with an automated approach isn't working either, because we're left with a lot of low-quality, half-baked stuff that assumes tons of tribal knowledge about the inner workings of each method in order to implement.

So, while I'm absolutely empathetic to the reality that any form of expert review flies against the ethos of decentralization, and that much of this work tends to require a lot of human effort, I view this as a necessary tradeoff to create a valuable ecosystem built on DIDs. In fact, I see it as an opportunity for us to raise the bar on what quality means for people authoring DID Methods.

Can we please consider the long-term viability here by being transparent about what we think good did methods look like and placing at least some bar of quality on what's necessary to register a did method? After all, a did doesn't suddenly become a non-compliant did just because it's not blessed by the registry. It's just a did that no one knows how to interact with, which is effectively the same as a did method that's published but that nobody understands how to implement interoperably.

@rxgrant

rxgrant commented Oct 21, 2021

My hope in setting some ground rules is to thread a fine line between IANA processes I've encountered which feel like a wizard's ritual that only the blessed can perform and the open floodgates approach that we have today.

Continuous integration and test suites could prevent the politics while retaining the quality. I know how to do that for implementation libraries, but not for the specifications themselves.

@msporny
Member

msporny commented Oct 21, 2021

@kdenhartog wrote:

Right now we've got a lot of dog**** methods that are accepted here because there's little measure of quality that's being set.

😆 ... 🤔.oO(Rename the registry to "Dog**** DID Method Registry"?)

I sympathize with your viewpoint @kdenhartog, and I think much of what you wrote is valid.

I also agree with @rxgrant -- the more we can automate, the better off we'll be. I have ideas on how we could do that, but it's all work that people have to do (write DID Method spec parsers that check for DID Core Method requirements -- that's a 2-4 month project in and of itself).

All that said, the issues remain:

  1. We don't want to put a time burden on the people that are volunteering their time to manage this registry.
  2. We don't want to put a policing burden on the people that are volunteering to manage this registry. They will become the target of attacks and process escalations when people disagree with a judgment that their DID Method doesn't fit the criteria.
  3. We don't want to discourage people from using the registry.

There is an analogy here that I think might help us, and that is the "5-star Linked Data" approach. In essence, it suggested a 5 star deployment scheme for Linked Data. The 5 Star Linked Data system is cumulative. Each additional star presumes the data meets the criteria of the previous step(s). Before it, people had heated debates about what is and isn't Linked Data, and those debates often excluded new communities. So, instead of drawing a line in the sand, what was proposed was a gradual entry into the ecosystem. I think we have the same sort of thing here -- for example... first you publish a provisional spec, then you implement it, then you demonstrate that your implementation's output is conformant to DID Core v1.0, then you stand up a test net, then you provide a resolver for others to use, then you go into "production", then you provide multiple implementations and perhaps fully define your specification, and then you have a test suite demonstrating multiple implementations interop'ing, and then you take it through a global standardization process with consensus and expert review.
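
(As an editorial aside: that progression could even be captured as ordered data; the stage names below merely paraphrase the list above and are not an agreed scheme.)

```typescript
// Hypothetical cumulative maturity ladder, paraphrasing the list above;
// like the 5-star Linked Data scheme, each stage presumes the previous ones.
const maturityStages = [
  "provisional specification published",
  "implementation exists",
  "output conformant to DID Core v1.0",
  "test net stood up",
  "public resolver available",
  "in production",
  "multiple implementations, fully defined specification",
  "test suite demonstrating interop",
  "global standardization with expert review",
] as const;

type Maturity = (typeof maturityStages)[number];
```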

We want people registering at the provisional spec phase... and then what comes next might not happen in the order I mentioned above... but, IMHO, we do want to expose that in DID Spec Registries and perhaps use it as sorting/bucketing criteria.

When you're trying to build an open and inclusive community, it helps to have a gradual onboarding process that's inclusive instead of setting up fences to keep people out.

Food for thought...

@talltree

@msporny I find your "5-star Linked Data" approach to be very compelling for all the reasons you mentioned. I do believe it can address @kdenhartog's concerns about the quality of the entries by making it relatively objective how each additional star is achieved. (If someone is truly trying to game the system, that should be pretty easy for the editors to detect.)

Can you say a little more about how you'd recommend structuring the five stars? And what specifically we'd need to do to put that approach into place for the registry?

@msporny
Member

msporny commented Oct 21, 2021

Can you say a little more about how you'd recommend structuring the five stars?

I have no firm ideas there other than "people seem to go through a basic progression to get to 'five stars'"? Maybe... I don't know if they do... the list I provided above kinda falls apart toward the end wrt. linear progression. So we might skip the stars thing? Don't know, haven't thought about it enough yet.

And what specifically we'd need to do to put that approach into place for the registry?

I think the JSON files per DID method, with some variation of the contents mprorock and I suggested above, give us that general structure.

@kdenhartog
Member

I think in general your position is a safe bet for the maintainers of this registry over time, and I get where you're coming from in not wanting to turn this into an overtly political process that raises more headaches than it's worth. Additionally, I'm fully supportive of the idea of making this as automated as possible, with very strict and transparent rules. I think there's a balance here that needs to be considered, and at the very least getting the automated infrastructure in place is a good first step.

I'm hesitant to say that a big tent approach like what's done for the MIME types registry is going to end up being what we need here when the bare minimum for interoperability of DIDs and DID Documents is far more involved. I think this is where the idea of having the standard developed through a standards org is going to be an important factor here because that's the step where rigor can be applied without placing the burden on the editors here.

So what if we stick with a machine-readable approach for the initial phases, which allows for early registration and a good open-tent approach, but also allow ourselves to lean on standards bodies with good processes in place to define what an "approved open standard" means? For example, we can say that in order for a standard to be considered approved, it needs to be approved by a predetermined list of SDOs which we believe have the set practices in place to evaluate the method, in order to elevate those methods that do achieve that higher bar with the "approved open standard" status.

@rxgrant

rxgrant commented Oct 22, 2021

@kdenhartog

I'm hesitant to say that a big tent approach like what's done for the MIME types registry is going to end up being what we need here when the bare minimum for interoperability of DIDs and DID Documents is far more involved. I think this is where the idea of having the standard developed through a standards org is going to be an important factor here because that's the step where rigor can be applied without placing the burden on the editors here.

Based on the uncertainty regarding which conflicting TAG/EWP items excuse formal objections in this standards org, I am certain that no standards org requirement for any star/badge/level is appropriate when dealing with decentralized protocols that disrupt traditional institutions. (Proof-of-work has become a powerful shibboleth.) The value-stack merge-conflict implications of DID Methods are too great for Internet engineers to wield their votes objectively.

also allow ourselves to lean on standards bodies with good processes in place to define what an "approved open standard" means.

No. For the reasons given above.

I further believe that if you did force this requirement, it would move the fight to creating standards organizations that do whatever it takes to get approved by any criteria listed here, but either disallow any criteria in their voting that could be a shibboleth, or carefully prevent infiltration by individuals who respond in an oppositional way to the shibboleth. All you would cause is delay and cost in hacking the process to obsolete the political aspects. It would be better for marketplace fit to sort the technologies.

@kdenhartog
Member

kdenhartog commented Oct 25, 2021

Edit: the spam message has been removed now - I didn't intend for this to be a removal of @rxgrant's message, which is informative and on topic.

This comment above mine reads like spam that seems unrelated to the discussion. @iherman am I allowed to just delete it (I have the permissions to do this)?

@kdenhartog
Member

@kdenhartog

I'm hesitant to say that a big tent approach like what's done for the MIME types registry is going to end up being what we need here when the bare minimum for interoperability of DIDs and DID Documents is far more involved. I think this is where the idea of having the standard developed through a standards org is going to be an important factor here because that's the step where rigor can be applied without placing the burden on the editors here.

Based on the uncertainty regarding which conflicting TAG/EWP items excuse formal objections in this standards org, I am certain that no standards org requirement for any star/badge/level is appropriate when dealing with decentralized protocols that disrupt traditional institutions. (Proof-of-work has become a powerful shibboleth.) The value-stack merge-conflict implications of DID Methods are too great for Internet engineers to wield their votes objectively.

This seems like a bit of a strong allergic reaction to the current problems we're facing. While this may be true in an SDO like W3C, I can't say that we'd encounter the same issue in IETF, or in something like DIF if we wanted to consider it an SDO (I don't believe this opinion is shared by all within the community), which is far more friendly to the work being done by us in this space. The point here is that as long as we're transparent about which SDOs we believe are acceptable, to prevent rug-pulling on a controversial did method, I think we can circumvent the concerns you raise while still maintaining the high level of rigor that's expected from a well-baked standard.

also allow ourselves to lean on standards bodies with good processes in place to define what an "approved open standard" means.

No. For the reasons given above.

I further believe that if you did force this requirement, it would move the fight to creating standards organizations that do whatever it takes to get approved by any criteria listed here, but either disallow any criteria in their voting that could be a shibboleth, or carefully prevent infiltration by individuals who respond in an oppositional way to the shibboleth. All you would cause is delay and cost in hacking the process to obsolete the political aspects. It would be better for marketplace fit to sort the technologies.

I'm a bit less concerned about this. While I expect some political maneuvering to occur, I don't think it will be long-standing, and I generally believe that the issues that get raised during these conversations should be considered legitimate and useful to the development of the technology. If this did become a legitimate concern that hurts the legitimacy of any particular did method, I think it would then be worth evaluating the effects of the process we've set and considering modifying it to mitigate these concerns.

The issue I take with the "let the marketplace" philosophy is that for the most part it hasn't been effective over the years that the marketplace has been working with DIDs. Instead what I've more commonly seen is that the did methods that get chosen are not chosen based on their technical merits but rather on their marketing and the gaps get filled via tribal knowledge. Take for example did:sov, a method that has been around for a long time. It's been very successful in garnering adoption of the method by way of promoting a particular implementation (indy-sdk) which gets reused for the majority of implementations which are either producing, consuming, or resolving DID Documents from an Indy ledger. There's been legitimate and useful effort to build libraries which help to circumvent this as well as other concerns, but for the most part if you want to use did:sov you're left to a few libraries to achieve this since there's a fair amount of tribal knowledge that's necessary in order to implement this method.

That community has made great strides to place a greater emphasis on a standard rather than a particular implementation by starting work on did:indy, which goes leaps and bounds beyond where things were a few years ago. That adds to the legitimacy of the method, and its usefulness shouldn't be understated, but I don't believe it was necessary for the marketplace to select that method, since there was a good-enough implementation available to make it work.

So why is this a concern? The reason I'm raising this is that building on one or a few existing implementations will get methods over the adoption barrier, but I don't believe the end state of what makes a good method should be just adoption. I believe that in order to build a robust method, a well-documented specification is necessary so that new implementers can also work with the method.

In a more dystopian what-if scenario, I could see the day when a wildly successful method deployed overnight by a large corporation could be abused to lock in licensing fees for did resolvers, for example. To play this scenario out a bit: did:example is deployed to a billion users overnight, the users aren't even aware they're using DIDs, and this method now becomes the most used method. Then, since this method is built on a single implementation and deployed by a single corporation, every implementer in the ecosystem realizes that in order to resolve the did document they are expected to use a library authored by the corporation, which has patented the method, expects any developer who wishes to use the library to agree to their license, and collects royalty fees for doing so.

Now I'd hope that many people would push back and choose not to support that method, but inevitably some will, and this whole concern could have been avoided by us choosing to say that good methods require a standard, not just adoption. Scenarios like that are the reason why I'm advocating a standards-based approach to this problem rather than a market-based approach. I think with a market-based approach we're likely to end up making much of the work here irrelevant, even though it's well-designed, robust technology, because the market sided with the method that was well marketed, not the one that solved the legitimate concerns of users.

@talltree

@kdenhartog Isn't a "standards-based approach" a subset of a "market-based approach"? In other words, nowadays most standards only happen if there's enough market demand to see them all the way through the process.

From a practical standpoint, don't we have to treat a market-based approach as the baseline? With DID 1.0 as an open standard, there's nothing we can do to prevent it.

So IMHO the only goal of the DID method registry is to surface as much helpful information as we can about DID methods that choose to be registered and which meet our baseline registration criteria.

@iherman
Member Author

iherman commented Oct 26, 2021

This comment above mine reads like spam that seems unrelated to the discussion. @iherman am I allowed to just delete it (I have the permissions to do this)?

I believe we should consider comment threads the same way as we handle email threads at W3C in this respect. The overall policy for those is to be extremely reluctant to remove anything from the archives (barring very exceptional cases); the same should be true here imho.

@iherman
Member Author

iherman commented Oct 26, 2021

I meant 112 JSON documents in a single subdirectory labeled "didMethods" or something like that. :)

There is no build process w/ ReSpec, but someone will have to extend ReSpec to pull all 112 files in at page load time and translate that to HTML (which is what ReSpec does in realtime). Exceedingly bad examples on how to do that here:

I have done something similar in the EPUB testing repository: https://github.com/w3c/epub-tests/. The EPUB tests, as well as the implementation reports, are submitted in JSON. I have created a TypeScript process that gathers all the information and generates a bunch of HTML tables which are then imported by a respec skeleton. I then defined a github action to run that script whenever there is a change. It is doable.

(B.t.w., I actually run respec from the action script, too, because the processing of respec, involving lots of large tables, may be a bit slow when done at run time. But that is a detail.)
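
A build-time variant along those lines (sketch only, reusing the assumed file and directory names from the earlier examples; the epub-tests repository's actual scripts differ):

```typescript
import { readdirSync, readFileSync, writeFileSync } from "fs";
import { join } from "path";

// Aggregate methods/*.json once at build time (e.g. from a GitHub Action)
// and emit a static HTML fragment for the ReSpec skeleton to include,
// instead of assembling the table in the browser.
const dir = "methods";
const rows = readdirSync(dir)
  .filter((f) => f.endsWith(".json"))
  .map((f) => JSON.parse(readFileSync(join(dir, f), "utf8")))
  .map(
    (e) =>
      `<tr><td>did:${e.name}</td>` +
      `<td><a href="${e.specification}">specification</a></td>` +
      `<td>${e.status}</td></tr>`
  )
  .join("\n");
writeFileSync("methods-table.html", `<table><tbody>\n${rows}\n</tbody></table>\n`);
```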

@TallTed
Member

TallTed commented Oct 26, 2021

[@kdenhartog] This comment above mine reads like spam that seems unrelated to the discussion. @iherman am I allowed to just delete it (I have the permissions to do this)?

[@iherman] I believe we should consider comment threads the same way as we handle email threads at W3C in this respect. The overall policy for those is to be extremely reluctant of removing anything from the archives (barring very exceptional cases); the same should be true here imho.

Note that the comment referred to by @kdenhartog has in fact been deleted (@kdenhartog was not referring to @rxgrant's #83 (comment)). I think it likely this was done by GitHub admins, as they have tools for reporting such content (look under the three dots at upper-right of any comment) and/or users (look to the bottom of the left-hand column of any GitHub user profile page), which I had used to report that comment before @kdenhartog added his.

In general, I concur with @iherman that deletion should be extremely rare, but that can only be achieved if repo admins or the like can easily hide and unhide such apparently-noise content (which should provide any reader the option to reveal it for themselves at any time) to minimize its distraction effects ... and if the GitHub tools can be disabled, such that reports like mine don't lead to deletion of content the repo admin just wants to hide.

@brentzundel
Member

brentzundel commented Oct 26, 2021

@kdenhartog @iherman @TallTed
I hid the comment referred to in your conversation, and when I did there was an option to unhide it.

Today, I no longer see the comment at all and am not sure why that is. Deleting a comment should create an event in the timeline saying that the comment was deleted and by whom.

@TallTed
Member

TallTed commented Oct 26, 2021

Today, I no longer see the comment at all and am not sure why that is. Deleting a comment should create an event in the timeline saying that the comment was deleted and by whom.

@brentzundel -- Might be worth some followup with the GitHub powers-that-be? I'm betting it's their tooling and/or intervention that deleted it. Question is whether that should leave no trace, as now, or should leave similar evidence as would be there if one of us GitHub users (at whatever place in the repo's privilege hierarchy) deleted it. My understanding is that GitHub itself is Git-based, so it should be just another commit in the stack, so should be displayable....

@rxgrant

rxgrant commented Oct 28, 2021

As mentioned in today's WG call, I see a registry column that could announce any standardization process underway as unobjectionable, as long as an answer is not required for a DID method to be listed.

I agree with @OR13's point that requesting the data is an excellent way for Rubric evaluation authors, and end users, to get more informed about the DID Method.

@OR13
Contributor

OR13 commented Oct 29, 2021

The more I think about this, the more opposed I am to embedding value judgments in the did spec registries... including "v1 conformance"... since we can't really confirm this, it seems dangerous to say anything about a registered method other than linking to its spec, and possibly the rubric entry for it...

I think we should keep all forms of evaluation (including recommended status or conformance status) to the did rubric.... and keep the did spec registries a pretty boring list of URLs.

@msporny
Member

msporny commented Oct 29, 2021

v1 conformance

The automated check I had in mind had to do with whether or not there was an entry for the DID Method in the DID Test Suite report. The individual would submit a link as a part of the registration process... so perhaps a better term is "implemented" or "test report exists" or something more objective.

I'd like us to not get too wrapped up in what we call the objective measure just yet (as we can always change that), and rather, focus on what the objective measures are (which in my mind, are "links to things").

For example: link to the specification, link to a code repository that implements the method, link to the DID Method results in the DID Test Suite, link to live resolver data (to demonstrate that a live network exists for the DID Method) ... and so on.
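
In the per-method JSON file sketched earlier, those objective measures might simply be optional link fields (all names and URLs below are hypothetical placeholders):

```json
{
  "name": "example",
  "specification": "https://example.org/did-method-example",
  "implementation": "https://github.com/example/did-example-driver",
  "testSuiteReport": "https://example.org/did-test-suite-report#example",
  "resolver": "https://resolver.example.org/"
}
```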

@talltree

talltree commented Oct 31, 2021

I am getting more comfortable with @msporny's suggestion that a DID method registration consist entirely of a filled-out JSON template of "links to things" with two caveats:

  1. None of the links can produce a 404.
  2. The baseline for including a registry entry is a link to a DID method specification—and this is the one area where I believe the registry maintainers should make a judgement of whether the specification as a document complies with the requirements in section 8 of DID 1.0. That doesn't mean the registry maintainers need to check the validity of all statements in the specification, just that the document itself factually meets the requirements.

I believe this is how we keep a baseline of quality in the DID method registry (albeit a pretty low baseline).
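
The first caveat ("none of the links can produce a 404") is straightforward to automate; a minimal sketch, assuming the links have already been extracted from the registration entries:

```typescript
// Return the subset of links that do not resolve successfully. Uses the
// global fetch available in Node 18+ and in browsers; a HEAD request
// avoids downloading the full page.
async function findDeadLinks(links: string[]): Promise<string[]> {
  const dead: string[] = [];
  for (const url of links) {
    try {
      const res = await fetch(url, { method: "HEAD" });
      if (!res.ok) dead.push(url); // 404s and other error statuses
    } catch {
      dead.push(url); // DNS failures, timeouts, etc.
    }
  }
  return dead;
}
```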

@kdenhartog
Member

kdenhartog commented Nov 3, 2021

Position change from me incoming:

I've been watching some of the discussions in the did-wg-charter on what a "quality" did method is and the effects of picking and choosing winners via the standardization process. It's become clear to me that while standardization can be a clear way to identify quality methods, it should not be the only one, because it's an inherently biased process. It's also likely that standardization will be used to promote or tarnish the brand of a particular method for the majority of people who want to rely on dids but not join us in the mud to debate and critically assess. Instead, I suspect many people who don't want to deeply evaluate the merits of many did methods will defer to the authority of people they deem experts, and that effectively means looking at the registry to decide which method should be chosen. I consider the tradeoffs here likely to be more harmful in the long term than the short-term problems I'm faced with when trying to evaluate whether a did method is something I should advocate implementation support for.

Given the way I'm watching this play out, I'm changing my position and consider it acceptable to go ahead with the limited number of status categories that can be automated for now, until we can find suitable methods to objectively indicate the quality of a method without intentionally promoting or tarnishing a method's brand.

@iherman
Member Author

iherman commented Nov 9, 2021

The issue was discussed in a meeting on 2021-10-28

  • no resolutions were taken

4. DID Method Registration.

See github issue did-spec-registries#83.

Kyle Den Hartog: #83 is the ongoing issue about this topic.

Brent Zundel: What specifically do we have to do to make the registry process as straightforward and clear as possible, both for those who register and for those who look at it..

Manu Sporny: This concrete proposal could address a number of challenges we have had with DID method registration.
… There are complaints that we are not being strict enough about who can register. This was by design in the beginning; we wanted a low barrier to entry..
… This has created a problem that people can't tell the difference between DID method registrations..

Drummond Reed: The challenge is QUALITY.

Manu Sporny: What are the "good" ones that have way more implementation experience than e.g. someone's weekend project..
… We don't want to put a huge burden on those who register either..
… If we do an attribute-based registration process, e.g. this DID method has a specification, the specification has an implementation, it has a testnet, etc., these are clear yes/no questions..

Brent Zundel: this did method passed the did core test suite?.

Manu Sporny: If we do that, we can annotate the DID method registry in an objective way.
… We could add tiny JSON files to registrations that are used to render tables.
… This could make the process more manageable and objective..

jyasskin: +1 manu.

Kyle Den Hartog: +1 to manu, that's a really good starting point.

Ryan Grant: +1 to attribute-based registration process.

Kyle Den Hartog: My frustration is that it doesn't get us the full way there to decide what's a "quality" DID method..
… There is a need for better specifications. Many methods have security considerations that are a single sentence. Implementation guidelines sometimes just point to a single library..
… Rather than us deciding on quality, we lean on standards organizations that have WGs that can look at methods..
… E.g. if a certain method has gone through a standardization process, it achieves a higher status..

Drummond Reed: Encourage people to contribute to the Github issue..
… We should have a process that is as objective as possible, but it should also have an objective quality bar. E.g. simply pointing to a specification is not enough; some of those are very lacking..

Brent Zundel: maybe the JSON could also point to a rubric evaluation.

Manu Sporny: +1 to that, brent.

Drummond Reed: We wanted to be inclusive in the beginning. I've been an advocate of keeping the current table, but start a new table that has a baseline bar. You must revise your specification for all DID Core 1.0 requirements, and you can't handwave at Security+Privacy Considerations..

Kyle Den Hartog: +1, that seems like a potential quality metric if we're not going to be able to achieve consensus on the reliance of standards bodies.

Kristina Yasuda: Agree with how Manu framed the problem statement, and +1 to Kyle that we need to do more than the initial proposal. There is a need for an organized, structured process/body of ppl reviewing what gets accepted as a DID method..

Drummond Reed: I don't think it's going to be a large burden, but you should only go into the new table if you are 1.0 compliant..
… Then our attention should be on objective characteristics on which registry maintainers could make objective decisions..
… DID method authors should be free to standardize wherever they want. We should encourage the process of maturing DID methods, so that the market can compete..

Orie Steele: I agree with some of what drummond said. Other things make me nervous. In Privacy+Security Considerations, there is sometimes only one sentence. Sometimes that's okay, and sometimes it is not..
… My experience is with JOSE/COSE registries. Merges into them are controlled by a set of individuals who establish consensus. The entries point to a specification, which doesn't have to be at a specific standards organization..
… We're now at a point where we need a larger number of editors, with a higher number of required consents before we accept something..
… The JOSE/COSE registry is very successful, I hope we can be like that..
… The number #1 way of improving quality is to add editors, and require all to approve..

Kristina Yasuda: really well-said, Orie..

Manu Sporny: I wanted to respond to Kyle. I'm nodding in agreement with a lot. The original proposal is something we can execute on today..

Drummond Reed: I mostly agree with Orie, but I don't think every registry maintainer should be required to approve every listing. Just a threshold..

Manu Sporny: With that proposal we will end up with either the same document or a better one that has labels e.g..
… We don't have to strive for perfection right now.
… The proposal is such that it doesn't matter if we have 1 or 2 tables. We can generate them programmatically based on the data..

Drummond Reed: +1 to generating the table(s) programmatically.

Manu Sporny: We have a concrete proposal in front of us that can give us immediate improvements that we can continue to iterate on.

Orie Steele: drummond, we need accountability, otherwise a maintainer can never approve things... and still be listed as an editor... we need the burden to be shared equally..

Ryan Grant: Requiring validation from a standards organization is a difficult bar for some decentralized protocols..

Daniel Buchner: +1 to Ryan's comment.

Ryan Grant: Some decentralized protocols are based on VDRs that disrupt traditional institutions..
… I'm a strong proponent of manu 's objective criteria.

Eric Siow: This is a question that hopefully can educate me. Is this issue related to one of the objections (diverging instead of converging)?.

Ryan Grant: Eric_Siow, I think it is the essence of one of the objections..

Eric Siow: If that's the issue, then if the group can define a way to come up with objective methods, that might be helpful..

jyasskin: +1 that non-standardized methods should be acceptable on the registry, just distinguished from ones that match manu's and kdenhartog's criteria..

Orie Steele: limiting registration is not an objective, imo.... letting the market pick methods is..

Kyle Den Hartog: Responding to manu, I wholeheartedly agree that editors should be able to handle this in a programmatic way. Managing this is a tragedy of the commons problem. Leaning on a programmatic approach is better..
… A good litmus test of what is "high quality" is "can I produce an interoperable implementation just by reading the spec?". The test suite can help with this. Being able to lean on Rubric evaluations also gets us close to where I want us to get..
… We should reach a high bar, without excluding methods that can't go through a standards body..

Orie Steele: See this IANA registry for comparison....

Drummond Reed: Wanted to respond to Eric_Siow's really good question. It's easy to look at a registry with ≈114 registered methods and see divergence. I want to make it clear that comparing DID methods to URIs/URNs makes sense in some respects (URI schemes, URN namespaces, DID methods), but they are also different..

Daniel Buchner: +1 to Drummond.

Drummond Reed: This design was intentional. Every DID method is an attempt to provide a verifiable identifier using any combination of cryptography and VDR. There are many ways of doing that. We wanted to accelerate and standardize the competition. We built an abstraction layer on top of all of them, that's the primary reason of the specification..

Ned Smith: We have a similar challenge working with the sea of cryptographic algorithms. Different algorithms have different purposes so they are grouped by intended function. Beyond that specs need to define negotiation of which algorithm to use..

Manu Sporny: +1 to what Drummond is saying..

Drummond Reed: We want the market to choose and let the best DID methods rise to the top. This is different from encouraging divergence..

Eric Siow: Can you standardize the ones that have some objective measure (e.g. widely implemented and used), while those that are not widely used could be standardized later?.

Drummond Reed: I wanted to talk about standardization. The existence of a standard (effort) associated with a DID method is another one of those objective criteria. I want to see W3C standardize more DID methods, but some DID methods are also going to happen elsewhere..
… I don't think you should HAVE to standardize a DID method..

Joe Andrieu: +1 to decentralized innovation.

Drummond Reed: The marketplace can develop DID methods anywhere they want, but we want an objective process for adding them to the registry. If there is a standard, then we will have a way to point to it..

Ryan Grant: See relevant DID Rubric issue to discuss standardization (whether or not the DID WG requires anything here).

Drummond Reed: Once we improve the quality of the registry, that will help the market make its decisions..

Kyle Den Hartog: +1 to not requiring them. It's worth stating too that while I believe a standards body can be a way to display quality, it's not the only one. Another example metric that can help evaluate quality is the number of implementations submitted to a test suite.

Orie Steele: See the charter issue raised here.

Drummond Reed: There are also many URI schemes..

Manu Sporny: We optimized the registry to learn early about DID methods that are being created..
… We can provide signals in the registry that tell you whether or not a DID method has reached a certain level of maturity..

jyasskin: The IETF has a history of putting too high a bar on acceptance to some of their registries, and I believe they mostly regret that. So +1 to manu..

Manu Sporny: I want to push back hard against making it harder for people to register DID methods. It should be easy to sort by criteria that matter to people..

Daniel Burnett: +1, we want to ensure that experimental did methods can get registered.

Manu Sporny: not if we make it all optional for registration :).

Orie Steele: We can't sort on criteria, unless we require people to provide them, which will make it harder for people to register..

Manu Sporny: The only mandatory thing for registration is a spec that meets DID Core... everything else is optional..

Drummond Reed: I mostly want to see the baseline criteria for registration be a v1.0 compliant DID method specification. All other registration attributes should be optional..

Orie Steele: The challenge I see is that the registry is attempting to do more than just being a registry. See JOSE/COSE, which is simple. If we add criteria, it will not just be about adding a link to a spec; it will also mean additional tasks for the editors..

Philippe le Hégaret: See "The Registry Track".

Orie Steele: To some degree, the Rubric has begun to capture some of the things we were also envisioning for the registry..

Drummond Reed: +1 to the DID Spec Registries NOT being the place that you go to for advice and guidance on selection of DID methods. We want the market to compete on offering those services..

Orie Steele: It might be better to keep it a very boring registry, and refer to the Rubric for a better way to add comparison, sorting, etc..

Ryan Grant: Orie: +1 to both adding a column allowing one to note a standards process underway (or achieved) in the registry, as well as to speaking to this more in the Rubric.

Drummond Reed: Yes, I like the idea of adding a column for being able to point to one or more published evaluations against the Rubric..

Orie Steele: maybe we can point from the registry to the rubric, instead of expanding the registry requirements, and move that consideration to the rubric..

Brent Zundel: I think we got some good data points. We seem to have agreement around a desire for registration to remain simple, to benefit those who are making those registrations happen (the editors).
… But we do need some way of making the registry easier to consume. A number of directions were proposed, I think we will be able to come to consensus..
… Thanks all for coming, we had some great conversations. Next week we will be back to our regular schedule of meetings..
… We invite you to join the DID WG..

Ryan Grant: thanks everyone!.

Brent Zundel: Thanks to scribes, thanks to all, see you next week..

