NIP-105 API Service Marketplace#780

Open
CoachChuckFF wants to merge 2 commits into nostr-protocol:master from Team-Pleb-TabConf-2023:NIP-105

Conversation

@CoachChuckFF

NIP-105 defines kind:31402 for broadcasting API services, endpoints and their costs in mSats.
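For illustration, a kind:31402 offer event might look like the following TypeScript sketch. Only `kind` comes from the description above; the endpoint, price field, and tags are hypothetical assumptions, not taken from the NIP text.

```typescript
// Hypothetical sketch of a kind:31402 service offer event.
// Only `kind` is taken from the proposal text; the endpoint, the
// cost field, and the "d" tag are illustrative assumptions.
const serviceOffer = {
  kind: 31402,
  content: JSON.stringify({
    endpoint: "https://api.example.com/translate", // hypothetical service URL
    costMsats: 5000,                               // advertised price in mSats
  }),
  tags: [["d", "translate-v1"]], // replaceable-event identifier (assumed)
};

console.log(serviceOffer.kind); // 31402
```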

It was written, implemented and placed in the 2023 TABconf Hackathon.

@uncleJim21

Why NIP-105 and not NIP-90 Data Vending Machines?

Someone asked me what are the advantages of NIP-105 over NIP-90 (https://github.com/nostr-protocol/nips/blob/vending-machine/90.md). That is a great question. Allow me to dig in.

Let's start by saying there are many ways to approach the problem and we encourage competition and different approaches with different trade offs. Let a thousand flowers bloom. At the same time, there are significant upsides to API Marketplace (we call them Data Buffets) over Data Vending Machines that are worth noting:

  • DVMs are bid based, whereas Data Buffets are designed to be offer based: We feel that the bid-based solution, while possibly advantageous in some situations, is not typically the most advantageous one. When you walk into Home Depot to buy a 2x4 you don't accept bids from 10 vendors. You find the aisle and choose from a selection. That keeps the sellers more honest due to revealing their offer beforehand and it lowers the sellers' overhead (they don't have to hire a human or bot workforce to do outbound sales). We feel the offer-based approach will promote efficiency both in the transaction mechanics (less back-and-forth bidding against each other on a per-job basis) and market efficiency (a race to the bottom, especially for commodity-like services).
  • DVMs Publish Buyer/Seller Back and Forth on the Relays - That seems unnecessary and prone to bloat. Data Buffets instead communicate out of band. Once an offer is published, the buyer of services communicates directly with the service provider, leaving no footprint on the nostr relays and foregoing the latency, cost, bloat and privacy issues associated with them (see next point).
  • DVMs Inherit Some Technical Debt from Their Complex Approach - For example, DVMs provision for encrypting jobs to ensure user privacy. This is a disadvantage inherited from the point above. In an offer-based system, encryption is not necessary because all comms between the service provider & the buyer are direct & happen out of band, outside nostr.
  • Data Buffets are optimized to make the service side as simple and efficient as possible - It is possible to run a simple service with about 300 lines of typescript and no front end. By doing it this way, we feel that we enable wider competition and thereby a race to the bottom on common commodity services.
  • Data Buffets Incentivize Consolidation of Complexity Inside the Client - Due to the point above, Data Buffets encourage sophistication to consolidate inside the client applications. We feel this stays true to the "dumb pipes, smart clients" design principle promoted by nostr. This approach gives users more control over their privacy, cost management and application composability.
  • What is the potential downside of Data Buffets? DVMs by design seem to remain relatively neutral and agnostic regarding the structuring of specific APIs and their business models; Data Buffets, on the other hand, are very specific about the way service providers structure and broadcast their offerings. With this standardization we hope to bring the efficiency and scalability gains previously discussed. While this may be inconvenient at times and may even preclude certain applications or business models, we see many advantages accruing to the majority of applications. That is a trade-off we feel is worthwhile at this stage, and one that could possibly be overcome in the future with either a revision of this or other NIPs, or publication of a new NIP at such a time as those applications emerge.
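The out-of-band flow argued for above can be sketched roughly as follows. The offer shape, function names, and endpoint are assumptions for illustration, not part of any spec; step 1 would really be a relay subscription, and step 2 is an ordinary HTTP request that never touches a relay.

```typescript
// Hedged sketch of the "Data Buffet" flow: discover an offer via nostr,
// then talk to the service directly over HTTP. All names are illustrative.

interface ServiceOffer {
  endpoint: string;   // where the job request is sent, out of band
  costMsats: number;  // price advertised up front in the offer event
}

// Step 1: normally a relay subscription for kind:31402 events; stubbed here.
function discoverOffer(): ServiceOffer {
  return { endpoint: "https://api.example.com/job", costMsats: 5000 };
}

// Step 2: the job itself goes straight to the provider, leaving no
// footprint on the relays.
async function requestJob(offer: ServiceOffer, payload: unknown): Promise<Response> {
  return fetch(offer.endpoint, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(payload),
  });
}
```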

@pablof7z
Member

pablof7z commented Oct 1, 2023

I'll just comment on a few points that caught my eye on the comparison of this approach vs DVMs because I think there are many misconceptions (e.g. DVMs are NOT bid based).

You find the aisle and choose from a selection. That keeps the sellers more honest due to revealing their offer beforehand and it lowers the sellers' overhead (they don't have to hire a human or bot workforce to do outbound sales).

Service Providers in DVMs are explicitly showing their hand when/if they provide a bid for a job, and can provide different bids on a per-job basis if they so choose. Instead of being required to have a single price, they can create a price (again, if they so choose) that is specific to the peculiarities or risk profile of the job/customer.

Furthermore, each DVM is creating a price history of its jobs, and this transparency brings competition by wildly reducing information asymmetry.

We feel the offer-based approach will promote efficiency both in the transaction mechanics (less back-and-forth bidding against each other on a per-job basis) and market efficiency (a race to the bottom, especially for commodity-like services)

  • There is no back and forth price negotiation in the DVM spec 🤷‍♂️
  • Again, DVMs are not bid-based, they are intent-based.
  • The lack of per-job pricing means that service providers must price to the average+margin, since they can't granularly price the true cost of each job.

DVMs Publish Buyer/Seller Back and Forth on the Relays - That seems unnecessary and prone to bloat.

This is the entire design of Nostr. Communication happens through relays, and the byproduct of the data can be useful to others or permit others to build job chaining based on other DVMs outputs (as is already happening)

DVMs Inherit Some Technical Debt from Their Complex Approach - For example, DVMs provision for encrypting jobs to ensure user privacy.

There is no encryption in the DVM spec as I think it's a bad idea.

Data Buffets are optimized to make the service side as simple and efficient as possible - It is possible to run a simple service with about 300 lines of typescript and no front end.

There is no frontend to DVMs 🤔 My skeleton DVM is probably less than 300 LOC. And I think this gets to the crux of the main difference between this proposal and DVMs, which I don't think is whether this is bid- vs ask-based (again, a misunderstanding of NIP-90, since the bid tag is optional).

I see many issues with this proposal, but the main one is that the payload to generate a request is really hard to reliably compute and very specific to each endpoint, which means that there is little difference between using this or just hardcoding integration with a very specific endpoint. What's the upside of doing this vs just integrating against specific endpoints? There is no discoverability.

Integrating with a specific endpoint has the benefit that implementation complexity remains bounded to the particular endpoint, whereas doing this requires at least one higher level of abstraction for no added benefit.

@cmdruid

cmdruid commented Oct 2, 2023

I'll just comment on a few points that caught my eye on the comparison of this approach vs DVMs because I think there are many misconceptions (e.g. DVMs are NOT bid based).

This is the first step in the DVM spec:

A request to have data processed, published by a customer. This event signals that an npub is interested in receiving the result of some kind of compute.

This is commonly called a RFP or Request for Proposal in my industry. The purpose of an RFP is to solicit bids, which also seems to be the purpose of this first step in the DVM spec.

Can you better explain your position here? How are DVMs not bid based?

This is the entire design of Nostr. Communication happens through relays, and the byproduct of the data can be useful to others or permit others to build job chaining based on other DVMs outputs (as is already happening)

Websockets are not that great for chaining services together. There is no transaction guarantee (async protocols are not good for this) and no spec for status / error states. Relays are already unreliable and not very communicative when message delivery fails.

This problem compounds with every job in the pipeline, and the client has to shoulder the burdens of these problems since everything is coordinated through them.

I think it's better to chain APIs together the old fashioned way. HTTP is a robust spec and it works great. Nostr is great for discovery. We designed NIP-105 to leverage the strengths of both Nostr and traditional HTTP / REST. Also I am incredibly lazy and do not want to reinvent the wheel.
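As a rough sketch of the HTTP-style chaining described here, stages can be composed as plain request/response calls. The stage functions below are stand-ins (a real pipeline would issue a fetch() per stage); the point is that each step returns or throws synchronously from the caller's perspective, unlike fire-and-forget relay messages.

```typescript
// Each stage stands in for an ordinary HTTP call to one service endpoint.
type Stage = (input: string) => string;

const transcribe: Stage = (url) => `transcript of ${url}`; // stand-in service
const summarize: Stage = (text) => `summary of ${text}`;   // stand-in service

// Chain stages the old-fashioned way: run them in order, and let a thrown
// error abort the pipeline immediately instead of silently failing mid-relay.
function pipeline(input: string, stages: Stage[]): string {
  return stages.reduce((acc, stage) => stage(acc), input);
}

console.log(pipeline("audio.mp3", [transcribe, summarize]));
// → summary of transcript of audio.mp3
```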

There is no encryption in the DVM spec as I think it's a bad idea.

I agree. I believe this is a reversal of your suggestion here, which is why it was brought up by @uncleJim21, but PR comments aren't gospel so I don't feel it's worth debating imo.

There is no frontend to DVMs 🤔 My skeleton DVM is probably less than 300 LOC.

I believe any client-side logic can be fairly categorized as 'front-end', and logic meant to solicit one-to-many rounds of communication for a bidding process (or compute request or w/e you want to call it) sounds quite heavy and a big ask from the client-side developer.

If you want to compare apples-to-apples, I would compare LOC for a client library that implements the proposed spec in its intended use-case.

In regards to NIP-105, we are only using offers to discover service endpoints, everything after that uses standard HTTP requests which doesn't require anything fancy.

I see many issues with this proposal, but the main one is that the payload to generate a request is really hard to reliably compute and very specific to each endpoint, which means that there is little difference between using this or just hardcoding integration with a very specific endpoint.

I'm not sure what you mean by this. JSON schemas are a very common standard. Everybody uses them and there are tons of libraries for generating type-safe API calls with them. For example, zod has several libraries for consuming JSON schemas.

I can't imagine why you wouldn't want to use a schema to define your API.

I plan to add a hash of the schema to the NIP-105 spec so that it is easy to search for standardized endpoints that can directly plug into another service.
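One way the schema hash could work is sketched below. This is an assumption, not the spec: the NIP does not yet define the hash construction, so the canonicalization (recursive key sorting) and the choice of sha256 are illustrative.

```typescript
import { createHash } from "node:crypto";

// Hedged sketch of the schema-hash idea: hash a canonical serialization of a
// JSON schema so clients can search for services sharing an identical
// endpoint contract. The schema and hashing choices are assumptions.

const schema = {
  type: "object",
  properties: { text: { type: "string" } },
  required: ["text"],
};

// Naive canonicalization: sort object keys recursively before serializing,
// so semantically identical schemas hash to the same value.
function canonicalize(value: unknown): string {
  if (Array.isArray(value)) return `[${value.map(canonicalize).join(",")}]`;
  if (value && typeof value === "object") {
    const entries = Object.keys(value)
      .sort()
      .map((k) => `${JSON.stringify(k)}:${canonicalize((value as any)[k])}`);
    return `{${entries.join(",")}}`;
  }
  return JSON.stringify(value);
}

const schemaHash = createHash("sha256").update(canonicalize(schema)).digest("hex");
```

Two services publishing the same schema would then advertise the same hash, making "directly pluggable" endpoints discoverable by a simple tag filter.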

Thank you for the feedback, and also thank you for your many contributions to nostr. We may disagree on a few things but I don't think we are competing, so it would be great to hear more feedback on both NIP-90 and NIP-105 and share ideas.

@TheCryptoDonkey

We've been building on kind 31402 independently and have 8 implementations in production:

Our spec uses a different tag schema: name, url, pmi (payment method identifier), price, and t in tags for relay-side filtering, with payment-agnostic pmi rails (l402, x402, cashu, xcashu) instead of Lightning-only. We also support multiple capabilities per event.
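Under that tag schema, a capability advertisement might look like the sketch below. The specific values, tag ordering, and endpoint are illustrative assumptions; only the tag names (name, url, pmi, price, t) and kind 31402 come from the description above.

```typescript
// Illustrative kind 31402 event using the tag schema described above.
// Values and the endpoint are hypothetical; only the tag names are sourced.
const capabilityEvent = {
  kind: 31402,
  tags: [
    ["name", "image-captioning"],
    ["url", "https://api.example.com/caption"], // hypothetical endpoint
    ["pmi", "l402"],  // payment method identifier: l402 | x402 | cashu | xcashu
    ["price", "21"],
    ["t", "ai"],      // topic tag enabling relay-side filtering
  ],
  content: "",
};

// Relay-side filtering would match on the indexed "t" tag, e.g.
// { kinds: [31402], "#t": ["ai"] } in a standard nostr REQ filter.
const pmi = capabilityEvent.tags.find((t) => t[0] === "pmi");
console.log(pmi?.[1]); // l402
```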

We will be submitting a PR for our NIP soon, but are happy to chat / collaborate if you are interested.

Cheers, The Crypto Donkey

6 participants