
feat: encode decode permits #10

Merged

Conversation

gentlementlegen (Member) commented Apr 8, 2024

Resolves #8

github-actions bot commented Apr 8, 2024

| Lines | Statements | Branches | Functions |
| --- | --- | --- | --- |
| Coverage: NaN% | Unknown% (0/0) | Unknown% (0/0) | Unknown% (0/0) |

JUnit

| Tests | Skipped | Failures | Errors | Time |
| --- | --- | --- | --- | --- |
| 0 | 0 💤 | 0 ❌ | 0 🔥 | 6.666 s ⏱️ |

gentlementlegen mentioned this pull request Apr 8, 2024
gentlementlegen (Member, Author)

To be resolved here: #7 (comment)

0x4007 (Member) commented Apr 10, 2024

I guess we will migrate to Cloudflare KV in another future task?

gentlementlegen (Member, Author)

What's going on in this PR:

It introduces helper functions to serialize / deserialize permits and transform them into a base64-encoded string. This is useful when we want to pass permit data around, for example to pay.ubq.fi as the claim embedded in the URL.

Having it inside this package gives us a single source of truth that can easily be reused by any of our projects if needed.

Another problem was that the generated Supabase classes were very old and still used the old DB schema. They have been regenerated, and the code / tests have been fixed accordingly.
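For illustration, here is a minimal sketch of what such helpers could look like; the function names, the permit shape and the claim URL format are assumptions for this example, not the actual exports of the package.

```typescript
// Hypothetical permit shape for the sketch; the real type in this package may differ.
interface Permit {
  tokenAddress: string;
  beneficiary: string;
  amount: string;
  nonce: string;
  deadline: string;
  signature: string;
}

// Serialize permits to JSON and encode them as a URL-safe base64 string.
export function encodePermits(permits: Permit[]): string {
  return Buffer.from(JSON.stringify(permits), "utf-8").toString("base64url");
}

// Decode the base64 string back into permit objects.
export function decodePermits(encoded: string): Permit[] {
  return JSON.parse(Buffer.from(encoded, "base64url").toString("utf-8")) as Permit[];
}

// Example: embedding the encoded permits in a claim URL for pay.ubq.fi (URL shape assumed).
const claim = encodePermits([
  {
    tokenAddress: "0x0000000000000000000000000000000000000000",
    beneficiary: "0x0000000000000000000000000000000000000000",
    amount: "1000000000000000000",
    nonce: "1",
    deadline: "9999999999",
    signature: "0x...",
  },
]);
const claimUrl = `https://pay.ubq.fi?claim=${claim}`;
```

Using the base64url variant keeps the string free of `+`, `/` and `=` padding, which would otherwise need extra escaping in a query string.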

gentlementlegen marked this pull request as ready for review April 10, 2024 11:49
gentlementlegen (Member, Author)

> I guess we will migrate to Cloudflare KV in another future task?

@0x4007 what part of Cloudflare KV do you refer to when talking about this repo?

gentlementlegen marked this pull request as draft April 10, 2024 15:27
gentlementlegen marked this pull request as ready for review April 10, 2024 15:48
molecula451 left a comment

wow, clean stuff, great PR it is the gently GENTLEman......


gitcoindev (Contributor) left a comment

I left a few questions, but overall a good job!

Review threads:
- knip.ts (outdated, resolved)
- src/handlers/encode-decode.ts (resolved)
- tests/encode-decode.test.ts (outdated, resolved)
0x4007 (Member) commented Apr 11, 2024

> I guess we will migrate to Cloudflare KV in another future task?
>
> @0x4007 what part of Cloudflare KV do you refer to when talking about this repo?

I want plugins ("agents") to default to using KV instead of Supabase.

This is because I want to make the developer experience as simple as possible, and I believe that setting up a KV store is less involved than setting up a Supabase database.

We should treat full database storage as the exception rather than the norm.

I also think it's worth implementing the changes in this repository (in another pull) because it will be used as an example for future developers.

0x4007 removed their request for review April 11, 2024 01:05
gentlementlegen (Member, Author)
> I want plugins ("agents") to default to using KV instead of Supabase.
>
> This is because I want to make the developer experience as simple as possible, and I believe that setting up a KV store is less involved than setting up a Supabase database.
>
> We should treat full database storage as the exception rather than the norm.
>
> I also think it's worth implementing the changes in this repository (in another pull) because it will be used as an example for future developers.

KV cannot replace what Supabase does; they have completely different purposes. While Supabase can replace KV, the opposite is not true. KV's only purpose is to store key-value pairs, hence its name. While this is perfectly suitable for simple things like remembering a user wallet, it is not suitable for updates, joins, or queries, since it only handles one-to-one relationships. And even in the case of our wallets, they contain multiple columns that cannot be represented through KV. To quote the docs:

> Workers KV provides low-latency, high-throughput global storage to your Cloudflare Workers applications. Workers KV is ideal for storing user configuration data, routing data, A/B testing configurations and authentication tokens, and is well suited for read-heavy workloads.

So in this scenario it would only be possible if we dropped the other columns of the wallets table, but that also means we would start to scatter data all around.
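To make the contrast concrete, here is a hedged sketch; the wallets table, its columns, the key scheme and the KV binding below are assumptions for the example, not this repository's actual schema.

```typescript
import { createClient } from "@supabase/supabase-js";

// Relational access: an (assumed) wallets table can be filtered, joined and updated in place.
const supabase = createClient(process.env.SUPABASE_URL as string, process.env.SUPABASE_KEY as string);

async function getWalletFromSupabase(userId: number) {
  return supabase
    .from("wallets") // assumed table name
    .select("address, user_id, created_at") // assumed columns
    .eq("user_id", userId)
    .single();
}

// Key-value access: one opaque value per key. Every "column" has to be folded into
// the value, and the whole value is rewritten on any change; there are no joins or queries.
interface KVNamespace {
  get(key: string): Promise<string | null>;
  put(key: string, value: string): Promise<void>;
}

async function getWalletFromKv(kv: KVNamespace, userId: number) {
  const raw = await kv.get(`wallet:${userId}`); // assumed key scheme
  return raw ? (JSON.parse(raw) as { address: string; createdAt: number }) : null;
}
```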

whilefoo (Contributor)

We also can't run plugins in GitHub Actions if we use Cloudflare KV (technically you can with a Worker proxy, but then you need to implement auth...).

Review threads:
- src/types/env.ts (outdated, resolved)
- src/types/plugin-input.ts (outdated, resolved)
- src/handlers/generate-erc721-permit.ts (outdated, resolved)
- src/handlers/encode-decode.ts (resolved)
- src/handlers/generate-erc20-permit.ts (outdated, resolved)
molecula451 merged commit 5673641 into ubiquity-os:development Apr 14, 2024
2 checks passed
gentlementlegen deleted the feat/encode-decode-permits branch April 14, 2024 02:01
0x4007 (Member) commented Apr 16, 2024

> And even in the case of our wallets, they contain multiple columns that cannot be represented through KV.
>
> So in this scenario it would only be possible if we dropped the other columns of the wallets table, but that also means we would start to scatter data all around.

My vision is to encode an array (or object) inside of the value for situations like this, @gentlementlegen.
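A rough sketch of that idea follows; the key scheme and value shapes are assumptions to illustrate the approach (and its read-modify-write cost), not an agreed design.

```typescript
// Minimal shape of a Workers KV binding, just for the sketch.
interface KVNamespace {
  get(key: string): Promise<string | null>;
  put(key: string, value: string): Promise<void>;
}

// Store several "columns" as one JSON object per key.
interface WalletRecord {
  address: string;
  updatedAt: number;
}

async function saveWallet(kv: KVNamespace, userId: number, address: string): Promise<void> {
  const record: WalletRecord = { address, updatedAt: Date.now() };
  await kv.put(`wallet:${userId}`, JSON.stringify(record)); // assumed key scheme
}

// Store one-to-many data as a JSON array inside a single value.
async function appendClaim(kv: KVNamespace, userId: number, claimId: string): Promise<void> {
  const key = `claims:${userId}`; // assumed key scheme
  const existing = await kv.get(key);
  const claims: string[] = existing ? JSON.parse(existing) : [];
  claims.push(claimId);
  // No partial updates in KV: the whole array is read, mutated and written back.
  await kv.put(key, JSON.stringify(claims));
}
```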

0x4007 (Member) commented Apr 16, 2024

> We also can't run plugins in GitHub Actions if we use Cloudflare KV (technically you can with a Worker proxy, but then you need to implement auth...).

Auth is not relevant to your point of choosing Supabase over KV.

Auth is required for Supabase as well.

@whilefoo

gentlementlegen (Member, Author) commented Apr 16, 2024

This is a very utopian scenario where we never have to change the shape of the content, and where the data is only needed in one single spot. What if tomorrow we have that encoded object and want to add / remove / change a field? KV doesn't have any migration system, so we would have to write scripts that update the data, and if that data is accessed by any other project, it will subsequently break every one of them. KV also doesn't have RLS or rules to control access to specific data and columns, which we currently use for fine-grained control of what is accessible, so we would end up with a lot of custom code to protect and control access to that data. With KV we cannot seed data either, which is very useful for local setup and testing.

Maybe this can be discussed somewhere else, but I am not sure I understand the need to move away from Supabase. It is extremely easy to set up, robust, free, and helps access the data in an ordered fashion. To me, KV seems helpful in scenarios where you have just a front-end and want to save some data specific to a user, with no relation to any other object, or settings for example. I feel like we are comparing a shovel to a rake, because these two technologies do not aim to achieve the same purpose and are not really interchangeable.

0x4007 (Member) commented Apr 16, 2024

> This is a very utopian scenario where we never have to change the shape of the content, and where the data is only needed in one single spot. What if tomorrow we have that encoded object and want to add / remove / change a field? KV doesn't have any migration system, so we would have to write scripts that update the data.

Yes, this is a valid concern, but at the same time I really think we should keep the data per plugin as simple as possible.

It's hard to anticipate how large a data structure any future plugin developer might need, but in such exceptional cases the plugin developer can switch the data storage solution to Supabase (or any other database).

However, for most cases I anticipate that a simple KV will suffice. Think of plugins like microservices: each one should only be concerned with storing simple data directly relevant to its simple capability.

Again, I am trying to make plugin development as convenient and accessible as possible for new developers so that our ecosystem can grow faster. It is our duty to make plugin development as easy as possible to get started with.

> And if that data is accessed by any other project, it will subsequently break every one of them. KV also doesn't have RLS or rules to control access to specific data and columns, which we currently use for fine-grained control of what is accessible, so we would end up with a lot of custom code to protect and control access to that data.

I anticipated that we would need to implement the custom code on the plugin level anyway. However, yes, this implies fewer safety measures without RLS etc. Again, I don't think it's a showstopper if the data is decentralized: if a single plugin's data is compromised, the entire network won't go down.

We can also consider making wrapper methods in our SDK that expose data useful to many plugins, for example collecting all data related to the completion of a task (including the associated pull request).

That would include every user, the list of their contributions, etc.

The SDK could also include generic boilerplate security-related features. This is a conversation for the future, but again the objective is to make plugin development as accessible as possible.

> With KV we cannot seed data either, which is very useful for local setup and testing.
>
> Maybe this can be discussed somewhere else, but I am not sure I understand the need to move away from Supabase. It is extremely easy to set up, robust, free, and helps access the data in an ordered fashion. To me, KV seems helpful in scenarios where you have just a front-end and want to save some data specific to a user, with no relation to any other object, or settings for example. I feel like we are comparing a shovel to a rake, because these two technologies do not aim to achieve the same purpose and are not really interchangeable.

I have intimate experience working with Supabase, so I am quite familiar with the product. But to be honest I only understand KV conceptually; I haven't worked with it on a serious project.

There is a chance I could be wrong about this, but conceptually it seems faster and simpler to set up if no UI interaction is necessary (Supabase, in my experience, requires signing in to the UI, establishing the schema, etc.).

rndquu (Member) commented Apr 16, 2024

Overall it seems that we need to:

1. Use a shared Supabase instance for all of the "core" plugins (assistive pricing, conversation rewards, wallet registration, etc...). Most of the plugins share the same entities (i.e. DB tables), so keeping everything in a single DB eliminates the hassle of sharing data between plugins.
2. Allow non-core plugin developers to use whatever they want (Supabase, Cloudflare KV, etc...).

Another point in favour of a single DB instance for "core" plugins is that many of our web services (pay.ubq.fi, onboard.ubq.fi, etc...) require data from the shared Supabase instance that we use right now. It is very easy to retrieve data from a Supabase instance using the public anon key. Moving to Cloudflare KV for each plugin would require implementing some backend service that would collect data from the different Cloudflare KV instances so that frontend services could read that data whenever they need to.
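For example, a frontend can read shared data with nothing more than the project URL and the public anon key (RLS policies decide what the anon role may see); the table and column names below are assumptions.

```typescript
import { createClient } from "@supabase/supabase-js";

// The anon key is safe to ship to the browser; row level security decides what it can read.
const supabase = createClient("https://<project>.supabase.co", "<public-anon-key>");

async function getWalletAddress(userId: number): Promise<string | null> {
  const { data, error } = await supabase
    .from("wallets") // assumed table name
    .select("address") // assumed column name
    .eq("user_id", userId)
    .single();
  if (error || !data) return null;
  return data.address;
}
```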

0x4007 (Member) commented Apr 16, 2024

> Overall it seems that we need to:
>
> 1. Use a shared Supabase instance for all of the "core" plugins (assistive pricing, conversation rewards, wallet registration, etc...).
> 2. Allow non-core plugin developers to use whatever they want (Supabase, Cloudflare KV, etc...).

I think this is a good proposal. So then let's focus on Supabase in the short term, as we are building the key/core/"official" plugins.

Then later we should make a boilerplate/example plugin for developers to fork which uses a simpler storage solution such as KV.

I am concerned that new developers will try to study our existing plugins to learn how to make their own. If we don't follow our own "best practices", it may lead to confusion. We will need to be very deliberate about the messaging (for example: "use this example plugin project as a starting point instead of studying our existing, more complex ones") for the sake of growing our developer community faster.

gentlementlegen (Member, Author)

@0x4007 I totally see your point and agree that isolating is good, but as @rndquu said, there is a lot of data shared between plugins, like wallets, users, GitHub info and so on. Microservices would generally use an orchestrator (Kubernetes for example) that helps pass data around, or some central point that facilitates communication between entities. It is a common scenario, just not with KV, because I really don't think it is meant to be used for this. And actually, most microservices spin up their own DB even for simple data, and would use NoSQL to make things lighter.

whilefoo (Contributor)

> Auth is not relevant to your point of choosing Supabase over KV.
>
> Auth is required for Supabase as well.
>
> @whilefoo

Yes, Supabase already has auth, but if you want to use Cloudflare KV with plugins running on GitHub Actions, then you need a Cloudflare Worker to act as a proxy to access the KV, and because Cloudflare Workers don't have built-in auth, you have to implement your own.
So we would have to create a proxy Worker that every plugin developer has to deploy and use in their plugin, which defeats the point of using KV for its simplicity.
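A sketch of what that proxy would involve; the route shape, the bearer-token check and the binding names are assumptions, shown only to illustrate the extra moving parts.

```typescript
// Cloudflare Worker acting as an authenticated proxy in front of a KV namespace,
// so that a plugin running in GitHub Actions can reach KV over HTTP.
interface KVNamespace {
  get(key: string): Promise<string | null>;
  put(key: string, value: string): Promise<void>;
}

interface Env {
  STORAGE: KVNamespace; // assumed KV binding name
  PROXY_TOKEN: string; // assumed shared secret, mirrored as a GitHub Actions secret
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Workers have no built-in auth, so the proxy has to check a token itself.
    if (request.headers.get("Authorization") !== `Bearer ${env.PROXY_TOKEN}`) {
      return new Response("Unauthorized", { status: 401 });
    }

    const key = new URL(request.url).searchParams.get("key");
    if (!key) return new Response("Missing key", { status: 400 });

    if (request.method === "GET") {
      const value = await env.STORAGE.get(key);
      return new Response(value ?? "", { status: value === null ? 404 : 200 });
    }
    if (request.method === "PUT") {
      await env.STORAGE.put(key, await request.text());
      return new Response("OK");
    }
    return new Response("Method not allowed", { status: 405 });
  },
};
```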

0x4007 (Member) commented Apr 16, 2024

> Microservices would generally use an orchestrator (Kubernetes for example) that helps pass data around, or some central point that facilitates communication between entities.

I don't have experience with designing microservices. We should follow best practices, and try our best to make it easy for developers to get started with building plugins.

Keyrxng pushed a commit to ubq-testing/permit-generation that referenced this pull request Oct 30, 2024
Development

Successfully merging this pull request may close these issues.

Permit data structure, encode / decode
6 participants