Add graphql replication endpoints #100
Conversation
Hi @pietgeursen! I'm working on storage methods tomorrow and some of next week. So far I've been working on the db structure and simple inserts and gets, thinking more from the node internals rather than replication. I'd like to move onto this though 👍 is there somewhere I can see what you are wanting? Or if not, and you feel like writing me a wish-list, I can try to make it happen 😄
Force-pushed from 7bed368 to 484fda3
Force-pushed from a1c54e4 to d784469
I'm still working on the storage provider... got quite far with it and wondering if we want an AuthorStore. Currently we don't have a nice way to query for what authors exist, I don't think we ever needed it yet... but if I build that into the storage provider, I wondered if you wanted anything in particular? I know you are gunna be using author aliases, but I'm not sure if this is something very temporary, or if you'd want it storing?
My thought at the moment is that author aliases will be very temporary, held in an LRU cache.

Currently we don't have a nice way to query for what authors exist; I think the client part of replication will want this. For the MVP we can just get every author we know about and ask to replicate them.
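The bounded LRU cache mentioned above can be sketched with just the standard library. Everything below (the type and method names, and mapping a `u64` alias id to a hex-encoded public key) is a hypothetical illustration, not aquadoggo's actual implementation:

```rust
use std::collections::{HashMap, VecDeque};

/// Hypothetical sketch: a bounded LRU cache for temporary author aliases.
struct AliasCache {
    capacity: usize,
    map: HashMap<u64, String>, // alias id -> author public key (hex)
    order: VecDeque<u64>,      // least-recently-used alias at the front
}

impl AliasCache {
    fn new(capacity: usize) -> Self {
        Self {
            capacity,
            map: HashMap::new(),
            order: VecDeque::new(),
        }
    }

    fn insert(&mut self, alias: u64, author: String) {
        if self.map.contains_key(&alias) {
            // Refresh recency for an existing alias.
            self.order.retain(|a| *a != alias);
        } else if self.map.len() == self.capacity {
            // Evict the least-recently-used alias to make room.
            if let Some(old) = self.order.pop_front() {
                self.map.remove(&old);
            }
        }
        self.order.push_back(alias);
        self.map.insert(alias, author);
    }

    fn get(&mut self, alias: u64) -> Option<&String> {
        if self.map.contains_key(&alias) {
            // A hit moves the alias to the most-recently-used position.
            self.order.retain(|a| *a != alias);
            self.order.push_back(alias);
        }
        self.map.get(&alias)
    }
}
```

A real implementation would likely reach for an existing crate such as `lru` rather than hand-rolling eviction; the sketch only shows the intended behaviour, i.e. inserting beyond capacity drops the least-recently-used alias.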
…On Wed, May 18, 2022 at 2:26 PM Sam Andreae ***@***.***> wrote:
I'm still working on the storage provider... got quite far with it and
wondering if we want an AuthorStore. Currently we don't have a nice way
to query for what authors exist, I don't think we ever needed it yet....
but if I build that into the storage provider, I wondered if you wanted
anything in particular? I know you are gunna be using author aliases, but
I'm not sure if this is something very temporary, or if you'd want it
storing?
Force-pushed from 1598b1b to 4f47bc9
Codecov Report

```
@@            Coverage Diff             @@
##           development    #100      +/-   ##
==============================================
- Coverage        94.22%  88.60%      -5.62%
==============================================
  Files               28      42        +14
  Lines             1938    2202       +264
==============================================
+ Hits              1826    1951       +125
- Misses             112     251       +139
```

Continue to review the full report at Codecov.
Reviewing this atm, please hold merging.
Quite exciting to see replication enter the picture! And how fitting that this PR has that nice round number. In my comments I mentioned when things can also be handled in a follow-up. The main thing for me here is that I would like to better understand the approach of generating the schema beforehand, or at least whether it is fundamentally in conflict with the way I've started to implement dynamic queries in #141.
If anybody wants to try:

```graphql
mutation put {
  publishEntry(
    entryEncoded: "00bedabb435758855968b3e2de2aa1f653adfbb392fcf9cb2295a68b2eca3cfb030101a200204b771d59d76e820cbae493682003e99b795e4e7c86a8d6b4c9ad836dc4c9bf1d3970fb39f21542099ba2fbfd6ec5076152f26c02445c621b43a7e2898d203048ec9f35d8c2a1547f2b83da8e06cadd8a60bb45d3b500451e63f7cccbcbd64d09",
    operationEncoded: "a466616374696f6e6663726561746566736368656d61784a76656e75655f30303230633635353637616533376566656132393365333461396337643133663866326266323364626463336235633762396162343632393331313163343866633738626776657273696f6e01666669656c6473a1676d657373616765a26474797065637374726576616c7565764f68682c206d79206669727374206d65737361676521"
  ) {
    logId,
    seqNum,
    backlink,
    skiplink
  }
}
```

```graphql
query retrieve {
  entryByLogIdAndSequence(
    logId: 1,
    sequenceNumber: 1,
    author: {
      publicKey: "bedabb435758855968b3e2de2aa1f653adfbb392fcf9cb2295a68b2eca3cfb03"
    }
  ) {
    entry,
    payload
  }
}
```
aquadoggo/src/lib.rs (outdated)

```diff
 pub mod db;
 mod errors;
-mod graphql;
+pub mod graphql;
```
We should only make these `pub` if we want library consumers to use them directly. As they are only `pub` for the `dump_gql_schema` bin, I would suggest instead exporting just a utility function that returns the schema as a `String`, which is called from the bin, rather than these complete modules.
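The suggestion above could look roughly like the following sketch. The function names (`build_sdl`, `gql_schema_string`) and the placeholder SDL literal are hypothetical; in aquadoggo the body would presumably render the real schema (e.g. via async-graphql's SDL export) instead:

```rust
// Hypothetical sketch: the `graphql` module stays private and only a
// String-returning helper is exported from the crate root.
mod graphql {
    // In aquadoggo this would call into the real schema builder; a
    // placeholder SDL string stands in for it here.
    pub(crate) fn build_sdl() -> String {
        "type Query { entryByLogIdAndSequence: Entry }".to_string()
    }
}

/// The only public item the schema-dumping bin would need to call.
pub fn gql_schema_string() -> String {
    graphql::build_sdl()
}
```

The `dump_gql_schema` bin would then just print the result of `gql_schema_string()`, and the `graphql` module itself never needs to be `pub`.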
I made it `pub` assuming it's going to be useful if/when I get to using qp2p.
```rust
/// The public key of an entry
#[derive(Clone, Debug, Serialize, Deserialize)]
pub struct PublicKey(pub Author);
```
I like that you are calling this `PublicKey` and not `Author` :D
Thinking a bit about the naming in general: you chose to use "payload" instead of "operation", and you use both "author", which bamboo uses, and "public key", which p2panda prefers. We chose to go with "public key" in order to distinguish the data type public key from human authors, who can have multiple public keys. Also see the second bullet point in the "Authors" section here.
I think it might improve clarity to use the same naming as in the rest of p2panda (operation instead of payload and consistently "public key"), also because with the addition of schemas this replication system will not be agnostic of the payload content (otherwise this could also just use bamboo's naming).
Maybe I am being too nitpicky though - @sandreae @adzialocha do you have an opinion?
(can be a follow-up)
I don't have a strong feeling on this right now; we still need to do some renaming in `p2panda-rs`, so maybe we could review this again later?
Clippy doesn't seem to be happy, maybe that needs fixing before merging, otherwise it's good to go! ✌️✨🌟
A few of those clippy errors are coming from

I've tidied up all the clippy warnings. I had to add a few doc strings in @cafca's graphql client stuff, make sure you're ok with them.

😌
Adds graphql endpoints useful for replication.
📋 Checklist

- `CHANGELOG.md`
Closes #114, #116 and #112.

If at all possible I'm going to turn PR review comments requesting changes into issues that can be addressed later in small PRs. This PR is huge (sorry!) and I'm scared of the time it will take to keep it mergeable if it drags out. So maybe let me know which things must be fixed before this PR is merged and what can be dealt with separately.