Feature Support: cluster gc #739

Closed. Wanted to merge 15 commits.

Conversation


@0zAND1z commented on Apr 4, 2019:

This pull request addresses feature request #628.

Tasks to be completed before merge:

  • Implement & review the API client for /repo/gc
  • Implement & review the RPC Call that needs to be executed in all the cluster nodes
  • Write tests


@hsanjuan self-assigned this on Apr 4, 2019.
@kishansagathiya (Contributor) left a comment:

There are a lot of pieces missing here.
Use #628 (comment) as a checklist.
This feature will touch many parts of the codebase, so take your time to figure out how they are connected.

How I would go about doing this:

  • start with the RPC API, then the REST API, then the client and CLI

ipfscluster.go (outdated)

@@ -68,6 +68,7 @@ type API interface {
 type IPFSConnector interface {
 	Component
 	ID(context.Context) (*api.IPFSID, error)
+	GC(context.Context) (*api.IPFSRepoGc, error)
Contributor:

Make it RepoGC and move it after RepoStat
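That is, something like the following (a sketch; RepoStat's signature is taken from the existing interface, the RepoGC return type from this PR):

type IPFSConnector interface {
	Component
	ID(context.Context) (*api.IPFSID, error)
	// ...
	RepoStat(context.Context) (*api.IPFSRepoStat, error)
	RepoGC(context.Context) (*api.IPFSRepoGc, error)
}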

Contributor:

We have to implement this as well

Author:

Renamed and moved. The implementation skeleton has been added in cluster.go.

cluster.go (outdated)

@@ -605,6 +605,11 @@ func (c *Cluster) ID(ctx context.Context) *api.ID {
 	}
 }

+// GC performs garbage collection in the cluster.
+func (c *Cluster) GC(ctx context.Context) *api.IPFSRepoGc {
+	// TODO: Need clarity on how to return the response of c.ipfs.GC(ctx)
Contributor:

Make IPFSRepoGC RPC calls to all peers. See how that's done in other cluster methods.

Resolved review threads: .gitignore (outdated) ×2, cluster.go (outdated) ×2, ipfscluster.go (outdated), ipfscluster.go, rpc_api.go, rpc_api.go (outdated) ×2
lanzafame and others added 4 commits on April 18, 2019 (each Co-Authored-By: 0zAND1z <11145839+0zAND1z@users.noreply.github.com>), including one titled "- renaming".
// repoGcHandler launches the garbage collection sequence in the cluster.
// Based on https://docs.ipfs.io/reference/api/http/#api-v0-repo-gc
func (proxy *Server) repoGcHandler(w http.ResponseWriter, r *http.Request) {
	// TODO: Need help/direction on finishing the handler
Author:

@kishansagathiya, who can help me here? Could you please tag them for follow-up? Thanks.

Author:

@lanzafame, any suggestions here? Thanks.

Contributor:

I think I already mentioned this in #628 (comment).

You will have to create a cluster method (say RepoGC) that makes RPC calls (say IPFSRepoGC) to each peer. You will need to write an RPC API endpoint that calls the cluster method described above (RepoGC).
Your handlers (in restapi.go and ipfsproxy.go) would make the RepoGC RPC call (the cluster one, not IPFSRepoGC) to themselves, as sketched below.
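A minimal sketch of that layering, assuming the PR's api.IPFSRepoGc type (aliased as types in restapi.go) and go-libp2p-gorpc; sendResponse/autoStatus are modeled on the neighbouring REST handlers and may differ, and Cluster.RepoGC is assumed to return (*api.IPFSRepoGc, error):

// restapi.go (sketch): the handler only makes a "RepoGC" RPC call to the
// local peer (""); fanning out to other peers happens inside Cluster.RepoGC.
func (api *API) repoGCHandler(w http.ResponseWriter, r *http.Request) {
	var result types.IPFSRepoGc
	err := api.rpcClient.CallContext(
		r.Context(),
		"",        // empty peer ID: call ourselves
		"Cluster", // RPC service name
		"RepoGC",  // the cluster-level method, not IPFSRepoGC
		struct{}{},
		&result,
	)
	api.sendResponse(w, autoStatus, err, result)
}

// rpc_api.go (sketch): the RPC endpoint just forwards to the Cluster method.
func (rpcapi *RPCAPI) RepoGC(ctx context.Context, in struct{}, out *api.IPFSRepoGc) error {
	res, err := rpcapi.c.RepoGC(ctx)
	if err != nil {
		return err
	}
	*out = *res
	return nil
}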

Author:

Thanks Kishan, working on the changes.

Author:

I had a small confusion here: should I use .Call() or .CallContext()?

I've just committed the code using .CallContext(), but I'm looking for some input on completing the code.

Collaborator:

CallContext
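(A sketch of the difference: CallContext threads a cancellable context through the RPC call, so a long-running GC can be aborted or given a deadline, which plain Call cannot; the timeout below is illustrative.)

// rpcClient is a go-libp2p-gorpc *rpc.Client; "" targets the local peer.
func callRepoGC(ctx context.Context, rpcClient *rpc.Client) (api.IPFSRepoGc, error) {
	ctx, cancel := context.WithTimeout(ctx, time.Minute)
	defer cancel()

	var reply api.IPFSRepoGc
	err := rpcClient.CallContext(ctx, "", "Cluster", "RepoGC", struct{}{}, &reply)
	return reply, err
}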

@@ -252,6 +252,21 @@ cluster peers.
 		return nil
 	},
 },
+{
+	Name: "gc",
Author:

Is it okay to have gc as a subcommand? Something like the sketch below is what I had in mind.
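(For illustration, a sketch of the subcommand in ipfs-cluster-ctl's urfave/cli command tree; globalClient.RepoGC and formatResponse are hypothetical helpers modeled on the neighbouring commands:)

{
	Name:  "gc",
	Usage: "run garbage collection on the IPFS repos of all cluster peers",
	Action: func(c *cli.Context) error {
		// Triggers repo gc cluster-wide and prints the aggregated result.
		resp, cerr := globalClient.RepoGC(ctx)
		formatResponse(c, resp, cerr)
		return nil
	},
},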

Contributor:

My opinion is to create a new command, ipfs-cluster-ctl repo gc, but this comes much later (see the checklist in #628 (comment)).

Contributor:

@kishansagathiya I disagree that we should have another command just for gc. I think it makes more sense for it to be under the peers command, as this will be broadcast to all peers.

Collaborator:

I lean towards ipfs-cluster-ctl peers gc too.

Contributor:

I think of the peers subcommand as peerset management (ls, rm). This doesn't get us info about, or change anything in, the peerset.

> I think it makes more sense to be under the peers command as this will be broadcast to all peers

Out of the existing commands, peers is surely the most appropriate. But many of our commands deal with all peers, and by that logic many commands could fit into peers. If I keep the narrow view of peers, gc doesn't fit as well as ls or rm do. It's not intuitive that peers would have gc, especially for users who have used ipfs. But when they see repo gc, they will know what it does.

At first I thought of ipfs-cluster-ctl gc, but repo gc leaves room for other repo commands to be added later.

Contributor:

Also, this feature is different from cluster's primary use (pinset management). With this we would be adding one more resource to manage, and I am inclined to call that resource repo rather than peers.

Contributor:

At this point it doesn't make sense to add a repo command for a single subcommand, and it shouldn't be a top-level command. It does fit the model of managing peers and their associated ipfs nodes.

Author:

Closing this thread for now...

Collaborator:

So @kishansagathiya does have a point. After all, it's not the cluster peers that are being gc'ed but the ipfs ones (the reason I don't like repo gc either). We could do ipfs-cluster-ctl ipfs gc, where the ipfs subcommand runs commands on the ipfs daemons. Additional commands for the future: ipfs-cluster-ctl ipfs connect (swarm connect) or... (haven't thought of more).

Resolved review thread: .gitignore (outdated)

Resolved review thread: ipfscluster.go
"",
"Cluster",
"IPFSRepoGc",
&peers,
Author:

I just copied this from repo stat; I'm not sure if this is the right way to do it. Can somebody confirm whether this is alright?


Resolved review thread: ipfscluster.go
ctx = trace.NewContext(c.ctx, span)

// ignore error since it is included in response object
ipfsGC, err := c.ipfs.RepoGC(ctx)
Collaborator:

Here, instead of calling the local IPFS RepoGC, you will need to make a broadcast request to every peer. There are examples in this file (anything with rpc.MultiCall); a sketch follows. Otherwise, leave a comment and I or someone else will fix it when the rest is ready.
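A sketch of that broadcast using go-libp2p-gorpc's MultiCall (the peer listing via c.consensus.Peers and the aggregate return type are assumptions):

// RepoGC broadcasts IPFSRepoGC to every cluster peer instead of only
// garbage-collecting the local ipfs daemon.
func (c *Cluster) RepoGC(ctx context.Context) ([]*api.IPFSRepoGc, error) {
	members, err := c.consensus.Peers(ctx)
	if err != nil {
		return nil, err
	}

	// One context and one reply slot per destination peer.
	ctxs := make([]context.Context, len(members))
	replies := make([]*api.IPFSRepoGc, len(members))
	replyIfaces := make([]interface{}, len(members))
	for i := range members {
		ctxs[i] = ctx
		replies[i] = &api.IPFSRepoGc{}
		replyIfaces[i] = replies[i]
	}

	errs := c.rpcClient.MultiCall(
		ctxs,
		members,
		"Cluster",
		"IPFSRepoGC", // per-peer method that GCs the local ipfs daemon
		struct{}{},
		replyIfaces,
	)
	for i, e := range errs {
		if e != nil {
			logger.Errorf("repo gc on %s failed: %s", members[i], e)
		}
	}
	return replies, nil
}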

@kishansagathiya (Contributor):
I will finish this PR with #777

@0zAND1z I have squashed your work into a commit and added it to that PR.
Thanks a lot for this!
