
feat(core): add libp2p resource manager #8735

Merged: 1 commit merged into update-libp2p-v018 on Mar 4, 2022

Conversation

@schomatis (Contributor) commented Feb 15, 2022

Built on top of #8680.

Commands:

ipfs swarm stats
ipfs swarm limit

TODO:

  • Decide where to put these commands and if they should go through interface-go-ipfs-core.
  • Decide where the limit.json config file is going to live: hardcoded in the repo root, or configured through go-ipfs-config.
  • Confirm if ViewPeer is working.

Closes #8722.

@schomatis schomatis self-assigned this Feb 15, 2022
@schomatis (Contributor, Author)

@marten-seemann Initial take on stats command to get some feedback before moving on with limit and then deciding where to place this API.

@marten-seemann (Member) left a comment

With my limited knowledge of the code base, this looks good to me.

Review threads on core/core.go and core/node/libp2p/rcmgr.go (outdated, resolved)
@vyzo (Contributor) left a comment

Just like lotus, looking fine.

@schomatis (Contributor, Author)

@marten-seemann

Were you able to test the command (actually running it)? Is it working as expected?

If the answer is yes I'll proceed to plumb the limit command.

After that, I'll need you to define in the original thread where exactly this command is going to live, and whether it should have its own API in github.com/ipfs/interface-go-ipfs-core or be processed directly by the command's logic. (Please discuss that there in the issue, to keep this PR for implementation discussions if possible.)

@marten-seemann (Member)

Were you able to test the command (actually running it)? Is it working as expected?

Everything works as expected, except for the peer command. It either fails to parse a (valid) peer ID (e.g. Error: invalid peer ID: QmfCY8CGoFpyDZvKvBkGqoxsqzTDgG8VTAjKzBimJqY5UU: length greater than remaining number of bytes in buffer), or it reports all 0s for a peer.

@schomatis (Contributor, Author)

length greater than remaining number of bytes in buffer

Yeah, it's doing that for the CIDv0 addresses (Qm...); I'm not sure why. It might be a deeper issue I'll need to look into later.

ipfs swarm stats peer:QmNWpN4P7nb81zyKHoSaPx4aWdAV9po4UTpjypvk4eA377 # Fails
ipfs swarm stats peer:12D3KooWSriosYwnx52vWPwPmGoLA1F1seBXCUf5UsMSMT1bpJpX # Works

or it reports all 0s for a peer.

I'm not seeing that in local test:

Test output (truncated):
$ ipfs swarm stats all | jq

{
  "Peers": {
    "12D3KooWAta4KZHRYhZPZko6Nbt6sMMgWy9Mgm42dJLdxcBL6HtN": {
      "Memory": 8323,
      "NumConnsInbound": 0,
      "NumConnsOutbound": 0,
      "NumFD": 0,
      "NumStreamsInbound": 0,
      "NumStreamsOutbound": 0
    },
    "12D3KooWB3FXPG1RAqgZuA5iDZymTCHBFD1vcwfY2cjPLJmq7udi": {
      "Memory": 0,
      "NumConnsInbound": 0,
      "NumConnsOutbound": 1,
      "NumFD": 0,
      "NumStreamsInbound": 0,
      "NumStreamsOutbound": 1
    },
    "12D3KooWB7LPTnH7kDjuDYF6A81RUQh3NXAN9tpFi2NzunVc3KC1": {
      "Memory": 0,
      "NumConnsInbound": 0,
      "NumConnsOutbound": 1,
      "NumFD": 0,
      "NumStreamsInbound": 1,
      "NumStreamsOutbound": 1
    },
    "12D3KooWBBGfqDkVEddhq8j6prz3fDwbznB1j3TaS2dgpcTsa2xB": {
      "Memory": 0,
      "NumConnsInbound": 0,
      "NumConnsOutbound": 1,
      "NumFD": 0,
      "NumStreamsInbound": 0,
      "NumStreamsOutbound": 1
    },
    "12D3KooWBJhfKCkEa6jtohx2p3efqEYAfKkS43cqXHN196Ea3TAq": {
      "Memory": 0,
      "NumConnsInbound": 0,
      "NumConnsOutbound": 0,
      "NumFD": 0,
      "NumStreamsInbound": 0,
      "NumStreamsOutbound": 0
    },
    "12D3KooWBRLmSWB4c235i5Xy9afRm17hfGaxocU33F9fLcyhmA4s": {
      "Memory": 0,
      "NumConnsInbound": 0,
      "NumConnsOutbound": 1,
      "NumFD": 0,
      "NumStreamsInbound": 0,
      "NumStreamsOutbound": 1
    },
    "12D3KooWBV2BkQwD3vdMp46JdQxPi1nw9SLRTDBo5qdjy6oPpnvJ": {
      "Memory": 0,
      "NumConnsInbound": 0,
      "NumConnsOutbound": 0,
      "NumFD": 0,
      "NumStreamsInbound": 0,
      "NumStreamsOutbound": 0
    },
    "12D3KooWBYGH7TiHUeWtmxkQ7QV98ee6uHZkpahaVbFUyP7y316R": {
      "Memory": 0,
      "NumConnsInbound": 0,
      "NumConnsOutbound": 0,
      "NumFD": 0,
      "NumStreamsInbound": 0,
      "NumStreamsOutbound": 0
    },
    "12D3KooWBfjrvKUpxWaLFGjAWSLD3tav8CZ5mC2k3e6WRbM5rwrW": {
      "Memory": 3717,
      "NumConnsInbound": 0,
      "NumConnsOutbound": 1,
      "NumFD": 1,
      "NumStreamsInbound": 0,
      "NumStreamsOutbound": 1
    },
    "12D3KooWBnSRDWYYJbuJ8PFr4Y2ehtJJHDN6trTnWuw2b8v8KjLN": {
      "Memory": 0,
      "NumConnsInbound": 0,
      "NumConnsOutbound": 0,
      "NumFD": 0,
      "NumStreamsInbound": 0,
      "NumStreamsOutbound": 0
    },
    "12D3KooWBuH48Zt5Nm2DAmhywaE6vMCutuiNqkYPAzYBmmyfm5cp": {
      "Memory": 0,
      "NumConnsInbound": 0,
      "NumConnsOutbound": 1,
      "NumFD": 0,
      "NumStreamsInbound": 0,
      "NumStreamsOutbound": 1
    },
    "12D3KooWC11Cy9DjPYNQHApzTdSxLQixsenmB3JbFGSpcX4pVVyo": {
      "Memory": 5403,
      "NumConnsInbound": 0,
      "NumConnsOutbound": 1,
      "NumFD": 1,
      "NumStreamsInbound": 0,
      "NumStreamsOutbound": 1
    },

@marten-seemann (Member)

Sorry, I should've been more specific: the peer output in all works, it's when I query the peer scope directly that it's all 0s.

The decode issue might be peer.Decode vs peer.IDFromString (just a guess without digging into it).

@schomatis (Contributor, Author)

The decode might be the issue of peer.Decode vs peer.IDFromString (just a guess without digging into it).

Yes, we're using the latter; should we switch?

@schomatis (Contributor, Author)

Sorry, I should've been more specific: the peer output in all works, it's when I query the peer scope directly that it's all 0s.

Confirmed, will look into it, though it's a pretty straightforward call to the library's ViewPeer so the issue might be upstream. Was this working in Lotus?

@marten-seemann (Member)

Confirmed, will look into it, though it's a pretty straightforward call to the library's ViewPeer so the issue might be upstream. Was this working in Lotus?

@vyzo, have you tried this?

@schomatis (Contributor, Author)

@marten-seemann Mind giving the limit command a try?

Something like

ipfs swarm limit system > limit.json
# Modify JSON
ipfs swarm limit system -s limit.json
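As a hedged sketch of the round-trip above (assuming a go-ipfs daemon built from this branch is running, and using `$EDITOR` as a stand-in for whatever tool modifies the JSON):

```shell
# Dump the current system-scope limits to a file.
ipfs swarm limit system > limit.json

# Edit the dumped JSON by hand; the concrete fields depend on the
# resource manager's limit schema, so no specific keys are assumed here.
"$EDITOR" limit.json

# Feed the modified limits back into the same daemon.
ipfs swarm limit system -s limit.json
```

Both the dump and the apply step have to target the same daemon for the round-trip to make sense.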

@vyzo (Contributor) commented Feb 18, 2022

We don't have legacy peers in lotus, so it worked all dandy when I tried it.
IDFromString should work here.

@marten-seemann (Member)

It should be Decode, not IDFromString. I fixed this PR, and now I'm able to query peer IDs.

@marten-seemann (Member)

@marten-seemann Mind giving the limit command a try?

Something like

ipfs swarm limit system > limit.json
# Modify JSON
ipfs swarm limit system -s limit.json

@schomatis I tried this, and it can't find the file:

❯ ipfs swarm limit system -s limit.json
Error: error opening limit JSON file: open limit.json: no such file or directory

@schomatis (Contributor, Author)

@marten-seemann Did the read limit command

ipfs swarm limit system > limit.json

before

ipfs swarm limit system -s limit.json

not generate any output? Are you running both commands in the same CLI/daemon?

@marten-seemann (Member)

@schomatis It works now! Thank you!

Can you rebase this on top of the (updated) update-libp2p-v018 branch?

@schomatis schomatis force-pushed the schomatis/feat/resource-manager branch from da21179 to 4594422 Compare February 24, 2022 03:39
@schomatis (Contributor, Author)

@marten-seemann Rebased. Left some TODOs in the PR description which I can't resolve myself.

@schomatis (Contributor, Author)

Nothing more for me to do at the moment so I'll remove my assignment from this.

@schomatis schomatis removed their assignment Feb 25, 2022
@marten-seemann marten-seemann force-pushed the update-libp2p-v018 branch 2 times, most recently from 6ddf5bf to 0c9ddba Compare February 28, 2022 12:34
@BigLep (Contributor) commented Mar 1, 2022

@marten-seemann is going to merge this into the update-libp2p-v018 branch once the tests in #8680 pass.

@marten-seemann marten-seemann force-pushed the schomatis/feat/resource-manager branch from 4594422 to a89390d Compare March 4, 2022 06:27
@marten-seemann marten-seemann marked this pull request as ready for review March 4, 2022 06:59
@marten-seemann marten-seemann merged commit 293ceaf into update-libp2p-v018 Mar 4, 2022
@marten-seemann marten-seemann deleted the schomatis/feat/resource-manager branch March 4, 2022 07:07
@BigLep (Contributor) commented Mar 7, 2022

To be clear for anyone watching: the remaining work is now happening in #8680.

@BigLep BigLep linked an issue Mar 7, 2022 that may be closed by this pull request
@BigLep BigLep modified the milestones: Best Effort Track, go-ipfs 0.13 Mar 7, 2022
Linked issue: create a command for resource manager stats