
The minimum requirements for the server are configured #4145

Closed
metal-young opened this issue Aug 15, 2017 · 19 comments

Comments

@metal-young

metal-young commented Aug 15, 2017

Version information:

ipfs version --all
go-ipfs version: 0.4.10-
Repo version: 5
System version: amd64/linux
Golang version: go1.8.3

Type:

Etc

Severity:

Critical

Description:

My server configuration is 1 core and 512 MB of memory.

But when I execute ipfs daemon,

the server becomes very overloaded.

So, what kind of configuration should I upgrade to so that the service runs smoothly?

@zinid

zinid commented Aug 22, 2017

This is quite annoying btw, since using so much memory for only about 500 connections is definitely a problem.

@paulogr

paulogr commented Nov 22, 2017

Having the same problem on a VPS with 512MB of RAM.

@Stebalien
Member

Stebalien commented Nov 22, 2017

  1. Upgrade to the latest IPFS. We've made a lot of improvements concerning memory/CPU usage.
  2. We've also introduced a connection closing feature. If the default settings in go-ipfs 0.4.13 don't reduce the memory usage enough, try the connection closing limits (https://github.com/ipfs/go-ipfs/blob/master/docs/config.md#connmgr) and restart.
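As a sketch, those limits can be set from the CLI (the key names are from the linked connmgr docs; the specific values here are illustrative, not recommendations):

```shell
# Enable the basic connection manager and lower its watermarks.
# Values are illustrative; tune them for your machine.
ipfs config Swarm.ConnMgr.Type basic
ipfs config --json Swarm.ConnMgr.HighWater 100
ipfs config --json Swarm.ConnMgr.LowWater 50
ipfs config Swarm.ConnMgr.GracePeriod 20s
# Restart the daemon afterwards so the new limits take effect.
```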

@paulogr

paulogr commented Nov 22, 2017

Hi @Stebalien, thank you for your answer.

Is this a better option than --routing=dhtclient?
What does running the daemon as a dhtclient mean?

I saw on another issue the option to ipfs config Reprovider.Strategy pinned

"pinned" causes your node to only reprovide objects that you've pinned, as opposed to the default of every local block. "roots" only provides the pins themselves, and not any of their child blocks. "roots" has a much lower bandwidth cost, but may harm the reachability of content stored by your node.

Are those good options? Or do those options not contribute to keeping the IPFS network functional?

@Stebalien
Member

Stebalien commented Nov 22, 2017

What does running the daemon as a dhtclient mean?

By default, your node will act as a DHT (distributed hash table) server. This means it will store and serve small bits of data to the network. This is how we distribute IPNS records, content provider records (who has what content), peer address records (to map peer IDs to IP addresses), etc. This usually doesn't actually take up that much memory. However, constantly answering DHT queries can significantly increase CPU usage.

Is this a better option than --routing=dhtclient?

I assume you mean lowering the connection limit. If so, yes for memory usage (but less so for CPU usage).

I saw on another issue the option to ipfs config Reprovider.Strategy pinned

This will cause your node to advertise only pinned content, not random content that you happen to have cached. Basically, your node will occasionally (1 minute after starting and every 12 hours thereafter) submit provider records to the DHT for every piece of data you're storing. This will help other nodes find data on your machine.

Impact:

  1. You won't see a spike in CPU/memory usage once every 12h.
  2. Your memory usage will grow less over time (we have some memory leaks in our address book system, so we rarely actually delete any peer ID -> IP mappings we know about).
  3. Peers will only connect to you to retrieve content you've explicitly pinned. Note: they'll still be able to download other content from you, they just won't seek you out for it.

So, it's up to you. However, I kind of doubt this will have much impact. You're probably better off just lowering the connection limits.
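The two options discussed above can be combined; a hedged sketch of the commands (forms as used in go-ipfs 0.4.x):

```shell
# Advertise only pinned content (instead of every local block).
ipfs config Reprovider.Strategy pinned

# Start the daemon as a DHT client only: it queries the DHT but does
# not store or serve records for other peers (less CPU, but it no
# longer contributes DHT capacity to the network).
ipfs daemon --routing=dhtclient
```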

@zinid

zinid commented Nov 22, 2017

Now I cannot even compile it due to OOM killer 🤣

go build -i -ldflags="-X "github.com/ipfs/go-ipfs/repo/config".CurrentCommit=e1f433e3"  -o "cmd/ipfs/ipfs" "github.com/ipfs/go-ipfs/cmd/ipfs"
go build github.com/ipfs/go-ipfs/cmd/ipfs: /usr/lib/go-1.9/pkg/tool/linux_amd64/link: signal: killed
make: *** [cmd/ipfs/ipfs] Error 1

This is on DigitalOcean VPS:

$ free -m
             total       used       free     shared    buffers     cached
Mem:           496        118        378          0          2         14
-/+ buffers/cache:        102        394
Swap:            0          0          0
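The free -m output shows no swap configured. Not discussed in this thread, but a common workaround when the linker gets OOM-killed during a build is a temporary swap file (a sketch; assumes root and a filesystem that supports fallocate):

```shell
# Create and enable a 1 GB swap file so `go build` / the linker can finish.
fallocate -l 1G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile

# When done building, remove it again:
# swapoff /swapfile && rm /swapfile
```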

@paulogr

paulogr commented Nov 22, 2017

@Stebalien, thank you for the long answer. It helps me to understand a little bit more about IPFS.

I've just set the new ConnMgr values and hope it makes my daemon stop getting killed by out-of-memory errors.

Really appreciate it.

@zinid

zinid commented Nov 22, 2017

@Stebalien so I finally updated go-ipfs to 0.4.13, but it still consumes all memory and gets killed. Then I changed the config to (not sure what all these mean):

...
  "Swarm": {
    "AddrFilters": null,
    "ConnMgr": {
      "GracePeriod": "",
      "HighWater": 100,
      "LowWater": 0,
      "Type": "none"
    },
    "DisableBandwidthMetrics": false,
    "DisableNatPortMap": false,
    "DisableRelay": false,
    "EnableRelayHop": false
  },
...

and ipfs daemon indeed consumes less memory. Let's see if it survives the day :)

@Kubuxu
Member

Kubuxu commented Nov 22, 2017

@zinid using LowWater: 0 is a very bad idea. It will drop all connections, and you need some connections to search the network.

@zinid

zinid commented Nov 22, 2017

@Kubuxu I didn't touch this parameter.

Whatever, it got destroyed by OOM killer again.

@Kubuxu
Member

Kubuxu commented Nov 22, 2017

Ahh, right, LowWater defaults to 600; you might want to set it to, say, 50. Otherwise the ConnMgr might not close any connections.

@zinid

zinid commented Nov 22, 2017

@Kubuxu setting LowWater to 50 didn't help, still crashing.

@paulogr

paulogr commented Nov 22, 2017

After changing the HighWater and LowWater parameters I've been running the daemon for 5h+, my personal record :)

[screenshot: memory usage graph, 2017-11-22 15:21]

HighWater is set to 100.
LowWater to 50.

@zinid

zinid commented Nov 22, 2017

This is odd, I still have a lot of connections:

$ ipfs swarm peers | wc -l
365

and it's growing. I double-checked the config again:

  "Swarm": {
    "AddrFilters": null,
    "ConnMgr": {
      "GracePeriod": "",
      "HighWater": 100,
      "LowWater": 50,
      "Type": "none"
    },

@paulogr

paulogr commented Nov 22, 2017

@zinid try keeping GracePeriod as "20s" and Type as "basic", and restart the daemon after the changes.
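For reference, combining the values from this thread, the relevant section of the config file would look something like the fragment below. Note that Type must be "basic" (with "none" the watermarks are ignored) and GracePeriod must be non-empty:

```json
"Swarm": {
  "ConnMgr": {
    "Type": "basic",
    "LowWater": 50,
    "HighWater": 100,
    "GracePeriod": "20s"
  }
}
```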

@zinid

zinid commented Nov 22, 2017

@paulogr thanks a lot, now much better. However, memory is still growing (albeit much slower).

@paulogr

paulogr commented Nov 22, 2017

Great!

I see the memory consumption is still growing too; let's see how it goes.

@momack2 momack2 added this to Inbox in ipfs/go-ipfs May 9, 2019
@Stebalien
Member

Stebalien commented Jul 29, 2019

Closing due to age. Memory usage has improved significantly and we've fixed most known memory leaks.

If you're still having this issue, please open a new issue.

@nlw0

nlw0 commented May 7, 2022

I'm having a similar problem right now. Running a server on a VPS with two cores and 1 GB of RAM + 1 GB of swap, the process always seems to have been killed when I check the next day. This was with v0.12.0.

Reducing Swarm.ConnMgr.HighWater does seem to solve it... This has puzzled me for quite a while; it would be nice to make this information more readily available to new users with small machines, if it's not easy to prevent the program from getting killed.
