
Error about minimal requirement in README #6513

Closed
Jorropo opened this issue Jul 16, 2019 · 5 comments · Fixed by #6543
Labels
topic/docs-ipfs

Comments

@Jorropo
Contributor

Jorropo commented Jul 16, 2019

Location

README.md (then ctrl+f "it’ll do fine with only one CPU core")

Description

The README says "it'll do fine with only one CPU core" about ipfs, but from my tests that is not really true.

You can make it work, but only with a small config change: disable the /ip4/0.0.0.0/tcp/4001 listener (and maybe install cjdns on your node, since cjdns lets you stay reachable), listen on QUIC, and enable relay and autorelay. The goal is to be unreachable by most nodes, because being contacted, being asked "hello, do you have this thing?", responding no, and closing the connection is very costly, and doing that hundreds of times per second adds up.

From my profiling results, the problem comes from secio handshakes, which are very expensive (over a 1h run, libp2p consumed 65% of the CPU time, and 40% of total CPU time was spent in secio handshakes), but that part is more an issue for libp2p.

So maybe a section could be added about how to run on a single-core CPU.
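
For reference, here is roughly what that config change looks like with the ipfs config CLI. This is only a sketch on my side; the exact multiaddrs and flags may differ depending on the go-ipfs version:

# keep TCP on loopback only, listen publicly on QUIC instead
ipfs config --json Addresses.Swarm '["/ip4/127.0.0.1/tcp/4001", "/ip4/0.0.0.0/udp/4001/quic", "/ip6/::/udp/4001/quic"]'
# QUIC is still experimental, so it has to be switched on
ipfs config --json Experimental.QUIC true
# autorelay keeps the node reachable even without a public TCP listener
ipfs config --json Swarm.EnableAutoRelay true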

Jorropo added the topic/docs-ipfs label on Jul 16, 2019
@Stebalien
Member

It'll likely do fine if you enable the lowpower config profile: ipfs config profile apply lowpower. But yeah, that section is misleading. Mind filing a PR?
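
For anyone landing here, applying the profile is a one-liner; restarting the daemon afterwards is an assumption about a typical setup:

# reduces DHT activity and connection limits
ipfs config profile apply lowpower
# restart the daemon so the new settings take effect
ipfs shutdown
ipfs daemon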

@Jorropo
Contributor Author

Jorropo commented Jul 16, 2019

@Stebalien I've tested lowpower on my single-core server and it didn't fix anything. That's because the transport and security parts take most of the CPU time, not the DHT, and reducing the peer count doesn't change anything because ipfs still has to do a secio handshake before closing the connection.
You can see the profiling here (3k peers, adding a 100MiB directory and downloading it from another node using QUIC over cjdns):
https://jorropo.ovh/ipfs/QmXxyCLbsBLoCsV3uArhMGkUe1gtDzcFp14AAojCJkD5yT
(If the file doesn't open, it is an SVG; rename it with that extension.)

I know of 2 solutions, but I don't know libp2p well enough to make them work (I'm opening an issue about that in libp2p).

Should I write about my solution?
(autorelay, QUIC, no listening on /ip4/0.0.0.0/tcp/0)

@Stebalien
Member

Turning off the DHT should reduce power consumption as it'll limit the number of inbound connections. However, if you add a large directory, your node will make a ton of outbound connections (the likely cause here). Another way is to build with OpenSSL support (make build GOFLAGS=-tags=openssl).
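
A sketch of that build, assuming you are building go-ipfs from source and have the OpenSSL development headers installed (e.g. libssl-dev on Debian/Ubuntu):

git clone https://github.com/ipfs/go-ipfs
cd go-ipfs
# build with the openssl tag to use OpenSSL for crypto operations
make build GOFLAGS=-tags=openssl
# the resulting binary should be at cmd/ipfs/ipfs (path may vary by version)
./cmd/ipfs/ipfs version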

I'd love to hear about your solution. I'm surprised that autorelay, QUIC, and not using TCP helps, as you'd still have to do the RSA handshakes. You could try switching to ed25519 (requires some manual jiggering as this isn't officially supported yet).

@Jorropo
Contributor Author

Jorropo commented Jul 16, 2019

First results with OpenSSL: way better (thanks). Holding 6.5k peers instead of 3k, with 20% of CPU dedicated to handshakes instead of 35%, which is around 3 times faster. So here I have a question: why isn't OpenSSL enabled by default (at least for supported arches, like amd64 and arm v7/v8)?
Here is the profiling result for the same workload as before (I've run ipfs repo gc on all of my nodes so there was no cache from the previous run):
https://jorropo.ovh/ipfs/QmPjpSzNeqdFgX1VCt6JkhTohcyDfmYAHsCBHttT4HAZ87

Then, what I do is not remove handshakes but limit their number. With the default config, a lot of nodes do a handshake, ask for more nodes or ask questions about the DHT, and then close the connection (and your node does that as well), hundreds of times per second. The more connections you have, the more frequently this happens; it seems to go down to about 25 times per second after 1h of running while doing nothing (add, pin, get, name, ... can maybe bump this number).
So if I don't listen on /ip4/0.0.0.0/tcp/0, the number of nodes capable of reaching mine (and, conversely, the number of nodes my node contacts) drops drastically, and instead of 100 handshakes per second I only do 5 to 15.
QUIC, IPv6 TCP, cjdns and relay are just there to avoid being too isolated and to keep serving peers asking for my files, but they make my node way less attractive to the typical IPv4-TCP-only node (which is most nodes on ipfs; a lot of nodes listen on IPv6 TCP but that is only local, since they have no NAT punching or no public IPv6).
Here is my config file (without any sensitive information):

{
  "API": {
    "HTTPHeaders": {}
  },
  "Addresses": {
    "API": "/ip4/127.0.0.1/tcp/5001",
    "Announce": [],
    "Gateway": "/ip4/127.0.0.1/tcp/8080",
    "NoAnnounce": [],
    "Swarm": [
      "/ip4/127.0.0.1/tcp/4001",
      "/ip4/0.0.0.0/udp/4001/quic/",
      "/ip6/::/udp/4001/quic/",
      "/ip6/::/tcp/4001"
    ]
  },
  "Bootstrap": [
    "/dnsaddr/bootstrap.libp2p.io/ipfs/QmNnooDu7bfjPFoTZYxMNLWUQJyrVwtbZg5gBMjTezGAJN",
    "/dnsaddr/bootstrap.libp2p.io/ipfs/QmQCU2EcMqAqQPR2i9bChDtGNJchTbq5TbXJJ16u19uLTa",
    "/dnsaddr/bootstrap.libp2p.io/ipfs/QmbLHAnMoJPWSCR5Zhtx6BHJX9KiKNN6tpvbUcqanj75Nb",
    "/dnsaddr/bootstrap.libp2p.io/ipfs/QmcZf59bWwK5XFi76CZX8cbJ4BhTzzA3gU1ZjYZcYW3dwt",
    "/ip4/104.131.131.82/tcp/4001/ipfs/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ",
    "/ip4/104.236.179.241/tcp/4001/ipfs/QmSoLPppuBtQSGwKDZT2M73ULpjvfd3aZ6ha4oFGL1KrGM",
    "/ip4/128.199.219.111/tcp/4001/ipfs/QmSoLSafTMBsPKadTEgaXctDQVcqN88CNLHXMkTNwMKPnu",
    "/ip4/104.236.76.40/tcp/4001/ipfs/QmSoLV4Bbm51jM9C4gDYZQ9Cy3U6aXMJDAbzgu2fzaDs64",
    "/ip4/178.62.158.247/tcp/4001/ipfs/QmSoLer265NRgSp2LA3dPaeykiS1J6DifTC88f5uVQKNAd",
    "/ip6/2604:a880:1:20::203:d001/tcp/4001/ipfs/QmSoLPppuBtQSGwKDZT2M73ULpjvfd3aZ6ha4oFGL1KrGM",
    "/ip6/2400:6180:0:d0::151:6001/tcp/4001/ipfs/QmSoLSafTMBsPKadTEgaXctDQVcqN88CNLHXMkTNwMKPnu",
    "/ip6/2604:a880:800:10::4a:5001/tcp/4001/ipfs/QmSoLV4Bbm51jM9C4gDYZQ9Cy3U6aXMJDAbzgu2fzaDs64",
    "/ip6/2a03:b0c0:0:1010::23:1001/tcp/4001/ipfs/QmSoLer265NRgSp2LA3dPaeykiS1J6DifTC88f5uVQKNAd",
    "/ip4/138.197.153.52/udp/55855/quic/ipfs/Qmc1EqWXLPfKByfeBos65D36XMBvbmPp5E3f7wu2iRgpTL",
    "/ip4/139.178.64.247/udp/4001/quic/ipfs/QmVbReJM8RpHxZcMCmdSubBfBX7VdYiCn4piAmFxaGUDmJ",
    "/ip4/139.178.69.15/udp/4001/quic/ipfs/QmTg7DdGo519B7KdzFppTaK5i8WGWu9erGhxsLafxRd6gp",
    "/ip4/139.178.69.3/udp/4001/quic/ipfs/QmdGQoGuK3pao6bRDqGSDvux5SFHa4kC2XNFfHFcvcbydY",
    "/ip4/147.75.105.219/udp/4001/quic/ipfs/QmTtFWmQ3qrp166m96ibL2jW2Doz4tJjo2CwQfYNaFb3XZ",
    "/ip4/147.75.106.163/udp/4001/quic/ipfs/QmRdjvsyoNjA2ZfAQBtQ7A2m5NmtSXLgxE55Brn1AUjZ1v",
    "/ip4/147.75.109.65/udp/4001/quic/ipfs/QmcYZo7xDLm8sNakKe8UK9AXjoGXGvngpD6apqmTqu7HzU",
    "/ip4/147.75.80.35/udp/4001/quic/ipfs/QmU5jkMcfaZ4N1B4MzMXdCZY2pJ3re5YaPB7UjiyqShwT9",
    "/ip4/147.75.84.57/udp/4001/quic/ipfs/QmZP8NCi1L2LS8K2DoG175tH4mSe8Z4ygcVXkwFxnyeMLL",
    "/ip4/172.249.155.251/udp/34045/quic/ipfs/Qme2cZcZnE8gwyxvpdUKhr2eW6fZjDrXPvRMtJ1RWUNw6T",
    "/ip4/178.128.164.200/udp/59116/quic/ipfs/QmcrsAcVXz2BLxmy6HibQVSjRQPpVp7jP4cquRJNLeFtvz",
    "/ip4/185.14.233.88/udp/54284/quic/ipfs/QmaLq3EK8egMRD6PKYUKALQET8tp8LdNpnPx67XdiAu17D",
    "/ip4/185.183.147.39/udp/19889/quic/ipfs/QmXM6uS1pftKFaXtZNvWLSCPbM9AioRvV2X3bphwRMiNqu",
    "/ip4/212.56.108.81/udp/62997/quic/ipfs/QmeCa4B3yhjYn74BbP5PuRuDKFAMqLcPcDJVy8GyUaymXk",
    "/ip4/51.75.35.194/udp/4001/quic/ipfs/QmVGX47BzePPqEzpkTwfUJogPZxHcifpSXsGdgyHjtk5t7",
    "/ip4/65.19.134.243/udp/4001/quic/ipfs/QmVjnHFzaFvaxfrjTvVVJuit6hbMQXYXTdDfHyWjk73agU",
    "/ip4/80.195.63.245/udp/45287/quic/ipfs/QmPxrDjxG7LB2B3j4ZdzfC5gWNE2JqUCCQQMe9eR5LPjmW"
  ],
  "Datastore": {
    "BloomFilterSize": 0,
    "GCPeriod": "1h",
    "HashOnRead": false,
    "Spec": {
      "mounts": [
        {
          "child": {
            "path": "blocks",
            "shardFunc": "/repo/flatfs/shard/v1/next-to-last/2",
            "sync": true,
            "type": "flatfs"
          },
          "mountpoint": "/blocks",
          "prefix": "flatfs.datastore",
          "type": "measure"
        },
        {
          "child": {
            "compression": "none",
            "path": "datastore",
            "type": "levelds"
          },
          "mountpoint": "/",
          "prefix": "leveldb.datastore",
          "type": "measure"
        }
      ],
      "type": "mount"
    },
    "StorageGCWatermark": 90,
    "StorageMax": "20GB"
  },
  "Discovery": {
    "MDNS": {
      "Enabled": false,
      "Interval": 10
    }
  },
  "Experimental": {
    "FilestoreEnabled": true,
    "Libp2pStreamMounting": true,
    "P2pHttpProxy": true,
    "PreferTLS": true,
    "QUIC": true,
    "ShardingEnabled": true,
    "UrlstoreEnabled": true
  },
  "Gateway": {
    "APICommands": [],
    "HTTPHeaders": {
      "Access-Control-Allow-Headers": [
        "X-Requested-With",
        "Range",
        "User-Agent"
      ],
      "Access-Control-Allow-Methods": [
        "GET"
      ],
      "Access-Control-Allow-Origin": [
        "*"
      ]
    },
    "NoFetch": false,
    "PathPrefixes": [],
    "RootRedirect": "",
    "Writable": false
  },
  "Ipns": {
    "RecordLifetime": "",
    "RepublishPeriod": "",
    "ResolveCacheSize": 128
  },
  "Mounts": {
    "FuseAllowOther": false,
    "IPFS": "/ipfs",
    "IPNS": "/ipns"
  },
  "Pubsub": {
    "DisableSigning": false,
    "Router": "",
    "StrictSignatureVerification": false
  },
  "Reprovider": {
    "Interval": "12h",
    "Strategy": "all"
  },
  "Routing": {
    "Type": "dht"
  },
  "Swarm": {
    "AddrFilters": [],
    "ConnMgr": {
      "GracePeriod": "20s",
      "HighWater": 2000,
      "LowWater": 1000,
      "Type": "basic"
    },
    "DisableBandwidthMetrics": false,
    "DisableNatPortMap": true,
    "DisableRelay": false,
    "EnableAutoNATService": true,
    "EnableAutoRelay": true,
    "EnableRelayHop": true
  }
}

@Stebalien
Member

Your issue is EnableRelayHop and EnableAutoNATService. These options are causing your node to advertise itself as a public relay, causing a bunch of other nodes behind NATs to connect to your node. If you turn those off, you should be fine (note: it may take a bit for the network to forget that you offered these services and stop trying to connect to your node).
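
A sketch of how to turn those off with the config CLI (the keys match the config pasted above):

ipfs config --json Swarm.EnableRelayHop false
ipfs config --json Swarm.EnableAutoNATService false
# restart the daemon; other nodes may keep trying to connect for a while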

ralendor pushed commits to ralendor/go-ipfs that referenced this issue on Jun 6 and Jun 8, 2020