New peers in a crdt cluster don't receive state #784
You would need to wait a little bit until someone pins something or a re-broadcast is triggered (every minute by default, controlled by a configuration option).
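For reference, the re-broadcast interval lives in the crdt section of the cluster's service.json. A hedged sketch of what that fragment can look like (key name `rebroadcast_interval` assumed from the ipfs-cluster v0.10.x config format; verify against your version):

```json
{
  "consensus": {
    "crdt": {
      "rebroadcast_interval": "1m"
    }
  }
}
```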
After a full night and some manual pins added, I can confirm that the state is not transferred to the new peers.
I added the node responsible for pinning to everyone's trusted peers. I think it would be appropriate to consider the bootstrap target a trusted peer by default.
Ah indeed! That was pretty obvious. At this point you are testing things that I haven't had time to setup myself.
Yes, if a peer does not trust you it won't let you sniff around or touch anything.
The plan is to make a configuration template, so that any peer can start with something pre-filled. Auto-configuring the bootstrap peer(s) as "trusted" is probably the intended behaviour for 95% of cases, but technically you should be able to bootstrap to anyone (trusted or not).
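Marking the bootstrap peer as trusted would then amount to listing its peer ID in the crdt section's trust list. A sketch, assuming the `trusted_peers` key from the crdt config (the peer ID below is taken from this thread; `"*"` would trust everyone):

```json
{
  "consensus": {
    "crdt": {
      "trusted_peers": [
        "12D3KooWAG38EVPM1Mrw1YBDe1wuFKS23LiZXq8FGjdypmh6RbgY"
      ]
    }
  }
}
```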
I could reproduce this issue on another computer. I have 3 peers bootstrapped to another one, and one of those 3 isn't receiving the state. They all have the exact same configurations and versions of IPFS and IPFS-Cluster. Here's all the info about the defective peer.

System
OS: elementary OS 5.0 Juno x86_64
Host: S551LN 1.0
Kernel: 4.15.0-51-generic
Uptime: 54 mins
Packages: 1897
Shell: bash 4.4.19
Terminal: /dev/pts/0
CPU: Intel i7-4500U (4) @ 3.000GHz
GPU: NVIDIA GeForce 840M
GPU: Intel Haswell-ULT
Memory: 1037MiB / 7860MiB

IPFS
go-ipfs version: 0.4.21-rc1-cf3b6c43d
Repo version: 7
System version: amd64/linux
Golang version: go1.12.5

IPFS-Cluster
Peers
12D3KooWCWXtr2t7uLRdzwX9CitfLi5WApSbC4jY812NXdcRkWj4 | hp-server | Sees 3 other peers
> Addresses:
- /ip4/127.0.0.1/tcp/9096/ipfs/12D3KooWCWXtr2t7uLRdzwX9CitfLi5WApSbC4jY812NXdcRkWj4
- /ip4/135.23.195.89/tcp/35218/ipfs/12D3KooWCWXtr2t7uLRdzwX9CitfLi5WApSbC4jY812NXdcRkWj4
- /ip4/192.168.1.139/tcp/9096/ipfs/12D3KooWCWXtr2t7uLRdzwX9CitfLi5WApSbC4jY812NXdcRkWj4
> IPFS: QmP1aP6pa4N82Yy2NUFzRW7AWSF8St3x3H1b6vd2fg5SPR
- /ip4/127.0.0.1/tcp/4001/ipfs/QmP1aP6pa4N82Yy2NUFzRW7AWSF8St3x3H1b6vd2fg5SPR
- /ip4/135.23.195.89/tcp/62370/ipfs/QmP1aP6pa4N82Yy2NUFzRW7AWSF8St3x3H1b6vd2fg5SPR
- /ip4/192.168.1.139/tcp/4001/ipfs/QmP1aP6pa4N82Yy2NUFzRW7AWSF8St3x3H1b6vd2fg5SPR
- /ip6/::1/tcp/4001/ipfs/QmP1aP6pa4N82Yy2NUFzRW7AWSF8St3x3H1b6vd2fg5SPR
12D3KooWEr3PcBU3aqCqzDEHADg1CmSZ2CeYyuFCVaCi2DXdhEVX | nato-elementary | Sees 3 other peers
> Addresses:
- /ip4/127.0.0.1/tcp/9096/ipfs/12D3KooWEr3PcBU3aqCqzDEHADg1CmSZ2CeYyuFCVaCi2DXdhEVX
- /ip4/135.23.195.89/tcp/55101/ipfs/12D3KooWEr3PcBU3aqCqzDEHADg1CmSZ2CeYyuFCVaCi2DXdhEVX
- /ip4/192.168.1.102/tcp/9096/ipfs/12D3KooWEr3PcBU3aqCqzDEHADg1CmSZ2CeYyuFCVaCi2DXdhEVX
- /ip4/192.168.1.142/tcp/9096/ipfs/12D3KooWEr3PcBU3aqCqzDEHADg1CmSZ2CeYyuFCVaCi2DXdhEVX
> IPFS: QmcEN7ka3MfvkVhwZu1RGSoPstx72vC5YoZ9Yg4VBXYEq7
- /ip4/127.0.0.1/tcp/4001/ipfs/QmcEN7ka3MfvkVhwZu1RGSoPstx72vC5YoZ9Yg4VBXYEq7
- /ip4/135.23.195.89/tcp/63691/ipfs/QmcEN7ka3MfvkVhwZu1RGSoPstx72vC5YoZ9Yg4VBXYEq7
- /ip4/192.168.1.102/tcp/4001/ipfs/QmcEN7ka3MfvkVhwZu1RGSoPstx72vC5YoZ9Yg4VBXYEq7
- /ip4/192.168.1.142/tcp/4001/ipfs/QmcEN7ka3MfvkVhwZu1RGSoPstx72vC5YoZ9Yg4VBXYEq7
- /ip6/::1/tcp/4001/ipfs/QmcEN7ka3MfvkVhwZu1RGSoPstx72vC5YoZ9Yg4VBXYEq7
12D3KooWAG38EVPM1Mrw1YBDe1wuFKS23LiZXq8FGjdypmh6RbgY | lineageos-on-ipfs | Sees 3 other peers
> Addresses:
- /ip4/10.20.0.5/tcp/9096/ipfs/12D3KooWAG38EVPM1Mrw1YBDe1wuFKS23LiZXq8FGjdypmh6RbgY
- /ip4/127.0.0.1/tcp/9096/ipfs/12D3KooWAG38EVPM1Mrw1YBDe1wuFKS23LiZXq8FGjdypmh6RbgY
- /ip4/159.89.116.13/tcp/9096/ipfs/12D3KooWAG38EVPM1Mrw1YBDe1wuFKS23LiZXq8FGjdypmh6RbgY
> IPFS: QmSqLAXiJiteNbuNPY4Y5Lp4iKiUmqhCkBZSedZEutktVs
- /ip4/127.0.0.1/tcp/4001/ipfs/QmSqLAXiJiteNbuNPY4Y5Lp4iKiUmqhCkBZSedZEutktVs
- /ip4/159.89.116.13/tcp/4001/ipfs/QmSqLAXiJiteNbuNPY4Y5Lp4iKiUmqhCkBZSedZEutktVs
- /ip6/2604:a880:cad:d0::17:2001/tcp/4001/ipfs/QmSqLAXiJiteNbuNPY4Y5Lp4iKiUmqhCkBZSedZEutktVs
- /ip6/::1/tcp/4001/ipfs/QmSqLAXiJiteNbuNPY4Y5Lp4iKiUmqhCkBZSedZEutktVs
12D3KooWEcEmcMy2MMHHCqH2w4Hx3TPrpHSKpSWLXKiW5uVKpC74 | nato-hu | Sees 3 other peers
> Addresses:
- /ip4/127.0.0.1/tcp/9096/ipfs/12D3KooWEcEmcMy2MMHHCqH2w4Hx3TPrpHSKpSWLXKiW5uVKpC74
- /ip4/135.23.195.89/tcp/50030/ipfs/12D3KooWEcEmcMy2MMHHCqH2w4Hx3TPrpHSKpSWLXKiW5uVKpC74
- /ip4/192.168.1.122/tcp/9096/ipfs/12D3KooWEcEmcMy2MMHHCqH2w4Hx3TPrpHSKpSWLXKiW5uVKpC74
- /ip4/192.168.122.1/tcp/9096/ipfs/12D3KooWEcEmcMy2MMHHCqH2w4Hx3TPrpHSKpSWLXKiW5uVKpC74
> IPFS: QmUJKWDGYwNsihCrGCxQYEbM2MME28LsMhgCdtfRJ7S8t9
- /ip4/127.0.0.1/tcp/4001/ipfs/QmUJKWDGYwNsihCrGCxQYEbM2MME28LsMhgCdtfRJ7S8t9
- /ip4/135.23.195.89/tcp/65074/ipfs/QmUJKWDGYwNsihCrGCxQYEbM2MME28LsMhgCdtfRJ7S8t9
- /ip4/192.168.1.122/tcp/4001/ipfs/QmUJKWDGYwNsihCrGCxQYEbM2MME28LsMhgCdtfRJ7S8t9
- /ip4/192.168.122.1/tcp/4001/ipfs/QmUJKWDGYwNsihCrGCxQYEbM2MME28LsMhgCdtfRJ7S8t9
- /ip6/::1/tcp/4001/ipfs/QmUJKWDGYwNsihCrGCxQYEbM2MME28LsMhgCdtfRJ7S8t9

It doesn't appear in QmWA4Lqv1AQjEbuurrXnCybag1M1jeQbTXUcBkBFoUoG1B:
> lineageos-on-ipfs : REMOTE: Post http://127.0.0.1:5001/api/v0/pin/ls?arg=QmWA4Lqv1AQjEbuurrXnCybag1M1jeQbTXUcBkBFoUoG1B&type=recursive: dial tcp 127.0.0.1:5001: connect: connection refused | 2019-05-20T22:40:30.190533394-04:00
> hp-server : PIN_ERROR: Post http://127.0.0.1:5001/api/v0/pin/ls?arg=QmWA4Lqv1AQjEbuurrXnCybag1M1jeQbTXUcBkBFoUoG1B&type=recursive: dial tcp 127.0.0.1:5001: connect: connection refused | 2019-05-21T03:41:39.245751714Z
> nato-hu : REMOTE | 2019-05-21T03:39:14.336839625Z
> nato-elementary : UNPINNED | 2019-05-21T03:45:07.334229478Z

Logs
23:49:36.972 INFO service: Initializing. For verbose output run with "-l debug". Please wait... daemon.go:51
23:49:37.022 INFO cluster: IPFS Cluster v0.10.1+git30ba6f82dd8050e4df302f162ac29b3a4f4f8715 listening on:
/ip4/127.0.0.1/tcp/9096/ipfs/12D3KooWEr3PcBU3aqCqzDEHADg1CmSZ2CeYyuFCVaCi2DXdhEVX
/ip4/192.168.1.142/tcp/9096/ipfs/12D3KooWEr3PcBU3aqCqzDEHADg1CmSZ2CeYyuFCVaCi2DXdhEVX
/ip4/192.168.1.102/tcp/9096/ipfs/12D3KooWEr3PcBU3aqCqzDEHADg1CmSZ2CeYyuFCVaCi2DXdhEVX
cluster.go:117
23:49:37.022 INFO restapi: REST API (HTTP): /ip4/127.0.0.1/tcp/9094 restapi.go:456
23:49:37.022 INFO ipfsproxy: IPFS Proxy: /ip4/127.0.0.1/tcp/9095 -> /ip4/127.0.0.1/tcp/5001 ipfsproxy.go:278
23:49:37.022 INFO service: Bootstrapping to /ip4/159.89.116.13/tcp/9096/ipfs/12D3KooWAG38EVPM1Mrw1YBDe1wuFKS23LiZXq8FGjdypmh6RbgY daemon.go:202
23:49:37.024 INFO restapi: REST API (libp2p-http): ENABLED. Listening on:
/ip4/127.0.0.1/tcp/9096/ipfs/12D3KooWEr3PcBU3aqCqzDEHADg1CmSZ2CeYyuFCVaCi2DXdhEVX
/ip4/192.168.1.142/tcp/9096/ipfs/12D3KooWEr3PcBU3aqCqzDEHADg1CmSZ2CeYyuFCVaCi2DXdhEVX
/ip4/192.168.1.102/tcp/9096/ipfs/12D3KooWEr3PcBU3aqCqzDEHADg1CmSZ2CeYyuFCVaCi2DXdhEVX
restapi.go:473
23:49:37.025 INFO crdt: crdt Datastore created. Number of heads: 0. Current max-height: 0 crdt.go:213
23:49:37.026 INFO cluster: Cluster Peers (without including ourselves): cluster.go:478
23:49:37.026 INFO cluster: - No other peers cluster.go:480
23:49:37.026 INFO cluster: ** IPFS Cluster is READY ** cluster.go:493
23:49:37.299 INFO cluster: 12D3KooWEr3PcBU3aqCqzDEHADg1CmSZ2CeYyuFCVaCi2DXdhEVX: joined 12D3KooWAG38EVPM1Mrw1YBDe1wuFKS23LiZXq8FGjdypmh6RbgY's cluster cluster.go:804
Can probably be tested by having peers join the collaborative pinset.
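A minimal way to exercise this would be to start a fresh peer and bootstrap it to an existing cluster peer. A sketch, assuming the `--bootstrap` flag of ipfs-cluster-service v0.10.x (the multiaddress is taken from this thread; substitute your own):

```shell
# Initialize a fresh cluster configuration, then start the daemon
# bootstrapped to a known peer of the cluster.
ipfs-cluster-service init
ipfs-cluster-service daemon --bootstrap \
  /ip4/159.89.116.13/tcp/9096/ipfs/12D3KooWAG38EVPM1Mrw1YBDe1wuFKS23LiZXq8FGjdypmh6RbgY
```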
Is `cluster_name` the same on all peers?
Ooh. Well, that resolved the issue. So what is `cluster_name` exactly?
@NatoBoram it's the pubsub topic that peers subscribe to for updates, in case you have several clusters running on the same libp2p swarm (i.e. you could run cluster now on the public ipfs network if you really wanted, but using the global DHT likely makes things slower).
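In config terms, that topic name is the value all peers of the same cluster must share. A sketch (key name assumed to be `cluster_name` in the crdt section of service.json; peers with different values will silently ignore each other's updates):

```json
{
  "consensus": {
    "crdt": {
      "cluster_name": "ipfs-cluster"
    }
  }
}
```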
I'll close this as it seems there's no issue after all, but I'll re-open if anything similar comes up during my testing.
Sounds fun, I want to test that! How do I do it? What does it do differently? |
Running cluster peers (in crdt mode) with an empty `secret`.
Doesn't do anything differently, just removes the encryption layer that isolated the swarm from other swarms (like the IPFS one).
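The secret lives in the top-level cluster section of service.json; clearing it would look like the sketch below (an empty string disables the private-network encryption layer, and all peers must agree on the value):

```json
{
  "cluster": {
    "secret": ""
  }
}
```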
Additional information:
Describe the bug:
New cluster peers initialized from this version don't receive the current state from other peers. `ipfs-cluster-ctl status` on new members returns nothing. Pinning a CID and then running `ipfs-cluster-ctl status` does return the correct result, but that new CID doesn't appear in other peers' status. Other peers are shown in `ipfs-cluster-ctl peers ls`.
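The behaviour described above can be checked with the commands below (a sketch; the CID is hypothetical):

```shell
# On the new peer: empty output indicates the state was not received.
ipfs-cluster-ctl status

# Pin something locally, then confirm it shows up here...
ipfs-cluster-ctl pin add QmHash   # hypothetical CID
ipfs-cluster-ctl status

# ...and check cluster membership, which does list the other peers.
ipfs-cluster-ctl peers ls
```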