
Ipns very slow #3860

Open
nezzard opened this Issue Apr 11, 2017 · 73 comments


nezzard commented Apr 11, 2017

Hi, is it normal that IPNS loads very slowly?
I tried to make something like a CMS with dynamic content, but IPNS is too slow. When I load the site via IPNS, the first load is very slow; if I reload the page right after that, it loads quickly. But if I reload after a few minutes, it loads slowly again.


whyrusleeping commented Apr 11, 2017

@nezzard This is generally a known issue, but providing more information is helpful. Are you resolving from your local node? Or are you resolving through the gateway?


nezzard commented Apr 11, 2017

@whyrusleeping Through my local node, but sometimes the gateway is faster, sometimes local is faster.
So, for now I can't use IPNS normally?


whyrusleeping commented Apr 12, 2017

@nezzard When resolving locally, how many peers do you have connected? (ipfs swarm peers) The primary slowdown of IPNS is connecting to enough of the right peers on the DHT; once that's warmed up it should be faster.

DHT-based IPNS isn't as fast as something more centralized, but you can generally cache the results for longer than ipfs caches them. We should take a look at making these caches more configurable, and look into other IPNS slowdowns.

When you say it's 'very slow', what time range exactly are you experiencing? 1-5 seconds? 5-10? 10+?
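
For reference, counting connected peers is just:

$ ipfs swarm peers | wc -l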


nezzard commented Apr 12, 2017

@whyrusleeping Sometimes it's really fast, sometimes I get this:
https://yadi.sk/i/mL6Q4OFX3Gu2nk

Output of ipfs swarm peers:
/ip4/104.131.131.82/tcp/4001/ipfs/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ /ip4/104.131.180.155/tcp/4001/ipfs/QmeXAm1zdLbPaA9wVemaCjbeJgWsCrH4oSCrK2F92yWnbm /ip4/104.133.2.68/tcp/53366/ipfs/QmTAmvzNBsicnajpLTUnVqcPankP3pNDoqHpAtUNkK2rU7 /ip4/104.155.150.120/tcp/4001/ipfs/Qmep8LtipXUG4WSNgJGEtwmuaQQt77wRDL5nkMpZyDqrD3 /ip4/104.236.169.138/tcp/4001/ipfs/QmYodPH2C6xEYFPxNhK4how1frPdXFWVrZ3QGynTFCFfBe /ip4/104.236.176.52/tcp/4001/ipfs/QmSoLnSGccFuZQJzRadHn95W2CrSFmZuTdDWP8HXaHca9z /ip4/104.236.176.59/tcp/4001/ipfs/QmQ8MYL1ANybPTM5uamhzTnPwDwCFgfrdpYo9cwiEmVsge /ip4/104.236.179.241/tcp/4001/ipfs/QmSoLPppuBtQSGwKDZT2M73ULpjvfd3aZ6ha4oFGL1KrGM /ip4/104.236.76.40/tcp/4001/ipfs/QmSoLV4Bbm51jM9C4gDYZQ9Cy3U6aXMJDAbzgu2fzaDs64 /ip4/104.40.212.43/tcp/4001/ipfs/QmcvFeaip7B3RDmLU9MgqGcCRv881Citnv5cHkrTSusZD6 /ip4/106.246.181.100/tcp/4001/ipfs/QmQ6TbUShnjKbnJDSYdxaBb78Dz6fF82NMetDKnau3k7zW /ip4/108.161.120.136/tcp/27040/ipfs/QmNRM8W3u6gxAvm8WqSXqCVC6Wzknq66tdET6fLGh8zCVk /ip4/108.28.144.234/tcp/5002/ipfs/QmWfjhgBWjwiesWQPCC4CSV4q83vyBdSA6LRSaZLLCZoVH /ip4/112.196.16.84/tcp/4002/ipfs/QmbELjeVvfpbGYNcC4j4PPr6mnssp6jKWd4D6Jht8jDhiW /ip4/113.253.98.194/tcp/54388/ipfs/QmcL9BdiHQbRng6PvDzbJye7yG73ttNAkhA5hLGn22StM8 /ip4/121.122.82.230/tcp/58960/ipfs/QmPz9uv4HUP1er5TGaaoc4NVCbN8VFMrf5gwvxfmtSAmGv /ip4/128.199.219.111/tcp/4001/ipfs/QmSoLSafTMBsPKadTEgaXctDQVcqN88CNLHXMkTNwMKPnu /ip4/128.32.112.184/tcp/4001/ipfs/QmeM9rJsk6Ke57xMwMuCkJBb9pYGx7qVRkgzVD6zhxPaBx /ip4/128.32.153.243/tcp/1030/ipfs/QmYoH11GjCyoQW4HyZtSZcL8BqBuudaXWi1pdYyy1AroFd /ip4/134.71.135.172/tcp/4001/ipfs/QmU3q7GxnnhJabNh3ukDq2QsnzwzVpcT5FEPBcJcRu3Wq1 /ip4/138.201.53.216/tcp/4001/ipfs/QmWmJfJKfJmKtRqsTnygmWgJfsmHnXo4p3Uc1Atf8N5iQ5 /ip4/139.162.191.34/tcp/4001/ipfs/QmYfmBh8Pud13uwc5mbtCGRgYbxzsipY87xgjdj2TGeJWm /ip4/142.4.211.131/tcp/4001/ipfs/QmWWWLYe16uU53wPgdP3V5eEb8QRwoqUb35h5EMWoEyWaJ /ip4/159.203.77.184/tcp/4001/ipfs/QmeLGqhi5dFBpxD4xuzAWWcoip69i5SaneXL9Jb83sxSXo /ip4/163.172.222.20/tcp/4001/ipfs/Qmd4up4kjr8TNWc4rx6r4bFwpe6TJQjVVmfwtiv4q3FSPx /ip4/167.114.2.68/tcp/4001/ipfs/QmfY24aJDGyPyUJVyzL1QHPoegmFKuuScoCKrBk9asoTFG /ip4/168.235.149.174/tcp/4001/ipfs/QmbPFhS9YwUxE4rPeaqd7Vn6GEESd1MUUM67ECtYchHyFB /ip4/168.235.79.131/tcp/4001/ipfs/QmaqsmhXtQfKfiWi3jXdb4PxrN8JNi2zmXN13MDEktjK8H /ip4/168.235.90.18/tcp/4001/ipfs/QmWtA6WFyo44pYzQzHFtrtMWPHZiFEDFjUWihEY49obZ1e /ip4/169.231.33.236/tcp/55897/ipfs/QmQyTC3Bg2BkctdisKBvWPoG8Avr7HMrnNMNJS25ubjVUU /ip4/173.95.181.110/tcp/42615/ipfs/QmTxQ2Bv9gppcNvzAtRJiwNAahVhkUHxFt5mMYkW9qPjE6 /ip4/176.9.85.5/tcp/4001/ipfs/QmNUZW8yuNxdLSPMwvaafiMVN8fof5r2PrsUJAgyAn8Udb /ip4/178.19.251.249/tcp/4401/ipfs/QmR2FRyigN82VJc3MFZNz79L8Hunc3XvfAxU3eA3McRPHg /ip4/178.209.50.28/tcp/30852/ipfs/QmVfwJUWnj7GAkQtV4cDVrNDnZEwi4oxnyZaJc7xY7zaN3 /ip4/178.209.50.28/tcp/36706/ipfs/QmWCNyBxJS9iuwCrnnA3QfcrS9Yb67WXnZTiXZsMDFj2ja /ip4/178.62.61.185/tcp/4001/ipfs/QmSoLMeWqB7YGVLJN3pNLQpmmEk35v6wYtsMGLzSr5QBU3 /ip4/180.181.245.242/tcp/4001/ipfs/QmZg57eGmSgXs8cXeGJNsBknZTxdphZH9wWLDx8TdBQrMY /ip4/185.10.68.111/tcp/4001/ipfs/QmTNjTQy6sGFG39VSunS4v1UZRfPFevtGzHwr2h1xfa5Bh /ip4/185.21.217.59/tcp/4001/ipfs/QmQ4GzeQzyW3VcBgVacKSjrUrBxEo6s7VQrrkQyQwi1sxs /ip4/185.32.221.138/tcp/4001/ipfs/QmcmTqKUdasx9xwbG2DcyY95q6GcMzx8uUC9fVqdTyETrZ /ip4/185.61.148.187/tcp/4001/ipfs/QmQ5k9N7aVGECaNBLsX9ZeYJCYvcNWcKDZ8VacV9HGUwSC /ip4/185.97.214.103/tcp/4001/ipfs/QmbKGbNNyvBe6A7kUYQtUpXZU61QiTMnGGjqBx6zuvrYyj /ip4/188.226.129.60/tcp/4001/ipfs/QmWBthnxqH6CpAA9k9XGP9TqWMZGT6UC2DZ4x9qGr7eapc 
/ip4/188.25.26.115/tcp/32349/ipfs/QmVUR2mtHXCnm7KVyEjBQe1Vdp8XWG6RXC8f8FfrnAxCGJ /ip4/188.25.26.115/tcp/53649/ipfs/QmXctexVWdB4PqAquZ6Ksmu1FwwRMiYhQNfoiaWV4iqEFn /ip4/188.40.114.11/tcp/4001/ipfs/QmZY7MtK8ZbG1suwrxc7xEYZ2hQLf1dAWPRHhjxC8rjq8E /ip4/188.40.41.114/tcp/4001/ipfs/QmUYYq1rYdhmrU7za9zrc6adLmwFBKYx3ksTVU3y1RHomm /ip4/192.124.26.250/tcp/16808/ipfs/QmUnwLT7GK8yCxHrpEELTyHVwGFhiZFwmjrq3jypG9n1k8 /ip4/192.124.26.250/tcp/21486/ipfs/QmeBT8g5ekgXaF4ZPqAi1Y8ssuQTjtzacWB7HC7ZHY8CH7 /ip4/192.131.44.99/tcp/4001/ipfs/QmWQBr5KAnCpGiQa5888DYsJc4gF7x7SDzpT6eVW2SoMMQ /ip4/192.52.2.2/tcp/4001/ipfs/QmeJENdKrdD8Bcj6iSrYPAwfQpR2K1nC8aYFkZ7wXdN9ic /ip4/194.100.58.189/tcp/4001/ipfs/QmVPCaHpUJ2eKVMSgb54zZhYRUKokNsX32C4PSRWiKWY6w /ip4/194.135.91.244/tcp/4001/ipfs/QmbE4S5EBBuY7du97ARD3BizNqpdcwQ3iH1aGyo5c8Ezmb /ip4/195.154.182.94/tcp/1031/ipfs/QmUSfsmVqD8TTgnUcPDTrd24SbWDEpnmkWWr7eqbJT2g8y /ip4/199.188.101.24/tcp/4001/ipfs/QmSsjprNEhoDZJAZYscB4G23b1dhxJ1cmiCdC5k73N8Jra /ip4/204.236.253.32/tcp/4001/ipfs/QmYf9BoND8MCHfmzihpseFc6MA6JwBV1ZvHsSMPJVW9Hww /ip4/206.190.135.76/tcp/4001/ipfs/QmTRmYCFGJLz2s5tfnHiB1kwrfrtVSxKeSPxojMioZKVH6 /ip4/212.227.249.191/tcp/4001/ipfs/QmcZrBqWBYV3RGsPuhQX11QzpKAQ8SYfMYL1dGXuPmaDYF /ip4/212.47.243.156/tcp/4001/ipfs/QmPCfdoA8aDscrfNVAhB12YYJJ2CR9mDG2WtKYFoxwL182 /ip4/213.108.213.138/tcp/4001/ipfs/QmWHo4hLG3tkmfuCot3xGCzE2a822MCNQ1mAx1tdEXVL46 /ip4/213.32.16.10/tcp/4001/ipfs/QmcWjSF6prpJwBZsfPSfzGEL61agU1vcMNCX8K6qaH5PAq /ip4/217.210.239.98/tcp/48069/ipfs/QmWGUTL6pQe4ryneBarFqnMdFwTq847a2DnWNo4oYRHxEJ /ip4/217.234.48.60/tcp/65012/ipfs/QmPPnZRcPCPxDvqgz3nyg5QshSzCzqa837ABFU4H4ZzUQP /ip4/23.250.20.244/tcp/4001/ipfs/QmUgNCzhgGvjn9DAs22mCJ7bv3sFp6PWPD6Egt9aPopjVn /ip4/34.223.212.29/tcp/1024/ipfs/QmcXwJ34KM17jkwYwGjgUFvG7zBgGGnUXRYCJdvAPTc8CB /ip4/35.154.222.183/tcp/4001/ipfs/Qmecb2A1Ki34eb4jUuaaWBH8A3rRhiLaynoq4Yj7issF1L /ip4/37.187.116.23/tcp/4001/ipfs/QmbqE6UfCJaXST3i65zbr649s8cJCUoP9m3UFUrXcNgeDn /ip4/37.187.98.185/tcp/1045/ipfs/QmS7djjNercLL4R4kbEjs6eGtxmAiuWMwnvAhP6AkFB64U /ip4/37.205.9.176/tcp/4001/ipfs/QmdX1zPzUtGJzcQm2gz6fyiaX7XgthK5d4LNSJq3rUAsiP /ip4/40.112.223.87/tcp/4001/ipfs/QmWPSzKERs6KAjb8QfSXViFqyEUn3VZYYnXjgG6hJwXWYK /ip4/45.32.155.49/tcp/4001/ipfs/QmYdn8trPQMRZEURK3BRrwh2kSMrb6r6xMoFr1AC1hRmNG /ip4/45.63.24.86/tcp/4001/ipfs/Qmd66qwujno615ZPiJZYTm12SF1c9fuHcTMSU9mA4gvuwM /ip4/49.77.250.124/tcp/20540/ipfs/QmPXWsm3wCRdyTZAeu4gEon7i1xSQ1QsWsR2X6GpAB3x6r /ip4/5.186.55.132/tcp/1024/ipfs/QmR1mXyic9jSbyzLtnBU9gjbFY8K3TFHrpvJK88LSyPnd9 /ip4/5.28.92.193/tcp/4001/ipfs/QmZ9RMTK8YrgFY7EaYsWnE2AsDNHu1rm5LqadvhFmivPWF /ip4/5.9.150.40/tcp/4737/ipfs/QmaeXrsLHWm4gbjyEUJ4NtPsF3d36mXVzY5eTBQHLdMQ19 /ip4/50.148.88.236/tcp/4001/ipfs/QmUeaH7miiLjxneP3dgJ7EgYxCe6nR16C7xyA5NDzBAcP3 /ip4/50.31.11.244/tcp/4001/ipfs/QmYMdi1e6RV7nJ4xoNUcP4CrfuNdpskzLQ6YBT4xcdaKAV /ip4/50.53.255.232/tcp/20792/ipfs/QmTaqVy1m5MLUh2vPSU64m1nqBj5n3ghovXZ48V6ThLiLj /ip4/51.254.25.17/tcp/4002/ipfs/QmdKbeXoXnMbPDfLsAFPGZDJ41bQuRNKALQSydJ66k1FfH /ip4/52.168.18.22/tcp/9001/ipfs/QmV9eRZ3uJjk461cWSPc8gYTCqWmxLxMU6SFWbDjdYAsxA /ip4/52.170.218.157/tcp/9001/ipfs/QmRZvZiZrhJdZoDruT7w2QLKTdniThNwrpNeFFdZXAzY1s /ip4/52.233.193.228/tcp/4001/ipfs/QmcdQmd42P3Mer1XQrENkpKEW9Z97ucBb5iw3bEPqFnqHe /ip4/52.53.224.174/tcp/4001/ipfs/QmdhVq4BHYLmrsatWxw8FHVCspdTabdgptUaGxW2ow2F7Q /ip4/52.7.58.3/tcp/4001/ipfs/QmdG5Y7xqrtDkVjP1dDuwWvPcVHQJjyJqG5xK62VzMth2x /ip4/54.178.171.10/tcp/4091/ipfs/QmdtfJBMitotUWBX5YZ6rYeaYRFu6zfXXMZP6fygEWK2iu /ip4/54.190.54.51/tcp/4001/ipfs/QmZobm32XH2UiGi5uAg2KabEh6kRL6x64HB56ZF3oA4awR 
/ip4/54.208.247.108/tcp/4001/ipfs/QmdDyCsGm8Zzv4uyKB4MzX8wP7QDfSfVCsCNMZV5UxNgJd /ip4/54.70.38.180/tcp/1024/ipfs/QmSHCEevPPowdJKHPwivtTW6HsShGQz5qVrFytDeW1dHDv /ip4/54.70.48.46/tcp/1030/ipfs/QmeDcUc9ytZdLcuPHwDNrN1gj415ZFHr27gPgnqJqbf1hg /ip4/54.71.244.118/tcp/4001/ipfs/QmaGYHEnjr5SwSrjP44FHGahtdk3ShPf3DBYmDrZCa1nbS /ip4/54.89.97.141/tcp/4001/ipfs/QmRjxYdkT4x3QpAWqqcz1wqXhTUYrNBm6afaYGk5DQFeY8 /ip4/58.179.165.141/tcp/4001/ipfs/QmYoXumXQYX3FknhH1drVhgqnJd2vQ1ExECLAHykA1zhJZ /ip4/63.96.220.210/tcp/4001/ipfs/QmX4SxZFMgds5b1mf3y4KKHsrLijrFvKZ6HfjZN6DkY4j5 /ip4/65.19.134.242/tcp/4001/ipfs/QmYCLRXcux9BrLSkv3SuGEW6iu7nUD7QSg3YVHcLZjS5AT /ip4/66.56.15.111/tcp/4001/ipfs/QmZxW1oKFYNhQLjypNtUZJqtZMvzk1JNAQnfGLczan2RD2 /ip4/67.174.159.210/tcp/4001/ipfs/QmRNuP6GpZ4tAMvfgXNeCB6At4uRGqqTXBusHRxFh5n8Eq /ip4/69.12.67.106/tcp/4001/ipfs/QmT1q92VyoqysvC268kegsdxeNLR8gkEgpFzmnKWfqp29V /ip4/69.61.33.241/tcp/4001/ipfs/QmTtggHgG1tjAHrHfBDBLPmUvn5BwNRpZY4qMJRXnQ7bQj /ip4/69.62.223.164/tcp/4001/ipfs/QmZrzE3Gye318CU7ZsZ3YeEnw6L7RkbhBvmfU7ebRQEF54 /ip4/71.204.170.241/tcp/4001/ipfs/QmTwvAzEoWZjFAsv9rhXrcn1XPb7qhxDVZN1Q61AnZbqmM /ip4/72.177.11.53/tcp/4001/ipfs/QmPxFX8j1zbHNzLgmeScjX7pjKho2EgzGLaiANFTjLUAb4 /ip4/75.112.252.166/tcp/11465/ipfs/QmRWC4hgiM7Tzchz2uLAN6Yt1xWptqZWYPb5AWvv2DeMhp /ip4/78.46.68.56/tcp/53378/ipfs/QmbE9eo6PXuSHAASumNVZBKvPsVpSjgRDEqoMNHJ49cBKz /ip4/78.56.33.225/tcp/4001/ipfs/QmXokcQHHxSCNZgFv28hN7dTzxbLcXpCM1MUDRXa8G9wNK /ip4/79.175.125.102/tcp/58126/ipfs/QmdDA6QfLQ5sRez6Ev15yDCdumvBuYygeNjVZqFef693Gn /ip4/80.167.121.206/tcp/4001/ipfs/QmfFB7ShRaVPEy9Bbr9fu9xG947KCZqhCTw1utBNHBwGK2 /ip4/82.119.233.36/tcp/4001/ipfs/QmY3xH9PWc4NpmupJ9KWE4r1w9XshvW6oGVeHAApuvVU2K /ip4/82.197.194.135/tcp/41271/ipfs/QmQLW2mhJYPmhYmhkA2FZwFGdEXFjnsprB5DfBxCMRdBk9 /ip4/82.227.20.27/tcp/50190/ipfs/QmY8bMNkkNZvxw1pGVi4pqiXeszZnHY9wwr1Qvyv6QmfsE /ip4/84.217.19.85/tcp/62227/ipfs/QmaD38nfW4u97DPHDLz1cYWzhWUYPKrEianJs2dKctutpf /ip4/84.217.19.85/tcp/63787/ipfs/QmXKd1pJxTqTWNgGENcX2daiGLgWRPDDsXJe8eecQCr6Vh /ip4/86.0.212.51/tcp/50000/ipfs/Qmb9ECxYmPL9sc8jRNAwpGhgjEiXVHKb2qfS8jtjN5z7Pp /ip4/88.153.7.190/tcp/17396/ipfs/QmWTyP5FFpykrfocJ14AcQcwnuSdKAnVASWuFbtqCw3RPT /ip4/88.198.52.13/tcp/4001/ipfs/QmNhwcGyu8pyCHzHS9SuVyVNbg8SjpTKyFb72oofvL4Nf5 /ip4/88.99.13.90/tcp/4001/ipfs/QmTCM4KLAF1xG4ri2JBRigmjf8CLwAzkTs6ckCQbHaArR6 /ip4/89.23.224.58/tcp/37305/ipfs/QmWqjusr86LThkYgjAbNMa8gJ55wzVufkcv5E2TFfzYZXu /ip4/89.64.51.138/tcp/47111/ipfs/Qme63idhHJ2awgkdG952iddw5Ta9nrfQB3Bpn83V1Bqgvv /ip4/91.126.106.78/tcp/21076/ipfs/QmdFZQdcLbgjK5uUaJS2EiKMs4d2oke1DdyGoHAKRMcaXk /ip4/92.222.85.0/tcp/4001/ipfs/QmTm7RdPXbvdSwKQdjEcbtm4JKv1VebzJR7RDra3DpiWd7 /ip4/93.11.115.24/tcp/34730/ipfs/QmRztqxTvxvQXWi7JbtTXijzzngpDgVYwQ2YBccVkt7qjn /ip4/93.182.128.2/tcp/39803/ipfs/Qma8oBW3GNWvNbdEzWiNWenrGtF3DhDUBcUrrsTJBiNKJ2 /ip4/95.31.15.24/tcp/4001/ipfs/QmPxgtHFqyAdby5oqLT5UJGMjPFyGHu5zQcpZ1sKYcuX75 /ip4/96.84.144.177/tcp/4001/ipfs/Qma7U9CNhPnfLit2UL88CFKvizFCZ7pnxB38N3Y5WsZwFH


Kubuxu commented Apr 17, 2017

Which ipfs version are you running?


kikoncuo commented Apr 24, 2017

@nezzard What tool are you using in your screenshot? I've seen it many times in the forums but I can't find it anywhere.


nezzard commented Apr 24, 2017

@kikoncuo It's a tool from a cloud service like Dropbox:
https://disk.yandex.ua/


nezzard commented Apr 24, 2017

@Kubuxu The latest at the time.


kikoncuo commented Apr 24, 2017

@nezzard I meant the tool you took the screenshot with, my bad.


nezzard commented Apr 25, 2017

@Kubuxu It's a tool inside the Yandex Disk program.


cpacia commented Apr 27, 2017

So let me tell you about some tweaks I've made which have helped quite a bit.

  1. I made the DHT query size param accessible from the config. Setting it to 5 or 6 speeds things up quite a bit.

  2. I also added some caching to the resolver so that if it can't find a record on the network (such as when it has expired), it loads it from the local cache. Obviously each record that is fetched updates the cache. This isn't really speed related, but it does provide a slightly better UX, as data remains available after it drops out of the DHT.

  3. Using #2 for certain types of data where it doesn't matter if it's slightly stale, like profiles, I load the record from cache and use it to return the profile. Then in the background I do the IPNS call to fetch the latest profile and update the cache. This ensures that our profile calls are nearly instant while potentially being only slightly out of date. (A rough shell sketch of this pattern follows below.)
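
Roughly, the idea in point 3 from the shell (the cache path here is purely illustrative, not the actual implementation):

$ cat ~/.ipns-cache/profile 2>/dev/null   # return the possibly-stale cached value immediately
$ ipfs name resolve /ipns/<peer-id> > ~/.ipns-cache/profile.new && mv ~/.ipns-cache/profile.new ~/.ipns-cache/profile &   # refresh the cache in the background for the next call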


whyrusleeping commented Apr 28, 2017

We can probably add flags to the ipfs name resolve api that allow selection (per resolve) of the query size parameter, and also to say "just give me whatever value you have cached".

Both of those would be simple enough to implement without actually having to change too much.
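
For what it's worth, per-resolve knobs along these lines did eventually ship in go-ipfs; check ipfs name resolve --help on your build:

$ ipfs name resolve --dht-record-count=4 --dht-timeout=10s /ipns/<peer-id>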


whyrusleeping commented Apr 28, 2017

Another thing we could do is have a command that returns IPNS results as they come in and then, when enough come in to make a decision, says "this is the best one". That way you could start working with the first one you receive, then switch to the right one when it comes in.


MichaelMure commented May 26, 2017

I have some trouble with IPNS as well. I have a Linux box and a Windows box on the same LAN running ipfs 0.4.9, and I can't resolve IPNS addresses published from the other side, even after several minutes. I have 400 peers connected on one side, 250 on the other.

@cpacia are your changes in a branch somewhere? That looks like a very handy addition for my project.


MichaelMure commented May 26, 2017

Answering myself: the fork is here: https://github.com/OpenBazaar/go-ipfs

@whyrusleeping any idea how I can debug this issue?


whyrusleeping commented May 26, 2017

@MichaelMure you can't resolve at all? Or is it just very slow?


MichaelMure commented May 26, 2017

Sometimes it just takes a while before it's able to resolve, and once it has resolved once, it works properly. But in this case it didn't resolve at all, even after 30 minutes. It might be another issue, but without a way to find out what's going on inside ipfs, well...


nezzard commented Jun 21, 2017

I think IPNS is too slow to be usable. You can check:
http://ipfs.artpixel.com.ua/

It takes 15-20 seconds to load.


hhff commented Jul 1, 2017

I'm also experiencing massive resolution times with IPNS. Same behavior over here: the first resolution can take multiple minutes, then once it's loaded, I can refresh the content in under a second.

If I leave it for a few minutes and then do another refresh, the request cycle repeats the same behavior.

The "cache" for the resolution only appears to stay warm for a short period of time.


hhff commented Jul 1, 2017

I'm using a CNAME with _dnslink, for what it's worth.

Content is at www.ember-cli-deploy-ipfs.com
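
For context, the DNS side of a setup like this is a single TXT record (value below is illustrative; it points at whatever was last deployed):

$ dig +short TXT _dnslink.www.ember-cli-deploy-ipfs.com
"dnslink=/ipfs/<cid-of-latest-deploy>"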


alexandre1985 commented Aug 24, 2017

ipfs is unusable for me. I have the daemon running on both of my computers inside a LAN, with one "serving" a file (a video) that the other doesn't have. When I try to access that video from the PC that doesn't have the file, using localhost:8080/ipfs/... in my browser, the video keeps stopping and takes a huge amount of time to load. So huge that I can't watch the video.
When I netcat that video and pipe it through mplayer to the other computer, the stream plays great.
So this is a problem with ipfs, and it has serious performance issues. So serious that the technology isn't worth using (as of today, 2017-08-24).
IPFS isn't delivering what it promised. Very disappointed.

@whyrusleeping whyrusleeping added this to the Ipfs 0.4.12 milestone Sep 2, 2017


kesar commented Sep 2, 2017

Very disappointed

You should ask for a refund 👍


alexandre1985 commented Sep 3, 2017

@kesar I mean this out of love. @jbenet (Juan Benet) says that it is going to release us from the backbone, but currently ipfs network performance is very weak.
I would like ipfs to succeed, but how can that be if I can watch a video faster through the backbone than through ipfs hosting the video file inside my LAN?
The performance of ipfs in this respect is weak, to put it modestly. You should try this experiment yourself.


Calmarius commented Oct 8, 2017

It took me more than a minute to resolve a name published by my own computer... And it's not the DNS resolution; it hangs while resolving the actual IPNS entry.

$ time ipfs resolve /ipns/QmQqR8R9nfFkWYH9P7xNPtAry8tT63miNyZwt121uXsmSU
/ipfs/QmQunuPzcLp2FiKwMDucJi957SrB8BygKA4C4J4h7VG4M9

real	1m0.078s
user	0m0.060s
sys	0m0.008s

Stebalien commented Oct 9, 2017

We're working on fixing some low-hanging fruit in the DHT that should alleviate this: libp2p/go-libp2p-kad-dht#88. You can expect this to appear in a release in a month or so (0.4.12 or 0.4.13).

We're also working on bypassing the DHT for recently accessed IPNS addresses by using pubsub (#4047). However, that will likely remain under an experimental flag for a while, as our current pubsub implementation is very naive.
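
Once that lands, the experimental resolver can be enabled with a daemon flag (verify against ipfs daemon --help on your version):

$ ipfs daemon --enable-namesys-pubsub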


phr34k0 commented Oct 14, 2017

@Stebalien Can't wait for the update. Reverting back to plain IPFS paths solves the performance problem; IPNS just takes a million years to load. I'll just go back to Zeronet.io for what it's worth, it's 10x better. And damn, #2105... 2015. Two years of the same problem. Woah


whyrusleeping commented Apr 23, 2018

@inetic the selector for public keys does just pick the first valid record, because all public key records are the same. In that case, yes, it is wasteful to wait for 16 records.

Though since public keys are cached, and can also be obtained through other methods, the slowness of IPNS is rarely due to public key retrieval. The slowness happens because for IPNS records we need the 16 values; this is probabilistic protection against an eclipse attack on a given value. As that number gets smaller, it becomes exponentially easier for an attacker to pull the attack off and give the wrong record to their victim.

That said, in this case the 'wrong' record must still be a valid record (signed, and with a 'live' TTL). So a successful attack is either censorship by wrong value (giving the victim 16 records with valid signatures but bad TTLs), or a slightly older value.


karalabe commented Apr 23, 2018

Can the TTL be controlled by the user? E.g. can I say I want an IPNS entry to be valid for only 5 mins?


whyrusleeping commented Apr 23, 2018

@karalabe yes, via the --lifetime flag. There is also a --ttl flag, but that affects a different TTL than the record-validity one.
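
Concretely (values here are only examples):

$ ipfs name publish --lifetime=5m /ipfs/<cid>   # record validity: resolvers treat the record as expired after 5 minutes
$ ipfs name publish --ttl=1m /ipfs/<cid>        # cache hint: resolvers may reuse a cached result for 1 minute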


leonprou commented Apr 29, 2018

I tried to make it faster by resolving the IPNS name of my own node, but there was no difference.

Can this work like IPFS queries? When I query my ipfs node for a pinned file, it takes no time because the node doesn't need to communicate with the network. The same could be done with IPNS, I think. I've found the "decentralize, but store your stuff locally" approach pretty useful while IPFS is in alpha and features are missing.

Please correct me if I'm wrong, I've just started exploring the InterPlanetary 😊


Kubuxu commented Apr 29, 2018

It is most likely caused by libp2p/go-libp2p-kad-dht#139. In short, we don't push enough DHT records to resolve IPNS without hitting the timeout, and it is slow because of that.


whyrusleeping commented Apr 30, 2018

Also note that we are actively working on a resolution for issue 139 linked above. It's quite involved.


Stebalien commented May 1, 2018

@leonprou

Can this work like IPFS queries?

Unfortunately, no. IPNS links are mutable so we always need to ask the network if there's a newer version available. Regardless of what we do, we'll have to make at least one network request to find the latest value (unless you're using IPNS over pubsub).


leonprou commented May 2, 2018

@Stebalien thanks for explaining this!


karalabe commented May 2, 2018

I did a bit of benchmarking of IPNS. I'm trying to figure out how it performs under different constraints and whether there are more anomalies in the code than networking would allow (i.e. bugs).

My benchmark ran for about one and a half hours. There were two hosts involved in separate locations (though same city and ISP):

  • One of the hosts was publishing IPNS entries for the same private key as fast as it could
  • The other node was resolving the IPNS entry with caching disabled

I also reduced resolution settings to “DHTRecordCount = 3” and “DHTTimeout = 9s”.

[chart: IPNS publish and resolution timings over the benchmark run]

The interesting observations from the above chart:

  • The active peer count of a node doesn't seem to affect the IPNS publish time (red line on the chart). Pushing out an IPNS entry on a freshly started node takes about the same time as it takes for a node with 1700 active connections. This might be obvious to an implementer, not so much from the outside.
  • Publication time (red line on the chart) always takes either exactly 1 min, exactly 1 min 30 sec, or exactly 2 mins. Occasionally I've seen 1:15 and 2:30, but those were outliers in occurrence count. This seems to signal that pushing IPNS entries into the DHT is triggering different timeouts, but at no point is it a naturally successful publish. The timings are simply too perfect and too uniform. I don't think IPNS publishing works correctly currently.
  • Discovery of a new IPNS entry (red circles without the line) seems to be fairly stable, managing to find a newly published entry within about 10-30 seconds after publication finishes. The chart is a bit off, as sometimes I even discover the entry before the publish finishes, but that's not plotted currently.
  • Resolving to stale entries (green squares), where IPNS was already updated but the node does not detect it, seems to be a fairly low-occurrence event, though it almost always happens a bit.
  • Resolution times (blue triangle scatter plot) seem to always hover around the 9-second mark, meaning that given the current benchmark of updating IPNS like crazy, the network never really stabilizes enough to resolve without hitting the timeout.
  • The curious thing in IPNS resolution (blue triangles) is that sometimes we get correct resolution after 15-20 seconds, and a couple of times after 1 minute. These point to bugs in the codebase, since a 9-second timeout should always be honored, at worst failing the resolution. It should most definitely never hang for 7 times the timeout allowance.

I'll keep playing around with these things and post any other interesting things I find. Note, these runs are not using the pubsub system; I'm interested in worst-case performance, not ideal scenarios where the stars align and everybody is online at the same time.
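
The resolve side of this boils down to a loop like the following sketch (--nocache disables the local resolver cache):

$ while true; do time ipfs name resolve --nocache /ipns/<peer-id>; done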


dirkmc commented May 2, 2018

@karalabe interesting findings, thanks for taking the time to look into this


dirkmc commented May 2, 2018

With respect to timeouts often being almost exactly 60 seconds, read this comment (and Stebalien's reply):
libp2p/go-libp2p-kad-dht#88 (comment)


karalabe commented May 2, 2018

@dirkmc Yes, that does explain the 1-minute timeout, but I still consistently get 1:30 and 2:00 too, which means there are other timeouts and we're often hitting combinations of them. I don't mind a publish "failing" after 1 minute, but it would be nice to have consistent behavior, or at least an explanation for the 3 different timeout cases.


karalabe commented May 2, 2018

@dirkmc Oh, that's a different thread than the one I read. I'll read up on it. Thanks for the link!


Powersource commented Jul 29, 2018

As a quick-fix, could ipfs name publish get an option for lowering the timeout?


whyrusleeping commented Jul 30, 2018

@Powersource it has one, just pass --timeout=2s or whatever value you want. Also make sure you're using the latest release (at least 0.4.17); things have gotten a bit better.
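
--timeout is a global option, so it applies to any subcommand, e.g.:

$ ipfs name publish --timeout=2s /ipfs/<cid>
$ ipfs name resolve --timeout=2s /ipns/<peer-id>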


Powersource commented Jul 30, 2018

Ah nice. That's not documented in --help, FYI.


whyrusleeping commented Jul 30, 2018

Yeah... the way we handle global options in the helptext needs work. There's an open issue for it.


Powersource commented Jul 30, 2018

When I set a timeout it always seems to fail with "context deadline exceeded", even when I set it to 60s.


magik6k commented Aug 4, 2018

Note: #5232 should help for some use cases


beenotung commented Sep 6, 2018

As pointed out by MM at #3860 (comment),
it seems like a mismatch for a p2p application (ipfs) to be built on a hierarchical network (tcp/ip).
When node addresses are exchanged, the nodes may not be accessible due to router configuration.
Would it be better if we were using NDN? It looks like a virtual packet-switching network.


Stebalien commented Sep 6, 2018

NDN does look like an interesting alternative to a DHT, one that may be able to better take the underlying internet topology into account. We currently have experimental support for pubsub-based name resolution; NDN looks like a more generalized pubsub network.

(Note: IPFS is actually an overlay network on top of {tcp,udp}/ip, using libp2p.)


kisulken commented Sep 6, 2018

If you just want to use IPFS in a local network, you might as well add all the local addresses in use to the IPFS bootstrap list, and make sure to clear the default list beforehand. If all the nodes are in each other's bootstrap lists, then publishing and resolving will happen much faster.
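
A sketch of that LAN-only setup (the address and peer ID are placeholders for your own nodes'):

$ ipfs bootstrap rm --all
$ ipfs bootstrap add /ip4/192.168.1.10/tcp/4001/ipfs/<peer-id-of-other-lan-node>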


playground commented Sep 15, 2018

IPNS resolution time is very inconsistent: judging by the timings below, it ranges from 0.048s to over 6 minutes.

 someuser  ~/git_repo/sandbox/ipfs  time ipfs name resolve QmRknUZ69ifupZion2DCMrerEwUZgApqa5dsqEzuF1GYpQ
/ipfs/QmaM16fD27B2NLPtbTTbhGK9X57DvkRdi7TLyqkCv11k4E
ipfs name resolve QmRknUZ69ifupZion2DCMrerEwUZgApqa5dsqEzuF1GYpQ  0.03s user 0.02s system 0% cpu 4:09.32 total
 someuser  ~/git_repo/sandbox/ipfs  time ipfs name resolve QmRknUZ69ifupZion2DCMrerEwUZgApqa5dsqEzuF1GYpQ
/ipfs/QmaM16fD27B2NLPtbTTbhGK9X57DvkRdi7TLyqkCv11k4E
ipfs name resolve QmRknUZ69ifupZion2DCMrerEwUZgApqa5dsqEzuF1GYpQ  0.04s user 0.02s system 0% cpu 1:00.06 total
 someuser  ~/git_repo/sandbox/ipfs  time ipfs name resolve QmRknUZ69ifupZion2DCMrerEwUZgApqa5dsqEzuF1GYpQ
/ipfs/QmaM16fD27B2NLPtbTTbhGK9X57DvkRdi7TLyqkCv11k4E
ipfs name resolve QmRknUZ69ifupZion2DCMrerEwUZgApqa5dsqEzuF1GYpQ  0.03s user 0.01s system 92% cpu 0.051 total
 someuser  ~/git_repo/sandbox/ipfs  time ipfs name resolve QmRknUZ69ifupZion2DCMrerEwUZgApqa5dsqEzuF1GYpQ
/ipfs/QmaM16fD27B2NLPtbTTbhGK9X57DvkRdi7TLyqkCv11k4E
ipfs name resolve QmRknUZ69ifupZion2DCMrerEwUZgApqa5dsqEzuF1GYpQ  0.03s user 0.01s system 86% cpu 0.050 total
 someuser  ~/git_repo/sandbox/ipfs  time ipfs name resolve QmRknUZ69ifupZion2DCMrerEwUZgApqa5dsqEzuF1GYpQ
/ipfs/QmaM16fD27B2NLPtbTTbhGK9X57DvkRdi7TLyqkCv11k4E
ipfs name resolve QmRknUZ69ifupZion2DCMrerEwUZgApqa5dsqEzuF1GYpQ  0.03s user 0.01s system 89% cpu 0.048 total
 someuser  ~/git_repo/sandbox/ipfs  time ipfs name resolve QmRknUZ69ifupZion2DCMrerEwUZgApqa5dsqEzuF1GYpQ
/ipfs/QmaM16fD27B2NLPtbTTbhGK9X57DvkRdi7TLyqkCv11k4E
ipfs name resolve QmRknUZ69ifupZion2DCMrerEwUZgApqa5dsqEzuF1GYpQ  0.04s user 0.01s system 91% cpu 0.055 total
 someuser  ~/git_repo/sandbox/ipfs  time ipfs name resolve QmTwKUDjxfk5jaAvfoNCR5iESN6LW73QV8EfJBTNeWE1Gw
/ipfs/QmNayPSwvoasgu4VN3d2JQLrosQ8cEPdyjPCuvyT8G4idr
ipfs name resolve QmTwKUDjxfk5jaAvfoNCR5iESN6LW73QV8EfJBTNeWE1Gw  0.03s user 0.01s system 4% cpu 0.988 total
 someuser  ~/git_repo/sandbox/ipfs  time ipfs name resolve QmRknUZ69ifupZion2DCMrerEwUZgApqa5dsqEzuF1GYpQ
/ipfs/QmaM16fD27B2NLPtbTTbhGK9X57DvkRdi7TLyqkCv11k4E
ipfs name resolve QmRknUZ69ifupZion2DCMrerEwUZgApqa5dsqEzuF1GYpQ  0.04s user 0.02s system 0% cpu 6:13.67 total
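
One thing worth trying against this inconsistency: ipfs name resolve exposes DHT tuning flags that trade result freshness for latency. A hedged sketch, assuming a go-ipfs build that has these flags (check ipfs name resolve --help):

    # accept fewer DHT records and cap the DHT search time
    ipfs name resolve --dht-record-count=4 --dht-timeout=10s QmRknUZ69ifupZion2DCMrerEwUZgApqa5dsqEzuF1GYpQ

A lower record count and a shorter timeout return sooner, but may hand back a slightly stale pointer.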

@novusabeo

novusabeo commented Oct 17, 2018

Same issue here. Resolve times are fairly consistent for me, in a range of roughly 30 seconds to a minute and a half.

And just throwing this out there, as I know it may be an echo: for what IPFS is and what it can do in general, a 30s-1min delay is still pretty impressive given the scope. Still, I am curious what is causing it.

Would more peers in the network help resolve this -- where are we with this?
I'm thinking more and more that IPFS/IPNS isn't the problem; maybe networks and hardware aren't ready for what it can really do...

@Stebalien

Contributor

Stebalien commented Oct 23, 2018

The issue is mostly NATs. There are a bunch of peers in the DHT behind NATs so we have to try (and fail) dialing a bunch of peers before we can find dialable ones. @vyzo is working on two fixes:

  1. Better/reliable NAT detection.
  2. Auto-relay (nodes behind NATs will automatically use relays).

On top of this, I'd like to make NATed nodes operate in DHT client-only mode but that depends on (1).
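
For anyone wanting to experiment once auto-relay ships: in go-ipfs it is exposed as a config option (0.4.19 and later; the option name here is from that release and may change):

    # let a NATed node dial through relays and advertise relay addresses
    ipfs config --json Swarm.EnableAutoRelay true

Restart the daemon after changing the config.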

@novusabeo

novusabeo commented Oct 23, 2018

Would initializing ipfs with the --server profile help with this? I am seeing this problem not only when my local node accesses files through IPNS, but also when I fetch content directly by its hash, without publishing anything to my peer ID.

Yet accessing the same content from outside my local network, say from a phone, works perfectly (from the plain hash, that is; IPNS is slow regardless). Not to be paranoid, but could this be related to net neutrality and ISPs throttling P2P traffic? I'd imagine the process of reaching the IPFS network through the gateway looks similar. I do see how NAT detection needs improvement -- maybe some sort of DDNS would help (IPDNS).
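
For context, applying the server profile is a standard go-ipfs operation; it mainly stops the node from dialing local/private address ranges, rather than speeding up DHT lookups:

    # at init time
    ipfs init --profile=server
    # or on an existing repo, then restart the daemon
    ipfs config profile apply server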

@novusabeo

novusabeo commented Oct 23, 2018

Does setting a timeout manually help?
Also, is there any possibility this could be related to the browser, caching, etc.? Browsers certainly aren't used to IPFS.
