IPNS very slow #3860

Open · nezzard opened this issue Apr 11, 2017 · 90 comments

@nezzard commented Apr 11, 2017

Hi, is it normal that IPNS loads very slowly?
I tried to build something like a CMS with dynamic content, but IPNS is too slow. When I load the site via IPNS, the first load is very slow; if I reload the page right after, it loads quickly. But if I reload after a few minutes, it loads slowly again.

@whyrusleeping (Member) commented Apr 11, 2017

@nezzard This is generally a known issue, but providing more information is helpful. Are you resolving from your local node? Or are you resolving through the gateway?

@nezzard (Author) commented Apr 11, 2017

@whyrusleeping through the local node, but sometimes the gateway is faster, sometimes local is faster.
So, for now, I can't use IPNS normally?

@whyrusleeping (Member) commented Apr 12, 2017

@nezzard when using it locally, how many peers do you have connected? (ipfs swarm peers) The primary slowdown of IPNS is connecting to enough of the right peers on the DHT; once that's warmed up, it should be faster.

DHT-based IPNS isn't as fast as something more centralized, but you can generally cache the results for longer than ipfs caches them. We should take a look at making these caches more configurable, and look into other IPNS slowdowns.

When you say it's 'very slow', what time range exactly are you experiencing? 1-5 seconds? 5-10? 10+?
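
A quick way to gather both numbers from a shell, assuming a running daemon (substitute your own peer ID):

    # how many peers the node is currently connected to
    ipfs swarm peers | wc -l

    # wall-clock time for a fresh IPNS resolution
    time ipfs name resolve /ipns/<your-peer-id>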

@nezzard (Author) commented Apr 12, 2017

@whyrusleeping Sometimes it's really fast, sometimes I get this:
https://yadi.sk/i/mL6Q4OFX3Gu2nk

ipfs swarm peers output:
/ip4/104.131.131.82/tcp/4001/ipfs/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ /ip4/104.131.180.155/tcp/4001/ipfs/QmeXAm1zdLbPaA9wVemaCjbeJgWsCrH4oSCrK2F92yWnbm /ip4/104.133.2.68/tcp/53366/ipfs/QmTAmvzNBsicnajpLTUnVqcPankP3pNDoqHpAtUNkK2rU7 /ip4/104.155.150.120/tcp/4001/ipfs/Qmep8LtipXUG4WSNgJGEtwmuaQQt77wRDL5nkMpZyDqrD3 /ip4/104.236.169.138/tcp/4001/ipfs/QmYodPH2C6xEYFPxNhK4how1frPdXFWVrZ3QGynTFCFfBe /ip4/104.236.176.52/tcp/4001/ipfs/QmSoLnSGccFuZQJzRadHn95W2CrSFmZuTdDWP8HXaHca9z /ip4/104.236.176.59/tcp/4001/ipfs/QmQ8MYL1ANybPTM5uamhzTnPwDwCFgfrdpYo9cwiEmVsge /ip4/104.236.179.241/tcp/4001/ipfs/QmSoLPppuBtQSGwKDZT2M73ULpjvfd3aZ6ha4oFGL1KrGM /ip4/104.236.76.40/tcp/4001/ipfs/QmSoLV4Bbm51jM9C4gDYZQ9Cy3U6aXMJDAbzgu2fzaDs64 /ip4/104.40.212.43/tcp/4001/ipfs/QmcvFeaip7B3RDmLU9MgqGcCRv881Citnv5cHkrTSusZD6 /ip4/106.246.181.100/tcp/4001/ipfs/QmQ6TbUShnjKbnJDSYdxaBb78Dz6fF82NMetDKnau3k7zW /ip4/108.161.120.136/tcp/27040/ipfs/QmNRM8W3u6gxAvm8WqSXqCVC6Wzknq66tdET6fLGh8zCVk /ip4/108.28.144.234/tcp/5002/ipfs/QmWfjhgBWjwiesWQPCC4CSV4q83vyBdSA6LRSaZLLCZoVH /ip4/112.196.16.84/tcp/4002/ipfs/QmbELjeVvfpbGYNcC4j4PPr6mnssp6jKWd4D6Jht8jDhiW /ip4/113.253.98.194/tcp/54388/ipfs/QmcL9BdiHQbRng6PvDzbJye7yG73ttNAkhA5hLGn22StM8 /ip4/121.122.82.230/tcp/58960/ipfs/QmPz9uv4HUP1er5TGaaoc4NVCbN8VFMrf5gwvxfmtSAmGv /ip4/128.199.219.111/tcp/4001/ipfs/QmSoLSafTMBsPKadTEgaXctDQVcqN88CNLHXMkTNwMKPnu /ip4/128.32.112.184/tcp/4001/ipfs/QmeM9rJsk6Ke57xMwMuCkJBb9pYGx7qVRkgzVD6zhxPaBx /ip4/128.32.153.243/tcp/1030/ipfs/QmYoH11GjCyoQW4HyZtSZcL8BqBuudaXWi1pdYyy1AroFd /ip4/134.71.135.172/tcp/4001/ipfs/QmU3q7GxnnhJabNh3ukDq2QsnzwzVpcT5FEPBcJcRu3Wq1 /ip4/138.201.53.216/tcp/4001/ipfs/QmWmJfJKfJmKtRqsTnygmWgJfsmHnXo4p3Uc1Atf8N5iQ5 /ip4/139.162.191.34/tcp/4001/ipfs/QmYfmBh8Pud13uwc5mbtCGRgYbxzsipY87xgjdj2TGeJWm /ip4/142.4.211.131/tcp/4001/ipfs/QmWWWLYe16uU53wPgdP3V5eEb8QRwoqUb35h5EMWoEyWaJ /ip4/159.203.77.184/tcp/4001/ipfs/QmeLGqhi5dFBpxD4xuzAWWcoip69i5SaneXL9Jb83sxSXo /ip4/163.172.222.20/tcp/4001/ipfs/Qmd4up4kjr8TNWc4rx6r4bFwpe6TJQjVVmfwtiv4q3FSPx /ip4/167.114.2.68/tcp/4001/ipfs/QmfY24aJDGyPyUJVyzL1QHPoegmFKuuScoCKrBk9asoTFG /ip4/168.235.149.174/tcp/4001/ipfs/QmbPFhS9YwUxE4rPeaqd7Vn6GEESd1MUUM67ECtYchHyFB /ip4/168.235.79.131/tcp/4001/ipfs/QmaqsmhXtQfKfiWi3jXdb4PxrN8JNi2zmXN13MDEktjK8H /ip4/168.235.90.18/tcp/4001/ipfs/QmWtA6WFyo44pYzQzHFtrtMWPHZiFEDFjUWihEY49obZ1e /ip4/169.231.33.236/tcp/55897/ipfs/QmQyTC3Bg2BkctdisKBvWPoG8Avr7HMrnNMNJS25ubjVUU /ip4/173.95.181.110/tcp/42615/ipfs/QmTxQ2Bv9gppcNvzAtRJiwNAahVhkUHxFt5mMYkW9qPjE6 /ip4/176.9.85.5/tcp/4001/ipfs/QmNUZW8yuNxdLSPMwvaafiMVN8fof5r2PrsUJAgyAn8Udb /ip4/178.19.251.249/tcp/4401/ipfs/QmR2FRyigN82VJc3MFZNz79L8Hunc3XvfAxU3eA3McRPHg /ip4/178.209.50.28/tcp/30852/ipfs/QmVfwJUWnj7GAkQtV4cDVrNDnZEwi4oxnyZaJc7xY7zaN3 /ip4/178.209.50.28/tcp/36706/ipfs/QmWCNyBxJS9iuwCrnnA3QfcrS9Yb67WXnZTiXZsMDFj2ja /ip4/178.62.61.185/tcp/4001/ipfs/QmSoLMeWqB7YGVLJN3pNLQpmmEk35v6wYtsMGLzSr5QBU3 /ip4/180.181.245.242/tcp/4001/ipfs/QmZg57eGmSgXs8cXeGJNsBknZTxdphZH9wWLDx8TdBQrMY /ip4/185.10.68.111/tcp/4001/ipfs/QmTNjTQy6sGFG39VSunS4v1UZRfPFevtGzHwr2h1xfa5Bh /ip4/185.21.217.59/tcp/4001/ipfs/QmQ4GzeQzyW3VcBgVacKSjrUrBxEo6s7VQrrkQyQwi1sxs /ip4/185.32.221.138/tcp/4001/ipfs/QmcmTqKUdasx9xwbG2DcyY95q6GcMzx8uUC9fVqdTyETrZ /ip4/185.61.148.187/tcp/4001/ipfs/QmQ5k9N7aVGECaNBLsX9ZeYJCYvcNWcKDZ8VacV9HGUwSC /ip4/185.97.214.103/tcp/4001/ipfs/QmbKGbNNyvBe6A7kUYQtUpXZU61QiTMnGGjqBx6zuvrYyj /ip4/188.226.129.60/tcp/4001/ipfs/QmWBthnxqH6CpAA9k9XGP9TqWMZGT6UC2DZ4x9qGr7eapc 
/ip4/188.25.26.115/tcp/32349/ipfs/QmVUR2mtHXCnm7KVyEjBQe1Vdp8XWG6RXC8f8FfrnAxCGJ /ip4/188.25.26.115/tcp/53649/ipfs/QmXctexVWdB4PqAquZ6Ksmu1FwwRMiYhQNfoiaWV4iqEFn /ip4/188.40.114.11/tcp/4001/ipfs/QmZY7MtK8ZbG1suwrxc7xEYZ2hQLf1dAWPRHhjxC8rjq8E /ip4/188.40.41.114/tcp/4001/ipfs/QmUYYq1rYdhmrU7za9zrc6adLmwFBKYx3ksTVU3y1RHomm /ip4/192.124.26.250/tcp/16808/ipfs/QmUnwLT7GK8yCxHrpEELTyHVwGFhiZFwmjrq3jypG9n1k8 /ip4/192.124.26.250/tcp/21486/ipfs/QmeBT8g5ekgXaF4ZPqAi1Y8ssuQTjtzacWB7HC7ZHY8CH7 /ip4/192.131.44.99/tcp/4001/ipfs/QmWQBr5KAnCpGiQa5888DYsJc4gF7x7SDzpT6eVW2SoMMQ /ip4/192.52.2.2/tcp/4001/ipfs/QmeJENdKrdD8Bcj6iSrYPAwfQpR2K1nC8aYFkZ7wXdN9ic /ip4/194.100.58.189/tcp/4001/ipfs/QmVPCaHpUJ2eKVMSgb54zZhYRUKokNsX32C4PSRWiKWY6w /ip4/194.135.91.244/tcp/4001/ipfs/QmbE4S5EBBuY7du97ARD3BizNqpdcwQ3iH1aGyo5c8Ezmb /ip4/195.154.182.94/tcp/1031/ipfs/QmUSfsmVqD8TTgnUcPDTrd24SbWDEpnmkWWr7eqbJT2g8y /ip4/199.188.101.24/tcp/4001/ipfs/QmSsjprNEhoDZJAZYscB4G23b1dhxJ1cmiCdC5k73N8Jra /ip4/204.236.253.32/tcp/4001/ipfs/QmYf9BoND8MCHfmzihpseFc6MA6JwBV1ZvHsSMPJVW9Hww /ip4/206.190.135.76/tcp/4001/ipfs/QmTRmYCFGJLz2s5tfnHiB1kwrfrtVSxKeSPxojMioZKVH6 /ip4/212.227.249.191/tcp/4001/ipfs/QmcZrBqWBYV3RGsPuhQX11QzpKAQ8SYfMYL1dGXuPmaDYF /ip4/212.47.243.156/tcp/4001/ipfs/QmPCfdoA8aDscrfNVAhB12YYJJ2CR9mDG2WtKYFoxwL182 /ip4/213.108.213.138/tcp/4001/ipfs/QmWHo4hLG3tkmfuCot3xGCzE2a822MCNQ1mAx1tdEXVL46 /ip4/213.32.16.10/tcp/4001/ipfs/QmcWjSF6prpJwBZsfPSfzGEL61agU1vcMNCX8K6qaH5PAq /ip4/217.210.239.98/tcp/48069/ipfs/QmWGUTL6pQe4ryneBarFqnMdFwTq847a2DnWNo4oYRHxEJ /ip4/217.234.48.60/tcp/65012/ipfs/QmPPnZRcPCPxDvqgz3nyg5QshSzCzqa837ABFU4H4ZzUQP /ip4/23.250.20.244/tcp/4001/ipfs/QmUgNCzhgGvjn9DAs22mCJ7bv3sFp6PWPD6Egt9aPopjVn /ip4/34.223.212.29/tcp/1024/ipfs/QmcXwJ34KM17jkwYwGjgUFvG7zBgGGnUXRYCJdvAPTc8CB /ip4/35.154.222.183/tcp/4001/ipfs/Qmecb2A1Ki34eb4jUuaaWBH8A3rRhiLaynoq4Yj7issF1L /ip4/37.187.116.23/tcp/4001/ipfs/QmbqE6UfCJaXST3i65zbr649s8cJCUoP9m3UFUrXcNgeDn /ip4/37.187.98.185/tcp/1045/ipfs/QmS7djjNercLL4R4kbEjs6eGtxmAiuWMwnvAhP6AkFB64U /ip4/37.205.9.176/tcp/4001/ipfs/QmdX1zPzUtGJzcQm2gz6fyiaX7XgthK5d4LNSJq3rUAsiP /ip4/40.112.223.87/tcp/4001/ipfs/QmWPSzKERs6KAjb8QfSXViFqyEUn3VZYYnXjgG6hJwXWYK /ip4/45.32.155.49/tcp/4001/ipfs/QmYdn8trPQMRZEURK3BRrwh2kSMrb6r6xMoFr1AC1hRmNG /ip4/45.63.24.86/tcp/4001/ipfs/Qmd66qwujno615ZPiJZYTm12SF1c9fuHcTMSU9mA4gvuwM /ip4/49.77.250.124/tcp/20540/ipfs/QmPXWsm3wCRdyTZAeu4gEon7i1xSQ1QsWsR2X6GpAB3x6r /ip4/5.186.55.132/tcp/1024/ipfs/QmR1mXyic9jSbyzLtnBU9gjbFY8K3TFHrpvJK88LSyPnd9 /ip4/5.28.92.193/tcp/4001/ipfs/QmZ9RMTK8YrgFY7EaYsWnE2AsDNHu1rm5LqadvhFmivPWF /ip4/5.9.150.40/tcp/4737/ipfs/QmaeXrsLHWm4gbjyEUJ4NtPsF3d36mXVzY5eTBQHLdMQ19 /ip4/50.148.88.236/tcp/4001/ipfs/QmUeaH7miiLjxneP3dgJ7EgYxCe6nR16C7xyA5NDzBAcP3 /ip4/50.31.11.244/tcp/4001/ipfs/QmYMdi1e6RV7nJ4xoNUcP4CrfuNdpskzLQ6YBT4xcdaKAV /ip4/50.53.255.232/tcp/20792/ipfs/QmTaqVy1m5MLUh2vPSU64m1nqBj5n3ghovXZ48V6ThLiLj /ip4/51.254.25.17/tcp/4002/ipfs/QmdKbeXoXnMbPDfLsAFPGZDJ41bQuRNKALQSydJ66k1FfH /ip4/52.168.18.22/tcp/9001/ipfs/QmV9eRZ3uJjk461cWSPc8gYTCqWmxLxMU6SFWbDjdYAsxA /ip4/52.170.218.157/tcp/9001/ipfs/QmRZvZiZrhJdZoDruT7w2QLKTdniThNwrpNeFFdZXAzY1s /ip4/52.233.193.228/tcp/4001/ipfs/QmcdQmd42P3Mer1XQrENkpKEW9Z97ucBb5iw3bEPqFnqHe /ip4/52.53.224.174/tcp/4001/ipfs/QmdhVq4BHYLmrsatWxw8FHVCspdTabdgptUaGxW2ow2F7Q /ip4/52.7.58.3/tcp/4001/ipfs/QmdG5Y7xqrtDkVjP1dDuwWvPcVHQJjyJqG5xK62VzMth2x /ip4/54.178.171.10/tcp/4091/ipfs/QmdtfJBMitotUWBX5YZ6rYeaYRFu6zfXXMZP6fygEWK2iu /ip4/54.190.54.51/tcp/4001/ipfs/QmZobm32XH2UiGi5uAg2KabEh6kRL6x64HB56ZF3oA4awR 
/ip4/54.208.247.108/tcp/4001/ipfs/QmdDyCsGm8Zzv4uyKB4MzX8wP7QDfSfVCsCNMZV5UxNgJd /ip4/54.70.38.180/tcp/1024/ipfs/QmSHCEevPPowdJKHPwivtTW6HsShGQz5qVrFytDeW1dHDv /ip4/54.70.48.46/tcp/1030/ipfs/QmeDcUc9ytZdLcuPHwDNrN1gj415ZFHr27gPgnqJqbf1hg /ip4/54.71.244.118/tcp/4001/ipfs/QmaGYHEnjr5SwSrjP44FHGahtdk3ShPf3DBYmDrZCa1nbS /ip4/54.89.97.141/tcp/4001/ipfs/QmRjxYdkT4x3QpAWqqcz1wqXhTUYrNBm6afaYGk5DQFeY8 /ip4/58.179.165.141/tcp/4001/ipfs/QmYoXumXQYX3FknhH1drVhgqnJd2vQ1ExECLAHykA1zhJZ /ip4/63.96.220.210/tcp/4001/ipfs/QmX4SxZFMgds5b1mf3y4KKHsrLijrFvKZ6HfjZN6DkY4j5 /ip4/65.19.134.242/tcp/4001/ipfs/QmYCLRXcux9BrLSkv3SuGEW6iu7nUD7QSg3YVHcLZjS5AT /ip4/66.56.15.111/tcp/4001/ipfs/QmZxW1oKFYNhQLjypNtUZJqtZMvzk1JNAQnfGLczan2RD2 /ip4/67.174.159.210/tcp/4001/ipfs/QmRNuP6GpZ4tAMvfgXNeCB6At4uRGqqTXBusHRxFh5n8Eq /ip4/69.12.67.106/tcp/4001/ipfs/QmT1q92VyoqysvC268kegsdxeNLR8gkEgpFzmnKWfqp29V /ip4/69.61.33.241/tcp/4001/ipfs/QmTtggHgG1tjAHrHfBDBLPmUvn5BwNRpZY4qMJRXnQ7bQj /ip4/69.62.223.164/tcp/4001/ipfs/QmZrzE3Gye318CU7ZsZ3YeEnw6L7RkbhBvmfU7ebRQEF54 /ip4/71.204.170.241/tcp/4001/ipfs/QmTwvAzEoWZjFAsv9rhXrcn1XPb7qhxDVZN1Q61AnZbqmM /ip4/72.177.11.53/tcp/4001/ipfs/QmPxFX8j1zbHNzLgmeScjX7pjKho2EgzGLaiANFTjLUAb4 /ip4/75.112.252.166/tcp/11465/ipfs/QmRWC4hgiM7Tzchz2uLAN6Yt1xWptqZWYPb5AWvv2DeMhp /ip4/78.46.68.56/tcp/53378/ipfs/QmbE9eo6PXuSHAASumNVZBKvPsVpSjgRDEqoMNHJ49cBKz /ip4/78.56.33.225/tcp/4001/ipfs/QmXokcQHHxSCNZgFv28hN7dTzxbLcXpCM1MUDRXa8G9wNK /ip4/79.175.125.102/tcp/58126/ipfs/QmdDA6QfLQ5sRez6Ev15yDCdumvBuYygeNjVZqFef693Gn /ip4/80.167.121.206/tcp/4001/ipfs/QmfFB7ShRaVPEy9Bbr9fu9xG947KCZqhCTw1utBNHBwGK2 /ip4/82.119.233.36/tcp/4001/ipfs/QmY3xH9PWc4NpmupJ9KWE4r1w9XshvW6oGVeHAApuvVU2K /ip4/82.197.194.135/tcp/41271/ipfs/QmQLW2mhJYPmhYmhkA2FZwFGdEXFjnsprB5DfBxCMRdBk9 /ip4/82.227.20.27/tcp/50190/ipfs/QmY8bMNkkNZvxw1pGVi4pqiXeszZnHY9wwr1Qvyv6QmfsE /ip4/84.217.19.85/tcp/62227/ipfs/QmaD38nfW4u97DPHDLz1cYWzhWUYPKrEianJs2dKctutpf /ip4/84.217.19.85/tcp/63787/ipfs/QmXKd1pJxTqTWNgGENcX2daiGLgWRPDDsXJe8eecQCr6Vh /ip4/86.0.212.51/tcp/50000/ipfs/Qmb9ECxYmPL9sc8jRNAwpGhgjEiXVHKb2qfS8jtjN5z7Pp /ip4/88.153.7.190/tcp/17396/ipfs/QmWTyP5FFpykrfocJ14AcQcwnuSdKAnVASWuFbtqCw3RPT /ip4/88.198.52.13/tcp/4001/ipfs/QmNhwcGyu8pyCHzHS9SuVyVNbg8SjpTKyFb72oofvL4Nf5 /ip4/88.99.13.90/tcp/4001/ipfs/QmTCM4KLAF1xG4ri2JBRigmjf8CLwAzkTs6ckCQbHaArR6 /ip4/89.23.224.58/tcp/37305/ipfs/QmWqjusr86LThkYgjAbNMa8gJ55wzVufkcv5E2TFfzYZXu /ip4/89.64.51.138/tcp/47111/ipfs/Qme63idhHJ2awgkdG952iddw5Ta9nrfQB3Bpn83V1Bqgvv /ip4/91.126.106.78/tcp/21076/ipfs/QmdFZQdcLbgjK5uUaJS2EiKMs4d2oke1DdyGoHAKRMcaXk /ip4/92.222.85.0/tcp/4001/ipfs/QmTm7RdPXbvdSwKQdjEcbtm4JKv1VebzJR7RDra3DpiWd7 /ip4/93.11.115.24/tcp/34730/ipfs/QmRztqxTvxvQXWi7JbtTXijzzngpDgVYwQ2YBccVkt7qjn /ip4/93.182.128.2/tcp/39803/ipfs/Qma8oBW3GNWvNbdEzWiNWenrGtF3DhDUBcUrrsTJBiNKJ2 /ip4/95.31.15.24/tcp/4001/ipfs/QmPxgtHFqyAdby5oqLT5UJGMjPFyGHu5zQcpZ1sKYcuX75 /ip4/96.84.144.177/tcp/4001/ipfs/Qma7U9CNhPnfLit2UL88CFKvizFCZ7pnxB38N3Y5WsZwFH

@Kubuxu (Member) commented Apr 17, 2017

Which ipfs version are you running?

@kikoncuo commented Apr 24, 2017

@nezzard What tool are you using in your screenshot? I've seen it many times in the forums but I can't find it anywhere.

@nezzard (Author) commented Apr 24, 2017

@kikoncuo it's a tool from a cloud service similar to Dropbox:
https://disk.yandex.ua/

@nezzard (Author) commented Apr 24, 2017

@Kubuxu The latest at the time.

@kikoncuo commented Apr 24, 2017

@nezzard I meant the tool you took the screenshot with, my bad.

@nezzard (Author) commented Apr 25, 2017

@Kubuxu it's a tool inside the Yandex Disk program.

@cpacia commented Apr 27, 2017

So let me tell you about some tweaks I've made which have helped quite a bit.

  1. I made the DHT query size parameter accessible from the config. Setting it to 5 or 6 speeds things up quite a bit.

  2. I also added some caching to the resolver, so that if it can't find a record on the network (e.g. because it expired), it loads it from the local cache. Each record that is fetched updates the cache. This isn't really speed related, but it provides a slightly better UX, as data remains available after it drops out of the DHT.

  3. Using #2 for certain types of data where slight staleness doesn't matter, like profiles, I load the record from cache and use it to return the profile. Then, in the background, I do the IPNS call to fetch the latest profile and update the cache (a sketch of this pattern follows below). This keeps profile calls nearly instant while being at most slightly out of date.
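
A minimal shell sketch of that cache-then-refresh pattern, assuming one cache file per IPNS name (the paths and names are illustrative, not the actual implementation):

    #!/bin/sh
    # resolve-cached.sh <ipns-name>: serve the cached answer immediately,
    # then refresh the cache in the background (stale-while-revalidate).
    NAME="$1"
    CACHE="$HOME/.ipns-cache/$(printf '%s' "$NAME" | tr '/' '_')"
    mkdir -p "$HOME/.ipns-cache"

    if [ -f "$CACHE" ]; then
        cat "$CACHE"    # return the possibly-stale cached value right away
        # kick off a background refresh for the next caller
        ( ipfs name resolve "$NAME" > "$CACHE.tmp" 2>/dev/null \
            && mv "$CACHE.tmp" "$CACHE" ) &
    else
        # cold cache: block on the network this one time
        ipfs name resolve "$NAME" | tee "$CACHE"
    fi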

@whyrusleeping (Member) commented Apr 28, 2017

We can probably add flags to the ipfs name resolve API that allow selection (per resolve) of the query size parameter, and also a way to say "just give me whatever value you have cached".

Both of those would be simple enough to implement without actually having to change too much.
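
For readers landing here later: knobs along these lines did eventually ship. On recent go-ipfs versions, something like the following bounds the DHT query per resolve (check ipfs name resolve --help on your version, as the exact flag names may differ):

    # request fewer DHT records and cap how long the query may run
    ipfs name resolve --dht-record-count=6 --dht-timeout=10s /ipns/<peer-id>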

@whyrusleeping (Member) commented Apr 28, 2017

Another thing we could do is have a command that returns IPNS results as they come in, and then, when enough have come in to make a decision, says "this is the best one". That way you could start working with the first one you receive, then switch to the right one when it comes in.

@MichaelMure (Contributor) commented May 26, 2017

I'm having some trouble with IPNS as well. I have a Linux box and a Windows box on the same LAN running ipfs 0.4.9, and I can't resolve IPNS addresses published from the other side, even after several minutes. I have 400 peers connected on one side, 250 on the other.

@cpacia are your changes in a branch somewhere? That looks like a very handy addition for my project.

@MichaelMure (Contributor) commented May 26, 2017

Answering my own question: the fork is here: https://github.com/OpenBazaar/go-ipfs

@whyrusleeping any idea how I can debug this issue?

@whyrusleeping (Member) commented May 26, 2017

@MichaelMure you can't resolve at all? Or is it just very slow?

@MichaelMure (Contributor) commented May 26, 2017

Sometimes it just takes a while before it's able to resolve, and once it has resolved once, it works properly. But in this case it didn't resolve at all, even after 30 minutes. It might be another issue, but without a way to find out what's going on inside ipfs, well ...

@nezzard (Author) commented Jun 21, 2017

I think IPNS is in very bad shape for real use.
You can check http://ipfs.artpixel.com.ua/

It takes 15-20 seconds to load.

@hhff commented Jul 1, 2017

I'm also experiencing massive resolution times with IPNS. Same behavior over here: the first resolution can take multiple minutes, then once it's loaded, I can refresh the content in under a second.

If I leave it for a few minutes and then do another refresh, the request cycle repeats the same behavior.

The "cache" for the resolution only appears to stay warm for a short period of time.

@hhff commented Jul 1, 2017

I'm using a CNAME with _dnslink, for what it's worth.

Content is at www.ember-cli-deploy-ipfs.com

@alexandre1985 commented Aug 24, 2017

IPFS is unusable for me. I have the daemon running on both of my computers inside a LAN, with one of them "serving" a file (a video) that the other doesn't have. When I try to access that video from the PC that doesn't have the file, using localhost:8080/ipfs/... in my browser, the video keeps stalling and takes a huge amount of time to load, so huge that I can't watch the video at all.
When I netcat that video and pipe it through mplayer to the other computer, I can watch the stream just fine (see the sketch below).
So this is an ipfs problem, and the performance issues are so severe that, as of today (2017-08-24), they make the technology not worth using.
IPFS isn't delivering what it promised. Very disappointed.
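
For concreteness, the LAN baseline being compared against is presumably something like this (hypothetical host, port, and file names; netcat flags differ between the BSD and traditional variants):

    # on the machine that has the file
    nc -l -p 1234 < video.mp4

    # on the other machine (10.0.0.2 being the sender's LAN address)
    nc 10.0.0.2 1234 | mplayer -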

@whyrusleeping added this to the Ipfs 0.4.12 milestone on Sep 2, 2017

@kesar commented Sep 2, 2017

Very disappointed

You should ask for a refund 👍

@alexandre1985 commented Sep 3, 2017

@kesar I mean this out of love. @jbenet (Juan Benet) says ipfs is going to free us from the backbone, but currently the network's performance is very weak.
I would like ipfs to succeed, but how can it succeed if I can watch a video faster through the backbone than through ipfs when the file is hosted inside my own LAN?
The performance of ipfs in this respect is weak, to put it modestly. You should try this experiment yourself.

@Calmarius commented Oct 8, 2017

It took me more than a minute to resolve a name published by my own computer... And it's not the DNS resolution; it hangs while resolving the actual IPNS entry.

    $ time ipfs resolve /ipns/QmQqR8R9nfFkWYH9P7xNPtAry8tT63miNyZwt121uXsmSU
    /ipfs/QmQunuPzcLp2FiKwMDucJi957SrB8BygKA4C4J4h7VG4M9

    real    1m0.078s
    user    0m0.060s
    sys     0m0.008s

@Stebalien (Member) commented Oct 9, 2017

We're working on fixing some low-hanging fruit in the DHT that should alleviate this: libp2p/go-libp2p-kad-dht#88. You can expect this to appear in a release in a month or so (0.4.12 or 0.4.13).

We're also working on bypassing the DHT for recently accessed IPNS addresses by using pubsub (#4047). However, that will likely remain behind an experimental flag for a while, as our current pubsub implementation is very naive.
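
For anyone who wants to experiment once that lands: in later go-ipfs releases the switch is a daemon flag (experimental, so the name and behavior may change):

    # run both peers with IPNS-over-pubsub enabled
    ipfs daemon --enable-namesys-pubsub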

@phr34k0 commented Oct 14, 2017

@Stebalien Can't wait for the update. Reverting to plain IPFS paths fixes the performance problem; IPNS just takes a million years to load. I'll go back to Zeronet.io for what it's worth, it's 10x better. And damn, #2105 ... 2015. Two years of the same problem. Woah.

@Stebalien (Member) commented Oct 23, 2018

The issue is mostly NATs. There are a bunch of peers in the DHT behind NATs, so we have to try (and fail) to dial a bunch of peers before we can find dialable ones. @vyzo is working on two fixes:

  1. Better/more reliable NAT detection.
  2. Auto-relay (nodes behind NATs will automatically use relays).

On top of this, I'd like to make NATed nodes operate in DHT client-only mode, but that depends on (1).

@novusabeo commented Oct 23, 2018

Would initializing ipfs with the --server profile help with this? I am getting this problem not only when my local node accesses files through IPNS, but also when accessing files directly by their hash, without having published anything to my peer ID for IPNS.

Accessing them from outside my local network, say from a phone, works perfectly (from their standard hash, that is; IPNS is slow regardless). Not to be paranoid, but could this possibly relate to net neutrality and ISPs suppressing P2P systems? I'd imagine the process for accessing the IPFS network through the gateway looks similar. But I get how NAT detection needs improvement; maybe a sort of DDNS would help (IPDNS).

@novusabeo commented Oct 23, 2018

Does setting the timeout manually help?
Also, is there any possibility this relates to the browser, caching, etc.? Browsers certainly aren't used to IPFS.

@Stebalien (Member) commented Oct 24, 2018

Would initializing ipfs as --server profile help with this?

No.

Not to be paranoid but could this possibly relate to Net Neutrality and suppressing P2P systems by ISPs

Possibly? It could also just be a crappy router. IPFS creates a bunch of connections when trying to find content, which can crash/overload many consumer routers.

Also any possibility this could relate to a browser, cacheing, etc.? Browsers are not used to IPFS, certainly.

Unlikely.

@dboreham commented Nov 13, 2018

Forgive me, but after reading the complete thread above I'm actually still not sure what the underlying problem is here. It would be great if someone could write a concise description of it.

FWIW my best guess is this:

When resolving an IPFS name, a node only has to find any matching record from peers. This is because any record found is by definition identical to any other (see: immutable naming).

However, when resolving an IPNS name, a node must attempt to find the freshest version of the record, because newer versions could exist somewhere. So it looks for a quorum, perhaps, or waits a long time, or something, in an attempt to seek out the newest version.

But the above explanation doesn't make much sense given what I see when testing: I made a fresh new IPFS file on my node, then a fresh new IPNS name that points to it. I then queried another node (the Cloudflare gateway) and observed that it finds my IPFS file immediately but never finds my IPNS name. Since the IPNS name was new (and therefore clearly not stale), and the IPNS and IPFS data came from the same origin server, why the difference?

Put another way: if the reason is "the DHT is not great", why is IPNS affected but IPFS unaffected, since they use the same DHT?

@beenotung commented Nov 18, 2018

@dboreham In the case of your test between the Cloudflare node and your local node, the Cloudflare node cannot be sure your node is the only node holding that IPNS record, so it also waits for responses from other nodes.

Instead of having this API return the (probably) latest version of the IPNS record, I suggest having it return an observable/stream that emits each version of the record as it is received from peers. There are other solution proposals out there as well.

@raulk (Member) commented Nov 18, 2018

@dboreham

Put another way: if the reason is "the DHT is not great", why is IPNS affected but IPFS unaffected, since they use the same DHT?

I think this has to do with the fact that IPFS files are content-addressed, so as soon as you find a hit, you know that's the only possible value for the entry. IPNS entries are not content-addressed, so one hit doesn't suffice, and in your example the query might get stuck asking more nodes, which happen to be unreachable, in order to reach a quorum.

If this is the case (and this is purely speculative), IPNS could short-circuit the query when it happens to find the authoritative source for the record (you, who are reachable).

If I remember the DHT algorithm correctly, there is no pathway for that, and it could make sense to introduce one.

@Stebalien (Member) commented Jan 10, 2019

@beenotung

FYI, there's a new --stream flag that you can pass to ipfs name resolve to do exactly this. Also, excellent answer.
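
For example (results are printed as they are found, with later lines superseding earlier ones, so you can act on the first record and switch when a better one arrives):

    ipfs name resolve --stream /ipns/<peer-id>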

@linas commented Oct 19, 2019

I'd like to add some remarks that haven't yet been made. Some of this contradicts earlier information.

  • If I publish using the HTTP/JSON API to the local daemon, instead of the command-line tools, the first publish takes exactly 90s; subsequent publishes take exactly 60s.
  • Publishes with the command-line tools always take 60s exactly.
  • If I resolve locally, the resolution is instant, either with the command-line tool or with API code.
  • Unless I interleave another publish in the meanwhile, in which case the resolve now takes 60 seconds! (But not always; sometimes it's instant!)

It's a bit disheartening that local-to-local IPNS has these timeouts. I can accept delays in publishing to the world, as the DHT decides where to put things, but things of local origin should be authoritative and resolve instantly.

@dboreham commented Oct 21, 2019

@linas FWIW I don't recall seeing these timeouts, so I wonder whether this is something specific to your setup. Might be worthwhile doing some tcpdump/wireshark snooping to see what's going on. Perhaps a DNS-to-nowhere issue, or a connection attempt to some IP that isn't listening, which has to time out (a 30-second timeout?) before proceeding to talk to the listening server?

@linas commented Oct 21, 2019

Hi @dboreham, OK, so I ran tcpdump on eth0 and lo twice: once while the ipfs daemon was up but idle, and a second time while it was in use.

First, the idle system: lots of traffic to/from ports 4001 and 10001. It consists entirely of SYN, RST, and ACK packets, from localhost to itself, IPv4 and IPv6, to/from ports 4001 and 10001, maybe a dozen every second, all the time. No data is transferred in these packets; they're empty. The timing is stochastic and the intervals are irregular. It would appear that something is opening sockets to the ipfs daemon and then resetting and closing them immediately, over and over and over. The only something should be the ipfs daemon itself, so this is ... bizarre ... (In 5 minutes, I managed to capture exactly one packet between my port 5001 and an outside-world host. So my ipfs daemon is interacting with the outside world, just not very much, when sitting idle.)

Next, I see dozens of ICMP port-unreachables emanating from port 4001 toward 169.254.x.x addresses, which tells me that link-local addresses are incorrectly leaking into the IPFS protocols. (I don't use 169.254.x.x; I use 10.x.x.x for the internal LAN, so these addresses are not mine.) There were more than a dozen distinct 169.254 addresses visible.

I see MDNS responses exactly 10 seconds apart, to within a millisecond. The response includes my self-key CID. The response packets always include all local host IPs, including the IPs of the various LXC containers on the host; all have 10.x.x.x entries. They're all marked [Unsolicited: True], so I'm not sure what purpose they're supposed to serve.

Next, I run my IPFS client. It generates a small handful of POST /api/v0/add and POST /api/v0/object/patch/add-link requests. Then it issues the publish, and then exactly nothing happens (excluding the garbage traffic described above) ... nothing happens for exactly 90 seconds, down to the millisecond. The publish was this:

    POST /api/v0/name/publish?stream-channels=true&json=true&encoding=json&arg=QmVKsgztubmcYzC8UMVvmN7duqNMvkQXyUuEvodmWsfVJD&key=xfoobar-key&lifetime=4h&ttl=30s HTTP/1.1\r\n
    Host: localhost:5001\r\n
    User-Agent: cpp-ipfs-api\r\n
    Accept: */*\r\n
    Content-Length: 0\r\n
    \r\n
    [Full request URI: http://localhost:5001/api/v0/name/publish?stream-channels=true&json=true&encoding=json&arg=QmVKsgztubmcYzC8UMVvmN7duqNMvkQXyUuEvodmWsfVJD&key=xfoobar-key&lifetime=4h&ttl=30s]
    [HTTP request 1/1]
    [Response in frame: 1709]

and exactly 90 seconds later:

    Frame 1709: 71 bytes on wire (568 bits), 71 bytes captured (568 bits)
    Ethernet II, Src: 00:00:00:00:00:00, Dst: 00:00:00:00:00:00
    Internet Protocol Version 4, Src: 127.0.0.1, Dst: 127.0.0.1
    Transmission Control Protocol, Src Port: 5001, Dst Port: 35584, Seq: 474, Ack: 251, Len: 5

    etc...

    Hypertext Transfer Protocol
        HTTP/1.1 200 OK\r\n
        Access-Control-Allow-Headers: X-Stream-Output, X-Chunked-Output, X-Content-Length\r\n
        Access-Control-Expose-Headers: X-Stream-Output, X-Chunked-Output, X-Content-Length\r\n
        Content-Type: application/json\r\n
        Server: go-ipfs/0.4.22\r\n
        Trailer: X-Stream-Error\r\n
        Vary: Origin\r\n
        Date: Mon, 21 Oct 2019 17:15:17 GMT\r\n
        Transfer-Encoding: chunked\r\n
        \r\n

That is a totally boring, totally ordinary 200 OK response.

If you read the above, you may spot ttl=30s. I can confirm that the 90-second delay has nothing at all to do with the TTL setting (I tried TTLs of 5, 10, and 115; no change).

There is no traffic on port 53; nothing is making any DNS queries on the system.

So, to summarize: nothing unusual, except for a fairly large number of completely bogus SYN/RST packets that do nothing at all. Apart from that garbage, the ipfs daemon is effectively idle, sending nothing and receiving nothing. The 90-second timer, whatever it is, is inside the ipfs daemon.

@aschmahmann (Contributor) commented Oct 23, 2019

@linas A few thoughts on what you've run into:

Publishes with the command-line tools always take 60s exactly.

I'm pretty sure this is just you hitting the DHT timeout. DHT publishes are taking a while and this is a known issue.

If I resolve locally, the resolution is instant, either with the command-line tool or with API code.
Unless I interleave another publish in the meanwhile, in which case the resolve now takes 60 seconds! (But not always; sometimes it's instant!)

What's happening here is that namesys has an internal cache that keeps published records for 1 minute by default. This means that local resolution would be non-instantaneous if you waited more than a minute between publishing and resolving.

If you know that the latest record is on your machine (e.g. because you published it), you can always resolve quickly by passing --offline to ipfs name resolve (btw, --offline works for publishing as well if all you're interested in is local node operation).
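
For example (both commands stay entirely local, so they return immediately; the key name and path are illustrative):

    # publish without waiting on the DHT
    ipfs name publish --offline --key=mykey /ipfs/QmData

    # resolve using only what the local node already has
    ipfs name resolve --offline /ipns/<peer-id>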

@linas commented Oct 28, 2019

Hi @aschmahmann, continuing the conversation:

@linas A few thoughts on what you've run into:

Publishes with the command-line tools always take 60s exactly.

I'm pretty sure this is just you hitting the DHT timeout. DHT publishes are taking a while and this is a known issue.

Isn't this that issue? Or is there some other issue? Do you have the issue # for it?

Anyway, what you say is symptomatic of the general confusion here. If I both publish and resolve locally, I cannot imagine any valid reason why either operation would stall. By "publishing locally", I mean that I contact the ipfs daemon running on localhost, at http://localhost:5001/api/v0/name/publish. Now, it might take that daemon quite a while to announce the new publish to the whole world, but there's no reason I know of why it could not return immediately (and with a status code of "success"). Why is there a timeout needed for this operation, and how could it ever not be successful?

For resolution, similar remarks apply: if I am trying to resolve a name that I published locally just seconds earlier, I see no reason at all why the ipfs daemon cannot instantly respond to the http://localhost:5001/api/v0/name/resolve request. The daemon knows for a fact that the publish it has is authoritative: it is signed by the local private key. Sure, someone else on the other side of the planet could be using that same private key and performing publishes at the same time as I am, but it would be absurd to wait on that possibility.

What's happening here is that namesys has an internal cache that keeps published records for 1 minute by default. This means that local resolution would be non-instantaneous if you waited more than a minute between publishing and resolving.

I think you are talking about the TTL parameter. Right now the docs are silent about the default value of the TTL; maybe it's one minute. However, if I change the TTL parameter to, say, 5 minutes, then the timeout is still 60 seconds, so this "explanation" cannot be correct. In any case, since my local daemon is the authority for the publish, the TTL should not apply to it. The TTL is intended for everyone else, and never for the authority (because, duh, the authority always knows the answer and never needs to ask anyone for it!)

If you know that the latest record is on your machine (e.g. because you published it), you can always resolve quickly by passing --offline to ipfs name resolve (btw, --offline works for publishing as well if all you're interested in is local node operation).

There is no "offline" parameter in the documentation. Please look at the docs: https://github.com/ipfs/interface-js-ipfs-core/blob/master/SPEC/NAME.md#nameresolve; the only parameters are recursive and nocache.

@aschmahmann (Contributor) commented Oct 28, 2019

@linas

Isn't this that issue?

Yes. For example, see #3860 (comment) above. There are probably some related issues in go-libp2p-kad-dht as well.

{For ipfs name publish} Why is there a timeout needed for this operation, and how could it ever not be successful?

What you are running into here is the difference between a synchronous and an asynchronous network operation. Two use cases:

Synchronous: I'd like to make some data accessible to you. I'd like to be confident that once I'm done running ipfs name publish --key=mykey /ipfs/QmData, I can turn off my machine, and as long as someone else is hosting /ipfs/QmData we're good to go. This could include me paying a third party to host my data without wanting to give them my IPNS publishing keys.

Asynchronous: I'd like to use IPNS as a way to address my content in a distributed way, and will be doing lots of local work. If it takes a while for data to get published, that's fine. If we ever wanted to know whether the data was publicly accessible, we would just ask a friend if they could find it, or do a network query that explicitly ignores our local state.

The API is currently designed for the synchronous use case, which is plenty useful and valid. If you'd like support for the asynchronous use case, that's a new issue you should feel free to open (or even implement).

For resolution, similar remarks apply: ... The daemon knows for a fact that the publish it has is authoritative.

This is not correct and does not fully encompass the current and future use cases of IPNS. While IPNS is single-writer, there are people who share IPNS keys between devices (IPNS keys are not restricted to peer-ID keys; they can be arbitrary asymmetric key pairs). As long as they only edit on one device at a time, they can feel reasonably confident using IPNS to sync, say, a folder between multiple devices. Again, there is a flag that only looks at your local repo which you can use to achieve your goals.

I think you are talking about the TTL parameter ... However, if I change the TTL parameter to, say, 5 minutes, then the timeout is still 60 seconds ... The TTL is intended for everyone else, and never for the authority

I've already addressed why the TTL applies to everyone (unless you want to use --offline). There are options to modify the DHT timeout if you'd like, or to stream the results. Again, if you are only concerned about your own machine, there is an option available for you.

There is no "offline" parameter in the documentation.

It's not in the link you mentioned to js-ipfs-core (this is the go-ipfs repo) which might be a good point. @vasco-santos any idea where the documentation for the flag is?

In go-ipfs the documentation is there, but perhaps it's a bit confusing (if you have suggestions, I'd encourage opening an issue or PR to fix it). The CLI docs at https://docs.ipfs.io/reference/api/cli/#ipfs show that you can pass --offline to any ipfs command.

@linas commented Oct 29, 2019

Ah! Now we are getting somewhere! I will try a multi-part reply, as this is getting long. First: sync vs. async. I claim that sync "never makes sense". Either you are publishing to http://dotcom.com:5001/api/v0/name/publish with a reasonable expectation of dotcom.com staying up 24x7, or you are doing it wrong. If you (some app on your phone) do an IPNS publish and then immediately shut down your laptop/phone, then either the OS will kill -9 the ipfs daemon due to lack of response (and the publish never leaves the device), or you think "wtf, why won't my phone turn off, I've got to buy a new phone" and yank the battery (and the publish never leaves the device). If, by happy circumstance, the phone/laptop stays on long enough, then you never needed the sync publish in the first place. To summarize: sync publish never makes sense.

What other scenarios are there?

@linas commented Oct 29, 2019

Part 2: current and future use cases for IPNS. My use case is this: I want to share keys with dozens or hundreds of others (who are doing compute of some kind and publishing results on ipfs). With the current IPNS design, I would need to share millions, up to 100 million, keys with them. Why such an obscenely large number? Because in my current infrastructure, backed by Postgres, I have an extremely sparse dataset, of which only a few million or tens of millions of entries are non-zero (they follow a Zipfian distribution in size and connectivity). Each of my entries is immutable and thus has a unique content hash, but I need to hang time-varying data off of it (probabilities, counts, sensor readings, etc.), and so I need dozens or hundreds or more processes cooperating to publish the latest readings hanging off the immutable sign-posts that have been set up. This currently works fine on PostgreSQL, where I can hit several thousand publishes per second (which is slower than I'd like, but whatever). I can point you at the unit tests.

With IPNS/IPFS... ugh. Clearly, having to share millions of keys with hundreds of peers seems the opposite of "decentralized". In some ideal world, an enhanced IPNS would resolve:

    (PKI public-key, hash) ==> resolved CID

instead of what it currently does:

    PKI public-key ==> resolved CID

That way, I could share only one key with peers, and whenever those peers needed to look up the CID associated with some "well-known hash", they could just do that.

@aschmahmann (Contributor) commented Oct 29, 2019

sync publish never makes sense ... What other scenarios are there?

There are plenty of possible scenarios, since what you are doing is giving the user an extra piece of information (that the publish completed) that they wouldn't have otherwise. Simple example: if I want to write code in JS but benefit from the features or performance of go-ipfs, I might have my application spin up a go-ipfs daemon and talk to it over the HTTP API. In that case my application would want to know when it is "safe to close".

@linas Your IPNS use case of needing millions of keys seems highly suspicious to me and is not relevant to this issue. If you could post on https://discuss.ipfs.io/ that would be great; the conversation can continue there.

@linas commented Oct 29, 2019

There are plenty of possible scenarios

!? Give one real example. The history of computer science has been the elimination of sync writes, from the 1970s onward. The invention of interrupts, to avoid sync writes of machine status. The invention of caches, to avoid sync writes to DRAM. The invention of DMA, to avoid sync writes by I/O. The invention of register write-back queues. And that's just hardware. The invention of mutex locks. The invention of software message queues. The invention of publish/subscribe. All of these were driven by the need to eliminate the stalls associated with sync writes, and all of them became very popular precisely because they avoid sync execution.

I'm kind of frustrated: sync didn't have to be the default; it could have been async by default. In my code, I have to launch and detach a thread for each IPNS publish, because I can't afford to wait for a 60/90-second timer to pop inside the IPFS code (see the sketch below). This bug has been open for 2.5 years and has accumulated a crust of me-toos. Every discussion forum has basically concluded that "IPNS is broken, let's hunker down and wait until it's fixed". And you are telling me that, no, actually, there is a simple fix, nearly trivial (change the default of an undocumented API flag), and everything will work for everybody? Holy cow! Change the default!

I know what I wrote here sounds hostile, but I don't know how else to put it. You're telling me "it's a feature, not a bug", but you really have to consider that pretty much the rest of the world thinks it's a bug ...
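
The detached-thread workaround mentioned above amounts to fire-and-forget publishing; a minimal shell equivalent (hypothetical key and path) is:

    # don't block the caller on the DHT put; keep the output for later inspection
    ipfs name publish --key=mykey /ipfs/QmData > /tmp/ipns-publish.log 2>&1 &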

@vasco-santos (Member) commented Oct 29, 2019

It's not in the link you mentioned to js-ipfs-core (this is the go-ipfs repo) which might be a good point. @vasco-santos any idea where the documentation for the flag is?

We were missing the documentation for that option; I just created a PR to add it: ipfs/js-ipfs#2569. Thanks for the heads up!

In go-ipfs the documentation is there, but perhaps it's a bit confusing (if you have suggestions I'd encourage opening an issue or PR to fix it). The CLI docs https://docs.ipfs.io/reference/api/cli/#ipfs show that you can pass --offline to any ipfs command.

Regarding the CLI in JS, it should also work with --offline.

@aschmahmann (Contributor) commented Oct 29, 2019

I know what I wrote here sounds hostile

Yes, it does. I think if you actually read through this issue, you will see that the problem you're complaining about is not the problem this issue is about. Additionally, you have already been told how to get around your problem. As you can see, someone has already put up a PR for the documentation improvement (thanks @vasco-santos). If you have further complaints unrelated to this issue, I'd recommend opening a new one; this one is already pretty long.
