
ipfs {ls||cat} <hash> halts, while gateway.ipfs shows the file right away #3530

Closed
avatar-lavventura opened this issue Dec 21, 2016 · 19 comments


avatar-lavventura commented Dec 21, 2016

Version information:

go-ipfs version: 0.4.4-
Repo version: 4
System version: amd64/linux
Golang version: go1.7

Description:

I am using IPFS together with Ethereum, in a fairly basic way.

In the following example, a client stores an IPFS hash inside an Ethereum contract. The server then reads the stored hash from the same contract. Once the server has the hash, it should be able to fetch the file with ipfs {ls||cat||get} < hash_value >. But when I run ipfs {ls||cat||get} < hash_value >, the command sometimes halts. Since I do all of this inside a script, the halt locks up my script.

In contrast, I can fetch the files through the public gateways with wget (this forces the gateways to cache the content until their garbage collector frees the disk space).

[Q] Should I retrieve files with wget -r "https://ipfs.io/ipfs/< some_hash >" instead, since its connection seems more reliable than ipfs {cat||ls||get} < my_hash >? I am not questioning IPFS itself; I just want to understand what may cause this problem, or where I made a mistake, and find a solution. So far, fetching through the gateway has worked around the halt problem I am facing.

A more detailed explanation follows:

I have a client node and a server node. On both machines I run the following command in the background, as the IPFS tutorials suggest:

ipfs daemon &

On my client machine, I add a file to IPFS:

[~]$ ipfs add hello.c
added < my_hash > hello.c

After that, on my server, I try to retrieve the file:

[~]$ ipfs {cat||ls||get} < my_hash >
//HALTS.

The following error message showed up multiple times:

14:18:59.986 ERROR  dht: no addresses on peer being sent!
					[local:<peer.ID UjQJqD>]
					[sending:<peer.ID aYb9MJ>]
					[remote:<peer.ID Z86ow1>] handlers.go:75

ipfs {cat||get||ls} < my_hash > halts; I wait a long time and nothing happens. Maybe after a few hours it works (the system behaves unreliably in my case). This is strange because the same process used to work without any problem, and then it suddenly stopped working as it should.

On the other hand, I can see the file in my browser right after ipfs add finishes, at https://gateway.ipfs.io/ipfs/< some_hash >, or download it with wget -r "https://ipfs.io/ipfs/< my_hash >".

Please note that if a folder is added with ipfs add -r folder_name, it can be retrieved from the folder's hash with the following command:

wget -nH --cut-dirs=2 --no-parent -r -l 0 -nc "https://ipfs.io/ipfs/< folders_hash >"
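To keep a script from blocking on these halts, the gateway fallback described above can be sketched as a small shell function. This is only a sketch under assumptions: the 30-second timeout, the fetch_ipfs name, and the output-file naming are mine, not from the thread; only ipfs cat and the ipfs.io gateway URL appear in the question itself.

```shell
#!/bin/sh
# Sketch of the gateway fallback described above. Assumptions (not from
# the thread): the 30s timeout, the function name, the output file naming.
fetch_ipfs() {
  cid="$1"
  # try the local daemon first, but give up after 30s instead of hanging
  if timeout 30 ipfs cat "$cid" > "$cid.out" 2>/dev/null; then
    echo "fetched via local daemon"
  else
    # the public gateway keeps its own swarm connections open, so this can
    # succeed even when direct peer dialing is blocked by a NAT/firewall
    wget -q -O "$cid.out" "https://ipfs.io/ipfs/$cid"
    echo "fetched via gateway"
  fi
}
```

The timeout is the important part: without it, a single unreachable provider stalls the whole script, which is exactly the symptom reported here.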

Thank you for your valuable time and help.

@wigy-opensource-developer

@avatar-lavventura Let me guess that both your client and server nodes are on the same LAN behind a firewall? I found some weird behaviour when 2 nodes were running that way in my LAN. Both wanted to punch the same hole in the NAT and only one of them succeeded.

Unfortunately, even if this is your case, I have no idea what the real problem is. It just sounds like stopping one of the daemons might get you better results for now.


avatar-lavventura commented Dec 30, 2016

@wigy-opensource-developer thank you for your response. I am still facing the same problem. My server runs behind a firewall (on a university network, so there has to be one). In my case, if the client is on the same LAN behind the same firewall, there is no halt issue between client and server. But if the client is outside the LAN, it halts as I explained. The problem may arise because of the firewall, but then how can the public gateways still retrieve the data my server added via IPFS?


satra commented Mar 12, 2017

i have a similar issue:

a command like this takes forever on my local osx machine (ipfs 0.4.6)
ipfs ls /ipfs/QmWvQ8fz9XMJoxQHV5BHKmVqYoVNWkckUxQ8LNSgEkdBgD/bin/
(i don't know if it is actually doing something as it doesn't return for a long time - minutes to hours)

but i can see it instantly at:

https://ipfs.io/ipfs/QmWvQ8fz9XMJoxQHV5BHKmVqYoVNWkckUxQ8LNSgEkdBgD/bin

further if i do:

ipfs cat /ipfs/QmWvQ8fz9XMJoxQHV5BHKmVqYoVNWkckUxQ8LNSgEkdBgD/WMParcStatsLUT.txt also hangs for a long time, till i go and click on the file at:

https://ipfs.io/ipfs/QmWvQ8fz9XMJoxQHV5BHKmVqYoVNWkckUxQ8LNSgEkdBgD/

and then it's instantaneous.

it almost seems like i can only get to the peer containing these things if and only if i reach it through the ipfs gateway.

the peer and client are on two different networks.


satra commented Mar 12, 2017

ok in my case - i just learned that incoming port 4001 may be blocked. but in that case how does any data get out? since i am able to retrieve things via ipfs.io


whyrusleeping commented Mar 12, 2017

@satra Data can get out via connections your node makes outwards. In NATed networks, you can always dial out, but dialing in is the tricky part. In this case, your node behind the firewall had made a connection out to the ipfs.io gateway nodes, but not to your other node outside the network.


satra commented Mar 14, 2017

@whyrusleeping - i have since changed the firewall to open port 4001, and i can telnet to this port from a remote machine. on this remote machine i start a daemon and do ipfs ls and it hangs forever.

i removed everything from bootstrap and added only the id of the remote node, hoping that this effectively creates a two-node network.

and then redo ipfs ls and it hangs forever.

is there a way to diagnose what's going on with traffic?


whyrusleeping commented Mar 15, 2017

@satra when you telnet to that port does it print something like /multistream/1.0.0 back? If you see that then you can be sure that the ipfs node on the other side is accessible.

I would verify connectivity between the peers by doing ipfs ping $OTHER_PEER_ID.
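This ping check can be wrapped in a small helper for scripts. A hedged sketch: the check_peer name, the count of 5, and the output wording are my own assumptions; ipfs ping and its "Pong received: time=..." output format appear earlier in this thread.

```shell
#!/bin/sh
# Sketch of a scripted connectivity check around `ipfs ping`. Assumptions
# (not from the thread): the helper name, the count of 5, the wording.
check_peer() {
  peer="$1"
  # count how many pongs came back; the thread shows lines like
  # "Pong received: time=23.43 ms"
  pongs=$(ipfs ping -n 5 "$peer" 2>/dev/null | grep -c "Pong received")
  if [ "$pongs" -gt 0 ]; then
    echo "reachable ($pongs/5 pongs)"
  else
    echo "unreachable"
  fi
}
```

A script could call this before attempting ipfs cat/ls/get and fall back to a gateway fetch when the peer is unreachable.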


satra commented Mar 15, 2017

@whyrusleeping - thank you.

$ ipfs bootstrap
/ip4/18.13.53.53/tcp/4001/ipfs/QmSgC4gsjk3hd49Yosov2CadExkRts9dAdoEEyG2TN3jHu

$ ipfs ping QmSgC4gsjk3hd49Yosov2CadExkRts9dAdoEEyG2TN3jHu
PING QmSgC4gsjk3hd49Yosov2CadExkRts9dAdoEEyG2TN3jHu.
Pong received: time=23.43 ms
Pong received: time=23.57 ms
Pong received: time=12.91 ms
Pong received: time=7.19 ms
Pong received: time=22.03 ms
Pong received: time=15.29 ms
Pong received: time=15.72 ms
^C
00:01:48.015 ERROR commands/h: net/http: request canceled client.go:247
Error: net/http: request canceled

$ ipfs ls /ipfs/QmWvQ8fz9XMJoxQHV5BHKmVqYoVNWkckUxQ8LNSgEkdBgD/bin
(hangs)

$ ipfs version --all
go-ipfs version: 0.4.6-
Repo version: 5
System version: amd64/darwin
Golang version: go1.8


whyrusleeping commented Mar 15, 2017

@satra and that ipfs ls command works on the other node? Seems really odd to me. If that's the case, then this is likely not a connectivity problem.


satra commented Mar 15, 2017

@whyrusleeping - yes, it does work on the other node. it seems it's related to the size of the folder (which doesn't make much sense, because the peer should have been able to pass on all the info).

ipfs ls /ipfs/QmWvQ8fz9XMJoxQHV5BHKmVqYoVNWkckUxQ8LNSgEkdBgD/

should return instantly since it's on the ipfs gateway now.

that will show you the folders by size. try ls with /docs vs /bin.


satra commented Mar 15, 2017

also ipfs refs is instantaneous on any of those folders, but ipfs ls isn't.


whyrusleeping commented Mar 15, 2017

@satra Hrm... fetching is really slow for me, which may explain why it appears to hang. I was able to fetch all the directory entries in 2m45s.

The ipfs ls that hangs should finish instantly if you pass the flag --resolve-type=false.
It should also go much more quickly now, since I've been fetching it on a server of mine.


whyrusleeping commented Mar 15, 2017

@satra The issue here is that ipfs ls by default fetches the first block of every item in a given directory to check its type. This can be rather expensive on large directories, so the --resolve-type=false flag tells ipfs not to fetch the type information.
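In script form, the fast listing is just the same command with the flag added. A minimal sketch, assuming a fast_ls wrapper name of my own; the --resolve-type=false flag is the one described above.

```shell
#!/bin/sh
# Sketch: list a large remote directory without resolving entry types.
# The wrapper name is illustrative; --resolve-type=false is the flag
# described above.
fast_ls() {
  # without type resolution, only the directory node itself is needed,
  # so there are no extra per-entry block fetches over the network
  ipfs ls --resolve-type=false "$1"
}
```

For example: fast_ls /ipfs/QmWvQ8fz9XMJoxQHV5BHKmVqYoVNWkckUxQ8LNSgEkdBgD/bin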


satra commented Mar 15, 2017

thanks for the explanations.

is the type information not cached on the peer with the refs? especially since i've pinned this directory on that peer. does it have to fetch the block?


whyrusleeping commented Mar 15, 2017

The type information is cached on that peer if it has all the blocks already, yes. But your laptop (the one where ipfs ls was hanging) didn't have all the required blocks.

This really makes me think that ipfs ls should default to not getting type info. It makes for a poor UX in times like this.


satra commented Mar 15, 2017

@whyrusleeping - thank you for walking me through this. now i can move onto one of my actual use cases and see if that works.


whyrusleeping commented Mar 15, 2017

@satra no problem! Definitely let us know if you have any other issues


avatar-lavventura commented May 15, 2017

I guess in my case port 4001 was closed to the public. Only nodes on the same network were able to access it; nodes on an external network always hit the halt issue. @satra


whyrusleeping commented Sep 2, 2017

This issue seems to be resolved, please reopen as needed.
