
'Making your own ipfs service' example does not work #2401

Closed
Zogg opened this issue Feb 24, 2016 · 24 comments
Labels
help wanted Seeking public contribution on this issue need/verification This issue needs verification

Comments

@Zogg

Zogg commented Feb 24, 2016

When I run go get to download all the libraries of the example application, an error is thrown:

$ go get                                                                                                                                                                      [22:02:11] 
package code.google.com/p/go.net/context: unable to detect version control system for code.google.com/ path
package gx/ipfs/QmQopLATEYMNg7dVqZRNDfeE2S1yKy8zrRh5xnYiuqeZBn/goprocess: unrecognized import path "gx/ipfs/QmQopLATEYMNg7dVqZRNDfeE2S1yKy8zrRh5xnYiuqeZBn/goprocess" (import path does not begin with hostname)
package gx/ipfs/QmYtzQmUwPFGxjCXctJ8e6GXS8sYfoXy2pdeMbS5SFWqRi/go-multiaddr-net: unrecognized import path "gx/ipfs/QmYtzQmUwPFGxjCXctJ8e6GXS8sYfoXy2pdeMbS5SFWqRi/go-multiaddr-net" (import path does not begin with hostname)
package gx/ipfs/QmYf7ng2hG5XBtJA3tN34DQ2GUN5HNksEw1rLDkmr6vGku/go-multihash: unrecognized import path "gx/ipfs/QmYf7ng2hG5XBtJA3tN34DQ2GUN5HNksEw1rLDkmr6vGku/go-multihash" (import path does not begin with hostname)
package gx/ipfs/QmZNVWh8LLjAavuQ2JXuFmuYH3C11xo988vSgp7UQrTRj1/go-ipfs-util: unrecognized import path "gx/ipfs/QmZNVWh8LLjAavuQ2JXuFmuYH3C11xo988vSgp7UQrTRj1/go-ipfs-util" (import path does not begin with hostname)
package gx/ipfs/QmZy2y8t9zQH2a1b8q2ZSLKp17ATuJoCNxxyMFG5qFExpt/go-net/context: unrecognized import path "gx/ipfs/QmZy2y8t9zQH2a1b8q2ZSLKp17ATuJoCNxxyMFG5qFExpt/go-net/context" (import path does not begin with hostname)
package gx/ipfs/Qmazh5oNUVsDZTs2g59rq8aYQqwpss8tcUWQzor5sCCEuH/go-log: unrecognized import path "gx/ipfs/Qmazh5oNUVsDZTs2g59rq8aYQqwpss8tcUWQzor5sCCEuH/go-log" (import path does not begin with hostname)

<...>
package gx/ipfs/QmUBogf4nUefBjmYjn6jfsfPJRkmDGSeMhNj4usRKq69f4/go-libp2p/p2p/net/swarm: unrecognized import path "gx/ipfs/QmUBogf4nUefBjmYjn6jfsfPJRkmDGSeMhNj4usRKq69f4/go-libp2p/p2p/net/swarm" (import path does not begin with hostname)
package gx/ipfs/QmUBogf4nUefBjmYjn6jfsfPJRkmDGSeMhNj4usRKq69f4/go-libp2p/p2p/net/swarm/addr: unrecognized import path "gx/ipfs/QmUBogf4nUefBjmYjn6jfsfPJRkmDGSeMhNj4usRKq69f4/go-libp2p/p2p/net/swarm/addr" (import path does not begin with hostname)
package gx/ipfs/QmUBogf4nUefBjmYjn6jfsfPJRkmDGSeMhNj4usRKq69f4/go-libp2p/p2p/protocol/ping: unrecognized import path "gx/ipfs/QmUBogf4nUefBjmYjn6jfsfPJRkmDGSeMhNj4usRKq69f4/go-libp2p/p2p/protocol/ping" (import path does not begin with hostname)

@hackergrrl
Contributor

We've added another step in getting go-ipfs set up from source -- please see https://github.com/ipfs/go-ipfs/#build-from-source

Sorry about the hassle and thanks for commenting! We're working on making go get work seamlessly again!
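For readers hitting the same go get failure: the build-from-source flow linked above boiled down to roughly this at the time (a sketch, not authoritative; the repo path is standard, but defer to the linked README for the current steps). The key point is that the make target fetches the gx/ipfs/... dependencies that plain go get cannot resolve.

```shell
# Rough sketch of the build-from-source flow linked above (go-ipfs 0.4.x era).
# `make install` runs the gx dependency fetch that plain `go get` cannot do,
# which is why the gx/ipfs/... import paths fail to resolve on their own.
REPO=github.com/ipfs/go-ipfs
SRC="${GOPATH:-$HOME/go}/src/$REPO"
echo "source tree: $SRC"
# The actual steps (commented out; they need network access and a Go toolchain):
#   go get -d "$REPO"          # fetch sources without building
#   cd "$SRC" && make install  # fetch gx deps, then build and install
```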

@Zogg
Author

Zogg commented Feb 29, 2016

Compiled from source, but upon launching host.go I receive this error:

@arte ➜  host rvm:(system)  go run host.go 
I am peer: QmUBbpfYe3CMdUcpM26nLzeYf6WsFT58TQEk9v5ATJ5KDU
20:09:01.484 ERROR     swarm2: swarm listener accept error: write tcp 127.0.0.2:4001->127.0.0.1:4001: write: broken pipe swarm_listen.go:128

@whyrusleeping
Member

@Zogg that one's normal, and will be downgraded to a warning in the next update of our libp2p dependency. The program shouldn't exit after printing that; it should continue functioning just fine.

@Zogg
Author

Zogg commented Feb 29, 2016

I have host/client running in docker containers, on the same machine, yet client is unable to reach the host.

Host container:

root@4a754ee5a984:/ipfs# ./host 
I am peer: QmUBbpfYe3CMdUcpM26nLzeYf6WsFT58TQEk9v5ATJ5KDU
19:18:22.178 ERROR     swarm2: swarm listener accept error: write tcp 127.0.0.2:4001->127.0.0.1:4001: write: broken pipe swarm_listen.go:128
19:18:25.914 ERROR     swarm2: swarm listener accept error: write tcp 172.17.0.2:4001->172.17.0.1:4001: write: connection reset by peer swarm_listen.go:128
19:18:26.125 ERROR     swarm2: swarm listener accept error: write tcp 172.17.0.2:4001->172.17.0.1:4001: write: broken pipe swarm_listen.go:128
19:18:26.141 ERROR     swarm2: swarm listener accept error: write tcp 172.17.0.2:4001->172.17.0.1:60371: write: broken pipe swarm_listen.go:128
19:18:26.805 ERROR     swarm2: swarm listener accept error: write tcp 172.17.0.2:4001->172.17.0.1:4001: write: broken pipe swarm_listen.go:128
19:18:28.070 ERROR     swarm2: swarm listener accept error: write tcp 172.17.0.2:4001->172.17.0.1:4001: write: connection reset by peer swarm_listen.go:128

Client container:

root@184d4f3edf34:/ipfs# ./node QmUBbpfYe3CMdUcpM26nLzeYf6WsFT58TQEk9v5ATJ5KDU
I am peer <peer.ID QmUBbp> dialing <peer.ID QmUBbp>
19:18:25.924 ERROR     swarm2: swarm listener accept error: write tcp 127.0.0.2:4001->127.0.0.1:4001: write: connection reset by peer swarm_listen.go:128
routing: not found

@whyrusleeping
Member

I'm not sure how that example behaves in containers. The docker networking setup does really weird things to NAT traversal, and might screw things up a bit. Could you confirm whether or not it works when run on two separate machines on the same LAN? I haven't looked at that code much since writing it a year ago.

@guruvan

guruvan commented Mar 24, 2016

I'm having a similar issue (same error messages; I'm just trying to get or pin files, though).
Using go-ipfs 0.4.0-dev, my own guruvan/go-ipfs image.
Running on RancherOS on AWS, I've started 3 nodes via docker.
I put a few large files (with a directory wrap) into ipfs from node1 and then tried to pin or get them from the other 2 nodes. All attempts would fail after a certain amount of data had transferred.

  • node1 is behind an AWS NAT instance; no ports are forwarded to it, but security groups allow traffic to the required ports.
  • node2 and node3 are not behind AWS NAT.
  • Rancher provides an IPsec-managed overlay network between containers (on top of docker networking)

Saw this issue, so I moved node1 to run with --net host in the docker command. This alleviated most of the issue, but I was still unable to complete the transfer.

Switching my Rancher stack to use host networking as well (avoiding docker NAT and the IPsec network) allowed a transfer to complete.

It looks like this should be easy to reproduce if needed.

@guruvan

guruvan commented Mar 24, 2016

OK - so to add to the above:

node2 & node3 are on the same subnet - node3 failed to transfer the data, while node2 succeeded.

I changed node3's config to use only the explicitly desired interface (I used the private IPv4) and restarted the daemon. This ALSO was not successful, so I explicitly added node2 via ipfs swarm connect, using node2's private IPv4. This ALSO stalled out - but it did produce a volume of the above error.

No matter what I do, ipfs emits errors like this:
3/24/2016 11:52:58 AM 18:52:58.348 ERROR swarm2: swarm listener accept error: read tcp 172.18.42.1:4001->10.0.3.23:4001: read: connection reset by peer swarm_listen.go:128

in which 172.18.42.1 is the local docker0 address (RancherOS user-docker). 127.0.0.1, 127.0.0.2, the system-docker address, and the IPsec address mentioned above all appear in this error as well.

Importantly, I've also noted that on my AWS instances go-ipfs sets the swarm address to the private IPv4 ONLY if ONLY the swarm address is set in the config; if the Gateway address is ALSO set to the private IPv4, then the swarm address is set to BOTH the private and public IPv4 addresses.
EDIT: this behavior is similar for my machines behind a NAT instance: with the Gateway address set to the local IPv4, the swarm config added the outside address (but it doesn't with only the swarm address set in the config, as above).

I've spun up a fresh node (node4) and cannot get this data transferred (ipfs get) to it. It appears that all but the one successful node fail on the same blocks. I am unable to reproduce the successful transfer. :(

This is prep for an automated release of data files - it would be much preferable to release via ipfs than via torrents. If I can provide any additional debug info, please let me know.

This is the wrap hash of the data I'm trying to make available via ipfs (and verify that it is truly available to downloaders) QmZz9yywiWN2C4SzGd3tQYLQkUGsyJctn7dGoQ9TTzKMDi

@whyrusleeping
Member

@guruvan we've had a couple of issues with docker NAT stuff. I will have more time to help you debug this weekend, but for now here are some tips and things I would try. Some of this you might already know (and it should serve as help for others with similar issues).

First, note down the peer IDs of all your involved nodes (run ipfs id).

To check what peers a given node is connected to, run ipfs swarm peers and search for the peer IDs at the end of the addresses for the ones you're interested in.

To check connectivity to a given node, I normally start at an ipfs node that I know has good connectivity (my VPS, normally) and run ipfs dht findpeer <PEERID> for the peer you're investigating. This should list all the addresses that the peer is advertising. If the public address is in that list (and you aren't already connected to them), you can run ipfs swarm connect <ADDR>, where ADDR is the entire /ip4/...../ipfs/QmPeerID

If you can successfully connect a node to the node with the data, you should be able to run an ipfs get to grab the data you're interested in.

If you connect and aren't able to get the data, I would check ipfs dht findprovs <CONTENT HASH> and see if the network returns any records indicating who has that content. If the peer that has the data doesn't show up there, then something interesting is wrong (the data was likely added while not connected to the DHT). In that case, I would try re-adding the data on the node that already has it (this will trigger a rebroadcast of the provider records). After that completes, wait a little bit (for the records to propagate) and try running the ipfs get again from the other (non-data-holding) node.

If you can't make a connection from an outside node to your node with the data, the next thing I would try is making a connection from the data node out to other peers, then fetching the data on those other peers. If that works, then the issue lies entirely with NAT traversal not working. Ipfs does require some amount of port forwarding to work on NAT'ed networks (whether manual forwarding, NAT-PMP, or UPnP).
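The steps above, condensed into a checklist (the peer ID, address, and content hash below are placeholders, not values from this thread):

```shell
# Connectivity-debugging checklist distilled from the advice above.
# All three values are placeholders; substitute your own.
PEERID=QmYourPeerIDHere
IP=203.0.113.7                # a public address reported by findpeer
HASH=QmYourContentHashHere
ADDR="/ip4/$IP/tcp/4001/ipfs/$PEERID"
echo "1) ipfs id                       # note each node's peer ID"
echo "2) ipfs swarm peers              # list current connections"
echo "3) ipfs dht findpeer $PEERID"
echo "4) ipfs swarm connect $ADDR"
echo "5) ipfs get $HASH"
echo "6) ipfs dht findprovs $HASH      # who advertises the content?"
```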

@guruvan

guruvan commented Mar 25, 2016

thank you - this definitely helps in debugging -

  • a definite issue is that I'm seeing the NAT (gateway) instance's private IPv4 address in my logs. None of the nodes run on that host. I have forwarded port 4001 through it to the one NATted host.
  • all my nodes have no trouble connecting to either public or private IPv4 addresses. In the case of connecting to the node behind my NAT gateway (which is now forwarded correctly), ipfs swarm connect to either its private IPv4 OR the NAT gateway's port is successful. IOW: full ipfs connectivity between all of my hosts by any legit IP address.
  • logs still appear to include illegitimate addresses:
    • docker bridge addresses
    • IPsec network addresses
    • loopback addresses
    • "fully traversed" NAT gateway addresses

The docker addresses are surprising given the --net host in the docker commands. The loopbacks probably shouldn't be advertised to peers (if that is indeed what's happening). The IPsec overlay addresses are a local thing, but shouldn't be in use now that all the containers are set not to use that network.

Hopefully, by removing Docker networking, I now have sufficient port forwarding in place; only one host is behind the NAT gw. I seem to have made a bit of a mess by starting up with the swarm/API/Gateway addresses all set to bind to all interfaces.

There does seem to be some inconsistency in how go-ipfs picks up the public IPv4 address on AWS (note this isn't a separate interface as it is at Rackspace or DO; it runs off the eth0 interface):

  • as noted above, this apparently requires setting at least Gateway & swarm to the private IPv4 address
  • it may also be necessary to set API to the same to get ipfs to start up with the PUBLIC IPv4 address in the config
    (which seems to be the desired result)

I have successfully run ipfs get and ipfs pin add -r QmZz9yywiWN2C4SzGd3tQYLQkUGsyJctn7dGoQ9TTzKMDi - after much wrangling. I also found I needed to delete (recursively) the local copy of the above hash (and the full dir content); that finally got the last node to get the last 100MB or so.

I'll run some more tests tomorrow/weekend.

@whyrusleeping
Member

@guruvan thanks for the testing! I started an issue to help track these issues and debugging efforts; if you have any other feedback, please post it for us there (your help is very much appreciated!)

#2509

@mitar

mitar commented Mar 28, 2016

I was doing ipfs dht findpeer and at the end I got:

23:31:25.968: error: dial attempt failed: failed to dial <peer.ID eQGvYh>
23:31:25.968: error: dial attempt failed: <peer.ID XKKFFa> --> <peer.ID Sazz6u> dial attempt failed: dial tcp6 [2400:8901::f03c:91ff:fec8:18e6]:4001: connect: no route to host
23:31:25.970: error: dial attempt failed: failed to dial <peer.ID cJsbGG>
23:31:26.264: error: dial attempt failed: failed to dial <peer.ID UxZwXS>
23:31:26.286: error: dial attempt failed: failed to dial <peer.ID SJq4E2>
23:31:26.288: error: dial attempt failed: failed to dial <peer.ID RcU9cf>
23:31:26.304: error: dial attempt failed: failed to dial <peer.ID emxBfq>
23:31:26.306: error: dial attempt failed: <peer.ID XKKFFa> --> <peer.ID a7Jtom> dial attempt failed: context deadline exceeded
23:31:26.306: error: dial attempt failed: <peer.ID XKKFFa> --> <peer.ID dTb5bQ> dial attempt failed: dial tcp6 [fc36:74d1:a0f6:a7e2:5c50:8c26:8b91:d8e2]:4001: connect: no route to host
23:31:26.306: error: dial attempt failed: <peer.ID XKKFFa> --> <peer.ID QkPyDs> dial attempt failed: context deadline exceeded
23:31:26.306: error: dial attempt failed: <peer.ID XKKFFa> --> <peer.ID ZjQrYx> dial attempt failed: dial tcp6 [2600:3c02::f03c:91ff:fe26:4e7d]:4001: connect: no route to host
23:31:26.306: error: routing: not found

Not sure if helpful.

(I do not have IPv6 connectivity.)

@guruvan

guruvan commented Mar 29, 2016

@whyrusleeping - Thanks - I think I have it sort of working, but it's not clear what's going on.

I will put more notes in #2509

@whyrusleeping
Member

@guruvan take a look at #2509 I think I have a solution

@guruvan

guruvan commented Mar 29, 2016

I will test the solution in there later tonight :))

@whyrusleeping whyrusleeping added the help wanted Seeking public contribution on this issue label Aug 23, 2016
@whyrusleeping
Member

Can someone verify whether the example in question still works?

@Kubuxu Kubuxu added the need/verification This issue needs verification label Aug 24, 2016
@inetic

inetic commented Mar 23, 2017

I'm trying the example right now (link for reference) but having trouble with compilation, as it seems two of the dependencies have changed.

Changing
"code.google.com/p/go.net/context" to "context"

and

"github.com/ipfs/go-ipfs/p2p/peer" to "gx/ipfs/QmWUswjn261LSyVxWAEpMVtPdy8zmKBJJfBpG3Qdpa8ZsE/go-libp2p-peer"

seem to have fixed the compilation problem. I have not yet tried to run the host and client on two different machines, but running them on the same machine, with each using a different ipfs repo, doesn't seem to work out of the box.

I am peer <peer.ID dS8sgN> dialing <peer.ID T8qN83>
dial attempt failed: context deadline exceeded

UPDATE: The same failure happens when two PCs in different parts of the world try it.

@hsanjuan
Contributor

For reference, the source for that example seems to be https://github.com/ipfs/examples/tree/master/examples/api/service. I'll give it a quick try too.

@hsanjuan
Contributor

@inetic after getting it to build, the example works for me among two computers on the local network (taking the code from the repo above).

@hsanjuan
Contributor

@inetic when running locally, are you using different configurations for ipfs so listening ports don't conflict? When running on two different PCs, it may be that the two peers don't manage to discover each other, but I'm not sure. Discovery on local networks takes advantage of mDNS, so it should be faster.

@inetic

inetic commented Mar 24, 2017

@hsanjuan

when running locally, are you using different configurations for ipfs so listening ports don't conflict?

Good point, I wasn't. Once I changed the ports in <repo>/config/Addresses/Swarm I got it working locally.
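For anyone else running two nodes on one machine: the second repo's Addresses section needs distinct ports, e.g. something like this (the port numbers here are arbitrary examples, not values from this thread):

```json
{
  "Addresses": {
    "Swarm": [
      "/ip4/0.0.0.0/tcp/4002"
    ],
    "API": "/ip4/127.0.0.1/tcp/5002",
    "Gateway": "/ip4/127.0.0.1/tcp/8081"
  }
}
```

with IPFS_PATH pointed at that second repo when starting its daemon.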

We also managed to connect two distant PCs; the failure in my earlier comment happened because one PC was behind a proxy and running Tor.

@minxinping0105

@hsanjuan
When I run the example https://github.com/ipfs/examples/tree/master/examples/api/service, I get the error 'cannot find package "github.com/ipfs/go-ipfs/core/corenet"', and I cannot find the corenet package in version 0.4.10. How can I solve this?

@magik6k
Member

magik6k commented Sep 5, 2017

@mib-kd743naq this API has been replaced with ipfs p2p - see https://github.com/ipfs/go-ipfs/blob/master/docs/experimental-features.md#ipfs-p2p. For feedback use #3994
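For anyone migrating from corenet, the experimental ipfs p2p usage looked roughly like this at the time. The protocol name and peer ID below are placeholders, and the subcommand names have changed between releases, so treat this as illustrative and defer to the linked doc:

```shell
# Illustrative `ipfs p2p` usage replacing the old corenet API.
# PROTO and PEER are placeholders; the subcommands follow the 2017-era
# experimental-features doc and may differ in newer releases.
PROTO=p2p-test                 # arbitrary application protocol name
PEER=QmServicePeerIDHere       # placeholder service peer ID
echo "ipfs config --json Experimental.Libp2pStreamMounting true"
echo "ipfs p2p listener open $PROTO /ip4/127.0.0.1/tcp/10101   # service side"
echo "ipfs p2p stream dial $PEER $PROTO /ip4/127.0.0.1/tcp/10102  # client side"
```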

@mib-kd743naq
Contributor

@magik6k this is my first mention of this thread... perhaps you highlighted the wrong person?

@magik6k
Member

magik6k commented Sep 8, 2017

Oh, sorry about that.
