ipfs v0.18: "ipfs does not seem to be available after x retries" #1835

Closed
ShadowJonathan opened this issue Jan 23, 2023 · 2 comments · Fixed by #1841
Assignees: hsanjuan
Labels: kind/bug (A bug in existing code, including security flaws) · P0 (Critical: Tackled by core team ASAP)

Comments

@ShadowJonathan

Additional information:

  • OS: Debian 10
  • IPFS Cluster version: v1.0.4
  • Installation method: dist.ipfs.io

Describe the bug:

ipfs-cluster is no longer able to access my ipfs instance after upgrading it to 0.18; it constantly errors with "ipfs does not seem to be available after x retries".

I've captured the traffic of one exchange, and here's what the raw TCP stream looks like (output from Wireshark):

POST /api/v0/id HTTP/1.1
Host: 127.0.0.1:5001
User-Agent: Go-http-client/1.1
Content-Length: 0
Content-Type: 
Accept-Encoding: gzip

HTTP/1.1 200 OK
Access-Control-Allow-Headers: X-Stream-Output, X-Chunked-Output, X-Content-Length
Access-Control-Expose-Headers: X-Stream-Output, X-Chunked-Output, X-Content-Length
Content-Type: application/json
Server: kubo/0.18.0
Trailer: X-Stream-Error
Vary: Origin
Date: Mon, 23 Jan 2023 19:56:34 GMT
Transfer-Encoding: chunked

b3b
{"ID":"[REDACTED]","PublicKey":"[REDACTED]","Addresses":["[REDACTED]"],"AgentVersion":"kubo/0.18.0/675037721-dirty","ProtocolVersion":"ipfs/0.1.0","Protocols":["/ipfs/bitswap","/ipfs/bitswap/1.0.0","/ipfs/bitswap/1.1.0","/ipfs/bitswap/1.2.0","/ipfs/id/1.0.0","/ipfs/id/push/1.0.0","/ipfs/kad/1.0.0","/ipfs/lan/kad/1.0.0","/ipfs/ping/1.0.0","/libp2p/autonat/1.0.0","/libp2p/circuit/relay/0.1.0","/libp2p/circuit/relay/0.2.0/hop","/libp2p/circuit/relay/0.2.0/stop","/libp2p/dcutr","/p2p/id/delta/1.0.0","/x/"]}

0

The error is thrown away here, so I can't report it even with debug logging turned on (which I did enable).
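For reference, the same probe can be reproduced outside of cluster with a few lines of Go (a minimal sketch, not part of the original capture; it assumes the default API address 127.0.0.1:5001):

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Kubo's RPC API only accepts POST requests.
	resp, err := http.Post("http://127.0.0.1:5001/api/v0/id", "", nil)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}

As the capture above shows, the daemon answers this request with a valid JSON body, so the failure happens on the client side after the response is received.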

ShadowJonathan added the kind/bug (A bug in existing code, including security flaws) and need/triage (Needs initial labeling and prioritization) labels on Jan 23, 2023
@hsanjuan (Collaborator)

The error with v0.18.0 seems to be:

failed to parse multiaddr "/ip6/::1/udp/4001/quic-v1/webtransport/certhash/[...]": unknown protocol quic-v1

We will have to fix this for the next release. Disabling quic/webtransport might be a workaround.
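For context, the parse failure can be reproduced directly against go-multiaddr (a minimal sketch, assuming cluster runs the advertised addresses through ma.NewMultiaddr; the address below is illustrative, with the certhash omitted):

package main

import (
	"fmt"

	ma "github.com/multiformats/go-multiaddr"
)

func main() {
	// Shape of the addresses Kubo 0.18 now advertises.
	addr := "/ip6/::1/udp/4001/quic-v1/webtransport"

	if _, err := ma.NewMultiaddr(addr); err != nil {
		// A go-multiaddr release that predates quic-v1 fails here with
		// "unknown protocol quic-v1"; newer releases parse it fine.
		fmt.Println("parse error:", err)
		return
	}
	fmt.Println("parsed OK")
}

As a workaround, the quic-v1 and webtransport entries can presumably be removed from Kubo's Addresses.Swarm configuration until a fixed cluster release is available.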

hsanjuan self-assigned this on Jan 24, 2023
hsanjuan added the P0 (Critical: Tackled by core team ASAP) label and removed the need/triage (Needs initial labeling and prioritization) label on Jan 24, 2023
hsanjuan added this to the Release v1.0.5 milestone on Jan 24, 2023
@ShadowJonathan (Author)

Apologies for not including the addresses in the debug output; I should've just scrubbed the IPs.

hsanjuan added a commit that referenced this issue on Jan 27, 2023:
Fixes: #1835.

If IPFS introduces a new multiaddress type/string that we have not compiled in, we error. This caused issues with the latest ipfs version (which we fixed by upgrading libraries too). This makes cluster a bit more future-proof with upcoming ipfs versions.
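For illustration, a minimal sketch of the more forgiving behaviour the commit message describes (hypothetical helper, not the actual patch): skip addresses the compiled multiaddr library cannot parse instead of failing the whole response.

package main

import (
	"fmt"
	"log"

	ma "github.com/multiformats/go-multiaddr"
)

// parseKnownAddrs keeps only the multiaddrs this binary knows how to parse.
func parseKnownAddrs(raw []string) []ma.Multiaddr {
	var out []ma.Multiaddr
	for _, s := range raw {
		a, err := ma.NewMultiaddr(s)
		if err != nil {
			// Unknown protocol, e.g. one introduced by a future ipfs release: skip it.
			log.Printf("ignoring unparseable multiaddr %q: %s", s, err)
			continue
		}
		out = append(out, a)
	}
	return out
}

func main() {
	addrs := parseKnownAddrs([]string{
		"/ip4/127.0.0.1/tcp/4001",
		"/ip6/::1/udp/4001/quic-v1/webtransport",
	})
	fmt.Println("usable addresses:", addrs)
}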