namesys: IPNS/DNS resolution is very slow #2934

Open
Kubuxu opened this Issue Jul 2, 2016 · 15 comments

@Kubuxu
Member

Kubuxu commented Jul 2, 2016

While checking performance, I discovered that namesys introduces a 400x to 5000x latency increase.
See:
IPFS path:

IPNS path:

I did a hard refresh, so ETags and in-browser caching shouldn't matter.

A complete load takes almost 40x longer under IPNS. I can reproduce these results easily. You can also try it yourself: fs:/ipns/ipfs.io and fs:/ipfs/QmTzQ1JRkWErjk39mryYw2WVaphAZNAREyMchXzYQ7c15n/

@Stebalien

Contributor

Stebalien commented Jul 2, 2016

That's due to the overhead of the DNS lookup and the lack of DNS caching.

Assuming you already have the IPFS blocks cached on your local node, the IPFS lookup doesn't make any network requests. However, assuming you don't have a caching DNS server (e.g., dnsmasq) on your local machine, you have to make a DNS lookup every time you look up /ipns/ipfs.io/.... That's expensive.
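To make the per-request cost concrete: resolving a name like /ipns/ipfs.io means fetching a DNS TXT record (via something like net.LookupTXT) and parsing out a dnslink entry. A minimal sketch of the parsing half — the helper name is illustrative, not the actual namesys/dns.go code:

```go
package main

import (
	"fmt"
	"strings"
)

// parseDNSLink extracts the path from a DNSLink TXT record value,
// e.g. "dnslink=/ipfs/Qm..." -> "/ipfs/Qm...". ok is false when the
// record is not a dnslink entry.
func parseDNSLink(txt string) (path string, ok bool) {
	const prefix = "dnslink="
	if !strings.HasPrefix(txt, prefix) {
		return "", false
	}
	return strings.TrimPrefix(txt, prefix), true
}

func main() {
	// In the real resolver these records would come from a live DNS
	// TXT lookup on every request -- the cost this issue is about.
	records := []string{
		"v=spf1 -all", // unrelated TXT record, skipped
		"dnslink=/ipfs/QmTzQ1JRkWErjk39mryYw2WVaphAZNAREyMchXzYQ7c15n",
	}
	for _, r := range records {
		if p, ok := parseDNSLink(r); ok {
			fmt.Println(p)
		}
	}
}
```

The parse itself is trivial; the network round trip for the TXT lookup is where the latency comes from.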

One solution is to have the IPFS daemon cache DNS results. However, many modern operating systems (Windows, macOS, Ubuntu) come with DNS caching servers, so this may be redundant in most cases (although caching within the IPFS daemon would shave off a few milliseconds at the cost of some complexity).

Try setting up something like dnsmasq and testing again.

@Kubuxu

Member

Kubuxu commented Jul 3, 2016

With dnsmasq it is still slow:

My /etc/resolv.conf:

nameserver 127.0.0.1
@Stebalien

Contributor

Stebalien commented Jul 3, 2016

That's 10x faster than it was before but still 10x slower than it could be. A simple, tiny LRU cache (and some synchronization mechanism to avoid parallel lookups of the same domain) in namesys should help significantly.

Note: For some reason, the DNS resolver in namesys/dns.go states that a cache would need a timeout but it doesn't really; it just needs to record DNS expiration times and check them on lookup. An optimal replacement strategy would replace stale entries first but that's not really important here.

@whyrusleeping

Member

whyrusleeping commented Mar 10, 2017

I noticed this being really slow when playing with a browser extension. It would be really cool to have this done soon.

@whyrusleeping whyrusleeping modified the milestones: Ipfs 0.4.8, Ipfs 0.4.7 Mar 10, 2017

@Kubuxu

Member

Kubuxu commented Mar 11, 2017

I can try doing it soon (TM). The solution is to revive an approach we had some time ago: use TTLs wherever we can, in combination with browser caches and ETags.

@whyrusleeping whyrusleeping modified the milestones: Ipfs 0.4.9, Ipfs 0.4.8 Mar 24, 2017

@whyrusleeping

Member

whyrusleeping commented May 8, 2017

Update: We set up DNS caching on the ipfs.io gateways, and things are a bit faster. Given Go's current APIs, we can't easily get DNS TTLs back from a resolve call, so doing caching inside go-ipfs isn't yet feasible.

@whyrusleeping whyrusleeping modified the milestones: Ipfs 0.4.10, Ipfs 0.4.9 May 8, 2017

@magik6k magik6k modified the milestones: Ipfs 0.4.10, Ipfs 0.4.11 Jul 28, 2017

@whyrusleeping

Member

whyrusleeping commented Aug 28, 2017

@cpacia did some work on making the DNS resolver pluggable. I wonder if we could do something along those lines to add caching internally to IPFS.

@Kubuxu Kubuxu modified the milestones: Ipfs 0.4.12, go-ipfs 0.4.13 Nov 6, 2017

@dirkmc

Contributor

dirkmc commented Feb 8, 2018

@Stebalien @whyrusleeping I'm interested in helping out on this; what's the current state of play?

@whyrusleeping

Member

whyrusleeping commented Feb 8, 2018

@dirkmc DHT query perf (from the other issue) is the biggest one.

Some other things that would help though:

  • Configurable (via command flag) number of records to gather before selecting a value
  • Ability to 'trust' a node where peerID == IPNS (debatable)
  • Benchmarks and metrics (opentracing data for IPNS resolves would be awesome)
  • Experiment more with pubsub ipns and ipns follow
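The first bullet — gathering a configurable number of records before selecting a value — can be sketched roughly as below. This is a hypothetical illustration, not go-ipfs's actual record-selection code: real IPNS selection also validates signatures and record validity, not just sequence numbers.

```go
package main

import "fmt"

// record is a simplified stand-in for an IPNS record fetched from the
// DHT; real records also carry a signature and a validity window.
type record struct {
	value string
	seq   uint64
}

// selectBest gathers up to quorum records from the channel (or stops
// when the channel closes), then returns the one with the highest
// sequence number. quorum would come from the proposed command flag.
func selectBest(records <-chan record, quorum int) (record, bool) {
	var best record
	found := false
	got := 0
	for r := range records {
		if !found || r.seq > best.seq {
			best, found = r, true
		}
		got++
		if got >= quorum {
			break // enough records gathered; stop waiting
		}
	}
	return best, found
}

func main() {
	ch := make(chan record, 3)
	ch <- record{value: "/ipfs/QmOld", seq: 3}
	ch <- record{value: "/ipfs/QmNew", seq: 7}
	ch <- record{value: "/ipfs/QmStale", seq: 5}
	close(ch)

	// With quorum=2 we stop after two records: faster resolution, but
	// a lower quorum risks missing the newest record still in flight.
	best, _ := selectBest(ch, 2)
	fmt.Println(best.value, best.seq)
}
```

The trade-off the flag exposes is exactly this: a smaller quorum returns sooner, a larger one is more likely to see the latest record.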
@makeworld-the-better-one

makeworld-the-better-one commented Feb 13, 2018

@whyrusleeping IPNS still seems to be slow for me; would setting the --ttl flag help?

@whyrusleeping

Member

whyrusleeping commented Feb 13, 2018

@Cole128 sorry, this is a deeper issue than just adding a flag. We're working on it, and will post updates here as we improve things.

@makeworld-the-better-one

makeworld-the-better-one commented Feb 13, 2018

Alright, I'll watch this thread. What will the --ttl flag do?
Edit: I used the --ttl flag, setting it to 5 hours, since I have a cron job that republishes every 4 hours. Things seem to work better now.

@makeworld-the-better-one

makeworld-the-better-one commented Feb 14, 2018

After a day or two, it doesn't actually seem to work better. Timing the command-line resolve shows it's fast, but actually accessing the name in my browser is slow the first time and only speeds up afterwards, even if I do a hard refresh.

@dirkmc

Contributor

dirkmc commented Feb 20, 2018

I created an issue to discuss the suggestion of a command line parameter to specify the number of IPNS records to retrieve from the DHT: #4723

@Kubuxu

Member

Kubuxu commented Feb 23, 2018

I would just like to point out that this issue was initially created about the performance of IPNS->DNS resolution.
