
Update HowTo on using the pool #218

Open
wants to merge 3 commits into main

Conversation

penguinpee

As suggested by @mdavids on Discourse[1], update instructions
on using the pool.

Preferring the 2. pools is motivated by the increased usage
of IPv6 and the yet unresolved PR #176.

Second commit is optional, obviously.

[1] https://community.ntppool.org/t/fyi-removing-server-from-the-pool/2424

  • Update HowTo on using the pool
  • Whitespace (space vs. tabs)

* default to use of the pool directive
* default to use of 2. pools (for IPv6)
* document use of chrony and timedatectl (see the sketch below)
* provide updated samples
* fix dead links
* use tabs for indentation throughout the file
Use space indentation throughout the file.
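
For reference, a minimal sketch of the kind of chrony sample and status checks such an update could document (an illustration of the approach discussed in this PR, not necessarily the exact text added; command output varies by distribution):

=== chrony.conf ===
# 2.pool.ntp.org currently serves both A and AAAA records;
# chronyd uses up to 4 servers from a pool by default.
pool 2.pool.ntp.org iburst
=== chrony.conf ===

# Verify synchronization with chrony:
chronyc sources
chronyc tracking

# Check and enable NTP synchronization via systemd:
timedatectl status
timedatectl set-ntp true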
@penguinpee
Author

Sorry about the closing / re-opening ping-pong. I was hoping GitHub would be smart about using hashes for references, thus allowing me to sneak this PR into a branch on my fork. Didn't work as expected...

@paulgear

There is some crossover of your 2nd commit into your first commit at line 100; I personally think it would be better to keep any whitespace-only changes for a separate PR.

docs/ntppool/en/use.html (outdated)
@penguinpee
Author

There is some crossover of your 2nd commit into your first commit at line 100; I personally think it would be better to keep any whitespace-only changes for a separate PR.

That part was already malformed indentation-wise, not using tabs for indentation as the rest of the file does. But I'm fine with reverting the second commit entirely and leaving it for another PR.

penguinpee added a commit to penguinpee/ntppool that referenced this pull request May 23, 2022
This reverts commit 03859ca.

As suggested in abh#218, leaving this for another PR.
docs/ntppool/en/use.html (outdated)
penguinpee added a commit to penguinpee/ntppool that referenced this pull request May 24, 2022
docs/ntppool/use/sample-legacy-config.html
pool 3.pool.ntp.org iburst
pool 0.pool.ntp.org iburst
pool 1.pool.ntp.org iburst
</pre>
Contributor


There needs to be only one pool directive. With chronyd, 4 specified pools would result in 16 used servers. With ntpd, at least with some versions, I think it would decrease the number of servers, as there is a dummy source added for each pool, and I suspect it would increase the DNS traffic unnecessarily.

For ntpd, there should be a tos maxclock 5 setting to limit the number of used servers to 4.


I agree. And pool pool.ntp.org iburst would be a great choice, if it weren't for the fact that it has no AAAA-record. Hence, I propose pool 2.pool.ntp.org iburst.

Author


You are right. Unless maxsources is used, chronyd will use 4 servers per pool. Smart. 🕶️

Setting tos maxclock 5 is correct when declaring a single pool. According to the documentation of ntpsec (a fork of ntpd), using only a single pool declaration has some limitations; from the comments in the sample configuration I infer that associations are slower to be established.

I haven't used ntpd recently, but with chronyd a single declared pool gets you in sync within a minute.

I'm fine with changing the configuration snippet for ntpd to:

pool 2.pool.ntp.org iburst
tos maxclock 5

I left out the driftfile directive. It's distribution-specific and, I believe, taken care of by most distributions' default configurations.

For chronyd we then need a remark that tos maxclock is not needed.
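
As an illustration (not the committed wording), the two samples could sit side by side along these lines:

=== ntp.conf ===
pool 2.pool.ntp.org iburst
# Limit ntpd to 4 used servers (the default maxclock is 10).
tos maxclock 5
=== ntp.conf ===

=== chrony.conf ===
# No tos maxclock equivalent needed: chronyd uses at most 4 servers
# from a pool by default (the maxsources option).
pool 2.pool.ntp.org iburst
=== chrony.conf ===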


There needs to be only one pool directive. With chronyd, 4 specified pools would result in 16 used servers.

@mlichvar is the number 4 determined by chrony or is it based on the number of addresses returned from a single DNS query of pool.ntp.org? If it's chrony, is that hardcoded or a configurable default value?

Author


It's configurable using maxsources in connection with pool. The default value is 4 and the maximum is 16.
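
For example, in chrony.conf the limit is set per pool directive (the value 8 is just an arbitrary illustration):

# Use up to 8 servers from this pool instead of the default 4.
pool 2.pool.ntp.org iburst maxsources 8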

penguinpee added a commit to penguinpee/ntppool that referenced this pull request May 25, 2022
As pointed out in abh#218, ntpd and chronyd default to using 4 servers per pool.
Implement suggested changes from abh#218

 * Use a single pool directive
 * minor editorial changes
 * Revert "Whitespace (space vs. tabs)"
@penguinpee
Author

Well, there has been no activity on the PR for over a week now. All suggested changes have been integrated. Do I need to request a final review from anybody?

docs/ntppool/en/use.html
docs/ntppool/use/sample-legacy-config.html
@@ -0,0 +1,5 @@
<pre class="code">
pool 2.pool.ntp.org iburst


The problem with your current setup is that it excludes any servers in the 0, 1, and 3 DNS records. In underserved zones, this won't be a problem, but in zones with sufficient capacity, the results returned in 0, 1, and 3 should be a set of servers distinct from those in 2. So using maxsources is the only way to be correct for all cases:

Suggested change
pool 2.pool.ntp.org iburst
pool 0.pool.ntp.org iburst maxsources 1
pool 1.pool.ntp.org iburst maxsources 1
pool 2.pool.ntp.org iburst maxsources 1
pool 3.pool.ntp.org iburst maxsources 1

Contributor


The problem with your current setup is that it excludes any servers in the 0, 1, and 3 DNS records. In underserved zones, this won't be a problem, but in zones with sufficient capacity, the results returned in 0, 1, and 3 should be a set of servers distinct from those in 2.

As I understand it, the split of servers between 0, 1, 2, 3 is randomly changing over time. Even if 2 has fewer than four servers, the client should get four different addresses after a few minutes. Of course, it would be better to use the non-numbered zone if it included both IPv4 and IPv6 servers.

pool 0.pool.ntp.org iburst maxsources 1

This is effectively almost identical to server 0.pool.ntp.org iburst. The only difference is in the initial selection of servers when some are not responding, e.g. if 0 had some IPv6 servers and they were preferred by the client's DNS resolver, but IPv6 connectivity was broken. Probably not worth the change, as long as only 2 has IPv6 servers.
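
For comparison, the two chrony directives being discussed (a sketch):

# Effectively a single source.
server 0.pool.ntp.org iburst
# Almost identical: also a single source; differs only in the initial
# selection when some of the returned addresses are unresponsive.
pool 0.pool.ntp.org iburst maxsources 1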

Author


I agree with @mlichvar: using a pool directive with maxsources 1 effectively turns it into a server directive.

I'm not familiar with the pool distribution logic. But seeing that major distributions ship NTP configurations only using 2.vendor.pool.ntp.org, I don't see any immediate issue in following suit. Besides, these changes somewhat mask the underlying issue documented in #176, which still needs to be solved.


How is using pool with maxsources 1 equivalent to server? If a pool host stops responding, it will be removed and a DNS lookup re-triggered for a new one. If a host configured with server stops responding, it will just be ignored and no attempt will be made to rectify the situation. I still maintain that all pools should be included, not just 2. @abh has suggested previously that the special case of 2.pool.ntp.org with respect to IPv6 will be changing in the future, so better to include all of the pools.

Contributor


The server will be replaced even if specified as server. I'd prefer a single pool to make the configuration simpler and reduce the number of DNS requests. If pool.ntp.org included IPv6 addresses, that would be the best choice. Until that happens, it is 2.pool.ntp.org. It can provide only a quarter of IPv4 addresses at a time, but this assignment into 0,1,2,3 is randomly changing over time, so the client can get any address from the whole set.

Author


@abh has suggested previously that the special case of 2.pool.ntp.org with respect to IPv6 will be changing in the future, so better to include all of the pools.

Well, once that happens, it will be easy to adjust the documentation again. But until that happens, we should stick with 2.pool.ntp.org in support of IPv6. PR #176 is nearing its third anniversary 🎂 but is still waiting for a resolution 😞.


The server will be replaced even if specified as server. I'd prefer a single pool to make the configuration simpler and reduce the number of DNS requests. If pool.ntp.org included IPv6 addresses, that would be the best choice. Until that happens, it is 2.pool.ntp.org. It can provide only a quarter of IPv4 addresses at a time, but this assignment into 0,1,2,3 is randomly changing over time, so the client can get any address from the whole set.

That may be true for chrony, but it is not the case for the ntpd reference implementation. ntpd will never discard a configured "server 0.pool.ntp.org" unless it includes the "preempt" option on the server directive. Every association spun up by the pool directive has "preempt" automatically added. So with ntpd, "pool"-spun associations will be dropped if they do not contribute to the time solution for about 10 polls, while "server" associations are only eligible to be dropped with "preempt" specified. Pool associations will be replaced automatically, while "server ... preempt" associations are not replaced after they're dropped, making them undesirable in practice. I suspect NTPsec hasn't changed that behavior since it forked from the reference implementation at version 4.2.8.

I'd love to get clarification on how chrony behaves with "server ", "server preempt" (if it has the preempt option), and "pool ".

Thanks,
Dave Hart


@hart-NTP Nov 20, 2022


@abh has suggested previously that the special case of 2.pool.ntp.org with respect to IPv6 will be changing in the future, so better to include all of the pools.

Well, once that happens, it will be easy to adjust the documentation again. But until that happens, we should stick with 2.pool.ntp.org in support of IPv6. PR #176 is nearing its third anniversary 🎂 but is still waiting for a resolution 😞.

I respectfully disagree. IMHO configurations suggested by the pool project are likely to live on long after the pool project changes those suggestions. With that in mind, I think the pool project suggestion should be a single pool pool.ntp.org iburst which allows the pool project to decide when to start serving IPv6 addresses more widely, rather than suggesting people use 2.pool.ntp.org's current IPv6 peculiarity. There could be a note that IPv6-only systems should use 2.pool.ntp.org instead for now, but my preference would be to see the primary suggestion remain the unnumbered pool.ntp.org/vendor.pool.ntp.org.

It's also important to note that with the reference implementation (and possibly NTPsec), a restrict ... nopeer restriction can prevent pool associations from working. If restrict default ... nopeer is used, it's critical to also have a similar restrict source directive that does not include nopeer. A restrict source directive causes every association that is spun up (including from server and pool) to get a host-specific automatic restriction which overrides any other restrictions that would otherwise apply to the association's remote address. This was put in place to allow users of the pool directive to target restrictions at pool server IPs which are not known at configuration time. If this is confusing, just fire away and I'll be happy to explain further.

I strongly feel the suggested configuration should include only one pool directive. As noted previously, each additional pool directive requires a corresponding increase in tos maxclock for ntpd, complicating the suggested configuration. It also triggers additional, relatively useless DNS queries, at least for ntpd users. ntpd with a single pool pool.ntp.org iburst synchronizes as fast as the currently suggested 4 server #.pool.ntp.org iburst directives because the implementation immediately does a DNS lookup of the pool hostname, holds on to all IPs returned, and spins up an association with the first IP address. As soon as that server's response is received, another pool association is spun up one second later with the next IP address, as long as tos maxclock hasn't been exceeded. If the list of IP addresses from the DNS query is exhausted, another DNS query is triggered immediately, and when that DNS response comes in, another pool association is spun up a second later. The net result is that up to the lesser of maxclock - 1 and the number of usable pool IPs found for pool.ntp.org (currently 4 for IPv4) are spun up within seconds. With pool.ntp.org using a 150 second DNS TTL, more servers will be spun up within four default 64-second pool prototype association cycles.

I have milder feelings about suggesting a higher tos minclock configuration for ntpd users, but I think it should be considered. Currently, as the docs say, "for legacy purposes" the default is tos minsane 1 but it should really be a larger number that is less than tos minclock (which defaults to 3).

Putting it all together, this is my take on a suggested ntp.conf for ntpd pool use:

=== ntp.conf ===
driftfile /etc/ntp.drift

# Tight restrictions for the public, but loosen them for servers
# we use for time.  Note the lack of nopeer on "restrict source"
# is important, otherwise pool associations will not spin up.
# These restrictions do not allow the public to query via ntpq (noquery)
# or set up event traps used by network monitoring tools to keep tabs
# on remote ntpd instances (notrap). "limited" and "kod" refuse to
# respond to clients that query too often, by default more than once
# every 2 seconds in a burst or more than once per 8 seconds long term.
# Adding kod sends occasional "kiss of death" responses to clients
# exceeding the rate limit, providing no useful time and requesting
# the client to back off their polling interval, which they will if 
# using ntpd and their maxpoll allows.

restrict default nopeer noquery nomodify notrap limited kod
restrict source         noquery nomodify notrap limited kod

# Allow status queries and everything else from localhost and local net.
# If there are hosts used as time sources in the local network, they
# will be subject to the "restrict source" restrictions above, so they
# will not be able to use ntpq with this host.

restrict 127.0.0.1 
restrict ::1
restrict 192.168.0.0 mask 255.255.0.0

# Require at least two sources in reasonable agreement before adjusting
# the clock.  The default minsane is 1 "for legacy purposes."  Lower
# maxclock from the default 10 to a more reasonable number for the pool.
tos minsane 2 maxclock 5

pool pool.ntp.org iburst

=== ntp.conf ===

This is also a reasonable configuration for a non-refclock pool server, perhaps with slightly different tos knob-twisting:

tos minsane 3 minclock 5 maxclock 8

That makes a good pool server which will naturally gravitate to higher-quality sources and is a bit more paranoid about getting consensus between sources before considering itself synched and therefore serving time to clients and adjusting the local clock. You might want to throw in a hardwired server if you like a particular stratum-1 server that you can reach with low jitter.

Cheers,
Dave Hart
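
Putting that pool-server variant into a sketch (the stratum-1 hostname is a purely hypothetical placeholder, not a recommendation):

=== ntp.conf (pool server) ===
tos minsane 3 minclock 5 maxclock 8

pool pool.ntp.org iburst

# Optional hand-picked low-jitter stratum-1 source (placeholder name).
server ntp1.example.com iburst
=== ntp.conf (pool server) ===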

Author


I'd love to get clarification on how chrony behaves with "server ", "server preempt" (if it has the preempt option), and "pool ".

Chrony's server statement has no 'preempt' option. Pool and server behave the same. Chrony expects pools to resolve to multiple addresses and you can specify how many addresses chrony will use from any given pool.

https://chrony.tuxfamily.org/doc/4.3/chrony.conf.html

Author


I respectfully disagree. IMHO configurations suggested by the pool project are likely to live on long after the pool project changes those suggestions.

This PR is about changing the configuration examples on the website. That can easily be adjusted if the pool layout changes and all pools start announcing IPv6 servers. Most distributions do the right thing and ship their own vendor configuration, which, most notably, quite often contains pool 2.vendor.pool.ntp.org ...

So, the issue that 3/4 of the pool is limited to IPv4 is widely known and worked around. It's time to fix it and enable IPv6 on all pools. Until then, the documentation should be updated.

@penguinpee
Author

People are starting to notice that the documentation is no longer following best practices.

Time for a nudge... (@abh)

@hart-NTP

For those looking only at the bottom of this conversation, I've added three replies to existing threads above. I'm not familiar with GitHub conversations on a pull request; can someone tell me if it's possible to quickly see the newest comments/replies instead of having them buried inline with older discussion?

@abh force-pushed the main branch 2 times, most recently from 5bfa803 to e43f7cf on August 12, 2024 22:16