[ntpsec commit] Merge the manycast theory in ntp.conf.txt back to the Server Discovery page.

Eric S. Raymond esr at ntpsec.org
Mon Oct 12 12:17:47 UTC 2015


Module:    ntpsec
Branch:    master
Commit:    f909873f570cb10785a04d33adb47c28130ea7c5
Changeset: http://git.ntpsec.org/ntpsec/commit/?id=f909873f570cb10785a04d33adb47c28130ea7c5

Author:    Eric S. Raymond <esr at thyrsus.com>
Date:      Mon Oct 12 08:16:50 2015 -0400

Merge the manycast theory in ntp.conf.txt back to the Server Discovery page.

---

 docs/discover.txt | 195 +++++++++++++++++++++++++++++++++++++++++++++++-------
 ntpd/ntp.conf.txt | 192 +----------------------------------------------------
 2 files changed, 175 insertions(+), 212 deletions(-)

diff --git a/docs/discover.txt b/docs/discover.txt
index 40d6416..8cd40e6 100644
--- a/docs/discover.txt
+++ b/docs/discover.txt
@@ -159,35 +159,52 @@ link:autokey.html[Autokey Public Key Authentication] page.
 [[mcst]]
 == Manycast Scheme ==
 
-Manycast is an automatic server discovery and configuration paradigm new
-to NTPv4. It is intended as a means for a client to troll the nearby
-network neighborhood to find cooperating servers, validate them using
-cryptographic means and evaluate their time values with respect to other
-servers that might be lurking in the vicinity. It uses the grab-n'-drop
-paradigm with the additional feature that active means are used to grab
-additional servers should the number of associations fall below the
-`maxclock` option of the `tos` command.
+Manycast is an automatic server discovery and configuration paradigm
+new to NTPv4. It is intended as a means for a client to troll the
+nearby network neighborhood to find cooperating servers, validate them
+using cryptographic means and evaluate their time values with respect
+to other servers that might be lurking in the vicinity. It uses the
+grab-n'-drop paradigm with the additional feature that active means
+are used to grab additional servers should the number of associations
+fall below the `maxclock` option of the `tos` command. The intended
+result is that each manycast client mobilizes client associations with
+some number of the "best" of the nearby manycast servers, yet
+automatically reconfigures to sustain this number of servers should
+one or another fail.
 
 The manycast paradigm is not the anycast paradigm described in RFC-1546,
 which is designed to find a single server from a clique of servers
 providing the same service. The manycast paradigm is designed to find a
 plurality of redundant servers satisfying defined optimality criteria.
 
-A manycast client is configured using the `manycastclient` configuration
-command, which is similar to the `server` configuration command. It
-sends ordinary client mode messages, but with a broadcast address rather
-than a unicast address and sends only if less than `maxclock`
-associations remain and then only at the minimum feasible rate and
-minimum feasible time-to-live (TTL) hops. The polling strategy is
-designed to reduce as much as possible the volume of broadcast messages
-and the effects of implosion due to near-simultaneous arrival of
-manycast server messages. There can be as many manycast client
-associations as different addresses, each one serving as a template for
-future unicast client/server associations.
+Manycasting can be used with either symmetric key or public key
+cryptography. The public key infrastructure (PKI) offers the best
+protection against compromised keys and is generally considered
+stronger, at least with relatively large key sizes. It is implemented
+using the Autokey protocol and the OpenSSL cryptographic library
+available from _http://www.openssl.org/_. The library can be used
+with other NTPv4 modes as well and is highly recommended, especially for
+broadcast modes.
+
+A manycast client is configured using the `manycastclient`
+configuration command, which is similar to the `server` configuration
+command but with a multicast (IPv4 class _D_ or IPv6 prefix _FF_)
+group address. The IANA has designated IPv4 address 224.0.1.1 and IPv6
+address FF05::101 (site local) for NTP.
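+
+As an illustrative sketch (the key ID here is arbitrary and assumes a
+matching entry in the keys file):
+
+--------------------------------------------------
+# manycast client: solicit servers on the IANA IPv4 group address
+manycastclient 224.0.1.1 key 42
+--------------------------------------------------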
+
+The client sends ordinary client mode messages, but to one of
+these broadcast addresses rather than a unicast address, and sends
+only if fewer than `maxclock` associations remain and then only at the
+minimum feasible rate and minimum feasible time-to-live (TTL)
+hops. The polling strategy is designed to reduce as much as possible
+the volume of broadcast messages and the effects of implosion due to
+near-simultaneous arrival of manycast server messages. There can be as
+many manycast client associations as different addresses, each one
+serving as a template for future unicast client/server associations.
 
 A manycast server is configured using the `manycastserver` command,
 which listens on the specified broadcast address for manycast client
-messages. If a manycast server is in scope of the current TTL and is
+messages.  If a manycast server is in scope of the current TTL and is
 itself synchronized to a valid source and operating at a stratum level
 equal to or lower than the manycast client, it replies with an ordinary
 unicast server message.
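+
+A matching server-side sketch (same group address as the client
+example above; the choice is illustrative):
+
+--------------------------------------------------
+# manycast server: listen for manycast client messages on this group
+manycastserver 224.0.1.1
+--------------------------------------------------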
@@ -197,17 +214,149 @@ client association according to the matching manycast client template.
 This requires the server to be cryptographically authenticated and the
 server stratum to be less than or equal to the client stratum.
 
+Then, the client polls the server at its unicast address in
+burst mode in order to reliably set the host clock and validate the
+source. This normally results in a volley of eight client/server exchanges at 2-s
+intervals during which both the synchronization and cryptographic
+protocols run concurrently. Following the volley, the client runs the
+NTP intersection and clustering algorithms, which act to discard all but
+the "best" associations according to stratum and synchronization
+distance. The surviving associations then continue in ordinary
+client/server mode.
+
+The manycast client polling strategy is designed to reduce as much as
+possible the volume of manycast client messages and the effects of
+implosion due to near-simultaneous arrival of manycast server messages.
+The strategy is determined by the _manycastclient_, _tos_ and _ttl_
+configuration commands. The manycast poll interval is normally eight
+times the system poll interval, which starts out at the _minpoll_ value
+specified in the _manycastclient_ command and, under normal
+circumstances, increments to the _maxpoll_ value specified in this
+command. Initially, the TTL is set at the minimum hops specified by the
+_ttl_ command. At each retransmission the TTL is increased until reaching
+the maximum hops specified by this command or a sufficient number of client
+associations have been found. Further retransmissions use the same TTL.
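+
+For instance, the poll and TTL schedule might be shaped as follows
+(all values are illustrative, though the TTL list matches the
+defaults described below and _maxpoll_ 12 matches the later
+recommendation):
+
+--------------------------------------------------
+manycastclient 224.0.1.1 key 42 minpoll 6 maxpoll 12
+ttl 31 63 95 127    # expanding ring: minimum hops first, maximum last
+--------------------------------------------------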
+
+The quality and reliability of the suite of associations discovered by
+the manycast client is determined by the NTP mitigation algorithms and
+the _minclock_ and _minsane_ values specified in the `tos` configuration
+command. At least _minsane_ candidate servers must be available and the
+mitigation algorithms must produce at least _minclock_ survivors in order to
+synchronize the clock. Byzantine agreement principles require at least
+four candidates in order to correctly discard a single falseticker. For
+legacy purposes, _minsane_ defaults to 1 and _minclock_ defaults to 3.
+For manycast service _minsane_ should be explicitly set to 4, assuming
+at least that number of servers are available.
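+
+For example, to insist on the four candidates needed for Byzantine
+agreement while keeping the default _minclock_:
+
+--------------------------------------------------
+tos minclock 3 minsane 4
+--------------------------------------------------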
+
+If at least _minclock_ servers are found, the manycast poll interval is
+immediately set to eight times _maxpoll_. If fewer than _minclock_
+servers are found when the TTL has reached the maximum hops, the
+manycast poll interval is doubled. For each transmission after that, the
+poll interval is doubled again until reaching the maximum of eight times
+_maxpoll_. Further transmissions use the same poll interval and TTL
+values. Note that while all this is going on, each client/server
+association found is operating normally at the system poll interval.
+
+Administratively scoped multicast boundaries are normally specified by
+the network router configuration and, in the case of IPv6, the link/site
+scope prefix. By default, the increment for TTL hops is 32 starting from
+31; however, the _ttl_ configuration command can be used to modify the
+values to match the scope rules.
+
+It is often useful to narrow the range of acceptable servers which can
+be found by manycast client associations. Because manycast servers
+respond only when the client stratum is equal to or greater than the
+server stratum, primary (stratum 1) servers will find only primary
+servers in TTL range, which is probably the most common objective.
+However, unless configured otherwise, all manycast clients in TTL range
+will eventually find all primary servers in TTL range, which is probably
+not the most common objective in large networks. The `tos` command can
+be used to modify this behavior. Servers with stratum below _floor_ or
+above _ceiling_ specified in the `tos` command are strongly discouraged
+during the selection process; however, these servers may be temporarily
+accepted if the number of servers within TTL range is less than
+_minclock_.
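+
+For instance, to restrict acceptable servers to strata 2 through 4
+(an illustrative choice of bounds):
+
+--------------------------------------------------
+tos floor 2 ceiling 4
+--------------------------------------------------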
+
+The above actions occur for each manycast client message, which repeats
+at the designated poll interval. However, once the ephemeral client
+association is mobilized, subsequent manycast server replies are
+discarded, since that would result in a duplicate association. If during
+a poll interval the number of client associations falls below
+_minclock_, all manycast client prototype associations are reset to the
+initial poll interval and TTL hops and operation resumes from the
+beginning. It is important to avoid frequent manycast client messages,
+since each one requires all manycast servers in TTL range to respond.
+The result could well be an implosion, either minor or major, depending
+on the number of servers in range. The recommended value for _maxpoll_
+is 12 (4,096 s).
+
 It is possible and frequently useful to configure a host as both
 manycast client and manycast server. A number of hosts configured this
 way and sharing a common multicast group address will automatically
 organize themselves in an optimum configuration based on stratum and
-synchronization distance.
-
-The use of cryptograpic authentication is always a good idea in any
+synchronization distance. For example, consider an NTP subnet of two
+primary servers and a hundred or more dependent clients. With two
+exceptions, all servers and clients have identical configuration files
+including both `manycastclient` and `manycastserver` commands using,
+for instance, multicast group address 239.1.1.1. The two exceptions
+are the primary servers, whose configuration files must also include
+commands for a primary reference source such as a GPS receiver.
+
+The remaining configuration files for all secondary servers and clients
+have the same contents, except for the `tos` command, which is specific
+to each stratum level. For stratum 1 and stratum 2 servers, that
+command is not necessary. For stratum 3 and above servers the _floor_
+value is set to the intended stratum number. Thus, all stratum 3
+configuration files are identical, all stratum 4 files are identical and
+so forth.
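+
+A sketch of such a configuration (group address from the example
+above; the key ID is arbitrary, and only the `tos` line varies by
+stratum):
+
+--------------------------------------------------
+# common to every host in the subnet
+manycastclient 239.1.1.1 key 42
+manycastserver 239.1.1.1
+# stratum-3 hosts only; stratum 4 would use "tos floor 4", and so on
+tos floor 3
+--------------------------------------------------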
+
+Once operations have stabilized in this scenario, the primary servers
+will find the primary reference source and each other, since they both
+operate at the same stratum (1), but not any secondary server or
+client, since these operate at a higher stratum. The secondary servers
+will find the servers at the same stratum level. If one of the primary
+servers loses its GPS receiver, it will continue to operate as a client
+and other clients will time out the corresponding association and
+re-associate accordingly.
+
+Some administrators prefer to avoid running {ntpdman} continuously and
+run {ntpdman} `-q` as a cron job. In that case the servers must be
+configured in advance and the program fails if none are available when
+the cron job runs. A really slick application of manycast is with
+{ntpd} `-q`. The program wakes up, scans the local landscape looking
+for the usual suspects, selects the best from among the rascals, sets
+the clock and then departs. Servers do not have to be configured in
+advance and all clients throughout the network can have the same
+configuration file.
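+
+A sketch of such a cron job (the schedule and installation path are
+illustrative):
+
+--------------------------------------------------
+# set the clock once at 04:00 daily, then exit
+0 4 * * * /usr/sbin/ntpd -q
+--------------------------------------------------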
+
+The use of cryptographic authentication is always a good idea in any
 server discovery scheme. Both symmetric key and public key cryptography
 can be used in the same scenarios as described above for the
 broadcast/multicast scheme.
 
+//=== Manycast Interactions with Autokey ===
+//
+//Each time a manycast client sends a client mode packet to a multicast
+//group address, all manycast servers in scope generate a reply including
+//the host name and status word. The manycast clients then run the Autokey
+//protocol, which collects and verifies all certificates involved.
+//Following the burst interval all but three survivors are cast off, but
+//the certificates remain in the local cache. It often happens that
+//several complete signing trails from the client to the primary servers
+//are collected in this way.
+//
+//About once an hour or less often if the poll interval exceeds this, the
+//client regenerates the Autokey key list. This is in general transparent
+//in client/server mode. However, about once per day the server private
+//value used to generate cookies is refreshed along with all manycast
+//client associations. In this case all cryptographic values including
+//certificates is refreshed. If a new certificate has been generated since
+//the last refresh epoch, it will automatically revoke all prior
+//certificates that happen to be in the certificate cache. At the same
+//time, the manycast scheme starts all over from the beginning and the
+//expanding ring shrinks to the minimum and increments from there while
+//collecting all servers in scope.
+
 [[pool]]
 == Server Pool Scheme ==
 
diff --git a/ntpd/ntp.conf.txt b/ntpd/ntp.conf.txt
index 32a6110..1042218 100644
--- a/ntpd/ntp.conf.txt
+++ b/ntpd/ntp.conf.txt
@@ -178,194 +178,8 @@ include::../docs/access-commands.txt[]
 
 === Manycasting ===
 
-Manycasting is a automatic discovery and configuration paradigm new to
-NTPv4. It is intended as a means for a multicast client to troll the
-nearby network neighborhood to find cooperating manycast servers,
-validate them using cryptographic means and evaluate their time values
-with respect to other servers that might be lurking in the vicinity. The
-intended result is that each manycast client mobilizes client
-associations with some number of the "best" of the nearby manycast
-servers, yet automatically reconfigures to sustain this number of
-servers should one or another fail.
-
-Note that the manycasting paradigm does not coincide with the anycast
-paradigm described in RFC-1546, which is designed to find a single
-server from a clique of servers providing the same service. The manycast
-paradigm is designed to find a plurality of redundant servers satisfying
-defined optimality criteria.
-
-Manycasting can be used with either symmetric key or public key
-cryptography. The public key infrastructure (PKI) offers the best
-protection against compromised keys and is generally considered
-stronger, at least with relatively large key sizes. It is implemented
-using the Autokey protocol and the OpenSSL cryptographic library
-available from _http://www.openssl.org/_. The library can also be used
-with other NTPv4 modes as well and is highly recommended, especially for
-broadcast modes.
-
-A persistent manycast client association is configured using the
-manycastclient command, which is similar to the server command but with
-a multicast (IPv4 class _D_ or IPv6 prefix _FF_) group address. The IANA
-has designated IPv4 address 224.1.1.1 and IPv6 address FF05::101 (site
-local) for NTP. When more servers are needed, it broadcasts manycast
-client messages to this address at the minimum feasible rate and minimum
-feasible time-to-live (TTL) hops, depending on how many servers have
-already been found. There can be as many manycast client associations as
-different group address, each one serving as a template for a future
-ephemeral unicast client/server association.
-
-Manycast servers configured with the _manycastserver_ command listen on
-the specified group address for manycast client messages. Note the
-distinction between manycast client, which actively broadcasts messages,
-and manycast server, which passively responds to them. If a manycast
-server is in scope of the current TTL and is itself synchronized to a
-valid source and operating at a stratum level equal to or lower than the
-manycast client, it replies to the manycast client message with an
-ordinary unicast server message.
-
-The manycast client receiving this message mobilizes an ephemeral
-client/server association according to the matching manycast client
-template, but only if cryptographically authenticated and the server
-stratum is less than or equal to the client stratum. Authentication is
-explicitly required and either symmetric key or public key (Autokey) can
-be used. Then, the client polls the server at its unicast address in
-burst mode in order to reliably set the host clock and validate the
-source. This normally results in a volley of eight client/server at 2-s
-intervals during which both the synchronization and cryptographic
-protocols run concurrently. Following the volley, the client runs the
-NTP intersection and clustering algorithms, which act to discard all but
-the "best" associations according to stratum and synchronization
-distance. The surviving associations then continue in ordinary
-client/server mode.
-
-The manycast client polling strategy is designed to reduce as much as
-possible the volume of manycast client messages and the effects of
-implosion due to near-simultaneous arrival of manycast server messages.
-The strategy is determined by the _manycastclient_, _tos_ and _ttl_
-configuration commands. The manycast poll interval is normally eight
-times the system poll interval, which starts out at the _minpoll_ value
-specified in the _manycastclient_, command and, under normal
-circumstances, increments to the _maxpolll_ value specified in this
-command. Initially, the TTL is set at the minimum hops specified by the
-ttl command. At each retransmission the TTL is increased until reaching
-the maximum hops specified by this command or a sufficient number of client
-associations have been found. Further retransmissions use the same TTL.
-
-The quality and reliability of the suite of associations discovered by
-the manycast client is determined by the NTP mitigation algorithms and
-the _minclock_ and _minsane_ values specified in the `tos` configuration
-command. At least _minsane_ candidate servers must be available and the
-mitigation algorithms produce at least _minclock_ survivors in order to
-synchronize the clock. Byzantine agreement principles require at least
-four candidates in order to correctly discard a single falseticker. For
-legacy purposes, _minsane_ defaults to 1 and _minclock_ defaults to 3.
-For manycast service _minsane_ should be explicitly set to 4, assuming
-at least that number of servers are available.
-
-If at least _minclock_ servers are found, the manycast poll interval is
-immediately set to eight times _maxpoll_. If less than _minclock_
-servers are found when the TTL has reached the maximum hops, the
-manycast poll interval is doubled. For each transmission after that, the
-poll interval is doubled again until reaching the maximum of eight times
-_maxpoll_. Further transmissions use the same poll interval and TTL
-values. Note that while all this is going on, each client/server
-association found is operating normally at the system poll interval.
-
-Administratively scoped multicast boundaries are normally specified by
-the network router configuration and, in the case of IPv6, the link/site
-scope prefix. By default, the increment for TTL hops is 32 starting from
-31; however, the _ttl_ configuration command can be used to modify the
-values to match the scope rules.
-
-It is often useful to narrow the range of acceptable servers which can
-be found by manycast client associations. Because manycast servers
-respond only when the client stratum is equal to or greater than the
-server stratum, primary (stratum 1) servers will find only primary
-servers in TTL range, which is probably the most common objective.
-However, unless configured otherwise, all manycast clients in TTL range
-will eventually find all primary servers in TTL range, which is probably
-not the most common objective in large networks. The `tos` command can
-be used to modify this behavior. Servers with stratum below _floor_ or
-above _ceiling_ specified in the `tos` command are strongly discouraged
-during the selection process; however, these servers may be temporally
-accepted if the number of servers within TTL range is less than
-_minclock_.
-
-The above actions occur for each manycast client message, which repeats
-at the designated poll interval. However, once the ephemeral client
-association is mobilized, subsequent manycast server replies are
-discarded, since that would result in a duplicate association. If during
-a poll interval the number of client associations falls below
-_minclock_, all manycast client prototype associations are reset to the
-initial poll interval and TTL hops and operation resumes from the
-beginning. It is important to avoid frequent manycast client messages,
-since each one requires all manycast servers in TTL range to respond.
-The result could well be an implosion, either minor or major, depending
-on the number of servers in range. The recommended value for _maxpoll_
-is 12 (4,096 s).
-
-It is possible and frequently useful to configure a host as both
-manycast client and manycast server. A number of hosts configured this
-way and sharing a common group address will automatically organize
-themselves in an optimum configuration based on stratum and
-synchronization distance. For example, consider an NTP subnet of two
-primary servers and a hundred or more dependent clients. With two
-exceptions, all servers and clients have identical configuration files
-including both `multicastclient` and `multicastserver` commands using,
-for instance, multicast group address 239.1.1.1. The only exception is
-that each primary server configuration file must include commands for
-the primary reference source such as a GPS receiver.
-
-The remaining configuration files for all secondary servers and clients
-have the same contents, except for the `tos` command, which is specific
-for each stratum level. For stratum 1 and stratum 2 servers, that
-command is not necessary. For stratum 3 and above servers the _floor_
-value is set to the intended stratum number. Thus, all stratum 3
-configuration files are identical, all stratum 4 files are identical and
-so forth.
-
-Once operations have stabilized in this scenario, the primary servers
-will find the primary reference source and each other, since they both
-operate at the same stratum (1), but not with any secondary server or
-client, since these operate at a higher stratum. The secondary servers
-will find the servers at the same stratum level. If one of the primary
-servers loses its GPS receiver, it will continue to operate as a client
-and other clients will time out the corresponding association and
-re-associate accordingly.
-
-Some administrators prefer to avoid running
-{ntpdman} continuously and run either {ntpdate} or
-{ntpdman} `-q` as a cron job. In either case the servers must be
-configured in advance and the program fails if none are available when
-the cron job runs. A really slick application of manycast is with
-{ntpd} `-q`. The program wakes up, scans the local landscape
-looking for the usual suspects, selects the best from among the rascals,
-sets the clock and then departs. Servers do not have to be configured in
-advance and all clients throughout the network can have the same
-configuration file.
-
-=== Manycast Interactions with Autokey ===
-
-Each time a manycast client sends a client mode packet to a multicast
-group address, all manycast servers in scope generate a reply including
-the host name and status word. The manycast clients then run the Autokey
-protocol, which collects and verifies all certificates involved.
-Following the burst interval all but three survivors are cast off, but
-the certificates remain in the local cache. It often happens that
-several complete signing trails from the client to the primary servers
-are collected in this way.
-
-About once an hour or less often if the poll interval exceeds this, the
-client regenerates the Autokey key list. This is in general transparent
-in client/server mode. However, about once per day the server private
-value used to generate cookies is refreshed along with all manycast
-client associations. In this case all cryptographic values including
-certificates is refreshed. If a new certificate has been generated since
-the last refresh epoch, it will automatically revoke all prior
-certificates that happen to be in the certificate cache. At the same
-time, the manycast scheme starts all over from the beginning and the
-expanding ring shrinks to the minimum and increments from there while
-collecting all servers in scope.
+For a detailed description of manycast operation, see the "Server
+Discovery" page in the web documentation.
 
 === Manycast Options ===
 
@@ -805,7 +619,7 @@ One of the following exit values will be returned:
 
 == SEE ALSO ==
 
-{ntpdman}, {ntpqman}, {ntpqman}
+{ntpdman}, {ntpqman}.
 
 In addition to the manual pages provided, comprehensive documentation is
 available on the world wide web at {project-website}. A snapshot of


