How much do we care about high-load scenarios?
Stromeko at nexgo.de
Thu Sep 15 18:20:55 UTC 2016
Dan Drown writes:
> The limit they hit with their hardware was around 370kpps (with a
> single process receive), which is a lot of NTP.
> From my own testing with iperf high rate 64 byte UDP packets, max
> rate before 1% receive packet loss:
> i3-540 / Intel 82574L nic: ~469kpps
> Athlon(tm) 64 X2 4400+ / RTL8168 gig nic: ~64kpps
> Odroid C2: ~62kpps
> Raspberry Pi 2: ~19kpps
> Beaglebone Black: ~9kpps
> Raspberry Pi B+: ~4kpps
> Even these low end machines would be able to serve thousands (or
> millions even, if the clients are mostly nice) of NTP clients each.
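To sanity-check the "thousands or millions" claim, here is a quick back-of-envelope sketch. The 64-second poll interval is my assumption (a common ntpd maximum poll default), not something stated in the quoted tests:

```python
# Back-of-envelope: how many well-behaved NTP clients each machine could
# serve, assuming every client polls once every 64 seconds (assumed
# default maximum poll interval). The kpps figures are from the tests
# quoted above; the calculation is just rate * poll_interval.

POLL_INTERVAL_S = 64  # seconds between polls per client (assumption)

measured_kpps = {
    "i3-540 / 82574L":        469,
    "Athlon 64 X2 / RTL8168":  64,
    "Odroid C2":               62,
    "Raspberry Pi 2":          19,
    "Beaglebone Black":         9,
    "Raspberry Pi B+":          4,
}

for host, kpps in measured_kpps.items():
    clients = kpps * 1000 * POLL_INTERVAL_S
    print(f"{host}: ~{clients / 1e6:.1f} million clients")
```

Even the Pi B+ at 4 kpps comes out around a quarter-million nice clients, and the i3 at around thirty million, which is consistent with the quoted conclusion.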
These are throughput numbers, not response-time distributions. I think
it's an established fact, even from a back-of-the-envelope calculation,
that NTP doesn't saturate a network link. There's a glimpse of the delay
distribution growing a fatter tail under load in the RADclock papers,
but that data is sadly quite out of date by now.
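For reference, the back-of-the-envelope link calculation might look like this. The wire-level packet sizes are my assumptions (IPv4, no extension fields), not figures from the post:

```python
# Sketch of the saturation argument: a client-mode NTP packet is 48 bytes
# of NTP payload + 8 UDP + 20 IPv4 = 76 bytes of IP datagram; with
# Ethernet framing (14 header + 4 FCS) plus 20 bytes of preamble and
# inter-frame gap, that is about 114 bytes on the wire per request.

WIRE_BYTES = 48 + 8 + 20 + 14 + 4 + 20  # ~114 bytes per NTP packet

def mbps(pps: float) -> float:
    """Wire bandwidth in Mbit/s for a given packet rate."""
    return pps * WIRE_BYTES * 8 / 1e6

# Even the 370 kpps hardware limit quoted earlier uses only about a third
# of a gigabit link, and a well-behaved client population is far below it.
print(f"370 kpps hardware limit      -> {mbps(370e3):.0f} Mbit/s")
print(f"1M clients polling every 64s -> {mbps(1e6 / 64):.2f} Mbit/s")
```

So the bottleneck in those tests is per-packet processing, not bandwidth, which is why throughput alone says little about the delay distribution.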
> So this doesn't seem like a burning issue to the average user.
The average user can't tell the difference, since he doesn't know where
the variability in a remote NTP server comes from. As anecdotal
evidence: I had to re-connect my VDSL this past weekend, and now one of
the two PTB servers shows a 100 µs shift in offset while the other
hasn't changed, for a total of 200 µs average difference between them.
I've moved the PPS offset on the GPS so that on average I stay in the
middle between the two. Given the sequence of events, it's clear that
the network produces this result and not load on a server, but I
couldn't have known that in the general case.
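This is consistent with the usual asymmetric-path explanation: NTP's offset estimate assumes equal one-way delays, so any delay asymmetry shows up directly as apparent offset. A minimal sketch, with illustrative delay values of my choosing:

```python
# NTP computes the clock offset from the four exchange timestamps
# t1..t4 (RFC 5905, sec. 8) as
#     offset = ((t2 - t1) + (t3 - t4)) / 2
# which is only exact when forward and reverse one-way delays are equal.
# Half of any asymmetry appears as offset error.

def ntp_offset(t1, t2, t3, t4):
    """Standard NTP clock offset estimate."""
    return ((t2 - t1) + (t3 - t4)) / 2

# Perfectly synchronized clocks, but an asymmetric path (illustrative):
d_fwd = 10.0e-3   # client -> server delay: 10.0 ms
d_rev = 10.2e-3   # server -> client delay: 10.2 ms, e.g. after a re-sync
t1 = 0.0          # client transmit
t2 = t1 + d_fwd   # server receive
t3 = t2           # server transmit (immediate reply)
t4 = t3 + d_rev   # client receive

bias = ntp_offset(t1, t2, t3, t4)
print(f"apparent offset: {bias * 1e6:.0f} µs")  # -(d_rev - d_fwd)/2 = -100 µs
```

A 0.2 ms asymmetry, easily introduced by a DSL re-sync picking a different interleaving depth, is enough to produce a 100 µs offset step even though no clock moved.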