Hal, newbie question.

What use case on the internet would saturate a Gb link with NTP? Surely, before that point, we should be recommending a second server closer to the clients?

Assume a large university campus with 30000 nodes (5k students, each with a tablet and a phone, etc.). Assume all nodes (including the IoT coffee maker) run an NTP client. With a poll interval of 100 secs (to make the arithmetic easy), that is 300 pkts/s (quick sanity check in the P.S. below).

I have an ancient Pentium 4, from 2005 or earlier: 4GB RAM, 32-bit. ntpdc -n -c monlist says 67000 slots. It has been in the pool since 2009 or so. CPU load is 1% on each core, except when I run updates, etc.

-- 
Sanjeev Gupta
+65 98551208  http://www.linkedin.com/in/ghane
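P.S. A minimal sketch of that arithmetic, in Python; the node count and poll interval are the assumptions above, not measurements:

    # Back-of-envelope aggregate query rate for the hypothetical campus.
    nodes = 30_000      # assumed device count, coffee maker included
    poll_s = 100        # assumed poll interval, seconds
    print(nodes / poll_s, "pkts/s")   # -> 300.0 pkts/s

On Wed, Apr 7, 2021 at 12:43 PM Hal Murray via devel <devel@ntpsec.org> wrote: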
There are 4 places that might be the limiting factor.

1) The wire might be full.
2) The Ethernet chip might not be able to process packets at full wire speed.
3) The kernel's input dispatcher thread might run out of CPU cycles.
4) The client threads might run out of CPU cycles.

I don't have a good setup to demonstrate the Ethernet chip being the limiting factor. I could probably make one by plugging in a junk card. The gigabit chips that come on Dell motherboards are OK. They can get close to full wire speed, close enough that I'm missing under 100 bits between packets.

The other limits are reasonably easy to demo.
I have a box with an Intel E5-1650v3. It has 6 cores at 3.5 GHz. With HyperThreading, that's 12 CPUs.

My standard test setup is an echo server. The kernel SoftIRQ thread gets one CPU. The other half of that core is left idle. There is an echo server thread on each of the other 10 CPUs.

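Below is a rough sketch of that rig, not the actual benchmark code: it assumes Linux, uses SO_REUSEPORT so the kernel spreads incoming packets across workers, and the port number and CPU numbering are made up for illustration.

    #!/usr/bin/env python3
    # Illustrative UDP echo rig: one worker per spare CPU, all sharing
    # one port via SO_REUSEPORT.  Port and CPU list are hypothetical.
    import os
    import socket

    PORT = 12345                 # made-up test port
    WORKER_CPUS = range(2, 12)   # 10 workers; assumes CPUs 0 and 1 are the
                                 # two hyperthreads of the core reserved for
                                 # the kernel's SoftIRQ thread

    def echo_worker(cpu):
        os.sched_setaffinity(0, {cpu})          # pin this worker to one CPU
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
        s.bind(("0.0.0.0", PORT))
        while True:
            data, addr = s.recvfrom(2048)
            s.sendto(data, addr)                # echo straight back

    for cpu in WORKER_CPUS:
        if os.fork() == 0:                      # one process per worker CPU
            echo_worker(cpu)                    # never returns
    for _ in WORKER_CPUS:
        os.wait()

SO_REUSEPORT is what lets each worker own its own socket instead of all of them contending on one.
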
Measured NTP throughput:

  pkts/sec   uSec/pkt
    426K       2.3     NTP (simple),  48 bytes
    320K       3.1     NTP + AES,     68 bytes
     93K      10.7     NTP + NTS,    232 bytes

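(The two columns above appear to be reciprocals of each other; a quick check in Python, using the table's numbers:)

    # uSec/pkt = 1e6 / (pkts/sec); small mismatches are rounding in the table.
    for pps, label in [(426e3, "simple"), (320e3, "AES"), (93e3, "NTS")]:
        print(label, round(1e6 / pps, 1), "uSec/pkt")   # 2.3, 3.1, 10.8
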
The wire limit for NTP+NTS (232 UDP bytes) is 407K packets per second. That's 2.5 uSec per packet on the wire. With 10 worker CPUs, that gives us 25 uSec of CPU time per packet. We only need about 11, so this processor chip should be able to keep up with a gigabit link running at full speed.

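A sketch of that budget arithmetic, using the 407K pps wire limit and the 10.7 uSec NTS cost quoted above (the 6-worker case anticipates the 4-core workstation discussed next):

    # CPU-time budget per packet when a gigabit wire is full of NTS packets.
    wire_pps = 407e3            # wire limit for 232-byte NTP+NTS, from above
    wire_us = 1e6 / wire_pps    # ~2.5 uSec between packet arrivals
    need_us = 10.7              # measured CPU cost per NTP+NTS packet

    for workers in (10, 6):     # the E5-1650v3 rig, then a 4-core workstation
        budget_us = workers * wire_us
        verdict = "keeps up" if budget_us > need_us else "saturates"
        print(f"{workers} workers: {budget_us:.0f} uSec available, "
              f"{need_us} needed -> {verdict}")
    # -> 10 workers: 25 uSec available, 10.7 needed -> keeps up
    # ->  6 workers: 15 uSec available, 10.7 needed -> keeps up
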
Note that a workstation with 4 cores (8 CPUs with HyperThreading) can probably keep up. Reserving a core for the SoftIRQ thread leaves 6 worker threads, so we only get 15 uSec of CPU time for each packet, but that's still more than 11. I don't know how much running both CPUs of a core will slow things down. We'll have to wait and measure that.

-- 
These are my opinions.  I hate spam.

_______________________________________________
devel mailing list
devel@ntpsec.org
http://lists.ntpsec.org/mailman/listinfo/devel