Threads
Hal Murray
hmurray at megapathdsl.net
Wed Apr 7 04:43:34 UTC 2021
There are 4 places that might be the limiting factor.
1) The wire might be full.
2) The Ethernet chip might not be able to process packets at full wire speed.
3) The kernel's input dispatcher thread might run out of CPU cycles.
4) The client threads might run out of CPU cycles.
I don't have a good setup to demonstrate the Ethernet chip being the limiting
factor.  I could probably make one by plugging in a junk card.  The gigabit
chips that come on Dell motherboards are OK: they get close to full wire
speed, close enough that I'm missing under 100 bit-times between packets.
The other limits are reasonably easy to demo.
I have a box with an Intel E5-1650v3.  It has 6 cores at 3.5 GHz.  With
HyperThreading, that's 12 CPUs.
My standard test setup is an echo server. The kernel SoftIRQ thread gets one
CPU. The other half of that core is left idle. There is an echo server
thread on each of the other 10 CPUs.
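
In case the shape of the test is unclear, here is a minimal sketch of that
setup (not the actual test code).  Each worker thread pins itself to one CPU
and echoes UDP packets on its own SO_REUSEPORT socket, so the kernel spreads
incoming packets across the threads.  The port number and the CPU list are
placeholders; the list assumes CPU 0 handles SoftIRQ and CPU 6 is its idle
sibling.

    /* Minimal sketch: one UDP echo thread pinned to each worker CPU,
     * each with its own socket bound via SO_REUSEPORT.  Port and CPU
     * numbers are placeholders, not the real test configuration. */
    #define _GNU_SOURCE
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <pthread.h>
    #include <sched.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #define ECHO_PORT 12345          /* placeholder port */

    static void *echo_worker(void *arg)
    {
        int cpu = (int)(long)arg;

        /* Pin this thread to its CPU. */
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(cpu, &set);
        pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

        /* One socket per thread, all bound to the same port. */
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        int one = 1;
        setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(ECHO_PORT);
        bind(fd, (struct sockaddr *)&addr, sizeof(addr));

        char buf[512];
        struct sockaddr_in peer;
        socklen_t len;
        for (;;) {
            len = sizeof(peer);
            ssize_t n = recvfrom(fd, buf, sizeof(buf), 0,
                                 (struct sockaddr *)&peer, &len);
            if (n > 0)
                sendto(fd, buf, n, 0, (struct sockaddr *)&peer, len);
        }
        return NULL;
    }

    int main(void)
    {
        /* CPU 0 is left for SoftIRQ, CPU 6 (its sibling) stays idle;
         * the other 10 CPUs each get an echo thread. */
        int workers[] = {1, 2, 3, 4, 5, 7, 8, 9, 10, 11};
        pthread_t tid;
        for (unsigned i = 0; i < sizeof(workers) / sizeof(workers[0]); i++)
            pthread_create(&tid, NULL, echo_worker,
                           (void *)(long)workers[i]);
        pause();
        return 0;
    }

Steering the NIC's interrupts to that one CPU (e.g. via
/proc/irq/*/smp_affinity) is what keeps the SoftIRQ work off the workers.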
Measured NTP throughput:

  pkts/sec  uSec/pkt
    426K      2.3     NTP (simple),  48 bytes
    320K      3.1     NTP + AES,     68 bytes
     93K     10.7     NTP + NTS,    232 bytes
The wire limit for NTP+NTS (232 UDP bytes) is 407K packets per second.  That's
2.5 uSec per packet.  With 10 CPUs, we have 25 uSec of CPU time per packet.
We only need about 11 (the 10.7 measured above), so this processor chip should
be able to keep up with a gigabit link running at full speed.
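
For anyone who wants to redo that arithmetic with different packet sizes or
core counts, here is the back-of-envelope version.  The per-packet framing
overhead is an assumption (Ethernet header, FCS, preamble, inter-frame gap,
IP, UDP), which is why it lands near, rather than exactly on, the 407K figure
above.

    /* Back-of-envelope wire limit and CPU budget for a gigabit link.
     * The 66-byte framing overhead is an assumption; change it (or
     * the payload and worker count) to match your own setup. */
    #include <stdio.h>

    int main(void)
    {
        double payload  = 232;    /* NTP+NTS UDP payload, bytes        */
        double overhead = 66;     /* assumed framing bytes per packet  */
        int    workers  = 10;     /* echo threads                      */

        double wire_bits = (payload + overhead) * 8;
        double pps       = 1e9 / wire_bits;     /* packets/sec on wire */
        double wire_us   = 1e6 / pps;           /* wall clock per pkt  */
        double budget_us = wire_us * workers;   /* CPU time per pkt    */

        printf("wire limit %.0fK pkts/sec, %.2f uSec/pkt, "
               "%.1f uSec of CPU per pkt\n",
               pps / 1000, wire_us, budget_us);
        return 0;
    }

Setting workers to 6 gives roughly the 15 uSec figure used for the 4-core
case below.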
Note that a workstation with 4 cores can probably keep up.  With
HyperThreading that's 8 CPUs; after giving one to SoftIRQ and leaving its
sibling idle, that leaves 6 worker threads, so we only get about 15 uSec of
CPU time for each packet, but that's still more than 11.  I don't know how
much running both CPUs of a core will slow things down.  We'll have to wait
and measure that.
--
These are my opinions. I hate spam.