ntp at kaluga.net
Mon Jan 13 11:59:29 UTC 2020
CPU affinity? If you have a network card with many tx/rx queues (modern
PCI-E cards can use MSI-X and software IRQs), you can bind the card's
queue IRQs to some cores and the ntpd process to a different core. On
BSD we use cpuset to spread and bind threads across cores.
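On Linux the same idea can be sketched with smp_affinity masks and taskset. A minimal dry-run sketch (the IRQ numbers 24-27 and the ntpd core are assumptions for illustration; find the real IRQ numbers in /proc/interrupts on your own machine, and drop the outer echo to actually apply the settings as root):

```shell
#!/bin/sh
# Sketch: pin NIC queue IRQs to cores 0-3 and ntpd to core 4.
# IRQ numbers and core layout are assumptions, not from this thread.

# smp_affinity takes a hex CPU bitmask; compute it from a core index.
core_mask() {
    printf '%x' $((1 << $1))
}

irq=24
for core in 0 1 2 3; do
    # Dry run: print the command instead of writing to /proc (needs root).
    echo "echo $(core_mask "$core") > /proc/irq/$irq/smp_affinity"
    irq=$((irq + 1))
done

# Run ntpd on a core the NIC IRQs do not use:
echo "taskset -c 4 ntpd -g"
```

The mask is a plain bitmask, so core 0 is `1`, core 3 is `8`, core 4 is `10` (hex).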
On Linux, see the set_irq_affinity.sh script from the Intel drivers
<https://gist.github.com/SaveTheRbtz/8875474> and the other scripts
shipped with the drivers.
You can also search for articles on 'linux router performance'; for
example https://github.com/strizhechenko/netutils-linux (the rss-ladder
tool there may also help).
Network stack tuning is not simple. Good performance needs a NIC with a
good multi-queue chip and a good driver. As far as I know, Intel NIC
chipsets and drivers are really the best here.
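To make the multi-queue point concrete: each tx/rx queue of such a NIC shows up as its own IRQ line in /proc/interrupts, which is what the affinity scripts above operate on. A small sketch using fabricated sample data (the interface name eth0 and the IRQ numbers are made up; on a real machine you would read /proc/interrupts itself):

```shell
#!/bin/sh
# Count the per-queue IRQ lines a NIC exposes. The sample imitates
# /proc/interrupts output for a 4-queue card.
sample='24:  1000  0  0  0  PCI-MSI  eth0-rx-0
25:  0  1000  0  0  PCI-MSI  eth0-rx-1
26:  0  0  1000  0  PCI-MSI  eth0-tx-0
27:  0  0  0  1000  PCI-MSI  eth0-tx-1'

queue_count() {
    printf '%s\n' "$sample" | grep -c -- "$1-"
}

queue_count eth0   # prints 4: four queues, one IRQ line each
```

With one IRQ per queue, spreading those IRQs across cores is what lets the card use several CPUs at once.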
On 13.01.2020 11:54, Hal Murray wrote:
>> and without 'limited' on ~5kpps I have 8-10% CPU regardless minitoring
>> enabled/disabled. About 1% on 1000pps.
> Is that within reason or worth investigating? 1% times 5 should be 5% rather
> than 8-10% but there may not be enough significant digits in any of the
>> For those who want to process hundreds of thousands of requests per second
>> (like 'national standard' servers) you can use multithreading and multiply
>> power of server.
> The current code isn't set up for threads. I think with a bit of work, we
> could get multiple threads on the server side.
> On an Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz
> I can get 330K packets per second.
> 258K with AES CMAC.
> I don't have NTS numbers yet.