Fuzz, Numbers

Mike Yurlov ntp at kaluga.net
Mon Jan 13 11:59:29 UTC 2020

CPU affinity? If you have a network card with many tx/rx queues (a modern 
PCI-E card can use MSI-X and software irqs), you can bind the card's 
queues/irqs to some cores and the ntpd process to another core. On BSD we 
use cpuset to spread and bind threads across cores.
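As a sketch of what that binding looks like on Linux (the IRQ numbers 24-27 and the core layout are hypothetical examples, not from the thread; check /proc/interrupts for your NIC's actual queue IRQs):

```shell
#!/bin/sh
# Sketch only: IRQ numbers and core numbers below are made up;
# read /proc/interrupts to find your NIC's real rx/tx queue IRQs.

# Helper: hex affinity mask selecting a single CPU.
cpu_mask() {
    printf '%x\n' $((1 << $1))
}

# Pin each NIC queue IRQ to its own core (0-3); needs root.
core=0
for irq in 24 25 26 27; do
    if [ -w /proc/irq/$irq/smp_affinity ]; then
        cpu_mask $core > /proc/irq/$irq/smp_affinity
    fi
    core=$((core + 1))
done

# Then start ntpd pinned to a different core (4), away from the
# NIC interrupts:
#   taskset -c 4 ntpd -g
```

The point of the split is that the cores handling interrupts stay hot on the driver's rx path while ntpd never gets preempted by softirq work.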

On Linux see the set_irq_affinity.sh script from the Intel drivers 
<https://gist.github.com/SaveTheRbtz/8875474> and the other scripts 
shipped with the drivers.

You can also google articles like 'linux router performance', for 
example https://github.com/strizhechenko/netutils-linux (maybe the 
rss-ladder tool there can also help).

Network stack tuning is not simple. Good performance needs a NIC with a 
good multi-queue chip and a good driver. As far as I know, Intel NIC 
chipsets and drivers are really the best here.
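One quick way to see whether the kernel is actually spreading receive load across cores (and whether any core is dropping packets) is /proc/net/softnet_stat: one row per CPU, hex counters, with column 1 the packets processed and column 2 the drops. A minimal parser, assuming bash (the helper name is mine):

```shell
#!/bin/bash
# Print per-CPU packet counts from a softnet_stat-style file.
# Column 1 = packets processed, column 2 = drops; hex, one row per CPU.
softnet_summary() {
    local cpu=0 processed dropped _rest
    while read -r processed dropped _rest; do
        printf 'cpu%d processed=%d dropped=%d\n' \
            "$cpu" "$((16#$processed))" "$((16#$dropped))"
        cpu=$((cpu + 1))
    done < "$1"
}

# On a live system:
[ -r /proc/net/softnet_stat ] && softnet_summary /proc/net/softnet_stat
```

If one cpu row carries almost all the processed count, the IRQ spreading above is not working; a nonzero dropped column means that core's backlog is overflowing.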

Mike Yurlov

13.01.2020 11:54, Hal Murray wrote:
> Thanks.
>> and without 'limited', at ~5kpps I have 8-10% CPU regardless of monitoring
>> enabled/disabled. About 1% at 1000pps.
> Is that within reason or worth investigating? 1% times 5 should be 5% rather
> than 8-10% but there may not be enough significant digits in any of the
> numbers.
>> For those who want to process hundreds of thousands of requests per  second
>> (like 'national standard' servers) you can use multithreading and  multiply
>> power of server.
> The current code isn't setup for threads.  I think with a bit of work, we
> could get multiple threads on the server side.
> On an Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz
> I can get 330K packets per second.
> 258K with AES CMAC.
> I don't have NTS numbers yet.

More information about the devel mailing list