threads, locks, atomic
Hal Murray
hmurray at megapathdsl.net
Thu Jan 14 10:45:13 UTC 2021
Here is what I think we want to end up with:
  N server threads processing client requests
  M threads processing NTS-KE requests
  1 main thread to process ntp.conf and timers
  a client thread per server from server/pool in ntp.conf
  a thread for each refclock
We already have M=2. Generalizing that should be easy.
We can get started with the main thread using the current code for the client
and refclock threads. We'll have to switch to another port number.
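
As a rough sketch of the startup side (thread counts and entry-point
names are invented here, the real ones will differ), spawning the
workers is just a couple of pthread_create() loops:

    #include <pthread.h>

    /* Hypothetical worker entry points. */
    extern void *server_thread(void *arg);
    extern void *nts_ke_thread(void *arg);

    #define NUM_SERVER_THREADS 4   /* N - eventually from ntp.conf */
    #define NUM_NTS_KE_THREADS 2   /* M - currently hardwired at 2 */

    static pthread_t server_tids[NUM_SERVER_THREADS];
    static pthread_t nts_ke_tids[NUM_NTS_KE_THREADS];

    static void start_worker_threads(void)
    {
        int i;
        for (i = 0; i < NUM_SERVER_THREADS; i++)
            pthread_create(&server_tids[i], NULL, server_thread, NULL);
        for (i = 0; i < NUM_NTS_KE_THREADS; i++)
            pthread_create(&nts_ke_tids[i], NULL, nts_ke_thread, NULL);
    }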
I'm picturing a global lock for everything but the server threads. The server
threads will need locks for the MRU table and the restrict list.
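
Something along these lines, with the lock names made up for
illustration:

    #include <pthread.h>

    /* One big lock for everything outside the server threads. */
    static pthread_mutex_t global_lock   = PTHREAD_MUTEX_INITIALIZER;

    /* Finer-grained locks for the data the server threads share. */
    static pthread_mutex_t mru_lock      = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t restrict_lock = PTHREAD_MUTEX_INITIALIZER;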
The thing that makes this whole idea possible is that the server code only
needs a small amount of data to fill in the reply packet. I'm expecting a
struct that will hold that data and a lock for that struct. The client and
refclock threads will process a response to get new data, get the lock, copy
the new data into the struct, then release the lock.
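
A minimal sketch of that struct and the copy-under-lock step; the
field list and function name are just guesses at the sort of thing
that goes there:

    #include <pthread.h>
    #include <stdint.h>

    /* Everything a server thread needs to fill in a reply packet. */
    struct reply_data {
        pthread_mutex_t lock;
        uint8_t  stratum;
        uint8_t  leap;
        uint32_t refid;
        double   root_delay;
        double   root_dispersion;
        /* reference/receive timestamps, poll, precision, ... */
    };

    static struct reply_data current_reply = {
        .lock = PTHREAD_MUTEX_INITIALIZER
    };

    /* Called by a client or refclock thread after it has processed a
     * response and computed fresh values. */
    static void publish_reply_data(const struct reply_data *fresh)
    {
        pthread_mutex_lock(&current_reply.lock);
        current_reply.stratum         = fresh->stratum;
        current_reply.leap            = fresh->leap;
        current_reply.refid           = fresh->refid;
        current_reply.root_delay      = fresh->root_delay;
        current_reply.root_dispersion = fresh->root_dispersion;
        pthread_mutex_unlock(&current_reply.lock);
    }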
The rough edge on that picture is how to process ntpq/mode6 requests. A clean
option is to move mode6 to a new port or TCP. The config option will need to
get the global lock. There is a tangle if the config option wants to delete
the socket the packet arrived on.
I considered making the server threads also process mode6. It's just a few
lines of code in the not-request error path. That makes thinking about DDoS
attacks more complicated. I think we can dodge most of the problems if we
only allow one mode6 request at a time and drop others rather than queue them
up. My plan is to start that way, but I think it would be cleaner to move
mode6 processing to a separate socket, either a new UDP port or TCP.
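
One cheap way to get "one at a time, drop the rest" is
pthread_mutex_trylock(). A sketch, with the packet type and mode6
entry point stubbed out as placeholders:

    #include <pthread.h>

    /* Placeholders for the existing types and mode6 entry point. */
    struct recv_pkt;
    extern void process_mode6(struct recv_pkt *pkt);

    static pthread_mutex_t mode6_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Called from a server thread's not-a-client-request path. */
    static void maybe_handle_mode6(struct recv_pkt *pkt)
    {
        if (pthread_mutex_trylock(&mode6_lock) != 0)
            return;                   /* busy: drop, don't queue */
        process_mode6(pkt);           /* existing single-threaded code */
        pthread_mutex_unlock(&mode6_lock);
    }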
------
Does that seem reasonable? Anybody spot something I missed?
Does anybody know anything about POSIX atomic operations? We need macros for
foo++ and foo += x. I think Intel hardware does the right thing so
performance impacts should be negligible.
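
If C11 <stdatomic.h> is acceptable (that's the C-standard route rather
than a POSIX one), the macros fall out of atomic_fetch_add(). A sketch,
with the names invented:

    #include <stdatomic.h>

    /* A counter that several threads bump: */
    static atomic_uint_fast64_t packets_received;

    /* Possible macros for the two cases above: */
    #define ATOMIC_INC(var)     atomic_fetch_add(&(var), 1)
    #define ATOMIC_ADD(var, x)  atomic_fetch_add(&(var), (x))

On x86 those compile down to a single locked add, so unless a counter
is heavily contended across cores the cost really should be in the
noise.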
--
These are my opinions. I hate spam.