Thoughts on networking and threads...
James Browning
jamesb.fe80 at gmail.com
Thu Dec 3 15:48:05 UTC 2020
On Sat, Nov 28, 2020, 12:17 AM Hal Murray via devel <devel at ntpsec.org>
wrote:
>
> I've been thinking about how to make ntpd serve lots and lots of clients.
>
> I think that requires the server to be multi-threaded, especially if we want
> to support NTS.
>
> ----------
>
> I think we should split ntpd into 3 chunks: client, server, and mode6.
> Currently, they are all tangled up at the receive packet processing. I don't
> mean 3 separate programs, but won't rule that out. The main idea is to
> understand where they interact.
>
> There is a draft RFC to have the client use something other than port 123.
> Splitting the client and server gets that.
>
> Using threads will clean up the packet processing at the cost of adding locks.
> I think the locks will be simple -- after we understand where they interact.
>
bottlenecks. I think the way to do that would be to have the client
export some variables and the servers import them. Depending on how far
back/forward you want to support, it could be as little as 22 bytes or
much longer. v2 would require at least 16 more bytes, v1 eight more than
that; rfc958 (v0?) is sufficiently old/different that I would not
recommend humoring it. I'm probably not reading things correctly, though.
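
To make that concrete, here is a rough sketch of the kind of exported
block I have in mind; the field names and layout are my guesses at the
system variables a server needs to fill in replies, not anything in the
current code:

    #include <stdint.h>

    /* Hypothetical read-mostly block the client side would export.
     * These are roughly the system variables a v4 server copies into
     * its replies; names are illustrative, not real ntpd identifiers. */
    struct exported_sysvars {
            uint8_t  leap;       /* leap indicator, 2 bits used */
            uint8_t  stratum;
            int8_t   precision;  /* log2 seconds */
            uint32_t rootdelay;  /* NTP short format */
            uint32_t rootdisp;   /* NTP short format */
            uint32_t refid;
            uint64_t reftime;    /* NTP timestamp of last sync */
    };                           /* roughly the size estimated above */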
> The client can stay single threaded. I haven't looked carefully, but the
> server doesn't need much data from the client side so I'm pretty sure we can
> make a clean handoff with a simple lock. The server only reads that data.
>
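For what it's worth, a minimal sketch of that kind of handoff, using the
struct from the sketch above (none of these names exist in ntpd today):
the client thread is the only writer, and server threads just copy the
block out under the mutex.

    #include <pthread.h>

    static struct exported_sysvars shared_vars;
    static pthread_mutex_t shared_vars_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Client thread: publish a fresh copy of the system variables. */
    void client_publish(const struct exported_sysvars *fresh)
    {
            pthread_mutex_lock(&shared_vars_lock);
            shared_vars = *fresh;               /* small struct copy */
            pthread_mutex_unlock(&shared_vars_lock);
    }

    /* Server threads: grab a consistent snapshot, never write. */
    void server_snapshot(struct exported_sysvars *out)
    {
            pthread_mutex_lock(&shared_vars_lock);
            *out = shared_vars;
            pthread_mutex_unlock(&shared_vars_lock);
    }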
> It's possible that it might be easy and clean to split the client and server
> into separate programs. The client feeds a lot of info to the kernel where
> the server can get it back. I'll have to check on what that covers.
>
I'm not sure that would work very well. I think the only values in
the packet that could be pulled from the kernel are the time and
precision, and those still need to be converted.
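
To be concrete about what the kernel does hand back, a sketch using the
standard Linux/glibc adjtimex interface (this is the kernel API, not
anything ntpd-specific):

    #include <stdio.h>
    #include <sys/timex.h>

    int main(void)
    {
            struct ntptimeval ntv;
            struct timex tx = { .modes = 0 };   /* read-only query */

            ntp_gettime(&ntv);   /* time plus maxerror/esterror */
            ntp_adjtime(&tx);    /* precision (usec), offset, status */

            printf("time=%ld.%06ld maxerror=%ld precision=%ld\n",
                   (long)ntv.time.tv_sec, (long)ntv.time.tv_usec,
                   ntv.maxerror, tx.precision);
            return 0;
    }

Everything there is in kernel units and still has to be converted to
NTP wire formats, which is the conversion mentioned above.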
> It will probably help to rearrange the header files, something like
> network, utility, client, and server.
>
> If we start using threads, it will be easy to give each refclock a thread.
> Again, the critical step will be understanding how it interacts with the
> main client side thread. For example, the server side doesn't need to know
> anything about the peer struct.
>
I think it would be possible to farm out associations to external
processes and have them communicate data for sets of associations via
SHM or something like it. It should be possible to have sets for other
protocols, time receivers, server peer responses, LAN machines, the
pool with daily/weekly expiration of the worst/eldest entries, and so
forth.
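
Roughly what I mean by SHM, as a sketch only; the segment name, the
sample struct, and the helper are made up for illustration:

    #include <fcntl.h>
    #include <stdint.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Hypothetical per-association record an external process fills
     * in and the main daemon reads. */
    struct assoc_sample {
            double   offset;
            double   delay;
            double   dispersion;
            uint32_t refid;
            uint8_t  stratum;
            uint8_t  leap;
    };

    /* Map (creating if needed) a named segment holding one set. */
    struct assoc_sample *map_assoc_set(const char *name, size_t count)
    {
            size_t len = count * sizeof(struct assoc_sample);
            int fd = shm_open(name, O_CREAT | O_RDWR, 0600);

            if (fd < 0)
                    return NULL;
            if (ftruncate(fd, (off_t)len) < 0) {
                    close(fd);
                    return NULL;
            }
            void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, 0);
            close(fd);
            return p == MAP_FAILED ? NULL : p;
    }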
> Humm. I wonder if the client side gets cleaner if we have a thread per
> server. That's probably a big step toward fixing the 1 second polling mess
> that keeps laptops from power saving.
>
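That seems plausible. A sketch of what a per-server thread might look
like (all names hypothetical): each one sleeps until its own next poll
instead of waking on a shared one-second tick, so an idle laptop isn't
woken every second just to find out nothing is due.

    #include <pthread.h>
    #include <time.h>

    struct server_ctx {
            struct timespec next_poll;
            int             poll_interval;   /* seconds */
    };

    /* Start with pthread_create(&tid, NULL, server_poll_loop, srv). */
    static void *server_poll_loop(void *arg)
    {
            struct server_ctx *srv = arg;

            for (;;) {
                    clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME,
                                    &srv->next_poll, NULL);
                    /* send poll, handle reply, adjust interval ... */
                    srv->next_poll.tv_sec += srv->poll_interval;
            }
            return NULL;
    }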
> I haven't thought much about mode6. I'm willing to discuss anything. We
> could shift to TCP to avoid the DDoS amplification issues. If we stick with
> UDP, we need a new port number. The current code is mostly ASCII on the wire.
> Should we fix the rest of the binary data? Note that there are 2 parts to
> mode6, the client (peers) and the server (mrulist).
>
I think it should be possible to mostly replace mode 6 with essentially
a web UI and have the various components reply to queries via JSON and
SHM, with ALPN running an updated version of mode 6 over the same port.
The question areas would be config, mrulist, system, peer, and refclock,
I think. config would get more complicated with it split up, and so
would mrulist if there are multiple servers.
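
For example, a peer query might come back as something like this; the
field names are purely illustrative, not a proposed format:

    {
      "peers": [
        {
          "address": "192.0.2.10",
          "stratum": 2,
          "offset": -0.000183,
          "delay": 0.021400,
          "jitter": 0.000410,
          "reach": 377,
          "flash": 0
        }
      ]
    }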
> --------------
>
> When I started typing this, I was focused on the socket level interface.
> The current receive dispatching seems pretty ugly to me. Splitting the
> client and server was a way to clean that up. And I wanted many threads
> for the server. But the list above kept getting longer as I typed things in.
>
> Maybe we should start from scratch. Why bother with a Go translator?
>
I think we'd have to give up the name. Also, the output of ccgo3(?) was
rubbish, and it will not translate the (mostly unneeded) long doubles.
> Maybe we should go help with chrony.
>
I think a fair number of people have wandered off.