Resuming the great cleanup
hmurray at megapathdsl.net
Sun May 27 18:56:00 UTC 2018
> SINGLESOCK: While messy and somewhat difficult, this is mostly a SMOP
> (Simple Matter of Programming). There is one potential technical risk,
> relatively minor I think.
> The reason for iterating over interfaces is that ntpd has the capability to
> block incoming packets by interface of origin. In order to go to a single
> epoll we either need to (a) abandon this feature, or (b) find a way to query
> the device a packet came through from the packet.
Could that feature be moved to a packet filter? I think most OSes support
some form of kernel-level packet filtering, though I'm not familiar with any
of them in detail. Does anybody actually use interface filtering?
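For option (b), Linux at least can tell you which interface a packet arrived
on without per-interface sockets: set IP_PKTINFO and read the ancillary data
from recvmsg (BSDs have the similar IP_RECVIF). A self-contained sketch, with
a function name of my own invention, not anything from the ntpd source:

```c
/* Sketch of option (b): recover the arrival interface from the packet
 * itself via IP_PKTINFO, so a single socket can still honor
 * per-interface blocking.  Linux-specific; hypothetical helper name. */
#define _GNU_SOURCE
#include <net/if.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Receive one self-addressed loopback datagram and return the index
 * of the interface it arrived on (0 on error). */
static unsigned arrival_ifindex(void) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    int on = 1;
    setsockopt(fd, IPPROTO_IP, IP_PKTINFO, &on, sizeof on);

    struct sockaddr_in sin;
    memset(&sin, 0, sizeof sin);
    sin.sin_family = AF_INET;
    sin.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    bind(fd, (struct sockaddr *)&sin, sizeof sin);

    /* send ourselves a packet so recvmsg has something to report */
    socklen_t len = sizeof sin;
    getsockname(fd, (struct sockaddr *)&sin, &len);
    sendto(fd, "x", 1, 0, (struct sockaddr *)&sin, sizeof sin);

    char buf[64], cbuf[256];
    struct iovec iov = { buf, sizeof buf };
    struct msghdr msg;
    memset(&msg, 0, sizeof msg);
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = cbuf;
    msg.msg_controllen = sizeof cbuf;
    recvmsg(fd, &msg, 0);

    unsigned idx = 0;
    struct cmsghdr *c;
    for (c = CMSG_FIRSTHDR(&msg); c; c = CMSG_NXTHDR(&msg, c))
        if (c->cmsg_level == IPPROTO_IP && c->cmsg_type == IP_PKTINFO)
            /* this is where a single-socket ntpd could drop the
             * packet if its arrival interface is blocked */
            idx = ((struct in_pktinfo *)CMSG_DATA(c))->ipi_ifindex;
    close(fd);
    return idx;
}
```

So the filtering feature could survive a single-socket rewrite; the open
question is whether every supported OS has an equivalent of IP_PKTINFO.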
> EVENTS: The code currently has a once-per-second tick that we want to
> eliminate in favor of alarms that only fire as needed. Unfortunately, this
> is going to be quite difficult. And we won't collect the major benefit
> (lower power consumption) until every piece of it is done.
Is there a better term than alarm? The normal case will be to wait for a
packet to arrive with an N-second timeout. That's just a timeout on a poll.
I don't see anything alarming about that.
We can migrate the code in the right direction without major changes by
collecting future work events and putting them on a sorted queue.
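The sorted-queue idea is small enough to sketch: keep future work ordered by
due time and let the head of the queue set the poll timeout. All names here
are hypothetical, nothing from the ntpd source:

```c
/* Sketch: future work events on a queue sorted by due time; the head
 * of the queue determines the poll() timeout.  Hypothetical names. */
#include <stddef.h>
#include <time.h>

struct event {
    time_t due;              /* when the work should run */
    void (*fire)(void);      /* what to do */
    struct event *next;
};

static struct event *queue;  /* singly linked, soonest first */

/* insert, keeping the list sorted by due time */
static void sched(struct event *e) {
    struct event **p = &queue;
    while (*p && (*p)->due <= e->due)
        p = &(*p)->next;
    e->next = *p;
    *p = e;
}

/* timeout in ms for poll(): time until the first event, 0 if work is
 * already due, or -1 (block forever) when nothing is pending */
static int next_timeout_ms(time_t now) {
    if (!queue)
        return -1;
    if (queue->due <= now)
        return 0;
    return (int)((queue->due - now) * 1000);
}
```

The main loop then becomes poll(fds, nfds, next_timeout_ms(time(NULL)))
followed by popping and firing everything whose due time has passed; the
once-per-second tick just becomes one more queued event until each of its
users is converted.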
> In our deployment scenarios, how often do we think a low-power device is
> *not* going to be watching a GPS/1PPS refclock? Smartphones and tablets are
> right out - anything mobile with a browser wants to know location, therefore
> will have a GPS.
Just because a platform has GPS doesn't mean that ntpd should get tangled up
with using it. On the scale of cell phones, GPS eats a lot of power. I'll
bet they play all sorts of turn-it-off games to save power.
Also, consider laptops instead of cell phones. How many of them have GPS?
You should probably add cleaning up SHM to your list. I think we want to
make the read side read-only. The current approach is polled. Maybe we
should move to a socket. ???
PPS processing is also polled. I think the API has an option to wake up on
new data. I don't know if anybody has tried it.
There is a potential tangle in the low power area. To really save power, you
want to turn off the CPU clock that is used for timekeeping. That means
switching to the RTC/TOY clock. It may need a separate drift correction.
Maybe we need a hook to catch return-from-superlow-power so we can restart
the internal state, similar to what happens after the clock is stepped.
> 2. There's a subtle issue here with frequency of clock adjustment. Currently
> if we're slewing the clock it gets adjusted once per second. If we go to a
> fully event-driven architecture (and there are no refclocks) the frequency
> of adjustments will drop to the frequency of network traffic. This may not
> be a practical problem - I'm inclined to think it won't be - but we won't
> know until we measure.
Does the no-refclock case really adjust anything each second? There is no
new data. Why would it change the clock? The slewing is handled in the
kernel - there is no reason to keep poking it.
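To make the "kernel handles the slewing" point concrete: on Linux the
discipline is driven through adjtimex(2). Handing the kernel an offset (with
ADJ_OFFSET, which needs privilege) starts a slew that the kernel amortizes on
its own, so there is nothing to poke each second. A minimal read-only query,
which any user can run:

```c
/* Sketch: querying the Linux kernel clock discipline via adjtimex(2).
 * With modes == 0 this changes nothing and needs no privilege;
 * setting ADJ_OFFSET (privileged) would start a kernel-managed slew. */
#include <string.h>
#include <sys/timex.h>

static int clock_state(long *offset) {
    struct timex tx;
    memset(&tx, 0, sizeof tx);
    tx.modes = 0;               /* query only */
    int state = adjtimex(&tx);  /* TIME_OK, TIME_INS, ... or -1 */
    if (state >= 0 && offset)
        *offset = tx.offset;    /* remaining offset being slewed,
                                   usec (ns if STA_NANO is set) */
    return state;
}
```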
The refclock case is batched and merged into the normal packet flow.
How thread-friendly is Go?
There is another potential cleanup area. There are 2 modes of PPS. The
normal mode mostly treats the PPS as another refclock. The other mode is to
let the kernel do all the work. This is not included in most kernels, but if
you are willing to build your own kernel you can get much better results. I
don't see any reason that we can't do the equivalent logic in userland.
This could potentially be included in the great refclock cleanup, but it
requires feedback from the sanity check level of ntpd to tell the PPS
processing that it should/shouldn't actually feed corrections to the kernel.
These are my opinions. I hate spam.