Hal Murray hmurray at megapathdsl.net
Mon Apr 15 03:21:37 UTC 2019

> No, that description only holds for what are called "coarse" clocks.

Do you understand this area?

I think the term I've been missing is "dither".  I don't understand that area 
well enough to explain it to anybody.  Interesting timing.  I was at a talk a 
few weeks ago that covered dithering.  The context was audio processing and I 
wasn't smart enough to notice the NTP connection.


The idea behind dithering is to add random noise in the bits below what you can measure.
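A toy sketch of that idea, with hypothetical names and an assumed 1000 ns clock granularity (nothing here is ntpd's actual code): a coarse clock truncates the true time to its step size, and dithering adds uniform noise below the step so the low-order bits carry information instead of always reading zero.

```c
#include <stdint.h>
#include <stdlib.h>

#define STEP_NS 1000  /* assumed clock granularity, e.g. a Pi 1 */

/* A coarse clock: the true time in ns, truncated to the step size. */
static int64_t coarse_read(int64_t true_ns) {
    return true_ns - (true_ns % STEP_NS);
}

/* Dithered read: add uniform noise in [0, STEP_NS) so that averaging
 * many readings recovers sub-step resolution. */
static int64_t dithered_read(int64_t true_ns) {
    return coarse_read(true_ns) + (int64_t)(random() % STEP_NS);
}
```

Averaged over many samples, the dithered readings converge on the true time, which is the point of the exercise.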

There are several interesting quirks with the current code.

There isn't a sensible way to measure the step size of the system clock.  The 
current code fuzzes below the time it takes to read the clock.  That has 
nothing to do with the clock tick size; it's just a pipeline delay.  And the 
time-to-read that it uses includes a lot more than the raw clock read: 150 ns 
vs 30 ns on a reasonably modern Intel CPU.

You can see the actual clock step size in the histogram output of 
attic/clocks.  I'm not sure how to automate that.

I haven't studied what ntpd does with coarse clocks.  I don't have a sample to 
test with.  The step size on Intel HPET is ~70 ns.  The time to read it is 
500-600 ns.

Step size on a Pi 1 is 1000 ns, ~50 ns on a Pi 2 or Pi 3.

With the current code, get_systime() is fuzzed.  It's called from quite a few 
places.  The only ones that need fuzzing are the ones used for timekeeping.  
There are only 2 of those, one for sending requests and the other for sending 
replies.  The 2 packet receive time stamps don't get fuzzed.  Neither do the 
PPS time stamps.

There are several calls in the NTS code - those are just measuring elapsed 
times for KE.  The API is convenient.  We should set up something to avoid 
fuzzing them.

These are my opinions.  I hate spam.
