logging

Achim Gratz Stromeko at nexgo.de
Mon Apr 15 17:59:52 UTC 2019


Hal Murray via devel writes:
>> No, that description only holds for what are called "coarse" clocks.
>
> Do you understand this area?

Not in detail, just in a general way.

> I think the term I've been missing is "dither".

Yes.

> I don't understand that area well enough to explain it to anybody.
> Interesting timing.  I was at a talk a few weeks ago that covered
> dithering.  The context was audio processing and I wasn't smart enough
> to notice the NTP connection.

This always becomes an issue when you quantize a signal.  Whether
dithering is a viable solution depends on a lot of things, but it is
usually applied at multiple levels of a system (which in turn creates
its own set of problems).

For instance, if you have an audio CD that is properly mastered, the
LSB of the digital audio should be indistinguishable from a random
sequence.  Yet that LSB is not actually lost: if the reconstruction
filter in your CD player is done correctly, the audio that plays back
will have the full glorious 98 dB SNR of 16-bit audio (the usual rule
of thumb being SNR ~= 6.02*N + 1.76 dB for an N-bit quantizer).

> The idea with dithering is to add noise in bits below what you can measure.

That is often said, but it's wrong.  If you dither below your
measurement resolution, it must be done in some part of the system that
ends up being measurable (via gain, say) or it was useless.  It is
quite common to dither into 3-4 bits of the measurement resolution and
sometimes you even dither beyond the full-scale signal.

Dithering is additive noise (mostly white noise, but not always).  You
can view it either from a statistical or a spectral-domain point of
view, but either way the dither is added to mask certain undesirable
features or impairments in your signal, e.g. tonal spurs in audio or a
bias in measurements that prevents a digital filter from converging.
The dither itself is a known quantity and can later be removed entirely
or at least attenuated below the system noise floor.
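
That last point is easy to see in a few lines of numpy (purely
illustrative, nothing to do with ntpd's code): quantizing a small sine
without dither leaves an error that is a deterministic function of the
signal and shows up as spurs, while adding a known uniform dither
before the quantizer and subtracting it again afterwards leaves an
error that is essentially white:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1 << 16
    t = np.arange(n)
    # a low-level sine, ~2 LSB of a unit-step quantizer
    x = 2.0 * np.sin(2 * np.pi * 1001 * t / n)

    def quant(v):                   # truncating quantizer, step 1
        return np.floor(v)

    # undithered: the error is a deterministic function of the
    # signal, so it concentrates in harmonically related spurs
    e_plain = quant(x) - x

    # subtractive dither: add a *known* uniform random sequence
    # before quantizing, subtract it again afterwards
    d = rng.uniform(0.0, 1.0, n)
    e_dith = (quant(x + d) - d) - x

    w = np.hanning(n)
    for name, e in (("plain   ", e_plain), ("dithered", e_dith)):
        spec = np.abs(np.fft.rfft(e * w))[1:]
        print(name, "peak/mean of error spectrum:",
              round(float(spec.max() / spec.mean()), 1))

The peak-to-mean ratio of the error spectrum comes out far larger in
the undithered case, which is exactly the tonal spur problem.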

> There are several interesting quirks with the current code.
>
> There isn't a sensible way to measure the step size of the system clock.  The 
> current code fuzzes below the time to read the clock.  That has nothing to do 
> with the clock tick size.  It's just a pipeline delay.  And the time-to-read 
> that it uses includes lots more than just raw reading the clock: 150 ns vs 30 
> ns on a reasonably modern Intel CPU.

The step size of the various clocks can be determined rather precisely
in terms of clock ticks; the Linux kernel does it at every boot to
figure out the best clock source.  Translated into "real" time, that
picture is muddied by the fact that both the clock frequency and the
estimate of the clock frequency are continually off from nominal.

> You can see the actual clock step size in the histogram output of attic/clocks
> I'm not sure how to automate that.

You can see the (correct) step size if the clock frequency is reasonably
stable during the measurement, your measurement resolution is better
than the step size of the clock, the measurement jitter is smaller
than the step size, and the measurement jitter is uncorrelated,
zero-mean and spectrally white in the bandwidth of interest.

> I haven't studied what ntpd does with coarse clocks.

Simplifying a bit, let's say we know that we get a measurement result
with resolution 2^-n, where n is smaller than the number of bits in
the result (let's assume all bits below that are always zero).  There
is no way of knowing whether the timer returned x just as it had
incremented from x-2^-n, or returned us x and then incremented just
after we'd read it (in which case we'd have liked to round up to
x+2^-n).  Other statistical properties of the timing measurement being
favorably "normal", that creates a bias of -2^-(n+1) in our
measurement, since the value always gets truncated towards zero
("floor" in some parlance).  So you then add a random number that has
mean value 2^-(n+1).  Provided the individual measurements are not
correlated, which implies that the true but unknown values are
uniformly distributed within [x, x+2^-n), any arithmetic we do with
these augmented measurements will converge to the true value, with the
error shrinking with the square root of the number of measurements we
use in the calculation.  That is, if we calculate an average over four
values, we get one more bit of resolution we can rely on, a second bit
after averaging 16 values, and so on.  If you have an infinitely
running sum somewhere (in a recurrence, say), then it will eventually
become precise to the full word length of your calculation.
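
A minimal simulation of that (illustrative only; names and values are
made up): a truncating readout of a constant carries a bias that no
amount of averaging removes, while adding dither that is uniform over
one quantum (mean 2^-(n+1)) makes the average converge, gaining about
one bit per factor of four in sample count:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 4                           # resolution is 2**-n
    q = 2.0 ** -n
    true = 0.3                      # the value we try to measure

    def measure(k, dithered):
        x = np.full(k, true)
        if dithered:
            # uniform on [0, q): mean is q/2 = 2**-(n+1)
            x = x + rng.uniform(0.0, q, k)
        return np.floor(x / q) * q  # truncating ("floor") readout

    for k in (1, 4, 16, 64, 256):
        plain = measure(k, False).mean() - true
        dith = measure(k, True).mean() - true
        print(f"k={k:4d}  plain err={plain:+.6f}  "
              f"dithered err={dith:+.6f}")

The plain error stays pinned at the truncation bias no matter how
large k gets; the dithered error shrinks roughly as 1/sqrt(k).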

> I don't have a sample to 
> test with.  The step size on Intel HPET  is ~70ns.  The time to read it is 
> 500-600 ns.

You actually have two sources of jitter here: one is the step size of
the measurement via the HPET counter, which is expected to be
well-behaved statistically if you measure external (non-synchronous)
events.  The other is how long it takes to actually read the result.
The latter has a large systematic component that you can in principle
remove, but it has very unfavorable statistics (e.g. the mean will not
converge to the systematic delay).
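
You can see the unfavorable statistics directly (again just a sketch,
using monotonic_ns as a stand-in): the distribution of back-to-back
read deltas has a heavy upper tail from interrupts, migrations and
cache misses, so the mean keeps drifting with the tail while the
minimum is a usable estimate of the systematic read cost:

    import time

    def read_stats(samples=100000):
        deltas = []
        prev = time.monotonic_ns()
        for _ in range(samples):
            now = time.monotonic_ns()
            deltas.append(now - prev)
            prev = now
        nz = [d for d in deltas if d > 0]
        # min approximates the systematic floor; the mean is
        # inflated by the tail and never settles on it
        return min(nz), sum(nz) / len(nz)

    lo, mean = read_stats()
    print(f"min delta {lo} ns, mean delta {mean:.1f} ns")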

> Step size on an Pi 1 is 1000 ns.  ~50 ns on a Pi 2 and Pi 3.

The Pi 1 has a software counter that runs at a nominal 1 MHz.  Everything
else has some hardware counter providing the raw timer values (which
then get scaled by the frequency calibration).

> With the current code, get_systime() is fuzzed.  It's called from quite a few 
> places.  The only ones that need fuzzing are the ones used for
> timekeeping.

I haven't looked at the rationale for why one would need fuzzing in
different places.  You definitely need it for the FLL/PLL, since
otherwise you'd chase the quantization error once you are close enough
to the target.
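
A toy loop shows the effect (this is nothing like ntpd's actual
FLL/PLL, just a pure proportional steer against a truncating readout):
unfuzzed, the loop gets stuck somewhere inside one quantum of the
target; with fuzz that is uniform over one quantum, the truncation
bias cancels on average and the residual hovers around zero:

    import numpy as np

    rng = np.random.default_rng(0)
    q = 1.0                         # measurement quantum
    g = 0.3                         # loop gain

    def settle(fuzzed, steps=2000):
        offset = 10.0               # true offset, in quanta
        tail = []
        for i in range(steps):
            meas = np.floor(offset / q) * q     # truncating read
            if fuzzed:
                # fuzz: uniform over one quantum, mean q/2,
                # cancelling the truncation bias on average
                meas += rng.uniform(0.0, q)
            offset -= g * meas      # proportional correction
            if i >= steps - 500:
                tail.append(offset)
        t = np.array(tail)
        return t.mean(), t.std()

    for fuzzed in (False, True):
        mean, rms = settle(fuzzed)
        print("fuzzed" if fuzzed else "plain ",
              f"residual mean {mean:+.3f}  rms {rms:.3f}")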

> There are only 2 of those, one for sending requests and the other for sending 
> replies.  The 2 packet receive time stamps don't get fuzzed.  Neither do the 
> PPS time stamps.

Generally there is no need to fuzz asynchronous time measurements.
These are guaranteed not to have correlated errors in them since they
originate from a different time scale.  On the other hand, the fuzzing
that gets applied is also far below the expected accuracy of these
measurements, so it is completely inconsequential.

> There are several calls in the NTS code - just measuring elapsed times to do 
> KE.  The API is convenient.  We should setup something to avoid
> fuzzing.

…or maybe you thereby create a complication (aka bug magnet) that
wasn't really needed.


Regards,
Achim.
-- 
+<[Q+ Matrix-12 WAVE#46+305 Neuron microQkb Andromeda XTk Blofeld]>+

Factory and User Sound Singles for Waldorf Blofeld:
http://Synth.Stromeko.net/Downloads.html#WaldorfSounds


