Achim Gratz Stromeko at nexgo.de
Sun Nov 6 22:33:34 UTC 2016

Gary E. Miller writes:
>> The denominator is a temperature difference, which can't be reported
>> in °C; so that should be ppm/K.
> One °C is one °K for ratio purposes.  The offset cancels out when you
> compute the delta.

There is no unit °K, the kelvin is written simply K; and as I said, °C
must not be used for temperature differences either, even though the
two scale factors are the same.

>> Over a period of time
>> longer than a week you need to take crystal aging into account also.
> I think my poor control of the test room temp vastly outweighs the
> aging.  So I'm pondering some sort of chamber.

The aging is indeed slow enough to get removed by the NTP loop, but it
shows up when you look at a week or more worth of data.  That will then
make your fit worse or even prevent it from converging, which is why you
explicitly need to take it into account there.
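As a sketch of what taking aging into account can look like (all numbers here are made up, not from my actual logs), the aging term can simply be an extra column in a least-squares fit of frequency against temperature; leaving it out biases the temperature coefficient once the record spans a week or more:

```python
import numpy as np

# Hypothetical loopstats-style record: time (days), temperature (°C),
# and the frequency correction ntpd applied (ppm).
t_days = np.linspace(0, 14, 200)               # two weeks of samples
temp_c = 24 + 2 * np.sin(2 * np.pi * t_days)   # daily room-temp swing
rng = np.random.default_rng(0)
true_tc, true_aging = -0.08, 0.01              # ppm/K and ppm/day (invented)
freq_ppm = (1.5 + true_tc * temp_c + true_aging * t_days
            + 0.005 * rng.standard_normal(t_days.size))

# Least-squares fit with an explicit aging column.
A = np.column_stack([np.ones_like(t_days), temp_c, t_days])
coef, *_ = np.linalg.lstsq(A, freq_ppm, rcond=None)
offset_ppm, tc_ppm_per_k, aging_ppm_per_day = coef
```

With real data you would of course restrict the fit to samples where the loop has converged, as discussed above.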

>> I'm only using values where the loop has converged to better than 1µs
>> for the fit.
> But that's the thing.  As long as the RasPi has GPS lock, any aging or
> temp error is corrected in the loop.  Unless you want to provide good
> time with long GPS outages the aging is not important to NTP
> operation.

Your mental model of how the NTP loop works seems to be missing
something important: any change in the XO frequency shows up as an error
in the measurements that NTP makes.  Since that error is not unbiased
when the frequency drifts along with temperature, it will take quite
some time to get corrected and while it is getting corrected, there is a
time offset that is proportional to the derivative of the frequency.

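A toy discrete PI (type-2) discipline loop makes that visible; the gains and drift rate below are invented for illustration and are not ntpd's actual constants.  A steady frequency ramp leaves a constant residual time offset proportional to the ramp rate:

```python
# Toy type-2 (PI) clock-discipline loop; gains and drift are made up.
dt = 1.0                 # seconds per update
Kp, Ki = 0.1, 0.002      # proportional / integral gains
ramp = 1e-9              # XO fractional frequency drifting 1 ppb/s

x = 0.0                  # time offset (s)
integ = 0.0              # integral part of the frequency correction
for k in range(20000):
    y = ramp * k * dt            # current fractional frequency error
    corr = Kp * x + integ        # loop's frequency correction
    x += (y - corr) * dt         # residual frequency accumulates as offset
    integ += Ki * x * dt

# A type-2 loop tracks a frequency step with zero steady-state offset,
# but under a frequency ramp x settles at ramp/Ki (here 500 ns).
```

Once the temperature stops changing, the offset decays away again; it is the rate of change of the frequency that sets the error floor.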
> Ditto here on one of my RasPi.  That clearly buffered the RasPi a bit
> from room temp, but magnified the load factor effect.  So on balance not
> a win.

That's why you will want to run it near the zero-TC point and with as
uniform as possible power dissipation.

>> I'll have to see how to
>> get some finer control over that load (and maybe use a second core
>> for "heating") so that I can operate exactly on the zero-TC point.
> I'm pondering a more direct thermal control.  Fans, heaters, etc.  But
> to me that is very much a low priority.

That doesn't work the way you seem to imagine: you just pile on the
additional problem of keeping the temperature stable on a system with
substantial and varying power dissipation.  The time constants just
don't match up, and the added thermal mass makes them more unfavorable.
If you want to throw hardware at it, just remove the XO and feed the
RasPi 19.2MHz synthesized by a GPSDO (the navSpark timing module can do
that for about $80) and run ntpd in external discipline mode (if that
still works).
> My concerns are different than yours.  I could care less if the OSC
> frequency drifts, as long as NTP applies a good loop correction.

Again, if your yardstick changes between measurements, that correction
is also off by definition.

> What I find worthy of fixing in the graphs is the frequency spikes,
> the temp related and especially the temp unrelated.  What other forces
> are changing the frequency.  Is there some way to change the loop
> control to improve time and frequency accuracy?

You need to first and foremost ensure that the XO frequency is
stable.  The accuracy at timescales below 100s is limited by the
short-term stability of the XO.  You can shift that error between time
and frequency within some reason, but it doesn't go away.

If the frequency isn't stable, then its rate of change must be within
the loop bandwidth; the slower any drifts, the better.  Any other errors
must be unbiased so they rapidly converge to zero by averaging.

If you cannot ensure that either, then you need to have a nested loop
that predicts the fast and/or systematic disturbances and incorporates
the resulting correction into the control algorithm as a feed-forward
component, so the feedback loop doesn't need to deal with that.
Ensuring stability for that nested loop is left as an exercise for the
reader.
The latter part would require extensive characterisation of each system
setup, so it is not really practical in the general case.  Before I
changed to the setup I'm currently running, I had the temperature
vs. ppm curve predicted (again, that was with a non-causal filter, so
the feed-forward part wouldn't have worked in real time) so
that it would have reduced the swing on the NTP loop correction by a
factor of 5 to 10, but some bias remained.  More gain on the correction
might get the offset down, but leans the loop towards oscillation or
even chaotic behaviour.

Running the XO near the zero-TC point does the same or better without
adding a nested loop and fiddling with unknown system and control
variables, so it's a much more practical approach, plus it doesn't cost
anything but a bit of power.  The differences between the actual loop
correction and the off-line prediction are now indeed almost unbiased.
The actual loop offset averages to zero (plus the MAD on the loop offset
runs somewhere between 200...400ns, which is only about 4 to 8 cycles
jitter of the 19.2MHz XO frequency).  99% of all PPS timestamps over the
last day stay within ±10µs, 75% within ±1µs and slightly less than half
within ±500ns.
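For reference, the statistics quoted above can be pulled out of a series of PPS offsets with a few lines like these (the four-sample input is just to show the shape; feed it a day's worth of loopstats offsets):

```python
import numpy as np

def pps_offset_stats(offsets_s):
    """Summarize PPS offsets (seconds): MAD and |offset| percentiles."""
    off = np.asarray(offsets_s, dtype=float)
    med = np.median(off)
    mad = np.median(np.abs(off - med))       # median absolute deviation
    p50, p75, p99 = np.percentile(np.abs(off), [50, 75, 99])
    return mad, p50, p75, p99
```

The MAD is a robust spread measure, so the occasional PPS outlier doesn't inflate it the way it would a standard deviation.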

