Neoclock-4X driver removal

Achim Gratz Stromeko at nexgo.de
Sun Aug 11 12:50:48 UTC 2019


Eric S. Raymond via devel writes:
> Achim Gratz via devel <devel at ntpsec.org>:
>> Eric S. Raymond via devel writes:
>> > * It has 2ms jitter, way worse than a cheap GPS these days.
>> 
>> That is actually much better than what most of the cheap GPS deliver
>> when connected over USB.
>
> You may be a bit behind the curve on this.

I love that start… when the cheapest shot is fired right at the
beginning, you just know that there isn't any real argument coming.

> I've measured 1ms jitter with the Macx-1, the device I designed in
> conjunction with Navisys back in 2012.  That was a bog-standard
> GPS+PL2303 design with 1PPS from the engine connected to the DCD 
> line on the PL2303.

True, however the cheap GPS receivers one can actually buy with a USB
connection mostly use USB-serial converters that don't even pass DCD
through, and hence have no PPS.  PPS over USB serial is still rare to find.
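For readers unfamiliar with the mechanism: on Linux, a PPS pulse wired to
DCD can be picked up by waiting for modem-line transitions with the
TIOCMIWAIT ioctl (the kernel PPS line discipline attached via ldattach is
the production route; this userspace poll only illustrates the principle).
A minimal sketch, assuming a converter that actually passes DCD through;
the device path is illustrative:

```python
import fcntl
import os
import time

# Linux ioctl numbers from <asm-generic/ioctls.h> / <asm-generic/termbits.h>
TIOCMIWAIT = 0x545C   # block until a modem-status line changes
TIOCM_CD   = 0x040    # carrier detect (DCD), where PPS is usually wired

def wait_for_pps(fd, pulses=5):
    """Block on DCD transitions and timestamp each one.  Userspace, so
    jitter is bounded only by scheduler plus USB latency."""
    stamps = []
    for _ in range(pulses):
        # Sleep until DCD changes state (assert or deassert).
        fcntl.ioctl(fd, TIOCMIWAIT, TIOCM_CD)
        stamps.append(time.clock_gettime(time.CLOCK_REALTIME))
    return stamps

if __name__ == "__main__":
    # /dev/ttyUSB0 is illustrative; works only if the converter wires up DCD.
    fd = os.open("/dev/ttyUSB0", os.O_RDWR | os.O_NOCTTY)
    print(wait_for_pps(fd))
```

On a converter that only handles RX/TX, the ioctl will simply never fire
(or will error out), which is exactly the failure mode described above.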

Specifically for Navilock (which I have experience with), their pucks
with PPS always have a separate PPS line (TTL mostly), plus extra
circuitry that is left unpopulated in the modules without PPS, making it
even more difficult to break out the PPS signal that the uBlox module
inside still provides (in case you're wondering, I have actually
succeeded in breaking it out).  The USB serial port usually comes either
directly from the uBlox module inside, which doesn't support DCD at all,
or from a converter chip that only handles RX/TX.

For the benefit of other readers: the Macx-1 GR-601W seems to be no
longer obtainable, but the successor products GR-701W and GR-801W may
be.  I've instead switched to NavSpark mini modules plus an FTDI
breakout board that has the full set of serial signals.

> That's how I know that it already took very little effort to pull down
> that jitter figure seven years ago. Another way to put this is that as
> far back as 2012 you had to be screwing the pooch pretty determinedly
> to get as bad as 2ms.

Without PPS, the picture is not quite as rosy, for both direct serial
and USB serial connections.  Plain USB serial is still noticeably worse
in that case unless you choose very specific converter/driver
combinations.  Again, I mostly use FTDI where that becomes an issue, as
their drivers work consistently well across all OSes.  Prolific drivers,
on the other hand, are all over the map, and on Windows you have to be
really careful that the correct one is installed and actually talks to
the device.

> There's a realistic prospect of that jitter dropping to 0.25msec as
> people who make USB-to-serial chips stop bothering to support USB
> 1.1.

That's not the real reason and the numbers are wrong, too.  The reason
is that you need a high-speed endpoint to use microframes (which were
only specified with USB 2.0), and the number you are looking for is
125µs (there are eight microframes in a 1ms frame).  You need to provide
separate configurations for high-, full- and (if used at all) low-speed
endpoints anyway, so that the host can pick the one config it can or
wants to deal with; you can easily be USB 1.1 compatible and support
lower latency on the USB 2.0 endpoints at the same time this way.  It is
in fact recommended to provide alternative high-speed configurations
with longer poll rates in order for the host to pick up one that fits
the overall load (interrupt transfers reserve bandwidth on the bus, so
not all wishes can be granted -- one of the reasons certain devices
don't work well across a USB hub).
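The scheduling arithmetic above can be made concrete.  USB 2.0 encodes an
interrupt endpoint's polling interval in the descriptor's bInterval field:
full- and low-speed endpoints count 1ms frames directly, while high-speed
endpoints count 125µs microframes as 2^(bInterval-1).  A small sketch of
that encoding (field semantics per the USB 2.0 spec; the function name is
mine):

```python
def interrupt_interval_us(b_interval: int, high_speed: bool) -> int:
    """Polling interval in microseconds for a USB interrupt endpoint."""
    if high_speed:
        # High speed: 2**(bInterval-1) microframes of 125 us each,
        # bInterval in 1..16 (125 us up to ~4 s).
        if not 1 <= b_interval <= 16:
            raise ValueError("high-speed bInterval must be 1..16")
        return (1 << (b_interval - 1)) * 125
    # Full/low speed: bInterval frames of 1 ms each, 1..255.
    if not 1 <= b_interval <= 255:
        raise ValueError("full-speed bInterval must be 1..255")
    return b_interval * 1000

# Fastest full-speed polling is once per 1 ms frame; a high-speed
# endpoint with bInterval=1 is serviced every 125 us microframe, and
# bInterval=4 lands back at 1 ms -- one of the "alternative high-speed
# configurations with longer poll rates" described above.
```

So a device can stay USB 1.1 compatible through its full-speed
configuration while still offering 125µs service on its high-speed one.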

Most of the popular USB serial converters don't support high-speed
endpoints, whether or not they claim compliance to USB 1.1 or 2.0 or yet
some other version (the PL2303 is one of those).  Some of the FTDI USB
serial interface processors (probably all that support JTAG, but I
haven't trawled through their whole product line) actually can be
configured to be polled on each microframe, but it seems that the Linux
driver still only uses the setting that has them buffer at 1ms (down
from 16ms standard).  In any case, with USB being a host-driven system,
it all comes down to what the driver does and whether it actually uses
the capabilities of the device.  That it would work in principle can
easily be inferred from the fact that a rasPi has lower than 1ms jitter
on the ethernet port (which hangs off the USB 2.0 hub port, which in
turn ties into the USB 2.0 host interface).  That may not all be the
doing of the ethernet driver, though; USB2.0 hubs do a thing called
transaction translation that can yield surprising results with some
driver/device constellations.  Unfortunately I don't have a Pi 3 A,
where the host port is broken out directly to a single USB 2.0 port one
could interface with, which would make it possible to isolate these
effects.
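For reference, the 16ms-down-to-1ms buffering mentioned above is the FTDI
"latency timer", which the Linux ftdi_sio driver exposes per port through
sysfs.  A hedged sketch of adjusting it (the port name is illustrative,
and writing the attribute needs root or a udev rule):

```python
import os

def latency_timer_path(port: str) -> str:
    """sysfs attribute exposed by the Linux ftdi_sio driver."""
    return f"/sys/bus/usb-serial/devices/{port}/latency_timer"

def set_latency_timer(port: str, ms: int) -> None:
    """Lower the FTDI receive-buffer flush timeout (driver default 16 ms);
    1 ms is the smallest value the driver accepts."""
    with open(latency_timer_path(port), "w") as f:  # needs root / udev rule
        f.write(str(ms))

if __name__ == "__main__":
    port = "ttyUSB0"  # illustrative
    path = latency_timer_path(port)
    if os.path.exists(path):
        set_latency_timer(port, 1)
        print(open(path).read().strip())
```

This only moves the driver's buffering floor from 16ms to 1ms; going
below that, per the discussion above, would require the driver to use
microframe polling that the hardware may support but the driver does not.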

> This may already have happened - I haven't been tracking that
> area closely because 1ms jitter is just barely low enough not to be a
> real problem for an NTP source expected to deliver WAN
> synchronization.

So, what makes 2ms a number that lets you throw a driver out and 1ms a
number that lets you keep it?  This looks very much like an arbitrary
limit to support a foregone conclusion instead of the other way around.
That was and still is very much the point of my contention with your
proposal.  There may be any number of reasons to drop this or another
driver, but this criterion in particular just isn't backed up by any
data.  And lest you tell me again I don't know what I'm talking about,
I've been running a DCF77 receiver in addition to two GPS (one USB only,
the other with PPS over DCD via an FTDI breakout board) on a rasPi 1B.
The VLF receiver has a bit higher jitter than the GPS w/ PPS (let's not
talk about the USB-serial-only GPS), but choosing either one as the primary
source of time results in indistinguishable performance when I monitor
that rasPi over the network.  In fact, probably owing to the way the
network interface is working, the residual loop jitter as reported by
ntpd is smaller (by a factor of about 2…4) when I sync that particular
box to the other six stratum 1 servers on the same network.

>> > * All the usual signal-propagation and interference problems that
>> >   have caused most other longwave time receivers to be replaced by
>> >   GPSes.
>> 
>> Except that GPS still needs clear view of a relatively large portion of
>> the sky and VLW doesn't, aside from all the interference and signal
>> propagation issues that it has too, because it is operating just on a
>> different band of RF.
>
> What you say is true in theory.

Well, it's true in practice as well.  This is a result of the physics of
electromagnetic wave propagation and of the constraints on where you can
put the computers and an antenna for the receiver.  If you care to look
at which stratum-1 servers you get back from the NTP pool, you'll see
that certain colocation centers have only VLF and no GPS among their
clients.

> In practice, experience in the U.S.
> tells us pretty clearly that the tradeoff is in favor of GPS.
>
> How do we know this?  After the WWVB modulation change in 2012, all
> the American clock-radio vendors moved to GPS-conditioned units *and
> never looked back.*  Longwave receivers are no longer worth the NRE
> to build them here.

That example is irrelevant to the discussion, since the application
(clock radio) has completely different operational and economic
constraints from the one we're discussing (an NTP stratum-1 refclock).


Regards,
Achim.
-- 
+<[Q+ Matrix-12 WAVE#46+305 Neuron microQkb Andromeda XTk Blofeld]>+

Factory and User Sound Singles for Waldorf rackAttack:
http://Synth.Stromeko.net/Downloads.html#WaldorfSounds
