Adoption strategy

Eric S. Raymond esr at thyrsus.com
Fri Sep 15 17:40:21 UTC 2017


(Lifted from the thread on lldexp)

Ian Bruene via devel <devel at ntpsec.org>:
> As I understood it part of the rationale for NTPsec was the yawning security
> chasms in NTPclassic. Shouldn't wide adoption therefore be highly desirable?
> 
> Possible answer: Aunt Tillie is not going to hunt down NTPsec and install
> it, we can't get any real foothold in the diffuse installed base. But NTPsec
> *can* get into new OS releases and big server farms.

Yeah, that's it, basically.

The run-up to the 1.0 release is I think a good time to discuss our
adoption strategy. The fact that we now seem to be onboarding a new
senior dev - Fred Wright, who some of us know as an extremely effective
contributor to GPSD - adds value to the exercise.

It's Mark's job to make the big decisions about this, but I believe my
thinking runs quite parallel to his and I'm sure he'll correct me if I
get his evaluations wrong.

The project's goal is to fix time service, where "fix" centers on
improving security and reliability.  The slightly better timekeeping
and the rather dramatically improved monitoring tools are the
sizzle on that steak.

(Slightly improved timekeeping would be a bigger deal if not for the
limitations of our clock sources and the scale of network weather.
NTP Classic was already nearly as good in that respect as is practically achievable.)

So, the next question is how we get NTPsec fielded to as many places
where it's needed as possible, as rapidly as possible.

The key thing to notice is that "where it's needed" is not uniformly
distributed across all users.  The bad guys don't bother attacking the
99.99% of all NTP clients who are on desktops behind dynamic IPs. The fat
targets for use as DDoS amplifiers are big data centers on static IPs.

This happily coincides with the set of users we want to love us and
give us money and send us engineers.  So *everything* points us at an
adoption push aimed at big data centers first.  Either directly or by
getting our stuff into the distro pipeline to their systems.

This has a number of implications.  The top one is that almost no
platform but Linux actually matters. We're doing minor-platform stuff
like *BSD and Mac more to signal competence and be good-guy citizens
of a culture that considers platform breadth a virtue than because
they actually matter to our strategy. Windows doesn't matter at all.

In general, dropping non-Linux platforms to lower our expected defect rate
is a good trade.  We need to look for the knee in the curve of complexity
reduction; the obvious one, which has been a project premise since week one,
is full C99/POSIX conformance. We're currently arguing about how far back
to support Mac OS X and NetBSD; it's not yet resolved, but it's a healthy
argument proceeding from the right premises.
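
To make the complexity-reduction point concrete, here's a small
illustrative sketch - not code from the NTPsec tree, just my own
example of the sort of thing a hard C99/POSIX baseline buys.  With
POSIX clock_gettime() and C99's <stdint.h> available everywhere we
care about, there's no feature probe, no gettimeofday() fallback, and
no typedef shims for fixed-width integers; every such path we can
delete is one less place for a defect to hide.

    /* Illustrative only: what assuming C99/POSIX lets you write.
     * No feature probe, no fallback path, no portability shims.
     */
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        struct timespec ts;

        if (clock_gettime(CLOCK_REALTIME, &ts) != 0) {
            perror("clock_gettime");
            return 1;
        }

        /* C99 guarantees int64_t and long long unconditionally. */
        int64_t ns = (int64_t)ts.tv_sec * 1000000000LL + ts.tv_nsec;
        printf("ns since the epoch: %lld\n", (long long)ns);
        return 0;
    }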

Another implication is that ancient refclocks (anything EOLed)
probably don't matter either.  There may be a limited exception here
for FedGov installations with really ancient hardware locked in by
certification requirements, which is why we're still retaining some 
pretty crufty old stuff in the driver set. 

Yet a third implication is that support for 32-bit platforms is not
very important either.  We're doing that mainly because (a) good code
hygiene and (b) ARM32 and similar hardware make nice microservers.

> Along this line of thought you should look up Coase's Theorem; there is a
> tremendous amount of generative value in understanding it.

Oh hell yes. The one-line version: Given sufficiently low transaction
costs, any externality will be internalized. But there's a lot of
unobvious freight in there - it helps to study Coase's motivation for it.
-- 
		<a href="http://www.catb.org/~esr/">Eric S. Raymond</a>

My work is funded by the Internet Civil Engineering Institute: https://icei.org
Please visit their site and donate: the civilization you save might be your own.



