Deciding what modes to keep.

John Bell jdb at systemsartisans.com
Fri Sep 30 22:56:35 UTC 2016


To All -


    Current sysadmin here.  I think I can synthesize some user stories that may
be relevant.

 
> > > On Fri, 30 Sep 2016 14:50:12 -0400
> > > Daniel Franke <dfoxfranke at gmail.com> wrote:
> > >   
> > > > An empty specification in my new language has different semantics
> > > > than an empty specification in the old language so at some point
> > > > there will have to be a flag day as to which way it is
> > > > interpreted, but I don't think this is as big a deal as you've
> > > > made it out to be.  
> > > 

I tried looking through the archive of the "devel@" mailing list, and couldn't
find the relevant message(s).  *Daniel*, could you re-post a succinct summary of
the *specific* ways an empty spec in the new world would differ from an empty
spec in the old world?  In particular, which (possibly insecure) assumptions
would be rendered inoperative?
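
(For anyone following along who doesn't have the Classic syntax memorized,
here is a minimal sketch of what I *assume* is meant by an "empty spec" --
this is my reading, not Daniel's wording.  In NTP Classic's ntp.conf, a
restrict line with no flags after the address grants full access:

    # NTP Classic: an empty flag list means "no restrictions"
    restrict default
    restrict 192.168.1.0 mask 255.255.255.0

The question, as I understand it, is what that same empty flag list would
mean under the new configuration language -- presumably something stricter
by default.  If I've got that wrong, please correct me.)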


> > > Gary Miller wrote:
> > >
> > > I think it is a big deal.  Until we have our own rabid base of
> > > followers the only way NTPsec grows is by taking NTP Classic
> > > users.  They will want a drop-in replacement.
> > > 
> > > So any upward extensibility is fine, but trivial back-compatibility is
> > > essential.  


Let me see if I can describe how I believe real sysadmins would behave in an
imaginary universe where we ship code that is *not* 100% backward compatible:


*Story 1: _very_ careful sysadmin*

SA - Oh look! A better NTP!  (downloads same, *reads the docs carefully*,
figures out their config is busticated)
SA - Crap, I've been vulnerable all along!!  (fixes config, runs new code)
SA - Boss, we are better now.
Boss - *applauds*

All Good, and Improves Security.


*Story 2: careful sysadmin*

SA - Oh look! A better NTP! (downloads same, *skims the docs*, installs without
much thought)
(things break in immediate testing, because they were *relying* on old
insecurities)
SA - Crap, it's broken!
Boss - (glares)
SA - (*Now* reads docs carefully, figures out what went wrong, fixes)
SA - Boss, it's working now!  And we won't get hacked so easily.
Boss - *applauds*

Good Enough, and Improves Security.


*Story 3: usual (overburdened) sysadmin, trusting defaults, etc. - no special
config*

SA - Oh look!  A Better NTP!  (downloads same, installs, nothing breaks)
Boss - *applauds*

Great, and We Improved Internet Security!!


*Story 4: usual (overburdened) sysadmin, funky __incompatible__ config*

SA - Oh look! A better NTP!  (downloads same, installs, *things stop working as
expected*)
(users / peers complain that things are busticated)
Boss - Fix it!!
SA (in a panic) - (curses, reads docs, looks at old config, scratches head,
curses some more, fixes config, waits for the yelling to stop)
SA - Uh, Boss.... I think I fixed it.  And, uh, I think we're more secure.
Boss (glares) - OK, but why did you break it!
SA - Uh, it was broke before, but now it's *much* better.

Sucks To Be That SA, but We Improved Internet Security!



So it looks like the only people we hurt are those who are running an
environment they don't completely understand, *with known vulnerabilities*, whom
we force to fix their bugs.  Is this a rare case?  Is it worth inflicting a bit
of pain to *Improve Internet Security*?

In this day and age
<http://arstechnica.com/security/2016/09/botnet-of-145k-cameras-reportedly-deliver-internets-biggest-ddos-ever/>,
is that a *bad* thing?



Respectfully submitted,



  - John D. Bell

