Possible abuse from fetching the leap second file

Mark Atwood fallenpegasus at gmail.com
Mon Aug 15 14:58:07 UTC 2016


In the long term, I like DNS-based solutions to this kind of problem.  But,
under what name?

Other solutions are putting it in AWS & CloudFront, and in their
equivalents at AZR and at GCS.  To take that route, I would want to arrange
for Amazon, Microsoft, and Google to donate that capacity.  Those three
cloud CDNs could handle that load.  But that will take negotiation time,
and programming time we don't have right now.

An even faster-to-implement solution would be to put it on github.com.  We
could do that today, it would cost us nothing, and GitHub on their backend
smoothly pours very high-demand raw pages into the assorted worldwide
cloud providers and into the CDNs.  Plus it versions the data, and they
have well-known TLS certs.
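
On the client side, a fetch from a raw.githubusercontent.com URL is about
as simple as it gets.  A rough sketch in Python, assuming a hypothetical
ntpsec/leap-seconds repo (the repo name, path, and destination below are
made up, just to show the shape of it):

    # Sketch: fetch leap-seconds.list from a GitHub raw URL and install
    # it atomically.  The URL and destination path are hypothetical.
    import os
    import urllib.request

    URL = ("https://raw.githubusercontent.com/"
           "ntpsec/leap-seconds/master/leap-seconds.list")

    def fetch_leapfile(dest="/var/lib/ntp/leap-seconds.list"):
        with urllib.request.urlopen(URL, timeout=30) as resp:
            data = resp.read()
        tmp = dest + ".tmp"
        with open(tmp, "wb") as f:
            f.write(data)
        os.replace(tmp, dest)  # atomic swap, ntpd never sees a half-written file

    if __name__ == "__main__":
        fetch_leapfile()

A cron job wrapping something like that is all the client side needs.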

Let's do that!  Hal, others, do you happen to have copies of all the past
leap files, so we can synthesize a git history for it?

..m



On Sun, Aug 14, 2016 at 3:49 PM Hal Murray <hmurray at megapathdsl.net> wrote:

> Matt Selsky is working on Pythonizing the script that grabs a new leap
> second file.  The idea is to run a cron job that keeps it up to date.
> That opens an interesting can of worms.
>
> As a general rule, you shouldn't use a resource on a system that you
> don't own without permission from the owner.  Informed consent might be
> a better term.  A system open to occasional downloads by a human might
> not be willing to support automated fetches from many, many systems.
>
> This case is nasty in two ways.
>
> First, the load will normally be light but then go up sharply 60 days
> before the file expires.  (The doc mentions a crontab, but I can't find
> specifics.)  That could easily turn into a DDoS.
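
The expiry is easy to check on the client side, by the way.  The file
carries its own expiration date on its "#@" comment line, in seconds since
1900, so the fetcher can stay off the network entirely until it is inside
that 60-day window.  A sketch, with the on-disk path just a placeholder:

    # Sketch: read the expiration stamp out of an existing leap-seconds.list.
    # The "#@" line holds the expiry in NTP-era seconds (since 1900-01-01);
    # subtract the NTP-to-Unix offset to compare against time.time().
    import time

    NTP_TO_UNIX = 2208988800  # seconds from 1900-01-01 to 1970-01-01

    def leapfile_expiry(path="/var/lib/ntp/leap-seconds.list"):
        with open(path) as f:
            for line in f:
                if line.startswith("#@"):
                    return int(line.split()[1]) - NTP_TO_UNIX
        return None  # no expiry line found; treat the file as stale

    def needs_refresh(path="/var/lib/ntp/leap-seconds.list", margin_days=60):
        expiry = leapfile_expiry(path)
        return expiry is None or expiry - time.time() < margin_days * 86400

That alone keeps well-behaved clients quiet for most of the file's lifetime.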
>
> Second, the URL from NIST is unreliable[1] and the IETF clone is out of
> date.  It's not obvious that NIST is expecting to support non-US clients,
> or that either NIST or the IETF is prepared to support high volumes of
> automated fetches.
>
> The clean solution is for us to provide the server(s), or at least the
> DNS, so we can provide the servers tomorrow.  That commits us to
> long-term support, but since we have control of everything we can fix it
> if something goes wrong.
>
> Does anybody know how many downloads/hour a cloud server can support?
> I'm interested in this simple case, just downloading a small file, no
> fancy database processing.  Are there special web server packages
> designed for this case?
>
> How many clients are we expecting to run this code?
>
> Another approach might be to get the time zone people to distribute the
> leap second file too.  That seems to get updated often enough.
>
> ---------
>
> [1] The current URL is ftp://time.nist.gov/pub/leap-seconds.list
> DNS for time.nist.gov is set up for time, not ftp.  It rotates through
> all their public NTP servers, and many of them don't support ftp.
>
>
> Matt:  The current code has an option to restart ntpd.  The current
> ntpd will check for a new leap file on SIGHUP, but that will kill
> ntp-classic.
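
If we go the SIGHUP route on our side, the fetch script can do the poke
itself once the file has been swapped in.  A sketch, assuming the usual
pid file location (the path varies by distro, and per your note we would
only do this when we know we are talking to our ntpd, not classic):

    # Sketch: ask a running ntpd to reread its leap file by sending SIGHUP.
    # The pid file path is an assumption; distros put it in different places.
    import os
    import signal

    def hup_ntpd(pidfile="/var/run/ntpd.pid"):
        try:
            with open(pidfile) as f:
                pid = int(f.read().strip())
        except (OSError, ValueError):
            return False  # no pid file or unreadable contents; skip the reload
        os.kill(pid, signal.SIGHUP)
        return True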
>
> Please see if you can find a simple way to spread the load.  We can
> reduce the load on the servers by a factor of 30 if you can spread that
> out over a month.
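
Spreading it out is easy if every host derives a stable slot from
something unique to itself instead of firing at the same minute.  A
sketch of the idea, hashing the hostname down to a day-of-month and
time-of-day and emitting a crontab line (the script path is hypothetical):

    # Sketch: derive a per-host fetch slot so the load spreads across a
    # month instead of every client hitting the server at the same moment.
    import hashlib
    import socket

    def fetch_slot():
        digest = hashlib.sha256(socket.gethostname().encode()).digest()
        day = digest[0] % 28 + 1                       # day of month, 1..28
        minute_of_day = int.from_bytes(digest[1:3], "big") % 1440
        return day, minute_of_day // 60, minute_of_day % 60

    day, hour, minute = fetch_slot()
    # Generate the crontab entry once at install time.
    print("%d %d %d * * /usr/local/sbin/fetch-leapfile" % (minute, hour, day))

That gets the factor of 30 (28, really) without any coordination between
clients.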
>
>
> --
> These are my opinions.  I hate spam.