Possible abuse from fetching the leap second file

Dan Drown dan-ntp at drown.org
Mon Aug 15 00:15:32 UTC 2016


Quoting Hal Murray <hmurray at megapathdsl.net>:
> Matt Selsky is working on Pythonizing the script that grabs a new leap second
> file.  The idea is to run a cron job that keeps it up to date.  That opens an
> interesting can of worms.
>
> As a general rule, you shouldn't use a resource on a system that you don't
> own without permission from the owner.  Informed consent might be a better
> term.  A system open to occasional downloads by a human might not be willing
> to support automated fetches from many, many systems.
>
> This case is nasty in two ways.
>
> First, the load will normally be light but then go up sharply 60 days before
> the file expires.  (The doc mentions a crontab, but I can't find specifics.)
> That could easily turn into a DDoS.

I agree that it's impolite to automate this.  What's ok for 100  
servers to do isn't ok for 1 million.

> Second, the URL from NIST is unreliable[1] and the IETF clone is out of date.
> It's not obvious that NIST is expecting to support non-US clients or that
> either NIST or the IETF is prepared to support high volumes of automated fetches.
>
> The clean solution is for us to provide the server(s), or at least the DNS so
> we can provide the servers tomorrow.  That commits us to long-term support,
> but since we have control of everything, we can fix it if something
> goes wrong.
>
> Does anybody know how many downloads/hour a cloud server can support?  I'm
> interested in this simple case, just downloading a small file, no fancy
> database processing.  Are there special web server packages designed for this
> case?

There are a few web servers designed for high-connection-count static
file serving; lighttpd and nginx are two examples.

I'd guess downloads/hour would mainly be limited by packets per second
(especially on a cloud server; cloud instances are usually bad at
sustaining high packet-per-second rates).

Starting with 100k packets per second and 21 packets to complete an
HTTP GET for the leap second file: that gives a rate of about 4,761
completed requests per second (and a ~409 Mbit/s outbound rate).  After
an hour, that's 17 million requests completed (182.4 GByte out, ~$16 of
EC2 bandwidth).

Looking at it a different way, take a theoretical cloud server plan
that includes 4 TB/month of transfer.  That plan would cover around 372
million requests for the leap second file per month (an average rate of
around 143 requests/second).
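
For anyone who wants to check the arithmetic, here it is as a small
Python sketch (the ~10.7 KB on the wire per request is derived from the
409 Mbit/s figure above, not measured, so the totals round slightly
differently):

# Back-of-the-envelope throughput math for serving the leap second file.
PPS = 100000                       # assumed packets/second ceiling
PKTS_PER_GET = 21                  # packets to complete one HTTP GET
REQS_PER_SEC = PPS / PKTS_PER_GET            # ~4,761 requests/second
BYTES_PER_GET = 409e6 / 8 / REQS_PER_SEC     # ~10.7 KB on the wire

gb_per_hour = REQS_PER_SEC * BYTES_PER_GET * 3600 / 1e9  # ~184 GByte out
monthly_reqs = 4e12 / BYTES_PER_GET          # 4 TB plan: ~372 million
avg_rate = monthly_reqs / (30 * 86400)       # ~143 requests/second

print("%.0f req/s, %.0f GB/hour, %.0fM req/month (%.0f req/s average)"
      % (REQS_PER_SEC, gb_per_hour, monthly_reqs / 1e6, avg_rate))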

This is also a thing that would be easy to mirror.  You'd want to
distribute a GPG detached signature alongside the file (updated every 6
months?), so end users could be confident the leap second file wasn't
tampered with by a mirror.
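
A client-side check might look like this (a sketch; the .asc file name
is hypothetical, and it assumes the signing key is already in the local
keyring):

import subprocess

# Verify the detached signature before trusting a mirrored copy.
status = subprocess.call(
    ["gpg", "--verify", "leap-seconds.list.asc", "leap-seconds.list"])
if status != 0:
    raise SystemExit("bad signature; refusing to use this leap second file")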

The throughput numbers above assume plain HTTP; HTTPS handshake
overhead reduces them by around 33%.

> How many clients are we expecting to run this code?
>
> Another approach might be to get the time zone people to distribute the leap
> second file too.  That seems to get updated often enough.

I'm using chrony's feature to read the leap second from the timezone files:
https://chrony.tuxfamily.org/manual.html#leapsectz-directive
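
In chrony.conf that's a single directive (per the manual above;
right/UTC is the "right" timezone that carries leap seconds):

leapsectz right/UTC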

I like this because the leap second updates arrive with regular OS
updates.  It doesn't look like Ubuntu or Fedora have the Dec 31, 2016
leap second yet, though:

[2015]
$ TZ=right/UTC date -d 'Jun 30 2015 23:59:60'
Tue Jun 30 23:59:60 UTC 2015
[2016]
$ TZ=right/UTC date -d 'Dec 31 2016 23:59:60'
date: invalid date ‘Dec 31 2016 23:59:60’


> 1] The current URL is ftp://time.nist.gov/pub/leap-seconds.list
> DNS for time.nist.gov is set up for time service, not FTP.  It rotates
> through all their public NTP servers, and many of them don't support FTP.
>
>
> Matt:  The current code has an option to restart ntpd.  The current ntpd will
> check for a new leap file on SIGHUP, but that will kill ntp-classic.
>
> Please see if you can find a simple way to spread the load.  We can reduce
> the load on the servers by a factor of 30 if you can spread that out over a
> month.
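
One way to get that spread (a sketch, not the actual script: each host
hashes its own name into a fixed day-of-month and hour, then an hourly
cron job fetches only in that slot):

import datetime, hashlib, socket

# Hash the hostname into a stable per-host slot, so a large fleet
# spreads its fetches across ~28*24 distinct hours in the month
# instead of stampeding the server on the same cron tick.
digest = hashlib.sha256(socket.getfqdn().encode()).digest()
my_day, my_hour = digest[0] % 28 + 1, digest[1] % 24

now = datetime.datetime.utcnow()
if (now.day, now.hour) == (my_day, my_hour):
    print("this host's slot: fetch the leap second file now")
    # ...the real fetch-and-verify logic would go here...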



