<div dir="ltr">The long term, I like the DNS for solutions to this kind of problem. But, under what name?<div><br></div><div>Other solutions are putting it in AWS & Cloudfront, and in their equivalents at AZR and at GCS. To take that route, I would want to arrange that Amazon, Microsoft, and Google donate that capacity. <span style="line-height:1.5">The those 3 cloud CDNs could handle that load. But, that will take negotation time, and programming time we don't have right now.</span></div><div><br></div><div>An even faster to implement solution would be to put it in <a href="http://github.com">github.com</a>. We could do that today, and it would cost us nothing, and github on their backend smoothly pours very high demand raw pages into the the assorted worldwide cloud providers and into the CDNs. Plus it versions the data, and they have wellknown TLS certs.</div><div><br></div><div>Let's do that! Hal, others, do you happen to have copies of all the past leap files, so we can synthesize a git history for it?</div><div><br></div><div>..m</div><div><br></div><div><br><br><div class="gmail_quote"><div dir="ltr">On Sun, Aug 14, 2016 at 3:49 PM Hal Murray <<a href="mailto:hmurray@megapathdsl.net">hmurray@megapathdsl.net</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Matt Selsky is working on Pythonizing the script that grabs a new leap second<br>
file. The idea is to run a cron job that keeps it up to date. That opens an<br>
interesting can of worms.<br>
<br>
As a general rule, you shouldn't use a resource on a system that you don't<br>
own without permission from the owner. Informed consent might be a better<br>
term. A system open to occasional downloads by a human might not be willing<br>
to support automated fetches from many, many systems.<br>
<br>
This case is nasty in two ways.<br>
<br>
First, the load will normally be light but then go up sharply 60 days before<br>
the file expires. (The doc mentions a crontab, but I can't find specifics.)<br>
That could easily turn into a DDoS.<br>
<br>
Second, the URL from NIST is unreliable[1] and the IETF clone is out of date.<br>
It's not obvious that NIST is expecting to support non-US clients or that<br>
either NIST or the IETF is prepared to support high volumes of automated fetches.<br>
<br>
The clean solution is for us to provide the server(s), or at least the DNS so<br>
we can provide the servers tomorrow. That commits us to long term support,<br>
but since we have control of everything we can fix it if something goes wrong.<br>
<br>
Does anybody know how many downloads/hour a cloud server can support? I'm<br>
interested in this simple case, just downloading a small file, no fancy<br>
database processing. Are there special web server packages designed for this<br>
case?<br>
<br>
How many clients are we expecting to run this code?<br>
<br>
Another approach might be to get the time zone people to distribute the leap<br>
second file too. That seems to get updated often enough.<br>
<br>
---------<br>
<br>
[1] The current URL is <a href="ftp://time.nist.gov/pub/leap-seconds.list" rel="noreferrer" target="_blank">ftp://time.nist.gov/pub/leap-seconds.list</a><br>
DNS for <a href="http://time.nist.gov" rel="noreferrer" target="_blank">time.nist.gov</a> is set up for time, not ftp. It rotates through all<br>
their public NTP servers and many of them don't support ftp.<br>
<br>
<br>
Matt: The current code has an option to restart ntpd. NTPsec's ntpd will<br>
check for a new leap file on SIGHUP, but that signal will kill ntp-classic's ntpd.<br>
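One possible shape for that, sketched rather than taken from the current script: check the ntpd version banner before signaling, so we never SIGHUP a classic ntpd. The function names here are illustrative, and the assumption that the banner contains "ntpsec" should be verified against a real install.<br>

```python
# Hypothetical guard, not from the current script: send SIGHUP (so ntpd
# rereads the leap file) only when the running ntpd identifies as NTPsec,
# since classic ntpd exits on SIGHUP instead of rereading the file.
import os
import signal
import subprocess

def is_ntpsec():
    """Heuristic: look for 'ntpsec' in the ntpd version banner."""
    try:
        out = subprocess.run(["ntpd", "--version"],
                             capture_output=True, text=True).stdout
    except FileNotFoundError:
        return False
    return "ntpsec" in out.lower()

def reload_leapfile(pid):
    """Ask an NTPsec ntpd to reread its leap-seconds file."""
    if is_ntpsec():
        os.kill(pid, signal.SIGHUP)
```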
<br>
Please see if you can find a simple way to spread the load. We can reduce<br>
the load on the servers by a factor of 30 if you can spread that out over a<br>
month.<br>
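One cheap way to get that factor of 30, as a minimal sketch (names and the 30-day period are my assumptions, not anything in the script): run the fetcher from a daily cron job, but have each host derive a stable offset from its hostname so the fleet's fetches land on different days of the month instead of all hitting the server on the same tick.<br>

```python
# Hypothetical load-spreading sketch: derive a stable per-host offset so
# a fleet of clients spreads its fetches across ~30 days rather than
# all fetching on the same cron tick.
import hashlib
import socket

def fetch_day(period_days=30):
    """Pick a day-of-period for this host, stable across runs."""
    host = socket.gethostname().encode()
    digest = hashlib.sha256(host).digest()
    return int.from_bytes(digest[:4], "big") % period_days

def should_fetch_today(day_of_month, period_days=30):
    """Called daily from cron; fetch only on this host's chosen day."""
    return (day_of_month - 1) % period_days == fetch_day(period_days)
```

A random sleep at startup would also work, but a hash of the hostname keeps each host's fetch day stable, which makes server load predictable and debugging easier.<br>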
<br>
<br>
--<br>
These are my opinions. I hate spam.<br>
<br>
<br>
<br>
_______________________________________________<br>
devel mailing list<br>
<a href="mailto:devel@ntpsec.org" target="_blank">devel@ntpsec.org</a><br>
<a href="http://lists.ntpsec.org/mailman/listinfo/devel" rel="noreferrer" target="_blank">http://lists.ntpsec.org/mailman/listinfo/devel</a><br>
</blockquote></div></div></div>