ntpq update
Hal Murray
hmurray at megapathdsl.net
Fri Dec 23 01:29:04 UTC 2016
esr at thyrsus.com said:
>> The frags= and limit= on the mru command are only used
>> for the first batch. I'd like them to stick.
> There's a computation of those for second and later span requests that I
> transcribed from the C, down around line 1287 in packet.py. I'm very
> reluctant to mess with it; it's at the far end of some logic that I don't
> understand that seems to be trying to adapt to network conditions or server
> errors or *something*.
There is nothing magic there. We should add some debug printout so we can
see what is actually going on.
But if the user specifies a size, either as a packet count (frags=) or a
slot count (limit=), I'd like to stick with that size, at least until we
have a better idea. I think it will make more sense after we collect and
print some statistics on total packets and retransmissions. Then we can do
some experiments.
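Roughly what I have in mind, as a sketch (the names here are made up,
not the actual packet.py identifiers):

    # Remember whether the user set frags= or limit= explicitly, and
    # skip the adaptive resizing if so.
    def next_span_size(user_frags, user_limit, adapted_frags, adapted_limit):
        "Pick frags/limit for the next MRU span request."
        if user_frags is not None or user_limit is not None:
            # The user said how big; stick with that until we know better.
            return (user_frags, user_limit)
        # Otherwise fall back to the adaptive computation transcribed
        # from the C code.
        return (adapted_frags, adapted_limit)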
>> The recent= gets included on the request for the second batch.
>> The server seems to do what we want if it gets that and also gets
>> where-to-start info, but we should clean that up.
> I just have. *That* part I understood.
Thanks.
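For the record, a sketch of the split we want (the field names follow
the mode-6 MRU conventions, but this is illustrative, not the actual
packet.py code):

    def build_mru_request(nonce, frags, recent, last_slots):
        "recent= goes only on the first request; later ones carry where-to-start info."
        req = "nonce=%s, frags=%d" % (nonce, frags)
        if last_slots is None:
            # First batch: recent= tells the server how far back to go.
            if recent is not None:
                req += ", recent=%d" % recent
        else:
            # Later batches: resume from the last slots we saw.
            for i, (addr, last) in enumerate(last_slots):
                req += ", addr.%d=%s, last.%d=%s" % (i, addr, i, last)
        return req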
[bogus packets]
> The case you describe should fail both the opcode and sequence checks, down
> around line 104. If you turn on debug, do you get the sequence mismatch
> message?
I haven't investigated. I did see a case where it was expecting a nonce
and instead printed out a bunch of MRU data.
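The check being discussed is the kind of thing sketched below
(illustrative names, not the actual code at line 104):

    import sys

    def consistent_response(pkt, expected_seq, expected_opcode, debug=False):
        "Reject stray packets left over from an earlier exchange."
        if pkt.sequence != expected_seq:
            if debug:
                sys.stderr.write("sequence mismatch: got %d, expected %d\n"
                                 % (pkt.sequence, expected_seq))
            return False
        if pkt.opcode != expected_opcode:
            if debug:
                sys.stderr.write("opcode mismatch: got %d, expected %d\n"
                                 % (pkt.opcode, expected_opcode))
            return False
        return True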
[memory]
> Can I have your code for reporting memory usage? I have two ideas for
> reducing it. I want to make sure we're looking at the same numbers.
It has been pushed. It's simple and crude.
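Something along these lines (a guess at the shape of it, not
necessarily what was pushed):

    import resource
    import sys

    def report_memory(tag):
        "Crude Unix-only memory report."
        usage = resource.getrusage(resource.RUSAGE_SELF)
        # ru_maxrss is kilobytes on Linux, bytes on macOS.
        sys.stderr.write("%s: max RSS %d\n" % (tag, usage.ru_maxrss))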
I don't think that area is worth a lot of effort. The direct mode handles
the nasty case. If the list doesn't fit in memory, it's probably too big
to process by eyeball; if you have to process it by script, we can move to
a machine with plenty of memory.
We could store the raw data as a single fixed-format string for each slot
and fish out a substring when needed.
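A sketch of that idea (the field widths here are invented):

    # Pack each MRU slot into one string at a known layout, then slice
    # fields back out on demand instead of keeping a per-slot object.
    SLOT_FMT = "%-40s %20.6f %20.6f %10d %5d"  # addr, last, first, ct, mode

    def pack_slot(addr, last, first, ct, mode):
        return SLOT_FMT % (addr, last, first, ct, mode)

    def slot_addr(packed):
        # Fish the address substring back out of the fixed layout.
        return packed[0:40].rstrip()

    def slot_last(packed):
        return float(packed[41:61])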
--
These are my opinions. I hate spam.