sys_fuzz
Fred Wright
fw at fwright.net
Wed Jan 25 02:48:43 UTC 2017
On Tue, 24 Jan 2017, Gary E. Miller wrote:
> On Tue, 24 Jan 2017 15:22:20 -0800 (PST)
> Fred Wright <fw at fwright.net> wrote:
> > If one is dithering, the amount of dither should be based on the
> > clock's actual resolution, *not* the time required to read it. In a
> > sampled system, one would add dither equal to the quantization
> > interval, in order to produce results statistically similar to
> > sampling with infinite resolution. For time values, one would add
> > dither equal to the clock's counting period, to produce results
> > statistically similar to a clock running at infinite frequency.
>
> Possibly, but that is not how it works now. And would it be an
> improvement? Bring on the experiments!
I didn't say that it's worthwhile on modern systems; in fact I said
exactly the opposite further down. But if one *is* going to dither, then
the clock period is the correct amount. That's a peak-to-peak value, so
if one is adding signed dither, then the magnitude should be half that.
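For concreteness, a minimal sketch (the names are mine, and rand() just
stands in for a decent uniform generator):

#include <stdint.h>
#include <stdlib.h>

/* Sketch: add signed dither spanning one clock period peak-to-peak,
 * i.e. +/- half a period, to a raw timestamp in nanoseconds.
 * period_ns is the clock's counting period, *not* the time required
 * to read it. */
static double dither_timestamp(int64_t raw_ns, double period_ns)
{
    /* Uniform in [-0.5, 0.5), scaled by the full period. */
    double u = (double)rand() / ((double)RAND_MAX + 1.0) - 0.5;
    return (double)raw_ns + u * period_ns;
}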
> > > There is an additional worm in this can. Some OSes with crappy
> > > clocks bumped the clock by a tiny bit each time you read it so that
> > > all clock-reads returned different results and you could use it for
> > > making unique IDs.
> >
> > That's not uncommon, but it's a really bad idea. Demanding that a
> > clock always return unique values is an unwarranted extension of the
> > job description of a clock.
>
> Well then, you just said the current NTP implementation is a bad idea.
No, what I said is that it's a bad idea for an *OS time function* to
corrupt the value in the name of uniqueness. That's what Hal was talking
about.
> In practice, with nanosecond resolution clocks doing CLOCK_MONOTONIC
> is not hard.
Not necessarily (assuming you're actually talking about uniqueness rather
than mere monotonicity), for a couple of reasons:
1) Most clock counters don't really run at 1GHz, so they don't really have
nanosecond resolution (in spite of what clock_getres() may say).
2) Even if the clock really did run at 1GHz, if it could be read in under
1ns it would still be "coarse". I'm not aware of any systems that can
*currently* do that, but it's certainly not beyond the realm of
possibility. Assuming that machines will never be faster than X is one of
those not-future-proof assumptions like Y2K.
Note that "monotonic" does not necessarily mean unique. Mathematically,
it means that values are either strictly nondecreasing or strictly
nonincreasing. In the context of time, only the former interpretation
makes sense, but it doesn't prohibit repeated values. Uniqueness and
monotonicity are orthogonal properties.
Nothing in the POSIX spec says that CLOCK_MONOTONIC values are guaranteed
to be unique. See:
http://pubs.opengroup.org/onlinepubs/9699919799/
It doesn't really say much of anything, except that the epoch is arbitrary
and that the clock isn't adjusted by clock_settime(). The latter's
guarantee against backward steps is where the monotonicity comes from.
> > The proper way to derive unique values
> > from a clock is to wrap it with something that fudges *its* values as
> > needed, without inflicting lies on the clock itself.
>
> Sorta circular since NTP reads the system clock, applies fudge, then
> adjusts the sysclock to match.
Umm, I think you're assuming that "fudges" above means some kind of NTP
time adjustment. I used it in the generic "fudge factor" sense, in this
case meaning whatever adjustment is needed to ensure uniqueness.
Suppose one has:

clock_val_t get_time(void);

Then (ignoring thread safety) one could have something like:

clock_val_t get_unique_time(void)
{
    static clock_val_t last_time = 0;
    clock_val_t new_time = get_time();

    /* If the clock advanced, record and return the new reading;
     * otherwise bump the last returned value by one count. */
    return new_time > last_time ? (last_time = new_time) : ++last_time;
}
The result is both unique and monotonic, and differs from the actual time
by the minimum amount necessary to meet those conditions.
That code of course assumes that clock_val_t is an integer, and gets
messier with multi-component time representations like "struct timespec".
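For what it's worth, a timespec version might look roughly like this (a
sketch only, mainly to show where the carry handling creeps in; still
ignoring thread safety):

#include <time.h>

/* Sketch: same idea for struct timespec.  If the new reading doesn't
 * exceed the last one returned, advance the last value by one
 * nanosecond, carrying into the seconds field as needed. */
struct timespec get_unique_timespec(void)
{
    static struct timespec last = {0, 0};
    struct timespec now;

    clock_gettime(CLOCK_MONOTONIC, &now);
    if (now.tv_sec > last.tv_sec ||
        (now.tv_sec == last.tv_sec && now.tv_nsec > last.tv_nsec)) {
        last = now;        /* clock advanced; use the reading as-is */
    } else if (++last.tv_nsec >= 1000000000L) {
        last.tv_nsec = 0;  /* carry the nanosecond bump into seconds */
        last.tv_sec++;
    }
    return last;
}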
> > Also note that in some contexts it's reasonable to extend the
> > resolution of a "coarse" clock (without breaking "fine" clocks) by
> > reading the clock in a loop until the value changes. This approach
> > is completely neutered by a uniqueness kludge.
>
> I do not see how that helps NTP, just adds latency.
Of course. But in *some contexts* it's useful, and it's broken if the OS
insists on corrupting the time in the name of uniqueness.
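To spell out the technique (a sketch; the function name is mine):

#include <time.h>

/* Sketch: spin until the clock value changes, returning a reading
 * aligned to a tick boundary.  On a coarse clock this extends the
 * effective resolution; an OS that bumps every reading for uniqueness
 * makes the loop exit immediately and tells you nothing. */
struct timespec wait_for_clock_edge(clockid_t clk)
{
    struct timespec first, next;

    clock_gettime(clk, &first);
    do {
        clock_gettime(clk, &next);
    } while (next.tv_sec == first.tv_sec &&
             next.tv_nsec == first.tv_nsec);
    return next;
}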
> > The clock_getres() function is supposed to report the actual clock
> > resolution, which is what should determine the amount of dither, but
> > in practice it's rarely correctly implemented. E.g., in the Linux
> > cases I've tested, it ignores the hardware properties and just returns
> > 1ns.
>
> And it probably can not even determine the hardware properties.
It knows perfectly well what the actual (or at least nominal) oscillator
frequency is, since otherwise it wouldn't be able to convert the counter
values to standard time units. It just can't be bothered to use it for
clock_getres().
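The mismatch is easy to demonstrate empirically; roughly (a sketch,
assuming a POSIX clock_gettime()):

#include <stdio.h>
#include <time.h>

/* Sketch: compare what clock_getres() claims against the smallest
 * nonzero step seen between back-to-back readings.  On a coarse clock
 * the latter approximates the real tick; on a fine clock it mostly
 * measures read latency instead. */
int main(void)
{
    struct timespec res, a, b;
    long min_step = 1000000000L;

    clock_getres(CLOCK_MONOTONIC, &res);
    for (int i = 0; i < 100000; i++) {
        clock_gettime(CLOCK_MONOTONIC, &a);
        clock_gettime(CLOCK_MONOTONIC, &b);
        long step = (b.tv_sec - a.tv_sec) * 1000000000L
                  + (b.tv_nsec - a.tv_nsec);
        if (step > 0 && step < min_step)
            min_step = step;
    }
    printf("clock_getres(): %ld ns; smallest observed step: %ld ns\n",
           res.tv_nsec, min_step);
    return 0;
}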
Fred Wright