sys_fuzz
Eric S. Raymond
esr at thyrsus.com
Wed Jan 25 11:28:47 UTC 2017
Hal Murray <hmurray at megapathdsl.net>:
> > All ARM processors back to the ARM6 (1992) have one as well. A little web
> > searching finds clear indications of cycle counters on the UltraSparc (SPARC
> > V9), Alpha, MIPS, PowerPC, IA64 and PA-RISC.
>
> On ARM, you can't read it from user land unless a mode bit is set.
That's OK; for our purpose only the implementation of clock_gettime(2)
needs to see it.
> > Reading between the lines, it looks to me like this hardware feature became
> > ubiquitous in the early 1990s and that one of the drivers was
> > hardware-assisted crypto. It is therefore *highly* unlikely to be omitted
> > from any new design, even in low-power embedded. And if you have a TSC,
> > sampling it is a trivial handful of assembler instructions.
>
> What does that have to do with crypto?
Generation of unique nonces, for starters.
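
To make the "trivial handful of instructions" point concrete, here's a
hedged sketch (GCC/Clang on x86, using the __rdtsc() intrinsic;
make_nonce and the mixing constant are my inventions, not anybody's
real crypto code):

    #include <stdint.h>
    #include <x86intrin.h>    /* __rdtsc() on GCC/Clang */

    /* Illustration only -- a raw TSC sample is a cheap uniqueness
     * source, not a cryptographically strong nonce by itself. */
    static uint64_t make_nonce(uint64_t salt)
    {
        uint64_t tsc = __rdtsc();   /* one instruction: read the TSC */
        /* Mix in a caller-supplied salt so two hosts booted at the
         * same moment don't collide. */
        return tsc ^ (salt * 0x9e3779b97f4a7c15ULL);
    }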
> Just to make sure we are all on the same wavelength...
>
> User code never reads that register. Modern kernels use it for timekeeping.
Right, that's why I'm not worried about not being able to see it from
userland.
> I think kernels went through 3 stages:
>
> Old old kernels were very coarse. They bumped the clock on an interrupt.
> End of story.
>
> Old kernels use the TSC (or equivalent) to interpolate between interrupts.
> (A comment from Mills clued me in. I was plotting temperature vs drift. It
> got a lot cleaner when I moved the temperature probe from the CPU crystal
> over to the RTC/TOY clock crystal. I haven't looked for the code.)
>
> Current kernels don't use interrupts for keeping time. It's all done with
> the TSC.
This agrees with my understanding, though I'm not clear on when the
transition from old-old to old happened.
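
For anyone following along, here's a minimal sketch of the "old
kernel" interpolation stage Hal describes. All the names are invented,
and a real kernel adds locking, wraparound handling, and frequency
steering on top of this:

    #include <stdint.h>

    struct timekeeper {
        uint64_t ns_at_tick;   /* wall-clock ns captured at last tick */
        uint64_t tsc_at_tick;  /* cycle counter captured at last tick */
        uint64_t tsc_hz;       /* calibrated counter frequency */
    };

    /* Run from the timer interrupt: snapshot both clocks. */
    static void tick(struct timekeeper *tk, uint64_t now_ns, uint64_t tsc)
    {
        tk->ns_at_tick = now_ns;
        tk->tsc_at_tick = tsc;
    }

    /* Run on every clock read: interpolate between interrupts.
     * (The multiply can overflow on long deltas; real code uses
     * shifted fixed-point arithmetic instead.) */
    static uint64_t read_ns(const struct timekeeper *tk, uint64_t tsc)
    {
        uint64_t delta = tsc - tk->tsc_at_tick;
        return tk->ns_at_tick + delta * 1000000000ULL / tk->tsc_hz;
    }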
One thing I don't know: it occurred to me yesterday that if I were
implementing a POSIX-compliant OS today, I would definitely write
clock_gettime(2) so it does *not* surrender the running process's
scheduling slot. I don't know if it's actually done this way.
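
One empirical way to check: time a burst of back-to-back calls. If
each one costs tens of nanoseconds, there is no syscall, let alone a
reschedule, in the path. (I believe Linux serves the common clocks
from the vDSO for exactly this reason; the burst size below is an
arbitrary choice for the sketch.)

    #include <stdio.h>
    #include <time.h>

    #define CALLS 1000000    /* arbitrary burst size */

    int main(void)
    {
        struct timespec start, end, scratch;

        clock_gettime(CLOCK_MONOTONIC, &start);
        for (int i = 0; i < CALLS; i++)
            clock_gettime(CLOCK_MONOTONIC, &scratch);
        clock_gettime(CLOCK_MONOTONIC, &end);

        long long ns = (end.tv_sec - start.tv_sec) * 1000000000LL
                     + (end.tv_nsec - start.tv_nsec);
        printf("%.1f ns per clock_gettime() call\n", (double)ns / CALLS);
        return 0;
    }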
> There is an interesting worm in this area. Most PCs fuzz the CPU frequency
> to meet EMI regulations. There used to be a separate clock-generator chip:
> crystal in, all-the-clocks-you-need out. It's all in the big-chip now, but
> you can get specs for the old chips. The logic controlling the PLL
> deliberately down-modulated the CPU frequency by 1/2% or so at a (handwave)
> 30 kHz rate.
Sorry, I don't see the relevance.
> Just because the hardware has a TSC (or equivalent), doesn't mean that the
> software uses it. I wouldn't be all that surprised if the OS for an IoT size
> device still had an old-old clock routine.
I would be. A large and increasing fraction of these are running Linux, and it
has been a long time since Linux was even "old".
> We should write a hack program to collect data and make pretty histograms or
> whatever. If we are smart enough, we can probably make it scream and shout
> if it ever finds an old-old/coarse clock. If we are lucky, we can run that
> early in the install path.
Checking to see if back-to-back clock_gettime(2) calls yield a fuzz
greater than or equal to 1/HZ seems like the obvious check, at least
under Linux. We could log that where ntpd computes the fuzz.
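
Something like this, maybe (a sketch of the detector; the sample count
and the coarseness threshold are arbitrary, and a real version would
histogram the steps rather than just track the minimum):

    #include <stdio.h>
    #include <time.h>

    #define SAMPLES   100000
    #define COARSE_NS 1000000   /* flag steps of 1 ms or more, i.e. HZ <= 1000 */

    int main(void)
    {
        struct timespec prev, cur;
        long min_step = -1;

        clock_gettime(CLOCK_MONOTONIC, &prev);
        for (int i = 0; i < SAMPLES; i++) {
            clock_gettime(CLOCK_MONOTONIC, &cur);
            long step = (cur.tv_sec - prev.tv_sec) * 1000000000L
                      + (cur.tv_nsec - prev.tv_nsec);
            /* the smallest nonzero increment approximates the clock grain */
            if (step > 0 && (min_step < 0 || step < min_step))
                min_step = step;
            prev = cur;
        }
        printf("smallest nonzero step: %ld ns\n", min_step);
        if (min_step >= COARSE_NS)
            fprintf(stderr, "warning: clock looks tick-coarse\n");
        return 0;
    }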
--
Eric S. Raymond <http://www.catb.org/~esr/>