sys_fuzz

Hal Murray hmurray at megapathdsl.net
Wed Jan 25 10:12:05 UTC 2017


esr at thyrsus.com said:
>> Mark/Eric: Can you guarantee that we will never run on
>> a system with a crappy clock?  In this context, crappy means
>> one that takes big steps.

> OK, now that I think I understand this issue I'm going to say "Yes, we can
> assume this".

> All x86 machines back to the Pentium (1993) have a hardware cycle counter;
> it's called the TSC. As an interesting detail, this was a 64-bit register
> even when the primary word size was 32 bits.

> All ARM processors back to the ARM6 (1992) have one as well. A little web
> searching finds clear indications of cycle counters on the UltraSparc (SPARC
> V9), Alpha, MIPS, PowerPC, IA64 and PA-RISC.

On ARM, you can't read it from user land unless a mode bit is set.  Last time 
I tried, it wasn't set.  I found directions on how to set it, but that 
required building a kernel module and I never got that far.
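
For the record: on ARMv7 the counter is PMCCNTR and the mode bit is the 
user-enable bit in PMUSERENR, which only the kernel can set; hence the 
kernel-module dance.  Here is a sketch of the userland read, assuming 
GCC-style inline asm.  It dies with SIGILL while the bit is clear:

    #include <stdint.h>

    /* Read PMCCNTR, the ARMv7 PMU cycle counter.  Traps (SIGILL)
       unless the kernel has set the user-enable bit in PMUSERENR,
       typically from a small kernel module. */
    static inline uint32_t read_ccnt(void)
    {
        uint32_t ccnt;
        asm volatile("mrc p15, 0, %0, c9, c13, 0" : "=r"(ccnt));
        return ccnt;
    }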

> I also hunted for information on dedicated smartphone processors. I found
> clear indication of a cycle counter on the Qualcomm Snapdragon and clouded
> ones for Apple A-series processors.  The Nvidia Tegra, MediaTek, HiSilicon
> and Samsung Exynos chips are all recent ARM variants and can therefore be
> assumed to have a cycle counter.

> Reading between the lines, it looks to me like this hardware feature became
> ubiquitous in the early 1990s and that one of the drivers was
> hardware-assisted crypto.  It is therefore *highly* unlikely to be omitted
> from any new design, even in low-power embedded.  And if you have a TSC,
> sampling it is a trivial handful of assembler instructions.

What does that have to do with crypto?

I've never used it for anything other than timing.
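
(For calibration, on x86 the "trivial handful" is a single unprivileged 
instruction.  A sketch assuming GCC or Clang, where the __rdtsc() intrinsic 
from x86intrin.h compiles to one rdtsc:

    #include <stdint.h>
    #include <x86intrin.h>

    /* Read the 64-bit time-stamp counter.  Unprivileged by default
       on x86, unlike the ARM case above. */
    static inline uint64_t read_tsc(void)
    {
        return __rdtsc();
    }

Note that rdtsc is not a serializing instruction, so careful benchmarking 
wants a fence around it; for probing a clock it hardly matters.)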


> I think I can take it from here.

Just to make sure we are all on the same wavelength...

User code never reads that register.  Modern kernels use it for timekeeping.
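
The portable way for user code to get at that TSC-derived time is to ask the 
kernel, e.g. via clock_gettime(); on Linux that goes through the vDSO, so 
there isn't even a syscall on the fast path:

    #include <stdio.h>
    #include <time.h>

    /* Userland's window onto the kernel's TSC-based clock. */
    int main(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        printf("%ld.%09ld\n", (long)ts.tv_sec, ts.tv_nsec);
        return 0;
    }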

I think kernels went through 3 stages:

Old-old kernels were very coarse.  They bumped the clock by a whole tick on 
each timer interrupt.  End of story.

Old kernels used the TSC (or equivalent) to interpolate between interrupts.  
(A comment from Mills clued me in.  I was plotting temperature vs. drift.  
The plot got a lot cleaner when I moved the temperature probe from the CPU 
crystal over to the RTC/TOY clock crystal.  I haven't looked for the code.)

Current kernels don't use interrupts for keeping time.  It's all done with 
the TSC.

There is an interesting worm in this area.  Most PCs fuzz the CPU frequency 
(spread-spectrum clocking) to meet EMI regulations.  There used to be a 
separate clock-generator chip: crystal in, all-the-clocks-you-need out.  
It's all in the big chip now, but you can get specs for the old chips.  The 
logic controlling the PLL deliberately down-modulated the CPU frequency by 
1/2% or so at a (handwave) 30 kHz rate.  (Back of the envelope: 1/2% over 
half of a 33 microsecond modulation cycle is a few tens of nanoseconds of 
phase wander, which a TSC-based clock sees as jitter.)

----------

Just because the hardware has a TSC (or equivalent) doesn't mean the 
software uses it.  I wouldn't be all that surprised if the OS for an 
IoT-sized device still had an old-old clock routine.

We should write a hack program to collect data and make pretty histograms or 
whatever.  If we are smart enough, we can probably make it scream and shout 
if it ever finds an old-old/coarse clock.  If we are lucky, we can run that 
early in the install path.
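
A rough cut at that probe: hammer clock_gettime(), histogram the nonzero 
deltas by decade, and scream if the smallest observed step looks like a 
timer tick rather than TSC interpolation.  The 1 ms threshold and the 
bucket layout are my guesses, nothing blessed:

    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    #define SAMPLES   1000000
    #define COARSE_NS 1000000   /* >= 1 ms steps smell like ticks */

    static uint64_t now_ns(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000000u + ts.tv_nsec;
    }

    int main(void)
    {
        long hist[10] = {0};   /* bucket b: steps in [10^b, 10^(b+1)) ns */
        uint64_t prev = now_ns(), min_step = 0;

        for (long i = 0; i < SAMPLES; i++) {
            uint64_t t = now_ns(), d = t - prev;
            prev = t;
            if (d == 0)
                continue;      /* clock hasn't moved yet */
            if (min_step == 0 || d < min_step)
                min_step = d;
            int b = 0;
            for (uint64_t x = d; x >= 10 && b < 9; x /= 10)
                b++;
            hist[b]++;
        }
        for (int b = 0; b < 10; b++)
            printf("10^%d ns: %ld\n", b, hist[b]);
        if (min_step >= COARSE_NS)
            printf("WARNING: smallest step is %llu ns; "
                   "looks like an old-old/coarse clock\n",
                   (unsigned long long)min_step);
        return 0;
    }

A healthy interpolated clock should show steps down in the tens of 
nanoseconds; an old-old clock shows nothing smaller than its tick, which is 
exactly when this should scream during the install.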


-- 
These are my opinions.  I hate spam.




