Testing ntpd and/or timing from gpsd
Chris Johns
chrisj at ntpsec.org
Mon May 16 00:30:37 UTC 2016
On 13/05/2016 15:36, Gary E. Miller wrote:
> On Fri, 13 May 2016 10:46:05 +1000
> Chris Johns <chrisj at ntpsec.org> wrote:
>
>> On 13/05/2016 07:37, Gary E. Miller wrote:
>>>
>>> It would be insane for a switch to have EEE enabled without a way to
>>> turn it off, so you likely have never seen it turned on.
>>
>> The switch should only enable EEE if the PHY says it is ok. If you
>> disable it in the PHY, the switch should honour that. A PHY that knows
>> nothing about EEE cannot be expected to interoperate with a port
>> operating in that mode.
>
> Agreed. But my point still stands.
>
All I was suggesting is that people should test first and see how they
go. If the control is per port, then ports that are not in use or not
causing problems can be powered down.
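For reference, on Linux ethtool can show and change EEE per interface,
so you can try exactly this kind of per-port experiment. This is only a
sketch: eth0 is a placeholder name and not every driver supports these
options.

```shell
# Show the current EEE state for one port (requires driver support)
ethtool --show-eee eth0

# Disable EEE on that port only; other ports are untouched
sudo ethtool --set-eee eth0 eee off
```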
>>
>>> I can enable autonegotiation for EEE on some of my switches;
>>> nothing but trouble.
>>
>> I would make sure this is not a compliance issue with your switches
>> and/or the PHYs connecting to them. I seem to remember some issue with
>> early devices that ended up not matching the standard, but I cannot
>> find the references. Power-up latency would be an issue: a send
>> results in a timer restarting in the PHY and, depending on the MAC,
>> the data may be buffered in the PHY until the link has come up.
>
> All I can say is I lost a ton of packets, all the time, when I enabled
> automatic EEE.
>
There are MACs that know about EEE mode and MACs that do not. I would
not expect packet loss from a MAC that knows about EEE. For MACs that
do not, some PHYs can buffer a small amount of data while the link is
re-established. I am not sure what happens when that buffer is full,
but I would expect the PHY to apply some form of back pressure to the
MAC, unless the MAC cannot support flow-control frames, in which case
you have other issues.
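Whether the MAC/driver actually has 802.3x pause frames negotiated can
also be checked with ethtool on Linux; again eth0 is a placeholder, and
the link partner has to support pause frames too.

```shell
# Show negotiated pause-frame (802.3x flow control) settings
ethtool --show-pause eth0

# Enable rx/tx pause frames if the link partner advertises them
sudo ethtool --pause eth0 rx on tx on
```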
>> EEE and PTP together does not make sense to me.
>
> Yeah, two flakey technologies added together, what could go wrong?
>
:)
>>> If your backbone is GigE then it is likely way better than you
>>> think. Turn on PTP and your hosts will time link closer than you
>>> have ever seen, if it works at all. Hardware timestamping of the
>>> ethernet packets works well, when it works at all.
>>>
>>> PTP shows that the main problem with high-accuracy NTP is the
>>> network stack and the OS, not the network itself.
>>
>> Yep, this is important once you get down to this level. Xilinx
>> recommend using an RTOS when doing PTP on their Zynq processor and
>> Marvell talk about measuring the delay in clocks in their PHYs.
>
> I can't see how the OS matters at all.
>
>> Interrupt latency seems to be the key factor.
>
> Not a factor at all. In hardware PTP all the timestamping is done
> in the ethernet chip itself. The software jitter matters not at all
> since the time was already captured in the hardware.
How the hardware timestamp is handled in the PTP stack is outside what
I know, but my naive view is that the jitter you would see is the
distance in time from the point the hardware captures the counter to
the point in time the software handles it. The size of that jitter
would be affected by any hardware delays and by the overhead and
performance of the operating system. An RTOS with deterministic
interrupt latency and thread-dispatch times helps bound that value. I
suspect all Xilinx is saying in the Zynq TRM is "with Linux your
results may vary, and if they do, do not call us; rather, bound the
latency somehow, e.g. with an RTOS".
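As a rough illustration of that kernel-to-application distance, here is
a small Linux-only sketch using software receive timestamps
(SO_TIMESTAMPNS; the constant 35 and the 16-byte struct timespec layout
are Linux/64-bit assumptions). It measures the gap between the kernel
stamping a packet and userspace reading it; hardware PTP stamps would
use SO_TIMESTAMPING instead, but the jitter being measured is the same
kind of thing.

```python
import socket
import struct
import time

SO_TIMESTAMPNS = 35  # Linux-specific socket option number (assumption)

def kernel_to_user_delay():
    """Send a UDP packet over loopback and measure the gap between the
    kernel receive timestamp and the moment userspace reads the packet."""
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.setsockopt(socket.SOL_SOCKET, SO_TIMESTAMPNS, 1)
    rx.bind(("127.0.0.1", 0))

    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tx.sendto(b"ping", rx.getsockname())

    # Receive payload plus ancillary data holding the kernel timestamp
    data, ancdata, _flags, _addr = rx.recvmsg(64, socket.CMSG_SPACE(16))
    now = time.clock_gettime(time.CLOCK_REALTIME)

    rx.close()
    tx.close()

    for level, ctype, cdata in ancdata:
        if level == socket.SOL_SOCKET and ctype == SO_TIMESTAMPNS:
            sec, nsec = struct.unpack("ll", cdata[:16])  # struct timespec
            return data, now - (sec + nsec * 1e-9)
    return data, None

if __name__ == "__main__":
    payload, delay = kernel_to_user_delay()
    print(f"payload={payload!r} kernel->user delay: {delay * 1e6:.1f} us")
```

On an idle Linux box the delay is typically tens of microseconds; the
spread across repeated runs is the software jitter being discussed.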
Chris