PROPOSED, change of stance, release metronome

Amar Takhar verm at darkbeer.org
Tue Mar 15 13:57:06 UTC 2016


On 2016-03-15 01:46 -0700, Hal Murray wrote:
> I'm not sure what you have in mind for "test results".  How is a non-wizard 
> going to be able to evaluate anything other than a yes/no, and we will 
> already have filtered out the no-s.

That is really all they need to know: the purpose of the test and whether it 
passed.  It can be as simple as the test page for a build showing all green.  
For now, the existence of a snapshot for a revision on the FTP site will denote 
that it passed all the tests.
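The snapshot-as-pass-marker convention above could be checked mechanically; this is only a sketch, and the host name, path layout, and tarball naming are all assumptions, not the project's real FTP layout:

```shell
# Hypothetical check: per the convention above, a revision passed the
# full test suite iff a snapshot tarball for it exists on the FTP site.
# Host, path, and file name are illustrative assumptions.
REV="deadbeef"
URL="https://ftp.example.org/ntpsec/snapshots/ntpsec-${REV}.tar.gz"
if curl -sfI "$URL" >/dev/null; then
    echo "revision ${REV} passed all tests"
else
    echo "no snapshot for ${REV}; not known-good"
fi
```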


> What is the purpose of a release?
> 
> I'm assuming it's a tag on a pile of bits so we have a way to talk about them 
> and a way to focus testing and usage on a smaller number of things to keep 
> track of and support.

That, and comfort.  Users rely on us to say 'we believe this is a good set of 
changes'.  A revision passing all the tests doesn't make it safe: it could land 
in the middle of work in progress that introduced a bug.  Even when you try not 
to break up work that way, it can happen.  A release is a safe checkpoint 
between pieces of work.


> The idea of "recommended" sounds good.  It looks like you are automating the 
> 2 week-ish release cycle.  We still have to work out which releases to 
> support long term and how long.

Automating the testing, yes.  Typically with these types of systems you choose 
the release that is the most 'stable' -- e.g., the one where the last X commits 
have all passed the tests.  We can decide what that number will be, but at a 
minimum right now it looks like a full test cycle will take a week, which means 
all commits within that week will land in the next (live) test run.  I can 
shrink this down if I purchase more RPIs; that is the only limiting factor -- 
real hardware to run ntpd on.
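The 'most stable' selection rule described above could look something like this sketch; the function name and data shape are illustrative assumptions, not part of any real NTPsec tooling:

```python
# Hypothetical sketch: pick the newest revision whose trailing window
# of X commits (itself included) all passed the full test cycle.

def most_stable(commits, window=5):
    """commits: list of (revision, passed) pairs, oldest to newest.
    Returns the newest revision where it and the window-1 commits
    before it all passed, or None if no such revision exists."""
    for i in range(len(commits) - 1, window - 2, -1):
        if all(passed for _, passed in commits[i - window + 1 : i + 1]):
            return commits[i][0]
    return None

history = [("r1", True), ("r2", True), ("r3", False),
           ("r4", True), ("r5", True), ("r6", True),
           ("r7", True), ("r8", True)]
print(most_stable(history, window=5))  # r8: r4..r8 all passed
```

Scanning newest-first means the most recent known-good checkpoint wins, which matches the goal of releasing the freshest revision that still has a clean run of test history behind it.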


> In terms of testing...
> 
> I look at a lot of graphs by hand.  I think I could automate some of that, 
> but that would still leave a lot of work.

Can you send me an email detailing what you do -- a line-by-line list of the 
commands you run and what you do with the output?  It shouldn't be difficult to 
automate.


> It may not be possible (or worth the effort) to totally automate the testing.

We can see; if it's easy, there is no harm in adding it.


> This will probably change when Eric's TESTFRAME starts working.  (I'm 
> assuming we can run some real servers setup to collect data so we can gather 
> a bunch of test cases when "interesting" things happen.)
> 
> We also have to test refclocks.

Yes, I am in contact with a local company that has some RF-shielded boxes and 
GPS signal generators.  This would let me put a refclock in one of these boxes 
and generate a custom GPS signal for it to pick up and test against.  I can 
pick up some of the cheaper refclocks to test, and it would also let us 
exercise all the common code.


> We can get real traffic by putting up pool servers.

Yes, I want to do this.  I have a good and, I think, secure design worked out 
for handling the pool.  I also have a system to detect fake instances that 
harvest IPs for scanning.  The groundwork for this is done already.


Amar.
