A while back, when we were discussing the performance of the Shortt
free pendulum clock, a reference was made to tvb's paper on Allan
deviation, http://www.leapsecond.com/hsn2006/ch2.pdf, which I found to
be an excellent primer on the subject. It was interesting to see that,
with only a subset of the data, the Allan deviations for averaging
times up to about the total data collection period could be calculated
with reasonable accuracy. This had me thinking: if just a proportion of
the data, covering up to a specific averaging time, gave good results,
would disconnected data amounting to the same period give the same
results? To me it seems that the accuracy of the results is not tied
to capturing every event consecutively; it is more a case of
collecting the same size of data set even though the samples were not
consecutive. My reasoning is that any set of data for a DUT should
give the same results even though the data sets are not related
time-wise. OK, there are effects caused by different environmental
conditions and drift, but these can be calculated out. The only thing
that would shoot a big hole in this is if there were a repeatable
difference between alternate cycles.
So why am I saying this? Well, from what I have read on this group and
on the web, I have been left with the feeling that it is vital to
capture every event over a sampling period to ensure an accurate
measurement. This requires equipment capable of time-stamping each
event, or employing techniques such as picket-fence, because of the
limitation that most counters cannot reset in time to measure the next
time period of an input. At this stage I cannot see why it is not
possible to just measure a cycle, let the counter/timer reset, and
then let it measure the next full cycle that follows. Agreed, this
would mean that alternate cycles were lost (assuming the counter/timer
can reset within the space of one cycle), but the measurement could
still collect the same number of data points; it would just take twice
as long. In fact, it might be possible to make the counter/timer
measure alternate cycles on the opposite transitions, thereby reducing
the total measurement time to just one and a half times the 'normal'
time. As for any problem related to alternate cycles, the measurement
system could be made to collect two data sets with a single cycle
skipped between each set.
The difference would be that the data set consists of measurements of
individual, non-sequential cycles, as opposed to a history of the
start times of every cycle.
So the short story is: does the data stream really have to consist of
sequential samples, or is it just a statistical thing, so that for the
same size of data set the results should be similar?
Steve Rooke - ZL3TUV & G8KVD & JAKDTTNW
Omnium finis imminet
Steve
It is essential to measure the phase differences between every Nth zero
crossing without missing any such cycles.
You don't have to time-stamp every zero crossing; every Nth one will
suffice, but one then has no information for time intervals shorter
than N periods.
More accurate estimation of the Allan deviation is possible if the time
interval between time stamps is shorter.
The reason that you can't omit one of the time stamps in the sequence
(if you wish to accurately characterise the frequency stability of the
source under test) is that the process isn't stationary.
Estimates of classical measures such as the mean and standard deviation
from the samples diverge as the number of samples increases.
Whilst attempts have been made to estimate the error due to deadtime,
the corrections require that the phase noise characteristics of the 2
(or more) sources being compared are accurately known.
Avoiding deadtime problems is fairly easy if you use an instrument that
can timestamp events on the fly.
It is almost trivial to build such an instrument within a single FPGA or
CPLD.
Bruce
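(For reference, the estimators under discussion, written out from N
phase samples x_i spaced tau_0 apart; these are the textbook
definitions, not any particular instrument's implementation. The
non-overlapping Allan variance at tau = m*tau_0 uses disjoint triples
of every m-th sample:

\sigma_y^2(m\tau_0) = \frac{1}{2 m^2 \tau_0^2 (K-1)} \sum_{j=0}^{K-2} \left( x_{(j+2)m} - 2x_{(j+1)m} + x_{jm} \right)^2, \qquad K = \left\lfloor \frac{N-1}{m} \right\rfloor

while the overlapping form reuses every possible starting index:

\sigma_y^2(m\tau_0) = \frac{1}{2 m^2 \tau_0^2 (N-2m)} \sum_{i=0}^{N-2m-1} \left( x_{i+2m} - 2x_{i+m} + x_i \right)^2

The term counts K-1 and N-2m are exactly the n= values that adev1
prints in the runs further down this thread. Each term needs three
equally spaced phase samples, all present, which is why a gap in the
timestamp sequence is more than a cosmetic problem.)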
In message 1231b6a80904070509m5b8bb638gbc088500444254c3@mail.gmail.com, Steve
Rooke writes:
So why am I saying this? Well, from what I have read on this group and
on the web, I have been left with the feeling that it is vital to
capture every event over a sampling period to ensure an accurate
measurement.
It is vital only in that it simplifies the calculation of the
uncertainties on the result, more than the result itself.
If you skip every other time interval, you have no information about
noise at the obvious 1/p frequency, just as Nyquist says.
Dividing a 10MHz signal to 1PPS, and measuring the adev on that,
therefore gives us no right to talk about what happens on the
fast side of tau=1sec.
Aperiodic sampling can be an incredibly powerful tool to use
instead: comparing the two 10MHz signals by measuring the
difference in duration between randomly chosen sequences of a
thousand samples gives very detailed information, as long as
you know the exact relative placement of your 1k-sample
runs relative to each other.
The mathematical handling is nasty, though.
--
Poul-Henning Kamp | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG | TCP/IP since RFC 956
FreeBSD committer | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.
In message 49DB496E.6030602@xtra.co.nz, Bruce Griffiths writes:
It is essential to measure the phase differences between every Nth zero
crossing without missing any such cycles.
And he does, except it is only every 2N instead of 1N.
--
Poul-Henning Kamp | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG | TCP/IP since RFC 956
FreeBSD committer | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.
Steve,
You've asked a couple of questions. Let me start with this.
It is true that if one were only interested in the performance
of a pendulum (or quartz or atomic) clock for averaging times
of one day that all you would need is a series of time error
(aka phase) measurements made about the same time once
a day (doesn't have to be that exact). After one week, you'd
have 7 error measurements (= 6 frequency points = 5 stability points),
and this is adequate to calculate the ADEV for tau = 1 day.
This alone allows you to rank your clock among all the other
pendulum clocks out there. Note also you get time error and
rate error from these few data points too.
As another example, suppose you have a nice HP 10811A
oscillator and want to measure its drift rate. In this case you
could spend just 100 seconds and measure its frequency
once a day, or even once every couple of days. Do this for
a month and you'd have several dozen points. If you plot
these frequency measurements you will likely see that they
approximately fall on a line; the slope of the line is the
frequency drift rate of the 10811. The general shape of the points,
or the fit of the line, is a rough indication of how consistent the
drift rate is, or whether it's increasing or decreasing.
Neither of these examples require a lot of data. Both of these
are real-world examples.
OK so far?
/tvb
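(To make the second example concrete, a minimal sketch in Python; it
assumes a hypothetical file freq.dat holding one fractional-frequency
reading per line, one reading per day.)

# drift_fit.py - least-squares line through daily frequency readings
# (sketch; freq.dat is a hypothetical file, one fractional-frequency
# value per line, one line per day).
import numpy as np

y = np.loadtxt("freq.dat")      # fractional frequency readings
t = np.arange(len(y))           # day number 0, 1, 2, ...
slope, intercept = np.polyfit(t, y, 1)   # fit y = slope*t + intercept
print("drift rate: %.3e per day" % slope)
residual = y - (slope * t + intercept)
print("rms scatter about the line: %.3e" % residual.std())

The slope is the drift rate per day; the scatter of the residuals is
the rough consistency check Tom describes.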
Tom,
I understand fully the points that you have made, but I have obviously
not made my point clear to all, and I apologise for my poor
communication skills.
This is what I'm getting at:
Using your adev1.exe from http://www.leapsecond.com/tools/adev1.htm
and processing various forms of gps.dat from
http://www.leapsecond.com/pages/gpsdo-sim/gps.dat.gz.
C:\Documents and Settings\Steve Rooke\Desktop>adev1.exe 1 <gps.dat
** Sampling period: 1 s
** Phase data scale factor: 1.000e+000
** Total phase samples: 400000
** Normal and Overlapping Allan deviation:
1 tau, 3.0127e-009 adev(n=399998), 3.0127e-009 oadev(n=399998)
2 tau, 1.5110e-009 adev(n=199998), 1.5119e-009 oadev(n=399996)
5 tau, 6.2107e-010 adev(n=79998), 6.1983e-010 oadev(n=399990)
10 tau, 3.1578e-010 adev(n=39998), 3.1549e-010 oadev(n=399980)
20 tau, 1.6531e-010 adev(n=19998), 1.6534e-010 oadev(n=399960)
50 tau, 7.2513e-011 adev(n=7998), 7.3531e-011 oadev(n=399900)
100 tau, 4.0029e-011 adev(n=3998), 4.0618e-011 oadev(n=399800)
200 tau, 2.1512e-011 adev(n=1998), 2.1633e-011 oadev(n=399600)
500 tau, 9.2193e-012 adev(n=798), 9.1630e-012 oadev(n=399000)
1000 tau, 4.9719e-012 adev(n=398), 4.7750e-012 oadev(n=398000)
2000 tau, 2.6742e-012 adev(n=198), 2.5214e-012 oadev(n=396000)
5000 tau, 1.0010e-012 adev(n=78), 1.1032e-012 oadev(n=390000)
10000 tau, 6.1333e-013 adev(n=38), 6.1039e-013 oadev(n=380000)
20000 tau, 3.8162e-013 adev(n=18), 3.2913e-013 oadev(n=360000)
50000 tau, 1.0228e-013 adev(n=6), 1.5074e-013 oadev(n=300000)
100000 tau, 5.8577e-014 adev(n=2), 6.7597e-014 oadev(n=200000)
So far, so good. Now I delete every second line of the file, which
leaves me with 200000 lines of data (the original gps.dat file has
400000 lines):
(awk 'and(NR, 1) == 0 {print}' <gps.dat >gps1.dat, which keeps the
even-numbered lines)
C:\Documents and Settings\Steve Rooke\Desktop>adev1.exe 1 <gps1.dat
** Sampling period: 1 s
** Phase data scale factor: 1.000e+000
** Total phase samples: 200000
** Normal and Overlapping Allan deviation:
1 tau, 3.0257e-009 adev(n=199998), 3.0257e-009 oadev(n=199998)
2 tau, 1.5373e-009 adev(n=99998), 1.5345e-009 oadev(n=199996)
5 tau, 6.3147e-010 adev(n=39998), 6.3057e-010 oadev(n=199990)
10 tau, 3.3140e-010 adev(n=19998), 3.3067e-010 oadev(n=199980)
20 tau, 1.7872e-010 adev(n=9998), 1.7810e-010 oadev(n=199960)
50 tau, 7.9428e-011 adev(n=3998), 8.1216e-011 oadev(n=199900)
100 tau, 4.2352e-011 adev(n=1998), 4.3265e-011 oadev(n=199800)
200 tau, 2.2001e-011 adev(n=998), 2.2593e-011 oadev(n=199600)
500 tau, 9.6853e-012 adev(n=398), 9.5441e-012 oadev(n=199000)
1000 tau, 5.0139e-012 adev(n=198), 5.0387e-012 oadev(n=198000)
2000 tau, 2.7994e-012 adev(n=98), 2.7090e-012 oadev(n=196000)
5000 tau, 1.4280e-012 adev(n=38), 1.2214e-012 oadev(n=190000)
10000 tau, 7.4881e-013 adev(n=18), 6.5814e-013 oadev(n=180000)
20000 tau, 7.6518e-013 adev(n=8), 3.7253e-013 oadev(n=160000)
50000 tau, 2.4698e-014 adev(n=2), 1.3539e-013 oadev(n=100000)
Obviously we don't have enough data now for a measurement at 100000
tau, but the results for the other tau values are quite close,
especially when there are sufficient data points. Now this is
discontinuous data, exactly what I was trying to allude to.
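(An aside on interpretation here: adev1's first argument appears to
set the sampling period, going by the '** Sampling period' banner, and
after deleting every second line the surviving samples are 2 s apart.
So arguably the decimated file should be run as

adev1.exe 2 <gps1.dat

in which case the row labelled '1 tau' would be reported at tau = 2 s
and compared against the '2 tau' row of the full data set.)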
OK, so now I take only the top 200000 lines of the gps.dat file (head
-200000 gps.dat >gps2.dat)
C:\Documents and Settings\Steve Rooke\Desktop>adev1.exe 1 <gps2.dat
** Sampling period: 1 s
** Phase data scale factor: 1.000e+000
** Total phase samples: 200000
** Normal and Overlapping Allan deviation:
1 tau, 3.0411e-009 adev(n=199998), 3.0411e-009 oadev(n=199998)
2 tau, 1.4985e-009 adev(n=99998), 1.4999e-009 oadev(n=199996)
5 tau, 6.1964e-010 adev(n=39998), 6.2010e-010 oadev(n=199990)
10 tau, 3.1315e-010 adev(n=19998), 3.1339e-010 oadev(n=199980)
20 tau, 1.6499e-010 adev(n=9998), 1.6495e-010 oadev(n=199960)
50 tau, 7.1425e-011 adev(n=3998), 7.3416e-011 oadev(n=199900)
100 tau, 3.9940e-011 adev(n=1998), 4.0730e-011 oadev(n=199800)
200 tau, 2.1488e-011 adev(n=998), 2.1558e-011 oadev(n=199600)
500 tau, 8.4809e-012 adev(n=398), 9.0886e-012 oadev(n=199000)
1000 tau, 4.9223e-012 adev(n=198), 4.7104e-012 oadev(n=198000)
2000 tau, 2.4335e-012 adev(n=98), 2.4515e-012 oadev(n=196000)
5000 tau, 1.0308e-012 adev(n=38), 1.0861e-012 oadev(n=190000)
10000 tau, 5.9504e-013 adev(n=18), 6.1031e-013 oadev(n=180000)
20000 tau, 3.6277e-013 adev(n=8), 3.1994e-013 oadev(n=160000)
50000 tau, 1.0630e-013 adev(n=2), 1.6715e-013 oadev(n=100000)
Are there any Linux tools for calculating ADEV, as I'm having to run
Windows in a VMware session?
73,
Steve
--
Steve Rooke - ZL3TUV & G8KVD & JAKDTTNW
Omnium finis imminet
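(On the Linux question above: the estimators are small enough to
script if nothing ready-made is to hand. A minimal Python sketch,
assuming one phase sample in seconds per input line and the sampling
period as the first argument; an illustration of the definitions, not
checked against adev1 itself.)

# adev_sketch.py - normal and overlapping Allan deviation from phase data.
# Usage: python adev_sketch.py TAU0 < phase.dat
import sys

def secdiffs(x, m, step):
    # Second differences x[i+2m] - 2*x[i+m] + x[i] for i = 0, step, 2*step, ...
    return [x[i + 2*m] - 2.0*x[i + m] + x[i] for i in range(0, len(x) - 2*m, step)]

def adev(x, m, tau0):
    # Non-overlapping estimator: disjoint triples of every m-th sample.
    d = secdiffs(x, m, m)
    return (sum(v*v for v in d) / (2.0 * len(d))) ** 0.5 / (m * tau0)

def oadev(x, m, tau0):
    # Overlapping estimator: second differences at every starting index.
    d = secdiffs(x, m, 1)
    return (sum(v*v for v in d) / (2.0 * len(d))) ** 0.5 / (m * tau0)

def main():
    tau0 = float(sys.argv[1]) if len(sys.argv) > 1 else 1.0
    x = [float(line.split()[0]) for line in sys.stdin if line.strip()]
    for m in (1, 2, 5, 10, 20, 50, 100, 200, 500, 1000):
        if len(x) < 2*m + 1:    # need at least one second difference
            break
        print("%5d tau, %.4e adev, %.4e oadev"
              % (m, adev(x, m, tau0), oadev(x, m, tau0)))

if __name__ == "__main__":
    main()

Run it as 'python adev_sketch.py 1 <gps.dat', or with 2 as the first
argument for the decimated gps1.dat.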
Steve
If you delete every second measurement then your effective minimum
sampling time is now 2 s and you can no longer calculate ADEV for tau < 2 s.
You can still calculate ADEV for tau = 100,000 sec.
If you delete all but the first 200,000 lines then you can calculate
ADEV for tau = 1 sec and up to tau = 25,000 sec with reasonable accuracy.
You shouldn't lose sight of the fact that ADEV and OADEV are both
estimates of the Allan deviation.
Bruce
Bruce,
But how does that explain the output of Tom's adev1 program, which
still seems to give a good measurement at tau = 1 s?
73,
Steve
--
Steve Rooke - ZL3TUV & G8KVD & JAKDTTNW
Omnium finis imminet
Steve
It can't; it must be a matter of interpretation.
Perhaps it means something like:
1 tau means tau = 1x the interval between consecutive measurements.
2 tau means tau = 2x the interval between consecutive measurements.
100000 tau means tau = 100,000x the interval between consecutive
measurements.
Bruce
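(Putting numbers on that interpretation, using the runs above: the
decimated file's '1 tau' value, 3.0257e-9, was computed as if the
samples were 1 s apart, but they are really 2 s apart. The estimator
scales as 1/tau, so relabelling that row at its true tau of 2 s gives

3.0257e-9 / 2 = 1.513e-9

which agrees with the full data set's '2 tau' value of 1.5110e-9 to
about 0.1%. The apparent match at '1 tau' is a property of this
particular data, whose ADEV happens to fall roughly as 1/tau at short
tau, and not evidence that the tau = 1 s information survived the
decimation.)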
Bruce,
I hear what you say, but the results seem to correlate quite well:
1 tau, 3.0127e-009 adev(n=399998), 3.0127e-009 oadev(n=399998)
1 tau, 3.0257e-009 adev(n=199998), 3.0257e-009 oadev(n=199998)
And using the first half of the data:
1 tau, 3.0411e-009 adev(n=199998), 3.0411e-009 oadev(n=199998)
So I'm trying to understand why this won't work.
73,
Steve
--
Steve Rooke - ZL3TUV & G8KVD & JAKDTTNW
Omnium finis imminet