time-nuts@lists.febo.com

Discussion of precise time and frequency measurement


Characterising frequency standards

Bruce Griffiths
Sun, Apr 12, 2009 1:25 PM

Hej Magnus

Magnus Danielson wrote:

Bruce Griffiths skrev:

Rex wrote:

Bruce Griffiths wrote:

...

Brice

An impostor? An alias? :-)

And I thought I was alluding to aliasing of the phase noise spectrum not
the characters of the alphabet.

So it is not a case of shot noise in Bruce's fingers? :)
I know mine have some, and besides that there are several bugs in the
language unit...

Cheers,
Magnus


time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.

More a case of digital jitter.
Perhaps the control system phase noise was too high.

Bruce

Steve Rooke
Mon, Apr 13, 2009 12:34 AM

Bruce,

2009/4/12 Bruce Griffiths bruce.griffiths@xtra.co.nz:

Steve

Steve Rooke wrote:

Suppose I take two sequential phase readings from an input source and
place them into one data set, and another two readings from the same
source, but spaced by one cycle, into a second data set. From the
first data set I can calculate ADEV for tau = 1 sec, and from the
second data set I can calculate ADEV for tau = 2 sec. If I now
pre-process the data in the second set to remove all the effects of
drift (given that I have already determined this), I now have two 1
sec samples which show a statistical difference and can be fed to ADEV
with a tau0 = 1 sec, producing a result for tau = 1 sec. The results
from this second calculation should show the same accuracy as those
using the first data set (given the limited size of the data set).

You need to give far more detail, as it's unclear exactly what you are
doing with which samples.
Label all the phase samples and then show which samples belong to which
data set.
You also need to show clearly what you mean by skipping a cycle.

Say I have a 1 Hz input source and my counter measures the period of
the first cycle and assigns this to A1. At the end of the first cycle
the counter can be reset and re-triggered to capture the second
cycle and assign this to A2. So far 2 sec have passed and I have two
readings in data set A.

I now repeat the experiment and assign the measurement of the first
period to B1. The counter I am using this time is unable to stop at
the end of the first measurement and retrigger immediately, so I'm
unable to measure the second cycle, but the counter is left in the
armed position. When the third cycle starts, the counter triggers and
completes the measurement of the third cycle, which is now assigned to B2.

For the purposes of my original text, the first data set refers to A1
& A2. Similarly the second data set refers to B1 & B2. Reference to
pre-processing of the second data set refers to mathematically
removing the effects of drift from B1 & B2 to produce a third data set
which is used as the data input for an ADEV calculation where tau0 = 1
sec with output of tau = 1 sec.

I now collect a large data set but with a single cycle skipped between
each sample. I feed this into ADEV using tau0 = 2 sec to produce tau
results >= 2 sec. I then pre-process the data to remove any drift and
feed this to ADEV with a tau0 = 1 sec to produce just the tau = 1 sec
result. I now have a complete set of results for tau >= 1 sec. Agreed,
there is the issue of modulation at 1/2 input f but ignoring this for
the moment, this should give a valid result.

Again you need to give more detail.

In this case the data set is constructed from the measurement of the
cycle periods of a 1 Hz input source where even cycles are skipped,
hence each data point is a measurement of the period of each odd (1,
3, 5, 7...) cycle of the incoming waveform. In this case the time
between each measurement is 2 sec, so ADEV is calculated with tau0 = 2
sec for tau >= 2 sec. This data set is then mathematically processed
to remove the effects of drift, bearing in mind the 2 sec spacing of
each data point, and ADEV is then calculated with tau0 = 1 sec for tau
= 1 sec.
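[Editor's sketch: the tau0/decimation scheme described above can be tried numerically. This is a minimal, hedged illustration, assuming a simple non-overlapped ADEV estimator and a synthetic white-FM phase record; the noise level and sample count are hypothetical, not Steve's data.]

```python
import numpy as np

def adev(phase, tau0, m=1):
    """Non-overlapped Allan deviation from phase samples (seconds),
    taken every tau0 seconds, evaluated at tau = m * tau0."""
    x = np.asarray(phase, dtype=float)[::m]   # decimate to spacing m*tau0
    d2 = np.diff(x, n=2)                      # second differences of phase
    return np.sqrt(0.5 * np.mean(d2 ** 2)) / (m * tau0)

# hypothetical 1 Hz phase record dominated by white FM noise
rng = np.random.default_rng(0)
x = np.cumsum(1e-9 * rng.standard_normal(10000))   # phase in seconds

print(adev(x, tau0=1.0, m=1))   # tau = 1 s
print(adev(x, tau0=1.0, m=2))   # tau = 2 s
```

For white FM noise the second number should come out smaller by roughly 1/sqrt(2), following the tau^-1/2 slope.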

Now indulge me while I have a flight of fantasy.

As the effects of jitter and phase noise will produce a statistical
distribution of measurements, any results from these ADEV calculations
will be limited on accuracy by the size of the data set. Only if we
sample for a very long time will we see the very limits of the effects
of noise.

What noise from what source?

PN - white phase noise (WPM), flicker phase noise (FPM), white
frequency noise (WFM), flicker frequency noise (FFM) and random walk
frequency noise (RWFM).

Noise in such measurements can originate in the measuring instrument or
the source.

Indeed, and this is an important aspect to consider, as we have been
discussing the effects of induced jitter/PN on a frequency standard
when it is buffered and divided down. Ideally measurements of ADEV
would be made on the raw frequency standard source (e.g. 10 MHz) rather
than, say, a divided 1 Hz signal.

For short measurement times quantisation noise and instrumental noise
may mask the noise from the source but they are still present.

Well, these form the noise floor of our measurement system.

The samples which deviate the most from the median will
occur very infrequently, and it is statistically likely that they will
not occur adjacent to another highly deviated sample. We could
pre-process the data to remove all drift and then sort it into
increasing order. This would put the greatest deviations at
each end of the array. For 1 sec stability the deviation would be the
greatest difference from the median of the first and last samples in
the array. For 2 sec stability, the same calculation could be made
taking the first two and last two readings in the array and
calculating their difference from 2 x the median. This calculation
could be continued until all the data is used for the final
calculation. In fact the whole sorted data set could be fed to ADEV to
produce a result that would show a better worst-case measurement of the
input source, which still has some statistical probability. In theory,
if we took an infinite number of samples, there would be a whole
string of absolute-maximum-deviation measurements in a row, which
would show the absolute worst case.

Is any of this valid, or just bad physics? I don't know, but I'm sure
it will elicit interesting comment.

No, not poor physics but poor statistics.
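[Editor's sketch: Bruce's objection can be illustrated numerically. ADEV depends on the time ordering of the samples, and sorting destroys that ordering. A hedged illustration with synthetic white-FM frequency residuals; values and noise level are hypothetical.]

```python
import numpy as np

# ADEV at tau0 from fractional-frequency samples: sqrt(0.5 * <(y[i+1]-y[i])^2>)
def adev_from_freq(y):
    return np.sqrt(0.5 * np.mean(np.diff(y) ** 2))

rng = np.random.default_rng(2)
y = 1e-9 * rng.standard_normal(10000)   # hypothetical white-FM frequency residuals

print(adev_from_freq(y))                # meaningful estimate
print(adev_from_freq(np.sort(y)))       # grossly biased: adjacent differences
                                        # in sorted data are artificially tiny
```

Sorting makes neighbouring samples nearly equal, so the first-difference statistic collapses; the sorted "ADEV" says nothing about the oscillator.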

Well, poor statistics possibly, but that branch of mathematics is not
only about interpreting data; it is also about predicting events. What
I proposed is predicting events that would otherwise occur very
infrequently, and hence be difficult to collect, but would have a bearing
on the measurement of the total stability of an oscillator. I'm just
thinking out loud.

73,
Steve

73,
Steve

2009/4/10 Tom Van Baak tvb@leapsecond.com:

I think the penny has dropped now, thanks. It's interesting that the
ADEV calculation still works even without continuous data, as all the
reading I have done had led me to believe this was sacrosanct.

We need to be careful about what you mean by "continuous".
Let me probe a bit further to make sure you or others understand.

The data that you first mentioned, some GPS and OCXO data at:
   http://www.leapsecond.com/pages/gpsdo-sim
was recorded once per second, for 400,000 samples without any
interruption; that's over 4 days of continuous data.

As you see it is very possible to extract every other, or every 10th,
every 60th, or every Nth point from this large data set to create a
smaller data set.

It is as if you had several counters all connected to the same DUT.
Perhaps one makes a new phase measurement each second,
another makes a measurement every 10 seconds; maybe a third
counter just measures once a minute.

The key here is not how often they make measurements, but that
they all keep running at their particular rate.

The data sets you get from these counters all represent 4 days
of measurement; what changes is the measurement interval, the
tau0, or whatever your ADEV tool calls it.

Now the ADEV plots you get from these counters will all match
perfectly, with the only exception being that the every-60-second
counter cannot give you any ADEV points for tau less than 60 seconds;
the every-10-second counter cannot give you points for tau less
than 10 seconds; and, for that matter, the every-1-second counter
cannot give you points for tau less than 1 second.

So what makes all these "continuous" is that the runs were not
interrupted and that the data points were taken at regular intervals.
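[Editor's sketch: Tom's point — that thinning a continuous, regularly spaced record only changes tau0, not the statistic — can be checked directly. A hedged illustration assuming a simple non-overlapped ADEV estimator on a synthetic phase record; the data is hypothetical, not the gpsdo-sim file.]

```python
import numpy as np

def adev(phase, tau0, m=1):
    # non-overlapped Allan deviation from phase samples spaced tau0, at tau = m*tau0
    x = np.asarray(phase, dtype=float)[::m]
    d2 = np.diff(x, n=2)
    return np.sqrt(0.5 * np.mean(d2 ** 2)) / (m * tau0)

rng = np.random.default_rng(1)
x = np.cumsum(1e-9 * rng.standard_normal(86400))   # hypothetical 1-day, 1 Hz phase log

full = adev(x, tau0=1.0, m=10)         # 1 s counter, evaluated at tau = 10 s
thin = adev(x[::10], tau0=10.0, m=1)   # every-10-seconds "counter", tau = 10 s
print(full == thin)                    # True: same points, same statistic
```

The every-10th-point record simply cannot produce points below tau = 10 s; at tau >= 10 s it uses exactly the same phase samples.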

The x-axis of an ADEV plot spans a logarithmic range of tau. The
farthest point on the right is limited by how long your run was. If
you collect data for 4 or 5 days you can compute and plot points
out to around 1 day or 10^5 seconds.

On the other hand, the farthest point on the left is limited by how
fast you collect data. If you collect one point every 10 seconds,
then tau=10 is your left-most point. Yes, it's common to collect data
every second; in this case you can plot down to tau=1s. Some of
my instruments can collect phase data at 1000 points per second
(huge files!) and this means my leftmost ADEV point is 1 millisecond.

Here's an example of collecting data at 10 Hz:
http://www.leapsecond.com/pages/gpsdo/
You can see this allows me to plot from ADEV tau = 0.1 s.

Does all this make sense now?

What I now believe is that it's possible to measure oscillator
performance with less than optimal test gear. This will enable me to
see the effects of any experiments I make in the future. If you can't
measure it, how can you know whether what you're doing is good or bad?

Very true. So what one or several performance measurements
are you after?

/tvb



Bruce



--
Steve Rooke - ZL3TUV & G8KVD & JAKDTTNW
Omnium finis imminet

Steve Rooke
Mon, Apr 13, 2009 12:50 AM

2009/4/13 Magnus Danielson magnus@rubidium.dyndns.org:

Dead time is when the counter loses track of time in between two
consecutive measurements. A zero dead-time counter uses the stop of one
measurement as the start of the next.

This becomes very important when the data to be measured has a degree
of randomness and it is therefore important to capture all the data
without any dead time. In the case of measurements of phase error in
an oscillator, it should be possible to miss some data points provided
that the frequency of capture is still known (assuming that accuracy
of drift measurements is required).

Depending on the dominant noise type, the ADEV measure will be biased.

If the noise has a component related to the measurement frequency,
agreed, but I have already commented on that before.

Indeed, there would be a loss of statistical data, but this could be
made up by sampling over a period of twice the time. This system is
blind to noise at 1/2 f, but ways and means could be taken to account
for that, i.e. taking two data sets with a single-cycle space between
them, or taking another small data set with 2 cycles skipped between
each measurement.

Actually, you can take any number of 2-cycle measures without
detecting the 1/2 f oscillation. In order to detect it you will need
to take 2 measures with an odd number of cycles of trigger difference
between them to have a chance.

Agreed.

The trouble is that the modulation is at the Nyquist frequency of the 1
cycle data, so it will fold down to DC on sampling it at half-rate.
Canceling it from other DC offset errors could be challenging.

Comparing the frequency calculated from the data would show a 2 Hz
offset with the fundamental frequency of the source.

Sampling it at 1/3 rate would discover it, though.

Agreed.
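[Editor's sketch: the aliasing being discussed here can be shown in a few lines. A hedged illustration with a hypothetical 1 Hz source whose period alternates by a small amount e each cycle, i.e. modulation at half the input frequency; the numbers are made up.]

```python
import numpy as np

# Hypothetical 1 Hz source whose period alternates by +/-e each cycle
# (modulation at 1/2 the input frequency).
e = 1e-6
periods = 1.0 + e * (-1.0) ** np.arange(1000)

every_2nd = periods[::2]    # skip one cycle between measurements
every_3rd = periods[::3]    # skip two cycles between measurements

print(np.ptp(every_2nd))    # 0.0: the modulation folds to DC and is invisible
print(np.ptp(every_3rd))    # ~2e-6: the modulation is visible again
```

Sampling every other cycle always lands on the same phase of the modulation, so it appears only as a constant (DC) period offset; a 1/3-rate sample walks through both phases and exposes it.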

I'm looking at what can be achieved by a budget-strapped amateur who
would have trouble purchasing a late-model counter capable of measuring
with zero dead time.

Believe me, that's where I am too. Patience, saving money for things
I really want, and allowing accumulation over time has allowed me some
pretty fancy tools in my private lab. In fact I have to lend some of my
gear to commercial labs as I outperform them...

Well, that's a goal for me but I'm looking at what is achievable in
the short term instead of sitting on my hands.

I recalled wrong. You should look for Barnes, "Tables of Bias Functions,
B1 and B2, for Variance Based on Finite Samples of Processes with Power
Law Spectral Densities", NBS Technical Note 375, January 1969, as well
as Barnes and Allan, "Variance Based on Data with Dead Time Between the
Measurements", NIST Technical Note 1318, 1990.

A short intro to the subject is found in NIST Special Publication 1065 by
W.J. Riley, as found on http://www.wriley.com along with other excellent
material. The good thing about that material is that he gives good
references, as one should.

Thanks for the pointer.

I could look at doing that perhaps.

You should have two counters of equivalent performance, preferably the
same model. It's a rather expensive approach IMHO.

It may still be cheaper than the purchase of a counter capable of
continuous collection, especially if you already have a counter that
is capable at 1/2 f.

Have a look at the possibility of picking up an HP 5371A or 5372A. You
can usually snag one for about 600 USD or 1000 USD respectively on eBay.

I'd have to be a really good boy for Santa to bring me something of
that ilk. Perhaps the lotto will come up one day :-)

73,
Steve

Cheers,
Magnus



--
Steve Rooke - ZL3TUV & G8KVD & JAKDTTNW
Omnium finis imminet

Magnus Danielson
Mon, Apr 13, 2009 12:09 PM

Steve,

Steve Rooke wrote:

Bruce,

2009/4/12 Bruce Griffiths bruce.griffiths@xtra.co.nz:

Steve

Steve Rooke wrote:

If I take two sequential phase readings from an input source and place
this into one data set and another two readings from the same source
but spaced by one cycle and put this in a second data set. From the
first data set I can calculate ADEV for tau = 1s and can calculate
ADEV for tau = 2 sec from the second data set. If I now pre-process
the data in the second set to remove all the effects of drift (given
that I have already determined this), I now have two 1 sec samples
which show a statistical difference and can be fed to ADEV with a tau0
= 1 sec producing a result for tau = 1 sec. The results from this
second calculation should show equal accuracy as that using the first
data set (given the limited size of the data set).

You need to give far more detail as its unclear exactly what you are
doing with what samples.
Label all the phase samples and then show which samples belong to which
data set.
Also need to show clearly what you mean by skipping a cycle.

Say I have a 1Hz input source and my counter measures the period of
the first cycle and assigns this to A1. At the end of the first cycle
the counter is able to be reset and re-triggered to capture the second
cycle and assign this to A2. So far 2 sec have passed and I have two
readings in data set A.

Strange counter. Traditionally counters reset after the stop event has
occurred, since they cannot know anything else. The gate time gives a hint
of the first point in time it can trigger; the gate just arms the stop
event. There is no real end point. It can, however, reset and retrigger the
start event ASAP when gate times are sufficiently large. It's just a
smart rearrangement of what to do when, to achieve zero dead time for
period/frequency measurements.

You could also use a counter which is pseudo zero dead time in that it
can time-stamp three values, giving two differences without dead time,
but has dead time after that. Essentially two counters where the stop
event of the first is the start event of the next.

I now repeat the experiment and assign the measurement of the first
period to B1. The counter I am using this time is unable to stop at
the end of the first measurement and retrigger immediately so I'm
unable to measure the second cycle but is left in the armed position.
When the third cycle starts, the counter triggers and completes the
measurement of the third cycle which is now assigned to B2.

This is what most normal counters do.

For the purposes of my original text, the first data set refers to A1
& A2. Similarly the second data set refers to B1 & B2. Reference to
pre-processing of the second data set refers to mathematically
removing the effects of drift from B1 & B2 to produce a third data set
which is used as the data input for an ADEV calculation where tau0 = 1
sec with output of tau = 1 sec.

You would need to use bias adjustments, but the B1 & B2 period/frequency
samples are badly tainted data and should not be used. Having a dead time
the size of tau0 is serious business. Removing the phase drift over
the dead time does not aid you, since if you remove the phase ramp of the
evolving clock, that of ft or vt (depending on which normalisation you
prefer), you have the background phase noise left. What we want to do is to
characterize this phase noise. Taking two samples of it back-to-back and
taking two samples with an (equivalent-length) gap become two
different filters. Maybe some ASCII art may aid:
Adjacent samples (zero dead time):

  [ y1 ][ y2 ][ y3 ]
    A1    A2

  A2-A1 = y2-y1

vs. gapped samples (one dead interval between readings):

  [ y1 ][ y2 ][ y3 ]
    B1          B2

  B2-B1 = y3-y1

Consider now the case where the frequency samples have twice the tau of
the above examples:

  [     y1     ][     y2     ]

  y2-y1

These examples were all based on sequences of frequency measurements,
just as you indicate in your case.

As you see from the differences, the nominal frequency cancels and the
nominal phase error has also cancelled out, so there is nothing to
compensate there. Drift rate would, however, not be cancelled, but for
most of our sources the noise is higher than the drift rate at shorter
taus.
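The two difference filters can be compared on synthetic data. Everything below (noise level, drift rate, sample count) is hypothetical, not a measurement. For pure white FM the adjacent difference (A2-A1) and the gapped difference (B2-B1) happen to have the same RMS; the divergence between the two filters shows up with drift and with correlated (flicker/random-walk) noise:

```python
import random

random.seed(1)

# Synthetic fractional-frequency samples: white FM noise plus a small
# linear drift (hypothetical values, for illustration only).
N = 100_000
drift_per_sample = 1e-13
y = [random.gauss(0.0, 1e-11) + drift_per_sample * k for k in range(N)]

# Back-to-back first differences (zero dead time): A2-A1 = y2-y1
adj = [y[k + 1] - y[k] for k in range(N - 1)]

# Gapped differences (one dead sample between readings): B2-B1 = y3-y1
gap = [y[k + 2] - y[k] for k in range(N - 2)]

def rms(v):
    return (sum(d * d for d in v) / len(v)) ** 0.5

# For white FM both filters see two independent samples, so the RMS
# values are similar; the drift term contributes twice as much to the
# gapped difference, but here it is far below the noise.
print(rms(adj), rms(gap))
```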

Time differences allow us to skip every other cycle, though.

I now collect a large data set but with a single cycle skipped between
each sample. I feed this into ADEV using tau0 = 2 sec to produce tau
results >= 2 sec. I then pre-process the data to remove any drift and
feed this to ADEV with a tau0 = 1 sec to produce just the tau = 1 sec
result. I now have a complete set of results for tau >= 1 sec. Agreed,
there is the issue of modulation at 1/2 input f but ignoring this for
the moment, this should give a valid result.

Again you need to give more detail.

In this case the data set is constructed from the measurement of the
cycle periods of a 1Hz input source where even cycles are skipped,
hence each data point is a measurement of the period of each odd (1,
3, 5, 7...) cycle of the incoming waveform. In this case the time
between each measurement is 2 sec, so ADEV is calculated with tau0 = 2
sec for tau >= 2 sec. This data set is then mathematically processed
to remove the effects of drift, bearing in mind the 2 sec spacing of
each data point, and ADEV is then calculated with tau0 = 1 sec for tau
= 1 sec.
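The pre-processing Steve describes (least-squares removal of a linear frequency ramp, then a first-difference ADEV on the detrended samples) can be sketched as below. All numbers are hypothetical, and the sketch deliberately treats the gapped 2 s readings as if they were adjacent, which is exactly the step whose validity the thread is debating:

```python
import math
import random

random.seed(2)

N = 50_000
# Hypothetical gapped data set: fractional-frequency readings of the odd
# cycles only (2 s apart), white FM noise plus a linear frequency drift.
drift = 2e-13                      # per 2 s step, hypothetical
y_gapped = [random.gauss(0.0, 1e-11) + drift * k for k in range(N)]

def remove_linear_drift(y):
    """Least-squares removal of a linear frequency ramp."""
    n = len(y)
    xm = (n - 1) / 2.0
    ym = sum(y) / n
    num = sum((k - xm) * (v - ym) for k, v in enumerate(y))
    den = sum((k - xm) ** 2 for k in range(n))
    slope = num / den
    return [v - ym - slope * (k - xm) for k, v in enumerate(y)]

def adev_adjacent(y):
    """ADEV from first differences of successive frequency readings."""
    d = [y[k + 1] - y[k] for k in range(len(y) - 1)]
    return math.sqrt(sum(v * v for v in d) / (2 * len(d)))

y_detrended = remove_linear_drift(y_gapped)
print(adev_adjacent(y_detrended))
```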

How did you establish the effect of drift?

What noise from what source?

PN - White noise phase WPM, Flicker noise phase FPM, White noise
frequency WFM, Flicker noise frequency FFM and Random walk frequency
RWFM.

These are just the names for the various 1/f power-law noises. They
enter through a myriad of places: white phase noise and 1/f noise are
common to amplifiers, 1/f^5 is thermal noise acting on the same
amplifiers, 1/f^2 is oscillator-shaped white phase noise and 1/f^3 is
oscillator-shaped 1/f noise. Rubiola spends quite some time on that
subject, both in his excellent book and in various papers.

Noise in such measurements can originate in the measuring instrument or
the source.

Indeed, and this is an important aspect to consider, as we have been
discussing the effects of jitter/PN induced on a frequency standard
when it is buffered and divided down. Ideally measurements of ADEV
would be made on the raw frequency standard source (e.g. 10 MHz) rather
than, say, a divided 1 Hz signal.

Yes and no. There are benefits in dividing it down: you can identify
cycle slips more easily and adjust for them, whereas one 10 MHz cycle
next to another can be a bit anonymous. To get the best performance for
ADEV at 1 s, using a 1 Hz signal is not optimal, though. A slightly
higher rate will allow for quicker gathering of high statistical
freedom, and thus improved statistical stability, through the
overlapping Allan deviation estimator as compared to using the
non-overlapping Allan deviation estimator on the same time stretch of
samples. When running long runs, sufficient freedom may be achieved
even using the non-overlapping estimator.
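The two estimators Magnus contrasts can be sketched as follows. The phase data is synthetic white FM at hypothetical levels; both functions use the standard second-difference form and differ only in whether the window slides by one sample or by m samples:

```python
import math
import random

random.seed(3)

# Synthetic phase data x[k] (seconds) at tau0 = 1 s: white FM, i.e. a
# random walk in phase. All levels are hypothetical.
tau0 = 1.0
N = 20_000
x = [0.0]
for _ in range(N - 1):
    x.append(x[-1] + random.gauss(0.0, 1e-11) * tau0)

def adev_overlapping(x, m, tau0):
    # Window slides by one sample: every second difference is reused.
    tau = m * tau0
    terms = [(x[i + 2 * m] - 2 * x[i + m] + x[i]) ** 2
             for i in range(len(x) - 2 * m)]
    return math.sqrt(sum(terms) / (2 * tau * tau * len(terms)))

def adev_nonoverlapping(x, m, tau0):
    # Window slides by m samples: no interval is reused, fewer degrees
    # of freedom from the same stretch of data.
    tau = m * tau0
    idx = range(0, len(x) - 2 * m, m)
    terms = [(x[i + 2 * m] - 2 * x[i + m] + x[i]) ** 2 for i in idx]
    return math.sqrt(sum(terms) / (2 * tau * tau * len(terms)))

m = 4
print(adev_overlapping(x, m, tau0), adev_nonoverlapping(x, m, tau0))
```

For white FM both converge to the same value (here about 5e-12 at tau = 4 s); the overlapping form simply reaches a given statistical confidence from a shorter record.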

A divide-down does not have to make a significant change to the phase
noise; its effect can be minimized, as we have discussed before.

The 1 PPS signal is also quite a historical artifact which is still
quite handy. It allows direct comparison of non-equivalent frequencies
as the division ratio is adjusted. It is also what comes out of the
majority of GPS receivers. Few GPS receivers evaluate their time offset
at a faster rate than 1 Hz anyway, but 2, 5, 10 and 20 Hz are
available. The L1 C/A signal would allow for a rate of 1 kHz, but it
would require really good signal conditions.

For high resolution work, the PPS is not that good, since beating two
10 MHz signals would give you some 5-7 decades better resolution if you
can handle the problems with slow slopes.

For short measurement times quantisation noise and instrumental noise
may mask the noise from the source but they are still present.

Well, these form the noise floor of our measurement system.

Some of them we can control, through better triggering devices, as
learned the hard way and investigated by many.

Other ways to handle it are cross-correlation techniques, where two
independent systems see the same signal; in that case only the input
source noise correlates, and the system noise effect can be partially
cancelled out.

There are systematic noise problems also, such as lack of zero dead
time, resolution, interpolator distortion etc.

Cheers,
Magnus

SR
Steve Rooke
Sat, Apr 25, 2009 9:40 AM

Hi Magnus,

2009/4/14 Magnus Danielson magnus@rubidium.dyndns.org

Say I have a 1Hz input source and my counter measures the period of
the first cycle and assigns this to A1. At the end of the first cycle
the counter is able to be reset and re-triggered to capture the second
cycle and assign this to A2. So far 2 sec have passed and I have two
readings in data set A.

Strange counter. Traditionally counters rest after the stop event has
occurred, since they cannot know anything else. The gate time gives a hint
on the first point in time it can trigger; the gate just arms the stop
event. There is no real end point. It can, however, reset and retrigger the
start event ASAP when gate times are sufficiently large. It's just a
smart rearrangement of what to do when, to achieve zero dead time for
period/frequency measurements.

I am making period measurements, so the gate time does not come into it. My
counter can be set to continuously take period readings, starting/stopping
on a positive or negative edge. Also, when my counter finishes a reading it
can generate an SRQ, allowing me to transfer the measurement to the PC, and
I can also immediately reset the counter to take another measurement.
Unfortunately it is not possible for the counter to be reset and then
trigger again before the last triggering event has finished, i.e. an
individual trigger event can only be used once per measurement cycle; the
same trigger event cannot stop one period measurement and start a second
one. All this means that there will always be a one-period gap between each
period measurement.

You could also use a counter which is pseudo zero dead time in that it
can time-stamp three values, two differences without deadtime but has
deadtime after that. Essentially two counters where the stop event of
the first is the start event of the next.

Yes, I could do that but it is extra expense and complication which I do not
think is necessary.

I now repeat the experiment and assign the measurement of the first
period to B1. The counter I am using this time is unable to stop at
the end of the first measurement and retrigger immediately, so I'm
unable to measure the second cycle, but it is left in the armed position.
When the third cycle starts, the counter triggers and completes the
measurement of the third cycle, which is now assigned to B2.

This is what most normal counters do.

So we can agree on this.

For the purposes of my original text, the first data set refers to A1

& A2. Similarly the second data set refers to B1 & B2. Reference to
pre-processing of the second data set refers to mathematically
removing the effects of drift from B1 & B2 to produce a third data set
which is used as the data input for an ADEV calculation where tau0 = 1
sec with output of tau = 1 sec.

You would need to use bias adjustments, but the B1 & B2 period/frequency
samples are badly tainted data and should not be used. Having a dead time
the size of tau0 is serious business. Removing the phase drift over

But for the purposes of how I now think it can be calculated, tau0 will be
set equal to 2 x the actual period of the input source, i.e. if f = 1 Hz,
tau0 = 2 sec.

Let's take a look at what we are saying about "badly tainted data" here. The
whole purpose of this exercise is to predict the effects of noise on a
stable frequency. We have already agreed that a phase/frequency modulation
source at EXACTLY 1/2 of the input source will be masked by this method, but
we can get round that. So for the rest of the measurement, we have half the
data per tau compared with no missing data. This will have some bearing
on the accuracy of the result but will only be significant for maximum tau,
in almost exactly the same degree that existing ADEV measurements have
limited accuracy at maximum tau, as there are not enough measurements to
provide the statistical probability over that time, i.e. if we measure for
100,000 seconds, the calculation for tau = 100,000 will have only one set of
values. Remember we are looking at noise here, and if for the "missing data"
method we take readings for twice the full test time of a "conventional"
test, we will have data with the same amount of statistical probability.
This "badly tainted data" is just the same, unless we have such periodic
effects that over the period of the whole test we will always miss them.
There is no magic here.

the dead time does not aid you, since if you remove the phase ramp of the
evolving clock, that of f*t or v*t (depending on which normalisation you
prefer), you have the background phase noise. What we want to do is to
characterize this phase noise. Taking two samples of it back-to-back and
taking two samples with an (equal-sized) gap become two different
filters. Maybe some ASCII art may aid:

For a 1 Hz input I would be able to calculate for tau >= 2 with the
unmodified data using tau0 = 2 sec. If I remove the effects of drift, all my
data points are the same as measuring for a "conventional" ADEV test,
provided that I only calculate for tau = 1 and tau0 = 1. Using the data
with the effects of drift removed for calculating all tau would
certainly give incorrect results, as it would not show the effects of drift.
In a "conventional" measurement of ADEV for tau = 1, successive pairs of
data points are used in the calculation and the whole lot averaged. The
effect of drift (for any reasonable oscillator we are considering) between
any two sequential 1 second period measurements is so small that it does not
affect the ADEV measurement. You would only see an incorrect result if you
took measurement data points with large periods of time between them, i.e.
the first and last data points on a 100,000 second run, for instance.
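A rough scale check of this claim, with hypothetical numbers (a drift rate and white-noise level of the order discussed in the thread, not taken from any measurement): the drift contribution between two adjacent 1 s readings is orders of magnitude below the noise.

```python
# Hypothetical scale check: how much does a linear frequency drift
# shift two adjacent 1 s readings, relative to the noise?
drift_per_day = 1e-10          # fractional frequency per day, assumed
noise_per_sample = 1e-11       # white FM level at 1 s, assumed

drift_per_second = drift_per_day / 86400
ratio = drift_per_second / noise_per_sample
print(drift_per_second, ratio)
```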

Adjacent samples (zero dead time):

  [ y1 ][ y2 ][ y3 ]
    A1    A2

  A2-A1 = y2-y1

vs. gapped samples (one dead interval between readings):

  [ y1 ][ y2 ][ y3 ]
    B1          B2

  B2-B1 = y3-y1

Actually we are considering the period of a waveform, which is the time
between successive instances of the waveform moving through the same point
in the same direction, i.e. it would include the positive and negative half
cycles of the waveform. BTW, ASCII art does not work so well with today's
proportional fonts.

Consider now the case where the frequency samples have twice the tau of
the above examples:

  [     y1     ][     y2     ]

  y2-y1

These examples were all based on sequences of frequency measurements,
just as you indicate in your case.

As you see from the differences, the nominal frequency cancels and the
nominal phase error has also cancelled out, so there is nothing to
compensate there. Drift rate would, however, not be cancelled, but for
most of our sources the noise is higher than the drift rate at shorter
taus.

Well, if there is a phase error it would cause the positive and negative
halves of the waveform to differ and therefore not cancel out. But as you so
rightly say, and as I alluded to before, this phase error would be
expected to be considerably smaller than the noise in our tests. Now for my
"missing data" method, there is twice the amount of time between data
points, so the phase errors would be doubled in size. This may or may not
affect the measurement for tau0 = 1 for tau = 1, so I have proposed to
remove that phase error by pre-processing the data, but ONLY for this one
calculation of ADEV for tau = 1, NOT for the other tau. It may be that the
phase error is still small compared with the noise, and so the data does not
need to be processed to remove the drift. That would be proved by performing
the calculation with and without the drift removed. But this does not mean
that it would then be OK to calculate ADEV for tau >= 1 using tau0 = 1 with
the "missing data" method, as this WOULD give the wrong indication of the
effects of drift.

Time differences allow us to skip every other cycle, though.

In this case the data set is constructed from the measurement of the
cycle periods of a 1Hz input source where even cycles are skipped,
hence each data point is a measurement of the period of each odd (1,
3, 5, 7...) cycle of the incoming waveform. In this case the time
between each measurement is 2 sec, so ADEV is calculated with tau0 = 2
sec for tau >= 2 sec. This data set is then mathematically processed
to remove the effects of drift, bearing in mind the 2 sec spacing of
each data point, and ADEV is then calculated with tau0 = 1 sec for tau
= 1 sec.

How did you establish the effect of drift?

In the case of the data I was using, there actually does not appear to be a
great deal of drift and I have made no adjustments to account for it BUT
theoretically it could make a difference.

PN - White noise phase WPM, Flicker noise phase FPM, White noise

frequency WFM, Flicker noise frequency FFM and Random walk frequency
RWFM.

These are just the names for the various 1/f power noises. They enter
through a myriad of places, white phase noise and 1/f is common to
amplifiers, 1/f^5 is thermal noise onto the same amplifiers. 1/f^2 is
oscillator shaped white phase noise and 1/f^3 is oscillator shaped 1/f
noise. Rubiola spends quite some time on that subject, both in his
excellent book and in various papers.

But these are the effects we are measuring here.

Indeed, and this is an important aspect to consider as we have been

discussing the effects of induced jitter/PN to a frequency standard
when it is buffered and divided down. Ideally measurements of ADEV
would be made on the raw frequency standard source (eg. 10MHz) rather
than, say, a divided 1Hz signal.

Yes and no. There are benefits in dividing it down: you can identify
cycle slips more easily and adjust for them, whereas one 10 MHz cycle
next to another can be a bit anonymous. To get the best performance for
ADEV at 1 s, using a 1 Hz signal is not optimal, though. A slightly
higher rate will allow for quicker gathering of high statistical
freedom, and thus improved statistical stability, through the
overlapping Allan deviation estimator as compared to using the
non-overlapping Allan deviation estimator on the same time stretch of
samples. When running long runs, sufficient freedom may be achieved
even using the non-overlapping estimator.

Agreed; it comes down to the maximum rate at which the measurement system is
able to take readings and record them. 1 second has just been used as an
example in this discussion, but it is really not that optimal.

A divide-down does not have to make a significant change to the phase
noise; its effect can be minimized, as we have discussed before.

Maybe, but this topic has in itself generated a lot of discussion, and the
outcome is that care must be taken with this aspect. This is really the
point I was making: we don't want to be measuring noise induced in the
buffering and division circuits, as this would completely ruin our tests.

The 1 PPS signal is also quite a historical artifact which is still
quite handy. It allows direct comparison of non-equivalent frequencies
as the division ratio is adjusted. It is also what comes out of the
majority of GPS receivers. Few GPS receivers evaluate their time offset
at a faster rate than 1 Hz anyway, but 2, 5, 10 and 20 Hz are
available. The L1 C/A signal would allow for a rate of 1 kHz, but it
would require really good signal conditions.

Indeed, although it has been suggested in the past that 10 MHz OCXOs be
divided down to around 1 Hz so they can be measured by sound cards.

For high resolution work, the PPS is not that good, since beating two
10 MHz signals would give you some 5-7 decades better resolution if you
can handle the problems with slow slopes.

Are you proposing measuring phase differences here? For ADEV this would
surely add the noise of both sources into the mix, which would be
undesirable.

For short measurement times quantisation noise and instrumental noise

may mask the noise from the source but they are still present.

Well, these form the noise floor of our measurement system.

Some of them we can control, through better triggering devices, as
learned the hard way and investigated by many.

I guess it would be possible to measure the system against itself with
something like a short delay from its own internal timing source.

Other ways to handle it are cross-correlation techniques, where two
independent systems see the same signal; in that case only the input
source noise correlates, and the system noise effect can be partially
cancelled out.

If you could measure the same signal with multiple systems, it would be
possible to cancel out the noise effects of the measuring system.
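The cancellation described here can be sketched with synthetic data (all amplitudes hypothetical): two channels share the source signal but carry independent instrument noise, so averaging their cross-product recovers the source variance even though each single channel is swamped:

```python
import random

random.seed(4)

N = 200_000
# Common input signal (the DUT noise we want to see) plus independent
# instrument noise in each channel; amplitudes are hypothetical.
sig = [random.gauss(0.0, 1.0) for _ in range(N)]
ch1 = [s + random.gauss(0.0, 3.0) for s in sig]   # noisy instrument 1
ch2 = [s + random.gauss(0.0, 3.0) for s in sig]   # noisy instrument 2

# A single channel's variance is dominated by instrument noise...
var1 = sum(v * v for v in ch1) / N

# ...but the cross-correlation keeps only what the channels share:
# E[ch1*ch2] = var(sig), since the two instrument noises are independent.
cross = sum(a * b for a, b in zip(ch1, ch2)) / N

print(var1, cross)
```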

There are systematic noise problems also, such as lack of zero dead
time, resolution, interpolator distortion etc.

Indeed. Thanks for your input on this, it's really enabled me to focus more.

Cheers,
Steve

Cheers,
Magnus


time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to
https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.

--
Steve Rooke - ZL3TUV & G8KVD & JAKDTTNW
A man with one clock knows what time it is;
A man with two clocks is never quite sure.

Hi Magnus, 2009/4/14 Magnus Danielson <magnus@rubidium.dyndns.org> > > Say I have a 1Hz input source and my counter measures the period of > > the first cycle and assigns this to A1. At the end of the first cycle > > the counter is able to be rest and re-triggered to capture the second > > cycle and assign this to A2. So far 2 sec have passed and I have two > > readings in data set A. > > > Strange counter. Traditionally counters rests after the stop event have > occured, since they cannot know anything else.The Gate time gives a hint > on the first point in time it can trigger, the gate just arms the stop > event. There is no real end point. It can however rest and retrigger the > start event ASAP when gate times are sufficiently large. It's just a > smart rearrangement of what to do when to achieve zero dead-time for > period/frequency measurements. I am making period measurements so the gate time does not come into it. My counter can be set to continuously take period readings starting/stopping on a positive or negative edge. Also when my counter finishes a reading it can generate a SRQ allowing me to transfer the measurement to the PC and I can also immediately generate a reset of the counter to take another measurement. Unfortunately it is not possible for the counter to be reset and then trigger again before the last triggering event has finished, IE. an individual trigger event can only be used once per measurement cycle, the same trigger event cannot stop one period measurement and start a second one. All this means that there will always be a one period gap between each period measurement. > You could also use a counter which is pseudo zero dead time in that it > can time-stamp three values, two differences without deadtime but has > deadtime after that. Essentially two counters where the stop event of > the first is the start event of the next. Yes, I could do that but it is extra expense and complication which I do not think is necessary. 
> > I now repeat the experiment and assign the measurement of the first > > period to B1. The counter I am using this time is unable to stop at > > the end of the first measurement and retrigger immediately so I'm > > unable to measure the second cycle but is left in the armed position. > > When the third cycle starts, the counter triggers and completes the > > measurement of the third cycle which is now assigned to B2. > This is what most normal counters do. So we can agree on this. > For the purposes of my original text, the first data set refers to A1 > > & A2. Similarly the second data set refers to B1 & B2. Reference to > > pre-processing of the second data set refers to mathematically > > removing the effects of drift from B1 & B2 to produce a third data set > > which is used as the data input for an ADEV calculation where tau0 = 1 > > sec with output of tau = 1 sec. > You would need to use bias adjustments, but the B1 & B2 period/frequency > samples is badly tainted data and should not be used.having a deadtime > at the size of tau0 is serious bussness. Removing the phase drift over But for the purposes of how i now think it can be calculated, tau0 will be set equal to 2 x actual period of input source, IE. if f = 1Hz, tau0 = 2 sec. Lets take a look at what we are saying about "badly tainted data" here. The whole purpose of this exercise is to predict the effects of noise on a stable frequency. We have already agreed that a phase/frequency modulation source ate EXACTLY 1/2 of the input source will be masked by this method but we can get round that. So for the rest of the measurement, we have half the data per tau than if there was no missing data. 
This will have some baring on the accuracy of the result but will only be significant for maximum tau, in almost exactly the same degree that existing ADEV measurements have limited accuracy at maximum tau as there are not enough measurements to provide the statistical probably over that time, IE, if we measure for 100,000 seconds, the calculation for tau = 100,000 will have only one set of values. Remember we are looking at noise here and if for the "missing data" method we take readings for twice the full test time as a "conventional" test, we will have data with the same amount of statistical probability. This "badly tainted data" is just the same unless we have such periodic effects that over the period of the whole test we will always miss them. There is no magic here. the dead time does not aid you since if you remove the phase ramp of the > evolving clock, that of f*t or v*t (depending on which normalisation you > prefer), you have the background phase noise. What we want to do is to > characterize this phase noise. Taking two samples of it back-to-back and > taking two samples with a (equalent sized length) gap becomes two > different filters. Maybe some ascii art may aid: For a 1Hz input I would be able to calculate for tau >= 2 with the unmodified data using tau0 = 2 sec. If I remove the effects of drift, all my data points are the same as measuring for a "conventional" ADEV test provided that I I only calculate for tau = 1 and tau0 = 1. Using the data with the effects of drift removed for calculating for all tau would certainly give incorrect results as it would not show the effects of drift. In a "conventional" measurement of ADEV for tau = 1, successive pairs of data points are used in the calculation and the whole lot averaged. The effects of drift (for any reasonable oscillator we are considering) between any two sequential 1 second period measurements is so small that it does not affect the ADEV measurement. 
You would only see an incorrect result if you took measurement data
points with large periods of time between them, e.g. the first and last
data points of a 100,000-second run.

>           __
>  __      |  |__
>     |__|
>
>  y1   y2   y3
>    A1   A2
>
> A2-A1 = y2-y1
>
> vs.
>
>           __
>  __     __|  |__
>    |__|
>
>  y1   y2   y3
>  B1         B2
>
> B2-B1 = y3-y1

Actually, we are considering the period of a waveform, which is the time
between successive instances of the waveform moving through the same
point in the same direction, i.e. it includes the positive and negative
half cycles of the waveform. BTW, ASCII art does not work so well with
today's proportional fonts.

> Consider now the case when frequency samples have twice the tau of the
> above examples:
>
>            _____
>  __       |     |__
>    |_____|
>
>   y1       y2
>
> y2-y1
>
> These examples were all based on sequences of frequency measurements,
> just as you indicate in your case.
>
> As you see in the differences, the nominal frequency cancels and the
> nominal phase error has also canceled out, so there is nothing to
> compensate there. Drift rate would however not be canceled, but for
> most of our sources, the noise is higher than the drift rate at
> shorter taus.

Well, if there is a phase error it would cause the positive and negative
halves of the waveform to differ and therefore not cancel out. But as
you so rightly say, and as I alluded to before, this phase error would
be expected to be considerably smaller than the noise in our tests. Now,
for my "missing data" method there is twice the amount of time between
data points, so the phase errors would be doubled in size. This may, or
may not, affect the measurement at tau0 = 1 for tau = 1, so I have
proposed to remove that phase error by pre-processing the data, but ONLY
for this one calculation of ADEV for tau = 1, NOT for the other tau. It
may be that the phase error is still small compared with the noise, and
so the data does not need to be processed to remove the drift.
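The drift-removal pre-processing proposed above could be a simple
least-squares line fit over the frequency samples; here is a sketch (my
own construction, with a made-up pure-drift series just to show the
linear term is taken out):

```python
def remove_drift(y):
    """Subtract the least-squares line a + b*i from the samples y,
    leaving only the residual noise around the drift."""
    n = len(y)
    xbar = (n - 1) / 2.0
    ybar = sum(y) / n
    b = (sum((i - xbar) * (v - ybar) for i, v in enumerate(y))
         / sum((i - xbar) ** 2 for i in range(n)))
    a = ybar - b * xbar
    return [v - (a + b * i) for i, v in enumerate(y)]

# A pure linear frequency drift (made-up rate) is removed completely;
# only numerical round-off remains.
drifting = [1e-12 * i for i in range(100)]
residual = remove_drift(drifting)
print(max(abs(r) for r in residual))
```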
That would be proved by performing the calculation with and without the
drift removed. But this does not mean that it would then be OK to
calculate ADEV for tau >= 1 using tau0 = 1 with the "missing data"
method, as this WOULD give the wrong indication of the effects of drift.

> Time-differences allow us to skip every other cycle though.
>
> > In this case the data set is constructed from the measurement of the
> > cycle periods of a 1 Hz input source where even cycles are skipped,
> > hence each data point is a measurement of the period of each odd (1,
> > 3, 5, 7...) cycle of the incoming waveform. In this case the time
> > between each measurement is 2 sec, so ADEV is calculated with
> > tau0 = 2 sec for tau >= 2 sec. This data set is then mathematically
> > processed to remove the effects of drift, bearing in mind the 2 sec
> > spacing of each data point, and ADEV is then calculated with
> > tau0 = 1 sec for tau = 1 sec.
>
> How did you establish the effect of drift?

In the case of the data I was using, there actually does not appear to
be a great deal of drift and I have made no adjustments to account for
it, BUT theoretically it could make a difference.

> > PN - White noise phase WPM, Flicker noise phase FPM, White noise
> > frequency WFM, Flicker noise frequency FFM and Random walk frequency
> > RWFM.
>
> These are just the names for the various 1/f power noises. They enter
> through a myriad of places; white phase noise and 1/f is common to
> amplifiers, 1/f^5 is thermal noise on the same amplifiers. 1/f^2 is
> oscillator-shaped white phase noise and 1/f^3 is oscillator-shaped 1/f
> noise. Rubiola spends quite some time on that subject, both in his
> excellent book and in various papers.

But these are the effects we are measuring here.

> > Indeed, and this is an important aspect to consider, as we have been
> > discussing the effects of induced jitter/PN on a frequency standard
> > when it is buffered and divided down.
> > Ideally, measurements of ADEV would be made on the raw frequency
> > standard source (e.g. 10 MHz) rather than, say, a divided 1 Hz
> > signal.
>
> Yes and no. There are benefits in dividing it down: you can identify
> cycle slips more easily and adjust for them, whereas one 10 MHz cycle
> can be a bit anonymous compared to another. To get the best performance
> for ADEV at 1 s, using a 1 Hz signal is not optimum though. A slightly
> higher rate will allow quicker gathering of high statistical freedom,
> and thus improved statistical stability, as allowed through the
> overlapping Allan deviation estimator compared to the non-overlapping
> Allan deviation estimator on the same time-stretch of samples. When
> running long runs, sufficient freedom may be achieved even using the
> non-overlapping estimator.

Agreed; it would come down to the maximum rate at which the measurement
system is able to take readings and record them. 1 second has just been
used as an example in this discussion, but it is really not that
optimal.

> A divide-down does not have to make a significant change to phase
> noise; its effect can be minimized, as we have discussed before.

Maybe, but this topic has in itself generated a lot of discussion, and
the outcome is that care must be taken with this aspect. This is really
the point I was making: we don't want to be measuring noise induced in
the buffering and division circuits, as this would completely ruin our
tests.

> The 1 PPS signal is also quite a historical artifact which is still
> quite handy. It allows direct comparison of non-equivalent frequencies
> as the division ratio is adjusted. It is also what comes out of the
> majority of GPS receivers. Few GPS receivers evaluate their time offset
> at a faster rate than 1 Hz anyway, but 2, 5, 10 and 20 Hz are
> available. The L1 C/A signal would allow for a rate of 1 kHz, but it
> would require really good signal conditions.
Indeed, although in the past it has been suggested that 10 MHz OCXOs be
divided down to around 1 Hz so they can be measured with sound cards.

> For high resolution work, the PPS is not that good, since beating two
> 10 MHz signals would give you some 5-7 decades better resolution, if
> you can handle the problems with slow slopes.

Are you proposing measuring phase differences here? For ADEV this would
surely add the noise of both sources into the mix, which would be
undesirable.

> >> For short measurement times quantisation noise and instrumental
> >> noise may mask the noise from the source but they are still present.
> >
> > Well, these form the noise floor of our measurement system.
>
> Some of them we can control, through better triggering devices, as
> learned the hard way and investigated by many.

I guess it would be possible to measure the system against itself with
something like a short delay from its own internal timing source.

> Other ways to handle it are to use cross-correlation techniques, where
> two independent system noises see the same signal, in which case only
> the input source noise correlates and the system noise effect can be
> partially canceled out.

If you could measure the same signal with multiple systems, it would be
possible to cancel out the noise effects of the measuring system.

> There are systematic noise problems also, such as lack of zero dead
> time, resolution, interpolator distortion, etc.

Indeed. Thanks for your input on this, it's really enabled me to focus
more.

Cheers,
Steve

> Cheers,
> Magnus
>
> _______________________________________________
> time-nuts mailing list -- time-nuts@febo.com
> To unsubscribe, go to
> https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
> and follow the instructions there.

--
Steve Rooke - ZL3TUV & G8KVD & JAKDTTNW
A man with one clock knows what time it is;
A man with two clocks is never quite sure.