Ulrich,
2009/4/11 Ulrich Bangert df6jb@ulrich-bangert.de:
So why would my counter show any significant differences
between a 1 sec or 2 sec gate time?
Suppose your source has a 0.5 Hz frequency modulation. Would you see it with
2 s gate time or an integer multiple of it? Would you notice it with 1 s gate
time or an odd integer multiple of it?
Agreed, if the source is modulated at exactly 1/2 the input frequency,
the measurement would be blind to it. So the way to account for this
would be to take half the readings, then skip one cycle and take the
other half. Examination of the data would then show the modulation.
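To make that concrete, here is a little Python sketch (all numbers are
hypothetical: a pure 0.5 Hz sinusoidal FM on ideal 1 s time-error samples,
no noise). Decimating to 2 s hides the modulation completely, while
comparing the two interleaved half-sets exposes it:

import math

A, phi = 1e-9, 0.3  # assumed FM amplitude (fractional) and phase
# time-error samples at 1 s intervals under 0.5 Hz sinusoidal FM:
# y(t) = A*sin(pi*t + phi)  ==>  x(t) = -(A/pi)*cos(pi*t + phi)
x = [-(A / math.pi) * math.cos(math.pi * n + phi) for n in range(1000)]

# 2 s frequency estimates from the every-other-sample record: all ~zero,
# so the modulation is invisible at tau0 = 2 s
y2 = [(x[n + 2] - x[n]) / 2.0 for n in range(0, len(x) - 2, 2)]
print(max(abs(v) for v in y2))

# but the two interleaved half-sets (offset by one cycle) have mean time
# errors differing by ~2A/pi * cos(phi), which exposes the modulation
print(sum(x[1::2]) / len(x[1::2]) - sum(x[0::2]) / len(x[0::2]))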
I've just done a Google search for "dead time correction
scheme" and I only turn up results relating to particle
physics, where it seems measurements are unable to keep up
with the flow of data, hence the need to factor in the dead
time of the system.
Google for the STABLE32 manual. THIS literature will bring you a lot
further; it has many well documented source examples in Forth and PL/1, hi. E.g.
you may look here:
Thanks for the pointers.
Kind regards,
Steve
Best regards
Ulrich Bangert
-----Original Message-----
From: time-nuts-bounces@febo.com
[mailto:time-nuts-bounces@febo.com] On behalf of Steve Rooke
Sent: Friday, 10 April 2009 12:55
To: Discussion of precise time and frequency measurement
Subject: [!! SPAM] Re: [time-nuts] Characterising frequency standards
Ulrich,
2009/4/10 Ulrich Bangert df6jb@ulrich-bangert.de:
Steve,
I think the penny has dropped now, thanks. It's
interesting that the ADEV calculation still works even
without continuous data, as all the reading I have done
had led me to believe this was sacrosanct.
The penny may be falling but it has not completely dropped:
Of course you can feed your ADEV calculation with every second
sample removed and set Tau0 = 2. And of course you receive a result
that now is in "harmony" with your all-samples / Tau0 = 1 s
computation. Had you done frequency measurements, the reason for this
apparent "harmony" would be that your counter does not show
significantly different behaviour whether set to a 1 s gate time or,
alternatively, a 2 s gate time.
So why would my counter show any significant differences
between a 1 sec or 2 sec gate time?
Nevertheless, leaving every second sample out is NOT exactly
the same as continuous data with Tau0 = 2 s. Instead it is data with
Tau0 = 1 s and a DEAD TIME of 1 s. There are dead time correction
schemes available in the literature.
I've just done a Google search for "dead time correction
scheme" and I only turn up results relating to particle
physics, where it seems measurements are unable to keep up
with the flow of data, hence the need to factor in the dead
time of the system. This form of application does not appear to
correlate with the measurement of plain oscillators. Yes,
there is dead time, per se, but I fail to see how this can
detract significantly from continuous data given a sufficient
data set size (i.e. total measurement time).
I guess what we need is a real data set which would show that
this form of ADEV calculation produces incorrect results, i.e.
the proof of the pudding is in the eating.
73,
Steve
[mailto:time-nuts-bounces@febo.com]
On behalf of Steve Rooke
Sent: Thursday, 9 April 2009 14:00
To: Tom Van Baak; Discussion of precise time and frequency
measurement
Subject: Re: [time-nuts] Characterising frequency standards
Tom,
2009/4/9 Tom Van Baak tvb@leapsecond.com:
The first argument to the adev1 program is the sampling
interval t0.
The program doesn't know how far apart the input file
samples are
taken so it is your job to specify this. The default is 1 second.
If you have data taken one second apart then t0 = 1.
If you have data taken two seconds apart then t0 = 2.
If you have data taken 60 seconds apart then t0 = 60, etc.
If, as in your case, you take raw one second data and
remove every
other sample (a perfectly valid thing to do), then t0 = 2.
Make sense now? It's still "continuous data" in the
sense that all
measurements are a fixed interval apart. But in any ADEV
calculation
you have to specify the raw data interval.
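To illustrate (this is only a sketch with simulated white-PM noise, not
Tom's adev1 program itself; the overlapping estimator is my assumption),
here is how t0 enters an ADEV computation on phase data, and why the
decimated record with t0 = 2 agrees with the full record at tau = 2:

import math, random

def oadev(x, tau0, m):
    # overlapping Allan deviation at tau = m*tau0, from phase data x in seconds
    tau = m * tau0
    d = [x[i + 2 * m] - 2 * x[i + m] + x[i] for i in range(len(x) - 2 * m)]
    return math.sqrt(sum(v * v for v in d) / (2 * len(d) * tau * tau))

random.seed(42)
x = [random.gauss(0.0, 1e-9) for _ in range(100000)]  # fake 1 s phase record

print(oadev(x, 1.0, 2))       # full record: tau0 = 1 s, m = 2, tau = 2 s
print(oadev(x[::2], 2.0, 1))  # every other sample: tau0 = 2 s, m = 1, tau = 2 s

The two printed values agree closely (the decimated differences are a
subset of the full ones); what the decimated record gives up is the
tau = 1 s point and some averaging.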
I think the penny has dropped now, thanks. It's
interesting that the ADEV calculation still works even
without continuous data, as all the reading I have done
had led me to believe this was sacrosanct.
What I now believe is that it's possible to measure oscillator
performance with less than optimal test gear. This will
enable me to see the effects of any experiments I make in the
future. If you can't measure it, how can you know that what
you're doing is good or bad?
Steve Rooke - ZL3TUV & G8KVD & JAKDTTNW
Omnium finis imminet
Tom,
2009/4/11 Tom Van Baak tvb@leapsecond.com:
Nevertheless, leaving every second sample out is NOT exactly the same as
continuous data with Tau0 = 2 s. Instead it is data with Tau0 = 1 s and a
DEAD TIME of 1 s. There are dead time correction schemes available in the
literature.
Ulrich, and Steve,
Wait, are we talking phase measurements here or frequency
measurements? My assumption with this thread is that Steve
is simply taking phase (time error) measurements, as in my
GPS raw data page, in which case there is no such thing as
dead time.
Yes, phase measurements as in the original GPS.dat data set on your site.
73,
Steve
/tvb
--
Steve Rooke - ZL3TUV & G8KVD & JAKDTTNW
Omnium finis imminet
2009/4/11 Magnus Danielson magnus@rubidium.dyndns.org:
Tom Van Baak wrote:
Nevertheless, leaving every second sample out is NOT exactly the same as
continuous data with Tau0 = 2 s. Instead it is data with Tau0 = 1 s and a
DEAD TIME of 1 s. There are dead time correction schemes available in the
literature.
Ulrich, and Steve,
Wait, are we talking phase measurements here or frequency
measurements? My assumption with this thread is that Steve
is simply taking phase (time error) measurements, as in my
GPS raw data page, in which case there is no such thing as
dead time.
I agree. I was also considering this earlier but put my mind to rest by
assuming phase/time samples.
Dead time is when the counter loses track of time in between two
consecutive measurements. A zero dead-time counter uses the stop of one
measure as the start of the next measure.
This becomes very important when the data to be measured has a degree
of randomness and it is therefore important to capture all the data
without any dead time. In the case of measurements of phase error in
an oscillator, it should be possible to miss some data points provided
that the frequency of capture is still known (assuming that accuracy
of drift measurements is required).
If you have a series of time-error values taken each second and then
drop every other sample and just recall that the time between the
samples is now 2 seconds, then the tau0 has become 2 s without causing
dead-time. However, had the original data been kept, it would have
better statistical properties, unless there is a strong
repetitive disturbance with a 2 s period, in which case it would be
filtered out.
Indeed, there would be a loss of statistical data but this could be
made up by sampling over a period of twice the time. This system is
blind to noise at 1/2 f but ways and means could be taken to account
for that, i.e. taking two data sets with a single-cycle gap between
them or taking another small data set with 2 cycles skipped between
each measurement.
As an example of when one does get dead time, consider a frequency
counter which measures frequency with a gate time of, say, 2 s.
However, before it re-arms and starts the next measurement it takes
300 ms. Two consecutive samples will then have 2.3 s between their
start points, and a pair actually spans 4.3 seconds rather than
4 seconds. When doing Allan deviation calculations on such a
measurement series, the result will be biased. The bias may be
compensated, but these days counters with zero dead time are readily
available, or the problem can be avoided by careful consideration.
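A rough simulation of that bias (my assumptions: random-walk FM noise,
where the effect shows clearly; for pure white FM the dead time hardly
biases the result at all):

import random

random.seed(1)
dt = 0.1                               # simulation step, seconds
y, v = [], 0.0
for _ in range(200000):                # ~20000 s of random-walk FM
    v += random.gauss(0.0, 1e-13)
    y.append(v)

def freq_averages(gate, dead):
    # gate-long frequency averages whose start points are gate+dead apart
    g, step = int(gate / dt), int((gate + dead) / dt)
    return [sum(y[i:i + g]) / g for i in range(0, len(y) - g, step)]

def adev_estimate(avgs):
    d = [b - a for a, b in zip(avgs, avgs[1:])]
    return (sum(v * v for v in d) / (2 * len(d))) ** 0.5

print(adev_estimate(freq_averages(2.0, 0.0)))  # zero dead time
print(adev_estimate(freq_averages(2.0, 0.3)))  # 300 ms dead time: reads high

The 300 ms dead-time series reads noticeably high here; the bias
functions mentioned below are what one would use to correct such a
number.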
I'm looking at what can be achieved by a budget-strapped amateur who
would have trouble purchasing a more recent counter capable of measuring
with zero dead time.
I believe Greenhall made some extensive analysis of the biasing of
dead-time, so it should be available from the NIST F&T online library.
I'll see what I can find.
Before zero dead-time counters were available, a setup of two counters
was used, interleaved so that the dead time of each was covered by the
measurement time of the other.
I could look at doing that perhaps.
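A toy schedule (hypothetical 2 s gates) of that interleave: the stop of
each gate starts a gate on the other counter, so the merged record has
no gaps, and each counter still gets a full 2 s in which to re-arm:

def merged_gates(t_end, gate=2.0):
    # alternate two counters so the merged sequence of gates is gap-free
    gates, t, turn = [], 0.0, 0
    while t + gate <= t_end:
        gates.append(("AB"[turn], t, t + gate))  # (counter, start, stop)
        t += gate      # the stop of this gate...
        turn ^= 1      # ...starts a gate on the other counter
    return gates

for counter, start, stop in merged_gates(10.0):
    print("counter %s: %4.1f .. %4.1f s" % (counter, start, stop))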
I can collect some references to dead-time articles if anyone needs them.
I'd be happy to see them.
73,
Steve
Cheers,
Magnus
--
Steve Rooke - ZL3TUV & G8KVD & JAKDTTNW
Omnium finis imminet
Suppose I take two sequential phase readings from an input source and
place them into one data set, and another two readings from the same
source, spaced by one cycle, into a second data set. From the
first data set I can calculate ADEV for tau = 1 sec and can calculate
ADEV for tau = 2 sec from the second data set. If I now pre-process
the data in the second set to remove all the effects of drift (given
that I have already determined this), I have two 1 sec samples
which show a statistical difference and can be fed to ADEV with a tau0
= 1 sec, producing a result for tau = 1 sec. The results from this
second calculation should show equal accuracy to those using the first
data set (given the limited size of the data sets).
I now collect a large data set but with a single cycle skipped between
each sample. I feed this into ADEV using tau0 = 2 sec to produce tau
results >= 2 sec. I then pre-process the data to remove any drift and
feed this to ADEV with a tau0 = 1 sec to produce just the tau = 1 sec
result. I now have a complete set of results for tau >= 1 sec. Agreed,
there is the issue of modulation at 1/2 the input f but, ignoring this
for the moment, this should give a valid result.
Now indulge me while I have a flight of fantasy.
As the effects of jitter and phase noise will produce a statistical
distribution of measurements, any results from these ADEV calculations
will be limited in accuracy by the size of the data set. Only if we
sample for a very long time will we see the very limits of the effects
of noise. The samples which deviate the most from the median will
occur very infrequently and it is statistically likely that they will
not occur adjacent to another highly deviated sample. We could
pre-process the data to remove all drift and then sort it into an
array in increasing order. This would give the greatest deviations at
each end of the array. For 1 sec stability the deviation would be the
greatest difference from the median of the first and last samples in
the array. For 2 sec stability, this same calculation could be made
taking the first two and last two readings in the array and
calculating their difference from 2 x the median. This calculation
could be continued until all the data is used for the final
calculation. In fact the whole sorted data set could be fed to ADEV to
produce a result that would show a better worst-case measurement of the
input source which still has some statistical probability. In theory,
if we took an infinite number of samples, there would be a whole
string of absolutely maximum deviation measurements in a row which
would show the absolute worst case.
Whether any of this is valid or just bad physics I don't know, but I'm
sure it will solicit interesting comment.
73,
Steve
2009/4/10 Tom Van Baak tvb@leapsecond.com:
I think the penny has dropped now, thanks. It's interesting that the
ADEV calculation still works even without continuous data, as all the
reading I have done had led me to believe this was sacrosanct.
We need to be careful about what you mean by "continuous".
Let me probe a bit further to make sure you or others understand.
The data that you first mentioned, some GPS and OCXO data at:
http://www.leapsecond.com/pages/gpsdo-sim
was recorded once per second, for 400,000 samples without any
interruption; that's over 4 days of continuous data.
As you see it is very possible to extract every other, every 10th,
every 60th, or every Nth point from this large data set to create a
smaller data set.
It is as if you had several counters all connected to the same DUT.
Perhaps one makes a new phase measurement each second,
another makes a measurement every 10 seconds; maybe a third
counter just measures once a minute.
The key here is not how often they make measurements, but that
they all keep running at their particular rate.
The data sets you get from these counters all represent 4 days
of measurement; what changes is the measurement interval, the
tau0, or whatever your ADEV tool calls it.
Now the ADEV plots you get from these counters will all match
perfectly with the only exception being that the every-60 second
counter cannot give you any ADEV points for tau less than 60;
the every-10 second counter cannot give you points for tau less
than 10 seconds; and, for that matter, the every 1-second counter
cannot give you points for tau less than 1 second.
So what makes all these "continuous" is that the runs were not
interrupted and that the data points were taken at regular intervals.
The x-axis of an ADEV plot spans a logarithmic range of tau. The
farthest point on the right is limited by how long your run was. If
you collect data for 4 or 5 days you can compute and plot points
out to around 1 day or 10^5 seconds.
On the other hand, the farthest point on the left is limited by how
fast you collect data. If you collect one point every 10 seconds,
then tau=10 is your left-most point. Yes, it's common to collect data
every second; in this case you can plot down to tau=1s. Some of
my instruments can collect phase data at 1000 points per second
(huge files!) and this means my leftmost ADEV point is 1 millisecond.
Here's an example of collecting data at 10 Hz:
http://www.leapsecond.com/pages/gpsdo/
You can see this allows me to plot from ADEV tau = 0.1 s.
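Spelling out the arithmetic of the last few paragraphs (the tau_max
rule of thumb, run length over a small factor, is my assumption rather
than a hard rule):

rate = 10.0           # samples per second (the 10 Hz example above)
run = 4 * 86400       # a 4-day run, in seconds
tau_min = 1.0 / rate  # leftmost plottable ADEV point: 0.1 s
tau_max = run / 4     # rough right edge: 86400 s, about one day
print(tau_min, tau_max)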
Does all this make sense now?
What I now believe is that it's possible to measure oscillator
performance with less than optimal test gear. This will enable me to
see the effects of any experiments I make in the future. If you can't
measure it, how can you know that what you're doing is good or bad?
Very true. So what one or several performance measurements
are you after?
/tvb
--
Steve Rooke - ZL3TUV & G8KVD & JAKDTTNW
Omnium finis imminet
Steve,
Steve Rooke wrote:
2009/4/11 Magnus Danielson magnus@rubidium.dyndns.org:
Tom Van Baak wrote:
Nevertheless, leaving every second sample out is NOT exactly the same as
continuous data with Tau0 = 2 s. Instead it is data with Tau0 = 1 s and a
DEAD TIME of 1 s. There are dead time correction schemes available in the
literature.
Ulrich, and Steve,
Wait, are we talking phase measurements here or frequency
measurements? My assumption with this thread is that Steve
is simply taking phase (time error) measurements, as in my
GPS raw data page, in which case there is no such thing as
dead time.
I agree. I was also considering this earlier but put my mind to rest by
assuming phase/time samples.
Dead time is when the counter loses track of time in between two
consecutive measurements. A zero dead-time counter uses the stop of one
measure as the start of the next measure.
This becomes very important when the data to be measured has a degree
of randomness and it is therefore important to capture all the data
without any dead time. In the case of measurements of phase error in
an oscillator, it should be possible to miss some data points provided
that the frequency of capture is still known (assuming that accuracy
of drift measurements is required).
If you have a series of time-error values taken each second and then
drop every other sample and just recall that the time between the
samples is now 2 seconds, then the tau0 has become 2 s without causing
dead-time. However, had the original data been kept, it would have
better statistical properties, unless there is a strong
repetitive disturbance with a 2 s period, in which case it would be
filtered out.
Indeed, there would be a loss of statistical data but this could be
made up by sampling over a period of twice the time. This system is
blind to noise at 1/2 f but ways and means could be taken to account
for that, i.e. taking two data sets with a single-cycle gap between
them or taking another small data set with 2 cycles skipped between
each measurement.
As an example of when one does get dead time, consider a frequency
counter which measures frequency with a gate time of, say, 2 s.
However, before it re-arms and starts the next measurement it takes
300 ms. Two consecutive samples will then have 2.3 s between their
start points, and a pair actually spans 4.3 seconds rather than
4 seconds. When doing Allan deviation calculations on such a
measurement series, the result will be biased. The bias may be
compensated, but these days counters with zero dead time are readily
available, or the problem can be avoided by careful consideration.
I'm looking at what can be achieved by a budget-strapped amateur who
would have trouble purchasing a more recent counter capable of measuring
with zero dead time.
You don't need a full-featured counter for this application.
One can easily implement a zero dead-time counter or the equivalent
thereof in an FPGA.
I believe Greenhall made some extensive analysis of the biasing of
dead-time, so it should be available from the NIST F&T online library.
I'll see what I can find.
You still need to know the phase noise spectrum of the source being
characterised.
Before zero dead-time counters were available, a setup of two counters
was used, interleaved so that the dead time of each was covered by the
measurement time of the other.
I could look at doing that perhaps.
Very easy to do at low cost in an FPGA.
I can collect some references to dead-time articles if anyone needs them.
I'd be happy to see them.
73,
Steve
Cheers,
Magnus
Brice
Steve,
Steve Rooke wrote:
Suppose I take two sequential phase readings from an input source and
place them into one data set, and another two readings from the same
source, spaced by one cycle, into a second data set. From the
first data set I can calculate ADEV for tau = 1 sec and can calculate
ADEV for tau = 2 sec from the second data set. If I now pre-process
the data in the second set to remove all the effects of drift (given
that I have already determined this), I have two 1 sec samples
which show a statistical difference and can be fed to ADEV with a tau0
= 1 sec, producing a result for tau = 1 sec. The results from this
second calculation should show equal accuracy to those using the first
data set (given the limited size of the data sets).
You need to give far more detail as it's unclear exactly what you are
doing with what samples.
Label all the phase samples and then show which samples belong to which
data set.
You also need to show clearly what you mean by skipping a cycle.
I now collect a large data set but with a single cycle skipped between
each sample. I feed this into ADEV using tau0 = 2 sec to produce tau
results >= 2 sec. I then pre-process the data to remove any drift and
feed this to ADEV with a tau0 = 1 sec to produce just the tau = 1 sec
result. I now have a complete set of results for tau >= 1 sec. Agreed,
there is the issue of modulation at 1/2 the input f but, ignoring this
for the moment, this should give a valid result.
Again you need to give more detail.
Now indulge me while I have a flight of fantasy.
As the effects of jitter and phase noise will produce a statistical
distribution of measurements, any results from these ADEV calculations
will be limited in accuracy by the size of the data set. Only if we
sample for a very long time will we see the very limits of the effects
of noise.
What noise from what source?
Noise in such measurements can originate in the measuring instrument or
the source.
For short measurement times quantisation noise and instrumental noise
may mask the noise from the source but they are still present.
The samples which deviate the most from the median will
occur very infrequently and it is statistically likely that they will
not occur adjacent to another highly deviated sample. We could
pre-process the data to remove all drift and then sort it into an
array in increasing order. This would give the greatest deviations at
each end of the array. For 1 sec stability the deviation would be the
greatest difference from the median of the first and last samples in
the array. For 2 sec stability, this same calculation could be made
taking the first two and last two readings in the array and
calculating their difference from 2 x the median. This calculation
could be continued until all the data is used for the final
calculation. In fact the whole sorted data set could be fed to ADEV to
produce a result that would show a better worst-case measurement of the
input source which still has some statistical probability. In theory,
if we took an infinite number of samples, there would be a whole
string of absolutely maximum deviation measurements in a row which
would show the absolute worst case.
Whether any of this is valid or just bad physics I don't know, but I'm
sure it will solicit interesting comment.
No, not poor physics but poor statistics.
73,
Steve
Bruce
Bruce Griffiths wrote:
...
Brice
An impostor? An alias? :-)
Rex wrote:
Bruce Griffiths wrote:
...
Brice
An impostor? An alias? :-)
And I thought I was alluding to aliasing of the phase noise spectrum,
not the characters of the alphabet.
Bruce
Steve Rooke wrote:
2009/4/11 Magnus Danielson magnus@rubidium.dyndns.org:
Tom Van Baak wrote:
Nevertheless, leaving every second sample out is NOT exactly the same as
continuous data with Tau0 = 2 s. Instead it is data with Tau0 = 1 s and a
DEAD TIME of 1 s. There are dead time correction schemes available in the
literature.
Ulrich, and Steve,
Wait, are we talking phase measurements here or frequency
measurements? My assumption with this thread is that Steve
is simply taking phase (time error) measurements, as in my
GPS raw data page, in which case there is no such thing as
dead time.
I agree. I was also considering this earlier but put my mind to rest by
assuming phase/time samples.
Dead time is when the counter loses track of time in between two
consecutive measurements. A zero dead-time counter uses the stop of one
measure as the start of the next measure.
This becomes very important when the data to be measured has a degree
of randomness and it is therefore important to capture all the data
without any dead time. In the case of measurements of phase error in
an oscillator, it should be possible to miss some data points provided
that the frequency of capture is still known (assuming that accuracy
of drift measurements is required).
Depending on the dominant noise type, the ADEV measure will be biased.
If you have a series of time-error values taken each second and then
drop every other sample and just recall that the time between the
samples is now 2 seconds, then the tau0 has become 2 s without causing
dead-time. However, had the original data been kept, it would have
better statistical properties, unless there is a strong
repetitive disturbance with a 2 s period, in which case it would be
filtered out.
Indeed, there would be a loss of statistical data but this could be
made up by sampling over a period of twice the time. This system is
blind to noise at 1/2 f but ways and means could be taken to account
for that, i.e. taking two data sets with a single-cycle gap between
them or taking another small data set with 2 cycles skipped between
each measurement.
Actually, you can take any number of 2-cycle measures and still be
unable to detect the 1/2 f oscillation. In order to be able to detect
it you will need to take 2 measures with an odd number of cycles of
trigger difference between them to have a chance.
The trouble is that the modulation is at the Nyquist frequency of the
1-cycle data, so it will fold down to DC on sampling it at half rate.
Separating it from other DC offset errors could be challenging.
Sampling it at 1/3 rate would discover it though.
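A quick numeric check of that folding (the tone phase is arbitrary):

import math

def tone(t):
    return math.sin(math.pi * t + 0.4)  # the 1/2 f modulation, i.e. 0.5 Hz

every2 = [tone(t) for t in range(0, 60, 2)]  # sampled every 2 cycles
every3 = [tone(t) for t in range(0, 60, 3)]  # sampled every 3 cycles

# at the 2-cycle rate the tone folds to DC: every sample is identical,
# indistinguishable from a constant offset
print(min(every2), max(every2))
# at the 3-cycle rate it aliases to a visible sign alternation instead
print(min(every3), max(every3))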
As an example of when one does get dead time, consider a frequency
counter which measures frequency with a gate time of, say, 2 s.
However, before it re-arms and starts the next measurement it takes
300 ms. Two consecutive samples will then have 2.3 s between their
start points, and a pair actually spans 4.3 seconds rather than
4 seconds. When doing Allan deviation calculations on such a
measurement series, the result will be biased. The bias may be
compensated, but these days counters with zero dead time are readily
available, or the problem can be avoided by careful consideration.
I'm looking at what can be achieved by a budget-strapped amateur who
would have trouble purchasing a more recent counter capable of measuring
with zero dead time.
Believe me, that's where I am too. Patience, saving money for things
I really want, and allowing accumulation over time has allowed me some
pretty fancy tools in my private lab. In fact I have to lend some of my
gear to commercial labs as I outperform them...
I believe Greenhall made some extensive analysis of the biasing of
dead-time, so it should be available from the NIST F&T online library.
I'll see what I can find.
I recalled wrong. You should look for Barnes, "Tables of Bias Functions,
B1 and B2, for Variance Based on Finite Samples of Processes with Power
Law Spectral Densities", NBS Technical Note 375, January 1969, as well
as Barnes and Allan, "Variance Based on Data with Dead Time Between the
Measurements", NIST Technical Note 1318, 1990.
A short intro to the subject is found in NIST Special Publication 1065
by W.J. Riley, as found on http://www.wriley.com along with other
excellent material. The good thing about that material is that he gives
good references, as one should.
Before zero dead-time counters were available, a setup of two counters
was used, interleaved so that the dead time of each was covered by the
measurement time of the other.
I could look at doing that perhaps.
You should have two counters of equivalent performance, preferably the
same model. It's a rather expensive approach IMHO.
Have a look at the possibility of picking up an HP 5371A or 5372A. You
can usually snag one for about 600 USD or 1000 USD respectively on eBay.
Cheers,
Magnus
Bruce Griffiths wrote:
Rex wrote:
Bruce Griffiths wrote:
...
Brice
An impostor? An alias? :-)
And I thought I was alluding to aliasing of the phase noise spectrum,
not the characters of the alphabet.
So it is not a case of shot noise in Bruce's fingers? :)
I know mine have some, and besides that there are several bugs in the
language unit...
Cheers,
Magnus