My understanding is that MVAR(m*tau0) is equivalent to filtering the phase
samples x(n) by averaging m samples to produce x'(n)
[x'(n) = (1/m)*(x(n) + x(n+1) + ... + x(n+m-1))] and then calculating AVAR for
tau = m*tau0 on the filtered sequence. Thus, MVAR already performs an
averaging/lowpass filtering operation. Adding another averaging filter
prior to calculating MVAR would seem to be defining a new type of stability
measurement.
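In numpy terms, the equivalence I mean looks like this (my own sketch, using
the overlapping estimators):

    import numpy as np

    def avar(x, m, tau0):
        # Overlapping Allan variance of phase samples x at tau = m*tau0
        d = x[2*m:] - 2*x[m:-m] + x[:-2*m]       # second differences of phase
        return np.mean(d**2) / (2 * (m * tau0)**2)

    def mvar(x, m, tau0):
        # Average m adjacent phase samples: x'(n) = mean of x(n)..x(n+m-1) ...
        xp = np.convolve(x, np.ones(m) / m, mode='valid')
        # ... then compute AVAR of the filtered sequence at tau = m*tau0
        return avar(xp, m, tau0)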
Not familiar with the 5370... Is it possible to configure it to average
measurements over the complete tau0 interval with no dead time between
measurements? Assuming the 5370 can average 100 evenly spaced measurements
within the measurement interval (1 s?), calculating MVAR on the captured
sequence would produce MVAR(m*0.01) for m being a multiple of 100. I.e.,
tau0 here is actually 0.01 s, not 1 s, but values of MVAR(tau) for taus less
than 1 s are not available.
Shouldn't the quantization/measurement noise power be easy to measure?
Can't it just be subtracted from the MVAR plot? I've done this with AVAR in
the past to produce 'seemingly' meaningful results (i.e. I'm not an expert).
I calculated the PSD of x(n) and it was clear where the measurements were
being limited by noise (flat section at higher frequencies). From this I
was able to estimate the measurement noise power.
AVAR_MEASURED(tau) = AVAR_CUT(tau) + AVAR_REF(tau) + AVAR_MEAS(tau)
i.e. The measured AVAR is equal to the sum of the AVAR of the clock under
test (CUT), the AVAR of the reference clock, and the AVAR of the
measurement noise. If the reference clock is much better than the CUT,
AVAR_REF(tau) can be ignored. AVAR_MEAS(tau) is known from the PSD of x(n)
and can be subtracted from AVAR_MEASURED(tau) to produce a better estimate
of AVAR_CUT(tau).
Depending on the confidence intervals of AVAR_MEASURED(tau) and the noise
power estimate, you can get varying degrees of cancellation. 10dB of
improvement seemed quite easy to obtain.
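For what it's worth, here's roughly the shape of what I did, as a sketch
(assuming x holds the phase record in seconds at tau0 spacing, reusing the
avar() sketch above; the 0.3*Nyquist cutoff for the flat PSD section is just
a guess you'd tune by eye):

    import numpy as np
    from scipy.signal import welch

    tau0 = 1.0                                # sample spacing of x(n), seconds
    fs = 1.0 / tau0

    # Estimate the white measurement-noise variance from the flat PSD tail.
    f, Sx = welch(x, fs=fs)                   # one-sided PSD of the phase data
    flat = np.mean(Sx[f > 0.3 * fs / 2])      # level of the flat (white) section
    sigma2_x = flat * fs / 2                  # variance = PSD level * bandwidth

    # White phase noise of per-sample variance sigma2_x adds
    # 3*sigma2_x/tau^2 to the AVAR.
    ms = np.array([1, 2, 4, 8, 16, 32])
    avar_noise = 3.0 * sigma2_x / (ms * tau0)**2

    # Subtract it from the measured AVAR for a better estimate of the CUT:
    avar_cut = np.array([avar(x, m, tau0) for m in ms]) - avar_noise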
James
Date: Tue, 28 Jul 2015 21:51:07 +0000
From: Poul-Henning Kamp phk@phk.freebsd.dk
To: time-nuts@febo.com
Subject: [time-nuts] Modified Allan Deviation and counter averaging
Sorry this is a bit long-ish, but I figure I'm saving time putting
in all the details up front.
The canonical time-nut way to set up a MVAR measurement is to feed
two sources to an HP5370 and measure the time interval between their
zero crossings often enough to resolve any phase ambiguities caused
by frequency differences.
The computer unfolds the phase wrap-arounds, and calculates the
MVAR using the measurement rate, typically 100, 10 or 1 Hz, as the
minimum Tau.
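In code, the unfolding step is roughly this (a sketch of my own; assumes
10 MHz inputs, so the TI readings wrap modulo the 100 ns period):

    import numpy as np

    def unwrap_ti(ti, period=100e-9):
        # Whenever consecutive readings jump by more than half a period,
        # assume a wrap and subtract whole periods to keep phase continuous.
        out = np.asarray(ti, dtype=float).copy()
        wraps = np.round(np.diff(out) / period)   # whole-period jumps
        out[1:] -= period * np.cumsum(wraps)
        return out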
However, the HP5370 has a noise floor in the low picoseconds, which
creates the well-known diagonal left bound on what we can measure
this way.
So it is tempting to do this instead:
Every measurement period, we let the HP5370 do a burst of 100
measurements[*] and feed the average to MVAR, and push the diagonal
line an order of magnitude (sqrt(100)) further down.
At its specified rate, the HP5370 will take 1/30th of a second to
do a 100 sample average measurement.
If we are measuring once each second, that's only 3% of the Tau.
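The expected gain is easy to sanity-check numerically (a toy sketch; the
30 ps single-shot floor is just an assumed number):

    import numpy as np

    rng = np.random.default_rng(1)
    floor = 30e-12                                # assumed single-shot noise, s
    bursts = rng.normal(0.0, floor, (3600, 100))  # 1 h of 100-reading bursts

    x = bursts.mean(axis=1)                       # one averaged reading per second
    print(x.std())                                # ~3e-12: the sqrt(100) = 10x gain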
No measurement is ever instantaneous, simply because the two zero
crossings are not happening right at the measurement epoch.
If I measure two 10MHz signals the canonical way, the first zero
crossing could come as late as 100(+epsilon) nanoseconds after the
epoch, and the second as much as 100(+epsilon) nanoseconds later.
An actual point of the measurement doesn't even exist, but picking
the midpoint we get an average delay of 75 ns, worst case 150 ns.
That works out to one part in 13 million, which is a lot less than 3%,
but certainly not zero, as the MVAR formula presumes.
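Spelled out (sketch of the arithmetic):

    period = 100e-9                       # 10 MHz -> 100 ns period
    avg_delay = period / 2 + period / 4   # mean first crossing + mean half-TI = 75 ns
    worst = period + period / 2           # 150 ns worst case
    print(avg_delay / 1.0)                # 7.5e-8 of a 1 s tau, ~1 part in 13 million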
Eyeballing it, 3% is well below the reproducibility I see on MVAR
measurements, and I have therefore waved the method and result
through, without a formal proof.
However, I have very carefully made sure to never show anybody
any of these plots because of the lack of proof.
Thanks to John's Turbo-5370 we can do burst measurements at much
higher rates than 3000/s, and thus potentially push the diagonal
limit more than a decade to the left, while still doing minimum
violence to the mathematical assumptions under MVAR.
[*] The footnote is this: The HP5370 firmware does not make triggered
burst averages an easy measurement, but we can change that, in
particular with John's Turbo-5370.
But before I attempt to do that, I would appreciate if a couple of
the more math-savvy time-nuts could ponder the soundness of the
concept.
Apart from the delayed measurement point, I have not been able
to identify any issues.
The frequency spectrum filtered out by the averaging is waaaay to
the left of our minimum Tau.
Phase wrap-around inside bursts can be detected and unfolded
in the processing.
Am I overlooking anything?
--
Poul-Henning Kamp | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG | TCP/IP since RFC 956
FreeBSD committer | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.
Hi James,
On 07/30/2015 06:34 PM, James Peroulas wrote:
My understanding is that MVAR(m*tau0) is equivalent to filtering the phase
samples x(n) by averaging m samples to produce x'(n)
[x'(n) = (1/m)*(x(n) + x(n+1) + ... + x(n+m-1))] and then calculating AVAR for
tau = m*tau0 on the filtered sequence. Thus, MVAR already performs an
averaging/lowpass filtering operation. Adding another averaging filter
prior to calculating MVAR would seem to be defining a new type of stability
measurement.
Yes, that's how MVAR works.
Not familiar with the 5370... Is it possible to configure it to average
measurements over the complete tau0 interval with no dead time between
measurements? Assuming the 5370 can average 100 evenly spaced measurements
within the measurement interval (1 s?), calculating MVAR on the captured
sequence would produce MVAR(m*0.01) for m being a multiple of 100. I.e.,
tau0 here is actually 0.01 s, not 1 s, but values of MVAR(tau) for taus less
than 1 s are not available.
The stock 5370 isn't a great tool for this. The accelerator board that
replaces the CPU, and lets us add algorithms, makes the counter
hardware much better suited to this setup.
Shouldn't the quantization/measurement noise power be easy to measure?
Can't it just be subtracted from the MVAR plot? I've done this with AVAR in
the past to produce 'seemingly' meaningful results (i.e. I'm not an expert).
You can curve-fit an estimation of that noise and "remove" it from the
plot. For lower taus the confidence intervals will suffer in practice.
I calculated the PSD of x(n) and it was clear where the measurements were
being limited by noise (flat section at higher frequencies). From this I
was able to estimate the measurement noise power.
It is. Notice that some of it is noise and some is noise-like
systematics from the quantization.
AVAR_MEASURED(tau) = AVAR_CUT(tau) + AVAR_REF(tau) + AVAR_MEAS(tau)
i.e. The measured AVAR is equal to the sum of the AVAR of the clock under
test (CUT), the AVAR of the reference clock, and the AVAR of the
measurement noise. If the reference clock is much better than the CUT,
AVAR_REF(tau) can be ignored. AVAR_MEAS(tau) is known from the PSD of x(n)
and can be subtracted from AVAR_MEASURED(tau) to produce a better estimate
of AVAR_CUT(tau).
Depending on the confidence intervals of AVAR_MEASURED(tau) and the noise
power estimate, you can get varying degrees of cancellation. 10dB of
improvement seemed quite easy to obtain.
Using the Lambda counter approach, i.e. filtering with the averaging blocks
of the Modified Allan Variance, makes the white phase noise slope go as
1/tau^3 rather than the 1/tau^2 of the normal Allan Variance. This means
that the limiting slope of the white noise cuts over to the actual noise at
lower tau, so that is an important tool already there. Also, it achieves
this with known properties for the confidence intervals. Using the Omega
counter approach, you can get a further improvement of about 1.25 dB, which
is deemed optimal, as the Omega counter method is a linear regression /
least-squares estimate of the frequency samples, which are then used for
the AVAR processing.
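A quick numerical way to see the two slopes, as a sketch (pure white PM,
made-up noise level):

    import numpy as np

    rng = np.random.default_rng(2)
    tau0 = 1.0
    x = rng.normal(0.0, 1e-11, 200_000)   # pure white PM phase record

    def avar(x, m):
        d = x[2*m:] - 2*x[m:-m] + x[:-2*m]
        return np.mean(d**2) / (2 * (m * tau0)**2)

    def mvar(x, m):
        xp = np.convolve(x, np.ones(m) / m, mode='valid')
        return avar(xp, m)

    for m in (4, 8, 16, 32):
        # AVAR falls roughly as 1/m^2, MVAR roughly as 1/m^3
        print(m, avar(x, m), mvar(x, m))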
The next trick to pull is to do cross-correlation of two independent
channels, so that their instrument noises do not correlate. This can help
for some of it, but systematics can become a limiting factor.
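A sketch of the idea (hypothetical helper; xa, xb are phase records of the
same device measured against two independent references):

    import numpy as np

    def cross_avar(xa, xb, m, tau0=1.0):
        # Second differences of both channels; the common DUT term survives
        # the product, the independent instrument noises average toward zero.
        da = xa[2*m:] - 2*xa[m:-m] + xa[:-2*m]
        db = xb[2*m:] - 2*xb[m:-m] + xb[:-2*m]
        return np.mean(da * db) / (2 * (m * tau0)**2)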
Cheers,
Magnus
Shouldn't the quantization/measurement noise power be easy to measure?
So I guess I haven't explained my idea well enough yet.
If you look at the attached plot there are four datasets.
"100Hz", "10Hz" and "1Hz" are the result of collecting TI measurements
at these rates.
As expected, the tau^(-3/2) slope white PM noise is reduced by sqrt(10)
every time we increase the measurement frequency by a factor of 10.
The "1Hz 10avg" dataset is where the HP5370 does 10 measurements
as fast as possible, once per second, and returns the average.
The key observation here is I get the same sqrt(10) improvement
without having to capture, store and process 10 times as many
datapoints.
Obviously I learn nothing about the Tau [0.1 ... 1.0] range, but
as you can see, that's not really a loss in this case.
If this method is valid (possibly conditioned on paying attention
to the counter's STDDEV calculation), and if we can get the Turbo-5370
to give us an average of 5000 measurements once every second,
then the PM noise curtain drops from 5e-11 to 7e-13 @ Tau=1s.
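The arithmetic, for the record:

    import math
    print(5e-11 / math.sqrt(5000))   # ~7.1e-13, the predicted curtain at tau = 1 s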
Poul-Henning
PS: The above plot is made by processing a single 100 Hz raw data file
which is my "new" HP5065 against a GPSDO.
If you look at the attached plot there are four datasets.
And of course...
Here it is:
Ok... time to show my lack of knowledge in public and ask a very simple
question:
Can someone explain in very simple terms what this graph means?
My current interpretation is as follows:
"For a 100Hz input, if you look at your signal in 0.1s intervals,
there's about 1.0e-11 frequency error on average (RMS average?)"
How far from the truth am I?
Daniel
On 31/07/2015 18:04, Poul-Henning Kamp wrote:
If you look at the attached plot there are four datasets.
And of course...
Here it is:
In message 55BC202F.6060509@gmail.com, Daniel Mendes writes:
Can someone explain in very simple terms what this graph means?
My current interpretation is as follows:
"For a 100Hz input, if you look at your signal in 0.1s intervals,
there's about 1.0e-11 frequency error on average (RMS average?)"
Close: To a first approximation, MVAR is the standard deviation of
the frequency, as a function of the time interval you measure the
frequency over.
Hi
If on the same graph you plotted the “low pass filter” response of your sample / average
process, it would show how much / how little impact there likely is. It’s not any different than
a standard circuit analysis. The old “poles at 10X frequency don’t count” rule. No measurement
we ever make is 100% perfect, so a small impact does not immediately rule out an approach.
Your measurement gets better by some number related to the number of samples. It might be
the square root of N, it could be something else. If it's sqrt(N), a 100-sample burst is getting you an
order of magnitude better number when you sample. You could go another 10X at 10K samples.
A very real question comes up about “better” in this case. It probably does not improve accuracy,
resolution, repeatability, and noise floor all to the same degree. At some point it improves some
of those and makes your MADEV measurement less accurate.
=====
Because we strive for perfection in our measurements, anything that impacts their accuracy is suspect.
A very closely related (and classic) example is lowpass filtering in front of an ADEV measurement.
People have questioned doing this back at least into the early 1970’s. There may have been earlier questions,
if so I was not there to hear them. It took about 20 years to come up with a “blessed” filtering approach
for ADEV. It still is suspect to some because it (obviously) changes the ADEV plot you get at the shortest tau.
That kind of decades long debate makes getting a conclusive answer to a question like this unlikely.
=====
The approach you are using is still a discrete time sampling approach. As such it does not directly violate
the data requirements for ADEV or MADEV. As long as the sample burst is much shorter than the Tau you
are after, this will be true. If the samples cover < 1% of the Tau, it is very hard to demonstrate a noise
spectrum that this process messes up. Put in the context of the circuit pole, you now are at 100X the design
frequency. At that point it’s way less of a filter than the sort of vaguely documented ADEV pre-filtering
that was going on for years and years ….. (names withheld to protect the guilty …)
Is this in a back door way saying that these numbers probably are (at best) 1% of reading sorts of data?
Yes indeed that’s an implicit part of my argument. If you have devices that repeat to three digits on multiple
runs, this may not be the approach you would want to use. In 40 years of doing untold thousands
of these measurements I have yet to see devices (as opposed to instrument / measurement floors)
that repeat to under 1% of reading.
Bob
Hi
If you take more data (rate is faster) the noise floor of the data set at 1 second goes
down as the square root of N (speed up 10X, noise down to ~1/3).
Past a point, the resultant plot is not messed up by the process involved in the sampling.
Bob
In message 49C4CCD3-09CE-48A4-82B8-9285A43814E3@n1k.org, Bob Camp writes:
The approach you are using is still a discrete time sampling
approach. As such it does not directly violate the data requirements
for ADEV or MADEV. As long as the sample burst is much shorter
than the Tau you are after, this will be true. If the samples cover < 1%
of the Tau, it is very hard to demonstrate a noise spectrum that
this process messes up.
So this is where it gets interesting, because I suspect that your
1% "let's play it safe" threshold is overly pessimistic.
I agree that there are other error processes than white PM which
would get messed up by this and that general low-pass filtering
would be much more suspect.
But what bothers me is that as far as I can tell from real-life
measurements, as long as the dominant noise process is white PM,
even 99% Tau averaging gives me the right result.
I have tried to find a way to plug this into the MVAR definition
based on phase samples (Wikipedia's first formula under "Definition")
and as far as I can tell, it comes out the same in the end, provided
I assume only white PM noise.
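Here is the kind of numerical check I mean, as a sketch (white PM only,
made-up noise level):

    import numpy as np

    rng = np.random.default_rng(3)
    raw = rng.normal(0.0, 30e-12, (1800, 100))   # 30 min of white PM at 100 Hz

    def mvar(x, m, tau0):
        xp = np.convolve(x, np.ones(m) / m, mode='valid')
        d = xp[2*m:] - 2*xp[m:-m] + xp[:-2*m]
        return np.mean(d**2) / (2 * (m * tau0)**2)

    # MVAR at tau = 10 s from the full 100 Hz record ...
    full = mvar(raw.ravel(), 1000, 0.01)
    # ... versus MVAR on 1 Hz full-interval averages (all of tau0 spent averaging)
    avgd = mvar(raw.mean(axis=1), 10, 1.0)
    print(full, avgd)    # agree within estimator noise for white PM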
But I have not found any references to this "optimization" anywhere
and either I'm doing something wrong, or I'm doing something else
wrong.
I'd like to know which it is :-)
Hi
On Aug 1, 2015, at 4:32 PM, Poul-Henning Kamp phk@phk.freebsd.dk wrote:
In message 49C4CCD3-09CE-48A4-82B8-9285A43814E3@n1k.org, Bob Camp writes:
The approach you are using is still a discrete time sampling
approach. As such it does not directly violate the data requirements
for ADEV or MADEV. As long as the sample burst is much shorter
than the Tau you are after, this will be true. If the samples cover < 1%
of the Tau, it is very hard to demonstrate a noise spectrum that
this process messes up.
So this is where it gets interesting, because I suspect that your
1% "let's play it safe" threshold is overly pessimistic.
I completely agree with that. It’s more a limit that lets you do some sampling but steers clear
of any real challenge to the method.
I agree that there are other error processes than white PM which
would get messed up by this and that general low-pass filtering
would be much more suspect.
But what bothers me is that as far as I can tell from real-life
measurements, as long as the dominant noise process is white PM,
even 99% Tau averaging gives me the right result.
Indeed, a number of people noticed this with low-pass filtering, back a
number of years ago (~1975)....
The key point being that white PM is the dominant noise process. If you have a discrete spur in there,
it will indeed make a difference. You can fairly easily construct a sample averaging process that drops a zero on a spur
(average over exactly a full period ...). How that works with discontinuous sampling is
not quite as clean as how it works with a continuous sample (you now average over N out of M periods...).
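To make the spur point concrete (a toy sketch, made-up numbers):

    import numpy as np

    fs, f_spur = 3000.0, 60.0    # assumed burst rate and spur frequency
    t = np.arange(50) / fs       # 50-point burst = 50/3000 s = one 60 Hz period
    spur = np.sin(2 * np.pi * f_spur * t + 0.3)
    print(spur.mean())           # ~0: averaging over exactly one period nulls it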
I have tried to find a way to plug this into the MVAR definition
based on phase samples (Wikipedia's first formula under "Definition")
and as far as I can tell, it comes out the same in the end, provided
I assume only white PM noise.
Which is why very sharp people debated filtering on ADEV for years before anything really
got even partially settled.
But I have not found any references to this "optimization" anywhere
and either I'm doing something wrong, or I'm doing something else
wrong.
I'd like to know which it is :-)
Well umm …. errr …. some people have been known to simply document
what they do. They then demonstrate that for normal noise processes it’s not an issue.
Do an “adequate” number of real world comparisons and then move on with it.
There are some pretty big names in the business that have gone that route. Some
of them are often referred to with three and four letter initials …. In this case probably note
the issue (or advantage !!) with discrete spurs and move on.
If you are looking for real fun, I would dig out Stein's paper on pre-filtering and ADEV. That would
give you a starting point and a framework to extend to MADEV.
Truth in lending: The whole discrete spur thing described above is entirely from work on a very similar
problem. I have not proven it with your sampling approach.
Bob