time-nuts@lists.febo.com

Discussion of precise time and frequency measurement


Re: [time-nuts] Modified Allan Deviation and counter averaging

MD
Magnus Danielson
Sun, Aug 2, 2015 5:52 AM

Hi Poul-Henning,

On 08/01/2015 10:32 PM, Poul-Henning Kamp wrote:


In message <49C4CCD3-09CE-48A4-82B8-9285A43814E3@n1k.org>, Bob Camp writes:

The approach you are using is still a discrete time sampling
approach. As such it does not directly violate the data requirements
for ADEV or MADEV.  As long as the sample burst is much shorter
than the Tau you are after, this will be true. If the samples cover < 1%
of the Tau, it is very hard to demonstrate a noise spectrum that
this process messes up.

So this is where it gets interesting, because I suspect that your
1% "let's play it safe" threshold is overly pessimistic.

I agree that there are other error processes than white PM which
would get messed up by this and that general low-pass filtering
would be much more suspect.

But what bothers me is that as far as I can tell from real-life
measurements, as long as the dominant noise process is white PM,
even 99% Tau averaging gives me the right result.

I have tried to find a way to plug this into the MVAR definition
based on phase samples (Wikipedia's first formula under "Definition")
and as far as I can tell, it comes out the same in the end, provided
I assume only white PM noise.

I put that formula there, and I think Dave trimmed the text a little.
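For anyone who wants to check this numerically, that phase-sample definition translates almost directly into code. Here is a minimal Python sketch of it (the function and variable names are mine, for illustration only, not from any library):

```python
import numpy as np

def mdev(x, tau0, m):
    """Modified Allan deviation from phase samples x (in seconds),
    taken at interval tau0, for averaging factor m (tau = m * tau0).
    Direct transcription of the phase-sample definition."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    n_terms = N - 3 * m + 1
    acc = 0.0
    for j in range(n_terms):
        # Inner sum: second differences over m adjacent phase triplets.
        inner = np.sum(x[j + 2 * m : j + 3 * m]
                       - 2.0 * x[j + m : j + 2 * m]
                       + x[j : j + m])
        acc += inner ** 2
    mvar = acc / (2.0 * m ** 2 * (m * tau0) ** 2 * n_terms)
    return np.sqrt(mvar)
```

A quick sanity check: a pure frequency offset (linear phase ramp) has zero second differences, so mdev of a ramp comes out as zero, as it should.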

For true white PM random noise you can move your phase samples around,
but you gain nothing by bursting them. For any other form of random
noise, and for systematic noise, you alter the total filtering
behavior as compared to AVAR or MVAR, and it is through altering the
frequency behavior that biases in the values are born. MVAR itself has
biases compared to AVAR for all noises due to its filtering behavior.

The bursting that you propose is similar to the uneven spreading of
samples you have in dead-time sampling, where the time between the
start samples of your frequency measures is T, but the time between
the start and stop samples of each frequency measure is tau. This
creates a different coloring of the spectrum than if the stop sample
of the previous frequency measure is also the start sample of the next
one. This coloring then creates a bias depending on the frequency
spectrum of the noise (systematic or random), so you need to correct
it with the appropriate bias function. See the bias-functions section
of the Allan deviation Wikipedia article, and do read Dave Allan's
original February 1966 article.

To do what you propose, you will have to define the time properties
of the burst, so you would need the time between bursts (tau) and the
time between burst samples (alpha). You would also need to define the
number of burst samples (O). You can derive a bias function through
analysis, but you can already sketch the behavior for various noises.
For white random phase noise there is no correlation between phase
samples, which also makes the time between them uninteresting, so we
can rearrange our sampling for that noise as we see fit. For other
noises you will create a coloring, and I predict that the number of
averaged samples O will set the filtering effect, while the time
between samples should not be important. For systematic noise such as
quantization noise you will again interact, and with a filtering
effect.
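To make the white-PM case concrete: a small simulation (all numbers here are illustrative assumptions, not measured values) shows that averaging O uncorrelated phase samples reduces the deviation by sqrt(O), regardless of where inside the interval the samples are placed:

```python
import numpy as np

rng = np.random.default_rng(42)

sigma_x = 1e-10   # white PM level per raw phase sample (illustrative)
O = 100           # samples averaged per burst
bursts = 1000     # number of bursts, one per tau interval

# White PM: every phase sample is statistically independent, so the
# spacing of the O samples inside the burst is irrelevant for this
# noise type -- only the count O matters.
raw = rng.normal(0.0, sigma_x, size=(bursts, O))
burst_means = raw.mean(axis=1)

# The deviation of the burst means should be close to sigma_x/sqrt(O).
print(burst_means.std(), sigma_x / np.sqrt(O))
```

For colored noise the samples are correlated, so this sqrt(O) gain does not hold and the placement of the samples starts to matter, which is exactly the coloring effect described above.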

Sometimes the filtering effect is useful (see MVAR and PVAR), but
often it is just an uninteresting side effect.

But I have not found any references to this "optimization" anywhere
and either I'm doing something wrong, or I'm doing something else
wrong.

I'd like to know which it is :-)

You're doing it wrong. :)

PS. I'm at a music festival, so my quality references are at home.

Cheers.
Magnus

PK
Poul-Henning Kamp
Sun, Aug 2, 2015 11:07 PM

In message <55BDB002.8060408@rubidium.dyndns.org>, Magnus Danielson writes:

For true white PM random noise you can move your phase samples around,
but you gain nothing by bursting them.

I gain nothing mathematically, but in practical terms it would be
a lot more manageable to record an average of 1000 measurements
once per second, than 1000 measurements every second.

For any other form of random
noise and for the systematic noise, you alter the total filtering
behavior [...]

Agreed.

--
Poul-Henning Kamp      | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG        | TCP/IP since RFC 956
FreeBSD committer      | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

MD
Magnus Danielson
Tue, Aug 4, 2015 11:11 AM

Poul-Henning,

On 08/03/2015 01:07 AM, Poul-Henning Kamp wrote:


In message <55BDB002.8060408@rubidium.dyndns.org>, Magnus Danielson writes:

For true white PM random noise you can move your phase samples around,
but you gain nothing by bursting them.

I gain nothing mathematically, but in practical terms it would be
a lot more manageable to record an average of 1000 measurements
once per second, than 1000 measurements every second.

Yes, averaging them in blocks and only sending the block result is
indeed a good thing, as long as we can establish the behavior and
avoid or remove any biases introduced. Bursting them in itself does
not give much gain, as the processing needs to be done anyway and an
even rate works just as well. A benefit of a small burstiness is that
you can work on beat notes that are not a multiple of the tau0 you
want, say 1 s.

As in any such processing, cycle unwrapping needs to be done first,
as leaving it out would waste the benefit.
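A minimal sketch of such an unwrap step, assuming the phase-time readings are only known modulo one carrier period and the true phase moves less than half a period between consecutive samples (the helper name is mine, for illustration):

```python
import numpy as np

def unwrap_phase(x_mod, period):
    """Unwrap phase-time readings known only modulo one carrier
    period, assuming the true phase moves less than half a period
    between consecutive samples (illustrative helper)."""
    x_mod = np.asarray(x_mod, dtype=float)
    out = x_mod.copy()
    wraps = 0.0
    for i in range(1, len(out)):
        step = x_mod[i] - x_mod[i - 1]
        # A jump larger than half a period means we crossed a wrap.
        if step > period / 2:
            wraps -= period
        elif step < -period / 2:
            wraps += period
        out[i] = x_mod[i] + wraps
    return out
```

Averaging the wrapped readings directly would mix values from different cycles and destroy the block average, which is the waste referred to above.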

For random noise, the effect of the burst, or indeed of aggregating
into blocks of samples, is just the same as doing overlapping
processing, as was done for ADEV in the early 1970s as a first step
towards better confidence intervals. For white noise there is no
correlation between any samples, so you can sample them at random.
However, for ADEV the point is to analyze this for a particular
observation interval, so for each measure being squared the
observation interval needs to be respected. For the colored noises
there is a correlation between the samples, and it is the correlation
over the observation interval that is the main filtering mechanism of
the ADEV. However, since the underlying source is noise, you can use
any set of phase triplets to add to the accumulated variance. The
burst or block average provides such an overlapping processing
mechanism.
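The overlapping processing referred to above can be sketched as follows; this is the standard overlapping Allan deviation estimator written out in Python (the names are illustrative):

```python
import numpy as np

def oadev(x, tau0, m):
    """Overlapping Allan deviation from phase samples x (seconds) at
    interval tau0, averaging factor m (tau = m * tau0). Uses every
    phase triplet (x[i], x[i+m], x[i+2m]), not just the
    non-overlapping ones, which improves confidence."""
    x = np.asarray(x, dtype=float)
    # Second differences over all overlapping triplets.
    d2 = x[2 * m :] - 2.0 * x[m : -m] + x[: -2 * m]
    avar = np.mean(d2 ** 2) / (2.0 * (m * tau0) ** 2)
    return np.sqrt(avar)
```

Each squared term still spans exactly one observation interval tau, which is the requirement stated above: the overlap adds more triplets without changing the interval being analyzed.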

However, systematic noise such as the counter's first-order time
quantization (ignoring any fine-grained variations) will interact in
different ways with burst sampling depending on the burst length.
This is the part we should look at to see how we can best reduce that
noise, in order to reach the actual signal and reference noise more
quickly.

For any other form of random
noise and for the systematic noise, you alter the total filtering
behavior [...]

Agreed.

I wonder whether the filter properties of the burst average are
altered compared to an evenly spread block, such that we get a
difference when they are treated as MDEV measures. The burst's filter
properties should be similar to those of PWM at the burst repetition
rate.

I just contradicted myself. I will come back to this topic; one has
to be careful, as filter properties will color the result and biases
can occur. Most of these biases are not very useful, but the MDEV
averaging is, if used correctly.
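To illustrate the filter-shape difference being discussed, here is a short sketch comparing the frequency response of an average taken over evenly spread samples versus one bunched into a burst at the start of the block (tau, O and the 5% burst width are arbitrary illustrative choices):

```python
import numpy as np

def avg_response(sample_times, f):
    """Magnitude response at frequency f of a uniform average over
    the given sample instants: |(1/N) * sum_k exp(-2j*pi*f*t_k)|."""
    t = np.asarray(sample_times, dtype=float)
    return abs(np.mean(np.exp(-2j * np.pi * f * t)))

tau = 1.0   # block length in seconds (illustrative)
O = 10      # samples averaged per block

spread = np.arange(O) * tau / O            # evenly spread over the block
burst = np.arange(O) * (0.05 * tau / O)    # bunched into the first 5%

# The evenly spread average nulls a tone at f = 1/tau completely,
# while the burst average passes it almost untouched: two quite
# different filter shapes for the same number of averaged samples.
print(avg_response(spread, 1.0 / tau), avg_response(burst, 1.0 / tau))
```

This is the kind of coloring difference that would show up as a bias when the two averaging schemes are fed into MDEV.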

Cheers,
Magnus
