Lux, Jim
Tue, May 3, 2022 3:06 PM
On 5/3/22 1:57 AM, Matthias Welwarsky wrote:
Magnus, Jim - thanks a lot. Your post encouraged me to look especially into
flicker noise and how to generate it in the time domain. I now use randn() and
a low-pass filter. Also, I think I now understand how to create phase vs.
frequency noise.
There are some papers out there (mentioned on the list in the past) about
synthesizing colored noise. Taking "white" noise and running it through
a filter is one approach. Another is doing an inverse FFT, but that has
the issue of needing to know up front how many samples you need. Although I
suppose one could do some sort of continuous overlapping window scheme
(which, when it comes right down to it, is just another way of filtering).
https://dl.acm.org/doi/abs/10.1145/368273.368574
or
C. A. Greenhall. Models for Flicker Noise in DSN Oscillators. The Deep
Space Network Progress Report, Volume XIII, pp. 183-193, February 15, 1973.
https://ipnpr.jpl.nasa.gov/progress_report/XIII/XIIIY.PDF
W. J. Hurd. Efficient Generation of Statistically Good Pseudonoise by
Linearly Interconnected Shift Registers. The Deep Space Network Progress
Report, Volume XI, pp. 1-1, October 15, 1972.
https://ipnpr.jpl.nasa.gov/progress_report/XI/XIQ.PDF
W. J. Hurd. A Wideband Digital Pseudo-Gaussian Noise Generator.
https://ipnpr.jpl.nasa.gov/progress_report/III/IIIP.PDF
Magnus Danielson
Tue, May 3, 2022 8:08 PM
Dear Matthias,
On 2022-05-03 10:57, Matthias Welwarsky wrote:
Dear all,
thanks for your kind comments, corrections and suggestions. Please forgive if
I don't reply to all of your comments individually. Summary response follows:
Attila - yes, I realize temperature dependence is one key parameter. I now model
this as a frequency shift over time.
Bob - I agree in principle, real world data is a good reality check for any
model, but only a few datasets are available and most of the time they
don't contain associated environmental data. You get a mix of effects without
any chance to isolate them.
Environmental effects tend to be recognizable by their periodic
behavior, i.e. the period of the day and the period of the heating/AC. Real
oscillator data tends to be quite relevant, as you can simulate what it
would mean to lock that oscillator up. TvB made a simulator on those
grounds. Good exercise.
Magnus, Jim - thanks a lot. Your post encouraged me to look especially into
flicker noise and how to generate it in the time domain. I now use randn() and
a low-pass filter. Also, I think I now understand how to create phase vs.
frequency noise.
Happy to get you up to speed on that.
One particular name to check out articles from is Charles "Chuck"
Greenhall, JPL.
For early work, also look at James "Jim" Barnes, NBS (later renamed NIST).
Almost anything these fine gentlemen write on the topic is recommended
reading.
I've some Timelab screenshots attached, ADEV and frequency plot of a data set
I generated with the following matlab function, plus some temperature response
modeled outside of this function.
function [phase] = synth_osc(samples,da,wpn,wfn,fpn,ffn)
# low-pass butterworth filter for 1/f noise generator
[b,a] = butter(1, 0.1);
Notice that 1/f refers to the power-spectral density; a straight low-pass
filter will give you 1/f^2 in the power spectrum, just like an integration slope.
One approach to a flicker filter is an IIR filter with the weighting
1/sqrt(n+1), where n is the tap index, fed with normally distributed noise.
You need to "flush out" the state before you use it, so that you have a long
history to help the shaping. For a 1024-sample series, I generate 2048 samples
and only use the last 1024. Efficient? No. Quick-and-dirty? Yes.
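A minimal Octave sketch of that weighting approach (the lengths and the
truncated-convolution realization are illustrative choices, just one way to do it):
# flicker (1/f PSD) noise: impulse response weighted as 1/sqrt(n+1),
# realized here as a truncated convolution over twice the wanted length
N = 1024;                      # samples to keep
M = 2*N;                       # generate twice as many, discard the first half
h = 1 ./ sqrt((0:M-1) + 1);    # weights 1/sqrt(n+1), n = tap index
w = randn(1, M);               # white, normally distributed input noise
y = filter(h, 1, w);           # apply the weighting filter
flicker = y(M-N+1:end);        # keep the last N samples, state "flushed out"
Used directly, this plays the role of the fpn term in the quoted synth_osc code;
a cumsum() of it gives the ffn (flicker frequency) term.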
The pole/zero type of filters from Barnes lets you synthesize a 1/f slope
by balancing the properties. How dense, and thus how small the ripples you
get, you decide. Greenhall made the point of recording the filter state, and
provides BASIC code that calculates the state directly rather than running an
infinite sequence to let the initial state converge to the 1/f state.
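For a flavor of that pole/zero idea, here is a crude Octave sketch (the corner
frequencies, spacing, and number of sections are illustrative only, not Barnes'
or Greenhall's actual designs):
# approximate a 1/f power slope (-10 dB/decade) with alternating poles and zeros,
# corner frequencies spaced geometrically; purely illustrative parameters
fs   = 1.0;                  # normalized sample rate
fmin = 1e-4; K = 3; m = 10;  # lowest pole corner, number of sections, spacing ratio
x = randn(1, 100000);        # white input noise
y = x;
for k = 0:K-1
  fp = fmin * m^k;           # pole corner frequency
  fz = fp * sqrt(m);         # zero corner, geometrically half-way to the next pole
  p  = exp(-2*pi*fp/fs);     # map the corners to first-order coefficients
  z  = exp(-2*pi*fz/fs);
  y  = filter([1 -z], [1 -p], y);   # one pole/zero section
end
# y now has an approximately 1/f power spectrum between the lowest and highest corners
Denser spacing (smaller m, more sections) trades computation for smaller ripple,
which is the balancing mentioned above.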
Greenhall published an article illustrating a whole range of methods to
do it. He wrote the simulation code used at JPL for their clock
development.
Flicker noise is indeed picky.
Cheers,
Magnus
# aging
phase = (((1:samples)/86400).^2)*da;
# white phase noise
phase += (randn(1, samples))*wpn;
# white frequency noise
phase += cumsum(randn(1, samples))*wfn;
# 1/f phase noise
phase += filter(b,a,randn(1,samples))*fpn;
# 1/f frequency noise
phase += cumsum(filter(b,a,randn(1,samples))*ffn);
end
osc = synth_osc(400000, -50e-6, 5e-11, 1e-11, 5e-11, 5e-11);
Thanks.
On Monday, 2 May 2022 17:12:47 CEST, Matthias Welwarsky wrote:
Dear all,
I'm trying to come up with a reasonably simple model for an OCXO that I can
parametrize to experiment with a GPSDO simulator. For now I have the
following matlab function that "somewhat" does what I think is reasonable,
but I would like a reality check.
This is the matlab code:
function [phase] = synth_osc(samples,da,wn,fn)
# aging
phase = (((1:samples)/86400).^2)*da;
# white noise
phase += (rand(1,samples)-0.5)*wn;
# flicker noise
phase += cumsum(rand(1,samples)-0.5)*fn;
end
There are three components in the model, aging, white noise and flicker
noise, with everything expressed in fractions of seconds.
The first term basically creates a base vector that has a quadratic aging
function. It can be parametrized e.g. from an OCXO datasheet, daily aging
given in s/s per day.
The second term models white noise. It's just a random number scaled to the
desired 1-second uncertainty.
The third term is supposed to model flicker noise. It's basically a random
walk scaled to the desired magnitude.
As an example, the following function call would create a phase vector for a
10 MHz oscillator with one day's worth of samples, with an aging of about 5
millihertz per day, 10ps/s white noise and 10ns/s of flicker noise:
phase = synth_osc(86400, -44e-6, 10e-12, 10e-9);
What I'd like to know - is that a "reasonable" model or is it just too far
off of reality to be useful? What could be changed or improved?
Best regards,
Matthias
Bob kb8tq
Tue, May 3, 2022 9:23 PM
Hi
The gotcha is that there are a number of very normal OCXO “behaviors” that are not
covered by any of the standard statistical models. Coping with these issues is at least
as important as working with the stuff that is covered by the standard statistical
models ….
Bob
Attila Kinali
Wed, May 4, 2022 4:49 PM
On Tue, 3 May 2022 08:06:22 -0700, "Lux, Jim" <jim@luxfamily.com> wrote:
There are some papers out there (mentioned on the list in the past) about
synthesizing colored noise. Taking "white" noise and running it through
a filter is one approach. Another is doing an inverse FFT, but that has
the issue of needing to know up front how many samples you need. Although I
suppose one could do some sort of continuous overlapping window scheme
(which, when it comes right down to it, is just another way of filtering).
https://dl.acm.org/doi/abs/10.1145/368273.368574
This one does not work with the noises relevant to oscillators.
First of all, 1/f^a noise is not stationary: the conditional
expectation E[X(t)|X(t_0)], knowing the value X(t_0) at time t_0,
is E[X(t)|X(t_0)] = X(t_0). I.e. it is neither the mean of X(t)
nor its value X(0) at some start time. Also, the
ensemble variance grows over time (IIRC with t^a).
Yes, this also means that 1/f^a noise is not ergodic and thus
most of what you learned in statistics classes does not apply!
You have been warned! ;-)
Second, 1/f^a noise has singularities in its auto-covariance
for a = 2n + 1 (n = 0, 1, 2, ...), most notably for a=1 and a=3.
See Mandelbrot and Van Ness' work [1] on this.
Over the years, I've tried a few methods of generating noise
for oscillator simulation, but only two had any practical
value (others had some flaws somewhere that made them sub-optimal):
- FFT based systems
- Filter based systems
FFT based systems take a white, normally distributed noise source,
Fourier transform it, filter it in the frequency domain and transform
it back. Runtime is dominated by the FFT and thus O(n*log(n)).
There was a nice paper by either Barnes or Greenhall (or both?)
on this, which I currently seem unable to find. This is also the
method employed by the bruiteur tool from sigma-theta.
The biggest disadvantage of this method is that it operates on the
whole sample length multiple times, i.e. it becomes slow very
quickly, especially when the whole sample length is larger
than main memory. But it delivers exact results with exactly
the spectrum / time-correlation you want.
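As a rough Octave sketch of the idea (not bruiteur's actual code; the scaling
and the handling of the DC bin are simplified):
# 1/f^a noise by shaping white noise in the frequency domain; a = 1 gives flicker
N = 2^16; a = 1;
X = fft(randn(1, N));            # spectrum of white, normally distributed noise
f = [1, 1:N/2, N/2-1:-1:1];      # |frequency| index per bin, DC bin left untouched
X = X ./ (f .^ (a/2));           # amplitude shaped as f^(-a/2) => PSD ~ 1/f^a
y = real(ifft(X));               # back to the time domain
The shaping keeps the Hermitian symmetry of the original real noise, so real()
only strips numerical round-off.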
Filter based approaches use some approximation with linear
filters to get the same effect. They can achieve O(n*log(n)) as
well, with O(log(n)) memory consumption if done right [2]
(based on [3], which explains the filter in simpler terms).
Be aware that it's only an approximation and that the spectrum
will have some wiggles, though this shouldn't be a problem
in practice. Speed is, in my experience, slightly quicker
than FFT based approaches, as less memory is touched. But
it uses quite a bit more randomness and thus the random
number generator becomes the bottleneck. But I also have to
admit, the implementation I had for comparison was far from
well coded, so take this with a grain of salt.
There is also the option of doing fractional integration, as
proposed by Barnes and Allan in [4]. Unfortunately, the naive
implementation of this approach leads to O(n^2) runtime and
O(n) memory consumption. There are faster algorithms for
fractional integration (e.g. [5]) and I have a few ideas of my
own, but I haven't had time to try any of them yet.
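The naive version is at least short. A sketch of fractionally integrating white
noise by order d = a/2 using the binomial weights of (1 - z^-1)^(-d) (this is an
illustration of the idea, not the code from [4]):
# naive fractional integration: 1/f^a noise from white noise, d = a/2
# d = 0.5 gives flicker; the length-N weight vector makes filter() O(N^2) here
N = 4096; d = 0.5;
h = zeros(1, N);
h(1) = 1;
for k = 2:N
  h(k) = h(k-1) * (k - 2 + d) / (k - 1);   # recursion for the binomial weights
end
w = randn(1, N);        # white, normally distributed input
y = filter(h, 1, w);    # truncated convolution with the fractional-integration kernel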
And while we are at it, another warning: rand(3) and, by extension,
all random number generators based on it have a rather small
number of states. IIRC the one implemented in glibc has only 31
bits of state. While two billion states sounds like a lot, this
isn't that much for simulation of noise in oscillators. Even
algorithms that are efficient in terms of randomness consume
tens of bytes of random numbers per sample produced. If a
normal distribution is approximated by averaging samples, that
quickly goes into the hundreds of bytes. And suddenly, the
random number generator wraps around after just a few million
samples. Even fewer in a filter based approach.
Thus, I would highly recommend using a PRNG with a large state,
like xoroshiro1024* [6]. That the algorithm
does not pass all randomness tests with a perfect score isn't as
much of an issue in this application, as long as the random
numbers are sufficiently uncorrelated (which they are).
I also recommend using an efficient Gaussian random number
generator instead of averaging. Not only does averaging
create a maximum and minimum value based on the number of
samples averaged over, it is also very slow because it uses
a lot of random numbers. A better approach is to use the
Ziggurat algorithm [7], which uses only about 72-80 bits
of entropy per generated sample.
And before you ask, yes, sigma-theta/bruiteur uses xoroshiro1024*
and the Ziggurat algorithm ;-)
Attila Kinali
[1] Fractional Brownian Motions, Fractional Noises and Applications,
by Mandelbrot and Van Ness, 1968
http://users.math.yale.edu/~bbm3/web_pdfs/052fractionalBrownianMotions.pdf
[2] Efficient Generation of 1/f^a Noise Sequences for Pulsed Radar
Simulation, by Brooker and Inggs, 2010
[3] Efficient modeling of 1/f^a Noise Using Multirate Process,
by Park, Muhammad, and Roy, 2006
[4] A Statistical Model of Flicker Noise, by Barnes and Allan, 1966
[5] A high-speed algorithm for computation of fractional
differentiation and fractional integration,
by Fukunaga and Shimizu, 2013
http://dx.doi.org/10.1098/rsta.2012.0152
[6] https://prng.di.unimi.it/
[7] The Ziggurat Method for Generating Random Variables,
by Marsaglia and Tsang, 2000
http://dx.doi.org/10.18637/jss.v005.i08
The driving force behind research is the question: "Why?"
There are things we don't understand and things we always
wonder about. And that's why we do research.
-- Kobayashi Makoto
Attila Kinali
Wed, May 4, 2022 4:50 PM
Hoi Bob,
On Tue, 3 May 2022 16:23:27 -0500
Bob kb8tq <kb8tq@n1k.org> wrote:
The gotcha is that there are a number of very normal OCXO “behaviors” that are not
covered by any of the standard statistical models.
Could you elaborate a bit on what these "normal behaviours" are?
Attila Kinali
--
The driving force behind research is the question: "Why?"
There are things we don't understand and things we always
wonder about. And that's why we do research.
-- Kobayashi Makoto
Matthias Welwarsky
Wed, May 4, 2022 5:31 PM
Magnus, Attila, Bob,
thanks again for the inspirational posts, truly appreciated.
However, I'm looking for something reasonably simple, just for the purpose of
GPSDO simulation. Here, most of the finer details of the noise are not very
relevant. I don't really care for PSD, for example. What I'm looking for is a
tool that can produce a phase vector that just resembles what a real
oscillator is doing, looking from afar, with a little squinting. For example:
synth_osc(N, -1e-8, 2.5e-11, 2e-11, 0, 0);
This gives me a vector that, as far as Allan deviation is concerned, looks
remarkably like an LPRO-101. With some other parameters I can produce a
credible resemblance to a PRS10. Add a bit of temperature wiggle and it's
enough to run it through the simulator and tune parameters. The finer details
are lost on a GPSDO anyway. Reactions to transients, especially from
GPS, are much more interesting, which is why the logical next step is to
produce a GPS (or GNSS) phase vector that can be parametrized and spiked with
some oddities to see how different control loop parameters influence the
output. But I don't have an immediate need for that; the GPS data files on
Leapsecond.com are enough for now.
Regards,
Matthias
Bob kb8tq
Thu, May 5, 2022 12:17 AM
Hi
The most basic is the “phase pop” that is not modeled by any of the
normal noise formulas. The further you dig in, the more you find things
that the models really don’t cover.
Bob
The gotcha is that there are a number of very normal OCXO “behaviors” that are not
covered by any of the standard statistical models.
Could you elaborate a bit on what these "normal behaviours" are?
Attila Kinali
--
The driving force behind research is the question: "Why?"
There are things we don't understand and things we always
wonder about. And that's why we do research.
-- Kobayashi Makoto
Attila Kinali
Thu, May 5, 2022 4:01 PM
On Wed, 04 May 2022 19:31:08 +0200, Matthias Welwarsky <time-nuts@welwarsky.de> wrote:
However, I'm looking for something reasonably simple, just for the purpose of
GPSDO simulation. Here, most of the finer details of the noise are not very
relevant.
If you want it very simple, then I would only add white noise (i.e. uncorrelated
Gaussian noise) and 1/f^4 noise (integrate twice) at the appropriate level.
This should give you the simplest, yet kind of faithful, approximation.
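In the notation of the synth_osc examples above, that could be as little as
(levels purely illustrative):
# simplest model: white phase noise plus a double-integrated (1/f^4 in phase) term
N   = 86400;
wpn = 1e-11;    # white phase noise level, placeholder value
rw  = 1e-13;    # random-walk (frequency random walk) level, placeholder value
phase = randn(1, N)*wpn + cumsum(cumsum(randn(1, N)))*rw;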
But be aware that you will miss a lot of the corner cases and dynamic
things that happen, which make the control loop design of a GPSDO challenging.
I don't really care for PSD, for example. What I'm looking for is a
tool that can produce a phase vector that just resembles what a real
oscillator is doing, looking from afar, with a little squinting.
This is what we are getting at: a pure noise simulation might not get
you close enough for a faithful simulation to design a GPSDO control loop.
This gives me a vector that, as far as Allan deviation is concerned, looks
remarkably like an LPRO-101. With some other parameters I can produce a
credible resemblance to a PRS10.
Oh... a big fat warning here: ADEV and friends are very bad tools for checking
whether two oscillator data sets are similar in behavior or not. The *DEVs
do a lot of data compression. And it's logarithmic, too! You enter a million
data points and get ~50 data points out. You enter 10 million data points and
get ~60 data points out. A lot is lost in this compression. Unless
you make some quite limiting assumptions on what kind of behaviour the
oscillators are allowed to have, *DEV cannot be used for validation.
Attached is the ADEV plot of an LPRO against a GPSDO. I guess yours looks
similar? You can see the shoulder due to the GPSDO's internal oscillator,
then going down with 1/tau to a few parts in 1e-13 and going up again.
It looks pretty normal and you can identify white/flicker phase noise and
a frequency random walk region. A white frequency noise region
seems to be absent. So we expect to see mostly white noise and frequency
random walk.
Let us now look at the phase plot... Does it look like what we expect?
Not quite. We see two or three distinct regions where the frequency seems
to be pretty stable, but in between there is a change in frequency which occurs
over a stretch of a few hours/days. Not quite what we expected. It definitely
does not look like frequency random walk. At best we can approximate it with
three regions of almost constant frequency.
Now, let us zoom in on the first 114 days, which seem to be pretty stable.
This doesn't look like frequency random walk either. Again, we have regions
of almost constant frequency that are separated by hours/days of close to
constant frequency drift. But not only that, there seems to be an overarching
arc of decreasing frequency, and it does not decrease linearly but rather in
discrete steps that take time.
With this in mind, a faithful model of an LPRO would look quite different
from what one would have guessed from the ADEV.
(Before anyone asks: no, the frequency changes are not correlated with temperature
or ambient air pressure. My best guess is that they are light-shift related, or
changes in the vapor cell, but I cannot prove it.)
Attila Kinali
--
In science if you know what you are doing you should not be doing it.
In engineering if you do not know what you are doing you should not be doing it.
-- Richard W. Hamming, The Art of Doing Science and Engineering
Magnus Danielson
Thu, May 5, 2022 4:34 PM
Hi,
Could not agree more on this point.
It's even to the point that we have two standards for it: IEEE Std 1139
for the basic measures and noises, and then IEEE Std 1193 for the
"environmentals", or rather, the rest.
Both are being revised; 1139 just went out for the re-balloting process
after receiving balloting comments, and 1193 is just about to be approved to
be sent to balloting. The work has been led by NIST T&F department
chief Elizabeth Donley, who also got an award at EFTF-IFCS 2022 for her
contributions to the field, and to these standards in particular. Very well
deserved, I might add.
While simple models help to test and analyze specific things in
isolation, as you bring things together and move towards real-life
application, reality is always somewhat different from the models. It
takes some time to learn just how much you can pick up from models and
how to use them to prepare for the expected real-life situation.
The two IEEE standards have tons of references, and it is well worth
following those. The IEEE UFFC also has quite a bit of educational
reference material at hand, which is also worthwhile reading.
Cheers,
Magnus
On 2022-05-03 23:23, Bob kb8tq wrote:
Hi
The gotcha is that there are a number of very normal OCXO “behaviors” that are not
covered by any of the standard statistical models. Coping with these issues is at least
as important as working with the stuff that is covered by the standard statistical
models ….
Bob
On May 3, 2022, at 3:57 AM, Matthias Welwarsky time-nuts@welwarsky.de wrote:
Dear all,
thanks for your kind comments, corrections and suggestions. Please forgive if
I don't reply to all of your comments individually. Summary response follows:
Attila - yes, I realize temperature dependence is one key parameter. I model
this meanwhile as a frequency shift over time.
Bob - I agree in principle, real world data is a good reality check for any
model, but there are only so few datasets available and most of the time they
don't contain associated environmental data. You get a mix of effects without
any chance to isolate them.
Magnus, Jim - thanks a lot. Your post encouraged me to look especially into
flicker noise and how to generate it in the time domain. I now use randn() and
a low-pass filter. Also, I think I now understand how to create phase vs.
frequency noise.
I've attached some TimeLab screenshots, ADEV and frequency plots of a data set
I generated with the following matlab function, plus some temperature response
modeled outside of this function.
function [phase] = synth_osc(samples,da,wpn,wfn,fpn,ffn)
# low-pass butterworth filter for 1/f noise generator
[b,a] = butter(1, 0.1);

# aging
phase = (((1:samples)/86400).^2)*da;
# white phase noise
phase += (randn(1, samples))*wpn;
# white frequency noise
phase += cumsum(randn(1, samples))*wfn;
# 1/f phase noise
phase += filter(b,a,randn(1,samples))*fpn;
# 1/f frequency noise
phase += cumsum(filter(b,a,randn(1,samples))*ffn);
end
osc = synth_osc(400000, -50e-6, 5e-11, 1e-11, 5e-11, 5e-11);
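One caveat on the 1/f terms: white noise through a single first-order low-pass comes out flat below the corner and falls as 1/f^2 above it, so it only brackets a true 1/f slope. A commonly used time-domain approximation of flicker noise is instead a cascade of first-order pole-zero sections with corners spaced about a decade apart. A rough sketch of that idea, in Python/SciPy for illustration (function name, default corner frequencies and output scaling are all assumptions, not anything from the code above):

import numpy as np
from scipy import signal

def flicker_noise(n, fs=1.0, f_lo=1e-4, f_hi=None, rng=None):
    # Approximate 1/f noise by filtering white noise through a cascade of
    # first-order pole-zero sections, one pair per decade. The zero sits half
    # a decade above each pole, so the power spectrum falls by ~10 dB per
    # decade on average between f_lo and f_hi (flat outside that range).
    rng = np.random.default_rng() if rng is None else rng
    f_hi = fs / 4 if f_hi is None else f_hi
    x = rng.standard_normal(n)
    f_pole = f_lo
    while f_pole < f_hi:
        f_zero = f_pole * np.sqrt(10.0)
        # analog prototype (1 + s/wz)/(1 + s/wp), discretized via bilinear transform
        bz, az = signal.bilinear([1 / (2 * np.pi * f_zero), 1],
                                 [1 / (2 * np.pi * f_pole), 1], fs)
        x = signal.lfilter(bz, az, x)
        f_pole *= 10.0
    return x  # still needs scaling to the desired flicker level

For example, flicker_noise(86400) would give one day of 1 Hz samples with an approximately 1/f spectrum between 1e-4 Hz and 0.25 Hz; scaling the amplitude to a target ADEV is left out here.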
Thanks.
On Monday, 2 May 2022 17:12:47 CEST Matthias Welwarsky wrote:
Dear all,
I'm trying to come up with a reasonably simple model for an OCXO that I can
parametrize to experiment with a GPSDO simulator. For now I have the
following matlab function that "somewhat" does what I think is reasonable,
but I would like a reality check.
This is the matlab code:
function [phase] = synth_osc(samples,da,wn,fn)
# aging
phase = (((1:samples)/86400).^2)*da;
# white noise
phase += (rand(1,samples)-0.5)*wn;
# flicker noise
phase += cumsum(rand(1,samples)-0.5)*fn;
end
There are three components in the model, aging, white noise and flicker
noise, with everything expressed in fractions of seconds.
The first term basically creates a base vector that has a quadratic aging
function. It can be parametrized e.g. from an OCXO datasheet, daily aging
given in s/s per day.
The second term models white noise. It's just a random number scaled to the
desired 1-second uncertainty.
The third term is supposed to model flicker noise. It's basically a random
walk scaled to the desired magnitude.
As an example, the following function call would create a phase vector for a
10 MHz oscillator with one day's worth of samples, with an aging of about 5
millihertz per day, 10 ps/s white noise and 10 ns/s of flicker noise:
phase = synth_osc(86400, -44e-6, 10e-12, 10e-9);
What I'd like to know - is that a "reasonable" model or is it just too far
off of reality to be useful? What could be changed or improved?
Best regards,
Matthias
[Attachments: oscillator-freq.png, oscillator.png]
CA
Carsten Andrich
Tue, May 10, 2022 6:20 AM
First, thanks to everyone who chimed in on this highly interesting topic.
On 04.05.22 18:49, Attila Kinali wrote:
FFT-based systems take a white, normally distributed noise source,
Fourier transform it, filter it in the frequency domain and transform
it back. Runtime is dominated by the FFT and thus O(n*log(n)).
There was a nice paper by either Barnes or Greenhall (or both?)
on this, which I seem currently unable to find. This is also the
method employed by the bruiteur tool from sigma-theta.
The biggest disadvantage of this method is that it operates on the
whole sample length multiple times, i.e. it becomes slow very
quickly, especially when the whole sample length is larger
than main memory. But it delivers exact results with exactly
the spectrum / time-correlation you want.
If you happen to find the paper, please share a reference. I'm curious
about implementation details and side-effects, e.g., whether
implementing the filter via circular convolution (straightforward
multiplication in frequency-domain) carries any penalties regarding
stochastic properties due to periodicity of the generated noise.
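One pragmatic way to reduce the wrap-around effect of circular convolution, assuming a somewhat longer synthesis is affordable: generate several times more samples than needed and keep only a central segment, so that the circular correlations mostly fall outside the kept portion. For strongly long-memory power-law noise this is only approximate. An illustrative Python sketch (function name and pad factor are arbitrary):

import numpy as np

def powerlaw_noise_segment(n_keep, H, pad_factor=4, rng=None):
    # Synthesize pad_factor * n_keep samples by frequency-domain filtering and
    # keep only the central n_keep samples, pushing wrap-around correlations
    # outside the returned segment. H must be sampled on
    # np.fft.rfftfreq(pad_factor * n_keep).
    rng = np.random.default_rng() if rng is None else rng
    n = pad_factor * n_keep
    X = np.fft.rfft(rng.standard_normal(n))
    x = np.fft.irfft(X * H, n)
    start = (n - n_keep) // 2
    return x[start:start + n_keep]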
Also, any reason to do this via forward and inverse FFT? AFAIK the
Fourier transform of white noise is white noise, because the underlying
Gaussian is an eigenfunction of the transform. Generating it in the time
domain as follows (Python code using NumPy)
import numpy as np

N = int(1e6) # number of samples
A = 1e-9     # Allan deviation for tau = 1 sec
# generate white noise in time-domain
X = np.fft.rfft(np.random.normal(0, A * 3**-0.5, N))
# multiply with frequency response of desired power-law noise and apply inverse Fourier transform
# NOTE: implements circular convolution
x = np.fft.irfft(X * H)
should yield the same properties as generating the white noise directly
in the frequency domain (there may be an off-by-one-or-two in there
regarding variance scaling), but the latter will halve the computational
cost:
# generate white noise directly in frequency-domain
# NOTE: imaginary components at DC and Nyquist are discarded by irfft()
X = np.random.normal(0, A * 6**-0.5 * N**0.5, N+2).view(np.complex128())
# multiply with frequency response of desired power-law noise and apply inverse Fourier transform
# NOTE: implements circular convolution
x = np.fft.irfft(X * H)
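For completeness, H is left undefined in both snippets; as one possible choice (an assumption here, not something from the snippets above), a power-law PSD proportional to f^alpha can be produced with a magnitude response |H(f)| proportional to f^(alpha/2), with the DC bin forced to zero. Absolute amplitude scaling is a separate question, as per the variance-scaling caveat above:

import numpy as np

N = int(1e6)                     # number of samples, as above
alpha = -1                       # PSD exponent, e.g. -1 for a 1/f slope
f = np.fft.rfftfreq(N, d=1.0)    # frequency grid for tau0 = 1 s
H = np.zeros_like(f)
H[1:] = f[1:] ** (alpha / 2.0)   # |H| ~ f^(alpha/2); DC bin set to zero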
On 05.05.22 18:34, Magnus Danielson via time-nuts wrote:
Both are being revised and 1139 just went out for re-balloting
after receiving ballot comments and 1193 is just about to get
approved to be sent to balloting.
Any details you can share regarding the changes over the current
version? Are drafts publicly available?
Best regards,
Carsten