time-nuts@lists.febo.com

Discussion of precise time and frequency measurement


Simple simulation model for an OCXO?

Neville Michie
Tue, May 10, 2022 8:37 AM

The use of a forward then a reverse Fourier transform is one of the most important
applications of the Fourier transform. When one data set is convolved with
another data set, it appears impossible to undo the tangle.
But if the data is transformed into the Fourier domain, element-by-element division
can separate the data, and transformation back will yield the original data.
See: "The Fourier transform and its applications" by Ron Bracewell.

Cheers, Neville Michie

On 10 May 2022, at 16:20, Carsten Andrich carsten.andrich@tu-ilmenau.de wrote:

First, thanks to everyone who chimed in on this highly interesting topic.

On 04.05.22 18:49, Attila Kinali wrote:

FFT-based systems take a white, normally distributed noise source,
Fourier transform it, filter it in frequency domain and transform
it back. Runtime is dominated by the FFT and thus O(n*log(n)).
There was a nice paper by either Barnes or Greenhall (or both?)
on this, which I seem currently unable to find. This is also the
method employed by the bruiteur tool from sigma-theta.

Biggest disadvantage of this method is that it operates on the
whole sample length multiple times. I.e., it becomes slow very
quickly, especially when the whole sample length is larger
than main memory. But they deliver exact results with exactly
the spectrum / time-correlation you want.

If you happen to find the paper, please share a reference. I'm curious about implementation details and side-effects, e.g., whether implementing the filter via circular convolution (straightforward multiplication in frequency-domain) carries any penalties regarding stochastic properties due to periodicity of the generated noise.

Also, any reason to do this via forward and inverse FFT? AFAIK the Fourier transform of white noise is white noise, because the underlying Gaussian is an eigenfunction of the transform. Generating in the time domain as follows (Python code using NumPy):

N = int(1e6)  # number of samples
A = 1e-9      # Allan deviation for tau = 1 sec

# generate white noise in time-domain
X = np.fft.rfft(np.random.normal(0, A * 3**-0.5, N))

# multiply with frequency response of desired power-law noise and apply
# inverse Fourier transform
# NOTE: implements circular convolution
x = np.fft.irfft(X * H)

should yield the same properties as generating the white noise directly in frequency-domain (there may be an off-by-one-or-two in there regarding variance scaling), but the latter will halve the computational cost:

# generate white noise directly in frequency-domain
# NOTE: imaginary components at DC and Nyquist are discarded by irfft()
X = np.random.normal(0, A * 6**-0.5 * N**0.5, N+2).view(np.complex128)

# multiply with frequency response of desired power-law noise and apply
# inverse Fourier transform
# NOTE: implements circular convolution
x = np.fft.irfft(X * H)
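For completeness, the fragments above combined into a self-contained sketch. The email leaves H unspecified, so the mask below is a hypothetical choice: a 1/sqrt(f) magnitude (1/f power spectrum, i.e. flicker) with the divergent DC bin zeroed; the seed and the smaller N are likewise assumptions for a quick run.

```python
import numpy as np

N = 1 << 16    # number of samples (smaller than the 1e6 above, for speed)
A = 1e-9       # Allan deviation for tau = 1 sec
rng = np.random.default_rng(42)

# generate white noise directly in the frequency domain; the imaginary
# components at DC and Nyquist are discarded by irfft()
X = rng.normal(0, A * 6**-0.5 * N**0.5, N + 2).view(np.complex128)

# hypothetical shaping mask H: 1/sqrt(f) magnitude (flicker),
# with the divergent DC bin simply zeroed
f = np.fft.rfftfreq(N)
H = np.concatenate(([0.0], f[1:] ** -0.5))

# multiply and inverse-transform; NOTE: implements circular convolution
x = np.fft.irfft(X * H)
print(x.size)
```

This produces N real samples with an approximately 1/f power spectrum, subject to the periodicity caveat discussed later in the thread.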

On 05.05.22 18:34, Magnus Danielson via time-nuts wrote:

Both are being revised: 1139 just went out for re-balloting after receiving ballot comments, and 1193 is about to be approved to be sent to balloting.

Any details you can share regarding the changes over the current version? Are drafts publicly available?

Best regards,
Carsten


time-nuts mailing list -- time-nuts@lists.febo.com -- To unsubscribe send an email to time-nuts-leave@lists.febo.com

Attila Kinali
Tue, May 10, 2022 10:58 AM

On Tue, 10 May 2022 08:20:35 +0200
Carsten Andrich carsten.andrich@tu-ilmenau.de wrote:

If you happen to find the paper, please share a reference. I'm curious
about implementation details and side-effects, e.g., whether
implementing the filter via circular convolution (straightforward
multiplication in frequency-domain) carries any penalties regarding
stochastic properties due to periodicity of the generated noise.

Yes. For one, real noise is not periodic, and neither is the filter you need.
You can't build a filter with a 1/sqrt(f) slope over the whole
frequency range. That would require a fractional integrator, which
is a non-trivial task. Unless you actually do fractional integration,
all time-domain filters will be approximations of the required filter.
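For reference, one concrete discrete fractional integrator is the Hosking/Kasdin recursion h[k] = h[k-1]*(k-1+alpha/2)/k (Kasdin's paper is cited later in this thread); a hedged sketch, where the function name and truncation length are my own choices, and truncating at a finite length is exactly the approximation described above:

```python
import numpy as np

def frac_int_coeffs(alpha, n_taps):
    # impulse response of a discrete fractional integrator of order alpha/2
    # (Hosking/Kasdin recursion): h[0] = 1, h[k] = h[k-1]*(k-1+alpha/2)/k
    h = np.empty(n_taps)
    h[0] = 1.0
    for k in range(1, n_taps):
        h[k] = h[k - 1] * (k - 1 + alpha / 2) / k
    return h

# alpha = 1 targets a 1/f (flicker) power spectrum; the slow decay of the
# coefficients is why any finite filter is only an approximation
h = frac_int_coeffs(1.0, 4)
print(h)  # coefficients 1, 0.5, 0.375, 0.3125

# driving the truncated filter with white noise yields approximate flicker
rng = np.random.default_rng(0)
taps = frac_int_coeffs(1.0, 4096)
flicker = np.convolve(rng.normal(size=8192), taps)[4096:8192]
```

The slice keeps samples generated after the filter history has filled, so the start-up transient is discarded.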

Also, any reason to do this via forward and inverse FFT? AFAIK the
Fourier transform of white noise is white noise, because the underlying
Gaussian is an eigenfunction of the transform. Generating in the time domain
as follows (Python code using NumPy)

I had the same question when I first saw this. Unfortunately I don't have a good
answer, besides that forward + inverse ensures that the noise looks like it is
supposed to, while I'm not 100% sure whether there is an easy way to generate
time-domain Gauss i.i.d. noise in the frequency domain.

If you know how, please let me know.

But be aware: while the Gauss bell curve is an eigenfunction of the Fourier
transform, the noise we feed into it is not the Gauss bell curve. It's the
probability distribution of the noise that has this bell curve, not the noise
itself. Hence the probability distribution of a random variable is not necessarily
invariant under the Fourier transform.
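This point is easy to check numerically; the sketch below (seed, N and bin index are arbitrary choices of mine, and the variance value anticipates the derivation later in this thread) looks at one DFT bin of Gaussian white noise over many realizations:

```python
import numpy as np

rng = np.random.default_rng(0)
N, trials, k = 4096, 2000, 7

# DFT bin k over many independent white-noise realizations
X = np.fft.fft(rng.normal(0.0, 1.0, (trials, N)), axis=1)[:, k]

# real and imaginary parts should each come out ~ N(0, N/2)
# for 0 < k < N/2 (see the variance derivation further down the thread)
print(X.real.var(), X.imag.var(), N / 2)
```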

On 05.05.22 18:34, Magnus Danielson via time-nuts wrote:

Both are being revised: 1139 just went out for re-balloting
after receiving ballot comments, and 1193 is about to be
approved to be sent to balloting.

Any details you can share regarding the changes over the current
version? Are drafts publicly available?

Both 1139 and 1193 saw major rework. Large portions have been replaced or rewritten,
and the rest re-arranged to make more sense. I would look at the upcoming revisions
more as complete rewrites than as mere updates.

I am not aware of any public drafts. As the groups working on them are well networked
with those who would use the standards, having a public discussion never came up,
and thus no public-draft policy was formed. But the revision process is mostly
complete; both should be published by the end of the year, probably earlier.

			Attila Kinali

--
In science if you know what you are doing you should not be doing it.
In engineering if you do not know what you are doing you should not be doing it.
-- Richard W. Hamming, The Art of Doing Science and Engineering

Carsten Andrich
Wed, May 11, 2022 6:15 AM

On 10.05.22 10:37, Neville Michie wrote:

The use of forward then reverse Fourier transforms is one of the most important
achievements of the Fourier transform. When one data set is convolved with
another data set, it appears impossible to undo the tangle.
But if the data is transformed into the Fourier domain, serial division can separate
the data, and transformation back will yield the original data.

Absolutely, but in this case I was wondering why one would do the costly
O(n log n) forward transform at all, if its output can be computed
directly in O(n).

On 10.05.22 12:58, Attila Kinali wrote:

If you happen to find the paper, please share a reference. I'm curious
about implementation details and side-effects, e.g., whether
implementing the filter via circular convolution (straightforward
multiplication in frequency-domain) carries any penalties regarding
stochastic properties due to periodicity of the generated noise.

Yes, for one, noise is not periodic, neither is the filter you need.
You can't build a filter with a 1/sqrt(f) slope over all the
frequency range. That would require a fractional integrator, which
is a non-trivial task. Unless you actually do fractional integration,
all time domain filters will be approximations of the required filter.

Any chance the paper was "FFT-BASED METHODS FOR SIMULATING FLICKER FM"
by Greenhall [1]?

I've had a look at the source code of the bruiteur tool, which you
previously recommended, and it appears to opt for the circular
convolution approach. Would you consider that the state of the art for
1/sqrt(f)? The same approach is used here [3]. Kasdin gives an extensive
overview of the subject [2], but I haven't read the 20+ page paper yet.

I had the same question when I first saw this. Unfortunately I don't have a good
answer, besides that forward + inverse ensures that the noise looks like it is
supposed to do, while I'm not 100% whether there is an easy way to generate
time-domain Gauss i.i.d. noise in the frequency domain.

If you know how, please let me know.

Got an idea on that, will report back.

But be aware, while the Gauss bell curve is an eigenfunction of the Fourier
transform, the noise we feed into it is not the Gauss bell curve.

Thanks for pointing that out. It appears I went for the first reasonable
sounding explanation to support my gut feeling without thinking it
through :D

Best regards,
Carsten

[1] https://apps.dtic.mil/sti/pdfs/ADA485683.pdf
[2] https://ieeexplore.ieee.org/document/381848
[3] https://rjav.sra.ro/index.php/rjav/article/view/40

Carsten Andrich
Fri, May 13, 2022 3:25 PM

On 11.05.22 08:15, Carsten Andrich wrote:

Also, any reason to do this via forward and inverse FFT? AFAIK the
Fourier transform of white noise is white noise, [...]

I had the same question when I first saw this. Unfortunately I don't have a good
answer, besides that forward + inverse ensures that the noise looks like it is
supposed to do, while I'm not 100% whether there is an easy way to generate
time-domain Gauss i.i.d. noise in the frequency domain.

If you know how, please let me know.

Got an idea on that, will report back.

Disclaimer: I'm an electrical engineer, not a mathematician, so someone
trained in the latter craft should verify my train of thought. Feedback
is much appreciated.

Turns out the derivation of the DFT of time-domain white noise is
straightforward. The DFT formula

X[k] = \Sigma_{n=0}^{N-1} x[n] * e^{-2j*\pi*k*n/N}

illustrates that a single frequency-domain sample is just a sum of
scaled time-domain samples. Now let x[n] be N normally distributed
samples with zero mean and variance \sigma^2, thus each X[k] is a sum of
scaled i.i.d. random variables. According to the central limit theorem,
the sum of these random variables is normally distributed.

To ascertain the variance of X[k], rely on the linearity of variance
[1], i.e., Var(a*X + b*Y) = a^2*Var(X) + b^2*Var(Y) + 2*a*b*Cov(X,Y), and
the fact that the covariance of uncorrelated variables is zero, so,
separating into real and imaginary components, one gets:

Var(Re{X[k]}) = \Sum_{n=0}^{N-1} Var(x[n]) * Re{e^{-2j*\pi*k*n/N}}^2
              = \Sum_{n=0}^{N-1} \sigma^2  * cos(2*\pi*k*n/N)^2
              = \sigma^2 * \Sum_{n=0}^{N-1} cos(2*\pi*k*n/N)^2

Var(Im{X[k]}) = \Sum_{n=0}^{N-1} Var(x[n]) * Im{e^{-2j*\pi*k*n/N}}^2
              = ...
              = \sigma^2 * \Sum_{n=0}^{N-1} sin(2*\pi*k*n/N)^2

The sum over squared sin(…)/cos(…) is always N/2, except for k=0 and
k=N/2, where the cos² sum is N and the sin² sum is 0, resulting in an X[k]
with real DC and Nyquist components, as is to be expected for a real x[n].
Finally, for an x[n] ~ N(0, \sigma^2), the DFT's X[k] has the following
variance:

                { N   * \sigma^2, k = 0
Var(Re{X[k]}) = { N   * \sigma^2, k = N/2
                { N/2 * \sigma^2, else

                { 0             , k = 0
Var(Im{X[k]}) = { 0             , k = N/2
                { N/2 * \sigma^2, else

Therefore, a normally distributed time-domain sequence x[n] ~ N(0,
\sigma^2) with N samples has the following DFT (note: N is the number of
samples and N(0, 1) is a normally distributed random variable with mean
0 and variance 1):

       { N^0.5     * \sigma *  N(0, 1)                , k = 0
X[k] = { N^0.5     * \sigma *  N(0, 1)                , k = N/2
       { (N/2)^0.5 * \sigma * (N(0, 1) + 1j * N(0, 1)), else
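These variances are easy to verify by Monte Carlo; a sketch (seed, N, trials, sigma, and bin 5 standing in for an arbitrary 0 < k < N/2 are my own choices):

```python
import numpy as np

rng = np.random.default_rng(1)
N, trials, sigma = 256, 4000, 2.0

# DFT of many independent realizations of x[n] ~ N(0, sigma^2)
X = np.fft.fft(rng.normal(0.0, sigma, (trials, N)), axis=1)

print(X[:, 0].real.var() / sigma**2)       # ≈ N   (DC)
print(X[:, N // 2].real.var() / sigma**2)  # ≈ N   (Nyquist)
print(X[:, 5].real.var() / sigma**2)       # ≈ N/2
print(X[:, 5].imag.var() / sigma**2)       # ≈ N/2
print(np.abs(X[:, 0].imag).max())          # ≈ 0, real input gives real DC
```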

Greenhall has the same results, with two noteworthy differences [2].
First, normalization with the sample count occurs after the IFFT.
Second, his FFT is of size 2N, resulting in an N^0.5 factor between his
results and the above. Finally, Greenhall discards half (minus one) of
the samples returned by the IFFT to realize linear convolution instead
of circular convolution, fundamentally implementing a single iteration
of overlap-save fast convolution [3]. If I didn't miss anything skimming
over the bruiteur source code, it seems to skip that very step and
therefore generates periodic output [4].
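The discard step can be sketched as follows (my reading, not Greenhall's exact procedure: filter a length-2N white block by circular convolution, then keep only the second half so the wrap-around is thrown away; the function name and the 1/sqrt(f) mask are assumptions):

```python
import numpy as np

def shaped_noise(N, H2, rng):
    # filter a length-2N white block in the frequency domain (circular
    # convolution), then keep only the second half; the discarded first
    # half carries the wrap-around, so the kept part approximates linear
    # convolution (a single overlap-save iteration)
    w = rng.normal(0.0, 1.0, 2 * N)
    y = np.fft.irfft(np.fft.rfft(w) * H2)
    return y[N:]

rng = np.random.default_rng(2)
N = 1 << 12
f = np.fft.rfftfreq(2 * N)
H2 = np.concatenate(([0.0], f[1:] ** -0.5))  # 1/sqrt(f) mask, DC zeroed
x = shaped_noise(N, H2, rng)
print(x.size)
```

Successive calls produce non-periodic blocks at the cost of computing twice the requested length each time.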

Best regards,
Carsten

[1] https://en.wikipedia.org/wiki/Variance#Basic_properties
[2] https://apps.dtic.mil/sti/pdfs/ADA485683.pdf#page=5
[3] https://en.wikipedia.org/wiki/Overlap%E2%80%93save_method
[4] https://github.com/euldulle/SigmaTheta/blob/master/source/filtre.c#L130

Magnus Danielson
Sat, May 14, 2022 6:59 AM

Hi Carsten,

On 2022-05-13 09:25, Carsten Andrich wrote:

On 11.05.22 08:15, Carsten Andrich wrote:

Also, any reason to do this via forward and inverse FFT? AFAIK the
Fourier transform of white noise is white noise, [...]

I had the same question when I first saw this. Unfortunately I don't
have a good
answer, besides that forward + inverse ensures that the noise looks
like it is
supposed to do, while I'm not 100% whether there is an easy way to
generate
time-domain Gauss i.i.d. noise in the frequency domain.

If you know how, please let me know.

Got an idea on that, will report back.

Disclaimer: I'm an electrical engineer, not a mathematician, so
someone trained in the latter craft should verify my train of though.
Feedback is much appreciated.

Turns out the derivation of the DFT of time-domain white noise is
straightforward. The DFT formula

X[k] = \Sigma_{n=0}^{N-1} x[n] * e^{-2j*\pi*k*n/N}

illustrates that a single frequency-domain sample is just a sum of
scaled time-domain samples. Now let x[n] be N normally distributed
samples with zero mean and variance \sigma^2, thus each X[k] is a sum
of scaled i.i.d. random variables. According to the central limit
theorem, the sum of these random variables is normally distributed.

Do note that the model of no correlation is not a correct model of
reality. There are several effects which make "white noise" slightly
correlated, even if for most practical uses the correlation is very
small. Not that it significantly changes your conclusions, but you
should remember that the model only goes so far. To avoid aliasing, you
need an anti-aliasing filter, which causes correlation between samples.
Also, the noise has inherent bandwidth limitations, and further, thermal
noise is convergent because of the power distribution of thermal noise
as established by Max Planck, which really reflects the existence of photons.
The physics of it cannot be fully ignored as one goes into the math;
rather, one should be aware that the simplified models may fool you in
the mathematical exercise.

To ascertain the variance of X[k], rely on the linearity of variance
[1], i.e., Var(a*X + b*Y) = a^2*Var(X) + b^2*Var(Y) + 2*a*b*Cov(X,Y), and
the fact that the covariance of uncorrelated variables is zero, so,
separating into real and imaginary components, one gets:

Var(Re{X[k]}) = \Sum_{n=0}^{N-1} Var(x[n]) * Re{e^{-2j*\pi*k*n/N}}^2
              = \Sum_{n=0}^{N-1} \sigma^2  * cos(2*\pi*k*n/N)^2
              = \sigma^2 * \Sum_{n=0}^{N-1} cos(2*\pi*k*n/N)^2

Var(Im{X[k]}) = \Sum_{n=0}^{N-1} Var(x[n]) * Im{e^{-2j*\pi*k*n/N}}^2
              = ...
              = \sigma^2 * \Sum_{n=0}^{N-1} sin(2*\pi*k*n/N)^2

The sum over squared sin(…)/cos(…) is always N/2, except for k=0 and
k=N/2, where cos(…) is N and sin(…) is 0, resulting in X[k] with real
DC and Nyquist components as is to be expected for a real x[n].
Finally, for an x[n] ~ N(0, \sigma^2), the DFT's X[k] has the
following variance:

                { N   * \sigma^2, k = 0
Var(Re{X[k]}) = { N   * \sigma^2, k = N/2
                { N/2 * \sigma^2, else

                { 0             , k = 0
Var(Im{X[k]}) = { 0             , k = N/2
                { N/2 * \sigma^2, else

Therefore, a normally distributed time domain-sequence x[n] ~ N(0,
\sigma^2) with N samples has the following DFT (note: N is the number
of samples and N(0, 1) is a normally distributed random variable with
mean 0 and variance 1):

       { N^0.5     * \sigma *  N(0, 1)                , k = 0
X[k] = { N^0.5     * \sigma *  N(0, 1)                , k = N/2
       { (N/2)^0.5 * \sigma * (N(0, 1) + 1j * N(0, 1)), else

Here you skipped a few steps compared to your other derivation. You
should explain how X[k] comes out of Var(Re(X[k])) and Var(Im(X[k])).

Greenhall has the same results, with two noteworthy differences [2].
First, normalization with the sample count occurs after the IFFT.
Second, his FFT is of size 2N, resulting in a N^0.5 factor between his
results and the above. Finally, Greenhall discards half (minus one) of
the samples returned by the IFFT to realize linear convolution instead
of circular convolution, fundamentally implementing a single iteration
of overlap-save fast convolution [3]. If I didn't miss anything
skimming over the bruiteur source code, it seems to skip that very
step and therefore generates periodic output [4].

This is a result of using real-only values in the complex Fourier
transform. It creates mirror images. Greenhall uses one method to
circumvent the issue.

Cheers,
Magnus

MD
Magnus Danielson
Sat, May 14, 2022 6:59 AM

Hi Carsten,

On 2022-05-13 09:25, Carsten Andrich wrote:

> On 11.05.22 08:15, Carsten Andrich wrote:
>
>>>> Also, any reason to do this via forward and inverse FFT? AFAIK the
>>>> Fourier transform of white noise is white noise, [...]
>>> I had the same question when I first saw this. Unfortunately I don't
>>> have a good answer, besides that forward + inverse ensures that the
>>> noise looks like it is supposed to, while I'm not 100% sure whether
>>> there is an easy way to generate time-domain Gauss i.i.d. noise in
>>> the frequency domain.
>>>
>>> If you know how, please let me know.
>> Got an idea on that, will report back.
>
> Disclaimer: I'm an electrical engineer, not a mathematician, so
> someone trained in the latter craft should verify my train of thought.
> Feedback is much appreciated.
>
> Turns out the derivation of the DFT of time-domain white noise is
> straightforward. The DFT formula
>
> X[k] = \Sigma_{n=0}^{N-1} x[n] * e^{-2j*\pi*k*n/N}
>
> illustrates that a single frequency-domain sample is just a sum of
> scaled time-domain samples. Now let x[n] be N normally distributed
> samples with zero mean and variance \sigma^2; thus each X[k] is a sum
> of scaled i.i.d. random variables. According to the central limit
> theorem, the sum of these random variables is normally distributed.

Do note that the model of no correlation is not a correct model of
reality. There are several effects which make "white noise" slightly
correlated, even if for most practical uses this correlation is very
small. Not that it significantly changes your conclusions, but you
should remember that the model only goes so far. To avoid aliasing,
you need an anti-aliasing filter, which causes correlation between
samples. Also, the noise has inherent bandwidth limitations and,
further, thermal noise is convergent because of the power distribution
of thermal noise as established by Max Planck, which really reflects
the existence of photons. The physics of it cannot be fully ignored as
one goes into the math field; rather, one should be aware that the
simplified models may fool you in the mathematical exercise.

> To ascertain the variance of X[k], rely on the linearity of variance
> [1], i.e., Var(a*X+b*Y) = a^2*Var(X) + b^2*Var(Y) + 2ab*Cov(X,Y), and
> the fact that the covariance of uncorrelated variables is zero, so,
> separating into real and imaginary components, one gets:
>
> Var(Re{X[k]}) = \Sum_{n=0}^{N-1} Var(x[n]) * Re{e^{-2j*\pi*k*n/N}}^2
>               = \Sum_{n=0}^{N-1} \sigma^2  * cos(2*\pi*k*n/N)^2
>               = \sigma^2 * \Sum_{n=0}^{N-1} cos(2*\pi*k*n/N)^2
>
> Var(Im{X[k]}) = \Sum_{n=0}^{N-1} Var(x[n]) * Im{e^{-2j*\pi*k*n/N}}^2
>               = ...
>               = \sigma^2 * \Sum_{n=0}^{N-1} sin(2*\pi*k*n/N)^2
>
> The sum over squared sin(…)/cos(…) is always N/2, except for k=0 and
> k=N/2, where the cos(…) sum is N and the sin(…) sum is 0, resulting
> in X[k] with real DC and Nyquist components, as is to be expected for
> a real x[n]. Finally, for an x[n] ~ N(0, \sigma^2), the DFT's X[k]
> has the following variance:
>
>                 { N   * \sigma^2, k = 0
> Var(Re{X[k]}) = { N   * \sigma^2, k = N/2
>                 { N/2 * \sigma^2, else
>
>                 { 0             , k = 0
> Var(Im{X[k]}) = { 0             , k = N/2
>                 { N/2 * \sigma^2, else
>
> Therefore, a normally distributed time-domain sequence x[n] ~ N(0,
> \sigma^2) with N samples has the following DFT (note: N is the number
> of samples and N(0, 1) is a normally distributed random variable with
> mean 0 and variance 1):
>
>        { N^0.5     * \sigma *  N(0, 1)                , k = 0
> X[k] = { N^0.5     * \sigma *  N(0, 1)                , k = N/2
>        { (N/2)^0.5 * \sigma * (N(0, 1) + 1j * N(0, 1)), else

Here you skipped a few steps compared to your other derivation. You
should explain how X[k] comes out of Var(Re(X[k])) and Var(Im(X[k])).

> Greenhall has the same results, with two noteworthy differences [2].
> First, normalization with the sample count occurs after the IFFT.
> Second, his FFT is of size 2N, resulting in a N^0.5 factor between his
> results and the above. Finally, Greenhall discards half (minus one) of
> the samples returned by the IFFT to realize linear convolution instead
> of circular convolution, fundamentally implementing a single iteration
> of overlap-save fast convolution [3]. If I didn't miss anything
> skimming over the bruiteur source code, it seems to skip that very
> step and therefore generates periodic output [4].

This is a result of using real-only values in the complex Fourier
transform. It creates mirror images. Greenhall uses one method to
circumvent the issue.

Cheers,
Magnus

> Best regards,
> Carsten
>
> [1] https://en.wikipedia.org/wiki/Variance#Basic_properties
> [2] https://apps.dtic.mil/sti/pdfs/ADA485683.pdf#page=5
> [3] https://en.wikipedia.org/wiki/Overlap%E2%80%93save_method
> [4] https://github.com/euldulle/SigmaTheta/blob/master/source/filtre.c#L130
> _______________________________________________
> time-nuts mailing list -- time-nuts@lists.febo.com
> To unsubscribe send an email to time-nuts-leave@lists.febo.com
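The closed-form X[k] above is easy to check numerically. A minimal NumPy sketch (an illustration of the derivation, not code from the thread; seed and sample count are arbitrary) draws the half-spectrum directly and recovers white time-domain noise with a complex-to-real IFFT:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 2**16        # even number of time-domain samples
sigma = 1.0      # target time-domain standard deviation

# Half-spectrum for np.fft.irfft: k = 0 .. N/2, conjugate half implied.
X = np.empty(N // 2 + 1, dtype=complex)
X[0]  = np.sqrt(N) * sigma * rng.standard_normal()   # DC bin: real
X[-1] = np.sqrt(N) * sigma * rng.standard_normal()   # Nyquist bin: real
X[1:-1] = np.sqrt(N / 2) * sigma * (rng.standard_normal(N // 2 - 1)
                                    + 1j * rng.standard_normal(N // 2 - 1))

# irfft (with its built-in 1/N normalization) yields real white noise
# with sample variance close to sigma^2.
x = np.fft.irfft(X, n=N)
```

Replacing the white amplitude with a shaped one (e.g. proportional to f^{-1/2}) in the same construction gives power-law noise directly, which is the point of the discussion above.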
MW
Matthias Welwarsky
Sat, May 14, 2022 2:58 PM

On Dienstag, 3. Mai 2022 22:08:49 CEST Magnus Danielson via time-nuts wrote:

Dear Matthias,

Notice that 1/f is power-spectrum density, straight filter will give you
1/f^2 in power-spectrum, just as an integration slope.

One approach to flicker filter is an IIR filter with the weighing of
1/sqrt(n+1) where n is tap index, and feed it normal noise. You need to
"flush out" state before you use it so you have a long history to help
shaping. For a 1024 sample series, I do 2048 samples and only use the
last 1024. Efficient? No. Quick-and-dirty? Yes.
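As an illustration only, the quoted quick-and-dirty recipe might look like this in Python (the tap count and the use of scipy's lfilter are my own choices, not from the original post):

```python
import numpy as np
from scipy.signal import lfilter

N = 1024
# FIR taps weighted 1/sqrt(n+1), n being the tap index, shape white
# noise toward a 1/f power spectrum over a limited frequency range.
taps = 1.0 / np.sqrt(np.arange(2 * N) + 1.0)
w = np.random.default_rng(1).standard_normal(2 * N)  # white Gaussian input
y = lfilter(taps, [1.0], w)[N:]  # run 2N samples, keep only the last N
```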

I went "window shopping" on Google and found something that would probably fit
my needs here:

https://ccrma.stanford.edu/~jos/sasp/Example_Synthesis_1_F_Noise.html

Matlab code:

Nx = 2^16;  % number of samples to synthesize
B = [0.049922035 -0.095993537 0.050612699 -0.004408786];
A = [1 -2.494956002  2.017265875  -0.522189400];
nT60 = round(log(1000)/(1-max(abs(roots(A))))); % T60 est.
v = randn(1,Nx+nT60); % Gaussian white noise: N(0,1)
x = filter(B,A,v);    % Apply 1/F roll-off to PSD
x = x(nT60+1:end);    % Skip transient response

It looks quite simple and there is no explanation where the filter
coefficients come from, but I checked the PSD and it looks quite reasonable.

The ADEV of a synthesized oscillator, using the above generator to generate
1/f FM noise, is interesting: it's an almost completely flat curve that moves
"sideways" until the drift becomes dominant.
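For those without Matlab, a direct Python/NumPy translation of the snippet (the RNG seed is an arbitrary choice):

```python
import numpy as np
from scipy.signal import lfilter

Nx = 2**16  # number of samples to synthesize
B = [0.049922035, -0.095993537, 0.050612699, -0.004408786]
A = [1, -2.494956002, 2.017265875, -0.522189400]
# T60 estimate from the slowest pole, as in the Matlab original
nT60 = int(round(np.log(1000) / (1 - np.abs(np.roots(A)).max())))
v = np.random.default_rng(0).standard_normal(Nx + nT60)  # N(0, 1) white noise
x = lfilter(B, A, v)[nT60:]  # apply 1/f roll-off to PSD, skip transient
```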

Regards,
Matthias

CA
Carsten Andrich
Sat, May 14, 2022 4:43 PM

On 14.05.22 16:58, Matthias Welwarsky wrote:

I went "window shopping" on Google and found something that would probably fit
my needs here:

https://ccrma.stanford.edu/~jos/sasp/Example_Synthesis_1_F_Noise.html

Matlab code:

Nx = 2^16;  % number of samples to synthesize
B = [0.049922035 -0.095993537 0.050612699 -0.004408786];
A = [1 -2.494956002  2.017265875  -0.522189400];
nT60 = round(log(1000)/(1-max(abs(roots(A))))); % T60 est.
v = randn(1,Nx+nT60); % Gaussian white noise: N(0,1)
x = filter(B,A,v);    % Apply 1/F roll-off to PSD
x = x(nT60+1:end);    % Skip transient response

Hi Matthias,

the coefficients could come from the digital transform of an analog filter
that approximates a 10 dB/decade slope, see:
https://electronics.stackexchange.com/questions/200583/is-it-possible-to-make-a-weak-analog-low-pass-filter-with-less-than-20-db-decade
https://electronics.stackexchange.com/questions/374247/is-the-roll-off-gain-of-filters-always-20-db-dec
https://m.youtube.com/watch?v=fn2UGyk5DP4

However, even for the 2^16 samples used by the CCRMA snippet, the filter
slope rolls off too quickly. I've attached its frequency response. It
exhibits a slightly wobbly 1/f power slope over three orders of magnitude,
but it's essentially flat over the remaining two orders of magnitude. The
IIR filter used is too short to affect the lower frequencies.

If you want a good approximation of 1/f over the full frequency range, I
personally believe Greenhall's "discrete spectrum algorithm" [1] to be
the best choice, as per the literature I've consulted so far. It
realizes an appropriately large FIR filter, i.e., as large as the
input/output signal, and it's straightforward to implement. Because it's
generic, it can be used to generate any combination of power-law noises
or arbitrary characteristics, e.g., phase noise based on measurements,
in a single invocation.
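A sketch in the spirit of Greenhall's approach, though simplified and not his exact algorithm (the function name and normalized-frequency shaping are illustrative choices; see [1] for the real thing): shape white noise in the frequency domain with a 2N-point FFT and keep only N output samples, so the convolution is linear rather than circular:

```python
import numpy as np

def power_law_noise(N, alpha, rng=None):
    """N samples with PSD proportional to f**alpha (alpha = -1: flicker)."""
    rng = rng or np.random.default_rng()
    f = np.fft.rfftfreq(2 * N)            # normalized frequencies 0 .. 0.5
    H = np.zeros_like(f)
    H[1:] = f[1:] ** (alpha / 2)          # amplitude response; PSD gets alpha
    w = rng.standard_normal(2 * N)        # white Gaussian input
    x = np.fft.irfft(np.fft.rfft(w) * H)  # filter via 2N-point FFT
    return x[N:]                          # discard half: linear convolution

x = power_law_noise(2**14, -1, np.random.default_rng(0))
```

Any measured phase-noise profile can be substituted for H, which is what makes the discrete-spectrum approach generic.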

Best regards,
Carsten

[1] https://apps.dtic.mil/sti/pdfs/ADA485683.pdf#page=5

P.S.: Python Code to plot attached frequency response:

import matplotlib.pyplot as plt
import numpy as np
import scipy.signal as signal

w, h = signal.freqz(
[0.049922035, -0.095993537, 0.050612699, -0.004408786],
[1, -2.494956002,  2.017265875,  -0.522189400],
2**16)

plt.loglog(w/np.pi, abs(h))
plt.plot([1e-3, 1e-0], [0.93, 0.93/10**(0.5*3)], "--")

plt.xlabel("Normalized frequency ($f_\mathrm{Nyquist}$)")
plt.ylabel("Gain")
plt.grid()

CA
Carsten Andrich
Sat, May 14, 2022 5:38 PM

Hi Magnus,

On 14.05.22 08:59, Magnus Danielson via time-nuts wrote:

Do note that the model of no correlation is not a correct model of
reality. There are several effects which make "white noise" slightly
correlated, even if for most practical uses this correlation is very
small. Not that it significantly changes your conclusions, but you
should remember that the model only goes so far. To avoid aliasing,
you need an anti-aliasing filter, which causes correlation between
samples. Also, the noise has inherent bandwidth limitations and,
further, thermal noise is convergent because of the power distribution
of thermal noise as established by Max Planck, which really reflects
the existence of photons. The physics of it cannot be fully ignored as
one goes into the math field; rather, one should be aware that the
simplified models may fool you in the mathematical exercise.

Thank you for that insight. Duly noted. I'll opt to ignore the residual
correlation. As was pointed out here before, the 5 component power law
noise model is an oversimplification of oscillators, so the remaining
error due to residual correlation is hopefully negligible compared to
the general model error.

Here you skipped a few steps compared to your other derivation. You
should explain how X[k] comes out of Var(Re(X[k])) and Var(Im(X[k])).

Given the variance of X[k] and E{X[k]} = 0 \forall k, it follows that

X[k] = Var(Re{X[k]})^0.5 * N(0, 1) + 1j * Var(Im{X[k]})^0.5 * N(0, 1)

because the factor that scales a standard Gaussian N(0, 1) distribution
to a given variance is the square root of that variance.

This is a result of using real-only values in the complex Fourier
transform. It creates mirror images. Greenhall uses one method to
circumvent the issue.

Can't quite follow on that one. What do you mean by "mirror images"? Do
you mean that my formula for X[k] is missing the complex conjugates for
k = N/2+1 ... N-1? Used with a regular, complex IFFT the previously
posted formula for X[k] would obviously generate complex output, which
is wrong. I missed that one, because my implementation uses a
complex-to-real IFFT, which has the complex conjugate implied. However,
for the regular, complex (I)FFT given by my derivation, the correct
formula for X[k] should be the following:

       { N^0.5     * \sigma *  N(0, 1)                , k = 0, N/2
X[k] = { (N/2)^0.5 * \sigma * (N(0, 1) + 1j * N(0, 1)), k = 1 ... N/2 - 1
       { conj(X[N-k])                                  , k = N/2 + 1 ... N - 1
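The conjugate-symmetric formula can be sanity-checked with a full-size complex IFFT; a quick sketch (my own, assuming NumPy's 1/N IFFT normalization; parameters are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
N, sigma = 4096, 2.0

X = np.empty(N, dtype=complex)
X[0]      = np.sqrt(N) * sigma * rng.standard_normal()  # DC: real
X[N // 2] = np.sqrt(N) * sigma * rng.standard_normal()  # Nyquist: real
k = np.arange(1, N // 2)
X[k] = np.sqrt(N / 2) * sigma * (rng.standard_normal(N // 2 - 1)
                                 + 1j * rng.standard_normal(N // 2 - 1))
X[N - k] = np.conj(X[k])  # enforce conjugate symmetry

# Imaginary part vanishes up to rounding; real part is white N(0, sigma^2).
x = np.fft.ifft(X)
```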

Best regards,
Carsten

--
M.Sc. Carsten Andrich

Technische Universität Ilmenau
Fachgebiet Elektronische Messtechnik und Signalverarbeitung (EMS)
Helmholtzplatz 2
98693 Ilmenau
T +49 3677 69-4269

MW
Matthias Welwarsky
Sat, May 14, 2022 6:30 PM

On Samstag, 14. Mai 2022 18:43:13 CEST Carsten Andrich wrote:

However, even for the 2^16 samples used by the CCRMA snippet, the filter
slope rolls off too quickly. I've attached its frequency response. It
exhibits a little wobbly 1/f power slope over 3 orders of magnitude, but
it's essentially flat over the remaining two orders of mag. The used IIR
filter is too short to affect the lower frequencies.

Ah. That explains why the ADEV "degrades" for longer tau. It bends "down". For
very low frequencies, i.e. long tau in ADEV terms, the filter is invisible,
i.e. it passes white noise through unchanged. That indeed makes it unusable
for my purposes.

BR,
Matthias

MD
Magnus Danielson
Sun, May 15, 2022 4:33 PM

Hi Matthias,

On 2022-05-14 08:58, Matthias Welwarsky wrote:

On Dienstag, 3. Mai 2022 22:08:49 CEST Magnus Danielson via time-nuts wrote:

Dear Matthias,

Notice that 1/f is power-spectrum density, straight filter will give you
1/f^2 in power-spectrum, just as an integration slope.

One approach to flicker filter is an IIR filter with the weighing of
1/sqrt(n+1) where n is tap index, and feed it normal noise. You need to
"flush out" state before you use it so you have a long history to help
shaping. For a 1024 sample series, I do 2048 samples and only use the
last 1024. Efficient? No. Quick-and-dirty? Yes.

I went "window shopping" on Google and found something that would probably fit
my needs here:

https://ccrma.stanford.edu/~jos/sasp/Example_Synthesis_1_F_Noise.html

Matlab code:

Nx = 2^16;  % number of samples to synthesize
B = [0.049922035 -0.095993537 0.050612699 -0.004408786];
A = [1 -2.494956002  2.017265875  -0.522189400];
nT60 = round(log(1000)/(1-max(abs(roots(A))))); % T60 est.
v = randn(1,Nx+nT60); % Gaussian white noise: N(0,1)
x = filter(B,A,v);    % Apply 1/F roll-off to PSD
x = x(nT60+1:end);    % Skip transient response

It looks quite simple and there is no explanation where the filter
coefficients come from, but I checked the PSD and it looks quite reasonable.

This is a variant of James "Jim" Barnes' filter that uses lead-lag
sections to approximate a 1/f slope. You achieve it within a certain
range of frequencies. The first article with this is available as a
technical note from NBS, 1965 (available at the NIST T&F archive -
check for Barnes and Allan), but there is a more modern PTTI article by
Barnes and Greenhall (also in the NIST archive) that uses a more
flexible approach where the spread of pole/zero pairs is parametric
rather than fixed. The latter paper is important as it also contains
the code to initialize the state of the filter as if it had been
running forever, so the state has stabilized. A particularly
interesting thing in that article is the plot of the filter property
aligned to the 1/f slope; it illustrates very well the useful range of
the produced filter. This plot is achieved by scaling the amplitude
response with sqrt(f).
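The lead-lag idea itself is simple to sketch. Below is an illustrative analog-domain cascade (my own pole/zero spacing, not the Barnes or Barnes & Greenhall coefficients): one pole/zero pair per decade, each zero half a decade above its pole, so the magnitude alternates between -20 dB/decade and flat segments and averages -10 dB/decade, i.e. a 1/f power slope:

```python
import numpy as np
from scipy.signal import freqs

r = 10.0                          # one pole/zero pair per decade
poles = r ** np.arange(6)         # pole corner frequencies 1 .. 1e5 Hz
zeros = poles * np.sqrt(r)        # zeros half a decade above the poles

b = np.poly(-2 * np.pi * zeros)   # numerator polynomial from the zeros
a = np.poly(-2 * np.pi * poles)   # denominator polynomial from the poles
b = b * a[-1] / b[-1]             # normalize DC gain to 1

w = 2 * np.pi * np.logspace(0, 5, 501)
_, h = freqs(b, a, w)             # |h| falls ~10 dB/decade in-band
```

Plotting abs(h)*np.sqrt(w) against w shows the useful range directly, as in the Barnes & Greenhall plot mentioned above.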

I recommend using the Barnes & Greenhall variant rather than what you
found. It needs to be adapted to the simulation at hand, so the fixed
setup you have will fit only some needs. The filter needs to cover the
full frequency range where the flicker noise being shaped is dominant
or near dominant. As one uses flicker shaping for both flicker phase
modulation and flicker frequency modulation, there are two different
frequency ranges where they are dominant or near dominant.

There are many other approaches; see Attila's splendid review the other day.

Let me also say that I find this question coming to me even from the
most senior researchers, even in the last week: "What is your favorite
flicker noise simulation model?". They keep acknowledging my basic
message that "flicker noise simulation is hard". No model fits all
applications, so no one solution solves it all. One needs to validate
that it fits the application at hand.

Cheers,
Magnus

The ADEV of a synthesized oscillator, using the above generator to generate
1/f FM noise, is interesting: it's an almost completely flat curve that moves
"sideways" until the drift becomes dominant.

Regards,
Matthias


time-nuts mailing list -- time-nuts@lists.febo.com
To unsubscribe send an email to time-nuts-leave@lists.febo.com
