Magnus Danielson
Sun, May 15, 2022 5:37 PM
Hi Carsten,
On 2022-05-14 11:38, Carsten Andrich wrote:
> Hi Magnus,
>
> On 14.05.22 08:59, Magnus Danielson via time-nuts wrote:
>> Do note that the model of no correlation is not a correct model of
>> reality. There are several effects which make "white noise" slightly
>> correlated, even if for most practical uses the correlation is very
>> small. Not that it significantly changes your conclusions, but you
>> should remember that the model only goes so far. To avoid aliasing,
>> you need an anti-aliasing filter, which causes correlation between
>> samples. Also, the noise has inherent bandwidth limitations, and
>> further, thermal noise is convergent because of the power distribution
>> of thermal noise as established by Max Planck, and is really due to the
>> existence of photons. The physics cannot be fully ignored as one goes
>> into the math; rather, one should be aware that the simplified models
>> may fool you in the mathematical exercise.
>
> Thank you for that insight. Duly noted. I'll opt to ignore the
> residual correlation. As was pointed out here before, the 5 component
> power law noise model is an oversimplification of oscillators, so the
> remaining error due to residual correlation is hopefully negligible
> compared to the general model error.
Indeed. My comment is more to point out details which become relevant
for those attempting the math exercises, and to prevent unnecessary insanity.
Yes, I keep reminding that the 5-component power law noise model is just
that, only a model, and it does not really respect the "Leeson effect"
(actually older) of resonator folding of noise, which becomes a
systematic connection between noises of different slopes.
>
>
>> Here you skipped a few steps compared to your other derivation. You
>> should explain how X[k] comes out of Var(Re(X[k])) and Var(Im(X[k])).
> Given the variance of X[k] and E{X[k]} = 0 \forall k, it follows that
>
> X[k] = Var(Re{X[k]})^0.5 * N(0, 1) + 1j * Var(Im{X[k]})^0.5 * N(0, 1)
>
> because the scaling factor of a standard Gaussian N(0, 1)
> distribution is the square root of its variance.
Reasonable. I just wanted it to be complete in the thread.
>
>
>> This is a result of using real-only values in the complex Fourier
>> transform. It creates mirror images. Greenhall uses one method to
>> circumvent the issue.
> Can't quite follow on that one. What do you mean by "mirror images"?
> Do you mean that my formula for X[k] is missing the complex conjugates
> for k = N/2+1 ... N-1? Used with a regular, complex IFFT the
> previously posted formula for X[k] would obviously generate complex
> output, which is wrong. I missed that one, because my implementation
> uses a complex-to-real IFFT, which has the complex conjugate implied.
> However, for the regular, complex (I)FFT given by my derivation, the
> correct formula for X[k] should be the following:
>
>        { N^0.5 * \sigma * N(0, 1)                     , k = 0, N/2
> X[k] = { (N/2)^0.5 * \sigma * (N(0, 1) + 1j * N(0, 1)), k = 1 ... N/2 - 1
>        { conj(X[N-k])                                  , k = N/2 + 1 ... N - 1
If you process a real-valued-only sample list with the complex FFT, as
you did, you will have mirror Fourier frequencies of opposite sign. This
comes about because e^(i*2*pi*f*t)+e^(-i*2*pi*f*t) is real only. Rather
than using the optimization that removes the half-unused inputs
(imaginary) and half-unused outputs (negative frequencies) with an
N/2-size transform, you can use the N-size transform more
straightforwardly and accept the losses for the sake of clarity. This is
why Greenhall only uses the upper-half frequencies.
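As a quick numerical check (a NumPy sketch of my own, not from the derivation above): building X[k] per the corrected piecewise formula quoted above and feeding it to a full-size complex IFFT does yield a real white-noise sequence with standard deviation sigma, the imaginary residue being rounding error only.

```python
import numpy as np

rng = np.random.default_rng(42)
N, sigma = 1 << 14, 1.0

# Spectrum per the quoted piecewise formula:
X = np.empty(N, dtype=complex)
X[0] = np.sqrt(N) * sigma * rng.standard_normal()       # DC bin, real
X[N // 2] = np.sqrt(N) * sigma * rng.standard_normal()  # Nyquist bin, real
k = np.arange(1, N // 2)
X[k] = np.sqrt(N / 2) * sigma * (rng.standard_normal(k.size)
                                 + 1j * rng.standard_normal(k.size))
X[N - k] = np.conj(X[k])  # mirror frequencies: Hermitian symmetry

x = np.fft.ifft(X)             # full-size complex IFFT
print(np.max(np.abs(x.imag)))  # ~0 (rounding error only)
print(x.real.std())            # ~sigma
```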
Cheers,
Magnus
Magnus Danielson
Sun, May 15, 2022 5:49 PM
Hi Matthias,
On 2022-05-14 12:30, Matthias Welwarsky wrote:
> On Saturday, 14 May 2022 18:43:13 CEST, Carsten Andrich wrote:
>> However, even for the 2^16 samples used by the CCRMA snippet, the filter
>> slope rolls off too quickly. I've attached its frequency response. It
>> exhibits a little wobbly 1/f power slope over 3 orders of magnitude, but
>> it's essentially flat over the remaining two orders of mag. The used IIR
>> filter is too short to affect the lower frequencies.
> Ah. That explains why the ADEV "degrades" for longer tau. It bends "down". For
> very low frequencies, i.e. long tau in ADEV terms, the filter is invisible,
> i.e. it passes on white noise. That indeed makes it unusable for my purposes.
I agree. Good that we come to the same conclusion.
I just have not had time to run a simulation and check. I would check
both the spectrum and the ADEV, but there are other tests to do, such as
the autocorrelation function. A more unusual one is the increase of the
deviation of an ensemble of simulations, and thus the spread it can take.
It clearly depends on the noise type and the length of the sequence.
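For reference, the roll-off behaviour described above can be checked directly from the filter's frequency response. A SciPy sketch, assuming the snippet in question is the well-known CCRMA pinking IIR filter (the commonly circulated coefficients are used below; that identification is my assumption):

```python
import numpy as np
from scipy.signal import freqz

# Assumed coefficients of the CCRMA "pinking" IIR filter discussed above.
b = [0.049922035, -0.095993537, 0.050612699, -0.004408786]
a = [1.0, -2.494956002, 2.017265875, -0.522189400]

f = np.array([1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 1e-1])  # fractions of f_s
_, H = freqz(b, a, worN=2 * np.pi * f)  # worN given in rad/sample
for fi, m in zip(f, np.abs(H)):
    print(f"f/fs = {fi:7.0e}  |H| = {m:.4f}")
# The magnitude falls roughly as f^-1/2 (1/f in power) in the mid band,
# but is flat below roughly 1e-4 f_s: the filter is too short to shape
# the lowest frequencies, so the noise stays white there.
```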
Cheers,
Magnus
Carsten Andrich
Mon, May 16, 2022 8:53 PM
Hi Magnus,
On 15.05.22 19:37, Magnus Danielson via time-nuts wrote:
>>> This is a result of using real-only values in the complex Fourier
>>> transform. It creates mirror images. Greenhall uses one method to
>>> circumvent the issue.
>> Can't quite follow on that one. What do you mean by "mirror images"?
>> Do you mean that my formula for X[k] is missing the complex
>> conjugates for k = N/2+1 ... N-1? Used with a regular, complex IFFT
>> the previously posted formula for X[k] would obviously generate
>> complex output, which is wrong. I missed that one, because my
>> implementation uses a complex-to-real IFFT, which has the complex
>> conjugate implied. However, for the regular, complex (I)FFT given
>> by my derivation, the correct formula for X[k] should be the following:
>>
>>        { N^0.5 * \sigma * N(0, 1)                     , k = 0, N/2
>> X[k] = { (N/2)^0.5 * \sigma * (N(0, 1) + 1j * N(0, 1)), k = 1 ... N/2 - 1
>>        { conj(X[N-k])                                  , k = N/2 + 1 ... N - 1
>
> If you process a real-valued-only sample list with the complex FFT, as
> you did, you will have mirror Fourier frequencies of opposite sign.
> This comes about because e^(i*2*pi*f*t)+e^(-i*2*pi*f*t) is real only.
I'm familiar with the Hermitian symmetry properties of the Fourier
transform; I just wasn't entirely sure that's what you were referring to.
> Rather than using the optimization that removes the half-unused inputs
> (imaginary) and half-unused outputs (negative frequencies) with an
> N/2-size transform, you can use the N-size transform more
> straightforwardly and accept the losses for the sake of clarity.
I agree that deriving for the regular, complex-to-complex DFT is best
regarding generality and clarity from a theoretical point of view.
However, from a practical standpoint, the complex-to-real IDFT/IFFT is
just a minor optimization with the Hermitian symmetry of the spectrum
hard-wired into the transform. It's an N-point (not N/2-point) IFFT that
is analytically -- though not numerically, due to limited precision --
identical to a fully complex N-point IFFT of data with Hermitian
symmetry. Practically, it halves memory usage and computational
complexity [2]. When using common numeric packages like Matlab or NumPy,
peak memory usage may even be cut down to one third by obviating the
extra copy of the real values implied by "dropping" the imaginary
component of a regular IFFT's complex output.
IMHO, the minor head scratching involved in using a complex-to-real IFFT
(N/2+1 vs. N input samples) is well worth the computational advantage,
because the two implementation differences are minor and actually reduce
implementation complexity slightly:
1. The complex conjugate required for Hermitian symmetry of the IFFT
input does not have to be explicitly computed, but is implied by the
transform.
2. The IFFT's output is already real, so explicitly dropping the
imaginary component is not required.
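To illustrate both points, a minimal NumPy sketch using np.fft.irfft as the complex-to-real transform: only the N/2+1 non-negative-frequency bins are supplied, the conjugate half is implied, and the output is real by construction.

```python
import numpy as np

rng = np.random.default_rng(0)
N, sigma = 1 << 14, 1.0

# Only the N/2+1 non-negative-frequency bins; the conjugate-symmetric
# half of the spectrum is implied by the transform.
Xh = np.sqrt(N / 2) * sigma * (rng.standard_normal(N // 2 + 1)
                               + 1j * rng.standard_normal(N // 2 + 1))
Xh[0] = np.sqrt(N) * sigma * rng.standard_normal()   # DC bin must be real
Xh[-1] = np.sqrt(N) * sigma * rng.standard_normal()  # Nyquist bin must be real

x = np.fft.irfft(Xh, n=N)  # real output; no imaginary part to drop
print(x.dtype, x.size)     # float64 16384
```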
> This is why Greenhall only use upper half frequencies.
Could you elaborate on that? Greenhall uses all 2N frequency-domain
samples of his 2N DFT [1]:
Let Z_k = √(S_k/2) (U_k + i V_k), Z_{2N-k} = Z*_k for k = 1 to N-1
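A minimal NumPy sketch of that construction, with a flat S_k as a stand-in PSD and with Z_0 = Z_N = 0 (an assumption of this sketch only; the quoted line does not specify those two bins): the 2N-point IFFT of Z comes out real, confirming all 2N bins are used.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1 << 12
S = np.ones(N)  # flat stand-in for Greenhall's target PSD S_k

Z = np.zeros(2 * N, dtype=complex)
k = np.arange(1, N)
Z[k] = np.sqrt(S[k] / 2) * (rng.standard_normal(N - 1)
                            + 1j * rng.standard_normal(N - 1))
Z[2 * N - k] = np.conj(Z[k])  # Z_{2N-k} = Z*_k, as quoted
# Z[0] and Z[N] stay zero here -- an assumption of this sketch.

z = np.fft.ifft(Z)             # 2N-point complex IFFT
print(np.max(np.abs(z.imag)))  # ~0: real by Hermitian symmetry
```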
Thanks and best regards,
Carsten
[1] https://apps.dtic.mil/sti/pdfs/ADA485683.pdf#page=5
[2]
https://www.fftw.org/fftw3_doc/One_002dDimensional-DFTs-of-Real-Data.html
--
M.Sc. Carsten Andrich
Technische Universität Ilmenau
Fachgebiet Elektronische Messtechnik und Signalverarbeitung (EMS)
Helmholtzplatz 2
98693 Ilmenau
T +49 3677 69-4269