MD
Magnus Danielson
Fri, Aug 5, 2022 8:28 PM
Erik,
Which algorithm of linear regression do you use? Describe the
cancellation details.
I just want to be sure we speak the same language here.
Have you seen this? https://arxiv.org/abs/1604.01004
Cheers,
Magnus
On 8/4/22 20:04, Erik Kaashoek wrote:
Bob, Magnus,
Using a second counter (my famous Picotest U6200A) locked to the
reference output of the DIY counter, measuring the output of the
signal generator, and also set to a gate of 10 s, it is confirmed that the
frequency pulling (if any) is below 1E-11 (no more digits on the
display of the U6200A).
The generator is set to 10.000,000,000,2 MHz and is measured as such by
the U6200A.
As there seems to be no frequency pulling, I went back to the
simulation of the linear regression algorithm and discovered that when
there is an integer divide/multiply relation between the internal
reference and the measured frequency, the regression loses some accuracy.
Certainly, if the reference is close to an integer multiple of the
measured frequency (10 MHz measured -> 200 MHz reference), the
regression collapses completely in accuracy. I hoped that by creating
a fractional relation this collapse would not happen at 10 MHz, but it is
still there, although much smaller. For this test I'm using a "div 3
times 64, i.e. 213.333,333,333,333... MHz" internal reference frequency
derived from the external 10 MHz reference. Tom Van Baak warned me
against using fractional relations in a counter, but otherwise it is
impossible to measure a 10 MHz input signal with any accuracy without
a HW time-to-digital converter, as the interpolation no longer works. I can
switch dynamically to a 200 MHz or 245 MHz reference, and these produce
much, much worse results.
I realize this test only shows that the TCXO used as reference in the
DIY counter does not exhibit frequency pulling; it does not show whether
the PLL used to convert the 10 MHz to 213.333333333... MHz for the
internal counters shows any frequency pulling.
Erik.
EK
Erik Kaashoek
Sat, Aug 6, 2022 7:15 AM
Magnus,
Many thanks for the reference to your article on linear regression. I
will need a lot of time to study it, as my math skills have become very
rusty after not having been used for 40 years.
I'm not sure what you meant by "cancellation".
What I did is much, much simpler.
The Excel simulation is an easy way to understand the calculations [1].
Let me know if you cannot open an Excel file.
The input data is in the red cells.
The important output (phase, frequency) is in the blue cells.
The formulas are in the green cells.
The "time noise" input was to check whether dithering of the interpolation
points over time could improve the outcome, which it did not.
The internal reference frequency is derived from a 10 MHz reference using a PLL.
In the real code all sums are calculated in 32 or 64 bit integers. This
limits the maximum gate time to somewhere above 10 seconds.
The simulation uses a 0.1 ms interval between the interpolation points;
the actual implementation uses 0.05 ms.
At the gate moment the final divide into a 64 bit double is done if there
are sufficient interpolation points; otherwise the direct calculation is used.
As you can see in the Regression error graph, the final gate moment has a
big influence on the error, which is a pity, as the pattern in the error
suggests there is something that can be "interpolated" or eliminated.
The impact of the final gate moment increases when close to an
integer multiply/divide relation to the reference.
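To make the integer-relation collapse concrete, here is a tiny toy model (my own Python sketch, not the spreadsheet or the real firmware): ideal input edge times are quantized to the reference clock period, and a straight line is fitted to timestamp versus edge index. At an exact integer relation the quantization error is nearly identical on every edge, so the regression cannot average it out and a small frequency offset goes largely unseen. The 1 Hz offset is exaggerated for illustration.

```python
import numpy as np

def rel_freq_error(f_ref, f_in=10e6 + 1.0, n_edges=1000):
    # ideal zero-crossing times of the input, then timestamps
    # quantized to the internal reference period (no interpolator)
    k = np.arange(n_edges)
    t_q = np.floor(k / f_in * f_ref) / f_ref
    period_est = np.polyfit(k, t_q, 1)[0]   # LSQ slope = estimated period
    return abs(1.0 / period_est - f_in) / f_in

# 200 MHz reference = exactly 20 x 10 MHz: the quantized staircase hides
# the (exaggerated, 1e-7 relative) offset almost completely
err_int = rel_freq_error(200e6)
```

In this toy the residual error at the integer relation stays comparable to the offset itself, and it is dominated by the first and last edges, which echoes the observed sensitivity to the final gate moment.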
Hope, once I understand your algorithm, this can be improved.
[1] http://athome.kaashoek.com/time-nuts/Regression.xlsx
On 5-8-2022 22:28, Magnus Danielson wrote:
Erik,
Which algorithm of linear regression do you use? Describe the
cancellation details.
I just want to be sure we talk the same language here.
Have you seen this? https://arxiv.org/abs/1604.01004
Cheers,
Magnus
On 8/4/22 20:04, Erik Kaashoek wrote:
Bob, Magnus,
Using a second counter (my famous Picotest U6200A) locked to the
reference output of the DIY counter and measuring the output of the
signal generator and also set to gate of 10 s it is confirmed that
the frequency pulling (if any) is below 1E-11 (not more digits on the
display of the U6200A)
Generator is set to 10.000,000,000,2 MHz and is measured as such by
the U6200A
As there seems to be no frequency pulling I went back to the
simulation of the linear regression algorithm and discovered that
when there is a integer divide/multiply relation between the
internal reference and the measured frequency the regression looses
some accuracy.
For sure if the reference is close to an integer multiple of the
measured frequency (10 Mhz measured -> 200 MHz reference) the
regression collapses completely in accuracy. I hoped that by creating
a fractional relation this collapse would not happen at 10 MHz but is
still there, although much smaller. For this test I'm using a "div 3
times 64 e.g. 213.333,333,333,333... MHz" internal reference
frequency derived from the external 10MHz reference. Ton van Baak
warned me against using fractional relations in a counter but
otherwise it is impossible to measure a 10 MHz input signal with any
accuracy without a HW time to digital as the interpolation no longer
works. I can switch dynamically to 200 MHz or 245 MHz reference and
these produce much much worse results.
I realize this test only measures if the TCXO used as reference in
the DIY counter does not show frequency pulling but it does not show
if the PLL used to convert the 10MHz to 213.333333333... MHz for the
internal counters shows any frequency pulling.
Erik.
MD
Magnus Danielson
Sat, Aug 6, 2022 8:09 PM
Erik,
On 2022-08-06 09:15, Erik Kaashoek wrote:
Magnus,
Many thanks for the reference to your article on linear regression. I
will need a lot of time to study this as my math skills have become
very rusty after not having been used for 40 years.
Well, actually you will not need to know much math at all, as I use it
only to get rid of it. You only need to understand how to form the sums
C and D, and then do the estimations out of those using C, D, N and tau_0.
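If I read that correctly, a minimal sketch of such an estimator looks like the following (my guess at the definitions, with C and D as plain first-order sums; the paper's exact forms may differ):

```python
import numpy as np

def lsq_freq(x, tau0):
    """Least-squares slope (frequency) from N phase samples x[0..N-1]
    assumed tau0 apart. C and D are plain first-order sums, so they
    can be accumulated in integer arithmetic; only this final scaling
    step needs a divide."""
    N = len(x)
    n = np.arange(N)
    C = np.sum(x)          # C = sum of x[n]
    D = np.sum(n * x)      # D = sum of n * x[n]
    # closed-form LSQ slope for equally spaced samples
    return (12.0 * D - 6.0 * (N - 1) * C) / (tau0 * N * (N * N - 1))
```

For an exact phase ramp x[n] = a + y * n * tau0 this returns y, with no matrix inversion anywhere, which is presumably the point of the reduction.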
The linear algebra trick actually jumps behind the linear algebra, using
the knowledge that the measurements form a sequence at tau_0 distance,
and that allows a significant reduction of the math into a much more
benign form. It also creates benign decimation methods that you can
apply in any form of your liking.
Shape the data into the C and D summations properly and you can avoid
roundings in that process, only experiencing them in the final
estimation scaling, after the cancellations.
The cute thing is that each least-squares estimation, which is the same
as linear regression, done using these methods will strictly respect the
least squares and not compromise through averaging etc. Accumulation can
be done in blocks which are then merged as if it was all accumulated in
direct sequence. So you could do a small tight decimation for, say, 1024
samples and then a more costly decimation for multiples of that.
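The block-merge property can be sketched like this (my own illustration, assuming the same first-order sums C = sum of x[n] and D = sum of n * x[n] per block): because the second block's indices are only shifted by N1, the merge is exact.

```python
import numpy as np

def sums(x):
    # (N, C, D) over one block, with indices local to the block
    n = np.arange(len(x))
    return len(x), float(np.sum(x)), float(np.sum(n * x))

def merge(b1, b2):
    # merge adjacent blocks: block 2's local indices are shifted
    # by N1 in the full sequence, so D2 gains N1 * C2; the merge
    # reproduces the direct-sequence sums exactly, no averaging
    N1, C1, D1 = b1
    N2, C2, D2 = b2
    return N1 + N2, C1 + C2, D1 + D2 + N1 * C2
```

Merging is associative, so a cheap per-block decimation (say every 1024 samples) followed by merges of those blocks yields exactly the sums of the full sequence.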
Not sure what you meant with "cancelation"
What I did is much much more simple.
The Excel simulation is an easy way to understand the calculations [1]
Let me know if you can not open an Excel file.
The input data is in red cells
The important output is in blue cells (phase, frequency)
The formula are in green cells
The "time noise" input was to check if dithering of the interpolation
points over time could improve the outcome, which it did not.
The internal ref frequency is derived from a 10MHz reference using a PLL
In the real code all sums are calculated in 32 or 64 bit integers.
This limits the maximum gate time to somewhere above 10 seconds.
The simulation uses 0.1 ms interval between the interpolation points,
the actual implementation uses 0.05 ms
At the gate moment the final divide into a 64 bit double is done, if
sufficient interpolation points, otherwise the direct calculation is
used.
As you can see in the Regression error graph the final gate moment has
a big influence on the error which is a pity as the pattern in the
error suggests there is something that can be "interpolated" or
eliminated
And the impact of the final gate moment increases when close to an
integer multiply/divide relation to the reference.
Hope, once I understand your algorithm, this can be improved.
[1] http://athome.kaashoek.com/time-nuts/Regression.xlsx
OK, I will check this. Thanks for providing it.
Cheers,
Magnus
On 5-8-2022 22:28, Magnus Danielson wrote:
Erik,
Which algorithm of linear regression do you use? Describe the
cancellation details.
I just want to be sure we talk the same language here.
Have you seen this? https://arxiv.org/abs/1604.01004
Cheers,
Magnus
On 8/4/22 20:04, Erik Kaashoek wrote:
Bob, Magnus,
Using a second counter (my famous Picotest U6200A) locked to the
reference output of the DIY counter and measuring the output of the
signal generator and also set to gate of 10 s it is confirmed that
the frequency pulling (if any) is below 1E-11 (not more digits on
the display of the U6200A)
Generator is set to 10.000,000,000,2 MHz and is measured as such by
the U6200A
As there seems to be no frequency pulling I went back to the
simulation of the linear regression algorithm and discovered that
when there is a integer divide/multiply relation between the
internal reference and the measured frequency the regression looses
some accuracy.
For sure if the reference is close to an integer multiple of the
measured frequency (10 Mhz measured -> 200 MHz reference) the
regression collapses completely in accuracy. I hoped that by
creating a fractional relation this collapse would not happen at 10
MHz but is still there, although much smaller. For this test I'm
using a "div 3 times 64 e.g. 213.333,333,333,333... MHz" internal
reference frequency derived from the external 10MHz reference. Ton
van Baak warned me against using fractional relations in a counter
but otherwise it is impossible to measure a 10 MHz input signal with
any accuracy without a HW time to digital as the interpolation no
longer works. I can switch dynamically to 200 MHz or 245 MHz
reference and these produce much much worse results.
I realize this test only measures if the TCXO used as reference in
the DIY counter does not show frequency pulling but it does not show
if the PLL used to convert the 10MHz to 213.333333333... MHz for the
internal counters shows any frequency pulling.
Erik.
EK
Erik Kaashoek
Sun, Aug 7, 2022 6:13 AM
Magnus,
The simulation was incorrect in reflecting the capture synchronization
to the input edge. This has been corrected.
I also added the option to specify a crude form of phase noise, called
"Freq noise".
Erik
On 6-8-2022 22:09, Magnus Danielson wrote:
Erik,
On 2022-08-06 09:15, Erik Kaashoek wrote:
Magnus,
Many thanks for the reference to your article on linear regression. I
will need a lot of time to study this as my math skills have become
very rusty after not having been used for 40 years.
Well, actually you will not need to know much math at all, as I use it
only to get rid of it. You only need to understand how to form the
sums C and D, and then do the estimations out of that using C, D, N
and tau_0.
The linear algebra trick actually jumps behind the linear algebra
using the knowledge that measurements is a sequence of tau_0 distance,
and that allows significant reduction of the math into a much more
benign form. It also creates benign decimation methods that you can
apply to any form of your liking.
Shaping the data in C and D summations properly, and you can avoid
roundings in that process, and only experience it in the final
estimation scaling, but after the cancellations.
The cute thing is that each least square estimation, which is the same
as linear regression, done using these methods will strictly respect
the least square and not compromise through averaging etc.
Accumulation can be done in blocks and then be merged as if it was
accumulated in direct sequence. So you could to a small tight
decimation for say 1024 samples and then do a more costly decimation
for multiples of that.
Not sure what you meant with "cancelation"
What I did is much much more simple.
The Excel simulation is an easy way to understand the calculations [1]
Let me know if you can not open an Excel file.
The input data is in red cells
The important output is in blue cells (phase, frequency)
The formula are in green cells
The "time noise" input was to check if dithering of the interpolation
points over time could improve the outcome, which it did not.
The internal ref frequency is derived from a 10MHz reference using a PLL
In the real code all sums are calculated in 32 or 64 bit integers.
This limits the maximum gate time to somewhere above 10 seconds.
The simulation uses 0.1 ms interval between the interpolation points,
the actual implementation uses 0.05 ms
At the gate moment the final divide into a 64 bit double is done, if
sufficient interpolation points, otherwise the direct calculation is
used.
As you can see in the Regression error graph the final gate moment
has a big influence on the error which is a pity as the pattern in
the error suggests there is something that can be "interpolated" or
eliminated
And the impact of the final gate moment increases when close to an
integer multiply/divide relation to the reference.
Hope, once I understand your algorithm, this can be improved.
[1] http://athome.kaashoek.com/time-nuts/Regression.xlsx
OK, I will check this. Thanks for providing it.
Cheers,
Magnus
EK
Erik Kaashoek
Sun, Aug 7, 2022 11:28 AM
Magnus,
Due to the design of the counter it is not possible to guarantee that
all captures are exactly tau_0 apart.
Erik.
On 6-8-2022 22:09, Magnus Danielson wrote:
The linear algebra trick actually jumps behind the linear algebra
using the knowledge that measurements is a sequence of tau_0 distance,
and that allows significant reduction of the math into a much more
benign form. It also creates benign decimation methods that you can
apply to any form of your liking.
MD
Magnus Danielson
Sun, Aug 7, 2022 8:08 PM
Erik,
They never are. It's a running assumption that everyone makes.
Cheers,
Magnus
On 8/7/22 13:28, Erik Kaashoek wrote:
Magnus,
Due to the design of the counter it is not possible to guarantee all
captures are at exactly tau_0 distance.
Erik.
On 6-8-2022 22:09, Magnus Danielson wrote:
The linear algebra trick actually jumps behind the linear algebra
using the knowledge that measurements is a sequence of tau_0
distance, and that allows significant reduction of the math into a
much more benign form. It also creates benign decimation methods that
you can apply to any form of your liking.
EK
Erik Kaashoek
Sun, Aug 7, 2022 8:21 PM
Magnus,
Now you confuse me.
Can you simplify the calculation by assuming the samples are equally
spaced even if they are not? Can you assume the spread is noise and will
average out? And what about gaps?
Please help me understand.
Erik
On Sun, Aug 7, 2022, 22:08 Magnus Danielson <magnus@rubidium.se> wrote:
Erik,
They never are. It's a running assumption that everyone makes.
Cheers,
Magnus
On 8/7/22 13:28, Erik Kaashoek wrote:
Magnus,
Due to the design of the counter it is not possible to guarantee all
captures are at exactly tau_0 distance.
Erik.
On 6-8-2022 22:09, Magnus Danielson wrote:
The linear algebra trick actually jumps behind the linear algebra
using the knowledge that measurements is a sequence of tau_0
distance, and that allows significant reduction of the math into a
much more benign form. It also creates benign decimation methods that
you can apply to any form of your liking.
MD
Magnus Danielson
Sun, Aug 7, 2022 8:54 PM
Erik,
OK, so it's not big magic really. There is an assumed time-base length,
and the actual time each time-stamp is taken shifts around a little. For
a 10 MHz signal, the time-base shifts around within one period, so 100
ns. Actual trigger point 1 and actual trigger point 2 will on average be
the tau_0 time-distance from each other. As we do this for a set of
frequency estimations and average these, it averages out. This is the
basic assumption being used in all estimations I've seen. My algorithm
makes no different assumption than any of the others I've seen.
I have seen this processed with more detail of the actual delay, but
that is when focusing on single measurements. The danger there is that
numeric precision eats you quickly.
Now, the variation you get is really a systematic play on the period
time and the tau_0 and the phase-ramp you get out of that. This breaks
down into other phase-ramps of diminishing frequency and amplitude, just
as in a DDS. This systematic pattern rolls off quickly in averaging while
random noise does not roll off as quickly. The systematic pattern can be
"nulled" by matching the averaging length to the pattern length, as
always. You can't really resolve this systematic noise before you know
the relationship; rather, it is a consequence of the actual rational
number and how you choose to measure it. Random noise tends to smooth
things out.
You need to compare the noise of the tau_0 "instability" with that of
the signal and the time-interval measurement error, it's fairly small
compared to the others together typically.
Now, the algorithm you have in that paper does not handle gaps in data.
It assumes a continuous block. Essentially, the linear ramp of phase and
frequency needs to be unbroken or it will produce the wrong results. You
can handle gapped data by altering the algorithm; it will be a little
messier, but still maintain most of the benefits. Gapped data is a big
thing, and valuable work has been done for ADEV by Dave Howe.
Cheers,
Magnus
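Magnus's point above, that capture instants which jitter around the nominal grid can still be treated as sitting at exact k*tau_0 spacing, can be sketched numerically. This is a hypothetical illustration, not his actual algorithm: the spacing, jitter bounds, frequency offset and noise levels are all made-up values chosen only to show that the spacing error averages out.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 10_000
tau0 = 0.01          # assumed time-base spacing (hypothetical)
y0 = 2e-11           # true fractional frequency offset (hypothetical)

# Actual capture instants wander within one 10 MHz period (+/-50 ns)
# but are tau0 apart on average, as in the discussion above.
t_actual = np.arange(n) * tau0 + rng.uniform(-50e-9, 50e-9, n)

# Phase (time-error) samples with some measurement noise on top
x = y0 * t_actual + rng.normal(0, 1e-12, n)

# Least-squares slope, pretending the samples sit exactly on the
# k*tau0 grid. The spacing error enters as extra noise of size
# y0*jitter (~1e-18 s here), negligible next to the 1e-12 s
# measurement noise, and it averages out over the fit.
t_grid = np.arange(n) * tau0
y_est = np.polyfit(t_grid, x, 1)[0]

print(y_est)   # close to y0 = 2e-11
```

The closed-form simplification Magnus mentions comes from the same equal-spacing assumption: with t_k = k*tau_0 the normal equations of the regression collapse to fixed sums over k, which is what makes the reduced math and the decimation tricks possible.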
BK
Bob kb8tq
Sun, Aug 7, 2022 9:02 PM
Hi
There are a lot of ways to deal with gaps. The best one is not to
have them in the first place :). It is not uncommon to ignore the
gap, but that does create issues. It also is not uncommon to plug
in “average” data. Again, issues are created. The hope is that they
are not as significant as ignoring it. How you generate that average
data …. that depends …..
So, no perfect solutions once you have a gap. If a setup produces
gaps on a regular basis, that’s probably not a really good way to
do things.
Bob
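The two imperfect options Bob describes, ignoring the gap versus plugging in "average" data, can be sketched on a toy record. The numbers are made up, and the straight-line fill is just one possible way to generate the average data; as Bob says, how you do that depends.

```python
import numpy as np

# Hypothetical fractional-frequency record with a two-sample gap (NaN)
y = np.array([1.0, 1.2, 0.9, np.nan, np.nan, 1.1, 1.0, 0.8])

idx = np.arange(len(y))
good = ~np.isnan(y)

# Option 1: ignore the gap. The two blocks get concatenated, which
# silently shifts the time axis of everything after the gap.
y_ignored = y[good]

# Option 2: plug in "average" data, here a straight-line fill between
# the last good sample before the gap and the first one after it.
y_filled = y.copy()
y_filled[~good] = np.interp(idx[~good], idx[good], y[good])

print(y_ignored)   # 6 samples, time axis broken
print(y_filled)    # 8 samples, gap bridged with synthetic data
```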
MD
Magnus Danielson
Mon, Aug 8, 2022 10:03 AM
Hi,
I am talking about very recent research that was presented at the
IFCS-EFTF 2022 and also published as a UFFC article.
The observation is that the second-derivative phase data are independent,
so you can reuse samples as long as the phase & frequency transition is
respected and no pair of frequency estimates re-occurs, which can be
achieved. The biases and quality tests turn out to be really good.
The basic assumption is also what is used to prove that overlapping ADEV
measures are independent and thus can be used without creating bias to
the ADEV, but rather as just a better estimator for ADEV.
The actual code is on github.
Cheers,
Magnus
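The standard overlapping ADEV estimator Magnus refers to can be sketched as below. This is a textbook implementation, not the GitHub code he mentions, and the white-FM noise level is a made-up value for the demo. The point is that every starting index is used, which lowers the variance of the estimate without biasing it.

```python
import numpy as np

def overlapping_adev(x, tau0, m):
    """Overlapping Allan deviation from phase data x (seconds) sampled
    at spacing tau0, for averaging factor m (tau = m*tau0). Second
    differences at stride m are taken at every starting index."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    d2 = x[2 * m:] - 2 * x[m:n - m] + x[:n - 2 * m]
    avar = np.mean(d2 ** 2) / (2.0 * (m * tau0) ** 2)
    return np.sqrt(avar)

# White frequency noise: ADEV should fall roughly as 1/sqrt(tau)
rng = np.random.default_rng(0)
tau0 = 1.0
y = rng.normal(0, 1e-11, 100_000)                  # fractional frequency
x = np.concatenate(([0.0], np.cumsum(y))) * tau0   # integrate to phase
for m in (1, 10, 100):
    print(m, overlapping_adev(x, tau0, m))
```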