TD
Tijd Dingen
Fri, May 13, 2011 3:28 PM
In trying to put together a way to calculate Allan variance based on a series of timestamps of every Nth cycle, I ran into the following...
Suppose you have an input signal whose frequency is a bit on the high side, so you use a prescaler to divide it down to a manageable range. And now you want to use that divided signal to say something useful about the original high-frequency signal.
Now taking a look at the part about "Non-overlapped variable tau estimators" in the wikipedia article here:
http://en.wikipedia.org/wiki/Allan_variance#Non-overlapped_variable_.CF.84_estimators
It seems to me that "divide by 4 and then measure all cycles back-to-back" is essentially the same as "measure all cycles of the undivided high frequency signal back-to-back" and decimate. Or "skipping past n − 1 samples" as the wiki article puts it. And that is disregarding /extra/ jitter due to the divider, purely for the sake of simplicity.
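To make that concrete, here is a quick numpy sketch I put together (the 100 MHz input, the /4 division and the 50 ps of jitter are made-up numbers): timestamping every Nth cycle gives exactly the same phase samples as timestamping every cycle and keeping every Nth, so a non-overlapped ADEV at tau >= N/f0 comes out identical either way, divider jitter aside.

import numpy as np

def adev_from_phase(x, tau0, m):
    # non-overlapped Allan deviation at tau = m*tau0 from phase-error samples x (seconds)
    x = np.asarray(x, dtype=float)
    idx = np.arange(0, len(x) - 2 * m, m)                 # stride by m: non-overlapped
    d2 = x[idx + 2 * m] - 2.0 * x[idx + m] + x[idx]       # second difference of phase
    return np.sqrt(np.mean(d2 ** 2) / (2.0 * (m * tau0) ** 2))

rng = np.random.default_rng(0)
f0, N, n = 100e6, 4, 1 << 16                              # made-up 100 MHz input, /4 prescaler
t = np.arange(n) / f0 + 50e-12 * rng.standard_normal(n)   # per-cycle timestamps, 50 ps jitter
x_full = t - np.arange(n) / f0                            # phase error of every cycle
x_div = x_full[::N]                                       # "every Nth cycle" = decimated series

for k in (1, 4, 16):                                      # tau = k*N/f0
    print(adev_from_phase(x_full, 1 / f0, k * N),         # all cycles, then skip n-1 samples
          adev_from_phase(x_div, N / f0, k))              # divided-by-N timestamps: same numbers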
Plus, I strongly suspect that all these commercial counters that can handle 6 GHz and such are not timestamping every single cycle back-to-back either. Especially the product lines that come in a few versions: a cheaper one that can handle 300 MHz, for example, and a more expensive one that can handle 6 GHz. That reads like: "All models share the same basic data processing core and the same time interpolators. For the more expensive model we just slapped on a high-bandwidth input plus a prescaler."
Anyway, are there any drawbacks to calculating the Allan variance of a divided signal that I am overlooking here?
regards,
Fred
BC
Bob Camp
Fri, May 13, 2011 4:05 PM
Hi
For AVAR you want a time record, not a frequency measure. Your time stamps
are a direct phase estimate. They are what you would use directly for the
AVAR calculation. If they come faster than your shortest tau, all is well.
Divide, mix down, whatever; just stamp faster than the shortest tau.
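A minimal sketch of what I mean (toy numbers, a made-up 1 kS/s stamping rate): subtract the expected k*tau0 ramp from the stamps and you already have the phase record x; the phase second-difference form and the two-sample frequency form then give the same AVAR.

import numpy as np

rng = np.random.default_rng(1)
tau0, n = 1e-3, 100_000                                   # assumed stamping interval: 1 ms
t = np.arange(n) * tau0 + 1e-9 * rng.standard_normal(n)   # raw timestamps with 1 ns of jitter
x = t - np.arange(n) * tau0                               # phase (time error) record

# AVAR at tau0 straight from the phase record
avar_phase = np.mean(np.diff(x, 2) ** 2) / (2.0 * tau0 ** 2)

# same thing via average fractional frequency y[k] = (x[k+1] - x[k]) / tau0
y = np.diff(x) / tau0
avar_freq = np.mean(np.diff(y) ** 2) / 2.0

print(avar_phase, avar_freq)                              # identical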
Bob
TD
Tijd Dingen
Fri, May 13, 2011 4:23 PM
Hi Bob,
Precisely the kind of sanity check I was looking for. Thank you!
regards,
Fred
MD
Magnus Danielson
Sat, May 14, 2011 7:47 AM
On 05/13/2011 05:28 PM, Tijd Dingen wrote:
> In trying to put together a way to calculate Allan variance based on a series of timestamps of every Nth cycle, I ran into the following...
>
> Suppose you have an input signal, but it's a bit on the high side. So you use a prescaler to divide it down to a manageable frequency range. And now you want to use that signal to be able to say something useful about the original high frequency signal.
>
> Now taking a look at the part about "Non-overlapped variable tau estimators" in the wikipedia article here:
>
> http://en.wikipedia.org/wiki/Allan_variance#Non-overlapped_variable_.CF.84_estimators
Nice to see people actually read and use what I wrote.
> It seems to me that "divide by 4 and then measure all cycles back-to-back" is essentially the same as "measure all cycles of the undivided high frequency signal back-to-back" and decimate. Or "skipping past n − 1 samples" as the wiki article puts it. And that is disregarding /extra/ jitter due to the divider, purely for the sake of simplicity.
If you use a prescaler of say 1/64, then it takes 64 cycles of the
original signal to produce one cycle at the counter core. These are then
time-stamped, i.e. a time measure and an event-counter measure are taken.
To transform the event-counter value into the properly scaled event
value, you then multiply the event counter by 64, since 64 times more
events occurred than the counter saw. The time-stamps do not have
to be modified.
Notice that the pre-scaler is only used for higher frequencies.
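A small sketch of that bookkeeping (the /64 ratio is from the example above; the 640 MHz input and the ideal stamps are just placeholders):

import numpy as np

PRESCALE = 64                                    # /64 prescaler in front of the counter core
f_nominal = 640e6                                # assumed input frequency

core_events = np.arange(1, 1001)                 # event count as seen by the core: 1, 2, 3, ...
timestamps = core_events * PRESCALE / f_nominal  # stamps of every 64th input edge (ideal here)

input_cycles = core_events * PRESCALE            # rescale core events to input cycles
x = timestamps - input_cycles / f_nominal        # phase error vs. nominal; stamps untouched
y = np.diff(timestamps) / (np.diff(input_cycles) / f_nominal) - 1.0   # fractional frequency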
> Plus, I strongly suspect that all these commercial counters that can
> handle 6 Ghz and such are not timestamping every single cycle
> back-to-back either. Especially the models that have a few versions
> in the series. One cheaper one that can handle 300 MHz for example,
> and a more expensive one that can handle 6 GHz. That reads like:
> "All models share the same basic data processing core and the same
> time interpolators. For the more expensive model we just slapped on
> an high bandwidth input + a prescaler."
You never time-stamp individual cycles anyway, so a pre-scaler doesn't
make much difference. It does limit the granularity of the tau values you
use, but usually not in a significant way, since Allan variance is rarely
used for taus shorter than 100 ms and, well... the pre-scaled period is usually
below 100 ns, so it isn't a big difference.
> Anyways, any drawbacks to calculating Allan Variance of a divided signal that I am overlooking here?
Nothing significant. It adds to the noise floor, but in practice the
time-stamping and processing don't run into big problems because of it.
Cheers,
Magnus
MD
Magnus Danielson
Sat, May 14, 2011 7:51 AM
On 05/13/2011 06:05 PM, Bob Camp wrote:
> Hi
>
> For AVAR you want a time record not a frequency measure. Your time stamps
> are a direct phase estimate. They are what you would use directly for the
> AVAR calculation. If they are faster than your shortest tau, all is well.
> Divide, mix down, what ever, just stamp faster than the shortest tau.
You can use frequency measures... but there are a number of quirks
hiding in there which can make a frequency-based analysis biased. Using
time-stamps avoids those quirks, but naturally you can get those wrong too...
For instance, use of averaging can be a bad idea. It can be used, but it
needs to be blended in carefully so as not to bias the AVAR measures.
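A toy illustration of one such quirk (numbers made up, white phase noise assumed): block-averaging the phase samples, or equivalently using averaged frequency readings, before taking the Allan differences turns the estimate into something MDEV-like, which reads roughly sqrt(m) low for white PM.

import numpy as np

rng = np.random.default_rng(2)
tau0, m = 1e-3, 100                                    # 1 kS/s phase samples, analyse at 0.1 s
tau = m * tau0
x = 1e-11 * rng.standard_normal(1_000_000)             # white phase noise, in seconds

xd = x[::m]                                            # endpoint phase samples: proper ADEV input
adev_true = np.sqrt(np.mean(np.diff(xd, 2) ** 2) / (2 * tau ** 2))

xa = x[: len(x) // m * m].reshape(-1, m).mean(axis=1)  # block-averaged phase before differencing
adev_avgd = np.sqrt(np.mean(np.diff(xa, 2) ** 2) / (2 * tau ** 2))

print(adev_true, adev_avgd)                            # the averaged estimate reads ~10x low here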
Cheers,
Magnus
TD
Tijd Dingen
Sat, May 14, 2011 11:02 AM
Magnus Danielson wrote:
> > http://en.wikipedia.org/wiki/Allan_variance#Non-overlapped_variable_.CF.84_estimators
> Nice to see people actually read and use what I wrote.
:-)
> If you use a prescaler of say 1/64 then it takes 64 cycles of the original
> signal to cause a cycle to the counter core. These are then time-stamped,
> i.e. a time-measure and an event counter measure is taken. To transform
> the event-counter value into the properly scaled event value, you then
> multiply the event counter by 64, since it took 64 times more events than
> counted by the counter. The time-stamps does not have to be modified.
Okay, that clears things up. Thanks!
> Notice that the pre-scaler is only used for higher frequencies.
Understood. I was just using the prescaler as an example of the "what if
I take every Nth edge" case.
> > Plus, I strongly suspect that all these commercial counters that can
> > handle 6 Ghz and such are not timestamping every single cycle
> > back-to-back either. Especially the models that have a few versions
> > in the series. One cheaper one that can handle 300 MHz for example,
> > and a more expensive one that can handle 6 GHz. That reads like:
> > "All models share the same basic data processing core and the same
> > time interpolators. For the more expensive model we just slapped on
> > an high bandwidth input + a prescaler."
> You never time-stamp individual cycles anyway, so a pre-scaler doesn't do
> much difference. It does limit the granularity of the tau values you use, but
> usually not in a significant way since Allan variance is rarely used for taus
> shorter than 100 ms and well... pre-scaling usually is below 100 ns so it
> isn't a big difference.
Well, I can certainly /try/ to timestamp individual cycles. ;) That way
I can, for example, characterize oscillator startup and such. Right now I can only
spit out a medium-resolution timestamp every cycle for frequencies up to about
400 MHz, and a high-resolution timestamp every cycle for frequencies up to
about 20 MHz.
Medium resolution being on the order of 100 ps, and high resolution being on
the order of 10 ps. The medium resolution is possibly even a little worse than
that due to non-linearities, but there are still a few ways to improve that. It just
requires an awful lot of design handholding to manually route parts of the
FPGA design. I.e.: "I will do that later. Much, much later." ;->
But understood, for Allan variance you don't need timestamps for every individual
cycle.
> > Anyways, any drawbacks to calculating Allan Variance of a divided signal
> > that I am overlooking here?
> No significant, it adds to the noise floor, but in practice the time-stamping and
> processing doesn't have big problems due to it.
Precisely what I was hoping for, thanks! :)
regards,
Fred
MD
Magnus Danielson
Sun, May 15, 2011 1:47 PM
Hi Fred,
On 05/14/2011 01:02 PM, Tijd Dingen wrote:
> Magnus Danielson wrote:
>> Notice that the pre-scaler is only used for higher frequencies.
>
> Understood. I was just using the prescaler as an example for the "what if
> if take every Nth edge".
Consider then the typical measurement setup:
A counter is set up to make a time interval measurement from channel A
to channel B on each occurrence of an external arm trigger. Consider that
a GPS provides a PPS pulse to the external arm input and a 10 MHz signal to
channel A. The DUT provides a 10 MHz signal to channel B.
In this setup there will be 10 million cycles on channels A and B between
triggers. This is not a problem for ADEV/AVAR. The tau will be 1 s or integer
multiples thereof.
However, if you want a quality measure at 1 s, then you had better measure at
a higher rate of, say, 1 kHz in order to get a larger amount of data
without having to wait very long. Algorithmic improvements have been
made to achieve higher quality more quickly from the same data. Overlapping
estimators make fair use of the data for shorter taus.
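As a rough sketch of what that buys you (toy random-walk phase, 60 s of data at 1 kS/s): at tau = 1 s the overlapping estimator forms about a thousand times as many second differences from the same record as the non-overlapped one.

import numpy as np

def adev(x, tau0, m, overlapping=True):
    # Allan deviation at tau = m*tau0 from phase samples x, plus the number of terms used
    step = 1 if overlapping else m
    idx = np.arange(0, len(x) - 2 * m, step)
    d2 = x[idx + 2 * m] - 2.0 * x[idx + m] + x[idx]
    return np.sqrt(np.mean(d2 ** 2) / (2.0 * (m * tau0) ** 2)), len(d2)

rng = np.random.default_rng(3)
tau0, m = 1e-3, 1000                                # 1 kS/s phase record, tau = 1 s
x = np.cumsum(1e-12 * rng.standard_normal(60_000))  # 60 s of toy random-walk phase
print(adev(x, tau0, m, overlapping=True))           # ~58000 second differences
print(adev(x, tau0, m, overlapping=False))          # only ~58 of them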
Notice that you need to adjust your data for cycle-slips. If you don't
do that you will get a significant performance hit, with an ADEV curve
typically several decades higher than expected.
>>> Plus, I strongly suspect that all these commercial counters that can
>>> handle 6 Ghz and such are not timestamping every single cycle
>>> back-to-back either. Especially the models that have a few versions
>>> in the series. One cheaper one that can handle 300 MHz for example,
>>> and a more expensive one that can handle 6 GHz. That reads like:
>>> "All models share the same basic data processing core and the same
>>> time interpolators. For the more expensive model we just slapped on
>>> an high bandwidth input + a prescaler."
>
>> You never time-stamp individual cycles anyway, so a pre-scaler doesn't do
>> much difference. It does limit the granularity of the tau values you use, but
>> usually not in a significant way since Allan variance is rarely used for taus
>> shorter than 100 ms and well... pre-scaling usually is below 100 ns so it
>> isn't a big difference.
>
> Well, I can certainly /try/ to be able to timestamp individual cycles. ;) That way
> I can for example characterize oscillator startup and such. Right now I can only
> spit out a medium resolution timestamp every cycle for frequencies up to about
> 400 Mhz, and a high resolution timestamp every cycle for frequencies up to
> about 20 MHz.
>
> Medium resolution being on the order of 100 ps, and high resolution being on
> the order of 10 ps. The medium resolution is possibly even a little worse than
> that due to non-linearities, but there is still a few ways to improve that. Just
> requires an aweful lot of design handholding to manually route parts of the
> fpga design. I.e: "I will do that later. much much later". ;->
>
> But understood, for Allan variance you don't need timestamps for every indivual
> cycle.
No. Certainly not.
One rate is missing from your discussion: your time-stamp rate, i.e. the
maximum sample rate you can handle, which is limited by the minimum time
between two measurements. For instance, an HP5372A has a maximum sample
rate of 10 MS/s in normal mode (100 ns to store a sample), while in fast
mode it can do 13.33 MS/s (75 ns to store a sample). The interpolator
uses a delay architecture to provide quick turn-around interpolation,
which gives only 200 ps resolution (100 ps resolution is supported in
the architecture if only the boards had been designed for it, so there is a
hidden upgrade which never came about).
Do you mean to say that your low resolution time-stamping rate is 400
MS/s and high resolution time-stamping rate is 20 MS/s?
It is perfectly respectable to skip a number of cycles, but the number
of cycles must be known. One way is to have an event counter which is
sampled; another is to always provide samples at a fixed distance
event-counter-wise, such that the event counter can be rebuilt
afterwards. The latter method saves data, but has the drawback that your
observation period becomes dependent on the frequency of the signal,
which may or may not be what you want, depending on your application.
Recall, you will have to store and process this flood of data. For
higher-tau plots you will be wading in plenty of data anyway, so dropping
high-rate data to achieve a more manageable data rate, in order to be able
to store and process the longer-tau data, is needed.
For most of the ADEV plots on stability, starting at 100 ms or 1 s is
perfectly useful, so a measurement rate of 10 S/s is acceptable.
For high-speed things like startup burps etc. you have a different set of
requirements. A counter capable of doing both would be great, but
they usually don't do it.
>>> Anyways, any drawbacks to calculating Allan Variance of a divided signal
>>> that I am overlooking here?
>
>> No significant, it adds to the noise floor, but in practice the time-stamping and
>> processing doesn't have big problems due to it.
>
> Precisely what I was hoping for, thanks! :)
Great.
Cheers,
Magnus
TD
Tijd Dingen
Sun, May 15, 2011 8:01 PM
Hi Magnus,
Magnus Danielson wrote:
>>> Notice that the pre-scaler is only used for higher frequencies.
>> Understood. I was just using the prescaler as an example for the "what if
>> if take every Nth edge".
> Consider then the typical measurement setup:
> A counter is set up to make a time interval measurement from channel A
> to channel B on each occurrence of a external arm trigger. Consider that
> a GPS provides a PPS pulse to the external arm input and a 10 MHz to the
> channel A. The DUT provides a 10 MHz to the channel B.
> In this setup it will be 10 milion cycles on the channel A and B. This
> is not a problem for ADEV/AVAR. The tau will be that of 1 s or integer
> multiples thereof.
> However, if you want a quality measure at 1 s then you better measure at
> a higher speed of say 1 kHz in order to get higher amount of data
> without having to way veeery long. Algorithmic improvements have been
> done to achieve higher quality quicker on the same data. Overlapping
> measures make fair use of data for shorter taus.
Check. That is what I understood the "Overlapped variable tau estimators"
bit on wikipedia to be about. Same raw data, smarter processing.
> Notice that you need to adjust your data for cycle-slips. If you don't
> do that you will get a significant performance hit with typical several
> decades higher ADEV curve than expected.
"Adjust for cycle-slips"... You mean the following ... ?
Your processing back-end receives a series of timestamps from the timestamper.
The timestamper claims "this is the timestamp for cycle number XYZ. No, really!".
However, you notice that given the distribution of all the other (cycle_no, time)
pairs this would be hard to believe. If however you assume +1 on that
"claimed" cycle number, then it would fit perfectly. So you adjust the cycle number
by one, under the assumption that /somewhere/ 1 cycle got lost. Somewhere being
a PLL cycle slip, an FPGA counter missing a count, etc...
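In rough (and entirely hypothetical) code, the adjustment I have in mind would be something like this, assuming the timestamp jitter is well under one input cycle:

import numpy as np

def fix_lost_counts(claimed_cycle_no, timestamps, f_estimate):
    # predict the cycle number from the timestamp and a running frequency estimate,
    # and if the claimed count is off by a whole number of cycles, shift it by that amount
    claimed = np.asarray(claimed_cycle_no, dtype=float)
    t = np.asarray(timestamps, dtype=float)
    predicted = claimed[0] + (t - t[0]) * f_estimate
    slip = np.round(predicted - claimed)          # whole cycles lost (+) or gained (-)
    return claimed + slip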
That is the sort of adjustment you mean, I take it? If yes, then understood. If not, I'm afraid
I don't follow. :)
>>> You never time-stamp individual cycles anyway, so a pre-scaler doesn't do
>>> much difference. It does limit the granularity of the tau values you use, but
>>> usually not in a significant way since Allan variance is rarely used for taus
>>> shorter than 100 ms and well... pre-scaling usually is below 100 ns so it
>>> isn't a big difference.
>> Well, I can certainly /try/ to be able to timestamp individual cycles. ;) That way
>> I can for example characterize oscillator startup and such. Right now I can only
>> spit out a medium resolution timestamp every cycle for frequencies up to about
>> 400 Mhz, and a high resolution timestamp every cycle for frequencies up to
>> about 20 MHz.
>> Medium resolution being on the order of 100 ps, and high resolution being on
>> the order of 10 ps. The medium resolution is possibly even a little worse than
>> that due to non-linearities, but there is still a few ways to improve that. Just
>> requires an aweful lot of design handholding to manually route parts of the
>> fpga design. I.e: "I will do that later. much much later". ;->
>> But understood, for Allan variance you don't need timestamps for every indivual
>> cycle.
> No. Certainly not.
> I do lack one rate in your discussion, your time-stamp rate, i.e. the
> maximum sample-rate you can handle, being limited to minimum time
> between two measurements. For instance, a HP5372A has a maximum sample
> rate of 10 MS/s in normal mode (100 ns to store a sample) while in fast
> mode it can do 13,33 MS/s (75 ns to store a sample). The interpolator
> uses a delay architecture to provide quick turn-around interpolation
> which gives only 200 ps resolution (100 ps resolution is supported in
> the architecture if only boards would be designed for it, so there is a
> hidden upgrade which never came about).
> Do you mean to say that your low resolution time-stamping rate is 400
> MS/s and high resolution time-stamping rate is 20 MS/s?
That is what I mean to say. There are still design issues with both modes,
so it could become better, could become worse. Knowing how reality works,
probably worse. ;-> But those numbers are roughly it, yes.
At the current stage: 200 MS/s at the lower resolution is easy. 400 MS/s
is trickier.
The reason: 400 MHz is the full tap sampling speed, and I can barely keep
up with the data. The data is from a 192-tap delay line, incidentally.
Active length is typically about 130 taps, but I have to design for the worst case.
Or rather the best case, because needing all those taps to fit a 2.5 ns cycle
would be really good news. ;) But hey, we can always hope for fast silicon,
right?
Anyways, the first 5 pipeline stages right after the taps work at 400 MHz.
The second part (as it happens, also 5 stages) works at 200 MHz, if only
for the simple reason that the block RAM in a Spartan-6 has a maximum frequency
of about 280 MHz. So the 200 MHz pipeline processes 2 timestamps in parallel.
For this part of the processing I have spent more design effort on the modules
that are responsible for the high-resolution timestamps. So low resolution:
200 MS/s == done. 400 MS/s == working on it. :P
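For the curious, the fine part of a timestamp comes out of that tap data roughly like this (a much simplified sketch; the tap delay, the sign convention and the names are placeholders, and the real thing needs per-tap calibration, which is where those non-linearities come from):

import numpy as np

SYS_CLK = 400e6                           # tap sampling clock, 2.5 ns window
TAP_DELAY = 2.5e-9 / 130                  # assumed average tap delay (~19 ps) from calibration

def tdc_timestamp(coarse_count, tap_snapshot):
    # coarse counter value + thermometer-coded delay-line snapshot -> timestamp in seconds;
    # the more taps the edge has propagated through by the sample instant, the earlier it arrived
    taps_passed = int(np.count_nonzero(tap_snapshot))
    return coarse_count / SYS_CLK - taps_passed * TAP_DELAY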
> It is perfectly respectable to skip a number of cycles, but the number
> of cycles must be known. One way is to have an event-counter which is
> sampled, or you always provide samples at a fixed distance
> event-counter-wise such that the event-counter can be rebuilt
> afterwards. The later method save data, but have the draw-back that your
> observation period becomes dependent on the frequency of the signal
> which may or may not be what you want, depending on your application.
What I have now is an event counter which is sampled.
> Recall, you will have to store and process this flood of data. For
> higher tau plots you will wade in sufficiently high amounts of data
> anyway, so dropping high frequency data to achieve a more manageable
> data rate in order to be able to store and process the longer tau data
> is needed.
Heh, "store and process this flood of data" is the reason why I'm at
revision numero 3 for the frigging taps processor. :P But oh well,
good for my pipeline design skills.
> For most of the ADEV plots on stability, starting at 100 ms or 1 s is
> perfectly useful, so a measurement rate of 10 S/s is acceptable.
Well, that would be too easy. Where's the fun in that?
> For high speed things like startup burps etc. you have a different
> requirement race. A counter capable of doing both will be great, but
> they usually don't do it.
Check. For me the main purpose of this thing is:
1 - learn new things
2 - be able to measure frequency with accuracy comparable to current commercial counters
3 - monitor frequency stability
Anyways, for now the mandate is to be able to spit out as many timestamps as I can get away
with, and then figure out fun ways to process them. ;)
regards,
Fred
MD
Magnus Danielson
Sun, May 15, 2011 9:31 PM
Hi Fred,
On 05/15/2011 10:01 PM, Tijd Dingen wrote:
> Check. That is what I understood the "Overlapped variable tau estimators"
> bit on wikipedia to be about. Same raw data, smarter processing.
Indeed.
>> Notice that you need to adjust your data for cycle-slips. If you don't
>> do that you will get a significant performance hit with typical several
>> decades higher ADEV curve than expected.
>
> "Adjust for cycle-slips"... You mean the following ... ?
>
> Your processing back-end receives a series of timestamps from the
> timestamper.
> The timestamper claims "this is the timestamp for cycle number XYZ. No,
> really!".
> However you notice that given the distribution of all the other
> (cycle_no, time)
> pairs this would be hard to believe. If however you would assume add +1
> to that
> "claimed" cycle number, then it would perfectly. So you adjust the cycle
> number
> by one, under the assumption that /somewhere/ 1 cycle got lost.
> Somewhere being
> a PLL cycle slip, an fpga counter missing a count, etc...
>
> That sort of adjustment I take it? If yes, then understood. If not, I'm
> afraid
> I don't follow. :)
Consider that your time-base and your measurement signal are 1E-10 apart
in the case I gave, and let's assume 10 MHz clocks. This
means that their phases will slip by 1E-10 * 1E7 = 1E-3 of a cycle
every second, so after 1000 seconds they will have slipped a full cycle, and
somewhere in that data there will be a 99.9 ns to 0.0 ns (or vice versa)
transition. This is not because the counter missed a cycle, but because the
signals actually beat against each other. Now, if you take that
time-series and do an ADEV on it, you will get a surprise. If you correct
it such that you extend the curve by +100 ns or -100 ns (i.e. add or
subtract a cycle, but expressed as a time shift), you will get a more
correct TI curve from which you will get your expected results.
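A small sketch of that correction (the half-period threshold is my choice; it assumes the true phase never moves more than half a cycle between readings, which holds easily here):

import numpy as np

PERIOD = 100e-9                                        # one cycle of the 10 MHz carrier

def unwrap_ti(ti_readings, period=PERIOD):
    # remove the 0..period wraps from a time-interval series (the +/-100 ns correction)
    ti = np.asarray(ti_readings, dtype=float)
    jumps = np.diff(ti)
    corrections = -period * np.round(jumps / period)   # a jump near a full period is a wrap
    return ti + np.concatenate(([0.0], np.cumsum(corrections)))

# example: a 0.1 ns/s drift that keeps wrapping from ~99.9 ns back to ~0 ns
raw = (50e-9 + np.arange(2000) * 1e-10) % PERIOD
continuous = unwrap_ti(raw)                            # now runs smoothly through 100 ns, 200 ns, ...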
>> Do you mean to say that your low resolution time-stamping rate is 400
>> MS/s and high resolution time-stamping rate is 20 MS/s?
>
> That is what I mean to say. There are still design issues with both modes,
> so could become better, could become worse. Knowing how reality works,
> probably worse. ;-> But those numbers are roughly it yes.
Great, now we speak the same language on that aspect.
> At the current stage: 200 MS/s at the lower resolution is easy. 400 MS/s
> is trickier.
>
> The reason: 400 MHz is the full tap sampling speed, and I can barely keep
> up with the data. The data is from a 192 tap delay line incidentally.
> Active length is typically about 130 taps, but I have to design worst case.
> Or rather best case, because needing all those taps to fit a 2.5 ns cycle
> would be really good news. ;) But hey, we can always hope for fast silicon
> right?
See what a little fiddling with temperature and core voltage can do for
you :)
> Anyways, the first 5 pipeline stages right after the taps work at 400 MHz.
> The second part (as it happens also 5 stages) works at 200 MHz. If only
> for the simple reason that the block ram in a spartan-6 has a max frequency
> of about 280 MHz. So the 200 MHz pipeline processes 2 timestamps in
> parallel.
>
> For this part of the processing I have spent more design effort on the
> modules
> that are responsible for the high resolution timestamps. So low resolution,
> 200 MS/s == done. 400 Ms/s == working on it. :P
I guess you learn by doing, right? :)
>> It is perfectly respectable to skip a number of cycles, but the number
>> of cycles must be known. One way is to have an event-counter which is
>> sampled, or you always provide samples at a fixed distance
>> event-counter-wise such that the event-counter can be rebuilt
>> afterwards. The later method save data, but have the draw-back that your
>> observation period becomes dependent on the frequency of the signal
>> which may or may not be what you want, depending on your application.
>
> What I have now is an event counter which is sampled.
Great.
>> Recall, you will have to store and process this flood of data. For
>> higher tau plots you will wade in sufficiently high amounts of data
>> anyway, so dropping high frequency data to achieve a more manageable
>> data rate in order to be able to store and process the longer tau data
>> is needed.
>
> Heh, "store and process this flood of data" is the reason why I'm at
> revision numero 3 for the frigging taps processor. :P But oh well,
> good for my pipeline design skills.
Definitely. It's a learning process to unroll things which one
understands well in sequential processing but all of a sudden needs to
do in parallel.
>> For most of the ADEV plots on stability, starting at 100 ms or 1 s is
>> perfectly useful, so a measurement rate of 10 S/s is acceptable.
>
> Well, that would be too easy. Where's the fun in that?
True, where's the fun in that :)
>> For high speed things like startup burps etc. you have a different
>> requirement race. A counter capable of doing both will be great, but
>> they usually don't do it.
>
> Check. For me the main purpose of this thing is:
> 1 - learn new things
> 2 - be able to measure frequency with accuracy comparable to current
> commercial counters
> 3 - monitor frequency stability
All good goals. Don't let me get in the way of them... :)
> Anyways, for now the mandate is to be able to spit out as many
> timestamps as I can get away
> with, and then figure out fun ways to process them. ;)
Also interesting. I'm considering similar projects. I don't have a
Spartan-6 lying around on a board, so I will have to make do with less. I've got
some Spartan-3E boards which should suffice for some initial experience.
Cheers,
Magnus