During testing of some oscillators that were not exactly on the right
frequency, it appears that there is a measurement error in the frequency
readings at intervals of half the inverse of the frequency difference
between the reference and the measured signal, when the gate time of the
frequency meter is set to 20 ms.
An example plot can be found here:
http://athome.kaashoek.com/time-nuts/Frequency_pulling.png
It shows the measured frequency with 0.5Hz (blue) and 1Hz (pink)
frequency difference.
The normal frequency variations measured with a gate time of 20 ms are
below 2e-9, but when (presumably) an edge of the reference and an edge
of the input signal coincide, the frequency deviation can go up to
almost 1e-8.
As you can understand, this effect is not visible with a larger gate
time, such as 50 ms or more.
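To put numbers on this, here is a quick Python sketch of the spike spacing implied by the description above (interval = 1/(2·Δf); the factor of two is taken directly from the observation, not derived), using the 0.5 Hz and 1 Hz offsets from the plot:

```python
# Expected spacing of the error events: with a frequency offset df between
# reference and input, the relative phase repeats every 1/df seconds, and
# per the observation above the error recurs at half that, 1/(2*df).

def spike_interval(df_hz: float) -> float:
    """Expected time in seconds between error events for offset df_hz."""
    return 1.0 / (2.0 * df_hz)

for df in (0.5, 1.0):
    print(f"df = {df} Hz -> error every {spike_interval(df)} s")
```

With the 0.5 Hz (blue) trace that predicts an event every 1 s, and every 0.5 s for the 1 Hz (pink) trace.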
Is this normal behavior for a frequency meter?
Does this imply that when you measure a signal that is phase locked to a
reference with a 20 ms gate time, you may have substantially larger
noise in the measurement because you have locked the system in the
worst-case phase relation?
The frequency meter used is a Picotest U6200A with an external
reference. The results are the same when using the internal reference
but the internal reference has much more phase noise obscuring the
effect a bit.
Erik.
Hi
It is not at all uncommon to find that this or that measuring instrument
has various issues. On some of the more popular gear, they are pretty
well known. There might even be tech notes diving into just why this or
that issue pops up. With the more obscure gear, you may be the first
to spot the problem ….
There are a number of instruments that have a “dead zone” when the
input is very close to the reference. Just how close that is gets into the
details on that specific device. In some cases 0.1 Hz at 10 MHz is just
starting to get “close enough”. Is that the case here or is it something
different? Time to dig deeper :) :) :)
Bob
On Jul 20, 2022, at 7:34 AM, Erik Kaashoek via time-nuts time-nuts@lists.febo.com wrote:
time-nuts mailing list -- time-nuts@lists.febo.com
To unsubscribe send an email to time-nuts-leave@lists.febo.com
Erik,
Yes, those ugly effects are expected but unwelcome. You can consider it
a measure of the [poor] quality of the instrument.
The test setup that I use is a very stable external 10 MHz reference and
a very stable input of 10 MHz plus, say, 1e-10. This creates a growing
"calibrated" phase difference of 100 ps per second, so over a run
lasting 1000 seconds you have covered an entire 100 ns period of the
clocks. The actual values aren't important, the key is simply to use two
references that are stable and not the same frequency, and take your
time so you carefully probe all phase points. It's a little like
spectral analysis.
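The arithmetic of that setup, as a small Python sketch (all values are the ones quoted above; nothing new is assumed):

```python
# A 10 MHz input offset by a fractional frequency of 1e-10 slews its phase
# against the reference at offset * 1 s = 100 ps per second, so sweeping
# one full 100 ns period of the clocks takes period / slew_rate = 1000 s.

f0 = 10e6            # nominal frequency, Hz
frac_offset = 1e-10  # fractional frequency offset of the input
period = 1.0 / f0    # one carrier period: 100 ns

slew_per_second = frac_offset * 1.0           # seconds of phase per second
seconds_to_cover_period = period / slew_per_second

print(f"phase slew: {slew_per_second * 1e12:.0f} ps/s")
print(f"full period covered in {seconds_to_cover_period:.0f} s")
```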
Even with a commercial counter like the hp 53132A you will notice
pulling effects. There are many variations of the test, depending on
whether you use just one input or two inputs, how you use the ext ref
input, whether you are in timestamp mode, time interval mode, or
frequency mode, or even how long your cables are. A while ago I tested
several 53132s with a slightly offset 5 MHz input and found phase
pulling by as much as 200 ps, depending on where in the 100 ns ref cycle
or where in the 200 ns input cycle the measurement was made.
http://leapsecond.com/pages/53132/
I may have mentioned this before, but many people test a counter by
splitting a good oscillator to the rear 10 MHz ext ref input and to the
channel A input. Then they record lots of readings that say 10.000
000 000 00x and are impressed with how good their counter is. This is a
poor test because the counter is only working at one spot in its entire
range of internal phase or interpolator relationships.
It's much better to feed in a second, independent frequency that's close
to but not equal to the 10 MHz reference and then collect enough hours
of readings to see repetitive effects. Over time this will expose all
the internal phase relationships and phase or frequency pulling will
become obvious. Fixing this is difficult because it's the result of
subtle interaction among components, PCB layout, shielding, etc. But the
smaller these effects are the better the counter.
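A toy Python model of that point; the sinusoidal error shape and its 100 ps amplitude are purely illustrative assumptions, not a model of any specific counter:

```python
# Why the "split one oscillator" test hides phase-dependent errors.
# Assume (hypothetically) the counter adds a small timestamp error that
# depends on where the input edge falls within the 100 ns reference cycle.
# A locked input always samples the same phase point; a slightly offset
# input drifts through every phase point and exposes the full error range.

import math

REF_PERIOD = 100e-9   # one 10 MHz reference cycle

def interp_error(phase_in_cycle: float) -> float:
    """Hypothetical phase-dependent timestamp error, +/-100 ps peak."""
    return 100e-12 * math.sin(2 * math.pi * phase_in_cycle / REF_PERIOD)

# Locked input: the edge always lands at the same point in the ref cycle.
locked_errors = [interp_error(25e-9) for _ in range(1000)]

# Offset input: the edge drifts through the whole 100 ns cycle over the run.
offset_errors = [interp_error(i * 0.1e-9 % REF_PERIOD) for i in range(1000)]

print("locked spread:", max(locked_errors) - min(locked_errors))
print("offset spread:", max(offset_errors) - min(offset_errors))
```

The locked case shows zero spread no matter how many readings you take; the offset case exposes the full 200 ps peak-to-peak error.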
/tvb
On 7/20/2022 8:34 AM, Erik Kaashoek via time-nuts wrote:
Hi Tom,
I did the test as you described for measuring the phase difference.
When using sine signals there is a gradual back-and-forth shift over
1000 s in the measured phase residue, on the order of 3e-10, but no
sudden jumps.
When measuring the frequency there is an extremely narrow window in the
gradual phase shift where the frequency jumps to a value that is
proportional to the difference in frequency. The problem with measuring
this is that it is only really visible with a gate time of less than
0.1 s, and it is most pronounced with a 0.02 s gate time and a frequency
difference of 0.01 Hz. A frequency difference of 0.001 Hz disappears
into the noise. Larger frequency differences change so quickly that the
effect is much shorter than the gate time, so it again becomes invisible.
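One rough way to see why the gate time matters, as a hedged Python sketch; the 0.5%-of-a-beat-cycle coincidence-window width is an assumption chosen only for illustration:

```python
# The spike lasts while the relative phase sits inside a narrow window.
# Assuming (hypothetically) that window spans 0.5% of one beat cycle,
# its duration in seconds is window / df: larger offsets sweep through
# it faster, so the spike shrinks relative to the gate time and is
# averaged away in the reading.

WINDOW_CYCLES = 0.005   # assumed width of the coincidence window, in cycles

def spike_duration(df_hz: float) -> float:
    """Seconds the beat phase spends inside the coincidence window."""
    return WINDOW_CYCLES / df_hz

for df, gate in [(0.01, 0.02), (0.5, 0.02)]:
    d = spike_duration(df)
    tag = "resolvable" if d >= gate else "averaged out"
    print(f"df={df} Hz: spike ~{d:.3f} s vs gate {gate} s ({tag})")
```

With those numbers, a 0.01 Hz offset gives a spike long enough to dominate a 20 ms gate, while a 0.5 Hz offset gives one shorter than the gate, consistent with the effect disappearing at larger offsets.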
What I learned from this is that I should check whether sudden peaks in
the frequency measurement could originate from a small, but not too
small, frequency difference combined with a matching short gate time.
Not a big problem, just something to be aware of.
An example of these spikes can be found here:
http://athome.kaashoek.com/time-nuts/Spikes.png
The pink trace has the smallest frequency difference and therefore the
smallest spikes.
The blue trace has a bigger frequency difference, but as soon as I
change the gate time to 0.1 s the spikes disappear.
The variations in the residue of the measured phase with the shifting
sine input phase can be seen in this plot:
http://athome.kaashoek.com/time-nuts/Phase_shift_0.001Hz.png
Blue is the Picotest counter with 0.02 s gate time. Maybe it is caused
by small gradual changes in the effective trigger level.
Pink is my DIY counter, which shows much more pulling; expected, because
part of it is still discrete logic with long connecting wires.
Thanks for all the help.
Erik.
On 20-7-2022 20:04, Tom Van Baak via time-nuts wrote: