time-nuts@lists.febo.com

Discussion of precise time and frequency measurement


Re: [time-nuts] TEC party file format?

Hal Murray
Tue, Jun 28, 2011 9:22 PM

So the PC itself becomes your frequency counter, with NTP providing the
long-term timebase stability you need.

Mains cycles-per-second become RS232 bytes-per-second. You will get, on
average, 60 bytes per second. At the end of a "perfect" day you would have
read 5184000 characters. In Europe, it's 4320000 (50 Hz * 86400).
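
A rough sketch of what the PC side could look like (Python with pyserial;
the port name, baud rate, and read size are just placeholders):

import time
import serial   # pyserial

port = serial.Serial('/dev/ttyS0', 9600, timeout=1)

counts = {}                      # UTC second -> bytes seen in that second
while True:
    data = port.read(64)         # whatever has arrived, up to 64 bytes
    now = int(time.time())       # NTP-disciplined system clock is the timebase
    if data:
        counts[now] = counts.get(now, 0) + len(data)
    # A "perfect" 60 Hz day sums to 60 * 86400 = 5184000 bytes;
    # a 50 Hz day sums to 50 * 86400 = 4320000.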

I see two interesting problems with this sort of approach.

One is glitches on the line, either lightning/whatever causing extra counts,
or dropouts causing missed cycles.  Does anybody know how often this sort of
stuff happens?  Does anybody have scope pictures?

The other would be glitches on the PC.  I can easily keep a system running
for a week or a month, but every now and then I need to move the power plug
or I get the urge to play with some software and ...

If I have a day of good data, then a break, then more good data, how long can
the break be so that I can correctly guess the number of cycles that were
missed?  It depends upon how much the frequency changes.  If I extrapolate
forward from before the break and back from after, the lines will intersect.
If I can estimate the error in the slope of those lines I can see what
happens if I use the high/low error cases.
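
To put numbers on that, a little Python sketch of the high/low bracketing
(numpy assumed; the array names are mine, and real data would want the
glitch rejection done first):

import numpy as np

def missed_cycles(t_before, n_before, t_after, n_after):
    # t_* are UTC time stamps (seconds), n_* are cumulative cycle counts
    # on each side of the break.
    f_before, _ = np.polyfit(t_before, n_before, 1)   # cycles/second before
    f_after, _  = np.polyfit(t_after,  n_after,  1)   # cycles/second after
    gap = t_after[0] - t_before[-1]                   # length of the break
    est_low  = min(f_before, f_after) * gap
    est_high = max(f_before, f_after) * gap
    # If est_high - est_low is well under one cycle, rounding the midpoint
    # gives the missed-cycle count unambiguously; otherwise the break was
    # too long to bridge.
    return round(0.5 * (est_low + est_high)), est_high - est_low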

One thing that might help get back in cycle sync would be to use a PIC/AVR
rather than a 555.  The idea is that it can do the first layer of data
reduction, say divide by 100.  That would roughly multiply the
get-back-in-sync time by 100.  (assuming not much noise)

The PIC could also send out an RS-232 text message with the count in it, or
modulate the pulse width, say double the width every 10000 cycles.  Mumble,
there are lots of opportunities.
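
For example, if the PIC sent an ASCII line like "C 12345678" every so many
cycles, the PC side only has to attach an NTP time stamp on receipt (the
message format here is invented, just to show the idea):

import time

def parse_count_line(line):
    # e.g. b"C 12345678\r\n" -> (utc_seconds, 12345678)
    fields = line.split()
    if len(fields) == 2 and fields[0] == b"C":
        return time.time(), int(fields[1])
    return None   # ignore garbled or partial lines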

Another idea is that once you get the PIC debugged, you probably won't have
to reboot it just because you are messing with other software on the box.


An alternative would be to feed the 60 Hz into the audio port on the PC.  No
need for the 555 or whatever.  The crystal driving the audio ADC is another
variable but I think that won't be a major problem.

--
These are my opinions, not necessarily my employer's.  I hate spam.

Tom Van Baak
Tue, Jun 28, 2011 9:45 PM

> I see two interesting problems with this sort of approach.
>
> One is glitches on the line, either lightning/whatever causing extra counts,
> or dropouts causing missed cycles.  Does anybody know how often this sort of
> stuff happens?  Does anybody have scope pictures?

The beauty of time-stamping each cycle (each byte) is that you
have a lot of data to work with to identify and correct short-term
glitches like this. Normal frequency counters are at the mercy
of glitches, but continuous time-stamping counters can, if they
want, apply heuristics to a CW sequence and easily pinpoint
cycles that are missing or doubly counted.

Of course if you are missing minutes of data it may be impossible
to know for sure if you're off by a cycle or two. But to isolate bad
data over the span of a few seconds, it's easy.
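
Something like this would do for a first pass (Python; the 0.6x and 1.5x
thresholds are just my guesses):

NOMINAL = 1.0 / 60.0   # nominal cycle period in seconds

def classify_intervals(stamps):
    # stamps: one time stamp per received cycle, in seconds.
    flags = []
    for t0, t1 in zip(stamps, stamps[1:]):
        dt = t1 - t0
        if dt < 0.6 * NOMINAL:
            flags.append(('extra', t1))              # likely a spurious edge
        elif dt > 1.5 * NOMINAL:
            missed = round(dt / NOMINAL) - 1
            flags.append(('missing', t1, missed))    # likely dropped cycles
        # otherwise the cycle looks normal
    return flags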

> The other would be glitches on the PC.  I can easily keep a system running
> for a week or a month, but every now and then I need to move the power plug
> or I get the urge to play with some software and ...

Correct. Then again, all you have to do is split the 60 Hz pulse
train to two serial ports on different PC's. Serial ports are easy
that way.

> The PIC could also send out an RS-232 text message with the count in it, or
> modulate the pulse width, say double the width every 10000 cycles.  Mumble,
> there are lots of opportunities.

See the start of these TEC thread(s). I'm collecting all my data
with a PIC: 60 Hz in, RS232 time-stamps out. I can send one
to you if you want to try it.

> An alternative would be to feed the 60 Hz into the audio port on the PC.  No
> need for the 555 or whatever.  The crystal driving the audio ADC is another
> variable but I think that won't be a major problem.

Yeah, this was mentioned earlier. It turns out the crystal isn't
a problem since all you're doing is counting ~16 ms cycles within
the realtime waveform capture buffer. In this case the ADC isn't
used so much for edge timing as for edge counting. The
sample rate can be low and its stability can be poor.
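
The counting step is only a couple of lines (Python/numpy sketch; 'samples'
is one capture buffer of the isolated, attenuated mains waveform):

import numpy as np

def count_cycles(samples):
    neg = np.signbit(samples)
    # Positive-going zero crossing: previous sample negative, current one not.
    return np.count_nonzero(neg[:-1] & ~neg[1:])

# Only the count per buffer matters; a modest sample-rate error just moves a
# crossing from one buffer into the next, it never creates or loses a cycle.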

/tvb

Tom Van Baak
Tue, Jun 28, 2011 11:26 PM

> If I have a day of good data, then a break, then more good data, how long can
> the break be so that I can correctly guess the number of cycles that were
> missed?  It depends upon how much the frequency changes.  If I extrapolate
> forward from before the break and back from after, the lines will intersect.
> If I can estimate the error in the slope of those lines I can see what
> happens if I use the high/low error cases.

Hal,

Your logic sounds correct. Note that the question of how accurately
you can predict the future time or frequency of a clock, based
on a long record of its past behavior, is exactly what the ADEV
family of statistics gives you. It's all about how much or how little
the frequency changes over time.

I'll send you mains data from yesterday if you want to play with
missing-data algorithms in simulation. I think in this case TDEV or
MTIE is the statistic you want.
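
For anyone who wants to roll their own, a bare-bones non-overlapping ADEV
from phase data is only a few lines (Python/numpy sketch; for real work use
Stable32 or something like the allantools package):

import numpy as np

def adev(x, tau0, m):
    # x: phase samples in seconds, spaced tau0 seconds apart.
    # Returns the Allan deviation at averaging time tau = m * tau0.
    xm = np.asarray(x, dtype=float)[::m]   # decimate phase to the chosen tau
    d2 = np.diff(xm, n=2)                  # second differences of phase
    tau = m * tau0
    return np.sqrt(0.5 * np.mean(d2 ** 2)) / tau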

I had to deal with this issue of data breaks in my relativity clock
experiment on Mt Rainier (www.leapsecond.com/great2005, for
new people on the list). In this case you have stable clocks and
because of time dilation they "slip cycles" so to speak while you
are away from home. The question is: how stable a clock do you
need to have before you are sure you can see relativistic time
dilation and not just nanoseconds of normal clock drift?

> One thing that might help get back in cycle sync would be to use a PIC/AVR
> rather than a 555.  The idea is that it can do the first layer of data
> reduction, say divide by 100.  That would roughly multiply the
> get-back-in-sync time by 100.  (assuming not much noise)

Correct. Or divide to get 1 pulse per minute, or hour, etc. You may
laugh but even dividing by 5184000 (one pulse per day) would still
give you enough points to make a really nice plot of US mains power
timekeeping performance over a year. This illustrates the point that
in some cases (such as this one) occasionally recording time
error (phase) is easier and more reliable than making many
uninterrupted frequency measurements and integrating them all
to estimate net time error.
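
With one stamp per day the phase bookkeeping is trivial (Python sketch;
the names and the epoch handling are placeholders):

def mains_time_error(day_stamps, t0, cycles_per_pulse=5184000, f_nominal=60.0):
    # day_stamps: UTC arrival time of each daily pulse; t0: starting epoch.
    expected_interval = cycles_per_pulse / f_nominal   # 86400 s at 60 Hz
    return [t - (t0 + (k + 1) * expected_interval)
            for k, t in enumerate(day_stamps)]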

/tvb
