usrp-users@lists.ettus.com

Discussion and technical support related to USRP, UHD, RFNoC


[X310] set_command_time introduces unexpected delay dependent on sampling rate.

je.amghar@gmail.com
Wed, Mar 26, 2025 10:13 AM

I'm using timed commands to set the RX gain at a precise moment with the following command:

set_command_time(md.time_spec + uhd::time_spec_t(0.02), 0);
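
For context, the call sits in a sequence roughly like this (a sketch; the
usrp and rx_stream handles and the gain value are placeholders for my
actual setup):

    #include <uhd/usrp/multi_usrp.hpp>
    #include <complex>
    #include <vector>

    // Receive a packet to get a reference timestamp in md.time_spec.
    uhd::rx_metadata_t md;
    std::vector<std::complex<float>> buff(rx_stream->get_max_num_samps());
    rx_stream->recv(&buff.front(), buff.size(), md, 1.0);

    // Everything issued while a command time is set becomes a timed
    // command, executed at that time on the device.
    usrp->set_command_time(md.time_spec + uhd::time_spec_t(0.02), 0);
    usrp->set_rx_gain(30.0, 0); // placeholder gain value

    // Return to immediate (untimed) commands.
    usrp->clear_command_time(0);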

However, I noticed that there is a delay between the specified time and the actual time when the gain is applied. This delay is significantly larger than the component latency responsible for changing the gain and appears to depend on the sampling frequency. Specifically, the delay is approximately 20 samples.

I’m trying to understand why this delay is much greater than the expected component latency and why it scales with the sampling frequency. Any insights on this behavior?

Regards.
Jamaleddine

Marcus D. Leech
Wed, Mar 26, 2025 2:42 PM

On 26/03/2025 06:13, je.amghar@gmail.com wrote:

> I'm using timed commands to set the RX gain at a precise moment with
> the following command:
>
> set_command_time(md.time_spec + uhd::time_spec_t(0.02), 0);
>
> However, I noticed that there is a delay between the specified time
> and the actual time when the gain is applied. This delay is
> significantly larger than the component latency responsible for
> changing the gain and appears to depend on the sampling frequency.
> Specifically, the delay is approximately 20 samples.
>
> I’m trying to understand why this delay is much greater than the
> expected component latency and why it scales with the sampling
> frequency. Any insights on this behavior?
>
> Regards.
> Jamaleddine

A change in signals presented to the head of the DDC chain will take
some number of sample times to propagate through the finite-length
filters in the DDC.  They don't (and, indeed, cannot) have zero group
delay.
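
(As a rule of thumb, a linear-phase FIR filter with N taps has a group
delay of

    (N - 1) / 2   samples, at the filter's input rate

so a 47-tap half-band, for example, delays the signal by 23 samples, and
cascaded stages add.)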

Rob Kossler
Wed, Mar 26, 2025 4:09 PM

On Wed, Mar 26, 2025 at 10:43 AM Marcus D. Leech <patchvonbraun@gmail.com>
wrote:

> On 26/03/2025 06:13, je.amghar@gmail.com wrote:
>
>> I'm using timed commands to set the RX gain at a precise moment with
>> the following command:
>>
>> set_command_time(md.time_spec + uhd::time_spec_t(0.02), 0);
>>
>> However, I noticed that there is a delay between the specified time
>> and the actual time when the gain is applied. This delay is
>> significantly larger than the component latency responsible for
>> changing the gain and appears to depend on the sampling frequency.
>> Specifically, the delay is approximately 20 samples.
>>
>> I’m trying to understand why this delay is much greater than the
>> expected component latency and why it scales with the sampling
>> frequency. Any insights on this behavior?
>>
>> Regards.
>> Jamaleddine
>
> A change in signals presented to the head of the DDC chain will take
> some number of sample times to propagate through the finite-length
> filters in the DDC.  They don't (and, indeed, cannot) have zero group
> delay.

Hi Marcus,
I think that the gain is set from the "radio" block which operates at the
master clock rate rather than the downconverted rate.  It doesn't make
sense to me why the latency of the gain setting would be related to the
downconverted sample rate.
Rob

Marcus D. Leech
Wed, Mar 26, 2025 4:31 PM

On 26/03/2025 12:09, Rob Kossler wrote:

> Hi Marcus,
> I think that the gain is set from the "radio" block which operates at
> the master clock rate rather than the downconverted rate.  It doesn't
> make sense to me why the latency of the gain setting would be related
> to the downconverted sample rate.
> Rob

Let us ignore for a moment the gain-setting hardware on the radio.  Let's
pretend that some noticeable signal parameter, as seen at the antenna
plane, changes suddenly--like the signal level comes up by 5dB.  How long
before that effect is actually seen in the sample stream?  That will
depend on the (very small) group delay in the analog hardware, and the
delay in the DDC filters, which DOES scale with sample-rate, because
different filters are switched-in depending on the commanded sample rate,
and those filters have non-zero length...
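
(To put a number on that scaling, assuming fixed-tap half-band stages: 23
samples of group delay at a half-band's input rate is 11.5 samples at its
output rate, and each further stage in the cascade adds its own share, so
the total delay expressed in output samples stays roughly constant while
the delay in seconds grows as the sample rate drops -- consistent with
seeing "approximately 20 samples" at any rate.)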

Rob Kossler
Wed, Mar 26, 2025 5:56 PM

On Wed, Mar 26, 2025 at 12:32 PM Marcus D. Leech <patchvonbraun@gmail.com>
wrote:

> On 26/03/2025 12:09, Rob Kossler wrote:
>
>> Hi Marcus,
>> I think that the gain is set from the "radio" block which operates at the
>> master clock rate rather than the downconverted rate.  It doesn't make
>> sense to me why the latency of the gain setting would be related to the
>> downconverted sample rate.
>> Rob
>
> Let us ignore for a moment the gain-setting hardware on the radio.  Let's
> pretend that some noticeable signal parameter, as seen at the antenna
> plane, changes suddenly--like the signal level comes up by 5dB.  How long
> before that effect is actually seen in the sample stream?  That will
> depend on the (very small) group delay in the analog hardware, and the
> delay in the DDC filters, which DOES scale with sample-rate, because
> different filters are switched-in depending on the commanded sample rate,
> and those filters have non-zero length...

True. But if the comparison is between the gain setting time stamp and the
Rx samples time stamp (inserted at the radio), it still seems that it would
be sample rate independent (with the caveat that the time stamp resolution
may have to change to the decimated sample rate with some type of
quantization).

je.amghar@gmail.com
Thu, Mar 27, 2025 9:21 AM

Thank you both, Marcus and Rob, for your responses! That clarifies things—filter latency in the FPGA explains the dependency on sample rate. I hadn’t fully considered that before. A signal change at the antenna takes time to propagate, and with different filters engaged at varying sample rates, the delay naturally scales. Appreciate the insights!

Martin Braun
Wed, Apr 2, 2025 11:23 AM

The time stamp resolution (or time stamp timebase) is always the master
clock rate. On X310, unless you tell it something else, that's 200MHz.

Let's craft an example: Let's say you receive 2000 samples contiguously, at
100 samples per packet (that's 20 packets total). Let's say that the 0-th
packet has timestamp t_0, and the 10-th packet has timestamp t_1, so that
t_1 = t_0 + 1000/200MHz.
Also assume the DDC is decimating by 10. (Note the packet sizes are not
realistic, but let me do easy math here).

The output of the DDC will produce 2 packets. The first packet will be
timestamped t_0, and the second packet will be timestamped t_1. The first
output packet will approximately hold the content from the first 10 input
packets, and the second output packet will approximately hold the content
of the 11th-20th input packets.
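
(Making the numbers explicit: 2000 input samples / decimation 10 = 200
output samples = 2 packets of 100 samples, and t_1 - t_0 = 1000 / 200 MHz
= 5 us.)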

However, the timestamping etc. does not account for any group delays. Like
Marcus says, we have filters for anti-aliasing. We have half-band filters
with 47 taps (so, 23 samples group delay) and CIC filters with variable
decimation (I don't know the group delay off the top of my head). When you
decimate, the visible group delay gets reduced of course by the decimation.
But let's say you're decimating by 4, then that means you have 2 half-bands
and a decimation of four, that will give you approx. 23 samples delay.

(I always embarrass myself when doing math on the fly, so don't believe the
exact numbers without doing the math yourself).
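
Doing that math under one set of assumptions (two 47-tap half-bands
cascaded 200 -> 100 -> 50 Msps, each with (47 - 1)/2 = 23 samples of group
delay at its own input rate):

    stage 1: 23 samples @ 200 Msps = 0.115 us
    stage 2: 23 samples @ 100 Msps = 0.230 us
    total:   0.345 us  = about 17 samples at the 50 Msps output rate

The actual FPGA filter lengths and ordering may differ, so treat this as
the method rather than the answer.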

It would be possible to also adjust timestamps. In GNU Radio, we have the
"declare_sample_delay()" API which helps move stream tags to the right
position. However, there's no "correct" answer here as to whether we
should really modify timestamps, so we go with the simplest solution and
don't.
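
For illustration, this is roughly how a block advertises its delay (a
hypothetical FIR block, not shipping GNU Radio code; only
declare_sample_delay() itself is the real API):

    #include <gnuradio/sync_block.h>
    #include <gnuradio/io_signature.h>
    #include <gnuradio/gr_complex.h>
    #include <vector>

    // Hypothetical FIR wrapper. Declaring the sample delay tells the
    // runtime to shift stream tags by (ntaps - 1) / 2 samples so they
    // stay aligned with the filtered (delayed) samples.
    class my_fir_impl : public gr::sync_block
    {
    public:
        my_fir_impl(const std::vector<float>& taps)
            : gr::sync_block("my_fir",
                             gr::io_signature::make(1, 1, sizeof(gr_complex)),
                             gr::io_signature::make(1, 1, sizeof(gr_complex)))
        {
            declare_sample_delay((taps.size() - 1) / 2);
        }

        int work(int noutput_items,
                 gr_vector_const_void_star& input_items,
                 gr_vector_void_star& output_items) override
        {
            // ... actual filtering elided ...
            return noutput_items;
        }
    };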

Side note: If you run the capture at full rate (200Msps) then this will not
occur. However, there may still be some fixed, residual delay. This is
something that the application needs to handle appropriately.

--M

On Wed, Mar 26, 2025 at 6:57 PM Rob Kossler via USRP-users
<usrp-users@lists.ettus.com> wrote:

> On Wed, Mar 26, 2025 at 12:32 PM Marcus D. Leech <patchvonbraun@gmail.com>
> wrote:
>
>> On 26/03/2025 12:09, Rob Kossler wrote:
>>
>>> Hi Marcus,
>>> I think that the gain is set from the "radio" block which operates at the
>>> master clock rate rather than the downconverted rate.  It doesn't make
>>> sense to me why the latency of the gain setting would be related to the
>>> downconverted sample rate.
>>> Rob
>>
>> Let us ignore for a moment the gain-setting hardware on the radio.  Let's
>> pretend that some noticeable signal parameter, as seen at the antenna
>> plane, changes suddenly--like the signal level comes up by 5dB.  How long
>> before that effect is actually seen in the sample stream?  That will
>> depend on the (very small) group delay in the analog hardware, and the
>> delay in the DDC filters, which DOES scale with sample-rate, because
>> different filters are switched-in depending on the commanded sample rate,
>> and those filters have non-zero length...
>
> True. But if the comparison is between the gain setting time stamp and
> the Rx samples time stamp (inserted at the radio), it still seems that it
> would be sample rate independent (with the caveat that the time stamp
> resolution may have to change to the decimated sample rate with some type
> of quantization).
>
> _______________________________________________
> USRP-users mailing list -- usrp-users@lists.ettus.com
> To unsubscribe send an email to usrp-users-leave@lists.ettus.com
