usrp-users@lists.ettus.com

Discussion and technical support related to USRP, UHD, RFNoC

100Gb NIC for X410

Rob Kossler
Tue, Mar 25, 2025 7:52 PM

Hi,
I am in the process of purchasing a 100Gb NIC for use with the X410 and
have seen documentation and previous posts indicating that the ConnectX
NICs are preferred. But I did note in the DPDK knowledge base article that
the Intel E810 could also work.  I prefer the E810 because it seems to be
less expensive and can be configured for 4x10Gb, but I don't want to create
a headache for myself.  Let me know if you have had success or issues with
the E810 using a 100Gb link (or two 100Gb links) to the X410.

I am also confused about the E810 which comes in a couple of 100Gb models:
CQDA2 and 2CQDA2, where they both have two 100Gb QSFP28 ports, but the
former can only handle aggregate 100Gb whereas the latter can handle
aggregate 200Gb.  My confusion is "why does it matter for the X410?".  With
4 channels at 500 MS/s, the aggregate bit rate is only 64Gb/s so why does
it matter if the E810 CQDA2 only supports aggregate 100Gb?  It seems to me
that either model supports the maximum rate of the X410.

Thanks.
Rob

Michael Dickens
Wed, Mar 26, 2025 7:53 PM

Hey Rob! Great questions. Here's way too much information taken from
internal notes I have on the subject, to help you process all of this :)
{{{
E810 CQDA2 provides 100 Gb aggregate across both ports. Dual-port operation
to a USRP is not recommended since UHD doesn't "know" this limitation.
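For context, UHD just opens a link per address you hand it; a dual-link
setup is specified roughly like this (a sketch only; the IP addresses are
placeholders for whatever you assign to the two QSFP28 interfaces):
{{{
# UHD will happily stream over both links even though the CQDA2 caps the
# aggregate of the two ports at 100 Gb.
uhd_usrp_probe --args "addr=192.168.10.2,second_addr=192.168.20.2"
}}}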

E810 2CQDA2 provides 2 bifurcated 100 Gb links, so it can do 200 Gb
aggregate. I -think- one has to tell the BIOS / OS about this bifurcation to
get the NIC fully working. I don't have one to test out.
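A quick sanity check from the OS side (a sketch; 8086 is just the Intel PCI
vendor ID filter) is whether both halves enumerate as separate PCIe devices:
{{{
# Each bifurcated 100 Gb port of a 2CQDA2 should show up as its own
# Ethernet controller entry here.
lspci -d 8086: | grep -i e810
}}}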

There are now newer Intel E82* NICs. I don't know their capabilities.

Any of the Intel E8* NICs can be configured in various ways; the most
relevant for USRPs are:

  • 2x1x100 : 2 ports, each hosting 1 virtual link at 100 Gb
  • 100 : 1 port with a single virtual link at 100 Gb
  • 8x10 (formerly 2x4x10) : 2 ports, each hosting 4 virtual links at 10 Gb each
    {{{
    $ sudo ./epct64e -get -nic 1
    Ethernet Port Configuration Tool
    EPCT version: v1.42.24.04
    Copyright 2019 - 2024 Intel Corporation.

    Available Port Options:
    ==========================================================================
    Port                                     Quad 0            Quad 1
    Option  Option (Gbps)                    L0  L1  L2  L3    L4  L5  L6  L7
    ======= =============================    ================  ================
            2x1x100                      ->  100 -   -   -     100 -   -   -
            2x50                         ->  50  -   50  -     -   -   -   -
            4x25                         ->  25  25  25  25    -   -   -   -
            2x2x25                       ->  25  25  -   -     25  25  -   -
    Active  8x10                         ->  10  10  10  10    10  10  10  10
            100                          ->  100 -   -   -     -   -   -   -
    }}}

FWIW: We've had a number of customers with E810 CQDA2 issues recently. My
current belief is that the NIC (NVM) and OS drivers do not play nicely
together & hence updating both to the latest is needed to get everything
working properly.

Intel E8* NICs use the ICE driver, which is in active development & works
pretty well overall. ICE drivers -do not- work seamlessly with DPDK, unlike
the Mellanox ones. It's easy to create a script to do the driver binding &
link setup both down and up, but this can be very confusing for people not
used to taking a link down and rebinding the driver & then doing the
reverse to get it back working in the system again.
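The down/up script I'm referring to is along these lines (a rough sketch
only; the interface name, PCI address, and the dpdk-devbind.py location are
placeholders for your system):
{{{
#!/bin/bash
# Hand the E810 port from the kernel ICE driver over to DPDK (vfio-pci) ...
IFACE=enp1s0f0          # placeholder: kernel interface name
PCI=0000:01:00.0        # placeholder: PCI address (see dpdk-devbind.py --status)

sudo ip link set "$IFACE" down
sudo modprobe vfio-pci
sudo dpdk-devbind.py --bind=vfio-pci "$PCI"

# ... (run your UHD/DPDK application) ...

# ... and the reverse, to give the port back to the kernel afterwards.
sudo dpdk-devbind.py --bind=ice "$PCI"
sudo ip link set "$IFACE" up
}}}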

The Mellanox drivers & hardware use a little less CPU time than the Intel
ones, so you get a little better single-core performance — which helps when
using DPDK and pushing maximum data throughput.

Yes, 500 MS/s on 4 channels (2 GS/s aggregate) is 64 Gb/s and thus well
within the capabilities of a single 100 Gb port on either NIC ... That's
fine for an X410. For an X440 we double that to 4 GS/s aggregate, which
clearly requires 2x 100 Gb links. For that use case the Mellanox NICs are
the way to go.
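For the arithmetic behind that (assuming sc16 over-the-wire samples at 4
bytes each and ignoring CHDR/UDP overhead):
{{{
# X410: 4 ch x 500 MS/s x 4 B/sample x 8 b/B  =  64 Gb/s -> fits one 100 Gb link
# X440: 4 GS/s aggregate x 4 B/sample x 8 b/B = 128 Gb/s -> needs two 100 Gb links
echo "$(( 4 * 500 * 4 * 8 )) Mb/s"    # prints "64000 Mb/s"
}}}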
}}}

Rob Kossler
Thu, Mar 27, 2025 2:20 AM

Thanks Michael,
Thanks for all of the information. Regarding your final paragraph, you
mentioned that the 64 Gb/s could be handled on one 100 Gb link. However,
that seems at odds with the following statement in the X410 section of the
UHD manual about FPGA types:
https://files.ettus.com/manual/page_usrp_x4xx.html#x4xx_updating_fpga_types

  • CG_400: 400 MHz analog bandwidth streaming per channel between the
    X4x0 and an external host computer. The current implementation requires
    dual 100 GbE connections for 4 full-duplex channels or a single 100 GbE
    connection for 2 full-duplex channels.

Do you think that this statement in the UHD manual is a mistake? It is the
statement that made me think I needed two 100Gb links, even though 4
channels at 500 MS/s is only 64 Gb/s aggregate. If only one link is truly
needed, then I can feel more confident purchasing an E810.
Rob
