AMD TSN Solution
Time-Sensitive Networking (TSN) is a set of standards under development by the Time-Sensitive Networking task group of the IEEE 802.1 working group.
This page provides information on the software TSN solution for the FPGA-based TSN subsystem (https://www.xilinx.com/products/intellectual-property/1gtsn.html).
Table of Contents
- 1 HW Features of Soft IP based 100M/1G TSN Subsystem
- 2 Software support
- 3 Kernel Configuration
- 4 Devicetree
- 5 TSN System
- 6 Traffic Classes
- 7 PCP and Traffic Class
- 8 Generating TSN Traffic:
- 9 Converting Legacy Applications to TSN
- 10 Test Procedure
- 10.1 PTP Profiles Supported
- 10.2 Running gPTP (802.1AS):
- 10.3 Running PTP 1588v2:
- 10.4 Running Qbv/Time Aware Shaper:
- 10.4.1 1) qbv_sched utility
- 10.4.1.1 Testing with Wireshark:
- 10.4.2 2) qdisc framework
- 10.5 Running IPIC:
- 10.6 Steps to demonstrate Preemption
- 10.7 Running Spanning Tree Protocol
- 10.8 Time Aware DMA (TADMA)
- 10.9 OOB Scripts
- 10.9.1 How to Run
- 10.10 Support for eight priority queues
- 10.11 QCI test procedure:
- 10.12 FRER Test procedure:
- 11 Performance
- 11.1 Test Configuration
- 12 Scheduled Submission of PTP Frames
- 13 Frequently Asked Questions (FAQs)
- 14 TSN Design with AXI Interrupt Controller
- 14.1 Usage
- 14.2 Device Tree Generation Issue
- 15 Operating TSN MACs at Different Speeds
- 16 Mainline status
- 17 Known issues, Limitations and troubleshooting
- 18 Changelog
HW Features of Soft IP based 100M/1G TSN Subsystem
Enhanced Time Synchronization using IEEE 802.1AS
Ethernet AVB (Audio Video Bridging, IEEE 802.1Qav)
Frame Replication and Elimination for Reliability IEEE 802.1CB
Enhancements for Scheduled Traffic IEEE 802.1Qbv
Per-Stream Filtering and Policing, IEEE 802.1 Qci
Enhancements and Performance Improvements, IEEE 802.1Qcc
Frame Preemption, IEEE 802.1Qbu
Interspersing Express Traffic, IEEE 802.3br
Software support
Soft IP TSN kernel drivers are currently supported in the Xilinx Linux staging area: linux-xlnx/drivers/staging/xilinx-tsn (Xilinx/linux-xlnx, master branch)
Soft IP TSN user space utilities and sample configurations are provided to enable TSN functionality. Please refer to the TSN SW user guide and the following sections for more details.
TSN applications, utilities, and examples are available here (they can be built via the AMD Yocto recipes):
https://github.com/Xilinx/tsn-utils
https://github.com/Xilinx/tsn-talker-listener
To include the above applications in the root filesystem, please follow the steps below
Yocto:
Enable the TSN packages by adding the following line to local.conf:
IMAGE_INSTALL:append = "packagegroup-tsn"
Note: Make sure you have included the meta-xilinx-tsn layer in bblayers.conf.
Petalinux:
For PetaLinux, add the following line to petalinuxbsp.conf:
IMAGE_INSTALL:append = "packagegroup-tsn"
To compile applications in the PetaLinux flow, please refer to the following links:
AMD Technical Information Portal
AMD Technical Information Portal
Kernel Configuration
The following config options should be enabled in order to build the TSN Subsystem:
CONFIG_XILINX_TSN
CONFIG_AXIENET_HAS_TADMA
CONFIG_XILINX_TSN_PTP
CONFIG_XILINX_TSN_QBV
CONFIG_XILINX_TSN_SWITCH
CONFIG_XILINX_TSN_QCI
CONFIG_XILINX_TSN_CB
CONFIG_XILINX_TSN_QBR
The following additional config options are required/selected by the TSN subsystem:
CONFIG_NET_SWITCHDEV
CONFIG_STP
CONFIG_NETFILTER
Devicetree
TSN subsystem DT documentation can be found here: Documentation/devicetree/bindings/staging/net/xilinx_tsn.txt
For TSN TEMAC, please refer to Documentation/devicetree/bindings/staging/net/xilinx-tsn-ethernet.txt
For TSN Switch, please refer to Documentation/devicetree/bindings/staging/net/xilinx_tsn_switch.txt
For TSN Endpoint, please refer to Documentation/devicetree/bindings/staging/net/xilinx_tsn_ep.txt
For TSN Extended Endpoint, please refer to Documentation/devicetree/bindings/staging/net/xilinx_tsn_ep_ex.txt
Please refer to PL Ethernet and DMA documentation for additional information: Documentation/devicetree/bindings/net/xlnx,axi-ethernet.yaml
For more details on phy bindings please refer "Documentation/devicetree/bindings/net/ethernet-phy.yaml"
Note:
The TSN devicetree from the DTG flow is automatically generated for an RGMII PHY at address 0. For custom boards or designs, please update your device tree node as per the devicetree documentation mentioned above.
Please note that the xlnx,packet-switch DT property is now used instead of packet-switch; the latter will be deprecated shortly. This DT property identifies whether the packet switch feature is enabled in the TSN IP subsystem.
NOTE: In AMD 2025.1 tools, the System Device Tree (SDT) flow is the default. All new TSN and other software features are tested and validated using the SDT flow instead of the legacy flow.
TSN System
Xilinx's TSN IP switch has three ports: Endpoint (Port 0), MAC1 (Port 1), and MAC2 (Port 2).
The Endpoint is connected to the MCDMA (Multichannel DMA); each MCDMA channel is dedicated to one type of traffic, i.e. Best Effort, Scheduled, or Reserved. Other use cases may use separate channels for management traffic.
MAC1 is connected to the external world through PHY1.
MAC2 is connected to the external world through PHY2.
Traffic Classes
TSN IP supports multiple queues and traffic class configurations as listed below:
a. 3 queue or 3 traffic class system:
1. Best Effort
2. Scheduled
3. Reserved
b. 2 queue or 2 traffic class system:
1. Best Effort
2. Scheduled
c. 8 queue with 3 traffic class system (starting from 2025.1):
1. Best Effort
2. Scheduled
3. Reserved
PCP and Traffic Class
The VLAN PCP of the Ethernet frame is used by the HW to identify the traffic class. By default, a PCP of 4 is mapped to ST and PCPs of 2 and 3 are mapped to RES (Reserved). Frames with any other PCP, or with no VLAN tag, are treated as BE.
2019.x and earlier releases:
This default mapping can be changed via a kernel command line option in uEnv.txt.
For example:
bootargs=console=ttyPS0,115200 xilinx_tsn_ep.st_pcp=5 xilinx_tsn_ep.res_pcp=2,3 root=/dev/mmcblk0p2 rw rootwait earlyprintk
2020.x and later releases:
The arguments in boot.scr determine the PCP mapping. The default values remain the same as mentioned above.
To change the PCP, edit the file <TOP_DIR>/sources/meta-xilinx-tsn/recipes-bsp/u-boot/uboot-zynq-scr/boot.cmd.sd.<boardname>.
For example, the following command line maps a pcp of 1 to ST traffic, a pcp of 4 to RES traffic, and the
rest of the pcps to BE traffic:
bootargs=console=ttyPS0,115200 xilinx_tsn_ep.st_pcp=1 xilinx_tsn_ep.res_pcp=4
root=/dev/mmcblk0p2 rw rootwait earlyprintk
The following command line maps pcps of 2 and 3 to ST traffic, a pcp of 1 to RES traffic, and the rest of the pcps to BE traffic.
bootargs=console=ttyPS0,115200 xilinx_tsn_ep.st_pcp=2,3 xilinx_tsn_ep.res_pcp=1
root=/dev/mmcblk0p2 rw rootwait earlyprintk
After changing the PCP values, source the bitbake environment and run bitbake build again:
#source setupsdk
#bitbake core-image-minimal
CAUTION: Do not edit the boot.scr file directly.
2025.1 and later releases:
In the legacy design where num_priorities <= 3, users can configure the PCP (Priority Code Point) values for Scheduled Traffic (ST) and Reserved (RES) traffic by passing boot arguments.
In the driver by default, ST traffic is mapped to PCP value 4, while RES traffic is mapped to PCP values 2 and 3.
If a user wants to change the default ST PCP value from 4 to any other value in the range 0 to 7, they should pass the desired value using the boot argument xilinx_emac_tsn.st_pcp=<0-7>.
Similarly, to override the default RES PCP values, the user can pass the required values using xilinx_emac_tsn.res_pcp=<0-7,...>.
NOTE:
From release 2025.1 onwards, the module name changed from xilinx_tsn_ep to xilinx_emac_tsn as part of the support for a modular driver architecture.
Please make sure to use the updated module name starting from the 2025.1 release.
In flexible queues mode (i.e., when num_priorities > 3), the default mapping of PCP to traffic classes cannot be changed. In this configuration, PCP values are assigned 1:1 to their corresponding priority queues, up to the maximum number of supported priorities; any remaining PCP values are directed to priority queue 0. For instance, if the system is configured to support four priorities, PCP 0 is assigned to priority queue 0, PCP 1 to queue 1, PCP 2 to queue 2, and PCP 3 to queue 3; all other PCP values are allocated to priority queue 0.
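As a minimal C sketch of the rule described above (the function name is hypothetical):

/* Flexible-queues mode: PCPs below the number of supported priorities
 * map 1:1 to priority queues; all remaining PCPs fall back to queue 0. */
unsigned int pcp_to_queue(unsigned int pcp, unsigned int num_priorities)
{
    return (pcp < num_priorities) ? pcp : 0;
}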
Generating TSN Traffic:
TSN traffic can be generated using raw sockets, where you create an Ethernet frame with the relevant PCP. One such implementation is tsn_talker, provided as part of the TSN Yocto SW release.
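The sketch below is a minimal, illustrative raw-socket talker (not the shipped tsn_talker); the interface name, MAC addresses, and EtherType are placeholder assumptions. It shows how the 802.1Q tag carrying the PCP is placed into the frame:

/* Minimal raw-socket TSN talker sketch (requires root/CAP_NET_RAW). */
#include <arpa/inet.h>
#include <linux/if_ether.h>
#include <linux/if_packet.h>
#include <net/if.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    const char *ifname = "eth1";                       /* TSN port (placeholder) */
    unsigned char dst[6] = { 0xa0, 0x36, 0x9f, 0x68, 0x4c, 0x96 };
    unsigned char src[6] = { 0x00, 0x0a, 0x35, 0x00, 0x01, 0x0e };
    unsigned int pcp = 4, vid = 10;                    /* PCP 4 -> ST by default */

    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (fd < 0) { perror("socket"); return 1; }

    unsigned char frame[ETH_FRAME_LEN] = { 0 };
    memcpy(frame, dst, 6);                             /* destination MAC */
    memcpy(frame + 6, src, 6);                         /* source MAC */
    frame[12] = 0x81; frame[13] = 0x00;                /* 802.1Q TPID 0x8100 */
    unsigned short tci = (pcp << 13) | (vid & 0x0fff); /* PCP in top 3 bits of TCI */
    frame[14] = tci >> 8; frame[15] = tci & 0xff;
    frame[16] = 0x88; frame[17] = 0xb5;                /* local experimental EtherType */
    /* payload would follow here */

    struct sockaddr_ll sll = { 0 };
    sll.sll_family = AF_PACKET;
    sll.sll_ifindex = if_nametoindex(ifname);
    sll.sll_halen = 6;
    memcpy(sll.sll_addr, dst, 6);

    if (sendto(fd, frame, 64, 0, (struct sockaddr *)&sll, sizeof(sll)) < 0)
        perror("sendto");
    close(fd);
    return 0;
}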
Converting Legacy Applications to TSN
Sometimes users need to port legacy applications to send/receive TSN traffic without modifying them. For example, an application that only uses the L3 (IP) layer to communicate has no capability to insert PCP/VLAN tags into the frame. To solve this, the Xilinx TSN solution provides an IP interception kernel module to seamlessly transition legacy applications to TSN technology. See the "Running IPIC" section for more details.
Test Procedure
PTP Profiles Supported
TSN IP has support for the following profiles:
a. 1588v1 and 1588v2
b. Power Profile
c. 802.1AS
d. 802.1ASREV
TSN driver and SW daemon (ptp4l and openAvnu/gptp) support is available for:
a. 1588v2
b. 802.1AS
c. 802.1ASREV (not all features may be available. See ptp4l/openAvnu documentation).
Running gPTP (802.1AS):
The gPTP daemon can be run in two ways: from OpenAvnu, or from linuxptp (ptp4l). The latter is preferred, as it prints RMS values at the slave to indicate synchronization with the master.
Running gPTP daemon from OpenAvnu:
From the Intel card PC machine launch gPTP daemon as follows:
#Open-AVB/daemons/gptp/linux/build/obj/daemon_cl enp4s0 -S
From the Xilinx board launch PTP daemon as follows:
#daemon_cl eth1 -S
[1] 186
ERROR at 636 in ../src/linux_hal_common.cpp: Group ptp not found, will try root (0) instead
Using clock device: /dev/ptp0
Starting PDelay
root@Xilinx-ZCU102-2016_1:~# AsCapable: Enabled
*** Announce Timeout Expired - Becoming Master
New Grandmaster "00:0A:35:FF:FE:00:01:0E" (previous "00:00:00:00:00:00:00:00")
<<END>>
Running gPTP daemon from linuxptp (ptp4l):
From the Xilinx board launch PTP daemon as follows:
#ptp4l -P -2 -H -i eth1 -p /dev/ptp0 -s -m -f /usr/sbin/ptp4l_slave.conf
Download the PTP daemon from https://sourceforge.net/p/linuxptp/ and compile it to get the ptp4l binary on the Intel card PC. Use the gPTP.cfg or default.cfg file present in the linuxptp source code.
From the Intel card PC launch PTP daemon as follows:
(Use /usr/bin/ptp4l_master.conf from the board on the PC)
#ptp4l -P -2 -H -i enp4s0 -p /dev/ptp0 -m -f ptp4l_master.conf
Upon successful synchronization, the RMS value prints at the slave will look as follows:
root@zcu102-zynqmp:~# ptp4l -P -2 -H -i eth1 -p /dev/ptp0 -s -m -f /usr/sbin/ptp4l_slave.conf
ptp4l[7940.770]: selected /dev/ptp0 as PTP clock
ptp4l[7940.800]: driver changed our HWTSTAMP options
ptp4l[7940.800]: tx_type 1 not 1
ptp4l[7940.800]: rx_filter 1 not 12
ptp4l[7940.800]: port 1: INITIALIZING to LISTENING on INITIALIZE
ptp4l[7940.800]: port 0: INITIALIZING to LISTENING on INITIALIZE
ptp4l[7948.772]: port 1: LISTENING to MASTER on ANNOUNCE_RECEIPT_TIMEOUT_EXPIRES
ptp4l[7948.772]: selected best master clock 000a35.fffe.00010e
ptp4l[7948.772]: assuming the grand master role
ptp4l[7949.452]: port 1: new foreign master a0369f.fffe.684c96-1
ptp4l[7953.452]: selected best master clock a0369f.fffe.684c96
ptp4l[7953.452]: port 1: MASTER to UNCALIBRATED on RS_SLAVE
ptp4l[7953.951]: port 1: UNCALIBRATED to SLAVE on MASTER_CLOCK_SELECTED
ptp4l[7954.701]: rms 1732 max 2297 freq -100287 +/- 1208 delay 509 +/- 0
ptp4l[7955.701]: rms 326 max 499 freq -101341 +/- 438 delay 509 +/- 0
ptp4l[7956.702]: rms 545 max 579 freq -102323 +/- 151 delay 509 +/- 0
ptp4l[7957.702]: rms 343 max 463 freq -102512 +/- 9 delay 509 +/- 0
ptp4l[7958.702]: rms 118 max 193 freq -102419 +/- 43 delay 509 +/- 0
Note:
Currently, one-step PTP mode is not supported in software.
The roles of master and slave can be changed by changing the priority values. A lower priority value makes a node the master; a higher value makes it a slave.
By default, the MAC ports' link speed is 1 Gbps. If a 100 Mbps setting is required, use the following command to set it:
# mii-tool -F 100baseTx-FD eth1
Change the neighborPropDelayThresh parameter in the ptp4l config files (/usr/sbin/ptp4l_slave.conf and /usr/sbin/ptp4l_master.conf) as below:
neighborPropDelayThresh 2000 - for 100Mbps link speed
or
neighborPropDelayThresh 800 - for 1Gbps link speed
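For reference, the relevant options in a linuxptp-style gPTP configuration typically look like the excerpt below (illustrative only; the exact contents of the shipped ptp4l_slave.conf/ptp4l_master.conf may differ):

[global]
# gPTP runs over L2 transport with the peer-delay mechanism
network_transport       L2
delay_mechanism         P2P
transportSpecific       0x1
# Peer propagation delay threshold in ns (see speed-dependent values above)
neighborPropDelayThresh 800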
Running PTP 1588v2:
PTPv2 uses the Best Master Clock algorithm to determine which clock within the network is of the highest quality (the grandmaster), builds the master/slave hierarchy, and synchronizes all other nodes to the grandmaster. To make a node the master, the priority field in its configuration file should have the lowest value.
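With linuxptp, for example, the election can be steered through the priority1 option (illustrative excerpt; linuxptp's default is 128, and the lowest value wins):

[global]
# A node with the lowest priority1 value wins the Best Master Clock election
priority1 127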
PTPv2 can be run over L2 or UDP. When run over UDP, it can be run in multicast mode on ep + switch systems and in both multicast and unicast modes on ep only systems.
To run on the Intel card PC, download the PTP daemon from https://sourceforge.net/p/linuxptp/ and compile it to get the ptp4l binary. Copy the master/slave configuration files to the linuxptp folder and launch the ptp4l daemon from that folder as shown below.
To run PTPv2 over L2:
Peer to Peer(P2P) mechanism:
To run as master on zcu102 or zc702:
ptp4l -P -2 -H -i eth1 -p /dev/ptp0 -m -f /usr/sbin/ptp4l_master_v2_l2.conf
To run as slave on zcu102 or zc702:
ptp4l -P -2 -H -i eth1 -p /dev/ptp0 -m -f /usr/sbin/ptp4l_slave_v2_l2.conf
To run as master on Intel Card PC:
ptp4l -P -2 -H -i eth1 -p /dev/ptp0 -m -f ptp4l_master_v2_l2.conf
To run as slave on Intel Card PC:
ptp4l -P -2 -H -i eth1 -p /dev/ptp0 -m -f ptp4l_slave_v2_l2.conf
End to End(E2E) mechanism:
To run as master on zcu102 or zc702:
ptp4l -E -2 -H -i eth1 -p /dev/ptp0 -m -f /usr/sbin/ptp4l_master_v2_l2.conf
To run as slave on zcu102 or zc702:
ptp4l -E -2 -H -i eth1 -p /dev/ptp0 -m -f /usr/sbin/ptp4l_slave_v2_l2.conf
To run as master on Intel Card PC:
ptp4l -E -2 -H -i eth1 -p /dev/ptp0 -m -f ptp4l_master_v2_l2.conf
To run as slave on Intel Card PC:
ptp4l -E -2 -H -i eth1 -p /dev/ptp0 -m -f ptp4l_slave_v2_l2.conf
PTPv2 over UDP and in multicast mode:
Peer to Peer(P2P) mechanism:
To run as master on zcu102 or zc702:
ptp4l -P -4 -H -i eth1 -p /dev/ptp0 -m -f /usr/sbin/ptp4l_master_v2_udp_multicast.conf
To run as slave on zcu102 or zc702:
ptp4l -P -4 -H -i eth1 -p /dev/ptp0 -m -f /usr/sbin/ptp4l_slave_v2_udp_multicast.conf
To run as master on Intel Card PC:
ptp4l -P -4 -H -i eth1 -p /dev/ptp0 -m -f ptp4l_master_v2_udp_multicast.conf
To run as slave on Intel Card PC:
ptp4l -P -4 -H -i eth1 -p /dev/ptp0 -m -f ptp4l_slave_v2_udp_multicast.conf
End to End(E2E) mechanism:
To run as master on zcu102 or zc702:
ptp4l -E -4 -H -i eth1 -p /dev/ptp0 -m -f /usr/sbin/ptp4l_master_v2_udp_multicast.conf
To run as slave on zcu102 or zc702:
ptp4l -E -4 -H -i eth1 -p /dev/ptp0 -m -f /usr/sbin/ptp4l_slave_v2_udp_multicast.conf
To run as master on Intel Card PC:
ptp4l -E -4 -H -i eth1 -p /dev/ptp0 -m -f ptp4l_master_v2_udp_multicast.conf
To run as slave on Intel Card PC:
ptp4l -E -4 -H -i eth1 -p /dev/ptp0 -m -f ptp4l_slave_v2_udp_multicast.conf
PTPv2 over UDP and in unicast mode:
Peer to Peer(P2P) mechanism:
To run as master on zcu102 or zc702:
ptp4l -P -4 -H -i eth1 -p /dev/ptp0 -m -f /usr/sbin/ptp4l_master_v2_udp_unicast_p2p.conf
To run as slave on zcu102 or zc702:
'peer_address' field of /usr/sbin/ptp4l_slave_v2_udp_unicast_p2p.conf should be set to the
IP address of master
ptp4l -P -4 -H -i eth1 -p /dev/ptp0 -m -f /usr/sbin/ptp4l_slave_v2_udp_unicast_p2p.conf
To run as master on Intel Card PC:
ptp4l -P -4 -H -i eth1 -p /dev/ptp0 -m -f ptp4l_master_v2_udp_unicast_p2p.conf
To run as slave on Intel Card PC:
'peer_address' field of ptp4l_slave_v2_udp_unicast_p2p.conf should be set to the
IP address of master
ptp4l -P -4 -H -i eth1 -p /dev/ptp0 -m -f ptp4l_slave_v2_udp_unicast_p2p.conf
End to End(E2E) mechanism:
To run as master on zcu102 or zc702:
ptp4l -E -4 -H -i eth1 -p /dev/ptp0 -m -f /usr/sbin/ptp4l_master_v2_udp_unicast_e2e.conf
To run as slave on zcu102 or zc702:
'UDPv4' field of /usr/sbin/ptp4l_slave_v2_udp_unicast_e2e.conf should be set to the
IP address of master
ptp4l -E -4 -H -i eth1 -p /dev/ptp0 -m -f /usr/sbin/ptp4l_slave_v2_udp_unicast_e2e.conf
To run as master on Intel Card PC:
ptp4l -E -4 -H -i eth1 -p /dev/ptp0 -m -f ptp4l_master_v2_udp_unicast_e2e.conf
To run as slave on Intel Card PC:
'UDPv4' field of ptp4l_slave_v2_udp_unicast_e2e.conf should be set to the
IP address of master
ptp4l -E -4 -H -i eth1 -p /dev/ptp0 -m -f ptp4l_slave_v2_udp_unicast_e2e.conf
On successful synchronization, logs at the slave would be as follows:
ptp4l[765.873]: selected /dev/ptp0 as PTP clock
ptp4l[765.960]: driver rejected most general HWTSTAMP filter
ptp4l[765.960]: port 1: INITIALIZING to LISTENING on INIT_COMPLETE
ptp4l[765.960]: port 0: INITIALIZING to LISTENING on INIT_COMPLETE
ptp4l[772.710]: port 1: LISTENING to MASTER on ANNOUNCE_RECEIPT_TIMEOUT_EXPIRES
ptp4l[772.710]: selected local clock 000a35.fffe.00012e as best master
ptp4l[772.710]: assuming the grand master role
ptp4l[775.065]: port 1: new foreign master 000a35.fffe.00013e-1
ptp4l[779.065]: selected best master clock 000a35.fffe.00013e
ptp4l[779.065]: port 1: MASTER to UNCALIBRATED on RS_SLAVE
ptp4l[780.064]: master offset 3409342692 s0 freq +0 path delay 396
ptp4l[781.065]: master offset 3409344460 s1 freq +1768 path delay 396
ptp4l[782.065]: master offset -373 s2 freq +1395 path delay 396
ptp4l[782.065]: port 1: UNCALIBRATED to SLAVE on MASTER_CLOCK_SELECTED
ptp4l[783.065]: master offset -333 s2 freq +1323 path delay 396
ptp4l[784.065]: master offset -404 s2 freq +1152 path delay 396
ptp4l[785.065]: master offset -381 s2 freq +1054 path delay 396
ptp4l[786.065]: master offset -303 s2 freq +1017 path delay 396
ptp4l[787.065]: master offset -297 s2 freq +933 path delay 396
ptp4l[788.065]: master offset -19 s2 freq +1121 path delay 396
ptp4l[789.065]: master offset 316 s2 freq +1451 path delay 396
ptp4l[790.065]: master offset 373 s2 freq +1603 path delay 396
The s0, s1, s2 strings indicate the different clock servo states: s0 is unlocked, s1 is clock step, and s2 is locked. Once the servo is in the locked state, the clock will not be stepped (only slowly adjusted). INITIALIZING, LISTENING, UNCALIBRATED, and SLAVE are some of the possible port states, which change on the INITIALIZE, RS_SLAVE, and MASTER_CLOCK_SELECTED events. The master offset value is the measured offset from the master in nanoseconds. Here it decreased from 3409342692 to -373, indicating successful synchronization with the master, and the port state changed from UNCALIBRATED to SLAVE.
Running Qbv/Time Aware Shaper:
Qbv functionality can be programmed in two ways:
1) qbv_sched utility
2) qdisc framework
1) qbv_sched utility
For example:
qbv_sched -c ep /tmp/abc.cfg
This schedules Qbv on ep using the TSN configuration in /tmp/abc.cfg
qbv_sched ep
This schedules Qbv on ep using the default TSN configuration in /etc/qbv.cfg
qbv_sched -g ep
This returns the schedule currently running on ep
qbv_sched -s ep -f
This forces a Qbv schedule on ep even if a schedule is pending to be run on ep
qbv_sched -c ep /tmp/abc.cfg -f
This forcefully runs Qbv using the TSN configuration in /tmp/abc.cfg even if a schedule is pending to be run on ep
Testing with Wireshark:
Configuring Qbv using qbv_sched:
The default TSN configuration is present in the /etc/qbv.cfg file. This file holds the Qbv gate schedule. To run Qbv, configure this file as per the instructions given in it. To open all gates, set cycle_time to 0.
For Example:
qbv =
{
temac1 =
{
start_sec = 0;
start_ns = 0;
cycle_time = 10000000; //cycle time is 10ms
gate_list_length = 2;
gate_list =
(
{
state = 4; // gate-state bitmask (assuming the 3-queue mapping: bit0=BE, bit1=RES, bit2=ST); 4 opens only the ST gate
time = 100000;
},
{
state = 1; // 1 opens only the BE gate
time = 100000;
}
);
};
};
As the 'temac1' section of the file is configured, the Qbv scheduler is run on 'eth1' using the qbv_sched utility as follows:
# qbv_sched eth1
The qbv_sched utility can be used to schedule all interfaces: ep, eth1 (temac1), and eth2 (temac2).
The above Qbv schedule opens the ST gate for 100us and then closes it. The cycle time is 10ms, so after the first 100us the BE gate is kept open for the rest of the cycle time, even though its configured gate time is 100us, because the sum of the gate times is less than the cycle time.
To test TSN functionality, run PTP in the background and make sure it is working without any errors. Then run the tsn_talker program from the Xilinx HW. Before launching the tsn_talker application, configure the switch CAM entry to allow the corresponding traffic.
#switch_cam -a a0:36:9f:68:4c:96 10 swp1
This adds a switch CAM entry with destination MAC a0:36:9f:68:4c:96, VLAN ID 10, and port swp1 (Temac1).
#tsn_talker eth1 a0:36:9f:68:4c:96 00:0a:35:00:01:0e 10 4 1500 0 12 1000000000 1
This application sends 12 ST packets with VLAN ID 10 and a packet size of 1500 bytes every second.
From the Intel card PC, run Wireshark and observe the incoming packets.
Wireshark Before Qbv Programming:
As we can see in the above picture, packets 106 through 117 arrive in sequence, and the next packet starts at the next second.
Wireshark After Qbv Programming:
You will observe that every second 12 ST packets are sent, of which 8 packets are sent in the 100us window of the 10ms cycle and the remaining 4 packets are sent in the next cycle (a 1500-byte frame occupies roughly 12.3us on the wire at 1 Gbps, so only 8 frames fit in the 100us ST window).
As we can see in the above picture, packets 4 through 11 (8 packets) are received sequentially, and the next packet (packet no. 12) arrives after a 10ms delay.
2) qdisc framework
Configuring Qbv using the qdisc framework (taprio):
The qdisc framework allows us to configure all QBV (Time-Aware Shaper) settings.
Kernel configuration
CONFIG_NET_SCH_NETEM=y
CONFIG_NET_SCH_TAPRIO=y
For example:
./tc qdisc add dev ep parent root handle 100 taprio flags 2 num_tc 3 map 2 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 queues 1@0 1@1 1@2 sched-entry S 01 300000 sched-entry S 02 300000 sched-entry S 04 400000 base-time 1732523070165770012 cycle-time 1000000
This schedules a cycle time of 1ms, divided into three parts:
Best Effort traffic - 300us
Reserved traffic - 300us
Scheduled traffic - 400us
Explanation:
tc → traffic control utility
qdisc → qdisc (queueing discipline) framework
add → add a qdisc
dev ep → the ep interface (e.g. eth0, eth1)
root → qdisc ID
taprio → the qdisc providing the time-aware shaper feature
flags → 2 (requests full hardware offload of the taprio schedule)
num_tc → number of traffic classes
map → the priority to traffic class map; maps priorities 0..15 to a traffic class. In the example above, "map 2 1 0 0 ..." maps priority 0 to TC 2, priority 1 to TC 1, and all remaining priorities to TC 0.
queues → count and offset of the queue range for each traffic class, in the format count@offset. Queue ranges for the traffic classes cannot overlap and must be contiguous ranges of queues.
sched-entry → one gate-list entry of the form "S <gate-mask> <interval>":
S → command (set gate states)
gate-mask → bitmask of traffic-class gates to open (01 = TC 0, 02 = TC 1, 04 = TC 2)
interval → time in ns for which this entry is active
So "sched-entry S 01 300000" opens the TC 0 gate for 300us, "sched-entry S 02 300000" opens the TC 1 gate for 300us, and "sched-entry S 04 400000" opens the TC 2 gate for 400us.
base-time → PTP time at which the schedule starts
cycle-time → total cycle time
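After installing the schedule, it can be verified with the standard tc show command (output format depends on the tc version):
./tc qdisc show dev ep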
TC Help:
root@xilinx-zcu102-20242:~# ./tc help
Usage: tc [ OPTIONS ] OBJECT { COMMAND | help }
tc [-force] -batch filename
where OBJECT := { qdisc | class | filter | chain | action | monitor | exec }
OPTIONS := { -V[ersion] | -s[tatistics] | -d[etails] | -r[aw] |
-o[neline] | -j[son] | -p[retty] | -c[olor]
-b[atch] [filename] | -n[etns] name | -N[umeric] |
-nm | -nam[es] | { -cf | -conf } path
-br[ief] }
root@xilinx-zcu102-20242:~# ./tc qdisc help
Usage: tc qdisc [ add | del | replace | change | show ] dev STRING
[ handle QHANDLE ] [ root | ingress | clsact | parent CLASSID ]
[ estimator INTERVAL TIME_CONSTANT ]
[ stab [ help | STAB_OPTIONS] ]
[ ingress_block BLOCK_INDEX ] [ egress_block BLOCK_INDEX ]
[ [ QDISC_KIND ] [ help | OPTIONS ] ]
tc qdisc { show | list } [ dev STRING ] [ QDISC_ID ] [ invisible ]
Where:
QDISC_KIND := { [p|b]fifo | tbf | prio | red | etc. }
OPTIONS := ... try tc qdisc add <desired QDISC_KIND> help
STAB_OPTIONS := ... try tc qdisc add stab help
QDISC_ID := { root | ingress | handle QHANDLE | parent CLASSID }
root@xilinx-zcu102-20231:~# ./tc qdisc add taprio help
Usage: ... taprio clockid CLOCKID
[num_tc NUMBER] [map P0 P1 ...]
[queues COUNT@OFFSET COUNT@OFFSET COUNT@OFFSET ...]
[ [sched-entry index cmd gate-mask interval] ... ]
[base-time time] [txtime-delay delay]
[fp FP0 FP1 FP2 ...]
CLOCKID must be a valid SYS-V id (i.e. CLOCK_TAI)
root@xilinx-zcu102-20242:~#
root@xilinx-zcu102-20242:~# ./tc qdisc add dev ep help
Usage: tc qdisc [ add | del | replace | change | show ] dev STRING
[ handle QHANDLE ] [ root | ingress | clsact | parent CLASSID ]
[ estimator INTERVAL TIME_CONSTANT ]
[ stab [ help | STAB_OPTIONS] ]
[ ingress_block BLOCK_INDEX ] [ egress_block BLOCK_INDEX ]
[ [ QDISC_KIND ] [ help | OPTIONS ] ]
tc qdisc { show | list } [ dev STRING ] [ QDISC_ID ] [ invisible ]
Where:
QDISC_KIND := { [p|b]fifo | tbf | prio | red | etc. }
OPTIONS := ... try tc qdisc add <desired QDISC_KIND> help
STAB_OPTIONS := ... try tc qdisc add stab help
QDISC_ID := { root | ingress | handle QHANDLE | parent CLASSID }
RTNETLINK answers: Invalid argument
root@xilinx-zcu102-20231:~#
For more information:
https://man7.org/linux/man-pages/man8/tc-mqprio.8.html
https://man7.org/linux/man-pages/man8/tc-taprio.8.html
NOTE: Please refer to Answer Record 37315 for a known issue on 2024.2 qdisc implementation (also see known issues section below)
Running IPIC:
IP interception translates an outgoing packet's source and destination MAC addresses, VLAN ID, and PCP values to the configured values if the packet's IPv4 tuple (source IP, destination IP, DSCP, protocol, source port number, and destination port number) matches. The IPIC module maintains hash entries of IPv4 tuples and, if an outgoing packet's tuple data matches a hash entry, translates the IP stream.
To configure which fields of the IPv4 tuple are matched, choose '1' (set) or '0' (unset) for each field at driver load time, where the tuple order is 'IPv4_tuple=src_ip,dest_ip,dscp,protocol,src_port,dest_port'.
For example, to filter packets that have a specific source IP and destination IP, load the IPIC module as follows:
insmod /lib/modules/4.14.0-xilinx-v2018.1/extra/xilinx_tsn_ip_intercept.ko IPv4_tuple=1,1,0,0,0,0
In this case, DSCP, protocol, source port number and destination port number are not considered.
The user application ipic_prog programs the IPIC module to add hash entries corresponding to the set IPv4 tuple data, and to translate the IP stream with the provided source and destination MAC addresses, VLAN ID, and PCP values when the set IPv4 tuple data matches.
Usage of ipic_prog is as follows:
ipic_prog <add | del | flush> <src_ip> <dest_ip> <protocol> <dscp> <src_port> <dest_port> <src_mac> <dest_mac> <vlanid> <pcp>
The following are examples of adding a single entry, and of deleting a single entry and all entries.
Addition of Entry:
ipic_prog add 192.168.10.5 192.168.10.9 17 0 8000 1000 00-0a-35-00-01-0e a0-36-9f-68-4c-96 10 4
Deletion of Entry:
ipic_prog del 192.168.10.5 192.168.10.9 17 0 8000 1000 00-0a-35-00-01-0e a0-36-9f-68-4c-96 10 4
Deletion/Flushing of All Entries:
ipic_prog flush
If you want to add multiple entries with different IPv4 tuple combinations and different translation fields, run the ipic_prog command for each entry.
For example, the following commands add two entries with the same source IP and different destination IPs, and translate the streams with different source and destination MAC addresses and VLAN IDs:
ipic_prog add 192.168.10.5 192.168.10.9 17 0 8000 1000 00-0a-35-00-01-0e a0-36-9f-68-4c-96 10 4
In this case, packets with destination IP 192.168.10.9 are translated with a VLAN ID of 10, a source MAC of 00:0a:35:00:01:0e, and a destination MAC of a0:36:9f:68:4c:96.
ipic_prog add 192.168.10.5 192.168.10.3 17 0 8000 1000 00-0a-35-07-89-ff a0-36-9f-87-44-00 99 4
In this case, packets with destination IP 192.168.10.3 are translated with a VLAN ID of 99, a source MAC of 00:0a:35:07:89:ff, and a destination MAC of a0:36:9f:87:44:00.
The translated IP stream is sent out of the network ports only if there is a switch CAM entry corresponding to the destination MAC address and VLAN ID. Hence, make sure to add CAM entries using the switch_cam command.
For example, for the above two added entries, switch_cam is run as follows:
switch_cam -a a0:36:9f:68:4c:96 10 swp1