.. SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)

====================================
Marvell OcteonTx2 RVU Kernel Drivers
====================================

Copyright (c) 2020 Marvell International Ltd.

Contents
========

- `Overview`_
- `Drivers`_
- `Basic packet flow`_
- `Devlink health reporters`_
- `Quality of service`_

Overview
========

Resource virtualization unit (RVU) on Marvell's OcteonTX2 SOC maps HW
resources from the network, crypto and other functional blocks into
PCI-compatible physical and virtual functions. Each functional block
again has multiple local functions (LFs) for provisioning to PCI devices.
RVU supports multiple PCIe SRIOV physical functions (PFs) and virtual
functions (VFs). PF0 is called the administrative / admin function (AF)
and has privileges to provision RVU functional block's LFs to each of the
PF/VFs.

RVU managed networking functional blocks
 - Network pool or buffer allocator (NPA)
 - Network interface controller (NIX)
 - Network parser CAM (NPC)
 - Schedule/Synchronize/Order unit (SSO)
 - Loopback interface (LBK)

RVU managed non-networking functional blocks
 - Crypto accelerator (CPT)
 - Scheduled timers unit (TIM)
 - Schedule/Synchronize/Order unit (SSO)
   Used for both networking and non-networking use cases

Resource provisioning examples
 - A PF/VF with NIX-LF & NPA-LF resources works as a pure network device.
 - A PF/VF with CPT-LF resource works as a pure crypto offload device.

RVU functional blocks are highly configurable as per software requirements.

Firmware sets up the following before the kernel boots
 - Enables the required number of RVU PFs based on the number of physical links.
 - The number of VFs per PF is either static or configurable at compile time.
   Based on config, firmware assigns VFs to each of the PFs.
 - Also assigns MSI-X vectors to each of the PFs and VFs.
 - These are not changed after kernel boot.
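
As a hedged illustration (vendor ID 0x177d belongs to Marvell/Cavium; the exact
device IDs vary per SOC and are intentionally not listed here), the RVU PFs and
VFs enumerated by firmware appear as regular PCI devices and can be listed with::

        # lspci -nn -d 177d: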

Drivers
=======

The Linux kernel will have multiple drivers registering to different PFs and VFs
of RVU. With respect to networking there will be three flavours of drivers.

Admin Function driver
---------------------

As mentioned above RVU PF0 is called the admin function (AF). This driver
supports resource provisioning and configuration of functional blocks and
doesn't handle any I/O. It sets up a few basic things, but most of the
functionality is achieved via configuration requests from PFs and VFs.

PF/VFs communicate with AF via a shared memory region (mailbox). Upon
receiving requests, AF does resource provisioning and other HW configuration.
AF is always attached to the host kernel, but PFs and their VFs may be used by the
host kernel itself, or attached to VMs or to userspace applications like
DPDK etc. So AF has to handle provisioning/configuration requests sent
by any device from any domain.

AF driver also interacts with underlying firmware to
 - Manage physical ethernet links ie CGX LMACs.
 - Retrieve information like speed, duplex, autoneg etc.
 - Retrieve PHY EEPROM and stats.
 - Configure FEC, PAM modes.

From pure networking side AF driver supports following functionality.
 - Map a physical link to a RVU PF to which a netdev is registered.
 - Attach NIX and NPA block LFs to RVU PF/VF which provide buffer pools, RQs, SQs
   for regular networking functionality.
 - Flow control (pause frames) enable/disable/config.
 - HW PTP timestamping related config.
 - NPC parser profile config, basically how to parse pkt and what info to extract.
 - NPC extract profile config, what to extract from the pkt to match data in MCAM entries.
 - Manage NPC MCAM entries, upon request can frame and install requested packet forwarding rules.
 - Defines receive side scaling (RSS) algorithms.
 - Defines segmentation offload algorithms (eg TSO).
 - VLAN stripping, capture and insertion config.
 - SSO and TIM blocks config which provide packet scheduling support.
 - Debugfs support, to check current resource provisioning, current status of
   NPA pools, NIX RQ, SQ and CQs, various stats etc which helps in debugging issues
   (see the example below).
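
For instance, when debugfs is enabled, resource provisioning can be inspected
from the AF driver's debugfs directory. The path and file name below are an
assumption based on a typical setup and may differ across kernel versions::

        # ls /sys/kernel/debug/octeontx2/
        # cat /sys/kernel/debug/octeontx2/rsrc_alloc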

Physical Function driver
------------------------

This RVU PF handles IO, is mapped to a physical ethernet link and this
driver registers a netdev. This supports SR-IOV (see the example below). As said
above this driver communicates with AF via a mailbox. To retrieve information
from physical links this driver talks to AF and AF gets that info from firmware
and responds back, ie this driver cannot talk to firmware directly.
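
As a hedged illustration (the PF's PCI address is a placeholder), VFs can be
created from the PF through the standard sysfs SR-IOV interface::

        # echo 4 > /sys/bus/pci/devices/<PF-BDF>/sriov_numvfs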

Supports ethtool for configuring links, RSS, queue count, queue size,
flow control, ntuple filters, dump PHY EEPROM, config FEC etc (see the
examples below).
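
A few hedged examples of such ethtool usage, covering queue count, ring size,
RSS, flow control, an ntuple filter, module EEPROM dump and FEC. These use
generic ethtool syntax; the exact options supported depend on the driver and
firmware in use::

        # ethtool -L <interface> rx 8 tx 8
        # ethtool -G <interface> rx 8192 tx 8192
        # ethtool -X <interface> equal 8
        # ethtool -A <interface> rx on tx on
        # ethtool -N <interface> flow-type tcp4 dst-port 80 action 0
        # ethtool -m <interface>
        # ethtool --set-fec <interface> encoding rs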

Virtual Function driver
-----------------------

There are two types of VFs: VFs that share the physical link with their parent
SR-IOV PF, and VFs which work in pairs using internal HW loopback channels (LBK).

1. SRIOV VF

 - These VFs and their parent PF share a physical link and are used for outside communication.
 - VFs cannot communicate with AF directly, they send a mbox message to PF and PF
   forwards that to AF. AF after processing, responds back to PF and PF forwards
   the reply back to the VF.
 - From functionality point of view there is no difference between PF and VF as same type of
   HW resources are attached to both. But the user would be able to configure a few things only
   from the PF, as the PF is treated as owner/admin of the link (see the example below).
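
For example, VF attributes owned by the PF can typically be set through the
standard iproute2 interface. This is a hedged sketch; the VF index and MAC/VLAN
values are placeholders and actual support depends on the kernel version::

        # ip link set dev <pf-interface> vf 0 mac 02:00:00:00:00:01
        # ip link set dev <pf-interface> vf 0 vlan 100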

2. LBK VF

 - RVU PF0 ie admin function creates these VFs and maps them to loopback block's channels.
 - A set of two VFs (VF0 & VF1, VF2 & VF3 .. so on) works as a pair ie pkts sent out of
   VF0 will be received by VF1 and vice versa.
 - These VFs can be used by applications or virtual machines to communicate between them
   without sending traffic outside (see the example below). There is no switch present in HW,
   hence the support for loopback VFs.
 - These communicate directly with AF (PF0) via mbox.

Except for the IO channels or links used for packet reception and transmission there is
no other difference between these VF types. AF driver takes care of IO channel mapping,
hence the same VF driver works for both types of devices.
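
A minimal, hedged way to exercise an LBK VF pair from the host. The interface
names are hypothetical and depend on how udev names the VF netdevs; one VF is
moved into a network namespace so the ping actually traverses the LBK channel::

        # ip netns add lbk-test
        # ip link set <lbk-vf1> netns lbk-test
        # ip addr add 192.168.100.1/24 dev <lbk-vf0>
        # ip link set <lbk-vf0> up
        # ip netns exec lbk-test ip addr add 192.168.100.2/24 dev <lbk-vf1>
        # ip netns exec lbk-test ip link set <lbk-vf1> up
        # ping 192.168.100.2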

Basic packet flow
=================

Ingress
-------

1. CGX LMAC receives a packet.
2. Forwards the packet to the NIX block.
3. Then submitted to NPC block for parsing and then MCAM lookup to get the destination RVU device.
4. NIX LF attached to the destination RVU device allocates a buffer from RQ mapped buffer pool of NPA block LF.
5. RQ may be selected by RSS or by configuring MCAM rule with a RQ number (see the example below).
6. Packet is DMA'ed and driver is notified.
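
For instance, a specific RQ can be requested for matching traffic via an ethtool
ntuple filter, which the driver turns into a packet steering rule. This is a
hedged example; the match fields and queue index are arbitrary::

        # ethtool -N <interface> flow-type udp4 dst-port 4789 action 2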

Egress
------

1. Driver prepares a send descriptor and submits it to the SQ for transmission.
2. The SQ is already configured (by AF) to transmit on a specific link/channel.
3. The SQ descriptor ring is maintained in buffers allocated from SQ mapped pool of NPA block LF.
4. NIX block transmits the pkt on the designated channel.
5. NPC MCAM entries can be installed to divert pkt onto a different channel.

Devlink health reporters
========================

NPA Reporters
-------------

The NPA reporters are responsible for reporting and recovering the following group of errors:

1. GENERAL events

   - Error due to operation of unmapped PF.
   - Error due to disabled alloc/free for other HW blocks (NIX, SSO, TIM, DPI and AURA).

2. ERROR events

   - Fault due to NPA_AQ_INST_S read or NPA_AQ_RES_S write.

3. RAS events

   - RAS Error Reporting for NPA_AQ_INST_S/NPA_AQ_RES_S.

4. RVU events

   - Error due to unmapped slot.

Sample Output::

        ~# devlink health
        pci/0002:01:00.0:
          reporter hw_npa_intr
              state healthy error 2872 recover 2872 last_dump_date 2020-12-10 last_dump_time 09:39:09 grace_period 0 auto_recover true auto_dump true
          reporter hw_npa_gen
              state healthy error 2872 recover 2872 last_dump_date 2020-12-11 last_dump_time 04:43:04 grace_period 0 auto_recover true auto_dump true
          reporter hw_npa_err
              state healthy error 2871 recover 2871 last_dump_date 2020-12-10 last_dump_time 09:39:17 grace_period 0 auto_recover true auto_dump true
          reporter hw_npa_ras
              state healthy error 0 recover 0 last_dump_date 2020-12-10 last_dump_time 09:32:40 grace_period 0 auto_recover true auto_dump true

Each reporter dumps the

 - Error Type
 - Error Register value
 - Reason in words

For example::

        ~# devlink health dump show pci/0002:01:00.0 reporter hw_npa_gen
                 NPA General Interrupt Reg : 1
                 NIX0: free disabled RX
        ~# devlink health dump show pci/0002:01:00.0 reporter hw_npa_intr
                 NPA RVU Interrupt Reg : 1
        ~# devlink health dump show pci/0002:01:00.0 reporter hw_npa_err
                 NPA Error Interrupt Reg : 4096

NIX Reporters
-------------

The NIX reporters are responsible for reporting and recovering the following group of errors:

1. GENERAL events

   - Receive mirror/multicast packet drop due to insufficient buffer.
   - SMQ Flush operation.

2. ERROR events

   - Memory Fault due to WQE read/write from multicast/mirror buffer.
   - Receive multicast/mirror replication list error.
   - Receive packet on an unmapped PF.
   - Fault due to NIX_AQ_INST_S read or NIX_AQ_RES_S write.

3. RAS events

   - RAS Error Reporting for NIX Receive Multicast/Mirror Entry Structure.
   - RAS Error Reporting for WQE/Packet Data read from Multicast/Mirror Buffer.
   - RAS Error Reporting for NIX_AQ_INST_S/NIX_AQ_RES_S.

4. RVU events

   - Error due to unmapped slot.

Sample Output::

        ~# devlink health
        pci/0002:01:00.0:
          reporter hw_npa_intr
              state healthy error 0 recover 0 grace_period 0 auto_recover true auto_dump true
          reporter hw_npa_gen
              state healthy error 0 recover 0 grace_period 0 auto_recover true auto_dump true
          reporter hw_npa_err
              state healthy error 0 recover 0 grace_period 0 auto_recover true auto_dump true
          reporter hw_npa_ras
              state healthy error 0 recover 0 grace_period 0 auto_recover true auto_dump true
          reporter hw_nix_intr
              state healthy error 1121 recover 1121 last_dump_date 2021-01-19 last_dump_time 05:42:26 grace_period 0 auto_recover true auto_dump true
          reporter hw_nix_gen
              state healthy error 949 recover 949 last_dump_date 2021-01-19 last_dump_time 05:42:43 grace_period 0 auto_recover true auto_dump true
          reporter hw_nix_err
              state healthy error 1147 recover 1147 last_dump_date 2021-01-19 last_dump_time 05:42:59 grace_period 0 auto_recover true auto_dump true
          reporter hw_nix_ras
              state healthy error 409 recover 409 last_dump_date 2021-01-19 last_dump_time 05:43:16 grace_period 0 auto_recover true auto_dump true

Each reporter dumps the

 - Error Type
 - Error Register value
 - Reason in words

For example::

        ~# devlink health dump show pci/0002:01:00.0 reporter hw_nix_intr
                 NIX RVU Interrupt Reg : 1
        ~# devlink health dump show pci/0002:01:00.0 reporter hw_nix_gen
                 NIX General Interrupt Reg : 1
                 Rx multicast pkt drop
        ~# devlink health dump show pci/0002:01:00.0 reporter hw_nix_err
                 NIX Error Interrupt Reg : 64
                 Rx on unmapped PF_FUNC
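
Beyond inspecting dumps, the standard devlink health commands can be used to
manage these reporters. A hedged example, reusing the PCI address and reporter
name from the samples above::

        # devlink health set pci/0002:01:00.0 reporter hw_npa_intr auto_recover false
        # devlink health recover pci/0002:01:00.0 reporter hw_npa_intr
        # devlink health dump clear pci/0002:01:00.0 reporter hw_npa_intr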

Quality of service
==================

Hardware algorithms used in scheduling
--------------------------------------

The OcteonTx2 and CN10K silicon transmit interface consists of five transmit levels,
starting from SMQ/MDQ and TL4 up to TL1. Each packet will traverse the MDQ and TL4 to TL1
levels. Each level contains an array of queues to support scheduling and shaping.
The hardware uses the below algorithms depending on the priority of scheduler queues.
Once the user creates tc classes with different priorities, the driver configures the
schedulers allocated to the class with the specified priority along with rate-limiting
configuration.

Strict Priority
^^^^^^^^^^^^^^^

- Once packets are submitted to MDQ, hardware picks all active MDQs having different priorities
  using strict priority.

Round Robin
^^^^^^^^^^^

- Active MDQs having the same priority level are chosen using round robin.

Setup HTB offload
-----------------

1. Enable HW TC offload on the interface::

        # ethtool -K <interface> hw-tc-offload on

2. Create clsact and HTB root qdiscs on the interface::

        # tc qdisc add dev <interface> clsact
        # tc qdisc replace dev <interface> root handle 1: htb offload

3. Create tc classes with different priorities::

        # tc class add dev <interface> parent 1: classid 1:1 htb rate 10Gbit prio 1

        # tc class add dev <interface> parent 1: classid 1:2 htb rate 10Gbit prio 7

4. Create tc classes with same priorities and different quantum::

        # tc class add dev <interface> parent 1: classid 1:1 htb rate 10Gbit prio 2 quantum 409600

        # tc class add dev <interface> parent 1: classid 1:2 htb rate 10Gbit prio 2 quantum 188416

        # tc class add dev <interface> parent 1: classid 1:3 htb rate 10Gbit prio 2 quantum 32768
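
As a hedged follow-up to the steps above (not part of the original setup; the
match fields and class ID are arbitrary), traffic can be steered into one of
the classes with a flower filter, and the resulting scheduling can then be
observed through class statistics::

        # tc filter add dev <interface> protocol ip parent 1: flower ip_proto tcp dst_port 80 classid 1:1

        # tc -s class show dev <interface>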