============
Architecture
============

This document describes the **Distributed Switch Architecture (DSA)** subsystem
design principles, limitations, interactions with other subsystems, and how to
develop drivers for this subsystem as well as a TODO for developers interested
in joining this effort.

Design principles
=================

The Distributed Switch Architecture subsystem was primarily designed to
support Marvell Ethernet switches (MV88E6xxx, a.k.a. Link Street product
line) using Linux, but has since evolved to support other vendors as well.

The original philosophy behind this design was to be able to use unmodified
Linux tools such as bridge, iproute2 and ifconfig transparently, whether they
configured/queried a switch port network device or a regular network device.

An Ethernet switch typically comprises multiple front-panel ports and one
or more CPU or management ports. The DSA subsystem currently relies on the
presence of a management port connected to an Ethernet controller capable of
receiving Ethernet frames from the switch. This is a very common setup for all
kinds of Ethernet switches found in Small Home and Office products: routers,
gateways, or even top-of-rack switches. This host Ethernet controller will
be later referred to as "conduit" and "cpu" in DSA terminology and code.

The D in DSA stands for Distributed, because the subsystem has been designed
with the ability to configure and manage cascaded switches on top of each other
using upstream and downstream Ethernet links between switches. These specific
ports are referred to as "dsa" ports in DSA terminology and code. A collection
of multiple switches connected to each other is called a "switch tree".

For each front-panel port, DSA creates specialized network devices which are
used as controlling and data-flowing endpoints for use by the Linux networking
stack. These specialized network interfaces are referred to as "user" network
interfaces in DSA terminology and code.

The ideal case for using DSA is when an Ethernet switch supports a "switch tag",
which is a hardware feature making the switch insert a specific tag for each
Ethernet frame it received from/sent to specific ports, to help the management
interface figure out:

- what port is this frame coming from
- what was the reason why this frame got forwarded
- how to send CPU originated traffic to specific ports

The subsystem does support switches not capable of inserting/stripping tags, but
the features might be slightly limited in that case (traffic separation relies
on Port-based VLAN IDs).

Note that DSA does not currently create network interfaces for the "cpu" and
"dsa" ports because:

- the "cpu" port is the Ethernet switch facing side of the management
  controller, and as such, would create a duplication of feature, since you
  would get two interfaces for the same conduit: conduit netdev, and "cpu" netdev

- the "dsa" port(s) are just conduits between two or more switches, and as such
  cannot really be used as proper network interfaces either; only the
  downstream, or the top-most upstream interface makes sense with that model

NB: for the past 15 years, the DSA subsystem had been making use of the terms
"master" (rather than "conduit") and "slave" (rather than "user"). These terms
have been removed from the DSA codebase and phased out of the uAPI.

Switch tagging protocols
------------------------

DSA supports many vendor-specific tagging protocols, one software-defined
tagging protocol, and a tag-less mode as well (``DSA_TAG_PROTO_NONE``).

The exact format of the tag protocol is vendor specific, but in general, they
all contain something which:

- identifies which port the Ethernet frame came from/should be sent to
- provides a reason why this frame was forwarded to the management interface

All tagging protocols are in ``net/dsa/tag_*.c`` files and implement the
methods of the ``struct dsa_device_ops`` structure, which are detailed below.

Tagging protocols generally fall in one of three categories:

1. The switch-specific frame header is located before the Ethernet header,
   shifting to the right (from the perspective of the DSA conduit's frame
   parser) the MAC DA, MAC SA, EtherType and the entire L2 payload.
2. The switch-specific frame header is located before the EtherType, keeping
   the MAC DA and MAC SA in place from the DSA conduit's perspective, but
   shifting the 'real' EtherType and L2 payload to the right.
3. The switch-specific frame header is located at the tail of the packet,
   keeping all frame headers in place and not altering the view of the packet
   that the DSA conduit's frame parser has.

A tagging protocol may tag all packets with switch tags of the same length, or
the tag length might vary (for example packets with PTP timestamps might
require an extended switch tag, or there might be one tag length on TX and a
different one on RX). Either way, the tagging protocol driver must populate the
``struct dsa_device_ops::needed_headroom`` and/or ``struct dsa_device_ops::needed_tailroom``
with the length in octets of the longest switch frame header/trailer. The DSA
framework will automatically adjust the MTU of the conduit interface to
accommodate for this extra size in order for DSA user ports to support the
standard MTU (L2 payload length) of 1500 octets. The ``needed_headroom`` and
``needed_tailroom`` properties are also used to request from the network stack,
on a best-effort basis, the allocation of packets with enough extra space such
that the act of pushing the switch tag on transmission of a packet does not
cause it to reallocate due to lack of memory.

Even though applications are not expected to parse DSA-specific frame headers,
the format on the wire of the tagging protocol represents an Application Binary
Interface exposed by the kernel towards user space, for decoders such as
``libpcap``. The tagging protocol driver must populate the ``proto`` member of
``struct dsa_device_ops`` with a value that uniquely describes the
characteristics of the interaction required between the switch hardware and the
data path driver: the offset of each bit field within the frame header and any
stateful processing required to deal with the frames (as may be required for
PTP timestamps).

From the perspective of the network stack, all switches within the same DSA
switch tree use the same tagging protocol. In case of a packet transiting a
fabric with more than one switch, the switch-specific frame header is inserted
by the first switch in the fabric that the packet was received on. This header
typically contains information regarding its type (whether it is a control
frame that must be trapped to the CPU, or a data frame to be forwarded).
Control frames should be decapsulated only by the software data path, whereas
data frames might also be autonomously forwarded towards other user ports of
other switches from the same fabric, and in this case, the outermost switch
ports must decapsulate the packet.

Note that in certain cases, the tagging format used by a leaf switch (not
connected directly to the CPU) is not the same as what the network stack sees.
This can be seen with Marvell switch trees, where the CPU port can be
configured to use either the DSA or the Ethertype DSA (EDSA) format, but the
DSA links are configured to use the shorter (without Ethertype) DSA frame
header, in order to reduce the autonomous packet forwarding overhead. It still
remains the case that, if the DSA switch tree is configured for the EDSA
tagging protocol, the operating system sees EDSA-tagged packets from the leaf
switches that tagged them with the shorter DSA header. This can be done
because the Marvell switch connected directly to the CPU is configured to
perform tag translation between DSA and EDSA (which is simply the operation of
adding or removing the ``ETH_P_EDSA`` EtherType and some padding octets).

It is possible to construct cascaded setups of DSA switches even if their
tagging protocols are not compatible with one another. In this case, there are
no DSA links in this fabric, and each switch constitutes a disjoint DSA switch
tree. The DSA links are viewed as simply a pair of a DSA conduit (the out-facing
port of the upstream DSA switch) and a CPU port (the in-facing port of the
downstream DSA switch).

The tagging protocol of the attached DSA switch tree can be viewed through the
``dsa/tagging`` sysfs attribute of the DSA conduit::

    cat /sys/class/net/eth0/dsa/tagging

If the hardware and driver are capable, the tagging protocol of the DSA switch
tree can be changed at runtime. This is done by writing the new tagging
protocol name to the same sysfs device attribute as above (the DSA conduit and
all attached switch ports must be down while doing this).

It is desirable that all tagging protocols are testable with the ``dsa_loop``
mockup driver, which can be attached to any network interface. The goal is that
any network interface should be capable of transmitting the same packet in the
same way, and the tagger should decode the same received packet in the same way
regardless of the driver used for the switch control path, and the driver used
for the DSA conduit.

The transmission of a packet goes through the tagger's ``xmit`` function.
The passed ``struct sk_buff *skb`` has ``skb->data`` pointing at
``skb_mac_header(skb)``, i.e. at the destination MAC address, and the passed
``struct net_device *dev`` represents the virtual DSA user network interface
whose hardware counterpart the packet must be steered to (i.e. ``swp0``).
The job of this method is to prepare the skb in a way that the switch will
understand what egress port the packet is for (and not deliver it towards other
ports). Typically this is fulfilled by pushing a frame header. Checking for
insufficient size in the skb headroom or tailroom is unnecessary provided that
the ``needed_headroom`` and ``needed_tailroom`` properties were filled out
properly, because DSA ensures there is enough space before calling this method.

The reception of a packet goes through the tagger's ``rcv`` function. The
passed ``struct sk_buff *skb`` has ``skb->data`` pointing at
``skb_mac_header(skb) + ETH_HLEN`` octets, i.e. to where the first octet after
the EtherType would have been, were this frame not tagged. The role of this
method is to consume the frame header, adjust ``skb->data`` to really point at
the first octet after the EtherType, and to change ``skb->dev`` to point to the
virtual DSA user network interface corresponding to the physical front-facing
switch port that the packet was received on.

Since tagging protocols in category 1 and 2 break software (and most often also
hardware) packet dissection on the DSA conduit, features such as RPS (Receive
Packet Steering) on the DSA conduit would be broken. The DSA framework deals
with this by hooking into the flow dissector and shifting the offset at which
the IP header is to be found in the tagged frame as seen by the DSA conduit.
This behavior is automatic based on the ``needed_headroom`` value of the
tagging protocol. If not all packets are of equal size, the tagger can
implement the ``flow_dissect`` method of the ``struct dsa_device_ops`` and
override this default behavior by specifying the correct offset incurred by
each individual RX packet. Tail taggers do not cause issues to the flow
dissector.

Checksum offload should work with category 1 and 2 taggers when the DSA conduit
driver declares NETIF_F_HW_CSUM in vlan_features and looks at csum_start and
csum_offset. For those cases, DSA will shift the checksum start and offset by
the tag size. If the DSA conduit driver still uses the legacy NETIF_F_IP_CSUM
or NETIF_F_IPV6_CSUM in vlan_features, the offload might only work if the
offload hardware already expects that specific tag (perhaps due to matching
vendors). DSA user ports inherit those flags from the conduit, and it is up to
the driver to correctly fall back to software checksum when the IP header is not
where the hardware expects it. If that check is ineffective, the packets might
go to the network without a proper checksum (the checksum field will have the
pseudo IP header sum). For category 3, when the offload hardware does not
already expect the switch tag in use, the checksum must be calculated before any
tag is inserted (i.e. inside the tagger). Otherwise, the DSA conduit would
include the tail tag in the (software or hardware) checksum calculation. Then,
when the tag gets stripped by the switch during transmission, it will leave an
incorrect IP checksum in place.

Due to various reasons (most common being category 1 taggers being associated
with DSA-unaware conduits, mangling what the conduit perceives as MAC DA), the
tagging protocol may require the DSA conduit to operate in promiscuous mode, to
receive all frames regardless of the value of the MAC DA. This can be done by
setting the ``promisc_on_conduit`` property of the ``struct dsa_device_ops``.
Note that this assumes a DSA-unaware conduit driver, which is the norm.

Conduit network devices
-----------------------

Conduit network devices are regular, unmodified Linux network device drivers for
the CPU/management Ethernet interface. Such a driver might occasionally need to
know whether DSA is enabled (e.g.: to enable/disable specific offload features),
but the DSA subsystem has been proven to work with industry standard drivers:
``e1000e``, ``mv643xx_eth`` etc. without having to introduce modifications to these
drivers. Such network devices are also often referred to as conduit network
devices since they act as a pipe between the host processor and the hardware
Ethernet switch.

Networking stack hooks
----------------------

When a conduit netdev is used with DSA, a small hook is placed in the
networking stack in order to have the DSA subsystem process the Ethernet
switch specific tagging protocol. DSA accomplishes this by registering a
specific (and fake) Ethernet type (later becoming ``skb->protocol``) with the
networking stack, this is also known as a ``ptype`` or ``packet_type``. A typical
Ethernet frame receive sequence looks like this:

Conduit network device (e.g.: e1000e):

1. Receive interrupt fires:

   - receive function is invoked
   - basic packet processing is done: getting length, status etc.
   - packet is prepared to be processed by the Ethernet layer by calling
     ``eth_type_trans()``

2. net/ethernet/eth.c::

      eth_type_trans(skb, dev)
              if (dev->dsa_ptr != NULL)
                      -> skb->protocol = ETH_P_XDSA

3. drivers/net/ethernet/\*::

      netif_receive_skb(skb)
              -> iterate over registered packet_type
                      -> invoke handler for ETH_P_XDSA, calls dsa_switch_rcv()

4. net/dsa/dsa.c::

      -> dsa_switch_rcv()
         -> invoke switch tag specific protocol handler in ``net/dsa/tag_*.c``

5. net/dsa/tag_*.c:

   - inspect and strip switch tag protocol to determine originating port
   - locate per-port network device
   - invoke ``eth_type_trans()`` with the DSA user network device
   - invoke ``netif_receive_skb()``

Past this point, the DSA user network devices get delivered regular Ethernet
frames that can be processed by the networking stack.

User network devices
--------------------

User network devices created by DSA are stacked on top of their conduit network
device, each of these network interfaces will be responsible for being a
controlling and data-flowing end-point for each front-panel port of the switch.
These interfaces are specialized in order to:

- insert/remove the switch tag protocol (if it exists) when sending traffic
  to/from specific switch ports
- query the switch for ethtool operations: statistics, link state,
  Wake-on-LAN, register dumps...
- manage external/internal PHY: link, auto-negotiation, etc.

These user network devices have custom net_device_ops and ethtool_ops function
pointers which allow DSA to introduce a level of layering between the networking
stack/ethtool and the switch driver implementation.

Upon frame transmission from these user network devices, DSA will look up which
switch tagging protocol is currently registered with these network devices and
invoke a specific transmit routine which takes care of adding the relevant
switch tag in the Ethernet frames.

These frames are then queued for transmission using the conduit network device
``ndo_start_xmit()`` function. Since they contain the appropriate switch tag, the
Ethernet switch will be able to process these incoming frames from the
management interface and deliver them to the physical switch port.

When using multiple CPU ports, it is possible to stack a LAG (bonding/team)
device between the DSA user devices and the physical DSA conduits. The LAG
device is thus also a DSA conduit, but the LAG slave devices continue to be DSA
conduits as well (just with no user port assigned to them; this is needed for
recovery in case the LAG DSA conduit disappears). Thus, the data path of the LAG
DSA conduit is used asymmetrically. On RX, the ``ETH_P_XDSA`` handler, which
calls ``dsa_switch_rcv()``, is invoked early (on the physical DSA conduit;
LAG slave). Therefore, the RX data path of the LAG DSA conduit is not used.
On the other hand, TX takes place linearly: ``dsa_user_xmit`` calls
``dsa_enqueue_skb``, which calls ``dev_queue_xmit`` towards the LAG DSA conduit.
The latter calls ``dev_queue_xmit`` towards one physical DSA conduit or the
other, and in both cases, the packet exits the system through a hardware path
towards the switch.

Graphical representation
------------------------

Summarized, this is basically how DSA looks like from a network device
perspective::

                    Unaware application
                  opens and binds socket
                           |  |
                           |  |
                +-----------v--|--------------------+
                |+------+ +------+ +------+ +------+|
                || swp0 | | swp1 | | swp2 | | swp3 ||
                |+------+-+------+-+------+-+------+|
                |         DSA switch driver         |
                +-----------------------------------+
                   Tag added by  |  | Tag consumed by
                  switch driver  |  | switch driver
                                 v  |
                +-----------------------------------+
                | Unmodified host interface driver  |     Software
        --------+-----------------------------------+------------
                |       Host interface (eth0)       |     Hardware
                +-----------------------------------+
                 Tag consumed by |  | Tag added by
                 switch hardware |  | switch hardware
                +-----------------------------------+
                |          Switch hardware          |
                |+------+ +------+ +------+ +------+|
                || swp0 | | swp1 | | swp2 | | swp3 ||
                ++------+-+------+-+------+-+------++

User MDIO bus
-------------

In order to be able to read from/write to a switch PHY built into it, DSA
creates a user MDIO bus which allows a specific switch driver to divert and
intercept MDIO reads/writes towards specific PHY addresses. In most
MDIO-connected switches, these functions would utilize direct or indirect PHY
addressing mode to return standard MII registers from the switch builtin PHYs,
allowing the PHY library to determine link status, link partner pages,
auto-negotiation results, etc.

For Ethernet switches which have both external and internal MDIO buses, the
user MII bus can be utilized to mux/demux MDIO reads and writes towards either
internal or external MDIO devices this switch might be connected to: internal
PHYs, external PHYs, or even external switches.

Data structures
---------------

DSA data structures are defined in ``include/net/dsa.h`` as well as
``net/dsa/dsa_priv.h``:

- ``dsa_chip_data``: platform data configuration for a given switch device,
  this structure describes a switch device's parent device, its address, as
  well as various properties of its ports: names/labels, and finally a routing
  table indication (when cascading switches)

- ``dsa_platform_data``: platform device configuration data which can reference
  a collection of dsa_chip_data structures if multiple switches are cascaded,
  the conduit network device this switch tree is attached to needs to be
  referenced

- ``dsa_switch_tree``: structure assigned to the conduit network device under
  ``dsa_ptr``, this structure references a dsa_platform_data structure as well as
  the tagging protocol supported by the switch tree, and which receive/transmit
  function hooks should be invoked, information about the directly attached
  switch is also provided: CPU port. Finally, a collection of dsa_switch are
  referenced to address individual switches in the tree.

- ``dsa_switch``: structure describing a switch device in the tree, referencing
  a ``dsa_switch_tree`` as a backpointer, user network devices, conduit network
  device, and a reference to the backing ``dsa_switch_ops``

- ``dsa_switch_ops``: structure referencing function pointers, see below for a
  full description.

Design limitations
==================

Lack of CPU/DSA network devices
-------------------------------

DSA does not currently create user network devices for the CPU or DSA ports, as
described before. This might be an issue in the following cases:

- inability to fetch switch CPU port statistics counters using ethtool, which
  can make it harder to debug MDIO switches connected using xMII interfaces

- inability to configure the CPU port link parameters based on the Ethernet
  controller capabilities attached to it: http://patchwork.ozlabs.org/patch/509806/

- inability to configure specific VLAN IDs / trunking VLANs between switches
  when using a cascaded setup

Common pitfalls using DSA setups
--------------------------------

Once a conduit network device is configured to use DSA (dev->dsa_ptr becomes
non-NULL), and the switch behind it expects a tagging protocol, this network
interface can only be used as a conduit interface. Sending packets directly
through this interface (e.g.: opening a socket using this interface) will not
make us go through the switch tagging protocol transmit function, so the
Ethernet switch on the other end, expecting a tag, will typically drop this
frame.

Interactions with other subsystems
==================================

DSA currently leverages the following subsystems:

- MDIO/PHY library: ``drivers/net/phy/phy.c``, ``mdio_bus.c``
- Switchdev: ``net/switchdev/*``
- Device Tree for various ``of_*`` functions
- Devlink: ``net/core/devlink.c``

MDIO/PHY library
----------------

User network devices exposed by DSA may or may not be interfacing with PHY
devices (``struct phy_device`` as defined in ``include/linux/phy.h``), but the DSA
subsystem deals with all possible combinations:

- internal PHY devices, built into the Ethernet switch hardware
- external PHY devices, connected via an internal or external MDIO bus
- internal PHY devices, connected via an internal MDIO bus
- special, non-autonegotiated or non MDIO-managed PHY devices: SFPs, MoCA; a.k.a
  fixed PHYs

The PHY configuration is done by the ``dsa_user_phy_setup()`` function and the
logic basically looks like this:

- if Device Tree is used, the PHY device is looked up using the standard
  "phy-handle" property, if found, this PHY device is created and registered
  using ``of_phy_connect()``

- if Device Tree is used and the PHY device is "fixed", that is, conforms to
  the definition of a non-MDIO managed PHY as defined in
  ``Documentation/devicetree/bindings/net/fixed-link.txt``, the PHY is registered
  and connected transparently using the special fixed MDIO bus driver

- finally, if the PHY is built into the switch, as is very common with
  standalone switch packages, the PHY is probed using the user MII bus created
  by the switch driver

SWITCHDEV
---------

DSA directly utilizes SWITCHDEV when interfacing with the bridge layer, and
more specifically with its VLAN filtering portion when configuring VLANs on top
of per-port user network devices. As of today, the only SWITCHDEV objects
supported by DSA are the FDB and VLAN objects.

Devlink
-------

DSA registers one devlink device per physical switch in the fabric.
For each devlink device, every physical port (i.e. user ports, CPU ports, DSA
links or unused ports) is exposed as a devlink port.

DSA drivers can make use of the following devlink features:

- Regions: debugging feature which allows user space to dump driver-defined
  areas of hardware information in a low-level, binary format. Both global
  regions as well as per-port regions are supported. It is possible to export
  devlink regions even for pieces of data that are already exposed in some way
  to the standard iproute2 user space programs (ip-link, bridge), like address
  tables and VLAN tables. For example, this might be useful if the tables
  contain additional hardware-specific details which are not visible through
  the iproute2 abstraction, or it might be useful to inspect these tables on
  the non-user ports too, which are invisible to iproute2 because no network
  interface is registered for them.
- Params: a feature which enables users to configure certain low-level tunable
  knobs pertaining to the device. Drivers may implement applicable generic
  devlink params, or may add new device-specific devlink params.
- Resources: a monitoring feature which enables users to see the degree of
  utilization of certain hardware tables in the device, such as FDB, VLAN, etc.
- Shared buffers: a QoS feature for adjusting and partitioning memory and frame
  reservations per port and per traffic class, in the ingress and egress
  directions, such that low-priority bulk traffic does not impede the
  processing of high-priority critical traffic.

For more details, consult ``Documentation/networking/devlink/``.

Device Tree
-----------

DSA features a standardized binding which is documented in
``Documentation/devicetree/bindings/net/dsa/dsa.txt``. PHY/MDIO library helper
functions such as ``of_get_phy_mode()``, ``of_phy_connect()`` are also used to query
per-port PHY specific details: interface connection, MDIO bus location, etc.

Driver development
==================

DSA switch drivers need to implement a ``dsa_switch_ops`` structure which will
contain the various members described below.

Probing, registration and device lifetime
-----------------------------------------

DSA switches are regular ``device`` structures on buses (be they platform, SPI,
I2C, MDIO or otherwise). The DSA framework is not involved in their probing
with the device core.

Switch registration from the perspective of a driver means passing a valid
``struct dsa_switch`` pointer to ``dsa_register_switch()``, usually from the
switch driver's probing function. The following members must be valid in the
provided structure:

- ``ds->dev``: will be used to parse the switch's OF node or platform data.

- ``ds->num_ports``: will be used to create the port list for this switch, and
  to validate the port indices provided in the OF node.

- ``ds->ops``: a pointer to the ``dsa_switch_ops`` structure holding the DSA
  method implementations.

- ``ds->priv``: backpointer to a driver-private data structure which can be
  retrieved in all further DSA method callbacks.

In addition, the following flags in the ``dsa_switch`` structure may optionally
be configured to obtain driver-specific behavior from the DSA core. Their
behavior when set is documented through comments in ``include/net/dsa.h``.

- ``ds->vlan_filtering_is_global``

- ``ds->needs_standalone_vlan_filtering``

- ``ds->configure_vlan_while_not_filtering``

- ``ds->untag_bridge_pvid``

- ``ds->assisted_learning_on_cpu_port``

- ``ds->mtu_enforcement_ingress``

- ``ds->fdb_isolation``

Internally, DSA keeps an array of switch trees (group of switches) global to
the kernel, and attaches a ``dsa_switch`` structure to a tree on registration.
The tree ID to which the switch is attached is determined by the first u32
number of the ``dsa,member`` property of the switch's OF node (0 if missing).
The switch ID within the tree is determined by the second u32 number of the
same OF property (0 if missing). Registering multiple switches with the same
switch ID and tree ID is illegal and will cause an error. Using platform data,
a single switch and a single switch tree are permitted.

In case of a tree with multiple switches, probing takes place asymmetrically.
The first N-1 callers of ``dsa_register_switch()`` only add their ports to the
port list of the tree (``dst->ports``), each port having a backpointer to its
associated switch (``dp->ds``). Then, these switches exit their
``dsa_register_switch()`` call early, because ``dsa_tree_setup_routing_table()``
has determined that the tree is not yet complete (not all ports referenced by
DSA links are present in the tree's port list). The tree becomes complete when
the last switch calls ``dsa_register_switch()``, and this triggers the effective
continuation of initialization (including the call to ``ds->ops->setup()``) for
all switches within that tree, all as part of the calling context of the last
switch's probe function.

The opposite of registration takes place when calling ``dsa_unregister_switch()``,
which removes a switch's ports from the port list of the tree. The entire tree
is torn down when the first switch unregisters.

It is mandatory for DSA switch drivers to implement the ``shutdown()`` callback
of their respective bus, and call ``dsa_switch_shutdown()`` from it (a minimal
version of the full teardown performed by ``dsa_unregister_switch()``).
The reason is that DSA keeps a reference on the conduit net device, and if the
driver for the conduit device decides to unbind on shutdown, DSA's reference
will block that operation from finalizing.

Either ``dsa_switch_shutdown()`` or ``dsa_unregister_switch()`` must be called,
but not both, and the device driver model permits the bus' ``remove()`` method
to be called even if ``shutdown()`` was already called. Therefore, drivers are
expected to implement a mutual exclusion method between ``remove()`` and
``shutdown()`` by setting their drvdata to NULL after any of these has run, and
checking whether the drvdata is NULL before proceeding to take any action.

After ``dsa_switch_shutdown()`` or ``dsa_unregister_switch()`` was called, no
further callbacks via the provided ``dsa_switch_ops`` may take place, and the
driver may free the data structures associated with the ``dsa_switch``.

Switch configuration
--------------------

- ``get_tag_protocol``: this is to indicate what kind of tagging protocol is
  supported, should be a valid value from the ``dsa_tag_protocol`` enum.
  The returned information does not have to be static; the driver is passed the
  CPU port number, as well as the tagging protocol of a possibly stacked
  upstream switch, in case there are hardware limitations in terms of supported
  tag formats.

- ``change_tag_protocol``: when the default tagging protocol has compatibility
  problems with the conduit or other issues, the driver may support changing it
  at runtime, either through a device tree property or through sysfs. In that
  case, further calls to ``get_tag_protocol`` should report the protocol in
  current use.

- ``setup``: setup function for the switch, this function is responsible for setting
  up the ``dsa_switch_ops`` private structure with all it needs: register maps,
  interrupts, mutexes, locks, etc. This function is also expected to properly
  configure the switch to separate all network interfaces from each other, that
  is, they should be isolated by the switch hardware itself, typically by creating
  a Port-based VLAN ID for each port and allowing only the CPU port and the
  specific port to be in the forwarding vector. Ports that are unused by the
  platform should be disabled. Past this function, the switch is expected to be
  fully configured and ready to serve any kind of request. It is recommended
  to issue a software reset of the switch during this setup function in order to
  avoid relying on what a previous software agent such as a bootloader/firmware
  may have previously configured. The method responsible for undoing any
  applicable allocations or operations done here is ``teardown``.

- ``port_setup`` and ``port_teardown``: methods for initialization and
  destruction of per-port data structures. It is mandatory for some operations
  such as registering and unregistering devlink port regions to be done from
  these methods, otherwise they are optional. A port will be torn down only if
  it has been previously set up. It is possible for a port to be set up during
  probing only to be torn down immediately afterwards, for example in case its
  PHY cannot be found. In this case, probing of the DSA switch continues
  without that particular port.
- ``port_change_conduit``: method through which the affinity (association used
  for traffic termination purposes) between a user port and a CPU port can be
  changed. By default all user ports from a tree are assigned to the first
  available CPU port that makes sense for them (most of the times this means
  the user ports of a tree are all assigned to the same CPU port, except for H
  topologies as described in commit 2c0b03258b8b). The ``port`` argument
  represents the index of the user port, and the ``conduit`` argument
  represents the new DSA conduit ``net_device``. The CPU port associated with
  the new conduit can be retrieved by looking at ``struct dsa_port *cpu_dp =
  conduit->dsa_ptr``. Additionally, the conduit can also be a LAG device where
  all the slave devices are physical DSA conduits. LAG DSA conduits also have
  a valid ``conduit->dsa_ptr`` pointer, however this is not unique, but rather
  a duplicate of the first physical DSA conduit's (LAG slave) ``dsa_ptr``. In
  case of a LAG DSA conduit, a further call to ``port_lag_join`` will be
  emitted separately for the physical CPU ports associated with the physical
  DSA conduits, requesting them to create a hardware LAG associated with the
  LAG interface.
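The per-port isolation that ``setup`` is expected to establish can be sketched
with a simple forwarding-vector model. The port count, CPU port number and
bitmask representation below are illustrative only, not taken from any real
switch's register layout:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative topology: 4 user ports plus one CPU port. */
#define NUM_PORTS	5
#define CPU_PORT	4

/* Bit i set in fwd_vector[port] means "port may forward to port i". */
static uint32_t fwd_vector[NUM_PORTS];

static void example_setup(void)
{
	for (int port = 0; port < NUM_PORTS; port++) {
		if (port == CPU_PORT)
			/* the CPU port may reach every user port */
			fwd_vector[port] = ((1u << NUM_PORTS) - 1) & ~(1u << CPU_PORT);
		else
			/* user ports may only reach the CPU port */
			fwd_vector[port] = 1u << CPU_PORT;
	}
}
```

After this, user-to-user forwarding is impossible in hardware until ports join
a bridge and their forwarding vectors are widened accordingly.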
PHY devices and link management
-------------------------------

- ``get_phy_flags``: Some switches are interfaced to various kinds of Ethernet
  PHYs; if the PHY library PHY driver needs to know about information it cannot
  obtain on its own (e.g.: coming from switch memory mapped registers), this
  function should return a 32-bit bitmask of "flags" that is private between
  the switch driver and the Ethernet PHY driver in ``drivers/net/phy/*``.

- ``phy_read``: Function invoked by the DSA user MDIO bus when attempting to
  read the switch port MDIO registers. If unavailable, return 0xffff for each
  read. For builtin switch Ethernet PHYs, this function should allow reading
  the link status, auto-negotiation results, link partner pages, etc.

- ``phy_write``: Function invoked by the DSA user MDIO bus when attempting to
  write to the switch port MDIO registers. If unavailable, return a negative
  error code.

- ``adjust_link``: Function invoked by the PHY library when a user network
  device is attached to a PHY device. This function is responsible for
  appropriately configuring the switch port link parameters: speed, duplex,
  pause based on what the ``phy_device`` is providing.

- ``fixed_link_update``: Function invoked by the PHY library, and specifically
  by the fixed PHY driver, asking the switch driver for link parameters that
  could not be auto-negotiated, or obtained by reading the PHY registers
  through MDIO. This is particularly useful for specific kinds of hardware
  such as QSGMII, MoCA or other kinds of non-MDIO managed PHYs, where out of
  band link information is obtained.
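The fallback semantics of ``phy_read``/``phy_write`` can be sketched as below.
The register file, port count and function names are illustrative; a real
implementation would access switch hardware registers instead:

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* Illustrative: only the first NUM_PHYS ports have a built-in PHY. */
#define NUM_PHYS	4

static uint16_t phy_regs[NUM_PHYS][32];

static int example_phy_read(int port, int regnum)
{
	if (port >= NUM_PHYS)
		return 0xffff;	/* what an empty MDIO bus would return */
	return phy_regs[port][regnum];
}

static int example_phy_write(int port, int regnum, uint16_t val)
{
	if (port >= NUM_PHYS)
		return -EOPNOTSUPP;	/* negative error when unavailable */
	phy_regs[port][regnum] = val;
	return 0;
}
```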
Ethtool operations
------------------

- ``get_strings``: ethtool function used to query the driver's strings, will
  typically return statistics strings, private flags strings, etc.

- ``get_ethtool_stats``: ethtool function used to query per-port statistics
  and return their values. DSA overlays the user network device's general
  statistics (RX/TX counters from the network device) with switch driver
  specific statistics.

- ``get_sset_count``: ethtool function used to query the number of statistics
  items.

- ``get_wol``: ethtool function used to obtain Wake-on-LAN settings per-port;
  this function may, for certain implementations, also query the conduit
  network device Wake-on-LAN settings if this interface needs to participate
  in Wake-on-LAN wake-up events.

- ``set_wol``: ethtool function used to configure Wake-on-LAN settings
  per-port, direct counterpart to ``get_wol`` with similar restrictions.

- ``set_eee``: ethtool function which is used to configure a switch port EEE
  (Green Ethernet) settings, can optionally invoke the PHY library to enable
  EEE at the PHY level if relevant. This function should enable EEE at the
  switch port MAC controller and data-processing logic.

- ``get_eee``: ethtool function which is used to query a switch port EEE
  settings; this function should return the EEE state of the switch port MAC
  controller and data-processing logic as well as query the PHY for its
  currently configured EEE settings.

- ``get_eeprom_len``: ethtool function returning for a given switch the EEPROM
  length/size in bytes.

- ``get_eeprom``: ethtool function returning for a given switch the EEPROM
  contents.

- ``set_eeprom``: ethtool function writing specified data to a given switch
  EEPROM.

- ``get_regs_len``: ethtool function returning the register length for a given
  switch.

- ``get_regs``: ethtool function returning the Ethernet switch internal
  register contents. This function might require user-land code in ethtool to
  pretty-print register values and registers.
Power management
----------------

- ``suspend``: function invoked by the DSA platform device when the system
  goes to suspend; should quiesce all Ethernet switch activities, but keep
  ports participating in Wake-on-LAN active, as well as additional wake-up
  logic if supported.

- ``resume``: function invoked by the DSA platform device when the system
  resumes; should resume all Ethernet switch activities and re-configure the
  switch to be in a fully active state.

- ``port_enable``: function invoked by the DSA user network device ndo_open
  function when a port is administratively brought up; this function should
  fully enable a given switch port. DSA takes care of marking the port with
  ``BR_STATE_BLOCKING`` if the port is a bridge member, or
  ``BR_STATE_FORWARDING`` if it was not, and propagating these changes down to
  the hardware.

- ``port_disable``: function invoked by the DSA user network device ndo_close
  function when a port is administratively brought down; this function should
  fully disable a given switch port. DSA takes care of marking the port with
  ``BR_STATE_DISABLED`` and propagating changes to the hardware if this port
  is disabled while being a bridge member.
Address databases
-----------------

Switching hardware is expected to have a table for FDB entries; however, not
all of them are active at the same time. An address database is the subset
(partition) of FDB entries that is active (can be matched by address learning
on RX, or FDB lookup on TX) depending on the state of the port. An address
database may occasionally be called a "FID" (Filtering ID) in this document,
although the underlying implementation may choose whatever is available to the
hardware.

For example, all ports that belong to a VLAN-unaware bridge (which is
*currently* VLAN-unaware) are expected to learn source addresses in the
database associated by the driver with that bridge (and not with other
VLAN-unaware bridges). During forwarding and FDB lookup, a packet received on
a VLAN-unaware bridge port should be able to find a VLAN-unaware FDB entry
having the same MAC DA as the packet, which is present on another port member
of the same bridge. At the same time, the FDB lookup process must be able to
not find an FDB entry having the same MAC DA as the packet, if that entry
points towards a port which is a member of a different VLAN-unaware bridge
(and is therefore associated with a different address database).
Similarly, each VLAN of each offloaded VLAN-aware bridge should have an
associated address database, which is shared by all ports which are members of
that VLAN, but not shared by ports belonging to different bridges that are
members of the same VID.

In this context, a VLAN-unaware database means that all packets are expected
to match on it irrespective of VLAN ID (only MAC address lookup), whereas a
VLAN-aware database means that packets are supposed to match based on the VLAN
ID from the classified 802.1Q header (or the pvid if untagged).

At the bridge layer, VLAN-unaware FDB entries have the special VID value of 0,
whereas VLAN-aware FDB entries have non-zero VID values. Note that a
VLAN-unaware bridge may have VLAN-aware (non-zero VID) FDB entries, and a
VLAN-aware bridge may have VLAN-unaware FDB entries. As in hardware, the
software bridge keeps separate address databases, and offloads to hardware the
FDB entries belonging to these databases, through switchdev, asynchronously
relative to the moment when the databases become active or inactive.
When a user port operates in standalone mode, its driver should configure it
to use a separate database called a port private database. This is different
from the databases described above, and should impede operation as a
standalone port (packet in, packet out to the CPU port) as little as possible.
For example, on ingress, it should not attempt to learn the MAC SA of ingress
traffic, since learning is a bridging layer service and this is a standalone
port; learned addresses would only consume table space uselessly. With no
address learning, the port private database should be empty in a naive
implementation, and in this case, all received packets should be trivially
flooded to the CPU port.

DSA (cascade) and CPU ports are also called "shared" ports because they
service multiple address databases, and the database that a packet should be
associated to is usually embedded in the DSA tag. This means that the CPU
port may simultaneously transport packets coming from a standalone port
(which were classified by hardware in one address database), and from a
bridge port (which were classified to a different address database).

Switch drivers which satisfy certain criteria are able to optimize the naive
configuration by removing the CPU port from the flooding domain of the
switch, and just programming the hardware with FDB entries pointing towards
the CPU port for which it is known that software is interested in those MAC
addresses. Packets which do not match a known FDB entry will not be delivered
to the CPU, which saves the CPU cycles required for creating an skb just to
drop it.
DSA is able to perform host address filtering for the following kinds of
addresses:

- Primary unicast MAC addresses of ports (``dev->dev_addr``). These are
  associated with the port private database of the respective user port,
  and the driver is notified to install them through ``port_fdb_add`` towards
  the CPU port.

- Secondary unicast and multicast MAC addresses of ports (addresses added
  through ``dev_uc_add()`` and ``dev_mc_add()``). These are also associated
  with the port private database of the respective user port.

- Local/permanent bridge FDB entries (``BR_FDB_LOCAL``). These are the MAC
  addresses of the bridge ports, for which packets must be terminated locally
  and not forwarded. They are associated with the address database for that
  bridge.

- Static bridge FDB entries installed towards foreign (non-DSA) interfaces
  present in the same bridge as some DSA switch ports. These are also
  associated with the address database for that bridge.

- Dynamically learned FDB entries on foreign interfaces present in the same
  bridge as some DSA switch ports, only if ``ds->assisted_learning_on_cpu_port``
  is set to true by the driver. These are associated with the address database
  for that bridge.
For various operations detailed below, DSA provides a ``dsa_db`` structure
which can be of the following types:

- ``DSA_DB_PORT``: the FDB (or MDB) entry to be installed or deleted belongs
  to the port private database of user port ``db->dp``.
- ``DSA_DB_BRIDGE``: the entry belongs to one of the address databases of
  bridge ``db->bridge``. Separation between the VLAN-unaware database and the
  per-VID databases of this bridge is expected to be done by the driver.
- ``DSA_DB_LAG``: the entry belongs to the address database of LAG
  ``db->lag``. Note: ``DSA_DB_LAG`` is currently unused and may be removed in
  the future.
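A driver typically turns the ``dsa_db`` type into a hardware FID. The sketch
below shows one hypothetical allocation scheme (port private databases first,
then one FID per one-based bridge/LAG ID); the enum, struct and the limit of
16 bridges are illustrative, simplified from the kernel types, and not taken
from any real driver:

```c
#include <assert.h>

enum example_db_type { EX_DB_PORT, EX_DB_BRIDGE, EX_DB_LAG };

#define EX_NUM_PORTS	5	/* illustrative port count */
#define EX_MAX_BRIDGES	16	/* illustrative bridge limit */

/* Simplified stand-in for struct dsa_db. */
struct example_db {
	enum example_db_type type;
	int port;		/* valid for EX_DB_PORT */
	int bridge_num;		/* one-based, valid for EX_DB_BRIDGE */
	int lag_id;		/* one-based, valid for EX_DB_LAG */
};

static int example_db_to_fid(const struct example_db *db)
{
	switch (db->type) {
	case EX_DB_PORT:	/* one private FID per standalone port */
		return db->port;
	case EX_DB_BRIDGE:	/* FIDs above the per-port range */
		return EX_NUM_PORTS + db->bridge_num - 1;
	case EX_DB_LAG:		/* FIDs above the bridge range */
		return EX_NUM_PORTS + EX_MAX_BRIDGES + db->lag_id - 1;
	}
	return -1;
}
```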
The drivers which act upon the ``dsa_db`` argument in ``port_fdb_add``,
``port_mdb_add`` etc. should declare ``ds->fdb_isolation`` as true.

DSA associates each offloaded bridge and each offloaded LAG with a one-based
ID (``struct dsa_bridge :: num``, ``struct dsa_lag :: id``) for the purposes
of refcounting addresses on shared ports. Drivers may piggyback on DSA's
numbering scheme (the ID is readable through ``db->bridge.num`` and
``db->lag.id``) or may implement their own.

Only the drivers which declare support for FDB isolation are notified of FDB
entries on the CPU port belonging to ``DSA_DB_PORT`` databases.
For compatibility/legacy reasons, ``DSA_DB_BRIDGE`` addresses are notified to
drivers even if they do not support FDB isolation. However, ``db->bridge.num``
and ``db->lag.id`` are always set to 0 in that case (to denote the lack of
isolation, for refcounting purposes).
Note that it is not mandatory for a switch driver to implement physically
separate address databases for each standalone user port. Since FDB entries
in the port private databases will always point to the CPU port, there is no
risk of incorrect forwarding decisions. In this case, all standalone ports
may share the same database, but the reference counting of host-filtered
addresses (not deleting the FDB entry for a port's MAC address if it's still
in use by another port) becomes the responsibility of the driver, because DSA
is unaware that the port databases are in fact shared. This can be achieved
by calling ``dsa_fdb_present_in_other_db()`` and
``dsa_mdb_present_in_other_db()``. The downside is that the RX filtering
lists of each user port are in fact shared, which means that user port A may
accept a packet with a MAC DA it shouldn't have, only because that MAC
address was in the RX filtering list of user port B. These packets will still
be dropped in software, however.
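The refcounting obligation described above can be sketched as follows: when
all port private databases share one hardware table, the entry for an address
may only be removed once no port references it anymore (the decision that
``dsa_fdb_present_in_other_db()`` helps a real driver make). The table size,
lookup and naming below are illustrative:

```c
#include <assert.h>
#include <string.h>

#define TABLE_SIZE 8	/* illustrative shared hardware FDB size */

struct shared_fdb_entry {
	unsigned char addr[6];
	int refcount;	/* how many ports still use this address */
};

static struct shared_fdb_entry table[TABLE_SIZE];

static struct shared_fdb_entry *shared_fdb_find(const unsigned char *addr)
{
	for (int i = 0; i < TABLE_SIZE; i++)
		if (table[i].refcount && !memcmp(table[i].addr, addr, 6))
			return &table[i];
	return NULL;
}

static void shared_fdb_add(const unsigned char *addr)
{
	struct shared_fdb_entry *e = shared_fdb_find(addr);

	if (e) {		/* already installed by another port */
		e->refcount++;
		return;
	}
	for (int i = 0; i < TABLE_SIZE; i++) {
		if (!table[i].refcount) {
			memcpy(table[i].addr, addr, 6);
			table[i].refcount = 1;
			return;
		}
	}
}

static void shared_fdb_del(const unsigned char *addr)
{
	struct shared_fdb_entry *e = shared_fdb_find(addr);

	/* only remove the hardware entry on the last deletion */
	if (e && --e->refcount == 0)
		memset(e->addr, 0, 6);
}
```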
Bridge layer
------------

Offloading the bridge forwarding plane is optional and handled by the methods
below. They may be absent, return -EOPNOTSUPP, or ``ds->max_num_bridges`` may
be non-zero and exceeded, and in this case, joining a bridge port is still
possible, but the packet forwarding will take place in software, and the
ports under a software bridge must remain configured in the same way as for
standalone operation, i.e. have all bridging service functions (address
learning etc) disabled, and send all received packets to the CPU port only.

Concretely, a port starts offloading the forwarding plane of a bridge once it
returns success to the ``port_bridge_join`` method, and stops doing so after
``port_bridge_leave`` has been called. Offloading the bridge means
autonomously learning FDB entries in accordance with the software bridge
port's state, and autonomously forwarding (or flooding) received packets
without CPU intervention. This is optional even when offloading a bridge
port. Tagging protocol drivers are expected to call
``dsa_default_offload_fwd_mark(skb)`` for packets which have already been
autonomously forwarded in the forwarding domain of the ingress switch port.
DSA, through ``dsa_port_devlink_setup()``, considers all switch ports part of
the same tree ID to be part of the same bridge forwarding domain (capable of
autonomous forwarding to each other).
Offloading the TX forwarding process of a bridge is a distinct concept from
simply offloading its forwarding plane, and refers to the ability of certain
driver and tag protocol combinations to transmit a single skb coming from the
bridge device's transmit function to potentially multiple egress ports (and
thereby avoid its cloning in software).

Packets for which the bridge requests this behavior are called data plane
packets and have ``skb->offload_fwd_mark`` set to true in the tag protocol
driver's ``xmit`` function. Data plane packets are subject to FDB lookup,
hardware learning on the CPU port, and do not override the port STP state.
Additionally, replication of data plane packets (multicast, flooding) is
handled in hardware and the bridge driver will transmit a single skb for each
packet that may or may not need replication.

When the TX forwarding offload is enabled, the tag protocol driver is
responsible for injecting packets into the data plane of the hardware towards
the correct bridging domain (FID) that the port is a part of. The bridge may
be VLAN-unaware, and in this case the FID must be equal to the FID used by
the driver for its VLAN-unaware address database associated with that bridge.
Alternatively, the bridge may be VLAN-aware, and in that case, it is
guaranteed that the packet is also VLAN-tagged with the VLAN ID that the
bridge processed this packet in. It is the responsibility of the hardware to
untag the VID on the egress-untagged ports, or keep the tag on the
egress-tagged ones.
- ``port_bridge_join``: bridge layer function invoked when a given switch
  port is added to a bridge; this function should do what's necessary at the
  switch level to permit the joining port to be added to the relevant logical
  domain for it to ingress/egress traffic with other members of the bridge.
  By setting the ``tx_fwd_offload`` argument to true, the TX forwarding
  process of this bridge is also offloaded.

- ``port_bridge_leave``: bridge layer function invoked when a given switch
  port is removed from a bridge; this function should do what's necessary at
  the switch level to deny the leaving port from ingress/egress traffic from
  the remaining bridge members.

- ``port_stp_state_set``: bridge layer function invoked when a given switch
  port STP state is computed by the bridge layer and should be propagated to
  switch hardware to forward/block/learn traffic.

- ``port_bridge_flags``: bridge layer function invoked when a port must
  configure its settings for e.g. flooding of unknown traffic or source
  address learning. The switch driver is responsible for initial setup of the
  standalone ports with address learning disabled and egress flooding of all
  types of traffic; then the DSA core notifies of any change to the bridge
  port flags when the port joins and leaves a bridge. DSA does not currently
  manage the bridge port flags for the CPU port. The assumption is that
  address learning should be statically enabled (if supported by the
  hardware) on the CPU port, and flooding towards the CPU port should also be
  enabled, due to a lack of an explicit address filtering mechanism in the
  DSA core.

- ``port_fast_age``: bridge layer function invoked when flushing the
  dynamically learned FDB entries on the port is necessary. This is called
  when transitioning from an STP state where learning should take place to an
  STP state where it shouldn't, or when leaving a bridge, or when address
  learning is turned off via ``port_bridge_flags``.
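What ``port_stp_state_set`` typically programs can be sketched as a mapping
from the bridge port state to a (learning, forwarding) pair that the driver
then translates into port control register bits. The enum values mirror the
kernel's ``BR_STATE_*`` constants; the struct and function names are
illustrative:

```c
#include <assert.h>
#include <stdbool.h>

/* Same order and values as the kernel's BR_STATE_* constants. */
enum ex_stp_state {
	EX_STATE_DISABLED,
	EX_STATE_LISTENING,
	EX_STATE_LEARNING,
	EX_STATE_FORWARDING,
	EX_STATE_BLOCKING,
};

struct ex_port_ctrl {
	bool learning;
	bool forwarding;
};

static struct ex_port_ctrl ex_stp_to_ctrl(enum ex_stp_state state)
{
	struct ex_port_ctrl ctrl = { false, false };

	switch (state) {
	case EX_STATE_LEARNING:
		ctrl.learning = true;	/* learn, but don't forward yet */
		break;
	case EX_STATE_FORWARDING:
		ctrl.learning = true;	/* learn and forward */
		ctrl.forwarding = true;
		break;
	default:			/* disabled/listening/blocking */
		break;
	}
	return ctrl;
}
```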
Bridge VLAN filtering
---------------------

- ``port_vlan_filtering``: bridge layer function invoked when the bridge gets
  configured for turning on or off VLAN filtering. If nothing specific needs
  to be done at the hardware level, this callback does not need to be
  implemented. When VLAN filtering is turned on, the hardware must be
  programmed to reject 802.1Q frames which have VLAN IDs outside of the
  programmed allowed VLAN ID map/rules. If there is no PVID programmed into
  the switch port, untagged frames must be rejected as well. When turned off,
  the switch must accept any 802.1Q frames irrespective of their VLAN ID, and
  untagged frames are allowed.

- ``port_vlan_add``: bridge layer function invoked when a VLAN is configured
  (tagged or untagged) for the given switch port. The CPU port becomes a
  member of a VLAN only if a foreign bridge port is also a member of it (and
  forwarding needs to take place in software), or the VLAN is installed to the
  VLAN group of the bridge device itself, for termination purposes
  (``bridge vlan add dev br0 vid 100 self``). VLANs on shared ports are
  reference counted and removed when there is no user left. Drivers do not
  need to manually install a VLAN on the CPU port.
- ``port_vlan_del``: bridge layer function invoked when a VLAN is removed
  from the given switch port.

- ``port_fdb_add``: bridge layer function invoked when the bridge wants to
  install a Forwarding Database entry; the switch hardware should be
  programmed with the specified address in the specified VLAN ID in the
  forwarding database associated with this VLAN ID.

- ``port_fdb_del``: bridge layer function invoked when the bridge wants to
  remove a Forwarding Database entry; the switch hardware should be
  programmed to delete the specified MAC address from the specified VLAN ID
  if it was mapped into this port forwarding database.

- ``port_fdb_dump``: bridge bypass function invoked by ``ndo_fdb_dump`` on
  the physical DSA port interfaces. Since DSA does not attempt to keep in
  sync its hardware FDB entries with the software bridge, this method is
  implemented as a means to view the entries visible on user ports in the
  hardware database. The entries reported by this function have the ``self``
  flag in the output of the ``bridge fdb show`` command.

- ``port_mdb_add``: bridge layer function invoked when the bridge wants to
  install a multicast database entry. The switch hardware should be
  programmed with the specified address in the specified VLAN ID in the
  forwarding database associated with this VLAN ID.

- ``port_mdb_del``: bridge layer function invoked when the bridge wants to
  remove a multicast database entry; the switch hardware should be programmed
  to delete the specified MAC address from the specified VLAN ID if it was
  mapped into this port forwarding database.
Link aggregation
----------------

Link aggregation is implemented in the Linux networking stack by the bonding
and team drivers, which are modeled as virtual, stackable network interfaces.
DSA is capable of offloading a link aggregation group (LAG) to hardware that
supports the feature, and supports bridging between physical ports and LAGs,
as well as between LAGs. A bonding/team interface which holds multiple
physical ports constitutes a logical port, although DSA has no explicit
concept of a logical port at the moment. Due to this, events where a LAG
joins/leaves a bridge are treated as if all individual physical ports that
are members of that LAG join/leave the bridge. Switchdev port attributes
(VLAN filtering, STP state, etc) and objects (VLANs, MDB entries) offloaded
to a LAG as bridge port are treated similarly: DSA offloads the same
switchdev object / port attribute on all members of the LAG. Static bridge
FDB entries on a LAG are not yet supported, since the DSA driver API does not
have the concept of a logical port.

- ``port_lag_join``: function invoked when a given switch port is added to a
  LAG. The driver may return ``-EOPNOTSUPP``, and in this case, DSA will fall
  back to a software implementation where all traffic from this port is sent
  to the CPU.

- ``port_lag_leave``: function invoked when a given switch port leaves a LAG
  and returns to operation as a standalone port.

- ``port_lag_change``: function invoked when the link state of any member of
  the LAG changes, and the hashing function needs rebalancing to only make
  use of the subset of physical LAG member ports that are up.

Drivers that benefit from having an ID associated with each offloaded LAG
can optionally populate ``ds->num_lag_ids`` from the
``dsa_switch_ops::setup`` method. The LAG ID associated with a bonding/team
interface can then be retrieved by a DSA switch driver using the
``dsa_lag_id`` function.
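The rebalancing performed from ``port_lag_change`` can be sketched as a hash
that only selects LAG members whose link is up. The member count, hash input
and function names below are illustrative, not a real driver's egress
selection logic:

```c
#include <assert.h>
#include <stdbool.h>

#define LAG_MEMBERS 4	/* illustrative LAG size */

static bool link_up[LAG_MEMBERS];

/* Pick the egress member for a flow hash among members that are up;
 * returns -1 if every member of the LAG is down.
 */
static int lag_select_port(unsigned int flow_hash)
{
	int active[LAG_MEMBERS];
	int n = 0;

	for (int i = 0; i < LAG_MEMBERS; i++)
		if (link_up[i])
			active[n++] = i;
	if (!n)
		return -1;
	return active[flow_hash % n];
}
```

When a member's link drops, re-running this selection over the reduced active
set is precisely the rebalancing the callback is asked to perform.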
IEC 62439-2 (MRP)
-----------------

The Media Redundancy Protocol is a topology management protocol optimized for
fast fault recovery time for ring networks, which has some components
implemented as a function of the bridge driver. MRP uses management PDUs
(Test, Topology, LinkDown/Up, Option) sent at a multicast destination MAC
address range of 01:15:4e:00:00:0x and with an EtherType of 0x88e3.
Depending on the node's role in the ring (MRM: Media Redundancy Manager,
MRC: Media Redundancy Client, MRA: Media Redundancy Automanager), certain MRP
PDUs might need to be terminated locally and others might need to be
forwarded. An MRM might also benefit from offloading to hardware the creation
and transmission of certain MRP PDUs (Test).

Normally an MRP instance can be created on top of any network interface;
however, in the case of a device with an offloaded data path such as DSA, it
is necessary for the hardware, even if it is not MRP-aware, to be able to
extract the MRP PDUs from the fabric before the driver can proceed with the
software implementation. DSA today has no driver which is MRP-aware,
therefore it only listens for the bare minimum switchdev objects required for
the software assist to work properly. The operations are detailed below.

- ``port_mrp_add`` and ``port_mrp_del``: notifies the driver when an MRP
  instance with a certain ring ID, priority, primary port and secondary port
  is created or deleted.

- ``port_mrp_add_ring_role`` and ``port_mrp_del_ring_role``: function invoked
  when an MRP instance changes ring roles between MRM or MRC. This affects
  which MRP PDUs should be trapped to software and which should be
  autonomously forwarded.
IEC 62439-3 (HSR/PRP)
---------------------

The Parallel Redundancy Protocol (PRP) is a network redundancy protocol which
works by duplicating and sequence numbering packets through two independent
L2 networks (which are unaware of the PRP tail tags carried in the packets),
and eliminating the duplicates at the receiver. The High-availability
Seamless Redundancy (HSR) protocol is similar in concept, except all nodes
that carry the redundant traffic are aware of the fact that it is HSR-tagged
(because HSR uses a header with an EtherType of 0x892f) and are physically
connected in a ring topology. Both HSR and PRP use supervision frames for
monitoring the health of the network and for discovery of other nodes.

In Linux, both HSR and PRP are implemented in the hsr driver, which
instantiates a virtual, stackable network interface with two member ports.
The driver only implements the basic roles of DANH (Doubly Attached Node
implementing HSR) and DANP (Doubly Attached Node implementing PRP); the roles
of RedBox and QuadBox are not implemented (therefore, bridging an hsr network
interface with a physical switch port does not produce the expected result).

A driver which is capable of offloading certain functions of a DANP or DANH
should declare the corresponding netdev features as indicated by the
documentation at ``Documentation/networking/netdev-features.rst``.
Additionally, the following methods must be implemented:

- ``port_hsr_join``: function invoked when a given switch port is added to a
  DANP/DANH. The driver may return ``-EOPNOTSUPP`` and in this case, DSA will
  fall back to a software implementation where all traffic from this port is
  sent to the CPU.

- ``port_hsr_leave``: function invoked when a given switch port leaves a
  DANP/DANH and returns to normal operation as a standalone port.
TODO
====

Making SWITCHDEV and DSA converge towards a unified codebase
------------------------------------------------------------

SWITCHDEV properly takes care of abstracting the networking stack from
offload capable hardware, but does not enforce a strict switch device driver
model. On the other hand, DSA enforces a fairly strict device driver model,
and deals with most of the switch specifics. At some point we should envision
a merger between these two subsystems and get the best of both worlds.