Documentation for /proc/sys/net/*
	(c) 1999 Terrehon Bowden <terrehon@pacbell.net>
	         Bodo Bauer <bb@ricochet.net>
	(c) 2000 Jorge Nerin <comandante@zaralinux.com>
	(c) 2009 Shen Feng <shen@cn.fujitsu.com>

For general info and legal blurb, please look in README.

==============================================================

This file contains the documentation for the sysctl files in
/proc/sys/net.

The interface to the networking parts of the kernel is located in
/proc/sys/net. The following table shows all possible subdirectories. You may
see only some of them, depending on your kernel's configuration.

Table : Subdirectories in /proc/sys/net
..............................................................................
 Directory  Content             Directory   Content
 core       General parameters  appletalk   Appletalk protocol
 unix       Unix domain sockets netrom      NET/ROM
 802        E802 protocol       ax25        AX25
 ethernet   Ethernet protocol   rose        X.25 PLP layer
 ipv4       IP version 4        x25         X.25 protocol
 ipx        IPX                 token-ring  IBM token ring
 bridge     Bridging            decnet      DEC net
 ipv6       IP version 6        tipc        TIPC
..............................................................................

1. /proc/sys/net/core - Network core options
-------------------------------------------------------

bpf_jit
-------

This enables the BPF Just in Time (JIT) compiler. BPF is a flexible
and efficient infrastructure that allows executing bytecode at various
hook points. It is used in a number of Linux kernel subsystems such
as networking (e.g. XDP, tc), tracing (e.g. kprobes, uprobes, tracepoints)
and security (e.g. seccomp). LLVM has a BPF back end that can compile
restricted C into a sequence of BPF instructions. After a program has
been loaded through bpf(2) and has passed the in-kernel verifier, a JIT
will then translate these BPF proglets into native CPU instructions. There
are two flavors of JITs, the newer eBPF JIT currently supported on:
  - x86_64
  - arm64
  - ppc64 (big and little endian)
  - sparc64
  - mips64
  - s390x

And the older cBPF JIT supported on the following archs:
  - arm
  - mips
  - ppc
  - sparc

eBPF JITs are a superset of cBPF JITs, meaning the kernel will
migrate cBPF instructions into eBPF instructions and then JIT
compile them transparently. Older cBPF JITs can only translate
tcpdump filters, seccomp rules, etc., but not the eBPF programs
loaded through bpf(2).

Values :
	0 - disable the JIT (default value)
	1 - enable the JIT
	2 - enable the JIT and ask the compiler to emit traces on kernel log.
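
As a quick illustration, the JIT can be switched at runtime through either
of the two equivalent interfaces:

myhost:~# sysctl -w net.core.bpf_jit_enable=1
myhost:~# echo 1 > /proc/sys/net/core/bpf_jit_enable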

bpf_jit_harden
--------------

This enables hardening for the BPF JIT compiler. Supported are eBPF
JIT backends. Enabling hardening trades off performance, but can
mitigate JIT spraying.
Values :
	0 - disable JIT hardening (default value)
	1 - enable JIT hardening for unprivileged users only
	2 - enable JIT hardening for all users
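
For example, to enable hardening for all users (note that, as described
below for bpf_jit_kallsyms, hardened images are not exported to kallsyms):

myhost:~# sysctl -w net.core.bpf_jit_harden=2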

bpf_jit_kallsyms
----------------

When the BPF JIT compiler is enabled, compiled images are unknown
addresses to the kernel, meaning they neither show up in traces nor
in /proc/kallsyms. This enables export of these addresses, which can
be used for debugging/tracing. If bpf_jit_harden is enabled, this
feature is disabled.
Values :
	0 - disable JIT kallsyms export (default value)
	1 - enable JIT kallsyms export for privileged users only
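
A minimal debugging sketch: after enabling the export, JIT-ed images show up
in /proc/kallsyms (the exact symbol naming, e.g. bpf_prog_<tag>, may vary
between kernel versions):

myhost:~# sysctl -w net.core.bpf_jit_kallsyms=1
myhost:~# grep bpf_prog /proc/kallsyms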

bpf_jit_limit
-------------

This enforces a global limit for memory allocations to the BPF JIT
compiler in order to reject unprivileged JIT requests once it has
been surpassed. bpf_jit_limit contains the value of the global limit
in bytes.

dev_weight
----------

The maximum number of packets that the kernel can handle on a NAPI
interrupt; it is a per-CPU variable. For drivers that support LRO or
GRO_HW, a hardware aggregated packet is counted as one packet in this
context.

Default: 64

dev_weight_rx_bias
------------------

RPS (e.g. RFS, aRFS) processing competes with the registered NAPI poll function
of the driver for the per-softirq-cycle netdev_budget. This parameter influences
the proportion of the configured netdev_budget that is spent on RPS based packet
processing during RX softirq cycles. It is further meant to make the current
dev_weight adaptable for asymmetric CPU needs on the RX/TX side of the network
stack (see dev_weight_tx_bias). It is effective on a per-CPU basis. The value is
derived from dev_weight and calculated multiplicatively
(dev_weight * dev_weight_rx_bias).

Default: 1
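
As a worked example (the numbers are purely illustrative): with dev_weight = 64
and dev_weight_rx_bias = 2, up to 64 * 2 = 128 packets may be spent on RPS
based processing per RX softirq cycle on each CPU:

myhost:~# sysctl -w net.core.dev_weight_rx_bias=2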

dev_weight_tx_bias
------------------

Scales the maximum number of packets that can be processed during a TX softirq
cycle. Effective on a per-CPU basis. Allows scaling of the current dev_weight
for asymmetric net stack processing needs. Be careful to avoid making TX
softirq processing a CPU hog. The calculation is based on dev_weight
(dev_weight * dev_weight_tx_bias).

Default: 1

default_qdisc
-------------

The default queuing discipline to use for network devices. This allows
overriding the default of pfifo_fast with an alternative. Since the default
queuing discipline is created without additional parameters, it is best suited
to queuing disciplines that work well without configuration, like stochastic
fair queue (sfq), CoDel (codel) or fair queue CoDel (fq_codel). Do not use
queuing disciplines like Hierarchical Token Bucket or Deficit Round Robin
which require setting up classes and bandwidths. Note that physical multiqueue
interfaces still use mq as root qdisc, which in turn uses this default for its
leaves. Virtual devices (e.g. lo or veth) ignore this setting and instead
default to noqueue.

Default: pfifo_fast
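
For instance, to switch the default to fq_codel and check what a device is
using (eth0 is just an example name; interfaces created before the change
keep their existing qdisc until it is recreated):

myhost:~# sysctl -w net.core.default_qdisc=fq_codel
myhost:~# tc qdisc show dev eth0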

busy_read
---------

Low latency busy poll timeout for socket reads (needs CONFIG_NET_RX_BUSY_POLL).
Approximate time in us to busy loop waiting for packets on the device queue.
This sets the default value of the SO_BUSY_POLL socket option.
Can be set or overridden per socket by setting socket option SO_BUSY_POLL,
which is the preferred method of enabling. If you need to enable the feature
globally via sysctl, a value of 50 is recommended.

Will increase power usage.

Default: 0 (off)
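
Following the recommendation above, enabling the feature globally might look
like this:

myhost:~# sysctl -w net.core.busy_read=50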

busy_poll
---------

Low latency busy poll timeout for poll and select (needs CONFIG_NET_RX_BUSY_POLL).
Approximate time in us to busy loop waiting for events.
The recommended value depends on the number of sockets you poll on.
For several sockets use 50, for several hundred use 100.
For more than that you probably want to use epoll.
Note that only sockets with SO_BUSY_POLL set will be busy polled,
so you want to either selectively set SO_BUSY_POLL on those sockets or set
the net.core.busy_read sysctl globally.

Will increase power usage.

Default: 0 (off)

rmem_default
------------

The default setting of the socket receive buffer in bytes.

rmem_max
--------

The maximum receive socket buffer size in bytes.

tstamp_allow_data
-----------------

Allow processes to receive tx timestamps looped together with the original
packet contents. If disabled, transmit timestamp requests from unprivileged
processes are dropped unless socket option SOF_TIMESTAMPING_OPT_TSONLY is set.

Default: 1 (on)

wmem_default
------------

The default setting (in bytes) of the socket send buffer.

wmem_max
--------

The maximum send socket buffer size in bytes.

message_burst and message_cost
------------------------------

These parameters are used to limit the warning messages written to the kernel
log from the networking code. They enforce a rate limit to make a
denial-of-service attack impossible. A higher message_cost factor results in
fewer messages being written. message_burst controls when messages will
be dropped. The default settings limit warning messages to one every five
seconds.

warnings
--------

This sysctl is now unused.

This was used to control console messages from the networking stack that
occur because of problems on the network, like duplicate address or bad
checksums.

These messages are now emitted at KERN_DEBUG and can generally be enabled
and controlled by the dynamic_debug facility.

netdev_budget
-------------

Maximum number of packets taken from all interfaces in one polling cycle (NAPI
poll). In one polling cycle interfaces which are registered to polling are
probed in a round-robin manner. Also, a polling cycle may not exceed
netdev_budget_usecs microseconds, even if netdev_budget has not been
exhausted.

netdev_budget_usecs
-------------------

Maximum number of microseconds in one NAPI polling cycle. Polling
will exit when either netdev_budget_usecs have elapsed during the
poll cycle or the number of packets processed reaches netdev_budget.
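
One way to see whether these limits are being hit is the per-CPU time_squeeze
counter, conventionally the third column of /proc/net/softnet_stat, which
increments whenever a polling cycle ends with work left over (the column
layout of this file can differ between kernel versions):

myhost:~# cat /proc/net/softnet_stat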

netdev_max_backlog
------------------

Maximum number of packets queued on the INPUT side when the interface
receives packets faster than the kernel can process them.

netdev_rss_key
--------------

RSS (Receive Side Scaling) enabled drivers use a 40 bytes host key that is
set by the netdev_rss_key_fill() function.

Some user space might need to gather its content even if drivers do not
provide ethtool -x support yet.

myhost:~# cat /proc/sys/net/core/netdev_rss_key
84:50:f4:00:a8:15:d1:a7:e9:7f:1d:60:35:c7:47:25:42:97:74:ca:56:bb:b6:a1:d8: ... (52 bytes total)

File contains nul bytes if no driver ever called the netdev_rss_key_fill() function.

Note:
/proc/sys/net/core/netdev_rss_key contains 52 bytes of key,
but most drivers only use 40 bytes of it.

myhost:~# ethtool -x eth0
RX flow hash indirection table for eth0 with 8 RX ring(s):
    0:    0     1     2     3     4     5     6     7
RSS hash key:
84:50:f4:00:a8:15:d1:a7:e9:7f:1d:60:35:c7:47:25:42:97:74:ca:56:bb:b6:a1:d8:43:e3:c9:0c:fd:17:55:c2:3a:4d:69:ed:f1:42:89

netdev_tstamp_prequeue
----------------------

If set to 0, RX packet timestamps can be sampled after RPS processing, when
the target CPU processes packets. It might add some delay to timestamps, but
permits distributing the load over several CPUs.

If set to 1 (default), timestamps are sampled as soon as possible, before
entering backlog queues.

optmem_max
----------

Maximum ancillary buffer size allowed per socket. Ancillary data is a sequence
of struct cmsghdr structures with appended data.

fb_tunnels_only_for_init_net
----------------------------

Controls whether fallback tunnels (like tunl0, gre0, gretap0, erspan0,
sit0, ip6tnl0, ip6gre0) are automatically created when a new
network namespace is created, if the corresponding tunnel is present
in the initial network namespace.
If set to 1, these devices are not automatically created, and
user space is responsible for creating them if needed.

Default : 0 (for compatibility reasons)
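
A small sketch of the effect (the namespace name is arbitrary, and the
fallback devices only exist when the corresponding tunnel modules are
loaded):

myhost:~# sysctl -w net.core.fb_tunnels_only_for_init_net=1
myhost:~# ip netns add demo
myhost:~# ip netns exec demo ip link show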

2. /proc/sys/net/unix - Parameters for Unix domain sockets
-------------------------------------------------------

There is only one file in this directory.
max_dgram_qlen limits the maximum number of datagrams queued on a Unix domain
socket's buffer. It only applies to datagram (SOCK_DGRAM) sockets in the
PF_UNIX domain.
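
For example, inspecting and raising the limit (the value 64 is purely
illustrative):

myhost:~# sysctl net.unix.max_dgram_qlen
myhost:~# sysctl -w net.unix.max_dgram_qlen=64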

3. /proc/sys/net/ipv4 - IPV4 settings
-------------------------------------------------------
Please see: Documentation/networking/ip-sysctl.txt and ipvs-sysctl.txt for
descriptions of these entries.

4. Appletalk
-------------------------------------------------------

The /proc/sys/net/appletalk directory holds the Appletalk configuration data
when Appletalk is loaded. The configurable parameters are:

aarp-expiry-time
----------------

The amount of time we keep an ARP entry before expiring it. Used to age out
old hosts.

aarp-resolve-time
-----------------

The amount of time we will spend trying to resolve an Appletalk address.

aarp-retransmit-limit
---------------------

The number of times we will retransmit a query before giving up.

aarp-tick-time
--------------

Controls the rate at which expires are checked.

The directory /proc/net/appletalk holds the list of active Appletalk sockets
on a machine.

The fields indicate the DDP type, the local address (in network:node format),
the remote address, the size of the transmit pending queue, the size of the
received queue (bytes waiting for applications to read), the state, and the
uid of the socket.

/proc/net/atalk_iface lists all the interfaces configured for appletalk. It
shows the name of the interface, its Appletalk address, the network range on
that address (or network number for phase 1 networks), and the status of the
interface.

/proc/net/atalk_route lists each known network route. It lists the target
(network) that the route leads to, the router (may be directly connected), the
route flags, and the device the route is using.

5. IPX
-------------------------------------------------------

The IPX protocol has no tunable values in /proc/sys/net.

The IPX protocol does, however, provide /proc/net/ipx. This lists each IPX
socket giving the local and remote addresses in Novell format (that is
network:node:port). In accordance with the strange Novell tradition,
everything but the port is in hex. Not_Connected is displayed for sockets that
are not tied to a specific remote address. The Tx and Rx queue sizes indicate
the number of bytes pending for transmission and reception. The state
indicates the state the socket is in and the uid is the owning uid of the
socket.

The /proc/net/ipx_interface file lists all IPX interfaces. For each interface
it gives the network number, the node number, and indicates if the network is
the primary network. It also indicates which device it is bound to (or
Internal for internal networks) and the Frame Type if appropriate. Linux
supports 802.3, 802.2, 802.2 SNAP and DIX (Blue Book) ethernet framing for
IPX.

The /proc/net/ipx_route table holds a list of IPX routes. For each route it
gives the destination network, the router node (or Directly) and the network
address of the router (or Connected) for internal networks.

6. TIPC
-------------------------------------------------------

tipc_rmem
---------

The TIPC protocol now has a tunable for the receive memory, similar to
tcp_rmem - i.e. a vector of 3 INTEGERs: (min, default, max)

# cat /proc/sys/net/tipc/tipc_rmem
4252725 34021800 68043600

The max value is set to CONN_OVERLOAD_LIMIT, and the default and min values
are scaled (shifted) versions of that same value. Note that the min value
is currently not used in any meaningful way, but the triplet is
preserved in order to be consistent with things like tcp_rmem.

named_timeout
-------------

TIPC name table updates are distributed asynchronously in a cluster, without
any form of transaction handling. This means that different race scenarios are
possible. One such is that a name withdrawal sent out by one node and received
by another node may arrive after a second, overlapping name publication has
already been accepted from a third node, although the conflicting updates
originally may have been issued in the correct sequential order.
If named_timeout is nonzero, failed topology updates will be placed on a defer
queue until another event arrives that clears the error, or until the timeout
expires. Value is in milliseconds.
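
For example, to defer failed updates for two seconds (the value is
illustrative only):

# sysctl -w net.tipc.named_timeout=2000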