# SPDX-License-Identifier: GPL-2.0-only
#
# Block device driver configuration
#

menuconfig MD
	bool "Multiple devices driver support (RAID and LVM)"
	depends on BLOCK
	help
	  Support multiple physical spindles through a single logical device.
	  Required for RAID and logical volume management.

if MD

config BLK_DEV_MD
	tristate "RAID support"
	select BLOCK_HOLDER_DEPRECATED if SYSFS
	# BLOCK_LEGACY_AUTOLOAD requirement should be removed
	# after relevant mdadm enhancements - to make "names=yes"
	# the default - are widely available.
	select BLOCK_LEGACY_AUTOLOAD
	help
	  This driver lets you combine several hard disk partitions into one
	  logical block device. This can be used to simply append one
	  partition to another one or to combine several redundant hard disks
	  into a RAID1/4/5 device so as to provide protection against hard
	  disk failures. This is called "Software RAID" since the combining of
	  the partitions is done by the kernel. "Hardware RAID" means that the
	  combining is done by a dedicated controller; if you have such a
	  controller, you do not need to say Y here.

	  More information about Software RAID on Linux is contained in the
	  Software RAID mini-HOWTO, available from
	  <https://www.tldp.org/docs.html#howto>. There you will also learn
	  where to get the supporting user space utilities raidtools.

config MD_AUTODETECT
	bool "Autodetect RAID arrays during kernel boot"
	depends on BLK_DEV_MD=y
	default y
	help
	  If you say Y here, then the kernel will try to autodetect RAID
	  arrays as part of its boot process.

	  If you don't use RAID and say Y here, autodetection can add a
	  several-second delay to boot time due to the various
	  synchronisation steps involved.

config MD_LINEAR
	tristate "Linear (append) mode (deprecated)"
	depends on BLK_DEV_MD
	help
	  If you say Y here, then your multiple devices driver will be able to
	  use the so-called linear mode, i.e. it will combine the hard disk
	  partitions by simply appending one to the other.

	  To compile this as a module, choose M here: the module
	  will be called linear.

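The linear personality's offset arithmetic can be sketched as a toy model. This is illustrative only, not the kernel's md/linear code; `linear_map` and `member_sizes` are hypothetical names:

```python
# Toy model of linear (append) mode: member devices are concatenated,
# so a logical sector maps to the first member whose range contains it.
# Illustrative sketch only -- not the kernel implementation.
def linear_map(sector, member_sizes):
    """Map a logical sector to (member_index, sector_within_member)."""
    start = 0
    for i, size in enumerate(member_sizes):
        if sector < start + size:
            return i, sector - start
        start += size
    raise ValueError("sector beyond end of array")

# Two members of 100 and 50 sectors form a 150-sector logical device.
assert linear_map(0, [100, 50]) == (0, 0)
assert linear_map(120, [100, 50]) == (1, 20)
```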
config MD_RAID0
	tristate "RAID-0 (striping) mode"
	depends on BLK_DEV_MD
	help
	  If you say Y here, then your multiple devices driver will be able to
	  use the so-called raid0 mode, i.e. it will combine the hard disk
	  partitions into one logical device in such a fashion as to fill them
	  up evenly, one chunk here and one chunk there. This will increase
	  the throughput rate if the partitions reside on distinct disks.

	  Information about Software RAID on Linux is contained in the
	  Software-RAID mini-HOWTO, available from
	  <https://www.tldp.org/docs.html#howto>. There you will also
	  learn where to get the supporting user space utilities raidtools.

	  To compile this as a module, choose M here: the module
	  will be called raid0.

config MD_RAID1
	tristate "RAID-1 (mirroring) mode"
	depends on BLK_DEV_MD
	help
	  A RAID-1 set consists of several disk drives which are exact copies
	  of each other. In the event of a mirror failure, the RAID driver
	  will continue to use the operational mirrors in the set, providing
	  an error free MD (multiple device) to the higher levels of the
	  kernel. In a set with N drives, the available space is the capacity
	  of a single drive, and the set protects against a failure of (N - 1)
	  drives.

	  Information about Software RAID on Linux is contained in the
	  Software-RAID mini-HOWTO, available from
	  <https://www.tldp.org/docs.html#howto>. There you will also
	  learn where to get the supporting user space utilities raidtools.

	  If you want to use such a RAID-1 set, say Y. To compile this code
	  as a module, choose M here: the module will be called raid1.

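The RAID-1 arithmetic above is simple enough to state as code. A minimal sketch, with hypothetical function names, just restating the help text's claim:

```python
# RAID-1 per the help text: every drive holds a full copy, so usable
# space equals one drive and up to N - 1 drives may fail.
# Illustrative sketch only.
def raid1_capacity_mb(n_drives, drive_mb):
    return drive_mb  # mirroring adds redundancy, not capacity

def raid1_max_failures(n_drives):
    return n_drives - 1

assert raid1_capacity_mb(3, 500) == 500
assert raid1_max_failures(3) == 2
```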
config MD_RAID10
	tristate "RAID-10 (mirrored striping) mode"
	depends on BLK_DEV_MD
	help
	  RAID-10 provides a combination of striping (RAID-0) and
	  mirroring (RAID-1) with easier configuration and more flexible
	  layout.

	  Unlike RAID-0, but like RAID-1, RAID-10 requires all devices to
	  be the same size (or at least, only as much as the smallest device
	  will be used).

	  RAID-10 provides a variety of layouts that provide different levels
	  of redundancy and performance.

	  RAID-10 requires mdadm-1.7.0 or later, available at:

	  https://www.kernel.org/pub/linux/utils/raid/mdadm/

config MD_RAID456
	tristate "RAID-4/RAID-5/RAID-6 mode"
	depends on BLK_DEV_MD
	select RAID6_PQ
	select LIBCRC32C
	select ASYNC_MEMCPY
	select ASYNC_XOR
	select ASYNC_PQ
	select ASYNC_RAID6_RECOV
	help
	  A RAID-5 set of N drives with a capacity of C MB per drive provides
	  the capacity of C * (N - 1) MB, and protects against a failure
	  of a single drive. For a given sector (row) number, (N - 1) drives
	  contain data sectors, and one drive contains the parity protection.
	  For a RAID-4 set, the parity blocks are present on a single drive,
	  while a RAID-5 set distributes the parity across the drives in one
	  of the available parity distribution methods.

	  A RAID-6 set of N drives with a capacity of C MB per drive
	  provides the capacity of C * (N - 2) MB, and protects
	  against a failure of any two drives. For a given sector
	  (row) number, (N - 2) drives contain data sectors, and two
	  drives contain two independent redundancy syndromes. Like
	  RAID-5, RAID-6 distributes the syndromes across the drives
	  in one of the available parity distribution methods.

	  Information about Software RAID on Linux is contained in the
	  Software-RAID mini-HOWTO, available from
	  <https://www.tldp.org/docs.html#howto>. There you will also
	  learn where to get the supporting user space utilities raidtools.

	  If you want to use such a RAID-4/RAID-5/RAID-6 set, say Y. To
	  compile this code as a module, choose M here: the module
	  will be called raid456.

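The C * (N - 1) and C * (N - 2) capacity formulas above can be checked with a small sketch. Illustrative only; `raid_capacity_mb` is a hypothetical name, not a kernel or mdadm API:

```python
# Usable capacity per the help text: RAID-4/5 lose one drive's worth
# of space to parity, RAID-6 loses two to its dual syndromes.
# Illustrative sketch only.
def raid_capacity_mb(level, n_drives, drive_mb):
    parity_drives = {4: 1, 5: 1, 6: 2}[level]
    if n_drives <= parity_drives:
        raise ValueError("not enough drives for this RAID level")
    return drive_mb * (n_drives - parity_drives)

assert raid_capacity_mb(5, 4, 1000) == 3000   # C * (N - 1)
assert raid_capacity_mb(6, 4, 1000) == 2000   # C * (N - 2)
```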
config MD_MULTIPATH
	tristate "Multipath I/O support (deprecated)"
	depends on BLK_DEV_MD
	help
	  MD_MULTIPATH provides a simple multi-path personality for use
	  with the MD framework. It is not under active development. New
	  projects should consider using DM_MULTIPATH, which has more
	  features and more testing.

config MD_FAULTY
	tristate "Faulty test module for MD (deprecated)"
	depends on BLK_DEV_MD
	help
	  The "faulty" module allows for a block device that occasionally returns
	  read or write errors. It is useful for testing.

config MD_CLUSTER
	tristate "Cluster Support for MD"
	depends on BLK_DEV_MD
	depends on DLM
	default n
	help
	  Clustering support for MD devices. This enables locking and
	  synchronization across multiple systems on the cluster, so all
	  nodes in the cluster can access the MD devices simultaneously.

	  This brings the redundancy (and uptime) of RAID levels across the
	  nodes of the cluster. Currently, it can work with raid1 and raid10
	  (limited support).

endif # MD

source "drivers/md/bcache/Kconfig"

config BLK_DEV_DM_BUILTIN
	bool

config BLK_DEV_DM
	tristate "Device mapper support"
	select BLOCK_HOLDER_DEPRECATED if SYSFS
	select BLK_DEV_DM_BUILTIN
	select BLK_MQ_STACKING
	depends on DAX || DAX=n
	help
	  Device-mapper is a low level volume manager. It works by allowing
	  people to specify mappings for ranges of logical sectors. Various
	  mapping types are available; in addition, people may write their own
	  modules containing custom mappings if they wish.

	  Higher level volume managers such as LVM2 use this driver.

	  To compile this as a module, choose M here: the module will be
	  called dm-mod.

config DM_DEBUG
	bool "Device mapper debugging support"
	depends on BLK_DEV_DM
	help
	  Enable this for messages that may help debug device-mapper problems.

config DM_BUFIO
	tristate
	depends on BLK_DEV_DM
	help
	  This interface allows you to do buffered I/O on a device and acts
	  as a cache, holding recently-read blocks in memory and performing
	  delayed writes.

config DM_DEBUG_BLOCK_MANAGER_LOCKING
	bool "Block manager locking"
	depends on DM_BUFIO
	help
	  Block manager locking can catch various metadata corruption issues.

config DM_DEBUG_BLOCK_STACK_TRACING
	bool "Keep stack trace of persistent data block lock holders"
	depends on STACKTRACE_SUPPORT && DM_DEBUG_BLOCK_MANAGER_LOCKING
	select STACKTRACE
	help
	  Enable this for messages that may help debug problems with the
	  block manager locking used by thin provisioning and caching.

	  If unsure, say N.

config DM_BIO_PRISON
	tristate
	depends on BLK_DEV_DM
	help
	  Some bio locking schemes used by other device-mapper targets
	  including thin provisioning.

source "drivers/md/persistent-data/Kconfig"

config DM_UNSTRIPED
	tristate "Unstriped target"
	depends on BLK_DEV_DM
	help
	  Unstripes I/O so it is issued solely on a single drive in a HW
	  RAID0 or dm-striped target.

config DM_CRYPT
	tristate "Crypt target support"
	depends on BLK_DEV_DM
	depends on (ENCRYPTED_KEYS || ENCRYPTED_KEYS=n)
	depends on (TRUSTED_KEYS || TRUSTED_KEYS=n)
	select CRYPTO
	select CRYPTO_CBC
	select CRYPTO_ESSIV
	help
	  This device-mapper target allows you to create a device that
	  transparently encrypts the data on it. You'll need to activate
	  the ciphers you're going to use in the cryptoapi configuration.

	  For further information on dm-crypt and userspace tools see:
	  <https://gitlab.com/cryptsetup/cryptsetup/wikis/DMCrypt>

	  To compile this code as a module, choose M here: the module will
	  be called dm-crypt.

config DM_SNAPSHOT
	tristate "Snapshot target"
	depends on BLK_DEV_DM
	select DM_BUFIO
	help
	  Allow volume managers to take writable snapshots of a device.

config DM_THIN_PROVISIONING
	tristate "Thin provisioning target"
	depends on BLK_DEV_DM
	select DM_PERSISTENT_DATA
	select DM_BIO_PRISON
	help
	  Provides thin provisioning and snapshots that share a data store.

config DM_CACHE
	tristate "Cache target (EXPERIMENTAL)"
	depends on BLK_DEV_DM
	default n
	select DM_PERSISTENT_DATA
	select DM_BIO_PRISON
	help
	  dm-cache attempts to improve performance of a block device by
	  moving frequently used data to a smaller, higher performance
	  device. Different 'policy' plugins can be used to change the
	  algorithms used to select which blocks are promoted, demoted,
	  cleaned etc. It supports writeback and writethrough modes.

config DM_CACHE_SMQ
	tristate "Stochastic MQ Cache Policy (EXPERIMENTAL)"
	depends on DM_CACHE
	default y
	help
	  A cache policy that uses a multiqueue ordered by recent hits
	  to select which blocks should be promoted and demoted.
	  This is meant to be a general purpose policy. It prioritises
	  reads over writes. This SMQ policy (vs MQ) offers the promise
	  of less memory utilization, improved performance and increased
	  adaptability in the face of changing workloads.

config DM_WRITECACHE
	tristate "Writecache target"
	depends on BLK_DEV_DM
	help
	  The writecache target caches writes on persistent memory or SSD.
	  It is intended for databases or other programs that need extremely
	  low commit latency.

	  The writecache target doesn't cache reads because reads are supposed
	  to be cached in standard RAM.

config DM_EBS
	tristate "Emulated block size target (EXPERIMENTAL)"
	depends on BLK_DEV_DM && !HIGHMEM
	select DM_BUFIO
	help
	  dm-ebs emulates smaller logical block size on backing devices
	  with larger ones (e.g. 512 byte sectors on 4K native disks).

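The sector-to-backing-block arithmetic behind that emulation can be sketched as a toy model. Illustrative only, not the dm-ebs implementation; `ebs_map` is a hypothetical name:

```python
# Toy model of block-size emulation: a 512-byte logical sector maps
# to a 4096-byte backing block plus a byte offset within it, which is
# why sub-block writes imply read-modify-write on the backing device.
# Illustrative sketch only -- not the dm-ebs code.
LOGICAL, BACKING = 512, 4096

def ebs_map(logical_sector):
    byte = logical_sector * LOGICAL
    return byte // BACKING, byte % BACKING  # (backing block, byte offset)

assert ebs_map(0) == (0, 0)
assert ebs_map(7) == (0, 3584)   # last 512-byte slice of backing block 0
assert ebs_map(8) == (1, 0)      # first slice of backing block 1
```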
config DM_ERA
	tristate "Era target (EXPERIMENTAL)"
	depends on BLK_DEV_DM
	default n
	select DM_PERSISTENT_DATA
	select DM_BIO_PRISON
	help
	  dm-era tracks which parts of a block device are written to
	  over time. Useful for maintaining cache coherency when using
	  vendor snapshots.

config DM_CLONE
	tristate "Clone target (EXPERIMENTAL)"
	depends on BLK_DEV_DM
	default n
	select DM_PERSISTENT_DATA
	help
	  dm-clone produces a one-to-one copy of an existing, read-only source
	  device into a writable destination device. The cloned device is
	  visible/mountable immediately and the copy of the source device to the
	  destination device happens in the background, in parallel with user
	  I/O.

	  If unsure, say N.

config DM_MIRROR
	tristate "Mirror target"
	depends on BLK_DEV_DM
	help
	  Allow volume managers to mirror logical volumes, also
	  needed for live data migration tools such as 'pvmove'.

config DM_LOG_USERSPACE
	tristate "Mirror userspace logging"
	depends on DM_MIRROR && NET
	select CONNECTOR
	help
	  The userspace logging module provides a mechanism for
	  relaying the dm-dirty-log API to userspace. Log designs
	  which are more suited to userspace implementation (e.g.
	  shared storage logs) or experimental logs can be implemented
	  by leveraging this framework.

config DM_RAID
	tristate "RAID 1/4/5/6/10 target"
	depends on BLK_DEV_DM
	select MD_RAID0
	select MD_RAID1
	select MD_RAID10
	select MD_RAID456
	select BLK_DEV_MD
	help
	  A dm target that supports RAID1, RAID10, RAID4, RAID5 and RAID6 mappings.

	  A RAID-5 set of N drives with a capacity of C MB per drive provides
	  the capacity of C * (N - 1) MB, and protects against a failure
	  of a single drive. For a given sector (row) number, (N - 1) drives
	  contain data sectors, and one drive contains the parity protection.
	  For a RAID-4 set, the parity blocks are present on a single drive,
	  while a RAID-5 set distributes the parity across the drives in one
	  of the available parity distribution methods.

	  A RAID-6 set of N drives with a capacity of C MB per drive
	  provides the capacity of C * (N - 2) MB, and protects
	  against a failure of any two drives. For a given sector
	  (row) number, (N - 2) drives contain data sectors, and two
	  drives contain two independent redundancy syndromes. Like
	  RAID-5, RAID-6 distributes the syndromes across the drives
	  in one of the available parity distribution methods.

config DM_ZERO
	tristate "Zero target"
	depends on BLK_DEV_DM
	help
	  A target that discards writes, and returns all zeroes for
	  reads. Useful in some recovery situations.

config DM_MULTIPATH
	tristate "Multipath target"
	depends on BLK_DEV_DM
	# nasty syntax but means make DM_MULTIPATH independent
	# of SCSI_DH if the latter isn't defined but if
	# it is, DM_MULTIPATH must depend on it. We get a build
	# error if SCSI_DH=m and DM_MULTIPATH=y
	depends on !SCSI_DH || SCSI
	help
	  Allow volume managers to support multipath hardware.

config DM_MULTIPATH_QL
	tristate "I/O Path Selector based on the number of in-flight I/Os"
	depends on DM_MULTIPATH
	help
	  This path selector is a dynamic load balancer which selects
	  the path with the least number of in-flight I/Os.

	  If unsure, say N.

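The queue-length policy reduces to picking the minimum of a per-path counter. A toy sketch of the selection rule only, with hypothetical names, not the kernel's dm-queue-length code:

```python
# Toy queue-length path selector: choose the path with the fewest
# outstanding I/Os. Illustrative sketch only.
def select_path(in_flight):
    """in_flight: dict mapping path name -> number of in-flight I/Os."""
    return min(in_flight, key=in_flight.get)

assert select_path({"sda": 4, "sdb": 1, "sdc": 3}) == "sdb"
```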
config DM_MULTIPATH_ST
	tristate "I/O Path Selector based on the service time"
	depends on DM_MULTIPATH
	help
	  This path selector is a dynamic load balancer which selects
	  the path expected to complete the incoming I/O in the shortest
	  time.

	  If unsure, say N.

config DM_MULTIPATH_HST
	tristate "I/O Path Selector based on historical service time"
	depends on DM_MULTIPATH
	help
	  This path selector is a dynamic load balancer which selects
	  the path expected to complete the incoming I/O in the shortest
	  time by comparing estimated service time (based on historical
	  service time).

	  If unsure, say N.

config DM_MULTIPATH_IOA
	tristate "I/O Path Selector based on CPU submission"
	depends on DM_MULTIPATH
	help
	  This path selector selects the path based on the CPU the IO is
	  executed on and the CPU to path mapping setup at path addition time.

	  If unsure, say N.

config DM_DELAY
	tristate "I/O delaying target"
	depends on BLK_DEV_DM
	help
	  A target that delays reads and/or writes and can send
	  them to different devices. Useful for testing.

config DM_DUST
	tristate "Bad sector simulation target"
	depends on BLK_DEV_DM
	help
	  A target that simulates bad sector behavior.
	  Useful for testing.

config DM_INIT
	bool "DM \"dm-mod.create=\" parameter support"
	depends on BLK_DEV_DM=y
	help
	  Enable "dm-mod.create=" parameter to create mapped devices at init time.
	  This option is useful to allow mounting rootfs without requiring an
	  initramfs.

	  See Documentation/admin-guide/device-mapper/dm-init.rst for dm-mod.create="..."
	  format.

config DM_UEVENT
	bool "DM uevents"
	depends on BLK_DEV_DM
	help
	  Generate udev events for DM events.

config DM_FLAKEY
	tristate "Flakey target"
	depends on BLK_DEV_DM
	help
	  A target that intermittently fails I/O for debugging purposes.

config DM_VERITY
	tristate "Verity target support"
	depends on BLK_DEV_DM
	select CRYPTO
	select CRYPTO_HASH
	select DM_BUFIO
	help
	  This device-mapper target creates a read-only device that
	  transparently validates the data on one underlying device against
	  a pre-generated tree of cryptographic checksums stored on a second
	  device.

	  You'll need to activate the digests you're going to use in the
	  cryptoapi configuration.

	  To compile this code as a module, choose M here: the module will
	  be called dm-verity.

config DM_VERITY_VERIFY_ROOTHASH_SIG
	def_bool n
	bool "Verity data device root hash signature verification support"
	depends on DM_VERITY
	select SYSTEM_DATA_VERIFICATION
	help
	  Add ability for dm-verity device to be validated if the
	  pre-generated tree of cryptographic checksums passed has a pkcs#7
	  signature file that can validate the roothash of the tree.

	  By default, rely on the builtin trusted keyring.

	  If unsure, say N.

config DM_VERITY_VERIFY_ROOTHASH_SIG_SECONDARY_KEYRING
	bool "Verity data device root hash signature verification with secondary keyring"
	depends on DM_VERITY_VERIFY_ROOTHASH_SIG
	depends on SECONDARY_TRUSTED_KEYRING
	help
	  Rely on the secondary trusted keyring to verify dm-verity signatures.

	  If unsure, say N.

config DM_VERITY_FEC
	bool "Verity forward error correction support"
	depends on DM_VERITY
	select REED_SOLOMON
	select REED_SOLOMON_DEC8
	help
	  Add forward error correction support to dm-verity. This option
	  makes it possible to use pre-generated error correction data to
	  recover from corrupted blocks.

	  If unsure, say N.

config DM_SWITCH
	tristate "Switch target support (EXPERIMENTAL)"
	depends on BLK_DEV_DM
	help
	  This device-mapper target creates a device that supports an arbitrary
	  mapping of fixed-size regions of I/O across a fixed set of paths.
	  The path used for any specific region can be switched dynamically
	  by sending the target a message.

	  To compile this code as a module, choose M here: the module will
	  be called dm-switch.

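The region-to-path idea behind the switch target can be sketched as a lookup table that a runtime message may rewrite. A toy model with hypothetical names, not the dm-switch implementation:

```python
# Toy model of the switch target: fixed-size regions map to paths
# through a table that can be updated at runtime, as a target
# "message" would do. Illustrative sketch only.
REGION_SECTORS = 1024
table = [0, 0, 1, 1]          # region index -> path index

def path_for_sector(sector):
    return table[(sector // REGION_SECTORS) % len(table)]

assert path_for_sector(0) == 0
assert path_for_sector(2048) == 1
table[0] = 1                  # dynamic remap of region 0
assert path_for_sector(0) == 1
```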
config DM_LOG_WRITES
	tristate "Log writes target support"
	depends on BLK_DEV_DM
	help
	  This device-mapper target takes two devices, one device to use
	  normally, one to log all write operations done to the first device.
	  This is for use by file system developers wishing to verify that
	  their fs is writing a consistent file system at all times by allowing
	  them to replay the log in a variety of ways and to check the
	  contents.

	  To compile this code as a module, choose M here: the module will
	  be called dm-log-writes.

config DM_INTEGRITY
	tristate "Integrity target support"
	depends on BLK_DEV_DM
	select BLK_DEV_INTEGRITY
	select DM_BUFIO
	select CRYPTO
	select CRYPTO_SKCIPHER
	select ASYNC_XOR
	select DM_AUDIT if AUDIT
	help
	  This device-mapper target emulates a block device that has
	  additional per-sector tags that can be used for storing
	  integrity information.

	  This integrity target is used with the dm-crypt target to
	  provide authenticated disk encryption or it can be used
	  standalone.

	  To compile this code as a module, choose M here: the module will
	  be called dm-integrity.

config DM_ZONED
	tristate "Drive-managed zoned block device target support"
	depends on BLK_DEV_DM
	depends on BLK_DEV_ZONED
	select CRC32
	help
	  This device-mapper target takes a host-managed or host-aware zoned
	  block device and exposes most of its capacity as a regular block
	  device (drive-managed zoned block device) without any write
	  constraints. This is mainly intended for use with file systems that
	  do not natively support zoned block devices but still want to
	  benefit from the increased capacity offered by SMR disks. Other uses
	  by applications using raw block devices (for example object stores)
	  are also possible.

	  To compile this code as a module, choose M here: the module will
	  be called dm-zoned.

config DM_AUDIT
	bool "DM audit events"
	depends on BLK_DEV_DM
	depends on AUDIT
	help
	  Generate audit events for device-mapper.

	  Enables audit logging of several security relevant events in the
	  particular device-mapper targets, especially the integrity target.