.. SPDX-License-Identifier: GPL-2.0
============================
Tips For Running KUnit Tests
============================

Using ``kunit.py run`` ("kunit tool")
=====================================
Running from any directory
--------------------------

It can be handy to create a bash function like:

.. code-block:: bash

    function run_kunit() {
      ( cd "$(git rev-parse --show-toplevel)" && ./tools/testing/kunit/kunit.py run "$@" )
    }

Early versions of ``kunit.py`` (before 5.6) didn't work unless run from
the kernel root, hence the use of a subshell and ``cd``.
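The subshell matters here: ``cd`` inside ``( ... )`` runs in a child process,
so the caller's working directory is left untouched. A self-contained sketch
(``/tmp`` is used purely for illustration):

```shell
start_dir="$(pwd)"
# cd in a subshell only affects the child process
( cd /tmp && pwd )
# back in the parent, the working directory is unchanged
[ "$(pwd)" = "$start_dir" ] && echo "caller unaffected"
```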
Running a subset of tests
-------------------------

``kunit.py run`` accepts an optional glob argument to filter tests. The format
is ``"<suite_glob>[.test_glob]"``.
Say that we wanted to run the sysctl tests, we could do so via:

.. code-block:: bash

    $ echo -e 'CONFIG_KUNIT=y\nCONFIG_KUNIT_ALL_TESTS=y' > .kunit/.kunitconfig
    $ ./tools/testing/kunit/kunit.py run 'sysctl*'
We can filter down to just the "write" tests via:

.. code-block:: bash

    $ echo -e 'CONFIG_KUNIT=y\nCONFIG_KUNIT_ALL_TESTS=y' > .kunit/.kunitconfig
    $ ./tools/testing/kunit/kunit.py run 'sysctl*.*write*'
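The filter behaves like ordinary shell globbing against ``suite.test`` names,
which can be sketched with a plain ``case`` statement (the test names below
are made up for illustration):

```shell
# 'sysctl*.*write*' should match only the first name,
# mirroring the kunit.py filter above
for name in sysctl.test_dointvec_write \
            sysctl.test_dointvec_read \
            example.example_simple_test; do
    case "$name" in
        sysctl*.*write*) echo "would run: $name" ;;
    esac
done
# -> would run: sysctl.test_dointvec_write
```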
We're paying the cost of building more tests than we need this way, but it's
easier than fiddling with ``.kunitconfig`` files or commenting out
``kunit_suite``'s.

However, if we wanted to define a set of tests in a less ad hoc way, the next
tip is useful.
Defining a set of tests
-----------------------

``kunit.py run`` (along with ``build``, and ``config``) supports a
``--kunitconfig`` flag. So if you have a set of tests that you want to run on a
regular basis (especially if they have other dependencies), you can create a
specific ``.kunitconfig`` for them.
E.g. kunit has one for its tests:

.. code-block:: bash

    $ ./tools/testing/kunit/kunit.py run --kunitconfig=lib/kunit/.kunitconfig
Alternatively, if you're following the convention of naming your
file ``.kunitconfig``, you can just pass in the dir, e.g.

.. code-block:: bash

    $ ./tools/testing/kunit/kunit.py run --kunitconfig=lib/kunit
This is a relatively new feature (5.12+) so we don't have any
conventions yet on what files should be checked in versus just
kept around locally. It's up to you and your maintainer to decide if a
config is useful enough to submit (and therefore have to maintain).
Having ``.kunitconfig`` fragments in a parent and child directory is
iffy. There's discussion about adding an "import" statement in these
files to make it possible to have a top-level config run tests from all
child directories. But that would mean ``.kunitconfig`` files are no
longer just simple ``.config`` fragments.
One alternative would be to have kunit tool recursively combine configs
automagically, but tests could theoretically depend on incompatible
options, so handling that would be tricky.
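To see why naive combining is tricky, here's a minimal sketch of detecting
such a conflict: two made-up fragments set the same (hypothetical) option to
different values.

```shell
# Two hypothetical .kunitconfig fragments (parent and child directory)
parent='CONFIG_KUNIT=y
CONFIG_FOO=y'
child='CONFIG_FOO=n
CONFIG_BAR=y'

# Naively combine them, then flag any option set to two different values
printf '%s\n%s\n' "$parent" "$child" | sort | awk -F= '
    ($1 in seen) && seen[$1] != $2 { print "conflict: " $1 }
    { seen[$1] = $2 }'
# -> conflict: CONFIG_FOO
```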
Setting kernel commandline parameters
-------------------------------------

You can use ``--kernel_args`` to pass arbitrary kernel arguments, e.g.

.. code-block:: bash

    $ ./tools/testing/kunit/kunit.py run --kernel_args=param=42 --kernel_args=param2=false
Generating code coverage reports under UML
------------------------------------------

.. note::
    TODO(brendanhiggins@google.com): There are various issues with UML and
    versions of gcc 7 and up. You're likely to run into missing ``.gcda``
    files or compile errors.
This is different from the "normal" way of getting coverage information that is
documented in Documentation/dev-tools/gcov.rst.
Instead of enabling ``CONFIG_GCOV_KERNEL=y``, we can set these options:

.. code-block:: none

    CONFIG_DEBUG_KERNEL=y
    CONFIG_DEBUG_INFO=y
    CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y
    CONFIG_GCOV=y
Putting it together into a copy-pastable sequence of commands:

.. code-block:: bash

    # Append coverage options to the current config
    $ echo -e "CONFIG_DEBUG_KERNEL=y\nCONFIG_DEBUG_INFO=y\nCONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y\nCONFIG_GCOV=y" >> .kunit/.kunitconfig
    $ ./tools/testing/kunit/kunit.py run
    # Extract the coverage information from the build dir (.kunit/)
    $ lcov -t "my_kunit_tests" -o coverage.info -c -d .kunit/

    # From here on, it's the same process as with CONFIG_GCOV_KERNEL=y
    # E.g. can generate an HTML report in a tmp dir like so:
    $ genhtml -o /tmp/coverage_html coverage.info
If your installed version of gcc doesn't work, you can tweak the steps:

.. code-block:: bash

    $ ./tools/testing/kunit/kunit.py run --make_options=CC=/usr/bin/gcc-6
    $ lcov -t "my_kunit_tests" -o coverage.info -c -d .kunit/ --gcov-tool=/usr/bin/gcov-6
Running tests manually
======================

Running tests without using ``kunit.py run`` is also an important use case.
Currently it's your only option if you want to test on architectures other
than UML.

As running the tests under UML is fairly straightforward (configure and compile
the kernel, run the ``./linux`` binary), this section will focus on testing
non-UML architectures.
Running built-in tests
----------------------

When setting tests to ``=y``, the tests will run as part of boot and print
results to dmesg in TAP format. So you just need to add your tests to your
``.config``, then build and boot your kernel as normal.

For example, if we compiled our kernel with:

.. code-block:: none

    CONFIG_KUNIT=y
    CONFIG_KUNIT_EXAMPLE_TEST=y
Then we'd see output like this in dmesg signaling the test ran and passed:

.. code-block:: none

    # example_simple_test: initializing
    ok 1 - example_simple_test
Running tests as modules
------------------------

Depending on the tests, you can build them as loadable modules.

For example, we'd change the config options from before to:

.. code-block:: none

    CONFIG_KUNIT=y
    CONFIG_KUNIT_EXAMPLE_TEST=m
Then after booting into our kernel, we can run the test via:

.. code-block:: bash

    $ modprobe kunit-example-test

This will then cause it to print TAP output to stdout.
The ``modprobe`` will *not* have a non-zero exit code if any test
failed (as of 5.13). But ``kunit.py parse`` would; see below.
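So in a script, pipe the log through ``kunit.py parse`` and check *its* exit
status rather than ``modprobe``'s. The underlying idea can be sketched with a
grep stand-in over illustrative TAP lines:

```shell
# Illustrative TAP output, standing in for dmesg after modprobe
tap='ok 1 - example_simple_test
not ok 2 - example_failing_test'

# Any "not ok" line means a failure; a real script would run
# `dmesg | ./tools/testing/kunit/kunit.py parse` instead of grep
if printf '%s\n' "$tap" | grep -q '^not ok'; then
    echo "some tests failed"
fi
# -> some tests failed
```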
You can set ``CONFIG_KUNIT=m`` as well; however, some features will not
work and thus some tests might break. Ideally tests would specify they
depend on ``KUNIT=y`` in their ``Kconfig``'s, but this is an edge case
most test authors won't think about.
As of 5.13, the only difference is that ``current->kunit_test`` will
not be set.
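For tests that genuinely need KUnit built-in, that dependency can be stated
directly in Kconfig; a sketch with a hypothetical ``FOO_KUNIT_TEST`` entry:

.. code-block:: none

    config FOO_KUNIT_TEST
        tristate "KUnit test for foo" if !KUNIT_ALL_TESTS
        depends on KUNIT=y
        default KUNIT_ALL_TESTS

The ``depends on KUNIT=y`` line (as opposed to plain ``depends on KUNIT``)
prevents the test from being enabled when KUnit itself is modular.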
Pretty-printing results
-----------------------

You can use ``kunit.py parse`` to parse dmesg for test output and print out
results in the same familiar format that ``kunit.py run`` does.

.. code-block:: bash

    $ ./tools/testing/kunit/kunit.py parse /var/log/dmesg
Retrieving per suite results
----------------------------

Regardless of how you're running your tests, you can enable
``CONFIG_KUNIT_DEBUGFS`` to expose per-suite TAP-formatted results:

.. code-block:: none

    CONFIG_KUNIT_EXAMPLE_TEST=m
    CONFIG_KUNIT_DEBUGFS=y
The results for each suite will be exposed under
``/sys/kernel/debug/kunit/<suite>/results``.
So using our example config:

.. code-block:: bash

    $ modprobe kunit-example-test > /dev/null
    $ cat /sys/kernel/debug/kunit/example/results
    ... <TAP output> ...

    # After removing the module, the corresponding files will go away
    $ modprobe -r kunit-example-test
    $ cat /sys/kernel/debug/kunit/example/results
    /sys/kernel/debug/kunit/example/results: No such file or directory
Generating code coverage reports
--------------------------------

See Documentation/dev-tools/gcov.rst for details on how to do this.

The only vaguely KUnit-specific advice here is that you probably want to build
your tests as modules. That way you can isolate the coverage of your tests from
other code executed during boot, e.g.

.. code-block:: bash

    # Reset coverage counters before running the test.
    $ echo 0 > /sys/kernel/debug/gcov/reset
    $ modprobe kunit-example-test