.. SPDX-License-Identifier: GPL-2.0

============================
Tips For Running KUnit Tests
============================

Using ``kunit.py run`` ("kunit tool")
=====================================

Running from any directory
--------------------------

It can be handy to create a bash function like:

.. code-block:: bash

    function run_kunit() {
      ( cd "$(git rev-parse --show-toplevel)" && ./tools/testing/kunit/kunit.py run "$@" )
    }
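
With that defined in your shell config, you can invoke the tool from any
directory, e.g. (``'example'`` here is just a placeholder suite glob):

.. code-block:: bash

    # Run only suites matching 'example'; any glob accepted by kunit.py run works.
    $ run_kunit 'example'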

.. note::
    Early versions of ``kunit.py`` (before 5.6) didn't work unless run from
    the kernel root, hence the use of a subshell and ``cd``.

Running a subset of tests
-------------------------

``kunit.py run`` accepts an optional glob argument to filter tests. The format
is ``"<suite_glob>[.test_glob]"``.

Say we wanted to run the sysctl tests; we could do so via:

.. code-block:: bash

    $ echo -e 'CONFIG_KUNIT=y\nCONFIG_KUNIT_ALL_TESTS=y' > .kunit/.kunitconfig
    $ ./tools/testing/kunit/kunit.py run 'sysctl*'

We can filter down to just the "write" tests via:

.. code-block:: bash

    $ echo -e 'CONFIG_KUNIT=y\nCONFIG_KUNIT_ALL_TESTS=y' > .kunit/.kunitconfig
    $ ./tools/testing/kunit/kunit.py run 'sysctl*.*write*'

We're paying the cost of building more tests than we need this way, but it's
easier than fiddling with ``.kunitconfig`` files or commenting out
``kunit_suite``'s.

However, if we wanted to define a set of tests in a less ad hoc way, the next
tip is useful.

Defining a set of tests
-----------------------

``kunit.py run`` (along with ``build``, and ``config``) supports a
``--kunitconfig`` flag. So if you have a set of tests that you want to run on a
regular basis (especially if they have other dependencies), you can create a
specific ``.kunitconfig`` for them.
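
For instance, a minimal fragment for a hypothetical ``drivers/foo`` subsystem
might look like the following (``CONFIG_FOO`` and ``CONFIG_FOO_KUNIT_TEST``
are placeholder symbols, not real kernel options):

.. code-block:: none

    # drivers/foo/.kunitconfig (hypothetical example)
    CONFIG_KUNIT=y
    CONFIG_FOO=y
    CONFIG_FOO_KUNIT_TEST=y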

E.g. kunit has one for its tests:

.. code-block:: bash

    $ ./tools/testing/kunit/kunit.py run --kunitconfig=lib/kunit/.kunitconfig

Alternatively, if you're following the convention of naming your
file ``.kunitconfig``, you can just pass in the dir, e.g.

.. code-block:: bash

    $ ./tools/testing/kunit/kunit.py run --kunitconfig=lib/kunit

.. note::
    This is a relatively new feature (5.12+) so we don't have any
    conventions yet about which files should be checked in versus just
    kept around locally. It's up to you and your maintainer to decide if a
    config is useful enough to submit (and therefore have to maintain).

.. note::
    Having ``.kunitconfig`` fragments in a parent and child directory is
    iffy. There's discussion about adding an "import" statement in these
    files to make it possible to have a top-level config run tests from all
    child directories. But that would mean ``.kunitconfig`` files are no
    longer just simple .config fragments.

    One alternative would be to have kunit tool recursively combine configs
    automagically, but tests could theoretically depend on incompatible
    options, so handling that would be tricky.

Setting kernel commandline parameters
-------------------------------------

You can use ``--kernel_args`` to pass arbitrary kernel arguments, e.g.

.. code-block:: bash

    $ ./tools/testing/kunit/kunit.py run --kernel_args=param=42 --kernel_args=param2=false

Generating code coverage reports under UML
-------------------------------------------

.. note::
    TODO(brendanhiggins@google.com): There are various issues with UML and
    versions of gcc 7 and up. You're likely to run into missing ``.gcda``
    files or compile errors.

This is different from the "normal" way of getting coverage information that is
documented in Documentation/dev-tools/gcov.rst.

Instead of enabling ``CONFIG_GCOV_KERNEL=y``, we can set these options:

.. code-block:: none

    CONFIG_DEBUG_KERNEL=y
    CONFIG_DEBUG_INFO=y
    CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y
    CONFIG_GCOV=y

Putting it together into a copy-pastable sequence of commands:

.. code-block:: bash

    # Append coverage options to the current config
    $ ./tools/testing/kunit/kunit.py run --kunitconfig=.kunit/ --kunitconfig=tools/testing/kunit/configs/coverage_uml.config

    # Extract the coverage information from the build dir (.kunit/)
    $ lcov -t "my_kunit_tests" -o coverage.info -c -d .kunit/

    # From here on, it's the same process as with CONFIG_GCOV_KERNEL=y
    # E.g. can generate an HTML report in a tmp dir like so:
    $ genhtml -o /tmp/coverage_html coverage.info

If your installed version of gcc doesn't work, you can tweak the steps:

.. code-block:: bash

    $ ./tools/testing/kunit/kunit.py run --make_options=CC=/usr/bin/gcc-6
    $ lcov -t "my_kunit_tests" -o coverage.info -c -d .kunit/ --gcov-tool=/usr/bin/gcov-6

Running tests manually
======================

Running tests without using ``kunit.py run`` is also an important use case.
Currently it's your only option if you want to test on architectures other than
UML.

As running the tests under UML is fairly straightforward (configure and compile
the kernel, run the ``./linux`` binary), this section will focus on testing
non-UML architectures.

Running built-in tests
----------------------

When setting tests to ``=y``, the tests will run as part of boot and print
results to dmesg in TAP format. So you just need to add your tests to your
``.config``, build and boot your kernel as normal.

So if we compiled our kernel with:

.. code-block:: none

    CONFIG_KUNIT=y
    CONFIG_KUNIT_EXAMPLE_TEST=y

Then we'd see output like this in dmesg signaling the test ran and passed:

.. code-block:: none

    TAP version 14
    1..1
        # Subtest: example
        1..1
        # example_simple_test: initializing
        ok 1 - example_simple_test
    ok 1 - example

Running tests as modules
------------------------

Depending on the tests, you can build them as loadable modules.

For example, we'd change the config options from before to

.. code-block:: none

    CONFIG_KUNIT=y
    CONFIG_KUNIT_EXAMPLE_TEST=m

Then after booting into our kernel, we can run the test via

.. code-block:: bash

    $ modprobe kunit-example-test

This will then cause it to print TAP output to stdout.

.. note::
    The ``modprobe`` will *not* have a non-zero exit code if any test
    failed (as of 5.13). But ``kunit.py parse`` would, see below.

.. note::
    You can set ``CONFIG_KUNIT=m`` as well, however, some features will not
    work and thus some tests might break. Ideally tests would specify they
    depend on ``KUNIT=y`` in their ``Kconfig``'s, but this is an edge case
    most test authors won't think about.
    As of 5.13, the only difference is that ``current->kunit_test`` will
    not exist.

Pretty-printing results
-----------------------

You can use ``kunit.py parse`` to parse dmesg for test output and print out
results in the same familiar format that ``kunit.py run`` does.

.. code-block:: bash

    $ ./tools/testing/kunit/kunit.py parse /var/log/dmesg
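
If ``kunit.py parse`` is given no file argument it reads from stdin, so piping
the output in directly should also work:

.. code-block:: bash

    $ dmesg | ./tools/testing/kunit/kunit.py parse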

Retrieving per suite results
----------------------------

Regardless of how you're running your tests, you can enable
``CONFIG_KUNIT_DEBUGFS`` to expose per-suite TAP-formatted results:

.. code-block:: none

    CONFIG_KUNIT=y
    CONFIG_KUNIT_EXAMPLE_TEST=m
    CONFIG_KUNIT_DEBUGFS=y

The results for each suite will be exposed under
``/sys/kernel/debug/kunit/<suite>/results``.
So using our example config:

.. code-block:: bash

    $ modprobe kunit-example-test > /dev/null
    $ cat /sys/kernel/debug/kunit/example/results
    ... <TAP output> ...

    # After removing the module, the corresponding files will go away
    $ modprobe -r kunit-example-test
    $ cat /sys/kernel/debug/kunit/example/results
    /sys/kernel/debug/kunit/example/results: No such file or directory
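
Since these per-suite files contain TAP output, you should also be able to
pretty-print one with ``kunit.py parse``, e.g.:

.. code-block:: bash

    $ cat /sys/kernel/debug/kunit/example/results | ./tools/testing/kunit/kunit.py parse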

Generating code coverage reports
--------------------------------

See Documentation/dev-tools/gcov.rst for details on how to do this.

The only vaguely KUnit-specific advice here is that you probably want to build
your tests as modules. That way you can isolate the coverage of your tests from
other code executed during boot, e.g.

.. code-block:: bash

    # Reset coverage counters before running the test.
    $ echo 0 > /sys/kernel/debug/gcov/reset
    $ modprobe kunit-example-test

Test Attributes and Filtering
=============================

Test suites and cases can be marked with test attributes, such as speed of
test. These attributes will later be printed in test output and can be used to
filter test execution.

Marking Test Attributes
-----------------------

Tests are marked with an attribute by including a ``kunit_attributes`` object
in the test definition.

Test cases can be marked using the ``KUNIT_CASE_ATTR(test_name, attributes)``
macro to define the test case instead of ``KUNIT_CASE(test_name)``.

.. code-block:: c

    static const struct kunit_attributes example_attr = {
            .speed = KUNIT_SPEED_VERY_SLOW,
    };

    static struct kunit_case example_test_cases[] = {
            KUNIT_CASE_ATTR(example_test, example_attr),
            {}
    };

.. note::
    To mark a test case as slow, you can also use ``KUNIT_CASE_SLOW(test_name)``.
    This is a helpful macro as the slow attribute is the most commonly used.
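
For example, to mark ``example_test`` as "slow" without defining a separate
attributes struct, the case array above could instead be written as:

.. code-block:: c

    static struct kunit_case example_test_cases[] = {
            KUNIT_CASE_SLOW(example_test),
            {}
    };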

Test suites can be marked with an attribute by setting the "attr" field in the
suite definition.

.. code-block:: c

    static const struct kunit_attributes example_attr = {
            .speed = KUNIT_SPEED_VERY_SLOW,
    };

    static struct kunit_suite example_test_suite = {
            ...,
            .attr = example_attr,
    };

.. note::
    Not all attributes need to be set in a ``kunit_attributes`` object. Unset
    attributes will remain uninitialized and act as though the attribute is set
    to 0 or NULL. Thus, if an attribute is set to 0, it is treated as unset.
    These unset attributes will not be reported and may act as a default value
    for filtering purposes.

Reporting Attributes
--------------------

When a user runs tests, attributes will be present in the raw kernel output (in
KTAP format). Note that attributes will be hidden by default in kunit.py output
for all passing tests but the raw kernel output can be accessed using the
``--raw_output`` flag. This is an example of how test attributes for test cases
will be formatted in kernel output:

.. code-block:: none

    # example_test.speed: slow
    ok 1 example_test

This is an example of how test attributes for test suites will be formatted in
kernel output:

.. code-block:: none

    # Subtest: example_suite
    # module: kunit_example_test

Additionally, users can output a full attribute report of tests with their
attributes, using the command line flag ``--list_tests_attr``:

.. code-block:: bash

    kunit.py run "example" --list_tests_attr

.. note::
    This report can be accessed when running KUnit manually by passing in the
    module_param ``kunit.action=list_attr``.

Filtering
---------

Users can filter tests using the ``--filter`` command line flag when running
tests. As an example:

.. code-block:: bash

    kunit.py run --filter speed=slow

You can also use the following operations on filters: "<", ">", "<=", ">=",
"!=", and "=". Example:

.. code-block:: bash

    kunit.py run --filter "speed>slow"

This example will run all tests with speeds faster than slow. Note that the
characters < and > are often interpreted by the shell, so they may need to be
quoted or escaped, as above.

Additionally, you can use multiple filters at once. Simply separate filters
using commas. Example:

.. code-block:: bash

    kunit.py run --filter "speed>slow, module=kunit_example_test"

You can use this filtering feature when running KUnit manually by passing
the filter as a module param: ``kunit.filter="speed>slow, speed<=normal"``.

Filtered tests will not run or show up in the test output. You can use the
``--filter_action=skip`` flag to skip filtered tests instead. These tests will
be shown in the test output but will not run. To use this feature when
running KUnit manually, use the module param ``kunit.filter_action=skip``.

Rules of Filtering Procedure
----------------------------

Since both suites and test cases can have attributes, there may be conflicts
between attributes during filtering. The process of filtering follows these
rules (a worked example follows the list):

- Filtering always operates at a per-test level.

- If a test has an attribute set, then the test's value is filtered on.

- Otherwise, the value falls back to the suite's value.

- If neither are set, the attribute has a global "default" value, which is used.
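
As a sketch of how this plays out, consider a suite marked slow that contains
one explicitly normal-speed case (``example_fast_test`` and
``example_other_test`` are hypothetical names, not tests in the tree):

.. code-block:: c

    static const struct kunit_attributes slow_attr = {
            .speed = KUNIT_SPEED_SLOW,
    };

    static const struct kunit_attributes normal_attr = {
            .speed = KUNIT_SPEED_NORMAL,
    };

    static struct kunit_case example_test_cases[] = {
            /* Case attribute overrides the suite's: filtered as "normal". */
            KUNIT_CASE_ATTR(example_fast_test, normal_attr),
            /* No case attribute: falls back to the suite's "slow". */
            KUNIT_CASE(example_other_test),
            {}
    };

    static struct kunit_suite example_test_suite = {
            .name = "example_suite",
            .test_cases = example_test_cases,
            .attr = slow_attr,
    };

Running ``kunit.py run --filter "speed>slow"`` would therefore run
``example_fast_test`` but filter out ``example_other_test``.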

List of Current Attributes
--------------------------

``speed``

This attribute indicates the speed of a test's execution (how slow or fast the
test is).

This attribute is saved as an enum with the following categories: "normal",
"slow", or "very_slow". The assumed default speed for tests is "normal". This
indicates that the test takes a relatively trivial amount of time (less than
1 second), regardless of the machine it is running on. Any test slower than
this could be marked as "slow" or "very_slow".

The macro ``KUNIT_CASE_SLOW(test_name)`` can be easily used to set the speed
of a test case to "slow".

``module``

This attribute indicates the name of the module associated with the test.

This attribute is automatically saved as a string and is printed for each suite.
Tests can also be filtered using this attribute.