perf-stat - Run a command and gather performance counter statistics
'perf stat' [-e <EVENT> | --event=EVENT] [-a] <command>
'perf stat' [-e <EVENT> | --event=EVENT] [-a] \-- <command> [<options>]
'perf stat' [-e <EVENT> | --event=EVENT] [-a] record [-o file] \-- <command> [<options>]
'perf stat' report [-i file]
This command runs a command and gathers performance counter statistics
from it.

Any command you can specify in a shell.

Select the PMU event. Selection can be:

- a symbolic event name (use 'perf list' to list all events)

- a raw PMU event in the form of rN where N is a hexadecimal value
  that represents the raw register encoding with the layout of the
  event control registers as described by entries in
  /sys/bus/event_source/devices/cpu/format/*.

- a symbolic or raw PMU event followed by an optional colon
  and a list of event modifiers, e.g., cpu-cycles:p. See the
  linkperf:perf-list[1] man page for details on event modifiers.

- a symbolically formed event like 'pmu/param1=0x3,param2/' where
  param1 and param2 are defined as formats for the PMU in
  /sys/bus/event_source/devices/<pmu>/format/*

  'percore' is an event qualifier that sums up the event counts for both
  hardware threads in a core. For example:
  perf stat -A -a -e cpu/event,percore=1/,otherevent ...

- a symbolically formed event like 'pmu/config=M,config1=N,config2=K/'
  where M, N, K are numbers (in decimal, hex, octal format).
  Acceptable values for each of 'config', 'config1' and 'config2'
  parameters are defined by corresponding entries in
  /sys/bus/event_source/devices/<pmu>/format/*

Note that the last two syntaxes support prefix and glob matching in
the PMU name to simplify creation of events across multiple instances
of the same type of PMU in large systems (e.g. memory controller PMUs).
Multiple PMU instances are typical for uncore PMUs, so the prefix
'uncore_' is also ignored when performing this match.
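For example, assuming memory controller PMU instances named uncore_imc_0,
uncore_imc_1, etc. that expose a 'cas_count_read' event (PMU and event
names vary by system), both of the following count the event on every
instance at once:

  # prefix match: 'uncore_imc' covers uncore_imc_0, uncore_imc_1, ...
  perf stat -e uncore_imc/cas_count_read/ -a sleep 1
  # explicit glob over the instance suffix
  perf stat -e 'uncore_imc_*/cas_count_read/' -a sleep 1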
child tasks do not inherit counters

stat events on existing process id (comma separated list)

stat events on existing thread id (comma separated list)

stat events on existing bpf program id (comma separated list),
requiring root rights. bpftool-prog could be used to find the ids of
all bpf programs in the system. For example:

  # bpftool prog | head -n 1
  17247: tracepoint  name sys_enter  tag 192d548b9d754067  gpl

  # perf stat -e cycles,instructions --bpf-prog 17247 --timeout 1000

   Performance counter stats for 'BPF program(s) 17247':

             85,967      cycles
             28,982      instructions              #    0.34  insn per cycle

        1.102235068 seconds time elapsed

Use BPF programs to aggregate readings from perf_events. This
allows multiple perf-stat sessions that are counting the same metric (cycles,
instructions, etc.) to share hardware counters.
To use BPF programs on common events by default, use
"perf config stat.bpf-counter-events=<list_of_events>".

With option "--bpf-counters", different perf-stat sessions share
information about shared BPF programs and maps via a pinned hashmap.
Use "--bpf-attr-map" to specify the path of this pinned hashmap.
The default path is /sys/fs/bpf/perf_attr_map.
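A sketch of two concurrent sessions sharing a hardware counter this way:

  # both system-wide sessions read 'cycles' through a shared BPF
  # program instead of each claiming a hardware counter
  perf stat --bpf-counters -e cycles -a \-- sleep 10 &
  perf stat --bpf-counters -e cycles -a \-- sleep 10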
--pfm-events events::
Select a PMU event using libpfm4 syntax (see http://perfmon2.sf.net)
including support for event filters. For example '--pfm-events
inst_retired:any_p:u:c=1:i'. More than one event can be passed to the
option using the comma separator. Hardware events and generic hardware
events cannot be mixed together. The latter must be used with the -e
option. The -e option and this one can be mixed and matched. Events
can be grouped using the {} notation.
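For example (a sketch; libpfm4 event names are CPU specific, and
'./workload' stands for any command):

  # two libpfm4 events, comma separated
  perf stat --pfm-events inst_retired:any_p:u,unhalted_core_cycles:u \-- ./workload
  # the same two events scheduled as a single group via {}
  perf stat --pfm-events '{inst_retired:any_p:u,unhalted_core_cycles:u}' \-- ./workload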
system-wide collection from all CPUs (default if no target is specified)

Don't scale/normalize counter values

print more detailed statistics; can be specified up to 3 times

  -d:          detailed events, L1 and LLC data cache
  -d -d:       more detailed events, dTLB and iTLB events
  -d -d -d:    very detailed events, adding prefetch events

repeat command and print average + stddev (max: 100). 0 means forever.

print large numbers with thousands' separators according to locale.
Enabled by default. Use "--no-big-num" to disable.
Default setting can be changed with "perf config stat.big-num=false".

Count only on the list of CPUs provided. Multiple CPUs can be provided as a
comma-separated list with no space: 0,1. Ranges of CPUs are specified with -: 0-2.
In per-thread mode, this option is ignored. The -a option is still necessary
to activate system-wide monitoring. Default is to count on all CPUs.
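For example, to count cycles only on CPUs 0, 1, 2 and 7 (illustrative CPU
numbers):

  perf stat -e cycles -a -C 0-2,7 \-- sleep 5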
Do not aggregate counts across all monitored CPUs.

null run - Don't start any counters.

This can be useful to measure just elapsed wall-clock time - or to assess the
raw overhead of perf stat itself, without running any counters.

be more verbose (show counter open errors, etc.)

--field-separator SEP::
print counts using a CSV-style output to make it easy to import directly into
spreadsheets. Columns are separated by the string specified in SEP.

--table:: Display time for each run (-r option), in a table format, e.g.:

  $ perf stat --null -r 5 --table perf bench sched pipe

   Performance counter stats for 'perf bench sched pipe' (5 runs):

             # Table of individual measurements:
             ...

             # Final result:
             5.483 +- 0.198 seconds time elapsed ( +- 3.62% )
monitor only in the container (cgroup) called "name". This option is available only
in per-cpu mode. The cgroup filesystem must be mounted. All threads belonging to
container "name" are monitored when they run on the monitored CPUs. Multiple cgroups
can be provided. Each cgroup is applied to the corresponding event, i.e., first cgroup
to first event, second cgroup to second event and so on. It is possible to provide
an empty cgroup (monitor all the time) using, e.g., -G foo,,bar. Cgroups must have
corresponding events, i.e., they always refer to events defined earlier on the command
line. If the user wants to track multiple events for a specific cgroup, the user can
use '-e e1 -e e2 -G foo,foo' or just use '-e e1 -e e2 -G foo'.

To monitor, say, 'cycles' both for a cgroup and system-wide, this
command line can be used: 'perf stat -e cycles -G cgroup_name -a -e cycles'.
--for-each-cgroup name::
Expand the event list for each cgroup in "name" (allow multiple cgroups separated
by comma). It also supports regex patterns to match multiple cgroups. This has the
same effect as repeating the -e and -G options for each event x name. This option
cannot be used with the -G/--cgroup option.
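For example, assuming cgroups named 'A' and 'B' exist, the first command
below is equivalent to the second:

  perf stat --for-each-cgroup A,B -e cycles -e instructions -a sleep 1
  perf stat -e cycles -e instructions -e cycles -e instructions -G A,A,B,B -a sleep 1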
Print the output into the designated file.

Append to the output file designated with the -o option. Ignored if -o is not specified.

Log output to fd, instead of stderr. Complementary to --output, and mutually exclusive
with it. --append may be used here. Examples:

  3>results perf stat --log-fd 3 \-- $cmd
  3>>results perf stat --log-fd 3 --append \-- $cmd
--control=fifo:ctl-fifo[,ack-fifo]::
--control=fd:ctl-fd[,ack-fd]::
ctl-fifo / ack-fifo are opened and used as ctl-fd / ack-fd as follows.
Listen on the ctl-fd descriptor for commands to control measurement ('enable': enable
events, 'disable': disable events). Measurements can be started with events disabled
using the --delay=-1 option. Optionally send control command completion ('ack\n') to
the ack-fd descriptor to synchronize with the controlling process. Example of a bash
shell script to enable and disable events during measurements:
ctl_fifo=${ctl_dir}perf_ctl.fifo
test -p ${ctl_fifo} && unlink ${ctl_fifo}
mkfifo ${ctl_fifo}
exec {ctl_fd}<>${ctl_fifo}

ctl_ack_fifo=${ctl_dir}perf_ctl_ack.fifo
test -p ${ctl_ack_fifo} && unlink ${ctl_ack_fifo}
mkfifo ${ctl_ack_fifo}
exec {ctl_fd_ack}<>${ctl_ack_fifo}

perf stat -D -1 -e cpu-cycles -a -I 1000 \
          --control fd:${ctl_fd},${ctl_fd_ack} \
          \-- sleep 30 &

sleep 5 && echo 'enable' >&${ctl_fd} && read -u ${ctl_fd_ack} e1 && echo "enabled(${e1})"
sleep 10 && echo 'disable' >&${ctl_fd} && read -u ${ctl_fd_ack} d1 && echo "disabled(${d1})"

exec {ctl_fd_ack}>&-
unlink ${ctl_ack_fifo}

exec {ctl_fd}>&-
unlink ${ctl_fifo}
Pre and post measurement hooks, e.g.:

perf stat --repeat 10 --null --sync --pre 'make -s O=defconfig-build/clean' \-- make -s -j64 O=defconfig-build/ bzImage
--interval-print msecs::
Print count deltas every N milliseconds (minimum: 1ms).
The overhead percentage could be high in some cases, for instance with small, sub 100ms intervals. Use with caution.
example: 'perf stat -I 1000 -e cycles -a sleep 5'

If the metric exists, it is calculated from the counts generated in this interval and the metric is printed after #.

--interval-count times::
Print count deltas for a fixed number of times.
This option should be used together with the "-I" option.
example: 'perf stat -I 1000 --interval-count 2 -e cycles -a'
Clear the screen before next interval.

Stop the 'perf stat' session and print count deltas after N milliseconds (minimum: 10 ms).
This option is not supported with the "-I" option.
example: 'perf stat --time 2000 -e cycles -a'

Only print computed metrics. Print them in a single line.
Don't show any raw values. Not supported with --per-thread.
Aggregate counts per processor socket for system-wide mode measurements. This
is a useful mode to detect imbalance between sockets. To enable this mode,
use --per-socket in addition to -a (system-wide). The output includes the
socket number and the number of online processors on that socket. This is
useful to gauge the amount of aggregation.

Aggregate counts per processor die for system-wide mode measurements. This
is a useful mode to detect imbalance between dies. To enable this mode,
use --per-die in addition to -a (system-wide). The output includes the
die number and the number of online processors on that die. This is
useful to gauge the amount of aggregation.
Aggregate counts per cache instance for system-wide mode measurements. By
default, the aggregation happens for the cache level at the highest index
in the system. To specify a particular level, mention the cache level
alongside the option in the format [Ll][1-9][0-9]*. For example:
Using option "--per-cache=l3" or "--per-cache=L3" will aggregate the
information at the boundary of the level 3 cache in the system.
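For example, to aggregate cycle counts at the level 3 cache boundary on a
system-wide run:

  perf stat --per-cache=L3 -a -e cycles \-- sleep 5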
Aggregate counts per physical processor for system-wide mode measurements. This
is a useful mode to detect imbalance between physical cores. To enable this mode,
use --per-core in addition to -a (system-wide). The output includes the
core number and the number of online logical processors on that physical processor.

Aggregate counts per monitored thread when monitoring threads (-t option)
or processes (-p option).

Aggregate counts per NUMA node for system-wide mode measurements. This
is a useful mode to detect imbalance between NUMA nodes. To enable this
mode, use --per-node in addition to -a (system-wide).

After starting the program, wait msecs before measuring (-1: start with events
disabled). This is useful to filter out the startup phase of the program,
which is often very different.

Print statistics of transactional execution if supported.
By default, events to compute a metric are placed in weak groups. The
group tries to enforce scheduling all or none of the events. The
--metric-no-group option places events outside of groups and may
increase the chance of the event being scheduled - leading to more
accuracy. However, as events may not be scheduled together, accuracy
for metrics like instructions per cycle can be lower - as the
constituent events may no longer be measured at the same time.
By default metric events in different weak groups can be shared if one
group contains all the events needed by another. In such cases one
group will be eliminated, reducing event multiplexing and making it so
that certain groups of metrics sum to 100%. A downside to sharing a
group is that the group may require multiplexing and so accuracy for a
small group that need not have multiplexing is lowered. This option
forbids the event merging logic from sharing events between groups and
may be used to increase accuracy in this case.
--metric-no-threshold::
Metric thresholds may increase the number of events necessary to
compute whether a metric has exceeded its threshold expression. This
may not be desirable, for example, as the events can introduce
multiplexing. This option disables the adding of threshold expression
events for a metric. However, if there are sufficient events to
compute the threshold then the threshold is still computed and used to
color the metric's computed value.
Don't print output, warnings or messages. This is useful with perf stat
record below to only write data to the perf.data file.

Stores stat data into a perf data file.

Reads and reports stat data from a perf data file.
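A minimal record/report round trip looks like this (perf.data is the
default file; -o for record and -i for report override it, as in the
synopsis):

  # count and store the stat data
  perf stat record -e cycles \-- sleep 1
  # read it back later
  perf stat report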
Aggregate counts per processor socket for system-wide mode measurements.

Aggregate counts per processor die for system-wide mode measurements.

Aggregate counts per cache instance for system-wide mode measurements. By
default, the aggregation happens for the cache level at the highest index
in the system. To specify a particular level, mention the cache level
alongside the option in the format [Ll][1-9][0-9]*. For example: Using
option "--per-cache=l3" or "--per-cache=L3" will aggregate the
information at the boundary of the level 3 cache in the system.

Aggregate counts per physical processor for system-wide mode measurements.
Print metrics or metricgroups specified in a comma separated list.
For a group all metrics from the group are added.
The events from the metrics are automatically measured.
See perf list output for the possible metrics and metricgroups.

When threshold information is available for a metric, the
color red is used to signify a metric has exceeded a threshold
while green shows it hasn't. The default color means that
no threshold information was available or the threshold
couldn't be computed.
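For example (a sketch; metric and metricgroup names are CPU specific,
check 'perf list' on the target system):

  # hypothetical metric and metricgroup names
  perf stat -M IPC -a \-- sleep 2
  perf stat -M TopdownL1 -a \-- sleep 2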
Do not aggregate/merge counts across monitored CPUs or PMUs.

When multiple events are created from a single event specification,
stat will, by default, aggregate the event counts and show the result
in a single row. This option disables that behavior and shows the
individual events and counts.

Multiple events are created from a single event specification when:

1. PID monitoring isn't requested and the system has more than one
   CPU. For example, a system with 8 SMT threads will have one event
   opened on each thread and aggregation is performed across them.

2. Prefix or glob wildcard matching is used for the PMU name. For
   example, multiple memory controller PMUs may exist typically with a
   suffix of _0, _1, etc. By default the event counts will all be
   combined if the PMU is specified without the suffix such as
   uncore_imc rather than uncore_imc_0.

3. Aliases, which are listed immediately after the Kernel PMU events
   by perf list, are used.
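For example, assuming uncore_imc_0 and uncore_imc_1 instances with a
'clockticks' event (names vary by system), the first command prints one
combined count while the second prints one count per PMU instance:

  perf stat -a -e uncore_imc/clockticks/ \-- sleep 1
  perf stat -a --no-merge -e uncore_imc/clockticks/ \-- sleep 1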
Merge core event counts from all core PMUs. In hybrid or big.LITTLE
systems by default each core PMU will report its count
separately. This option forces core PMU counts to be combined to give
a behavior closer to having a single CPU type in the system.
Print top-down metrics supported by the CPU. This allows determining
bottlenecks in the CPU pipeline for CPU bound workloads, by breaking
the cycles consumed down into frontend bound, backend bound, bad
speculation and retiring.

Frontend bound means that the CPU cannot fetch and decode instructions fast
enough. Backend bound means that computation or memory access is the
bottleneck. Bad Speculation means that the CPU wasted cycles due to branch
mispredictions and similar issues. Retiring means that the CPU computed without
an apparent bottleneck. The bottleneck is only the real bottleneck
if the workload is actually bound by the CPU and not by something else.

For best results it is usually a good idea to use it with interval
mode like -I 1000, as the bottleneck of workloads can change often.

This enables --metric-only, unless overridden with --no-metric-only.
The following restrictions only apply to older Intel CPUs and Atom;
on newer CPUs (IceLake and later) TopDown can be collected for any thread:

The top down metrics are collected per core instead of per
CPU thread. Per core mode is automatically enabled
and -a (global monitoring) is needed, requiring root rights or
kernel.perf_event_paranoid=-1.

Topdown uses the full Performance Monitoring Unit, and needs
disabling of the NMI watchdog (as root):

  echo 0 > /proc/sys/kernel/nmi_watchdog

for best results. Otherwise the bottlenecks may be inconsistent
on workloads with changing phases.

To interpret the results it is usually needed to know on which
CPUs the workload runs. If needed the CPUs can be forced using
taskset.
Print the top-down statistics that equal the input level. It allows
users to print the top-down metrics at the requested level instead of
only the level 1 top-down metrics.
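For example, to collect level 2 top-down metrics system-wide (assuming
the CPU exposes more than one top-down level):

  perf stat --topdown --td-level=2 -a \-- sleep 5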
As the higher levels gather more metrics and use more counters they
will be less accurate. By convention a metric can be examined by
appending '_group' to it and this will increase accuracy compared to
gathering all metrics for a level. For example, level 1 analysis may
highlight 'tma_frontend_bound'. This metric may be drilled into with
'tma_frontend_bound_group' with
'perf stat -M tma_frontend_bound_group...'.

Error out if the input is higher than the supported max level.
Measure SMI cost if msr/aperf/ and msr/smi/ events are supported.

During the measurement, /sys/devices/cpu/freeze_on_smi will be set to
freeze core counters on SMI.
The aperf counter will not be affected by the setting.
The cost of SMI can be measured by (aperf - unhalted core cycles).

In practice, the percentage of SMI cycles is very useful for performance
oriented analysis. --metric-only will be applied by default.
The output is SMI cycles%, which equals (aperf - unhalted core cycles) / aperf.

Users who want to get the actual value can apply --no-metric-only.
Configure all used events to run in kernel space.

Configure all used events to run in user space.
--percore-show-thread::
The event modifier "percore" sums up the event counts for all hardware
threads in a core and shows the counts per core.

This option, with the event modifier "percore" enabled, also sums up the
event counts for all hardware threads in a core but shows the summed
counts per hardware thread. This is essentially a replacement for the
any bit and convenient for post processing.
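For example (a sketch combining the 'percore' qualifier described above
with this option):

  perf stat -A -a -e cpu/instructions,percore=1/ --percore-show-thread \-- sleep 1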
Print summary for interval mode (-I).

Don't print 'summary' in the first column for CSV summary output.
This option must be used with -x and --summary.

This option can be enabled in perf config by setting the variable
'stat.no-csv-summary'.

  $ perf config stat.no-csv-summary=true
Only enable events on CPUs of the given type, for hybrid platforms.
 Performance counter stats for 'make':

    83723.452481      task-clock:u (msec)       #    1.004 CPUs utilized
               0      context-switches:u        #    0.000 K/sec
               0      cpu-migrations:u          #    0.000 K/sec
       3,228,188      page-faults:u             #    0.039 M/sec
 229,570,665,834      cycles:u                  #    2.742 GHz
 313,163,853,778      instructions:u            #    1.36  insn per cycle
  69,704,684,856      branches:u                #  832.559 M/sec
   2,078,861,393      branch-misses:u           #    2.98% of all branches

    83.409183620 seconds time elapsed

    74.684747000 seconds user
     8.739217000 seconds sys
As shown in the example above, perf stat can display 3 types of timings.

We always display the time the counters were enabled/alive:

  83.409183620 seconds time elapsed

For workload sessions we also display the time the workloads spent in
user and system mode:

  74.684747000 seconds user
   8.739217000 seconds sys
Those times are the very same as displayed by the 'time' tool.
With -x, perf stat is able to output a not-quite-CSV format output.
Commas in the output are not put into "". To make it easy to parse,
it is recommended to use a different character, like -x \;
The fields are in this order:

  - optional usec time stamp in fractions of second (with -I xxx)
  - optional CPU, core, or socket identifier
  - optional number of logical CPUs aggregated
  - counter value
  - unit of the counter value or empty
  - event name
  - run time of counter
  - percentage of measurement time the counter was running
  - optional variance if multiple values are collected with -r
  - optional metric value
  - optional unit of metric

Additional metrics may be printed with all earlier fields being empty.
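As an illustration, a run like 'perf stat -x, -a sleep 1' might emit
lines of the following shape (the numbers are made up; empty optional
fields collapse into consecutive separators):

  4123456,,cycles,1001500000,100.00,0.004,GHz
  2012345,,instructions,1001500000,100.00,0.49,insn per cycle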
include::intel-hybrid.txt[]
With -j, perf stat is able to print out a JSON format output
that can be used for parsing.
- timestamp : optional usec time stamp in fractions of second (with -I)
- optional aggregate options:
  - core : core identifier (with --per-core)
  - die : die identifier (with --per-die)
  - socket : socket identifier (with --per-socket)
  - node : node identifier (with --per-node)
  - thread : thread identifier (with --per-thread)
- counter-value : counter value
- unit : unit of the counter value or empty
- event : event name
- variance : optional variance if multiple values are collected (with -r)
- runtime : run time of counter
- metric-value : optional metric value
- metric-unit : optional unit of metric
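As an illustration, one line from 'perf stat -j -a sleep 1' might look
like this (values are made up and key spelling follows the list above;
the exact set of keys depends on the options and the perf version):

  {"counter-value" : "4123456.000000", "unit" : "", "event" : "cycles", "runtime" : 1001500000, "metric-value" : 0.004, "metric-unit" : "GHz"}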
linkperf:perf-top[1], linkperf:perf-list[1]