perf-stat - Run a command and gather performance counter statistics

'perf stat' [-e <EVENT> | --event=EVENT] [-a] <command>
'perf stat' [-e <EVENT> | --event=EVENT] [-a] -- <command> [<options>]
'perf stat' [-e <EVENT> | --event=EVENT] [-a] record [-o file] -- <command> [<options>]
'perf stat' report [-i file]
This command runs a command and gathers performance counter statistics from it.
Any command you can specify in a shell.

Select the PMU event. Selection can be:

- a symbolic event name (use 'perf list' to list all events)

- a raw PMU event (eventsel+umask) in the form of rNNN where NNN is a
  hexadecimal event descriptor.

- a symbolic or raw PMU event followed by an optional colon
  and a list of event modifiers, e.g., cpu-cycles:p. See the
  linkperf:perf-list[1] man page for details on event modifiers.

- a symbolically formed event like 'pmu/param1=0x3,param2/' where
  param1 and param2 are defined as formats for the PMU in
  /sys/bus/event_source/devices/<pmu>/format/*

- a symbolically formed event like 'pmu/config=M,config1=N,config2=K/'
  where M, N, K are numbers (in decimal, hex, octal format).
  Acceptable values for each of 'config', 'config1' and 'config2'
  parameters are defined by corresponding entries in
  /sys/bus/event_source/devices/<pmu>/format/*
Note that the last two syntaxes support prefix and glob matching in
the PMU name to simplify creation of events across multiple instances
of the same type of PMU in large systems (e.g. memory controller PMUs).
Multiple PMU instances are typical for uncore PMUs, so the prefix
'uncore_' is also ignored when performing this match.
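
For example, assuming an Intel CPU on which r003c encodes unhalted core
cycles and on which the uncore memory-controller PMUs expose a
'cas_count_read' alias (both names are illustrative and hardware
dependent), a raw event and a wildcarded uncore event could be counted
with:

	perf stat -e r003c -a sleep 1
	perf stat -e 'uncore_imc_*/cas_count_read/' -a sleep 1
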
child tasks do not inherit counters

stat events on existing process id (comma separated list)

stat events on existing thread id (comma separated list)
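
For example, to count the default events of two already running
processes for ten seconds (the pids are placeholders):

	perf stat -p 1234,5678 sleep 10
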
system-wide collection from all CPUs (default if no target is specified)

scale/normalize counter values

print more detailed statistics, can be specified up to 3 times

	   -d:       detailed events, L1 and LLC data cache
	   -d -d:    more detailed events, dTLB and iTLB events
	   -d -d -d: very detailed events, adding prefetch events
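
For example, to add the dTLB and iTLB events on top of the default and
L1/LLC cache events while running a workload (./workload is a
placeholder):

	perf stat -d -d ./workload
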
repeat command and print average + stddev (max: 100). 0 means forever.

print large numbers with thousands' separators according to locale

Count only on the list of CPUs provided. Multiple CPUs can be provided as a
comma-separated list with no space: 0,1. Ranges of CPUs are specified with -: 0-2.
In per-thread mode, this option is ignored. The -a option is still necessary
to activate system-wide monitoring. Default is to count on all CPUs.
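
For example, to count system-wide, but only on CPUs 0-2, for one second:

	perf stat -C 0-2 -a sleep 1
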
Do not aggregate counts across all monitored CPUs.

null run - don't start any counters

be more verbose (show counter open errors, etc)
--field-separator SEP::
print counts using a CSV-style output to make it easy to import directly into
spreadsheets. Columns are separated by the string specified in SEP.
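
For example, to write semicolon-separated counts to a file for later
import (results.csv is a placeholder name):

	perf stat -x \; -o results.csv -a sleep 1
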
--table:: Display time for each run (-r option), in a table format, e.g.:

	$ perf stat --null -r 5 --table perf bench sched pipe

	 Performance counter stats for 'perf bench sched pipe' (5 runs):

	   # Table of individual measurements:

	   5.483 +- 0.198 seconds time elapsed ( +- 3.62% )
monitor only in the container (cgroup) called "name". This option is available only
in per-cpu mode. The cgroup filesystem must be mounted. All threads belonging to
container "name" are monitored when they run on the monitored CPUs. Multiple cgroups
can be provided. Each cgroup is applied to the corresponding event, i.e., first cgroup
to first event, second cgroup to second event and so on. It is possible to provide
an empty cgroup (monitor all the time) using, e.g., -G foo,,bar. Cgroups must have
corresponding events, i.e., they always refer to events defined earlier on the command
line. If the user wants to track multiple events for a specific cgroup, the user can
use '-e e1 -e e2 -G foo,foo' or just use '-e e1 -e e2 -G foo'.

To monitor, say, 'cycles' both for a cgroup and system wide, this
command line can be used: 'perf stat -e cycles -G cgroup_name -a -e cycles'.
Print the output into the designated file.

Append to the output file designated with the -o option. Ignored if -o is not specified.

Log output to fd, instead of stderr. Complementary to --output, and mutually exclusive
with it. --append may be used here. Examples:
	3>results perf stat --log-fd 3 -- $cmd
	3>>results perf stat --log-fd 3 --append -- $cmd

Pre and post measurement hooks, e.g.:

perf stat --repeat 10 --null --sync --pre 'make -s O=defconfig-build/clean' -- make -s -j64 O=defconfig-build/ bzImage
--interval-print msecs::
Print count deltas every msecs milliseconds (minimum: 1 ms).
The overhead percentage could be high in some cases, for instance with small, sub 100ms intervals. Use with caution.
	example: 'perf stat -I 1000 -e cycles -a sleep 5'

--interval-count times::
Print count deltas for a fixed number of times.
This option should be used together with the "-I" option.
	example: 'perf stat -I 1000 --interval-count 2 -e cycles -a'

Clear the screen before the next interval.
Stop the 'perf stat' session and print count deltas after N milliseconds (minimum: 10 ms).
This option is not supported with the "-I" option.
	example: 'perf stat --timeout 2000 -e cycles -a'
Only print computed metrics. Print them in a single line.
Don't show any raw values. Not supported with --per-thread.
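
For example, to print only the computed metrics of a one second
system-wide measurement:

	perf stat --metric-only -a sleep 1
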
Aggregate counts per processor socket for system-wide mode measurements. This
is a useful mode to detect imbalance between sockets. To enable this mode,
use --per-socket in addition to -a (system-wide). The output includes the
socket number and the number of online processors on that socket. This is
useful to gauge the amount of aggregation.
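
For example, to see per-socket cycle counts for one second:

	perf stat --per-socket -a -e cycles sleep 1
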
Aggregate counts per physical processor for system-wide mode measurements. This
is a useful mode to detect imbalance between physical cores. To enable this mode,
use --per-core in addition to -a (system-wide). The output includes the
core number and the number of online logical processors on that physical processor.
Aggregate counts per monitored thread, when monitoring threads (-t option)
or processes (-p option).
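
For example, to aggregate counts per thread of an existing process for
five seconds (the pid is a placeholder):

	perf stat --per-thread -p 1234 sleep 5
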
After starting the program, wait msecs before measuring. This is useful to
filter out the startup phase of the program, which is often very different.
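
For example, to skip the first 500 msecs of a workload (./workload is a
placeholder; in current perf versions the delay option is spelled
-D/--delay):

	perf stat -D 500 -e cycles ./workload
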
Print statistics of transactional execution if supported.

Stores stat data into a perf data file.

Reads and reports stat data from a perf data file.
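
For example, to save counts into a perf data file and report them later
(stat.data is a placeholder file name):

	perf stat record -o stat.data -- sleep 1
	perf stat report -i stat.data
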
Aggregate counts per processor socket for system-wide mode measurements.

Aggregate counts per physical processor for system-wide mode measurements.
Print metrics or metricgroups specified in a comma separated list.
For a group all metrics from the group are added.
The events from the metrics are automatically measured.
See perf list output for the possible metrics and metricgroups.
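
For example, assuming the CPU provides a TopDownL1 metric group (check
'perf list' for what is actually available), it could be measured
system-wide with:

	perf stat -M TopDownL1 -a sleep 1
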
Do not aggregate counts across all monitored CPUs.
Print top down level 1 metrics if supported by the CPU. This makes it
possible to determine bottlenecks in the CPU pipeline for CPU bound
workloads, by breaking down the consumed cycles into frontend bound,
backend bound, bad speculation and retiring.

Frontend bound means that the CPU cannot fetch and decode instructions fast
enough. Backend bound means that computation or memory access is the
bottleneck. Bad Speculation means that the CPU wasted cycles due to branch
mispredictions and similar issues. Retiring means that the CPU computed without
an apparent bottleneck. The bottleneck is only the real bottleneck
if the workload is actually bound by the CPU and not by something else.
For best results it is usually a good idea to use it with interval
mode like -I 1000, as the bottleneck of workloads can change often.

The top down metrics are collected per core instead of per
CPU thread. Per core mode is automatically enabled
and -a (global monitoring) is needed, requiring root rights or
kernel.perf_event_paranoid = -1.
Topdown uses the full Performance Monitoring Unit, and needs
disabling of the NMI watchdog (as root):

	echo 0 > /proc/sys/kernel/nmi_watchdog

for best results. Otherwise the bottlenecks may be inconsistent
on workloads with changing phases.

This enables --metric-only, unless overridden with --no-metric-only.
To interpret the results it is usually necessary to know on which
CPUs the workload runs. If needed, the CPUs can be forced using
taskset.
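
For example, a typical session combines --topdown with interval mode and
system-wide collection (run as root, or with the paranoid setting relaxed
as described above):

	echo 0 > /proc/sys/kernel/nmi_watchdog
	perf stat --topdown -I 1000 -a sleep 10
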
Do not merge results from the same PMUs.

When multiple events are created from a single event specification,
stat will, by default, aggregate the event counts and show the result
in a single row. This option disables that behavior and shows
the individual events and counts.

Multiple events are created from a single event specification when:

1. Prefix or glob matching is used for the PMU name.
2. Aliases, which are listed immediately after the Kernel PMU events
   by perf list, are used.
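
For example, assuming a system with several uncore memory-controller PMU
instances (the PMU and event names are illustrative), the per-instance
counts could be shown with:

	perf stat -e 'uncore_imc_*/cas_count_read/' --no-merge -a sleep 1
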
Measure SMI cost if msr/aperf/ and msr/smi/ events are supported.

During the measurement, /sys/devices/cpu/freeze_on_smi will be set to
freeze core counters on SMI.
The aperf counter will not be affected by the setting.
The cost of SMI can be measured by (aperf - unhalted core cycles).
In practice, the percentage of SMI cycles is very useful for performance
oriented analysis. --metric-only will be applied by default.
The output is SMI cycles%, equal to (aperf - unhalted core cycles) / aperf.

Users who want to get the actual value can apply --no-metric-only.
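
For example, assuming the msr PMU exposes the aperf and smi events, the
SMI cost could be measured system-wide for ten seconds (as root):

	perf stat --smi-cost -a sleep 10
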
 Performance counter stats for 'make':

        83723.452481      task-clock:u (msec)       #    1.004 CPUs utilized
                   0      context-switches:u        #    0.000 K/sec
                   0      cpu-migrations:u          #    0.000 K/sec
           3,228,188      page-faults:u             #    0.039 M/sec
     229,570,665,834      cycles:u                  #    2.742 GHz
     313,163,853,778      instructions:u            #    1.36  insn per cycle
      69,704,684,856      branches:u                #  832.559 M/sec
       2,078,861,393      branch-misses:u           #    2.98% of all branches

        83.409183620 seconds time elapsed

        74.684747000 seconds user
         8.739217000 seconds sys
As displayed in the example above, we can display 3 types of timings.
We always display the time the counters were enabled/alive:

        83.409183620 seconds time elapsed

For workload sessions we also display the time the workloads spent in
user/system lands:

        74.684747000 seconds user
         8.739217000 seconds sys

Those times are the very same as displayed by the 'time' tool.
With -x, perf stat is able to output a not-quite-CSV format output.
Commas in the output are not put into "". To make it easy to parse,
it is recommended to use a different character like -x \;
The fields are in this order:

	- optional usec time stamp in fractions of second (with -I xxx)
	- optional CPU, core, or socket identifier
	- optional number of logical CPUs aggregated
	- counter value
	- unit of the counter value or empty
	- event name
	- run time of counter
	- percentage of measurement time the counter was running
	- optional variance if multiple values are collected with -r
	- optional metric value
	- optional unit of metric
Additional metrics may be printed with all earlier fields being empty.
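
For example, a script-friendly invocation could write semicolon-separated
cycle counts to a file in the field order listed above (cycles.csv is a
placeholder name):

	perf stat -x \; -o cycles.csv -e cycles -a sleep 1
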
linkperf:perf-top[1], linkperf:perf-list[1]