perf-stat — Run a command and gather performance counter statistics


Synopsis

perf stat [-e <EVENT> | --event=EVENT] [-a] <command>
perf stat [-e <EVENT> | --event=EVENT] [-a] -- <command> [<options>]
perf stat [-e <EVENT> | --event=EVENT] [-a] record [-o file] -- <command> [<options>]
perf stat report [-i file]


Description

This command runs a command and gathers performance counter statistics from it.



Options

<command>...

Any command you can specify in a shell.


record

See Stat Record.


report

See Stat Report.

-e, --event=

Select the PMU event. Selection can be:

  • a symbolic event name (use perf list to list all events)
  • a raw PMU event (eventsel+umask) in the form of rNNN where NNN is a hexadecimal event descriptor.
  • a symbolically formed event like pmu/param1=0x3,param2/ where param1 and param2 are defined as formats for the PMU in /sys/bus/event_source/devices/<pmu>/format/*

    'percore' is an event qualifier that sums up the event counts for both
    hardware threads in a core. For example:
    perf stat -A -a -e cpu/event,percore=1/,otherevent ...
  • a symbolically formed event like pmu/config=M,config1=N,config2=K/ where M, N, K are numbers (in decimal, hex, octal format). Acceptable values for each of config, config1 and config2 parameters are defined by corresponding entries in /sys/bus/event_source/devices/<pmu>/format/*

    Note that the last two syntaxes support prefix and glob matching in
    the PMU name to simplify creation of events across multiple instances
    of the same type of PMU in large systems (e.g. memory controller PMUs).
    Multiple PMU instances are typical for uncore PMUs, so the prefix
    'uncore_' is also ignored when performing this match.
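
For instance, a raw event and an equivalent pmu/config=M/ form might be written as follows (the event code 0x3c is illustrative only; check perf list and your PMU's format files for what is valid on your machine):

$ perf stat -e r003c -- sleep 1
$ perf stat -e cpu/config=0x3c/ -- sleep 1
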
-i, --no-inherit

child tasks do not inherit counters

-p, --pid=<pid>

stat events on existing process id (comma-separated list)

-t, --tid=<tid>

stat events on existing thread id (comma-separated list)
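
For example, to count events on two existing processes at once (the PIDs are hypothetical), with the trailing command bounding the measurement:

$ perf stat -e cycles -p 1234,5678 sleep 10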

-a, --all-cpus

system-wide collection from all CPUs (default if no target is specified)


-c, --scale

Don’t scale/normalize counter values.

-d, --detailed

print more detailed statistics, can be specified up to 3 times

-d:          detailed events, L1 and LLC data cache
-d -d:       more detailed events, dTLB and iTLB events
-d -d -d:    very detailed events, adding prefetch events

-r, --repeat=<n>

repeat command and print average + stddev (max: 100). 0 means forever.

-B, --big-num

print large numbers with thousands' separators according to locale

-C, --cpu=

Count only on the list of CPUs provided. Multiple CPUs can be provided as a comma-separated list with no space: 0,1. Ranges of CPUs are specified with -: 0-2. In per-thread mode, this option is ignored. The -a option is still necessary to activate system-wide monitoring. Default is to count on all CPUs.
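
For example, to count cycles on CPUs 0, 1, 2 and 8 only, system-wide:

$ perf stat -C 0-2,8 -a -e cycles sleep 5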

-A, --no-aggr

Do not aggregate counts across all monitored CPUs.

-n, --null

null run - don’t start any counters

-v, --verbose

be more verbose (show counter open errors, etc)

-x SEP, --field-separator SEP

print counts using a CSV-style output to make it easy to import directly into spreadsheets. Columns are separated by the string specified in SEP.
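
For example, to emit semicolon-separated fields (see the CSV Format section below):

$ perf stat -x \; -e cycles -- sleep 1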


--table

Display time for each run (-r option), in a table format, e.g.:

$ perf stat --null -r 5 --table perf bench sched pipe

 Performance counter stats for 'perf bench sched pipe' (5 runs):

           # Table of individual measurements:
           5.189 (-0.293) #
           5.189 (-0.294) #
           5.186 (-0.296) #
           5.663 (+0.181) ##
           6.186 (+0.703) ####

           # Final result:
           5.483 +- 0.198 seconds time elapsed  ( +-  3.62% )

-G name, --cgroup name

monitor only in the container (cgroup) called "name". This option is available only in per-cpu mode. The cgroup filesystem must be mounted. All threads belonging to container "name" are monitored when they run on the monitored CPUs. Multiple cgroups can be provided. Each cgroup is applied to the corresponding event, i.e., first cgroup to first event, second cgroup to second event and so on. It is possible to provide an empty cgroup (monitor all the time) using, e.g., -G foo,,bar. Cgroups must have corresponding events, i.e., they always refer to events defined earlier on the command line. If the user wants to track multiple events for a specific cgroup, the user can use -e e1 -e e2 -G foo,foo or just use -e e1 -e e2 -G foo.

To monitor, say, cycles for a cgroup and also system-wide, this command line can be used: perf stat -e cycles -G cgroup_name -a -e cycles.

-o file, --output file

Print the output into the designated file.


--append

Append to the output file designated with the -o option. Ignored if -o is not specified.
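
For example, to collect the results of several runs in one file (results.txt is an arbitrary file name):

$ perf stat -o results.txt -e cycles -- sleep 1
$ perf stat -o results.txt --append -e cycles -- sleep 1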


--log-fd

Log output to fd, instead of stderr. Complementary to --output, and mutually exclusive with it. --append may be used here. Examples:

3>results  perf stat --log-fd 3          -- $cmd
3>>results perf stat --log-fd 3 --append -- $cmd

--pre, --post

Pre and post measurement hooks, e.g.:

perf stat --repeat 10 --null --sync --pre 'make -s O=defconfig-build/clean' -- make -s -j64 O=defconfig-build/ bzImage

-I msecs, --interval-print msecs

Print count deltas every N milliseconds (minimum: 1ms). The overhead percentage could be high in some cases, for instance with small, sub-100ms intervals. Use with caution. Example: perf stat -I 1000 -e cycles -a sleep 5

--interval-count times

Print count deltas for a fixed number of times. This option should be used together with the "-I" option. Example: perf stat -I 1000 --interval-count 2 -e cycles -a


--interval-clear

Clear the screen before next interval.

--timeout msecs

Stop the perf stat session and print count deltas after N milliseconds (minimum: 10 ms). This option is not supported with the "-I" option. Example: perf stat --timeout 2000 -e cycles -a


--metric-only

Only print computed metrics. Print them in a single line. Don’t show any raw values. Not supported with --per-thread.


--per-socket

Aggregate counts per processor socket for system-wide mode measurements. This is a useful mode to detect imbalance between sockets. To enable this mode, use --per-socket in addition to -a (system-wide). The output includes the socket number and the number of online processors on that socket. This is useful to gauge the amount of aggregation.


--per-die

Aggregate counts per processor die for system-wide mode measurements. This is a useful mode to detect imbalance between dies. To enable this mode, use --per-die in addition to -a (system-wide). The output includes the die number and the number of online processors on that die. This is useful to gauge the amount of aggregation.


--per-core

Aggregate counts per physical processor for system-wide mode measurements. This is a useful mode to detect imbalance between physical cores. To enable this mode, use --per-core in addition to -a (system-wide). The output includes the core number and the number of online logical processors on that physical processor.
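
For example, to spot a core-level imbalance in cycles across the whole system:

$ perf stat --per-core -a -e cycles sleep 5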


--per-thread

Aggregate counts per monitored thread when monitoring threads (-t option) or processes (-p option).

-D msecs, --delay msecs

After starting the program, wait msecs before measuring. This is useful to filter out the startup phase of the program, which is often very different.
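
For example, to skip the first second of startup (./myprog stands in for your workload):

$ perf stat -D 1000 -e cycles -- ./myprog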

-T, --transaction

Print statistics of transactional execution if supported.

Stat Record

Stores stat data into perf data file.

-o file, --output file

Output file name.

Stat Report

Reads and reports stat data from perf data file.

-i file, --input file

Input file name.
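
A typical round trip stores counts with stat record and reads them back with stat report (stat.data is an arbitrary file name):

$ perf stat -e cycles record -o stat.data -- sleep 1
$ perf stat report -i stat.data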


--per-socket

Aggregate counts per processor socket for system-wide mode measurements.


--per-die

Aggregate counts per processor die for system-wide mode measurements.


--per-core

Aggregate counts per physical processor for system-wide mode measurements.

-M, --metrics

Print metrics or metricgroups specified in a comma-separated list. For a group all metrics from the group are added. The events from the metrics are automatically measured. See perf list output for the possible metrics and metricgroups.

-A, --no-aggr

Do not aggregate counts across all monitored CPUs.


--topdown

Print top-down level 1 metrics if supported by the CPU. This allows determining bottlenecks in the CPU pipeline for CPU-bound workloads, by breaking down the cycles consumed into frontend bound, backend bound, bad speculation and retiring.

Frontend bound means that the CPU cannot fetch and decode instructions fast enough. Backend bound means that computation or memory access is the bottleneck. Bad speculation means that the CPU wasted cycles due to branch mispredictions and similar issues. Retiring means that the CPU computed without an apparent bottleneck. The bottleneck is only the real bottleneck if the workload is actually bound by the CPU and not by something else.

For best results it is usually a good idea to use it with interval mode like -I 1000, as the bottleneck of workloads can change often.

The top-down metrics are collected per core instead of per CPU thread. Per-core mode is automatically enabled and -a (global monitoring) is needed, requiring root rights or kernel.perf_event_paranoid=-1.

Topdown uses the full Performance Monitoring Unit and needs the NMI watchdog disabled (as root) for best results: echo 0 > /proc/sys/kernel/nmi_watchdog. Otherwise the bottlenecks may be inconsistent on workloads with changing phases.

This enables --metric-only, unless overridden with --no-metric-only.

To interpret the results it is usually necessary to know which CPUs the workload runs on. If needed, the CPUs can be forced using taskset.
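
Putting the above together, a typical invocation combines --topdown with system-wide collection and interval mode:

$ perf stat --topdown -a -I 1000 sleep 10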


--no-merge

Do not merge results from the same PMUs.

When multiple events are created from a single event specification, stat will, by default, aggregate the event counts and show the result in a single row. This option disables that behavior and shows the individual events and counts.

Multiple events are created from a single event specification when:

 1. Prefix or glob matching is used for the PMU name.
 2. Aliases, which are listed immediately after the Kernel PMU events by perf list, are used.
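
For example, on a system with several uncore_imc PMU instances (the PMU name and the data_reads alias here are machine-dependent), the prefix-matched event below is counted once per instance; the first command shows one merged row, the second shows one row per PMU:

$ perf stat -a -e imc/data_reads/ sleep 1
$ perf stat -a -e imc/data_reads/ --no-merge sleep 1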


--smi-cost

Measure SMI cost if msr/aperf/ and msr/smi/ events are supported.

During the measurement, /sys/devices/cpu/freeze_on_smi will be set to freeze core counters on SMI. The aperf counter will not be affected by the setting. The cost of SMI can be measured as (aperf - unhalted core cycles).

In practice, the percentage of SMI cycles is very useful for performance-oriented analysis. --metric-only will be applied by default. The output is SMI cycles%, equal to (aperf - unhalted core cycles) / aperf.

Users who want to get the actual value can apply --no-metric-only.
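
A minimal invocation, assuming the required msr events are present (root is typically needed to write freeze_on_smi):

$ perf stat --smi-cost -a sleep 10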


Examples

$ perf stat -- make

Performance counter stats for 'make':
   83723.452481      task-clock:u (msec)       #    1.004 CPUs utilized
              0      context-switches:u        #    0.000 K/sec
              0      cpu-migrations:u          #    0.000 K/sec
      3,228,188      page-faults:u             #    0.039 M/sec
229,570,665,834      cycles:u                  #    2.742 GHz
313,163,853,778      instructions:u            #    1.36  insn per cycle
 69,704,684,856      branches:u                #  832.559 M/sec
  2,078,861,393      branch-misses:u           #    2.98% of all branches

 83.409183620 seconds time elapsed

 74.684747000 seconds user
  8.739217000 seconds sys


As shown in the example above, we can display three types of timings. We always display the time the counters were enabled/alive:

83.409183620 seconds time elapsed

For workload sessions we also display time the workloads spent in user/system lands:

74.684747000 seconds user
 8.739217000 seconds sys

Those times are the very same as displayed by the time tool.

CSV Format

With -x, perf stat emits a not-quite-CSV output format: commas in the output are not quoted. To make the output easy to parse, it is recommended to use a different separator character, e.g. -x \;

The fields are in this order:

  • optional usec time stamp in fractions of second (with -I xxx)
  • optional CPU, core, or socket identifier
  • optional number of logical CPUs aggregated
  • counter value
  • unit of the counter value or empty
  • event name
  • run time of counter
  • percentage of measurement time the counter was running
  • optional variance if multiple values are collected with -r
  • optional metric value
  • optional unit of metric

Additional metrics may be printed with all earlier fields being empty.
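
As a quick sketch of consuming this output (the field positions assume none of the optional leading fields are present, i.e. no -I and no aggregation options; perf stat writes to stderr, hence the redirection):

$ perf stat -x \; -e cycles -- sleep 1 2>&1 | awk -F';' '{ print $3, $1 }'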

See Also

perf-top(1), perf-list(1)

Referenced By

perf(1), perf-kvm(1), perf-list(1), perf-record(1), perf-report(1), perf-top(1).
