perf-bench(1)
=============

NAME
----
perf-bench - General framework for benchmark suites

SYNOPSIS
--------
[verse]
'perf bench' [<common options>] <subsystem> <suite> [<options>]

DESCRIPTION
-----------
This 'perf bench' command is a general framework for benchmark suites.

COMMON OPTIONS
--------------
-r::
--repeat=::
Specify the number of times to repeat the run (default 10).

-f::
--format=::
Specify format style.
Currently available format styles are:

'default'::
Default style. This is mainly for human reading.
---------------------
% perf bench sched pipe                      # with no style specified
(executing 1000000 pipe operations between two tasks)
        Total time:5.855 sec
                5.855061 usecs/op
                170792 ops/sec
---------------------

'simple'::
This simple style is friendly for automated processing by scripts.
---------------------
% perf bench --format=simple sched pipe      # specified simple
5.988
---------------------

SUBSYSTEM
---------

'sched'::
	Scheduler and IPC mechanisms.

'mem'::
	Memory access performance.

'numa'::
	NUMA scheduling and MM benchmarks.

'futex'::
	Futex stressing benchmarks.

'all'::
	All benchmark subsystems.

SUITES FOR 'sched'
~~~~~~~~~~~~~~~~~~
*messaging*::
Suite for evaluating performance of scheduler and IPC mechanisms.
Based on hackbench by Rusty Russell.

Options of *messaging*
^^^^^^^^^^^^^^^^^^^^^^
-p::
--pipe::
Use pipe() instead of socketpair().

-t::
--thread::
Be multi-threaded instead of multi-process.

-g::
--group=::
Specify the number of groups.

-l::
--nr_loops=::
Specify the number of loops.

Example of *messaging*
^^^^^^^^^^^^^^^^^^^^^^

---------------------
% perf bench sched messaging                 # run with default options
(20 sender and receiver processes per group)
(10 groups == 400 processes run)

      Total time:0.308 sec

% perf bench sched messaging -t -g 20        # be multi-thread, with 20 groups
(20 sender and receiver threads per group)
(20 groups == 800 threads run)

      Total time:0.582 sec
---------------------

*pipe*::
Suite for pipe() system call.
Based on pipe-test-1m.c by Ingo Molnar.

Options of *pipe*
^^^^^^^^^^^^^^^^^
-l::
--loop=::
Specify the number of loops.

Example of *pipe*
^^^^^^^^^^^^^^^^^

---------------------
% perf bench sched pipe
(executing 1000000 pipe operations between two tasks)

        Total time:8.091 sec
                8.091833 usecs/op
                123581 ops/sec

% perf bench sched pipe -l 1000              # loop 1000
(executing 1000 pipe operations between two tasks)

        Total time:0.016 sec
                16.948000 usecs/op
                59004 ops/sec
---------------------

SUITES FOR 'mem'
~~~~~~~~~~~~~~~~
*memcpy*::
Suite for evaluating performance of simple memory copy in various ways.

Options of *memcpy*
^^^^^^^^^^^^^^^^^^^
-s::
--size::
Specify size of memory to copy (default: 1MB).
Available units are B, KB, MB, GB and TB (case insensitive).

-f::
--function::
Specify function to copy (default: default).
Available functions depend on the architecture. On x86-64,
x86-64-unrolled, x86-64-movsq and x86-64-movsb are supported.

-l::
--nr_loops::
Repeat memcpy invocation this number of times.

-c::
--cycles::
Use perf's cpu-cycles event instead of the gettimeofday syscall.

*memset*::
Suite for evaluating performance of simple memory set in various ways.

Options of *memset*
^^^^^^^^^^^^^^^^^^^
-s::
--size::
Specify size of memory to set (default: 1MB).
Available units are B, KB, MB, GB and TB (case insensitive).

-f::
--function::
Specify function to set (default: default).
Available functions depend on the architecture. On x86-64,
x86-64-unrolled, x86-64-stosq and x86-64-stosb are supported.

-l::
--nr_loops::
Repeat memset invocation this number of times.

-c::
--cycles::
Use perf's cpu-cycles event instead of the gettimeofday syscall.
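Example of *memcpy* and *memset*
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The command lines below are illustrative only: the buffer sizes and loop
counts are arbitrary, the x86-64 function variants exist only on that
architecture, and the output is omitted because the reported numbers vary
from machine to machine.

---------------------
% perf bench mem memcpy --size 4MB --nr_loops 100 --function x86-64-movsq
% perf bench mem memset --size 16MB --cycles # measure with cpu-cycles
---------------------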
SUITES FOR 'numa'
~~~~~~~~~~~~~~~~~
*mem*::
Suite for evaluating NUMA workloads.

SUITES FOR 'futex'
~~~~~~~~~~~~~~~~~~
*hash*::
Suite for evaluating hash tables.

*wake*::
Suite for evaluating wake calls.

*wake-parallel*::
Suite for evaluating parallel wake calls.

*requeue*::
Suite for evaluating requeue calls.

*lock-pi*::
Suite for evaluating futex lock_pi calls.
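As an illustration only (output omitted because results vary by machine),
the futex suites are invoked like any other suite and accept the common
options described above:

---------------------
% perf bench futex hash                      # stress the futex hash table
% perf bench -r 5 futex wake                 # repeat the wake suite 5 times
---------------------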
SEE ALSO
--------
linkperf:perf[1]