/*
 * The owner field of the rw_semaphore structure will be set to
 * RWSEM_READER_OWNED when a reader grabs the lock. A writer will clear
 * the owner field when it unlocks. A reader, on the other hand, will
 * not touch the owner field when it unlocks.
 *
 * In essence, the owner field now has the following 3 states:
 *  1) 0
 *     - lock is free or the owner hasn't set the field yet
 *  2) RWSEM_READER_OWNED
 *     - lock is currently or previously owned by readers (lock is free
 *       or not set by owner yet)
 *  3) Other non-zero value
 *     - a writer owns the lock
 */
#define RWSEM_READER_OWNED	((struct task_struct *)1UL)

#ifdef CONFIG_RWSEM_SPIN_ON_OWNER
/*
 * All writes to owner are protected by WRITE_ONCE() to make sure that
 * store tearing can't happen as optimistic spinners may read and use
 * the owner value concurrently without lock. Read from owner, however,
 * may not need READ_ONCE() as long as the pointer value is only used
 * for comparison and isn't being dereferenced.
 */
static inline void rwsem_set_owner(struct rw_semaphore *sem)
{
	WRITE_ONCE(sem->owner, current);
}

static inline void rwsem_clear_owner(struct rw_semaphore *sem)
{
	WRITE_ONCE(sem->owner, NULL);
}

static inline void rwsem_set_reader_owned(struct rw_semaphore *sem)
{
	/*
	 * We check the owner value first to make sure that we will only
	 * do a write to the rwsem cacheline when it is really necessary
	 * to minimize cacheline contention.
	 */
	if (sem->owner != RWSEM_READER_OWNED)
		WRITE_ONCE(sem->owner, RWSEM_READER_OWNED);
}

static inline bool rwsem_owner_is_writer(struct task_struct *owner)
{
	return owner && owner != RWSEM_READER_OWNED;
}

static inline bool rwsem_owner_is_reader(struct task_struct *owner)
{
	return owner == RWSEM_READER_OWNED;
}
#else
static inline void rwsem_set_owner(struct rw_semaphore *sem)
{
}

static inline void rwsem_clear_owner(struct rw_semaphore *sem)
{
}

static inline void rwsem_set_reader_owned(struct rw_semaphore *sem)
{
}
#endif
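To make the three owner states concrete, here is a minimal, self-contained sketch (plain userspace C, not kernel code; struct task_struct is stubbed out and the helper owner_state() is hypothetical) that classifies an owner pointer the same way rwsem_owner_is_writer() and rwsem_owner_is_reader() do:

/* owner_demo.c - hypothetical illustration, not kernel code */
#include <stdio.h>

struct task_struct { int pid; };	/* stub standing in for the kernel type */

#define RWSEM_READER_OWNED	((struct task_struct *)1UL)

/* Classify an owner pointer into the three states listed above. */
static const char *owner_state(struct task_struct *owner)
{
	if (!owner)
		return "free (or owner not yet recorded)";
	if (owner == RWSEM_READER_OWNED)
		return "reader-owned (currently or previously)";
	return "writer-owned";		/* any other non-zero value */
}

int main(void)
{
	struct task_struct writer = { .pid = 42 };

	printf("NULL               -> %s\n", owner_state(NULL));
	printf("RWSEM_READER_OWNED -> %s\n", owner_state(RWSEM_READER_OWNED));
	printf("&writer            -> %s\n", owner_state(&writer));
	return 0;
}

The design point this illustrates: RWSEM_READER_OWNED is a sentinel pointer value (1) that can never be a real task_struct address, so a single word encodes all three states, and an optimistic spinner can decide whether to keep spinning with a comparison-only read that never dereferences the value.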
author:    Linus Torvalds <torvalds@linux-foundation.org>  2017-01-01 12:27:05 -0800
committer: Linus Torvalds <torvalds@linux-foundation.org>  2017-01-01 12:27:05 -0800
commit:    4759d386d55fef452d692bf101167914437e848e (patch)
tree:      e7109c192ec589fcea2a98f9702aa3c0e4009581
parent:    238d1d0f79f619d75c2cc741d6770fb0986aef24 (diff)
parent:    1db175428ee374489448361213e9c3b749d14900 (diff)
Merge branch 'libnvdimm-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm
Pull DAX updates from Dan Williams:
 "The completion of Jan's DAX work for 4.10. As I mentioned in the
  libnvdimm-for-4.10 pull request, these are some final fixes for the
  DAX dirty-cacheline-tracking invalidation work that was merged
  through the -mm, ext4, and xfs trees in -rc1.

  These patches were prepared prior to the merge window, but we waited
  for 4.10-rc1 to have a stable merge base after all the prerequisites
  were merged.

  Quoting Jan on the overall changes in these patches:

    "So I'd like all these 6 patches to go for rc2. The first three
     patches fix invalidation of exceptional DAX entries (a bug which
     is there for a long time) - without these patches data loss can
     occur on power failure even though user called fsync(2). The
     other three patches change locking of DAX faults so that
     ->iomap_begin() is called in a more relaxed locking context and we
     are safe to start a transaction there for ext4"

  These have received a build success notification from the kbuild
  robot, and pass the latest libnvdimm unit tests. There have not been
  any -next releases since -rc1, so they have not appeared there"

* 'libnvdimm-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm:
  ext4: Simplify DAX fault path
  dax: Call ->iomap_begin without entry lock during dax fault
  dax: Finish fault completely when loading holes
  dax: Avoid page invalidation races and unnecessary radix tree traversals
  mm: Invalidate DAX radix tree entries only if appropriate
  ext2: Return BH_New buffers for zeroed blocks