#ifndef _UAPI__CODA_PSDEV_H
#define _UAPI__CODA_PSDEV_H

#include <linux/magic.h>

#define CODA_PSDEV_MAJOR	67
#define MAX_CODADEVS		5	/* how many do we allow */

/* messages between coda filesystem in kernel and Venus */
struct upc_req {
	struct list_head	uc_chain;
	caddr_t			uc_data;
	u_short			uc_flags;
	u_short			uc_inSize;	/* Size is at most 5000 bytes */
	u_short			uc_outSize;
	u_short			uc_opcode;	/* copied from data to save lookup */
	int			uc_unique;
	wait_queue_head_t	uc_sleep;	/* process' wait queue */
};

#define CODA_REQ_ASYNC	0x1
#define CODA_REQ_READ	0x2
#define CODA_REQ_WRITE	0x4
#define CODA_REQ_ABORT	0x8

#endif /* _UAPI__CODA_PSDEV_H */
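For context, a minimal, hypothetical sketch of how the uc_flags field and the CODA_REQ_* bits above might be tested from kernel code; upc_req_is_finished() is an illustrative helper name, not something defined by this header or elsewhere in the kernel:

	#include <linux/coda_psdev.h>

	/*
	 * Hypothetical helper (assumption, not kernel API): treat a request
	 * as finished once the userspace daemon has written a reply back
	 * (CODA_REQ_WRITE) or the request has been aborted (CODA_REQ_ABORT).
	 */
	static inline int upc_req_is_finished(const struct upc_req *req)
	{
		return req->uc_flags & (CODA_REQ_WRITE | CODA_REQ_ABORT);
	}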
author    Vlastimil Babka <vbabka@suse.cz>  2017-01-24 15:18:32 -0800
committer Linus Torvalds <torvalds@linux-foundation.org>  2017-01-24 16:26:14 -0800
commit    ea57485af8f4221312a5a95d63c382b45e7840dc (patch)
tree      11dffacd4921729f9009bdb87ab9782efbde81b6 /include/trace/events
parent    ff7a28a074ccbea999dadbb58c46212cf90984c6 (diff)
mm, page_alloc: fix check for NULL preferred_zone
Patch series "fix premature OOM regression in 4.7+ due to cpuset races". This is v2 of my attempt to fix the recent report based on LTP cpuset stress test [1]. The intention is to go to stable 4.9 LTSS with this, as triggering repeated OOMs is not nice. That's why the patches try to be not too intrusive. Unfortunately why investigating I found that modifying the testcase to use per-VMA policies instead of per-task policies will bring the OOM's back, but that seems to be much older and harder to fix problem. I have posted a RFC [2] but I believe that fixing the recent regressions has a higher priority. Longer-term we might try to think how to fix the cpuset mess in a better and less error prone way. I was for example very surprised to learn, that cpuset updates change not only task->mems_allowed, but also nodemask of mempolicies. Until now I expected the parameter to alloc_pages_nodemask() to be stable. I wonder why do we then treat cpusets specially in get_page_from_freelist() and distinguish HARDWALL etc, when there's unconditional intersection between mempolicy and cpuset. I would expect the nodemask adjustment for saving overhead in g_p_f(), but that clearly doesn't happen in the current form. So we have both crazy complexity and overhead, AFAICS. [1] https://lkml.kernel.org/r/CAFpQJXUq-JuEP=QPidy4p_=FN0rkH5Z-kfB4qBvsf6jMS87Edg@mail.gmail.com [2] https://lkml.kernel.org/r/7c459f26-13a6-a817-e508-b65b903a8378@suse.cz This patch (of 4): Since commit c33d6c06f60f ("mm, page_alloc: avoid looking up the first zone in a zonelist twice") we have a wrong check for NULL preferred_zone, which can theoretically happen due to concurrent cpuset modification. We check the zoneref pointer which is never NULL and we should check the zone pointer. Also document this in first_zones_zonelist() comment per Michal Hocko. Fixes: c33d6c06f60f ("mm, page_alloc: avoid looking up the first zone in a zonelist twice") Link: http://lkml.kernel.org/r/20170120103843.24587-2-vbabka@suse.cz Signed-off-by: Vlastimil Babka <vbabka@suse.cz> Acked-by: Mel Gorman <mgorman@techsingularity.net> Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com> Cc: Ganapatrao Kulkarni <gpkulkarni@gmail.com> Cc: Michal Hocko <mhocko@suse.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'include/trace/events')