path: root/tools/perf/util/demangle-rust.c
author    Stanislaw Gruszka <sgruszka@redhat.com>  2017-01-30 12:12:47 +0100
committer Kalle Valo <kvalo@codeaurora.org>        2017-01-31 09:27:38 +0200
commit    6715208d0a95ae417203f8e4a7937c1b4c4947f2 (patch)
tree      979bee37f306943ec3c5217dd1f3be45792496f2 /tools/perf/util/demangle-rust.c
parent    a971df0b9d04674e325346c17de9a895425ca5e1 (diff)
rt2800: enable rt3290 unconditionally on pci probe
When we restart the system using sysrq, the RT3290 device does not initialize properly, hence always enable it via the WLAN_FUN_CTRL register on probe.

Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=85461
Reported-and-tested-by: Giedrius Statkevičius <edrius.statkevicius@gmail.com>
Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Diffstat (limited to 'tools/perf/util/demangle-rust.c')
0 files changed, 0 insertions, 0 deletions
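For illustration, here is a minimal user-space C sketch of the read-modify-write pattern the commit message describes: read the function-control register, set the WLAN-enable bit unconditionally, and write the value back during probe. The register offset, bit position, and the mmio_read32()/mmio_write32() helpers are assumptions made for the sketch, not the rt2x00 driver's actual API.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical register offset and bit layout, for illustration only. */
#define WLAN_FUN_CTRL		0x0080
#define WLAN_FUN_CTRL_EN	(1u << 0)

/* Stand-ins for MMIO accessors; a real driver uses its own helpers. */
static uint32_t regs[0x1000 / 4];

static uint32_t mmio_read32(uint32_t offset)
{
	return regs[offset / 4];
}

static void mmio_write32(uint32_t offset, uint32_t value)
{
	regs[offset / 4] = value;
}

/* Enable the wireless function unconditionally, as done on probe. */
static void rt3290_enable_wlan(void)
{
	uint32_t reg = mmio_read32(WLAN_FUN_CTRL);

	reg |= WLAN_FUN_CTRL_EN;	/* no "already enabled?" check */
	mmio_write32(WLAN_FUN_CTRL, reg);
}

int main(void)
{
	rt3290_enable_wlan();
	printf("WLAN_FUN_CTRL = 0x%08x\n", mmio_read32(WLAN_FUN_CTRL));
	return 0;
}
```

The point of the change is the unconditional enable: skipping any "is it already on?" test avoids relying on state left behind by a sysrq restart.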
The patches try not to be too intrusive. Unfortunately, while investigating, I found that modifying the testcase to use per-VMA policies instead of per-task policies brings the OOMs back, but that seems to be a much older and harder-to-fix problem. I have posted an RFC [2], but I believe that fixing the recent regressions has higher priority.

Longer term we might think about how to fix the cpuset mess in a better and less error-prone way. I was, for example, very surprised to learn that cpuset updates change not only task->mems_allowed, but also the nodemask of mempolicies. Until now I expected the nodemask parameter to alloc_pages_nodemask() to be stable. I wonder why we then treat cpusets specially in get_page_from_freelist() and distinguish HARDWALL etc., when there is an unconditional intersection between mempolicy and cpuset. I would expect the nodemask adjustment to save overhead in g_p_f(), but that clearly doesn't happen in the current form. So we have both crazy complexity and overhead, AFAICS.

[1] https://lkml.kernel.org/r/CAFpQJXUq-JuEP=QPidy4p_=FN0rkH5Z-kfB4qBvsf6jMS87Edg@mail.gmail.com
[2] https://lkml.kernel.org/r/7c459f26-13a6-a817-e508-b65b903a8378@suse.cz

This patch (of 4):

Since commit c33d6c06f60f ("mm, page_alloc: avoid looking up the first zone in a zonelist twice") we have had a wrong check for a NULL preferred_zone, which can theoretically happen due to concurrent cpuset modification. We check the zoneref pointer, which is never NULL, when we should check the zone pointer. Also document this in the first_zones_zonelist() comment, per Michal Hocko.

Fixes: c33d6c06f60f ("mm, page_alloc: avoid looking up the first zone in a zonelist twice")
Link: http://lkml.kernel.org/r/20170120103843.24587-2-vbabka@suse.cz
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>
Cc: Ganapatrao Kulkarni <gpkulkarni@gmail.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
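A simplified C sketch of the mistake the patch describes: a first_zones_zonelist()-style lookup returns a pointer into the zoneref array, which is never NULL even when no zone satisfies the request; the usable signal is whether that zoneref's zone field is NULL. The struct layout and the first_match() helper below are a reduced model for illustration, not the kernel's actual definitions.

```c
#include <stdio.h>

/* Reduced model of the kernel's zoneref/zonelist, for illustration only. */
struct zone {
	const char *name;
};

struct zoneref {
	struct zone *zone;	/* NULL in the terminating entry */
	int zone_idx;
};

/*
 * Returns a pointer into the zoneref array.  Like the kernel helper, it
 * never returns NULL: when nothing matches, it points at the terminating
 * entry, whose ->zone member is NULL.
 */
static struct zoneref *first_match(struct zoneref *zrefs, int max_idx)
{
	while (zrefs->zone && zrefs->zone_idx > max_idx)
		zrefs++;
	return zrefs;
}

int main(void)
{
	struct zone normal = { "Normal" };
	struct zoneref zonelist[] = {
		{ &normal, 2 },
		{ NULL, 0 },		/* terminator */
	};

	/* Ask for a zone index no entry satisfies. */
	struct zoneref *preferred = first_match(zonelist, 1);

	if (!preferred)			/* wrong check: never taken */
		puts("never reached");

	if (!preferred->zone)		/* right check: detects empty result */
		puts("no suitable zone");

	return 0;
}
```

Running the sketch prints only "no suitable zone", showing why testing the returned pointer itself can never catch the empty case.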