From: Peter Zijlstra <peterz@infradead.org>
Date: Wed, 19 Dec 2018 16:46:44 +0100
Subject: [PATCH] perf/x86/intel: Delay memory deallocation until cpu_dead
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.19/older/patches-4.19.10-rt8.tar.xz

intel_pmu_cpu_prepare() allocated memory for ->shared_regs among other
members of struct cpu_hw_events. This memory is released in
intel_pmu_cpu_dying(), which is wrong: the counterpart of the
intel_pmu_cpu_prepare() callback is x86_pmu_dead_cpu().
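
For readers less familiar with the hotplug state machine, here is a
minimal sketch of the pairing rule the paragraph above describes. The
example_* names and the buffer size are hypothetical; cpuhp_setup_state()
and the startup/teardown pairing are the real kernel API. Whatever the
startup callback of a PREPARE-stage state allocates must be freed by the
teardown callback registered for that same state, because that teardown
also runs as rollback when a later step of the UP path fails:

#include <linux/cpuhotplug.h>
#include <linux/errno.h>
#include <linux/init.h>
#include <linux/percpu.h>
#include <linux/slab.h>

#define EXAMPLE_BUF_SIZE 64	/* hypothetical */

static DEFINE_PER_CPU(void *, example_buf);

/* UP path: allocate per-CPU state before the CPU starts. */
static int example_prepare_cpu(unsigned int cpu)
{
	per_cpu(example_buf, cpu) = kzalloc(EXAMPLE_BUF_SIZE, GFP_KERNEL);
	return per_cpu(example_buf, cpu) ? 0 : -ENOMEM;
}

/*
 * DOWN path *and* rollback of a failed UP path: the counterpart of the
 * prepare callback, so the allocation is released exactly once.
 */
static int example_dead_cpu(unsigned int cpu)
{
	kfree(per_cpu(example_buf, cpu));
	per_cpu(example_buf, cpu) = NULL;
	return 0;
}

static int __init example_init(void)
{
	int ret;

	/* the teardown argument is paired with the startup argument */
	ret = cpuhp_setup_state(CPUHP_BP_PREPARE_DYN, "example:prepare",
				example_prepare_cpu, example_dead_cpu);
	return ret < 0 ? ret : 0;
}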
With the deallocation in the dying callback, if the CPU fails on the UP
path between CPUHP_PERF_X86_PREPARE and CPUHP_AP_PERF_X86_STARTING, it
won't release the memory, but will allocate new memory on the next
attempt to online the CPU, leaking the old memory.
Also, if the CPU down path fails between CPUHP_AP_PERF_X86_STARTING and
CPUHP_PERF_X86_PREPARE, the CPU will go back online but never
re-allocate the memory that was released in x86_pmu_dying_cpu().
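
Spelled out against the real hotplug states (a sketch of the two
failure orderings, not a trace from a live system):

    UP path fails after PREPARE (old code):
      CPUHP_PERF_X86_PREPARE      -> intel_pmu_cpu_prepare() allocates
      a later UP step fails       -> rollback runs the PREPARE teardown,
                                     x86_pmu_dead_cpu(); the old code
                                     freed nothing there
      next online attempt         -> intel_pmu_cpu_prepare() allocates
                                     again, leaking the old buffers

    DOWN path fails after STARTING (old code):
      CPUHP_AP_PERF_X86_STARTING  -> teardown intel_pmu_cpu_dying()
                                     frees the memory
      a later DOWN step fails     -> the CPU is brought back online
                                     without re-running the PREPARE
                                     callback, so it runs with its
                                     per-CPU buffers already freed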
Make the memory allocation/free symmetrical with regard to the CPU
hotplug notifier by moving the deallocation to intel_pmu_cpu_dead().

This started in commit
a7e3ed1e47011 ("perf: Add support for supplementary event registers").

Cc: stable@vger.kernel.org
Reported-by: He Zhe <zhe.he@windriver.com>
Fixes: a7e3ed1e47011 ("perf: Add support for supplementary event registers")
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
[bigeasy: patch description]
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 arch/x86/events/intel/core.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -3440,6 +3440,11 @@ static void free_excl_cntrs(int cpu)
 
 static void intel_pmu_cpu_dying(int cpu)
 {
+	fini_debug_store_on_cpu(cpu);
+}
+
+static void intel_pmu_cpu_dead(int cpu)
+{
 	struct cpu_hw_events *cpuc = &per_cpu(cpu_hw_events, cpu);
 	struct intel_shared_regs *pc;
 
@@ -3451,8 +3456,6 @@ static void intel_pmu_cpu_dying(int cpu)
 	}
 
 	free_excl_cntrs(cpu);
-
-	fini_debug_store_on_cpu(cpu);
 }
 
 static void intel_pmu_sched_task(struct perf_event_context *ctx,
@@ -3541,6 +3544,7 @@ static __initconst const struct x86_pmu
 	.cpu_prepare		= intel_pmu_cpu_prepare,
 	.cpu_starting		= intel_pmu_cpu_starting,
 	.cpu_dying		= intel_pmu_cpu_dying,
+	.cpu_dead		= intel_pmu_cpu_dead,
 };
 
 static struct attribute *intel_pmu_attrs[];
@@ -3581,6 +3585,8 @@ static __initconst const struct x86_pmu
 	.cpu_prepare		= intel_pmu_cpu_prepare,
 	.cpu_starting		= intel_pmu_cpu_starting,
 	.cpu_dying		= intel_pmu_cpu_dying,
+	.cpu_dead		= intel_pmu_cpu_dead,
+
 	.guest_get_msrs		= intel_guest_get_msrs,
 	.sched_task		= intel_pmu_sched_task,
 };
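
For context, the new .cpu_dead callback is invoked from the generic x86
PMU code. A sketch from memory of the wrapper and its registration in
arch/x86/events/core.c (not part of this patch, and possibly differing
in detail from 4.19):

static int x86_pmu_dead_cpu(unsigned int cpu)
{
	if (x86_pmu.cpu_dead)
		x86_pmu.cpu_dead(cpu);

	return 0;
}

	/*
	 * In init_hw_perf_events(): x86_pmu_dead_cpu() is registered as
	 * the teardown of the PREPARE state, so it also runs on rollback
	 * of a failed online attempt.
	 */
	cpuhp_setup_state(CPUHP_PERF_X86_PREPARE, "perf/x86:prepare",
			  x86_pmu_prepare_cpu, x86_pmu_dead_cpu);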