linux/debian/patches-rt/0087-x86-mm-pat-disable-pre...


From 4dd5348c4bdfbc53044d46d1d9ffb628b317576b Mon Sep 17 00:00:00 2001
Message-Id: <4dd5348c4bdfbc53044d46d1d9ffb628b317576b.1601675151.git.zanussi@kernel.org>
In-Reply-To: <5b5a156f9808b1acf1205606e03da117214549ea.1601675151.git.zanussi@kernel.org>
References: <5b5a156f9808b1acf1205606e03da117214549ea.1601675151.git.zanussi@kernel.org>
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Tue, 11 Dec 2018 21:53:43 +0100
Subject: [PATCH 087/333] x86/mm/pat: disable preemption in __split_large_page() after spin_lock()
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.19/older/patches-4.19.148-rt64.tar.xz
Commit "x86/mm/pat: Disable preemption around __flush_tlb_all()" added a
warning if __flush_tlb_all() is invoked in preemptible context. On !RT
the warning does not trigger because a spin lock is acquired which
disables preemption. On RT the spin lock does not disable preemption and
so the warning is seen.
Disable preemption to avoid the warning __flush_tlb_all().
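
For illustration only (not part of the change): a minimal sketch of the
resulting locking pattern. On PREEMPT_RT, spin_lock() maps to a sleeping
lock and leaves preemption enabled, so preemption must be disabled
explicitly around the local TLB flush. The wrapper name
example_split_under_pgd_lock() is made up for this sketch:

	static void example_split_under_pgd_lock(void)
	{
		spin_lock(&pgd_lock);
		preempt_disable();	/* __flush_tlb_all() must not be preempted */

		/* ... split the large page / modify page tables ... */

		__flush_tlb_all();	/* flush this CPU's TLB */
		preempt_enable();	/* re-enable before releasing the lock */
		spin_unlock(&pgd_lock);
	}

Note that preempt_enable() is called before spin_unlock(), mirroring the
acquisition order on every exit path of __split_large_page().
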
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
arch/x86/mm/pageattr.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index 101f3ad0d6ad..0b0396261ca1 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -687,12 +687,18 @@ __split_large_page(struct cpa_data *cpa, pte_t *kpte, unsigned long address,
pgprot_t ref_prot;
spin_lock(&pgd_lock);
+ /*
+ * Keep preemption disabled after __flush_tlb_all(), which expects not to
+ * be preempted during the flush of the local TLB.
+ */
+ preempt_disable();
/*
* Check for races, another CPU might have split this page
* up for us already:
*/
tmp = _lookup_address_cpa(cpa, address, &level);
if (tmp != kpte) {
+ preempt_enable();
spin_unlock(&pgd_lock);
return 1;
}
@@ -726,6 +732,7 @@ __split_large_page(struct cpa_data *cpa, pte_t *kpte, unsigned long address,
break;
default:
+ preempt_enable();
spin_unlock(&pgd_lock);
return 1;
}
@@ -764,6 +771,7 @@ __split_large_page(struct cpa_data *cpa, pte_t *kpte, unsigned long address,
* going on.
*/
__flush_tlb_all();
+ preempt_enable();
spin_unlock(&pgd_lock);
return 0;
--
2.17.1