From b9aa9db48ab0cf192b26cafb81de88fa671b1bac Mon Sep 17 00:00:00 2001
From: Thomas Gleixner <tglx@linutronix.de>
Date: Fri, 1 Mar 2013 11:17:42 +0100
Subject: [PATCH 134/266] futex: Ensure lock/unlock symmetry versus pi_lock and
 hash bucket lock
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.19/older/patches-4.19.31-rt18.tar.xz
In exit_pi_state_list() we have the following locking construct:
   spin_lock(&hb->lock);
   raw_spin_lock_irq(&curr->pi_lock);
   ...
   spin_unlock(&hb->lock);
In !RT this works, but on RT the migrate_enable() function, which is
called from spin_unlock(), sees atomic context due to the held pi_lock
and merely decrements the migrate_disable_atomic counter of the
task. The next call to migrate_disable() then sees the counter being
negative and issues a warning. That check should already be in
migrate_enable().
Fix this by dropping pi_lock before unlocking hb->lock and reacquiring
pi_lock afterwards. This is safe as the loop code reevaluates head
again under the pi_lock.
Reported-by: Yong Zhang <yong.zhang@windriver.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
kernel/futex.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/kernel/futex.c b/kernel/futex.c
index a59202dd2c3f..8f58ce04bebf 100644
--- a/kernel/futex.c
+++ b/kernel/futex.c
@@ -918,7 +918,9 @@ void exit_pi_state_list(struct task_struct *curr)
 		if (head->next != next) {
 			/* retain curr->pi_lock for the loop invariant */
 			raw_spin_unlock(&pi_state->pi_mutex.wait_lock);
+			raw_spin_unlock_irq(&curr->pi_lock);
 			spin_unlock(&hb->lock);
+			raw_spin_lock_irq(&curr->pi_lock);
 			put_pi_state(pi_state);
 			continue;
 		}
--
2.20.1