commit 387dbb7803
parent b2080e6bd9
Author: Ben Hutchings <ben@decadent.org.uk>
Date:   2016-10-11 19:58:48 +01:00

    [rt] Update to 4.8-rt1 and re-enable

291 changed files with 4237 additions and 1808 deletions

debian/changelog

@@ -25,6 +25,7 @@ linux (4.8.1-1~exp1) UNRELEASED; urgency=medium
reading the kernel log by default (sysctl: kernel.dmesg_restrict)
* bug script: Optionally use sudo to read a restricted kernel log, and fall
back to writing a placeholder
* [rt] Update to 4.8-rt1 and re-enable
-- Ben Hutchings <ben@decadent.org.uk> Sat, 01 Oct 2016 21:51:33 +0100


@@ -47,7 +47,7 @@ debug-info: true
signed-modules: true
[featureset-rt_base]
-enabled: false
+enabled: true
[description]
part-long-up: This kernel is not suitable for SMP (multi-processor,


@@ -1,7 +1,7 @@
From: "Yadi.hu" <yadi.hu@windriver.com>
Date: Wed, 10 Dec 2014 10:32:09 +0800
Subject: ARM: enable irq in translation/section permission fault handlers
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
Probably happens on all ARM, with
CONFIG_PREEMPT_RT_FULL


@@ -1,7 +1,7 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Thu, 21 Mar 2013 19:01:05 +0100
Subject: printk: Drop the logbuf_lock more often
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
The lock is held with irqs off. The latency drops 500us+ on my arm box
with a "full" buffer after executing "dmesg" on the shell.
@@ -13,7 +13,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
--- a/kernel/printk/printk.c
+++ b/kernel/printk/printk.c
@@ -1268,6 +1268,7 @@ static int syslog_print_all(char __user
@@ -1399,6 +1399,7 @@ static int syslog_print_all(char __user
{
char *text;
int len = 0;
@@ -21,7 +21,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
text = kmalloc(LOG_LINE_MAX + PREFIX_MAX, GFP_KERNEL);
if (!text)
@@ -1279,6 +1280,14 @@ static int syslog_print_all(char __user
@@ -1410,6 +1411,14 @@ static int syslog_print_all(char __user
u64 seq;
u32 idx;
enum log_flags prev;
@@ -36,7 +36,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
/*
* Find first record that fits, including all following records,
@@ -1294,6 +1303,14 @@ static int syslog_print_all(char __user
@@ -1425,6 +1434,14 @@ static int syslog_print_all(char __user
prev = msg->flags;
idx = log_next(idx);
seq++;
@@ -51,7 +51,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
}
/* move first record forward until length fits into the buffer */
@@ -1307,6 +1324,14 @@ static int syslog_print_all(char __user
@@ -1438,6 +1455,14 @@ static int syslog_print_all(char __user
prev = msg->flags;
idx = log_next(idx);
seq++;
@@ -66,7 +66,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
}
/* last message fitting into this dump */
@@ -1347,6 +1372,7 @@ static int syslog_print_all(char __user
@@ -1478,6 +1503,7 @@ static int syslog_print_all(char __user
clear_seq = log_next_seq;
clear_idx = log_next_idx;
}

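A sketch of the pattern this patch applies - periodically releasing a
contended irqs-off lock inside a long copy loop so waiters and interrupts
get a window - with hypothetical names, not code from the patch:

	static DEFINE_RAW_SPINLOCK(log_lock);	/* stand-in for logbuf_lock */
	static int nr_records;

	static void copy_all_records(void)
	{
		int i;

		raw_spin_lock_irq(&log_lock);
		for (i = 0; i < nr_records; i++) {
			copy_one_record(i);	/* hypothetical helper */
			if ((i & 1023) == 0) {
				/* drop the lock every 1024 records */
				raw_spin_unlock_irq(&log_lock);
				raw_spin_lock_irq(&log_lock);
			}
		}
		raw_spin_unlock_irq(&log_lock);
	}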

@@ -1,7 +1,7 @@
From: Josh Cartwright <joshc@ni.com>
Date: Thu, 11 Feb 2016 11:54:01 -0600
Subject: KVM: arm/arm64: downgrade preempt_disable()d region to migrate_disable()
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
kvm_arch_vcpu_ioctl_run() disables the use of preemption when updating
the vgic and timer states to prevent the calling task from migrating to
@@ -23,7 +23,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -581,7 +581,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_v
@@ -584,7 +584,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_v
* involves poking the GIC, which must be done in a
* non-preemptible context.
*/
@@ -32,7 +32,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
kvm_pmu_flush_hwstate(vcpu);
kvm_timer_flush_hwstate(vcpu);
kvm_vgic_flush_hwstate(vcpu);
@@ -602,7 +602,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_v
@@ -605,7 +605,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_v
kvm_pmu_sync_hwstate(vcpu);
kvm_timer_sync_hwstate(vcpu);
kvm_vgic_sync_hwstate(vcpu);
@@ -41,7 +41,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
continue;
}
@@ -658,7 +658,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_v
@@ -661,7 +661,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_v
kvm_vgic_sync_hwstate(vcpu);


@@ -1,7 +1,7 @@
From: Marcelo Tosatti <mtosatti@redhat.com>
Date: Wed, 8 Apr 2015 20:33:25 -0300
Subject: KVM: lapic: mark LAPIC timer handler as irqsafe
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
Since the lapic timer handler only wakes up a simple waitqueue,
it can be executed from hardirq context.
@@ -16,7 +16,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -1870,6 +1870,7 @@ int kvm_create_lapic(struct kvm_vcpu *vc
@@ -1938,6 +1938,7 @@ int kvm_create_lapic(struct kvm_vcpu *vc
hrtimer_init(&apic->lapic_timer.timer, CLOCK_MONOTONIC,
HRTIMER_MODE_ABS_PINNED);
apic->lapic_timer.timer.function = apic_timer_fn;


@@ -1,7 +1,7 @@
From: Steven Rostedt <rostedt@goodmis.org>
Date: Wed, 13 Feb 2013 09:26:05 -0500
Subject: acpi/rt: Convert acpi_gbl_hardware lock back to a raw_spinlock_t
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
We hit the following bug with 3.6-rt:
@@ -84,7 +84,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
/* Mutex for _OSI support */
--- a/drivers/acpi/acpica/hwregs.c
+++ b/drivers/acpi/acpica/hwregs.c
@@ -269,14 +269,14 @@ acpi_status acpi_hw_clear_acpi_status(vo
@@ -363,14 +363,14 @@ acpi_status acpi_hw_clear_acpi_status(vo
ACPI_BITMASK_ALL_FIXED_STATUS,
ACPI_FORMAT_UINT64(acpi_gbl_xpm1a_status.address)));
@@ -103,7 +103,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
goto exit;
--- a/drivers/acpi/acpica/hwxface.c
+++ b/drivers/acpi/acpica/hwxface.c
@@ -374,7 +374,7 @@ acpi_status acpi_write_bit_register(u32
@@ -373,7 +373,7 @@ acpi_status acpi_write_bit_register(u32
return_ACPI_STATUS(AE_BAD_PARAMETER);
}
@@ -112,7 +112,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
/*
* At this point, we know that the parent register is one of the
@@ -435,7 +435,7 @@ acpi_status acpi_write_bit_register(u32
@@ -434,7 +434,7 @@ acpi_status acpi_write_bit_register(u32
unlock_and_exit:
@@ -143,7 +143,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
/* Delete the reader/writer lock */
--- a/include/acpi/platform/aclinux.h
+++ b/include/acpi/platform/aclinux.h
@@ -127,6 +127,7 @@
@@ -131,6 +131,7 @@
#define acpi_cache_t struct kmem_cache
#define acpi_spinlock spinlock_t *
@@ -151,7 +151,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
#define acpi_cpu_flags unsigned long
/* Use native linux version of acpi_os_allocate_zeroed */
@@ -145,6 +146,20 @@
@@ -149,6 +150,20 @@
#define ACPI_USE_ALTERNATE_PROTOTYPE_acpi_os_get_thread_id
#define ACPI_USE_ALTERNATE_PROTOTYPE_acpi_os_create_lock


@@ -1,7 +1,7 @@
From: Anders Roxell <anders.roxell@linaro.org>
Date: Thu, 14 May 2015 17:52:17 +0200
Subject: arch/arm64: Add lazy preempt support
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
arm64 is missing support for PREEMPT_RT. The main feature which is
lacking is support for lazy preemption. The arch-specific entry code,
@@ -13,21 +13,21 @@ indicate that support for full RT preemption is now available.
Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
---
arch/arm64/Kconfig | 1 +
arch/arm64/include/asm/thread_info.h | 3 +++
arch/arm64/include/asm/thread_info.h | 6 +++++-
arch/arm64/kernel/asm-offsets.c | 1 +
arch/arm64/kernel/entry.S | 13 ++++++++++---
4 files changed, 15 insertions(+), 3 deletions(-)
4 files changed, 17 insertions(+), 4 deletions(-)
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -81,6 +81,7 @@ config ARM64
@@ -90,6 +90,7 @@ config ARM64
select HAVE_PERF_EVENTS
select HAVE_PERF_REGS
select HAVE_PERF_USER_STACK_DUMP
select HAVE_RCU_TABLE_FREE
+ select HAVE_PREEMPT_LAZY
select HAVE_REGS_AND_STACK_ACCESS_API
select HAVE_RCU_TABLE_FREE
select HAVE_SYSCALL_TRACEPOINTS
select IOMMU_DMA if IOMMU_SUPPORT
select IRQ_DOMAIN
--- a/arch/arm64/include/asm/thread_info.h
+++ b/arch/arm64/include/asm/thread_info.h
@@ -49,6 +49,7 @@ struct thread_info {
@@ -54,9 +54,19 @@ Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
#define _TIF_NOHZ (1 << TIF_NOHZ)
#define _TIF_SYSCALL_TRACE (1 << TIF_SYSCALL_TRACE)
#define _TIF_SYSCALL_AUDIT (1 << TIF_SYSCALL_AUDIT)
@@ -132,7 +135,8 @@ static inline struct thread_info *curren
#define _TIF_32BIT (1 << TIF_32BIT)
#define _TIF_WORK_MASK (_TIF_NEED_RESCHED | _TIF_SIGPENDING | \
- _TIF_NOTIFY_RESUME | _TIF_FOREIGN_FPSTATE)
+ _TIF_NOTIFY_RESUME | _TIF_FOREIGN_FPSTATE | \
+ _TIF_NEED_RESCHED_LAZY)
#define _TIF_SYSCALL_WORK (_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | \
_TIF_SYSCALL_TRACEPOINT | _TIF_SECCOMP | \
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -36,6 +36,7 @@ int main(void)
@@ -37,6 +37,7 @@ int main(void)
BLANK();
DEFINE(TI_FLAGS, offsetof(struct thread_info, flags));
DEFINE(TI_PREEMPT, offsetof(struct thread_info, preempt_count));
@@ -66,7 +76,7 @@ Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
DEFINE(TI_CPU, offsetof(struct thread_info, cpu));
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -411,11 +411,16 @@ ENDPROC(el1_sync)
@@ -434,11 +434,16 @@ ENDPROC(el1_sync)
#ifdef CONFIG_PREEMPT
ldr w24, [tsk, #TI_PREEMPT] // get preempt count
@@ -86,7 +96,7 @@ Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
#endif
#ifdef CONFIG_TRACE_IRQFLAGS
bl trace_hardirqs_on
@@ -429,6 +434,7 @@ ENDPROC(el1_irq)
@@ -452,6 +457,7 @@ ENDPROC(el1_irq)
1: bl preempt_schedule_irq // irq en/disable is done inside
ldr x0, [tsk, #TI_FLAGS] // get new tasks TI_FLAGS
tbnz x0, #TIF_NEED_RESCHED, 1b // needs rescheduling?
@@ -94,7 +104,7 @@ Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
ret x24
#endif
@@ -675,6 +681,7 @@ ENDPROC(cpu_switch_to)
@@ -708,6 +714,7 @@ ENDPROC(cpu_switch_to)
*/
work_pending:
tbnz x1, #TIF_NEED_RESCHED, work_resched

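One reading of the modified el1_irq exit path above, as pseudo-C
(illustrative only; the field and flag names follow the patch, the helper
itself is hypothetical):

	static bool irq_exit_should_resched(struct thread_info *ti)
	{
		if (ti->preempt_count)
			return false;		/* hard preemption disabled */
		if (ti->flags & _TIF_NEED_RESCHED)
			return true;		/* always honoured */
		/* a lazy request is honoured only if the lazy count is zero */
		return (ti->flags & _TIF_NEED_RESCHED_LAZY) &&
		       !ti->preempt_lazy_count;
	}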

@@ -1,7 +1,7 @@
From: Benedikt Spranger <b.spranger@linutronix.de>
Date: Sat, 6 Mar 2010 17:47:10 +0100
Subject: ARM: AT91: PIT: Remove irq handler when clock event is unused
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
Setup and remove the interrupt handler in clock event mode selection.
This avoids calling the (shared) interrupt handler when the device is
@@ -14,9 +14,9 @@ commit 8fe82a55 ("ARM: at91: sparse irq support") which is included since v3.6.
Patch based on what Sami Pietikäinen <Sami.Pietikainen@wapice.com> suggested].
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
drivers/clocksource/timer-atmel-pit.c | 17 +++++++++--------
drivers/clocksource/timer-atmel-st.c | 32 ++++++++++++++++++++++----------
2 files changed, 31 insertions(+), 18 deletions(-)
drivers/clocksource/timer-atmel-pit.c | 18 +++++++++---------
drivers/clocksource/timer-atmel-st.c | 34 ++++++++++++++++++++++------------
2 files changed, 31 insertions(+), 21 deletions(-)
--- a/drivers/clocksource/timer-atmel-pit.c
+++ b/drivers/clocksource/timer-atmel-pit.c
@@ -45,24 +45,18 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
/* update clocksource counter */
data->cnt += data->cycle * PIT_PICNT(pit_read(data->base, AT91_PIT_PIVR));
@@ -181,7 +190,6 @@ static void __init at91sam926x_pit_commo
{
unsigned long pit_rate;
unsigned bits;
- int ret;
/*
* Use our actual MCK to figure out how many MCK/16 ticks per
@@ -206,13 +214,6 @@ static void __init at91sam926x_pit_commo
data->clksrc.flags = CLOCK_SOURCE_IS_CONTINUOUS;
clocksource_register_hz(&data->clksrc, pit_rate);
@@ -211,15 +220,6 @@ static int __init at91sam926x_pit_common
return ret;
}
- /* Set up irq handler */
- ret = request_irq(data->irq, at91sam926x_pit_interrupt,
- IRQF_SHARED | IRQF_TIMER | IRQF_IRQPOLL,
- "at91_tick", data);
- if (ret)
- panic(pr_fmt("Unable to setup IRQ\n"));
- if (ret) {
- pr_err("Unable to setup IRQ\n");
- return ret;
- }
-
/* Set up and register clockevents */
data->clkevt.name = "pit";
@@ -116,7 +110,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
/* PIT for periodic irqs; fixed rate of 1/HZ */
irqmask = AT91_ST_PITS;
regmap_write(regmap_st, AT91_ST_PIMR, timer_latch);
@@ -198,7 +217,7 @@ static void __init atmel_st_timer_init(s
@@ -198,7 +217,7 @@ static int __init atmel_st_timer_init(st
{
struct clk *sclk;
unsigned int sclk_rate, val;
@@ -124,24 +118,28 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+ int ret;
regmap_st = syscon_node_to_regmap(node);
if (IS_ERR(regmap_st))
@@ -210,17 +229,10 @@ static void __init atmel_st_timer_init(s
if (IS_ERR(regmap_st)) {
@@ -212,21 +231,12 @@ static int __init atmel_st_timer_init(st
regmap_read(regmap_st, AT91_ST_SR, &val);
/* Get the interrupts property */
- irq = irq_of_parse_and_map(node, 0);
- if (!irq)
- if (!irq) {
+ atmel_st_irq = irq_of_parse_and_map(node, 0);
+ if (!atmel_st_irq)
panic(pr_fmt("Unable to get IRQ from DT\n"));
+ if (!atmel_st_irq) {
pr_err("Unable to get IRQ from DT\n");
return -EINVAL;
}
- /* Make IRQs happen for the system timer */
- ret = request_irq(irq, at91rm9200_timer_interrupt,
- IRQF_SHARED | IRQF_TIMER | IRQF_IRQPOLL,
- "at91_tick", regmap_st);
- if (ret)
- panic(pr_fmt("Unable to setup IRQ\n"));
- if (ret) {
- pr_err("Unable to setup IRQ\n");
- return ret;
- }
-
sclk = of_clk_get(node, 0);
if (IS_ERR(sclk))
panic(pr_fmt("Unable to get slow clock\n"));
if (IS_ERR(sclk)) {
pr_err("Unable to get slow clock\n");


@@ -1,7 +1,7 @@
From: Thomas Gleixner <tglx@linutronix.de>
Date: Sat, 1 May 2010 18:29:35 +0200
Subject: ARM: at91: tclib: Default to tclib timer for RT
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
RT is not too happy about the shared timer interrupt in AT91
devices. Default to tclib timer for RT.


@@ -1,7 +1,7 @@
From: Frank Rowand <frank.rowand@am.sony.com>
Date: Mon, 19 Sep 2011 14:51:14 -0700
Subject: arm: Convert arm boot_lock to raw
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
The arm boot_lock is used by the secondary processor startup code. The locking
task is the idle thread, which has idle->sched_class == &idle_sched_class.
@@ -168,16 +168,16 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
#endif
--- a/arch/arm/mach-omap2/omap-smp.c
+++ b/arch/arm/mach-omap2/omap-smp.c
@@ -43,7 +43,7 @@
/* SCU base address */
static void __iomem *scu_base;
@@ -64,7 +64,7 @@ static const struct omap_smp_config omap
.startup_addr = omap5_secondary_startup,
};
-static DEFINE_SPINLOCK(boot_lock);
+static DEFINE_RAW_SPINLOCK(boot_lock);
void __iomem *omap4_get_scu_base(void)
{
@@ -74,8 +74,8 @@ static void omap4_secondary_init(unsigne
@@ -131,8 +131,8 @@ static void omap4_secondary_init(unsigne
/*
* Synchronise with the boot thread.
*/
@@ -188,7 +188,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
}
static int omap4_boot_secondary(unsigned int cpu, struct task_struct *idle)
@@ -89,7 +89,7 @@ static int omap4_boot_secondary(unsigned
@@ -146,7 +146,7 @@ static int omap4_boot_secondary(unsigned
* Set synchronisation state between this boot processor
* and the secondary one
*/
@@ -197,7 +197,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
/*
* Update the AuxCoreBoot0 with boot state for secondary core.
@@ -166,7 +166,7 @@ static int omap4_boot_secondary(unsigned
@@ -223,7 +223,7 @@ static int omap4_boot_secondary(unsigned
* Now the secondary core is starting up let it run its
* calibrations, then wait for it to finish
*/
@@ -368,7 +368,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
}
--- a/arch/arm/plat-versatile/platsmp.c
+++ b/arch/arm/plat-versatile/platsmp.c
@@ -30,7 +30,7 @@ static void write_pen_release(int val)
@@ -32,7 +32,7 @@ static void write_pen_release(int val)
sync_cache_w(&pen_release);
}
@@ -377,7 +377,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
void versatile_secondary_init(unsigned int cpu)
{
@@ -43,8 +43,8 @@ void versatile_secondary_init(unsigned i
@@ -45,8 +45,8 @@ void versatile_secondary_init(unsigned i
/*
* Synchronise with the boot thread.
*/
@@ -388,7 +388,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
}
int versatile_boot_secondary(unsigned int cpu, struct task_struct *idle)
@@ -55,7 +55,7 @@ int versatile_boot_secondary(unsigned in
@@ -57,7 +57,7 @@ int versatile_boot_secondary(unsigned in
* Set synchronisation state between this boot processor
* and the secondary one
*/
@@ -397,7 +397,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
/*
* This is really belt and braces; we hold unintended secondary
@@ -85,7 +85,7 @@ int versatile_boot_secondary(unsigned in
@@ -87,7 +87,7 @@ int versatile_boot_secondary(unsigned in
* now the secondary core is starting up let it run its
* calibrations, then wait for it to finish
*/

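Reduced to a sketch (hypothetical lock name): a raw_spinlock_t remains a
truly spinning lock under PREEMPT_RT_FULL, which is what the secondary
bring-up path needs, because the not-yet-scheduled idle task cannot block
on the rtmutex that backs a plain spinlock_t on RT:

	static DEFINE_RAW_SPINLOCK(example_boot_lock);	/* was: DEFINE_SPINLOCK */

	static void example_secondary_init(unsigned int cpu)
	{
		/* synchronise with the boot thread */
		raw_spin_lock(&example_boot_lock);	/* was: spin_lock */
		raw_spin_unlock(&example_boot_lock);	/* was: spin_unlock */
	}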

@@ -1,7 +1,7 @@
Subject: arm: Enable highmem for rt
From: Thomas Gleixner <tglx@linutronix.de>
Date: Wed, 13 Feb 2013 11:03:11 +0100
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
fixup highmem for ARM.


@@ -1,7 +1,7 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Mon, 11 Mar 2013 21:37:27 +0100
Subject: arm/highmem: Flush tlb on unmap
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
The tlb should be flushed on unmap and thus make the mapping entry
invalid. This is only done in the non-debug case which does not look


@@ -1,22 +1,23 @@
Subject: arm: Add support for lazy preemption
From: Thomas Gleixner <tglx@linutronix.de>
Date: Wed, 31 Oct 2012 12:04:11 +0100
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
Implement the arm pieces for lazy preempt.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
arch/arm/Kconfig | 1 +
arch/arm/include/asm/thread_info.h | 3 +++
arch/arm/include/asm/thread_info.h | 8 ++++++--
arch/arm/kernel/asm-offsets.c | 1 +
arch/arm/kernel/entry-armv.S | 13 +++++++++++--
arch/arm/kernel/entry-armv.S | 19 ++++++++++++++++---
arch/arm/kernel/entry-common.S | 9 +++++++--
arch/arm/kernel/signal.c | 3 ++-
5 files changed, 18 insertions(+), 3 deletions(-)
6 files changed, 33 insertions(+), 8 deletions(-)
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -71,6 +71,7 @@ config ARM
@@ -75,6 +75,7 @@ config ARM
select HAVE_PERF_EVENTS
select HAVE_PERF_REGS
select HAVE_PERF_USER_STACK_DUMP
@@ -34,11 +35,13 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
mm_segment_t addr_limit; /* address limit */
struct task_struct *task; /* main task structure */
__u32 cpu; /* cpu */
@@ -143,6 +144,7 @@ extern int vfp_restore_user_hwstate(stru
@@ -142,7 +143,8 @@ extern int vfp_restore_user_hwstate(stru
#define TIF_SYSCALL_TRACE 4 /* syscall trace active */
#define TIF_SYSCALL_AUDIT 5 /* syscall auditing active */
#define TIF_SYSCALL_TRACEPOINT 6 /* syscall tracepoint instrumentation */
#define TIF_SECCOMP 7 /* seccomp syscall filtering active */
+#define TIF_NEED_RESCHED_LAZY 8
-#define TIF_SECCOMP 7 /* seccomp syscall filtering active */
+#define TIF_SECCOMP 8 /* seccomp syscall filtering active */
+#define TIF_NEED_RESCHED_LAZY 7
#define TIF_NOHZ 12 /* in adaptive nohz mode */
#define TIF_USING_IWMMXT 17
@@ -50,6 +53,16 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
#define _TIF_UPROBE (1 << TIF_UPROBE)
#define _TIF_SYSCALL_TRACE (1 << TIF_SYSCALL_TRACE)
#define _TIF_SYSCALL_AUDIT (1 << TIF_SYSCALL_AUDIT)
@@ -167,7 +170,8 @@ extern int vfp_restore_user_hwstate(stru
* Change these and you break ASM code in entry-common.S
*/
#define _TIF_WORK_MASK (_TIF_NEED_RESCHED | _TIF_SIGPENDING | \
- _TIF_NOTIFY_RESUME | _TIF_UPROBE)
+ _TIF_NOTIFY_RESUME | _TIF_UPROBE | \
+ _TIF_NEED_RESCHED_LAZY)
#endif /* __KERNEL__ */
#endif /* __ASM_ARM_THREAD_INFO_H */
--- a/arch/arm/kernel/asm-offsets.c
+++ b/arch/arm/kernel/asm-offsets.c
@@ -65,6 +65,7 @@ int main(void)
@@ -62,9 +75,9 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
DEFINE(TI_CPU, offsetof(struct thread_info, cpu));
--- a/arch/arm/kernel/entry-armv.S
+++ b/arch/arm/kernel/entry-armv.S
@@ -215,11 +215,18 @@ ENDPROC(__dabt_svc)
@@ -220,11 +220,18 @@ ENDPROC(__dabt_svc)
#ifdef CONFIG_PREEMPT
get_thread_info tsk
ldr r8, [tsk, #TI_PREEMPT] @ get preempt count
- ldr r0, [tsk, #TI_FLAGS] @ get flags
teq r8, #0 @ if preempt count != 0
@@ -83,15 +96,48 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
#endif
svc_exit r5, irq = 1 @ return from exception
@@ -234,6 +241,8 @@ ENDPROC(__irq_svc)
@@ -239,8 +246,14 @@ ENDPROC(__irq_svc)
1: bl preempt_schedule_irq @ irq en/disable is done inside
ldr r0, [tsk, #TI_FLAGS] @ get new tasks TI_FLAGS
tst r0, #_TIF_NEED_RESCHED
+ bne 1b
+ tst r0, #_TIF_NEED_RESCHED_LAZY
reteq r8 @ go again
b 1b
- b 1b
+ ldr r0, [tsk, #TI_PREEMPT_LAZY] @ get preempt lazy count
+ teq r0, #0 @ if preempt lazy count != 0
+ beq 1b
+ ret r8 @ go again
+
#endif
__und_fault:
--- a/arch/arm/kernel/entry-common.S
+++ b/arch/arm/kernel/entry-common.S
@@ -36,7 +36,9 @@
UNWIND(.cantunwind )
disable_irq_notrace @ disable interrupts
ldr r1, [tsk, #TI_FLAGS] @ re-check for syscall tracing
- tst r1, #_TIF_SYSCALL_WORK | _TIF_WORK_MASK
+ tst r1, #((_TIF_SYSCALL_WORK | _TIF_WORK_MASK) & ~_TIF_SECCOMP)
+ bne fast_work_pending
+ tst r1, #_TIF_SECCOMP
bne fast_work_pending
/* perform architecture specific actions before user return */
@@ -62,8 +64,11 @@ ENDPROC(ret_fast_syscall)
str r0, [sp, #S_R0 + S_OFF]! @ save returned r0
disable_irq_notrace @ disable interrupts
ldr r1, [tsk, #TI_FLAGS] @ re-check for syscall tracing
- tst r1, #_TIF_SYSCALL_WORK | _TIF_WORK_MASK
+ tst r1, #((_TIF_SYSCALL_WORK | _TIF_WORK_MASK) & ~_TIF_SECCOMP)
+ bne do_slower_path
+ tst r1, #_TIF_SECCOMP
beq no_work_pending
+do_slower_path:
UNWIND(.fnend )
ENDPROC(ret_fast_syscall)
--- a/arch/arm/kernel/signal.c
+++ b/arch/arm/kernel/signal.c
@@ -572,7 +572,8 @@ do_work_pending(struct pt_regs *regs, un


@@ -1,7 +1,7 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Fri, 20 Sep 2013 14:31:54 +0200
Subject: arm/unwind: use a raw_spin_lock
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
Mostly unwind is done with irqs enabled; however, SLUB may call it with
irqs disabled while creating a new SLUB cache.


@@ -1,7 +1,7 @@
Subject: arm64/xen: Make XEN depend on !RT
From: Thomas Gleixner <tglx@linutronix.de>
Date: Mon, 12 Oct 2015 11:18:40 +0200
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
It's not ready and probably never will be, unless xen folks have a
look at it.
@@ -13,7 +13,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -624,7 +624,7 @@ config XEN_DOM0
@@ -689,7 +689,7 @@ config XEN_DOM0
config XEN
bool "Xen guest support on ARM64"


@@ -1,7 +1,7 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Wed, 09 Mar 2016 10:51:06 +0100
Subject: arm: at91: do not disable/enable clocks in a row
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
Currently the driver will disable the clock and enable it one line later
if it is switching from periodic mode into one shot.


@@ -1,7 +1,7 @@
From: Steven Rostedt <srostedt@redhat.com>
Date: Fri, 3 Jul 2009 08:44:29 -0500
Subject: ata: Do not disable interrupts in ide code for preempt-rt
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
Use the local_irq_*_nort variants.


@@ -1,7 +1,7 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Sat, 3 May 2014 11:00:29 +0200
Subject: blk-mq: revert raw locks, post pone notifier to POST_DEAD
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
The blk_mq_cpu_notify_lock should be raw because some CPU down levels
are called with interrupts off. The notifier itself calls currently one
@@ -73,7 +73,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
void blk_mq_init_cpu_notifier(struct blk_mq_cpu_notifier *notifier,
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1641,7 +1641,7 @@ static int blk_mq_hctx_notify(void *data
@@ -1687,7 +1687,7 @@ static int blk_mq_hctx_notify(void *data
{
struct blk_mq_hw_ctx *hctx = data;


@@ -1,7 +1,7 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Fri, 13 Feb 2015 11:01:26 +0100
Subject: block: blk-mq: Use swait
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
| BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:914
| in_atomic(): 1, irqs_disabled(): 0, pid: 255, name: kworker/u257:6
@@ -46,7 +46,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -660,7 +660,7 @@ int blk_queue_enter(struct request_queue
@@ -662,7 +662,7 @@ int blk_queue_enter(struct request_queue
if (nowait)
return -EBUSY;
@@ -55,7 +55,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
!atomic_read(&q->mq_freeze_depth) ||
blk_queue_dying(q));
if (blk_queue_dying(q))
@@ -680,7 +680,7 @@ static void blk_queue_usage_counter_rele
@@ -682,7 +682,7 @@ static void blk_queue_usage_counter_rele
struct request_queue *q =
container_of(ref, struct request_queue, q_usage_counter);
@@ -64,7 +64,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
}
static void blk_rq_timed_out_timer(unsigned long data)
@@ -749,7 +749,7 @@ struct request_queue *blk_alloc_queue_no
@@ -751,7 +751,7 @@ struct request_queue *blk_alloc_queue_no
q->bypass_depth = 1;
__set_bit(QUEUE_FLAG_BYPASS, &q->queue_flags);
@@ -104,7 +104,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
bool blk_mq_can_queue(struct blk_mq_hw_ctx *hctx)
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -458,7 +458,7 @@ struct request_queue {
@@ -468,7 +468,7 @@ struct request_queue {
struct throtl_data *td;
#endif
struct rcu_head rcu_head;


@@ -1,7 +1,7 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Thu, 29 Jan 2015 15:10:08 +0100
Subject: block/mq: don't complete requests via IPI
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
The IPI runs in hardirq context and there are sleeping locks. This patch
moves the completion into a workqueue.
@@ -28,7 +28,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
rq->__sector = (sector_t) -1;
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -196,6 +196,9 @@ static void blk_mq_rq_ctx_init(struct re
@@ -197,6 +197,9 @@ static void blk_mq_rq_ctx_init(struct re
rq->resid_len = 0;
rq->sense = NULL;
@@ -38,7 +38,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
INIT_LIST_HEAD(&rq->timeout_list);
rq->timeout = 0;
@@ -323,6 +326,17 @@ void blk_mq_end_request(struct request *
@@ -379,6 +382,17 @@ void blk_mq_end_request(struct request *
}
EXPORT_SYMBOL(blk_mq_end_request);
@@ -56,7 +56,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
static void __blk_mq_complete_request_remote(void *data)
{
struct request *rq = data;
@@ -330,6 +344,8 @@ static void __blk_mq_complete_request_re
@@ -386,6 +400,8 @@ static void __blk_mq_complete_request_re
rq->q->softirq_done_fn(rq);
}
@@ -65,7 +65,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
static void blk_mq_ipi_complete_request(struct request *rq)
{
struct blk_mq_ctx *ctx = rq->mq_ctx;
@@ -346,10 +362,14 @@ static void blk_mq_ipi_complete_request(
@@ -402,10 +418,14 @@ static void blk_mq_ipi_complete_request(
shared = cpus_share_cache(cpu, ctx->cpu);
if (cpu != ctx->cpu && !shared && cpu_online(ctx->cpu)) {
@@ -82,7 +82,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
}
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -218,6 +218,7 @@ static inline u16 blk_mq_unique_tag_to_t
@@ -222,6 +222,7 @@ static inline u16 blk_mq_unique_tag_to_t
struct blk_mq_hw_ctx *blk_mq_map_queue(struct request_queue *, const int ctx_index);
struct blk_mq_hw_ctx *blk_mq_alloc_single_hw_queue(struct blk_mq_tag_set *, unsigned int, int);
@@ -92,11 +92,11 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
void blk_mq_start_request(struct request *rq);
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -90,6 +90,7 @@ struct request {
@@ -89,6 +89,7 @@ struct request {
struct list_head queuelist;
union {
struct call_single_data csd;
+ struct work_struct work;
unsigned long fifo_time;
u64 fifo_time;
};

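The shape of the change, as a stand-alone sketch (hypothetical names; the
patch itself does this for struct request via the work member added to
the csd union above): run the completion from a workqueue instead of a
hardirq IPI, so that the softirq_done handler may take sleeping locks on
RT:

	#include <linux/workqueue.h>

	struct my_req {
		struct work_struct work;
		/* ... */
	};

	static void my_complete_work(struct work_struct *work)
	{
		struct my_req *rq = container_of(work, struct my_req, work);

		my_softirq_done(rq);	/* hypothetical completion handler */
	}

	static void my_complete_remote(struct my_req *rq, int cpu)
	{
		INIT_WORK(&rq->work, my_complete_work);
		queue_work_on(cpu, system_wq, &rq->work);
	}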

@@ -1,7 +1,7 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Tue, 14 Jul 2015 14:26:34 +0200
Subject: block/mq: do not invoke preempt_disable()
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
preempt_disable() and get_cpu() don't play well together with the sleeping
locks it tries to allocate later.
@@ -14,7 +14,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -341,7 +341,7 @@ static void blk_mq_ipi_complete_request(
@@ -397,7 +397,7 @@ static void blk_mq_ipi_complete_request(
return;
}
@@ -23,7 +23,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
if (!test_bit(QUEUE_FLAG_SAME_FORCE, &rq->q->queue_flags))
shared = cpus_share_cache(cpu, ctx->cpu);
@@ -353,7 +353,7 @@ static void blk_mq_ipi_complete_request(
@@ -409,7 +409,7 @@ static void blk_mq_ipi_complete_request(
} else {
rq->q->softirq_done_fn(rq);
}
@@ -32,7 +32,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
}
static void __blk_mq_complete_request(struct request *rq)
@@ -868,14 +868,14 @@ void blk_mq_run_hw_queue(struct blk_mq_h
@@ -938,14 +938,14 @@ void blk_mq_run_hw_queue(struct blk_mq_h
return;
if (!async) {


@@ -1,7 +1,7 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Wed, 9 Apr 2014 10:37:23 +0200
Subject: block: mq: use cpu_light()
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
There is a might-sleep splat because get_cpu() disables preemption and
later we grab a lock. As a workaround for this we use get_cpu_light().


@@ -1,7 +1,7 @@
Subject: block: Shorten interrupt disabled regions
From: Thomas Gleixner <tglx@linutronix.de>
Date: Wed, 22 Jun 2011 19:47:02 +0200
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
Moving the blk_sched_flush_plug() call out of the interrupt/preempt
disabled region in the scheduler allows us to replace
@@ -48,7 +48,7 @@ Link: http://lkml.kernel.org/r/20110622174919.025446432@linutronix.de
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -3209,7 +3209,7 @@ static void queue_unplugged(struct reque
@@ -3171,7 +3171,7 @@ static void queue_unplugged(struct reque
blk_run_queue_async(q);
else
__blk_run_queue(q);
@@ -57,7 +57,7 @@ Link: http://lkml.kernel.org/r/20110622174919.025446432@linutronix.de
}
static void flush_plug_callbacks(struct blk_plug *plug, bool from_schedule)
@@ -3257,7 +3257,6 @@ EXPORT_SYMBOL(blk_check_plugged);
@@ -3219,7 +3219,6 @@ EXPORT_SYMBOL(blk_check_plugged);
void blk_flush_plug_list(struct blk_plug *plug, bool from_schedule)
{
struct request_queue *q;
@@ -65,7 +65,7 @@ Link: http://lkml.kernel.org/r/20110622174919.025446432@linutronix.de
struct request *rq;
LIST_HEAD(list);
unsigned int depth;
@@ -3277,11 +3276,6 @@ void blk_flush_plug_list(struct blk_plug
@@ -3239,11 +3238,6 @@ void blk_flush_plug_list(struct blk_plug
q = NULL;
depth = 0;
@@ -77,7 +77,7 @@ Link: http://lkml.kernel.org/r/20110622174919.025446432@linutronix.de
while (!list_empty(&list)) {
rq = list_entry_rq(list.next);
list_del_init(&rq->queuelist);
@@ -3294,7 +3288,7 @@ void blk_flush_plug_list(struct blk_plug
@@ -3256,7 +3250,7 @@ void blk_flush_plug_list(struct blk_plug
queue_unplugged(q, depth, from_schedule);
q = rq->q;
depth = 0;
@@ -86,7 +86,7 @@ Link: http://lkml.kernel.org/r/20110622174919.025446432@linutronix.de
}
/*
@@ -3321,8 +3315,6 @@ void blk_flush_plug_list(struct blk_plug
@@ -3283,8 +3277,6 @@ void blk_flush_plug_list(struct blk_plug
*/
if (q)
queue_unplugged(q, depth, from_schedule);


@@ -1,7 +1,7 @@
Subject: block: Use cpu_chill() for retry loops
From: Thomas Gleixner <tglx@linutronix.de>
Date: Thu, 20 Dec 2012 18:28:26 +0100
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
Retry loops on RT might loop forever when the modifying side was
preempted. Steven also observed a live lock when there was a

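The retry-loop pattern this patch targets, as a sketch (hypothetical
function; cpu_chill() is provided by the RT patch set, where it expands to
cpu_relax() on !RT and to a short sleep on RT so a preempted writer can
make progress):

	static void wait_for_bit_clear(unsigned long *word, int bit)
	{
		while (test_bit(bit, word))
			cpu_chill();	/* cpu_relax() on !RT, ~1ms sleep on RT */
	}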

@@ -1,7 +1,7 @@
From: Ingo Molnar <mingo@elte.hu>
Date: Fri, 3 Jul 2009 08:29:58 -0500
Subject: bug: BUG_ON/WARN_ON variants dependend on RT/!RT
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
Introduce RT/NON-RT WARN/BUG statements to avoid ifdefs in the code.


@@ -1,7 +1,7 @@
From: Mike Galbraith <umgwanakikbuti@gmail.com>
Date: Sat, 21 Jun 2014 10:09:48 +0200
Subject: memcontrol: Prevent scheduling while atomic in cgroup code
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
mm, memcg: make refill_stock() use get_cpu_light()
@@ -33,33 +33,73 @@ What happens:
Fix it by replacing get/put_cpu_var() with get/put_cpu_light().
Reported-by: Nikita Yushchenko <nyushchenko@dev.rtsoft.ru>
Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
[bigeasy: use memcg_stock_ll as a locallock since it is now IRQ-off region]
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
mm/memcontrol.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
mm/memcontrol.c | 13 +++++++------
1 file changed, 7 insertions(+), 6 deletions(-)
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1828,14 +1828,17 @@ static void drain_local_stock(struct wor
*/
static void refill_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
{
- struct memcg_stock_pcp *stock = &get_cpu_var(memcg_stock);
+ struct memcg_stock_pcp *stock;
+ int cpu = get_cpu_light();
+
+ stock = &per_cpu(memcg_stock, cpu);
@@ -1727,6 +1727,7 @@ struct memcg_stock_pcp {
#define FLUSHING_CACHED_CHARGE 0
};
static DEFINE_PER_CPU(struct memcg_stock_pcp, memcg_stock);
+static DEFINE_LOCAL_IRQ_LOCK(memcg_stock_ll);
static DEFINE_MUTEX(percpu_charge_mutex);
if (stock->cached != memcg) { /* reset if necessary */
drain_stock(stock);
stock->cached = memcg;
/**
@@ -1749,7 +1750,7 @@ static bool consume_stock(struct mem_cgr
if (nr_pages > CHARGE_BATCH)
return ret;
- local_irq_save(flags);
+ local_lock_irqsave(memcg_stock_ll, flags);
stock = this_cpu_ptr(&memcg_stock);
if (memcg == stock->cached && stock->nr_pages >= nr_pages) {
@@ -1757,7 +1758,7 @@ static bool consume_stock(struct mem_cgr
ret = true;
}
stock->nr_pages += nr_pages;
- put_cpu_var(memcg_stock);
+ put_cpu_light();
- local_irq_restore(flags);
+ local_unlock_irqrestore(memcg_stock_ll, flags);
return ret;
}
@@ -1784,13 +1785,13 @@ static void drain_local_stock(struct wor
struct memcg_stock_pcp *stock;
unsigned long flags;
- local_irq_save(flags);
+ local_lock_irqsave(memcg_stock_ll, flags);
stock = this_cpu_ptr(&memcg_stock);
drain_stock(stock);
clear_bit(FLUSHING_CACHED_CHARGE, &stock->flags);
- local_irq_restore(flags);
+ local_unlock_irqrestore(memcg_stock_ll, flags);
}
/*
@@ -1802,7 +1803,7 @@ static void refill_stock(struct mem_cgro
struct memcg_stock_pcp *stock;
unsigned long flags;
- local_irq_save(flags);
+ local_lock_irqsave(memcg_stock_ll, flags);
stock = this_cpu_ptr(&memcg_stock);
if (stock->cached != memcg) { /* reset if necessary */
@@ -1811,7 +1812,7 @@ static void refill_stock(struct mem_cgro
}
stock->nr_pages += nr_pages;
- local_irq_restore(flags);
+ local_unlock_irqrestore(memcg_stock_ll, flags);
}
/*

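Reduced to a sketch: the patch swaps local_irq_save()/restore() for a
local lock (the memcg_stock_ll defined above). On !RT this is the same
irq-off section as before; on RT it is a per-CPU sleeping lock, so the
section stays preemptible. Illustrative per-CPU counter, not code from
the patch:

	static DEFINE_LOCAL_IRQ_LOCK(example_ll);
	static DEFINE_PER_CPU(int, example_count);

	static void bump_count(void)
	{
		unsigned long flags;

		local_lock_irqsave(example_ll, flags);	/* was: local_irq_save(flags) */
		this_cpu_inc(example_count);
		local_unlock_irqrestore(example_ll, flags); /* was: local_irq_restore(flags) */
	}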

@@ -1,7 +1,7 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Fri, 13 Feb 2015 15:52:24 +0100
Subject: cgroups: use simple wait in css_release()
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
To avoid:
|BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:914
@@ -53,7 +53,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
/*
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -5005,10 +5005,10 @@ static void css_free_rcu_fn(struct rcu_h
@@ -5027,10 +5027,10 @@ static void css_free_rcu_fn(struct rcu_h
queue_work(cgroup_destroy_wq, &css->destroy_work);
}
@@ -66,7 +66,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
struct cgroup_subsys *ss = css->ss;
struct cgroup *cgrp = css->cgroup;
@@ -5049,8 +5049,8 @@ static void css_release(struct percpu_re
@@ -5071,8 +5071,8 @@ static void css_release(struct percpu_re
struct cgroup_subsys_state *css =
container_of(ref, struct cgroup_subsys_state, refcnt);
@@ -77,7 +77,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
}
static void init_and_link_css(struct cgroup_subsys_state *css,
@@ -5694,6 +5694,7 @@ static int __init cgroup_wq_init(void)
@@ -5716,6 +5716,7 @@ static int __init cgroup_wq_init(void)
*/
cgroup_destroy_wq = alloc_workqueue("cgroup_destroy", 0, 1);
BUG_ON(!cgroup_destroy_wq);

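The structure of the fix, as a sketch with hypothetical names: a
percpu_ref release callback runs in atomic context, so anything that may
sleep is deferred to a workqueue:

	struct my_obj {
		struct percpu_ref refcnt;
		struct work_struct destroy_work;
	};

	static void my_release_fn(struct work_struct *work)
	{
		struct my_obj *obj = container_of(work, struct my_obj, destroy_work);

		/* may sleep here */
		my_free(obj);	/* hypothetical teardown */
	}

	static void my_release(struct percpu_ref *ref)
	{
		struct my_obj *obj = container_of(ref, struct my_obj, refcnt);

		INIT_WORK(&obj->destroy_work, my_release_fn);
		queue_work(my_destroy_wq, &obj->destroy_work);
	}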

@@ -1,7 +1,7 @@
From: Alexandre Belloni <alexandre.belloni@free-electrons.com>
Date: Thu, 17 Mar 2016 21:09:43 +0100
Subject: [PATCH] clockevents/drivers/timer-atmel-pit: fix double free_irq
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
clockevents_exchange_device() changes the state from detached to shutdown
and so at that point the IRQ has not yet been requested.


@@ -1,7 +1,7 @@
From: Benedikt Spranger <b.spranger@linutronix.de>
Date: Mon, 8 Mar 2010 18:57:04 +0100
Subject: clocksource: TCLIB: Allow higher clock rates for clock events
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
By default the TCLIB uses the 32KiHz base clock rate for clock events.
Add a compile-time selection to allow higher clock resolution.


@@ -1,7 +1,7 @@
Subject: completion: Use simple wait queues
From: Thomas Gleixner <tglx@linutronix.de>
Date: Fri, 11 Jan 2013 11:23:51 +0100
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
Completions have no long-lasting callbacks and therefore do not need
the complex waitqueue variant. Use simple waitqueues which reduces the
@@ -36,7 +36,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
break;
--- a/drivers/usb/gadget/function/f_fs.c
+++ b/drivers/usb/gadget/function/f_fs.c
@@ -1393,7 +1393,7 @@ static void ffs_data_put(struct ffs_data
@@ -1509,7 +1509,7 @@ static void ffs_data_put(struct ffs_data
pr_info("%s(): freeing\n", __func__);
ffs_data_clear(ffs);
BUG_ON(waitqueue_active(&ffs->ev.waitq) ||
@@ -102,7 +102,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
/**
--- a/include/linux/suspend.h
+++ b/include/linux/suspend.h
@@ -194,6 +194,12 @@ struct platform_freeze_ops {
@@ -193,6 +193,12 @@ struct platform_freeze_ops {
void (*end)(void);
};
@@ -137,8 +137,8 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
struct mm_struct;
--- a/kernel/power/hibernate.c
+++ b/kernel/power/hibernate.c
@@ -649,6 +649,10 @@ static void power_down(void)
cpu_relax();
@@ -681,6 +681,10 @@ static int load_image_and_restore(void)
return error;
}
+#ifndef CONFIG_SUSPEND
@@ -148,7 +148,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
/**
* hibernate - Carry out system hibernation, including saving the image.
*/
@@ -661,6 +665,8 @@ int hibernate(void)
@@ -694,6 +698,8 @@ int hibernate(void)
return -EPERM;
}
@@ -157,7 +157,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
lock_system_sleep();
/* The snapshot device should not be opened while we're running */
if (!atomic_add_unless(&snapshot_device_available, -1, 0)) {
@@ -726,6 +732,7 @@ int hibernate(void)
@@ -771,6 +777,7 @@ int hibernate(void)
atomic_inc(&snapshot_device_available);
Unlock:
unlock_system_sleep();
@@ -167,7 +167,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
--- a/kernel/power/suspend.c
+++ b/kernel/power/suspend.c
@@ -521,6 +521,8 @@ static int enter_state(suspend_state_t s
@@ -523,6 +523,8 @@ static int enter_state(suspend_state_t s
return error;
}
@@ -176,7 +176,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
/**
* pm_suspend - Externally visible function for suspending the system.
* @state: System sleep state to enter.
@@ -535,6 +537,8 @@ int pm_suspend(suspend_state_t state)
@@ -537,6 +539,8 @@ int pm_suspend(suspend_state_t state)
if (state <= PM_SUSPEND_ON || state >= PM_SUSPEND_MAX)
return -EINVAL;
@@ -185,7 +185,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
error = enter_state(state);
if (error) {
suspend_stats.fail++;
@@ -542,6 +546,7 @@ int pm_suspend(suspend_state_t state)
@@ -544,6 +548,7 @@ int pm_suspend(suspend_state_t state)
} else {
suspend_stats.success++;
}
@@ -287,7 +287,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
EXPORT_SYMBOL(completion_done);
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3141,7 +3141,10 @@ void migrate_disable(void)
@@ -3317,7 +3317,10 @@ void migrate_disable(void)
}
#ifdef CONFIG_SCHED_DEBUG
@@ -299,7 +299,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
#endif
if (p->migrate_disable) {
@@ -3168,7 +3171,10 @@ void migrate_enable(void)
@@ -3344,7 +3347,10 @@ void migrate_enable(void)
}
#ifdef CONFIG_SCHED_DEBUG

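Minimal usage sketch of the simple waitqueues this patch switches to
(hypothetical flag; swait_event()/swake_up() are the swait counterparts
of wait_event()/wake_up(), with O(1), irq-safe wakeups):

	#include <linux/swait.h>

	static DECLARE_SWAIT_QUEUE_HEAD(my_swq);
	static bool my_done;

	static void waiter(void)
	{
		swait_event(my_swq, my_done);	/* sleeps until my_done is true */
	}

	static void waker(void)
	{
		my_done = true;
		swake_up(&my_swq);		/* wakes at most one waiter */
	}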

@@ -1,7 +1,7 @@
Subject: sched: Use the proper LOCK_OFFSET for cond_resched()
From: Thomas Gleixner <tglx@linutronix.de>
Date: Sun, 17 Jul 2011 22:51:33 +0200
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
RT does not increment preempt count when a 'sleeping' spinlock is
locked. Update PREEMPT_LOCK_OFFSET for that case.


@@ -1,7 +1,7 @@
Subject: sched: Take RT softirq semantics into account in cond_resched()
From: Thomas Gleixner <tglx@linutronix.de>
Date: Thu, 14 Jul 2011 09:56:44 +0200
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
The softirq semantics work differently on -RT. There is no SOFTIRQ_MASK in
the preemption counter which leads to the BUG_ON() statement in
@@ -16,7 +16,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -3029,12 +3029,16 @@ extern int __cond_resched_lock(spinlock_
@@ -3258,12 +3258,16 @@ extern int __cond_resched_lock(spinlock_
__cond_resched_lock(lock); \
})
@@ -35,7 +35,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
{
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4812,6 +4812,7 @@ int __cond_resched_lock(spinlock_t *lock
@@ -5021,6 +5021,7 @@ int __cond_resched_lock(spinlock_t *lock
}
EXPORT_SYMBOL(__cond_resched_lock);
@@ -43,7 +43,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
int __sched __cond_resched_softirq(void)
{
BUG_ON(!in_softirq());
@@ -4825,6 +4826,7 @@ int __sched __cond_resched_softirq(void)
@@ -5034,6 +5035,7 @@ int __sched __cond_resched_softirq(void)
return 0;
}
EXPORT_SYMBOL(__cond_resched_softirq);


@@ -1,7 +1,7 @@
From: Steven Rostedt <rostedt@goodmis.org>
Date: Thu, 5 Dec 2013 09:16:52 -0500
Subject: cpu hotplug: Document why PREEMPT_RT uses a spinlock
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
The patch:


@@ -1,7 +1,7 @@
Subject: cpu: Make hotplug.lock a "sleeping" spinlock on RT
From: Steven Rostedt <rostedt@goodmis.org>
Date: Fri, 02 Mar 2012 10:36:57 -0500
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
Tasks can block on hotplug.lock in pin_current_cpu(), but their state
might be != RUNNING. So the mutex wakeup will set the state
@@ -20,8 +20,8 @@ Cc: Clark Williams <clark.williams@gmail.com>
Link: http://lkml.kernel.org/r/1330702617.25686.265.camel@gandalf.stny.rr.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
kernel/cpu.c | 34 +++++++++++++++++++++++++++-------
1 file changed, 27 insertions(+), 7 deletions(-)
kernel/cpu.c | 32 +++++++++++++++++++++++++-------
1 file changed, 25 insertions(+), 7 deletions(-)
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -42,7 +42,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
/*
* Also blocks the new readers during
* an ongoing cpu hotplug operation.
@@ -153,12 +159,26 @@ static struct {
@@ -153,12 +159,24 @@ static struct {
} cpu_hotplug = {
.active_writer = NULL,
.wq = __WAIT_QUEUE_HEAD_INITIALIZER(cpu_hotplug.wq),
@@ -57,19 +57,17 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
};
+#ifdef CONFIG_PREEMPT_RT_FULL
+# define hotplug_lock() rt_spin_lock(&cpu_hotplug.lock)
+# define hotplug_trylock() rt_spin_trylock(&cpu_hotplug.lock)
+# define hotplug_unlock() rt_spin_unlock(&cpu_hotplug.lock)
+# define hotplug_lock() rt_spin_lock__no_mg(&cpu_hotplug.lock)
+# define hotplug_unlock() rt_spin_unlock__no_mg(&cpu_hotplug.lock)
+#else
+# define hotplug_lock() mutex_lock(&cpu_hotplug.lock)
+# define hotplug_trylock() mutex_trylock(&cpu_hotplug.lock)
+# define hotplug_unlock() mutex_unlock(&cpu_hotplug.lock)
+#endif
+
/* Lockdep annotations for get/put_online_cpus() and cpu_hotplug_begin/end() */
#define cpuhp_lock_acquire_read() lock_map_acquire_read(&cpu_hotplug.dep_map)
#define cpuhp_lock_acquire_tryread() \
@@ -195,8 +215,8 @@ void pin_current_cpu(void)
@@ -195,8 +213,8 @@ void pin_current_cpu(void)
return;
}
preempt_enable();
@@ -80,7 +78,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
preempt_disable();
goto retry;
}
@@ -269,9 +289,9 @@ void get_online_cpus(void)
@@ -269,9 +287,9 @@ void get_online_cpus(void)
if (cpu_hotplug.active_writer == current)
return;
cpuhp_lock_acquire_read();
@@ -92,7 +90,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
}
EXPORT_SYMBOL_GPL(get_online_cpus);
@@ -324,11 +344,11 @@ void cpu_hotplug_begin(void)
@@ -324,11 +342,11 @@ void cpu_hotplug_begin(void)
cpuhp_lock_acquire();
for (;;) {
@@ -106,7 +104,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
schedule();
}
finish_wait(&cpu_hotplug.wq, &wait);
@@ -337,7 +357,7 @@ void cpu_hotplug_begin(void)
@@ -337,7 +355,7 @@ void cpu_hotplug_begin(void)
void cpu_hotplug_done(void)
{
cpu_hotplug.active_writer = NULL;


@@ -1,7 +1,7 @@
From: Steven Rostedt <srostedt@redhat.com>
Date: Mon, 16 Jul 2012 08:07:43 +0000
Subject: cpu/rt: Rework cpu down for PREEMPT_RT
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
Bringing a CPU down is a pain with the PREEMPT_RT kernel because
tasks can be preempted in many more places than in non-RT. In
@@ -51,13 +51,13 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
include/linux/sched.h | 7 +
kernel/cpu.c | 240 ++++++++++++++++++++++++++++++++++++++++----------
kernel/cpu.c | 238 +++++++++++++++++++++++++++++++++++++++++---------
kernel/sched/core.c | 78 ++++++++++++++++
3 files changed, 281 insertions(+), 44 deletions(-)
3 files changed, 281 insertions(+), 42 deletions(-)
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2325,6 +2325,10 @@ extern void do_set_cpus_allowed(struct t
@@ -2429,6 +2429,10 @@ extern void do_set_cpus_allowed(struct t
extern int set_cpus_allowed_ptr(struct task_struct *p,
const struct cpumask *new_mask);
@@ -68,7 +68,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
#else
static inline void do_set_cpus_allowed(struct task_struct *p,
const struct cpumask *new_mask)
@@ -2337,6 +2341,9 @@ static inline int set_cpus_allowed_ptr(s
@@ -2441,6 +2445,9 @@ static inline int set_cpus_allowed_ptr(s
return -EINVAL;
return 0;
}
@@ -97,7 +97,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
/*
* Also blocks the new readers during
* an ongoing cpu hotplug operation.
@@ -158,27 +152,13 @@ static struct {
@@ -158,25 +152,13 @@ static struct {
#endif
} cpu_hotplug = {
.active_writer = NULL,
@ -114,19 +114,17 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
};
-#ifdef CONFIG_PREEMPT_RT_FULL
-# define hotplug_lock() rt_spin_lock(&cpu_hotplug.lock)
-# define hotplug_trylock() rt_spin_trylock(&cpu_hotplug.lock)
-# define hotplug_unlock() rt_spin_unlock(&cpu_hotplug.lock)
-# define hotplug_lock() rt_spin_lock__no_mg(&cpu_hotplug.lock)
-# define hotplug_unlock() rt_spin_unlock__no_mg(&cpu_hotplug.lock)
-#else
-# define hotplug_lock() mutex_lock(&cpu_hotplug.lock)
-# define hotplug_trylock() mutex_trylock(&cpu_hotplug.lock)
-# define hotplug_unlock() mutex_unlock(&cpu_hotplug.lock)
-#endif
-
/* Lockdep annotations for get/put_online_cpus() and cpu_hotplug_begin/end() */
#define cpuhp_lock_acquire_read() lock_map_acquire_read(&cpu_hotplug.dep_map)
#define cpuhp_lock_acquire_tryread() \
@@ -186,12 +166,42 @@ static struct {
@@ -184,12 +166,42 @@ static struct {
#define cpuhp_lock_acquire() lock_map_acquire(&cpu_hotplug.dep_map)
#define cpuhp_lock_release() lock_map_release(&cpu_hotplug.dep_map)
@@ -159,8 +157,8 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
};
+#ifdef CONFIG_PREEMPT_RT_FULL
+# define hotplug_lock(hp) rt_spin_lock(&(hp)->lock)
+# define hotplug_unlock(hp) rt_spin_unlock(&(hp)->lock)
+# define hotplug_lock(hp) rt_spin_lock__no_mg(&(hp)->lock)
+# define hotplug_unlock(hp) rt_spin_unlock__no_mg(&(hp)->lock)
+#else
+# define hotplug_lock(hp) mutex_lock(&(hp)->mutex)
+# define hotplug_unlock(hp) mutex_unlock(&(hp)->mutex)
@@ -169,7 +167,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
static DEFINE_PER_CPU(struct hotplug_pcp, hotplug_pcp);
/**
@@ -205,18 +215,39 @@ static DEFINE_PER_CPU(struct hotplug_pcp
@@ -203,18 +215,39 @@ static DEFINE_PER_CPU(struct hotplug_pcp
void pin_current_cpu(void)
{
struct hotplug_pcp *hp;
@@ -213,7 +211,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
preempt_disable();
goto retry;
}
@@ -237,26 +268,84 @@ void unpin_current_cpu(void)
@@ -235,26 +268,84 @@ void unpin_current_cpu(void)
wake_up_process(hp->unplug);
}
@@ -305,7 +303,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
/*
* Start the sync_unplug_thread on the target cpu and wait for it to
* complete.
@@ -264,23 +353,83 @@ static int sync_unplug_thread(void *data
@@ -262,23 +353,83 @@ static int sync_unplug_thread(void *data
static int cpu_unplug_begin(unsigned int cpu)
{
struct hotplug_pcp *hp = &per_cpu(hotplug_pcp, cpu);
@@ -396,7 +394,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
}
void get_online_cpus(void)
@@ -289,9 +438,9 @@ void get_online_cpus(void)
@@ -287,9 +438,9 @@ void get_online_cpus(void)
if (cpu_hotplug.active_writer == current)
return;
cpuhp_lock_acquire_read();
@@ -408,7 +406,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
}
EXPORT_SYMBOL_GPL(get_online_cpus);
@@ -344,11 +493,11 @@ void cpu_hotplug_begin(void)
@@ -342,11 +493,11 @@ void cpu_hotplug_begin(void)
cpuhp_lock_acquire();
for (;;) {
@@ -422,7 +420,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
schedule();
}
finish_wait(&cpu_hotplug.wq, &wait);
@@ -357,7 +506,7 @@ void cpu_hotplug_begin(void)
@@ -355,7 +506,7 @@ void cpu_hotplug_begin(void)
void cpu_hotplug_done(void)
{
cpu_hotplug.active_writer = NULL;
@@ -431,7 +429,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
cpuhp_lock_release();
}
@@ -838,6 +987,9 @@ static int takedown_cpu(unsigned int cpu
@@ -828,6 +979,9 @@ static int takedown_cpu(unsigned int cpu
kthread_park(per_cpu_ptr(&cpuhp_state, cpu)->thread);
smpboot_park_threads(cpu);
@@ -443,7 +441,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* interrupt affinities.
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1091,6 +1091,84 @@ void do_set_cpus_allowed(struct task_str
@@ -1129,6 +1129,84 @@ void do_set_cpus_allowed(struct task_str
enqueue_task(rq, p, ENQUEUE_RESTORE);
}
@@ -485,8 +483,8 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+ struct migration_arg arg;
+ struct cpumask *cpumask;
+ struct cpumask *mask;
+ unsigned long flags;
+ unsigned int dest_cpu;
+ struct rq_flags rf;
+ struct rq *rq;
+
+ /*
@@ -497,7 +495,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+ return 0;
+
+ mutex_lock(&sched_down_mutex);
+ rq = task_rq_lock(p, &flags);
+ rq = task_rq_lock(p, &rf);
+
+ cpumask = this_cpu_ptr(&sched_cpumasks);
+ mask = &p->cpus_allowed;
@@ -506,7 +504,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+
+ if (!cpumask_weight(cpumask)) {
+ /* It's only on this CPU? */
+ task_rq_unlock(rq, p, &flags);
+ task_rq_unlock(rq, p, &rf);
+ mutex_unlock(&sched_down_mutex);
+ return 0;
+ }
@@ -516,7 +514,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+ arg.task = p;
+ arg.dest_cpu = dest_cpu;
+
+ task_rq_unlock(rq, p, &flags);
+ task_rq_unlock(rq, p, &rf);
+
+ stop_one_cpu(cpu_of(rq), migration_cpu_stop, &arg);
+ tlb_migrate_finish(p->mm);


@@ -1,7 +1,7 @@
From: Steven Rostedt <rostedt@goodmis.org>
Date: Tue, 4 Mar 2014 12:28:32 -0500
Subject: cpu_chill: Add a UNINTERRUPTIBLE hrtimer_nanosleep
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
We hit another bug that was caused by switching cpu_chill() from
msleep() to hrtimer_nanosleep().
@@ -34,7 +34,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
--- a/kernel/time/hrtimer.c
+++ b/kernel/time/hrtimer.c
@@ -1669,12 +1669,13 @@ void hrtimer_init_sleeper(struct hrtimer
@@ -1649,12 +1649,13 @@ void hrtimer_init_sleeper(struct hrtimer
}
EXPORT_SYMBOL_GPL(hrtimer_init_sleeper);
@ -50,7 +50,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
hrtimer_start_expires(&t->timer, mode);
if (likely(t->task))
@@ -1716,7 +1717,8 @@ long __sched hrtimer_nanosleep_restart(s
@@ -1696,7 +1697,8 @@ long __sched hrtimer_nanosleep_restart(s
HRTIMER_MODE_ABS);
hrtimer_set_expires_tv64(&t.timer, restart->nanosleep.expires);
@ -60,7 +60,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
goto out;
rmtp = restart->nanosleep.rmtp;
@@ -1733,8 +1735,10 @@ long __sched hrtimer_nanosleep_restart(s
@@ -1713,8 +1715,10 @@ long __sched hrtimer_nanosleep_restart(s
return ret;
}
@ -73,7 +73,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
{
struct restart_block *restart;
struct hrtimer_sleeper t;
@@ -1747,7 +1751,7 @@ long hrtimer_nanosleep(struct timespec *
@@ -1727,7 +1731,7 @@ long hrtimer_nanosleep(struct timespec *
hrtimer_init_on_stack(&t.timer, clockid, mode);
hrtimer_set_expires_range_ns(&t.timer, timespec_to_ktime(*rqtp), slack);
@ -82,7 +82,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
goto out;
/* Absolute timers do not update the rmtp value and restart: */
@@ -1774,6 +1778,12 @@ long hrtimer_nanosleep(struct timespec *
@@ -1754,6 +1758,12 @@ long hrtimer_nanosleep(struct timespec *
return ret;
}
@ -95,7 +95,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
SYSCALL_DEFINE2(nanosleep, struct timespec __user *, rqtp,
struct timespec __user *, rmtp)
{
@@ -1800,7 +1810,8 @@ void cpu_chill(void)
@@ -1780,7 +1790,8 @@ void cpu_chill(void)
unsigned int freeze_flag = current->flags & PF_NOFREEZE;
current->flags |= PF_NOFREEZE;
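The tail of this hunk is cut off above. A hedged sketch of how the resulting cpu_chill() plausibly reads, given the description (the exact argument order of __hrtimer_nanosleep() is an assumption, not verbatim from the patch):

/* Hedged sketch, not verbatim: sleep one tick in TASK_UNINTERRUPTIBLE so
 * a pending signal cannot turn cpu_chill() into a busy-wait loop. */
void cpu_chill(void)
{
	struct timespec tu = {
		.tv_nsec = NSEC_PER_MSEC,
	};
	unsigned int freeze_flag = current->flags & PF_NOFREEZE;

	current->flags |= PF_NOFREEZE;
	__hrtimer_nanosleep(&tu, NULL, HRTIMER_MODE_REL, CLOCK_MONOTONIC,
			    TASK_UNINTERRUPTIBLE);
	if (!freeze_flag)
		current->flags &= ~PF_NOFREEZE;
}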

View File

@ -1,7 +1,7 @@
From: Tiejun Chen <tiejun.chen@windriver.com>
Subject: cpu_down: move migrate_enable() back
Date: Thu, 7 Nov 2013 10:06:07 +0800
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
Commit 08c1ab68, "hotplug-use-migrate-disable.patch", intends to
use migrate_enable()/migrate_disable() to replace that combination
@ -35,7 +35,7 @@ Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com>
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -1125,6 +1125,7 @@ static int __ref _cpu_down(unsigned int
@@ -1117,6 +1117,7 @@ static int __ref _cpu_down(unsigned int
goto restore_cpus;
}
@ -43,7 +43,7 @@ Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com>
cpu_hotplug_begin();
ret = cpu_unplug_begin(cpu);
if (ret) {
@@ -1172,7 +1173,6 @@ static int __ref _cpu_down(unsigned int
@@ -1164,7 +1165,6 @@ static int __ref _cpu_down(unsigned int
cpu_unplug_done(cpu);
out_cancel:
cpu_hotplug_done();

View File

@ -1,7 +1,7 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Thu, 9 Apr 2015 15:23:01 +0200
Subject: cpufreq: drop K8's driver from beeing selected
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
Ralf posted a picture of a backtrace from
@ -22,7 +22,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
--- a/drivers/cpufreq/Kconfig.x86
+++ b/drivers/cpufreq/Kconfig.x86
@@ -123,7 +123,7 @@ config X86_POWERNOW_K7_ACPI
@@ -124,7 +124,7 @@ config X86_POWERNOW_K7_ACPI
config X86_POWERNOW_K8
tristate "AMD Opteron/Athlon64 PowerNow!"

View File

@ -1,7 +1,7 @@
Subject: cpumask: Disable CONFIG_CPUMASK_OFFSTACK for RT
From: Thomas Gleixner <tglx@linutronix.de>
Date: Wed, 14 Dec 2011 01:03:49 +0100
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
We can't deal with the cpumask allocations which happen in atomic
context (see arch/x86/kernel/apic/io_apic.c) on RT right now.
@ -14,7 +14,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -892,7 +892,7 @@ config IOMMU_HELPER
@@ -888,7 +888,7 @@ config IOMMU_HELPER
config MAXSMP
bool "Enable Maximum number of SMP Processors and NUMA Nodes"
depends on X86_64 && SMP && DEBUG_KERNEL
@ -25,7 +25,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
If unsure, say N.
--- a/lib/Kconfig
+++ b/lib/Kconfig
@@ -397,6 +397,7 @@ config CHECK_SIGNATURE
@@ -400,6 +400,7 @@ config CHECK_SIGNATURE
config CPUMASK_OFFSTACK
bool "Force CPU masks off stack" if DEBUG_PER_CPU_MAPS

View File

@ -1,7 +1,7 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Fri, 21 Feb 2014 17:24:04 +0100
Subject: crypto: Reduce preempt disabled regions, more algos
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
Don Estabrook reported
| kernel: WARNING: CPU: 2 PID: 858 at kernel/sched/core.c:2428 migrate_disable+0xed/0x100()

View File

@ -1,7 +1,7 @@
Subject: debugobjects: Make RT aware
From: Thomas Gleixner <tglx@linutronix.de>
Date: Sun, 17 Jul 2011 21:41:35 +0200
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
Avoid filling the pool / allocating memory with irqs off().
@ -12,7 +12,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
--- a/lib/debugobjects.c
+++ b/lib/debugobjects.c
@@ -309,7 +309,10 @@ static void
@@ -308,7 +308,10 @@ static void
struct debug_obj *obj;
unsigned long flags;
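The changed lines themselves are elided by the rendering; a hedged sketch of the usual -RT fix at this spot (assumed, not verbatim):

	/* Hedged sketch: refill the object pool only from preemptible
	 * context, so no allocation runs with irqs off on -RT. */
	if (preempt_count() == 0 && !irqs_disabled())
		fill_pool();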

View File

@ -1,7 +1,7 @@
Subject: dm: Make rt aware
From: Thomas Gleixner <tglx@linutronix.de>
Date: Mon, 14 Nov 2011 23:06:09 +0100
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
Use the BUG_ON_NORT variant for the irq_disabled() checks. RT has
interrupts legitimately enabled here as we can't deadlock against the
@ -11,12 +11,12 @@ Reported-by: Luis Claudio R. Goncalves <lclaudio@uudg.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
drivers/md/dm.c | 2 +-
drivers/md/dm-rq.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -2187,7 +2187,7 @@ static void dm_request_fn(struct request
--- a/drivers/md/dm-rq.c
+++ b/drivers/md/dm-rq.c
@@ -802,7 +802,7 @@ static void dm_old_request_fn(struct req
/* Establish tio->ti before queuing work (map_tio_request) */
tio->ti = ti;
queue_kthread_work(&md->kworker, &tio->work);

View File

@ -2,7 +2,7 @@ From: Mike Galbraith <umgwanakikbuti@gmail.com>
Date: Thu, 31 Mar 2016 04:08:28 +0200
Subject: [PATCH] drivers/block/zram: Replace bit spinlocks with rtmutex
for -rt
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
They're nondeterministic, and lead to ___might_sleep() splats in -rt.
OTOH, they're a lot less wasteful than an rtmutex per page.
@ -16,7 +16,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -520,6 +520,8 @@ static struct zram_meta *zram_meta_alloc
@@ -519,6 +519,8 @@ static struct zram_meta *zram_meta_alloc
goto out_error;
}
@ -25,9 +25,9 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
return meta;
out_error:
@@ -568,12 +570,12 @@ static int zram_decompress_page(struct z
@@ -567,12 +569,12 @@ static int zram_decompress_page(struct z
unsigned long handle;
size_t size;
unsigned int size;
- bit_spin_lock(ZRAM_ACCESS, &meta->table[index].value);
+ zram_lock_table(&meta->table[index]);
@ -40,16 +40,16 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
clear_page(mem);
return 0;
}
@@ -584,7 +586,7 @@ static int zram_decompress_page(struct z
else
ret = zcomp_decompress(zram->comp, cmem, size, mem);
@@ -587,7 +589,7 @@ static int zram_decompress_page(struct z
zcomp_stream_put(zram->comp);
}
zs_unmap_object(meta->mem_pool, handle);
- bit_spin_unlock(ZRAM_ACCESS, &meta->table[index].value);
+ zram_unlock_table(&meta->table[index]);
/* Should NEVER happen. Return bio error if it does. */
if (unlikely(ret)) {
@@ -604,14 +606,14 @@ static int zram_bvec_read(struct zram *z
@@ -607,14 +609,14 @@ static int zram_bvec_read(struct zram *z
struct zram_meta *meta = zram->meta;
page = bvec->bv_page;
@ -67,7 +67,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
if (is_partial_io(bvec))
/* Use a temporary buffer to decompress the page */
@@ -689,10 +691,10 @@ static int zram_bvec_write(struct zram *
@@ -691,10 +693,10 @@ static int zram_bvec_write(struct zram *
if (user_mem)
kunmap_atomic(user_mem);
/* Free memory associated with this sector now. */
@ -80,7 +80,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
atomic64_inc(&zram->stats.zero_pages);
ret = 0;
@@ -752,12 +754,12 @@ static int zram_bvec_write(struct zram *
@@ -785,12 +787,12 @@ static int zram_bvec_write(struct zram *
* Free memory associated with this sector
* before overwriting unused sectors.
*/
@ -95,7 +95,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
/* Update stats */
atomic64_add(clen, &zram->stats.compr_data_size);
@@ -800,9 +802,9 @@ static void zram_bio_discard(struct zram
@@ -833,9 +835,9 @@ static void zram_bio_discard(struct zram
}
while (n >= PAGE_SIZE) {
@ -107,7 +107,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
atomic64_inc(&zram->stats.notify_free);
index++;
n -= PAGE_SIZE;
@@ -928,9 +930,9 @@ static void zram_slot_free_notify(struct
@@ -964,9 +966,9 @@ static void zram_slot_free_notify(struct
zram = bdev->bd_disk->private_data;
meta = zram->meta;
@ -121,7 +121,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
--- a/drivers/block/zram/zram_drv.h
+++ b/drivers/block/zram/zram_drv.h
@@ -72,6 +72,9 @@ enum zram_pageflags {
@@ -73,6 +73,9 @@ enum zram_pageflags {
struct zram_table_entry {
unsigned long handle;
unsigned long value;
@ -131,7 +131,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
};
struct zram_stats {
@@ -119,4 +122,42 @@ struct zram {
@@ -120,4 +123,42 @@ struct zram {
*/
bool claim; /* Protected by bdev->bd_mutex */
};
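The 42 lines added to zram_drv.h are elided above. Based on the call sites in zram_drv.c, the new helpers plausibly look like the sketch below (the spinlock field name is an assumption):

/* Hedged sketch: on -RT a per-entry spinlock_t (an rtmutex underneath)
 * replaces the bit spinlock; on !RT the original bit_spin_lock() stays. */
#ifdef CONFIG_PREEMPT_RT_BASE
static inline void zram_lock_table(struct zram_table_entry *table)
{
	spin_lock(&table->lock);	/* 'lock' field name assumed */
}

static inline void zram_unlock_table(struct zram_table_entry *table)
{
	spin_unlock(&table->lock);
}
#else
static inline void zram_lock_table(struct zram_table_entry *table)
{
	bit_spin_lock(ZRAM_ACCESS, &table->value);
}

static inline void zram_unlock_table(struct zram_table_entry *table)
{
	bit_spin_unlock(ZRAM_ACCESS, &table->value);
}
#endif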

View File

@ -1,7 +1,7 @@
From: Ingo Molnar <mingo@elte.hu>
Date: Fri, 3 Jul 2009 08:29:24 -0500
Subject: drivers/net: Use disable_irq_nosync() in 8139too
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
Use disable_irq_nosync() instead of disable_irq() as this might be
called in atomic context with netpoll.
@ -15,7 +15,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
--- a/drivers/net/ethernet/realtek/8139too.c
+++ b/drivers/net/ethernet/realtek/8139too.c
@@ -2229,7 +2229,7 @@ static void rtl8139_poll_controller(stru
@@ -2233,7 +2233,7 @@ static void rtl8139_poll_controller(stru
struct rtl8139_private *tp = netdev_priv(dev);
const int irq = tp->pci_dev->irq;

View File

@ -1,7 +1,7 @@
From: Steven Rostedt <rostedt@goodmis.org>
Date: Fri, 3 Jul 2009 08:30:00 -0500
Subject: drivers/net: vortex fix locking issues
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
Argh, cut and paste wasn't enough...

View File

@ -1,7 +1,7 @@
From: Ingo Molnar <mingo@elte.hu>
Date: Fri, 3 Jul 2009 08:29:30 -0500
Subject: drivers: random: Reduce preempt disabled region
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
No need to keep preemption disabled across the whole function.
@ -14,7 +14,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -796,8 +796,6 @@ static void add_timer_randomness(struct
@@ -1028,8 +1028,6 @@ static void add_timer_randomness(struct
} sample;
long delta, delta2, delta3;
@ -23,7 +23,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
sample.jiffies = jiffies;
sample.cycles = random_get_entropy();
sample.num = num;
@@ -838,7 +836,6 @@ static void add_timer_randomness(struct
@@ -1070,7 +1068,6 @@ static void add_timer_randomness(struct
*/
credit_entropy_bits(r, min_t(int, fls(delta>>1), 11));
}

View File

@ -1,7 +1,7 @@
Subject: tty/serial/omap: Make the locking RT aware
From: Thomas Gleixner <tglx@linutronix.de>
Date: Thu, 28 Jul 2011 13:32:57 +0200
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
The lock is a sleeping lock and local_irq_save() is not the
optimisation we are looking for. Redo it to make it work on -RT and

View File

@ -1,7 +1,7 @@
Subject: tty/serial/pl011: Make the locking work on RT
From: Thomas Gleixner <tglx@linutronix.de>
Date: Tue, 08 Jan 2013 21:36:51 +0100
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
The lock is a sleeping lock and local_irq_save() is not the optimisation
we are looking for. Redo it to make it work on -RT and non-RT.
@ -13,7 +13,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
--- a/drivers/tty/serial/amba-pl011.c
+++ b/drivers/tty/serial/amba-pl011.c
@@ -2166,13 +2166,19 @@ pl011_console_write(struct console *co,
@@ -2167,13 +2167,19 @@ pl011_console_write(struct console *co,
clk_enable(uap->clk);
@ -36,7 +36,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
/*
* First save the CR then disable the interrupts
@@ -2196,8 +2202,7 @@ pl011_console_write(struct console *co,
@@ -2197,8 +2203,7 @@ pl011_console_write(struct console *co,
pl011_write(old_cr, uap, REG_CR);
if (locked)
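The full locking dance is split across the two hunks above; a hedged reconstruction of the resulting pattern (a sketch, not the verbatim patch body):

	/* Hedged sketch: only !RT keeps the trylock escape hatch for
	 * sysrq/oops context; on -RT the sleeping lock is always taken. */
	int locked = 1;
	unsigned long flags;

#ifndef CONFIG_PREEMPT_RT_FULL
	if (uap->port.sysrq || oops_in_progress)
		locked = spin_trylock_irqsave(&uap->port.lock, flags);
	else
#endif
		spin_lock_irqsave(&uap->port.lock, flags);

	/* ... emit the console characters ... */

	if (locked)
		spin_unlock_irqrestore(&uap->port.lock, flags);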

View File

@ -1,7 +1,7 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Thu, 25 Apr 2013 18:12:52 +0200
Subject: drm/i915: drop trace_i915_gem_ring_dispatch on rt
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
This tracepoint is responsible for:
@ -47,7 +47,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -1314,7 +1314,9 @@ i915_gem_ringbuffer_submission(struct i9
@@ -1302,7 +1302,9 @@ i915_gem_ringbuffer_submission(struct i9
if (ret)
return ret;
@ -56,4 +56,4 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+#endif
i915_gem_execbuffer_move_to_active(vmas, params->request);
i915_gem_execbuffer_retire_commands(params);

View File

@ -1,7 +1,7 @@
Subject: drm,i915: Use local_lock/unlock_irq() in intel_pipe_update_start/end()
From: Mike Galbraith <umgwanakikbuti@gmail.com>
Date: Sat, 27 Feb 2016 09:01:42 +0100
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
[ 8.014039] BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:918
@ -70,7 +70,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
static bool
format_is_yuv(uint32_t format)
@@ -64,6 +65,8 @@ static int usecs_to_scanlines(const stru
@@ -64,6 +65,8 @@ int intel_usecs_to_scanlines(const struc
1000 * adjusted_mode->crtc_htotal);
}
@ -79,8 +79,8 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
/**
* intel_pipe_update_start() - start update of a set of display registers
* @crtc: the crtc of which the registers are going to be updated
@@ -96,7 +99,7 @@ void intel_pipe_update_start(struct inte
min = vblank_start - usecs_to_scanlines(adjusted_mode, 100);
@@ -94,7 +97,7 @@ void intel_pipe_update_start(struct inte
min = vblank_start - intel_usecs_to_scanlines(adjusted_mode, 100);
max = vblank_start - 1;
- local_irq_disable();
@ -88,7 +88,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
if (min <= 0 || max <= 0)
return;
@@ -126,11 +129,11 @@ void intel_pipe_update_start(struct inte
@@ -124,11 +127,11 @@ void intel_pipe_update_start(struct inte
break;
}
@ -102,9 +102,9 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
}
finish_wait(wq, &wait);
@@ -164,7 +167,7 @@ void intel_pipe_update_end(struct intel_
trace_i915_pipe_update_end(crtc, end_vbl_count, scanline_end);
@@ -180,7 +183,7 @@ void intel_pipe_update_end(struct intel_
crtc->base.state->event = NULL;
}
- local_irq_enable();
+ local_unlock_irq(pipe_update_lock);

View File

@ -1,7 +1,7 @@
Subject: drm,radeon,i915: Use preempt_disable/enable_rt() where recommended
From: Mike Galbraith <umgwanakikbuti@gmail.com>
Date: Sat, 27 Feb 2016 08:09:11 +0100
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
DRM folks identified the spots, so use them.
@ -16,7 +16,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -830,6 +830,7 @@ static int i915_get_crtc_scanoutpos(stru
@@ -812,6 +812,7 @@ static int i915_get_crtc_scanoutpos(stru
spin_lock_irqsave(&dev_priv->uncore.lock, irqflags);
/* preempt_disable_rt() should go right here in PREEMPT_RT patchset. */
@ -24,7 +24,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
/* Get optional system timestamp before query. */
if (stime)
@@ -881,6 +882,7 @@ static int i915_get_crtc_scanoutpos(stru
@@ -863,6 +864,7 @@ static int i915_get_crtc_scanoutpos(stru
*etime = ktime_get();
/* preempt_enable_rt() should go right here in PREEMPT_RT patchset. */
@ -34,7 +34,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
--- a/drivers/gpu/drm/radeon/radeon_display.c
+++ b/drivers/gpu/drm/radeon/radeon_display.c
@@ -1863,6 +1863,7 @@ int radeon_get_crtc_scanoutpos(struct dr
@@ -1869,6 +1869,7 @@ int radeon_get_crtc_scanoutpos(struct dr
struct radeon_device *rdev = dev->dev_private;
/* preempt_disable_rt() should go right here in PREEMPT_RT patchset. */
@ -42,7 +42,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
/* Get optional system timestamp before query. */
if (stime)
@@ -1955,6 +1956,7 @@ int radeon_get_crtc_scanoutpos(struct dr
@@ -1961,6 +1962,7 @@ int radeon_get_crtc_scanoutpos(struct dr
*etime = ktime_get();
/* preempt_enable_rt() should go right here in PREEMPT_RT patchset. */

View File

@ -1,7 +1,7 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Sun, 16 Aug 2015 14:27:50 +0200
Subject: dump stack: don't disable preemption during trace
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
I see large latencies here during a stack dump on x86. The
preempt_disable() and get_cpu() should forbid moving the task to another
@ -29,7 +29,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
int graph = 0;
u32 *prev_esp;
@@ -86,7 +86,7 @@ void dump_trace(struct task_struct *task
@@ -84,7 +84,7 @@ void dump_trace(struct task_struct *task
break;
touch_nmi_watchdog();
}
@ -46,19 +46,19 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
{
- const unsigned cpu = get_cpu();
+ const unsigned cpu = get_cpu_light();
struct thread_info *tinfo;
unsigned long *irq_stack = (unsigned long *)per_cpu(irq_stack_ptr, cpu);
unsigned long dummy;
@@ -241,7 +241,7 @@ void dump_trace(struct task_struct *task
unsigned used = 0;
@@ -239,7 +239,7 @@ void dump_trace(struct task_struct *task
* This handles the process stack:
*/
bp = ops->walk_stack(tinfo, stack, bp, ops, data, NULL, &graph);
bp = ops->walk_stack(task, stack, bp, ops, data, NULL, &graph);
- put_cpu();
+ put_cpu_light();
}
EXPORT_SYMBOL(dump_trace);
@@ -255,7 +255,7 @@ show_stack_log_lvl(struct task_struct *t
@@ -253,7 +253,7 @@ show_stack_log_lvl(struct task_struct *t
int cpu;
int i;
@ -67,8 +67,8 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
cpu = smp_processor_id();
irq_stack_end = (unsigned long *)(per_cpu(irq_stack_ptr, cpu));
@@ -291,7 +291,7 @@ show_stack_log_lvl(struct task_struct *t
pr_cont(" %016lx", *stack++);
@@ -299,7 +299,7 @@ show_stack_log_lvl(struct task_struct *t
stack++;
touch_nmi_watchdog();
}
- preempt_enable();
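For reference, a hedged sketch of the get_cpu_light()/put_cpu_light() pair this patch relies on (defined elsewhere in the series; the exact definition may differ):

/* Hedged sketch: on -RT only migration is disabled, so the dump code
 * stays preemptible; on !RT this degrades to plain get_cpu()/put_cpu(). */
#ifdef CONFIG_PREEMPT_RT_FULL
# define get_cpu_light()	({ migrate_disable(); smp_processor_id(); })
# define put_cpu_light()	migrate_enable()
#else
# define get_cpu_light()	get_cpu()
# define put_cpu_light()	put_cpu()
#endif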

View File

@ -1,7 +1,7 @@
Subject: fs/epoll: Do not disable preemption on RT
From: Thomas Gleixner <tglx@linutronix.de>
Date: Fri, 08 Jul 2011 16:35:35 +0200
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
ep_call_nested() takes a sleeping lock so we can't disable preemption.
The light version is enough since ep_call_nested() doesn't mind being

View File

@ -1,7 +1,7 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Mon, 16 Feb 2015 18:49:10 +0100
Subject: fs/aio: simple simple work
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
|BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:768
|in_atomic(): 1, irqs_disabled(): 0, pid: 26, name: rcuos/2
@ -47,7 +47,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
/*
* signals when all in-flight requests are done
@@ -253,6 +254,7 @@ static int __init aio_setup(void)
@@ -258,6 +259,7 @@ static int __init aio_setup(void)
.mount = aio_mount,
.kill_sb = kill_anon_super,
};
@ -55,7 +55,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
aio_mnt = kern_mount(&aio_fs);
if (IS_ERR(aio_mnt))
panic("Failed to create aio fs mount.");
@@ -568,9 +570,9 @@ static int kiocb_cancel(struct aio_kiocb
@@ -578,9 +580,9 @@ static int kiocb_cancel(struct aio_kiocb
return cancel(&kiocb->common);
}
@ -67,7 +67,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
pr_debug("freeing %p\n", ctx);
@@ -589,8 +591,8 @@ static void free_ioctx_reqs(struct percp
@@ -599,8 +601,8 @@ static void free_ioctx_reqs(struct percp
if (ctx->rq_wait && atomic_dec_and_test(&ctx->rq_wait->count))
complete(&ctx->rq_wait->comp);
@ -78,7 +78,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
}
/*
@@ -598,9 +600,9 @@ static void free_ioctx_reqs(struct percp
@@ -608,9 +610,9 @@ static void free_ioctx_reqs(struct percp
* and ctx->users has dropped to 0, so we know no more kiocbs can be submitted -
* now it's safe to cancel any that need to be.
*/
@ -90,7 +90,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
struct aio_kiocb *req;
spin_lock_irq(&ctx->ctx_lock);
@@ -619,6 +621,14 @@ static void free_ioctx_users(struct perc
@@ -629,6 +631,14 @@ static void free_ioctx_users(struct perc
percpu_ref_put(&ctx->reqs);
}
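The final added lines are elided above; presumably they re-introduce free_ioctx_users() as a thin shim that defers the cancel work (the free_work member and the _work suffix are assumptions based on the renamed function in the hunks):

/* Hedged sketch, not verbatim: punt the ctx teardown to the swork
 * thread, because percpu_ref callbacks can fire from atomic context. */
static void free_ioctx_users(struct percpu_ref *ref)
{
	struct kioctx *ctx = container_of(ref, struct kioctx, users);

	INIT_SWORK(&ctx->free_work, free_ioctx_users_work);
	swork_queue(&ctx->free_work);
}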

View File

@ -1,7 +1,7 @@
Subject: block: Turn off warning which is bogus on RT
From: Thomas Gleixner <tglx@linutronix.de>
Date: Tue, 14 Jun 2011 17:05:09 +0200
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
On -RT this context always runs with IRQs enabled, so ignore the warning on -RT.

View File

@ -0,0 +1,24 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Wed, 14 Sep 2016 11:55:23 +0200
Subject: fs/dcache: include wait.h
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
Since commit d9171b934526 ("parallel lookups machinery, part 4 (and
last)") dcache.h is using but does not include wait.h. It works as long
as it is included somehow earlier and fails otherwise.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
include/linux/dcache.h | 1 +
1 file changed, 1 insertion(+)
--- a/include/linux/dcache.h
+++ b/include/linux/dcache.h
@@ -11,6 +11,7 @@
#include <linux/rcupdate.h>
#include <linux/lockref.h>
#include <linux/stringhash.h>
+#include <linux/wait.h>
struct path;
struct vfsmount;

View File

@ -0,0 +1,28 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Wed, 14 Sep 2016 17:57:03 +0200
Subject: [PATCH] fs/dcache: init in_lookup_hashtable
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
in_lookup_hashtable was introduced in commit 94bdd655caba ("parallel
lookups machinery, part 3") and never initialized but since it is in
the data it is all zeros. But we need this for -RT.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
fs/dcache.c | 5 +++++
1 file changed, 5 insertions(+)
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -3601,6 +3601,11 @@ EXPORT_SYMBOL(d_genocide);
void __init vfs_caches_init_early(void)
{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(in_lookup_hashtable); i++)
+ INIT_HLIST_BL_HEAD(&in_lookup_hashtable[i]);
+
dcache_init_early();
inode_init_early();
}

View File

@ -1,7 +1,7 @@
Subject: fs: dcache: Use cpu_chill() in trylock loops
From: Thomas Gleixner <tglx@linutronix.de>
Date: Wed, 07 Mar 2012 21:00:34 +0100
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
Retry loops on RT might loop forever when the modifying side was
preempted. Use cpu_chill() instead of cpu_relax() to let the system
@ -12,9 +12,9 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
fs/autofs4/autofs_i.h | 1 +
fs/autofs4/expire.c | 2 +-
fs/dcache.c | 5 +++--
fs/dcache.c | 20 ++++++++++++++++----
fs/namespace.c | 3 ++-
4 files changed, 7 insertions(+), 4 deletions(-)
4 files changed, 20 insertions(+), 6 deletions(-)
--- a/fs/autofs4/autofs_i.h
+++ b/fs/autofs4/autofs_i.h
@ -47,16 +47,38 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
#include <linux/slab.h>
#include <linux/init.h>
#include <linux/hash.h>
@@ -578,7 +579,7 @@ static struct dentry *dentry_kill(struct
@@ -750,6 +751,8 @@ static inline bool fast_dput(struct dent
*/
void dput(struct dentry *dentry)
{
+ struct dentry *parent;
+
if (unlikely(!dentry))
return;
failed:
spin_unlock(&dentry->d_lock);
- cpu_relax();
+ cpu_chill();
return dentry; /* try again with same dentry */
@@ -788,9 +791,18 @@ void dput(struct dentry *dentry)
return;
kill_it:
- dentry = dentry_kill(dentry);
- if (dentry) {
- cond_resched();
+ parent = dentry_kill(dentry);
+ if (parent) {
+ int r;
+
+ if (parent == dentry) {
+ /* the task with the highest priority won't schedule */
+ r = cond_resched();
+ if (!r)
+ cpu_chill();
+ } else {
+ dentry = parent;
+ }
goto repeat;
}
}
@@ -2316,7 +2317,7 @@ void d_delete(struct dentry * dentry)
@@ -2321,7 +2333,7 @@ void d_delete(struct dentry * dentry)
if (dentry->d_lockref.count == 1) {
if (!spin_trylock(&inode->i_lock)) {
spin_unlock(&dentry->d_lock);
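The replacement line is elided at the end of this hunk; the resulting trylock idiom in d_delete(), sketched under that assumption:

	/* Hedged sketch of the retry path after this change. */
	if (dentry->d_lockref.count == 1) {
		if (!spin_trylock(&inode->i_lock)) {
			spin_unlock(&dentry->d_lock);
			cpu_chill();	/* was cpu_relax(); sleeps on -RT */
			goto again;
		}
		/* ... */
	}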

View File

@ -0,0 +1,215 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Wed, 14 Sep 2016 14:35:49 +0200
Subject: [PATCH] fs/dcache: use swait_queue instead of waitqueue
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
__d_lookup_done() invokes wake_up_all() while holding a hlist_bl_lock()
which disables preemption. As a workaround, convert it to swait.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
fs/cifs/readdir.c | 2 +-
fs/dcache.c | 27 +++++++++++++++------------
fs/fuse/dir.c | 2 +-
fs/namei.c | 4 ++--
fs/nfs/dir.c | 4 ++--
fs/nfs/unlink.c | 4 ++--
fs/proc/base.c | 2 +-
fs/proc/proc_sysctl.c | 2 +-
include/linux/dcache.h | 4 ++--
include/linux/nfs_xdr.h | 2 +-
kernel/sched/swait.c | 1 +
11 files changed, 29 insertions(+), 25 deletions(-)
--- a/fs/cifs/readdir.c
+++ b/fs/cifs/readdir.c
@@ -80,7 +80,7 @@ cifs_prime_dcache(struct dentry *parent,
struct inode *inode;
struct super_block *sb = parent->d_sb;
struct cifs_sb_info *cifs_sb = CIFS_SB(sb);
- DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wq);
+ DECLARE_SWAIT_QUEUE_HEAD_ONSTACK(wq);
cifs_dbg(FYI, "%s: for %s\n", __func__, name->name);
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -2393,21 +2393,24 @@ static inline void end_dir_add(struct in
static void d_wait_lookup(struct dentry *dentry)
{
- if (d_in_lookup(dentry)) {
- DECLARE_WAITQUEUE(wait, current);
- add_wait_queue(dentry->d_wait, &wait);
- do {
- set_current_state(TASK_UNINTERRUPTIBLE);
- spin_unlock(&dentry->d_lock);
- schedule();
- spin_lock(&dentry->d_lock);
- } while (d_in_lookup(dentry));
- }
+ struct swait_queue __wait;
+
+ if (!d_in_lookup(dentry))
+ return;
+
+ INIT_LIST_HEAD(&__wait.task_list);
+ do {
+ prepare_to_swait(dentry->d_wait, &__wait, TASK_UNINTERRUPTIBLE);
+ spin_unlock(&dentry->d_lock);
+ schedule();
+ spin_lock(&dentry->d_lock);
+ } while (d_in_lookup(dentry));
+ finish_swait(dentry->d_wait, &__wait);
}
struct dentry *d_alloc_parallel(struct dentry *parent,
const struct qstr *name,
- wait_queue_head_t *wq)
+ struct swait_queue_head *wq)
{
unsigned int hash = name->hash;
struct hlist_bl_head *b = in_lookup_hash(parent, hash);
@@ -2516,7 +2519,7 @@ void __d_lookup_done(struct dentry *dent
hlist_bl_lock(b);
dentry->d_flags &= ~DCACHE_PAR_LOOKUP;
__hlist_bl_del(&dentry->d_u.d_in_lookup_hash);
- wake_up_all(dentry->d_wait);
+ swake_up_all(dentry->d_wait);
dentry->d_wait = NULL;
hlist_bl_unlock(b);
INIT_HLIST_NODE(&dentry->d_u.d_alias);
--- a/fs/fuse/dir.c
+++ b/fs/fuse/dir.c
@@ -1174,7 +1174,7 @@ static int fuse_direntplus_link(struct f
struct inode *dir = d_inode(parent);
struct fuse_conn *fc;
struct inode *inode;
- DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wq);
+ DECLARE_SWAIT_QUEUE_HEAD_ONSTACK(wq);
if (!o->nodeid) {
/*
--- a/fs/namei.c
+++ b/fs/namei.c
@@ -1629,7 +1629,7 @@ static struct dentry *lookup_slow(const
{
struct dentry *dentry = ERR_PTR(-ENOENT), *old;
struct inode *inode = dir->d_inode;
- DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wq);
+ DECLARE_SWAIT_QUEUE_HEAD_ONSTACK(wq);
inode_lock_shared(inode);
/* Don't go there if it's already dead */
@@ -3086,7 +3086,7 @@ static int lookup_open(struct nameidata
struct dentry *dentry;
int error, create_error = 0;
umode_t mode = op->mode;
- DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wq);
+ DECLARE_SWAIT_QUEUE_HEAD_ONSTACK(wq);
if (unlikely(IS_DEADDIR(dir_inode)))
return -ENOENT;
--- a/fs/nfs/dir.c
+++ b/fs/nfs/dir.c
@@ -485,7 +485,7 @@ static
void nfs_prime_dcache(struct dentry *parent, struct nfs_entry *entry)
{
struct qstr filename = QSTR_INIT(entry->name, entry->len);
- DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wq);
+ DECLARE_SWAIT_QUEUE_HEAD_ONSTACK(wq);
struct dentry *dentry;
struct dentry *alias;
struct inode *dir = d_inode(parent);
@@ -1484,7 +1484,7 @@ int nfs_atomic_open(struct inode *dir, s
struct file *file, unsigned open_flags,
umode_t mode, int *opened)
{
- DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wq);
+ DECLARE_SWAIT_QUEUE_HEAD_ONSTACK(wq);
struct nfs_open_context *ctx;
struct dentry *res;
struct iattr attr = { .ia_valid = ATTR_OPEN };
--- a/fs/nfs/unlink.c
+++ b/fs/nfs/unlink.c
@@ -12,7 +12,7 @@
#include <linux/sunrpc/clnt.h>
#include <linux/nfs_fs.h>
#include <linux/sched.h>
-#include <linux/wait.h>
+#include <linux/swait.h>
#include <linux/namei.h>
#include <linux/fsnotify.h>
@@ -205,7 +205,7 @@ nfs_async_unlink(struct dentry *dentry,
goto out_free_name;
}
data->res.dir_attr = &data->dir_attr;
- init_waitqueue_head(&data->wq);
+ init_swait_queue_head(&data->wq);
status = -EBUSY;
spin_lock(&dentry->d_lock);
--- a/fs/proc/base.c
+++ b/fs/proc/base.c
@@ -1819,7 +1819,7 @@ bool proc_fill_cache(struct file *file,
child = d_hash_and_lookup(dir, &qname);
if (!child) {
- DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wq);
+ DECLARE_SWAIT_QUEUE_HEAD_ONSTACK(wq);
child = d_alloc_parallel(dir, &qname, &wq);
if (IS_ERR(child))
goto end_instantiate;
--- a/fs/proc/proc_sysctl.c
+++ b/fs/proc/proc_sysctl.c
@@ -627,7 +627,7 @@ static bool proc_sys_fill_cache(struct f
child = d_lookup(dir, &qname);
if (!child) {
- DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wq);
+ DECLARE_SWAIT_QUEUE_HEAD_ONSTACK(wq);
child = d_alloc_parallel(dir, &qname, &wq);
if (IS_ERR(child))
return false;
--- a/include/linux/dcache.h
+++ b/include/linux/dcache.h
@@ -101,7 +101,7 @@ struct dentry {
union {
struct list_head d_lru; /* LRU list */
- wait_queue_head_t *d_wait; /* in-lookup ones only */
+ struct swait_queue_head *d_wait; /* in-lookup ones only */
};
struct list_head d_child; /* child of parent list */
struct list_head d_subdirs; /* our children */
@@ -231,7 +231,7 @@ extern void d_set_d_op(struct dentry *de
extern struct dentry * d_alloc(struct dentry *, const struct qstr *);
extern struct dentry * d_alloc_pseudo(struct super_block *, const struct qstr *);
extern struct dentry * d_alloc_parallel(struct dentry *, const struct qstr *,
- wait_queue_head_t *);
+ struct swait_queue_head *);
extern struct dentry * d_splice_alias(struct inode *, struct dentry *);
extern struct dentry * d_add_ci(struct dentry *, struct inode *, struct qstr *);
extern struct dentry * d_exact_alias(struct dentry *, struct inode *);
--- a/include/linux/nfs_xdr.h
+++ b/include/linux/nfs_xdr.h
@@ -1484,7 +1484,7 @@ struct nfs_unlinkdata {
struct nfs_removeargs args;
struct nfs_removeres res;
struct dentry *dentry;
- wait_queue_head_t wq;
+ struct swait_queue_head wq;
struct rpc_cred *cred;
struct nfs_fattr dir_attr;
long timeout;
--- a/kernel/sched/swait.c
+++ b/kernel/sched/swait.c
@@ -74,6 +74,7 @@ void swake_up_all(struct swait_queue_hea
if (!swait_active(q))
return;
+ WARN_ON(irqs_disabled());
raw_spin_lock_irq(&q->lock);
list_splice_init(&q->task_list, &tmp);
while (!list_empty(&tmp)) {

View File

@ -1,7 +1,7 @@
From: Thomas Gleixner <tglx@linutronix.de>
Date: Fri, 18 Mar 2011 10:11:25 +0100
Subject: fs: jbd/jbd2: Make state lock and journal head lock rt safe
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
bit_spin_locks break under RT.

View File

@ -1,7 +1,7 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Mon, 17 Feb 2014 17:30:03 +0100
Subject: fs: jbd2: pull your plug when waiting for space
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
Two cp processes running in parallel managed to stall the ext4 fs. It seems that
journal code is either waiting for locks or sleeping waiting for

View File

@ -1,7 +1,7 @@
From: Thomas Gleixner <tglx@linutronix.de>
Date: Sun, 19 Jul 2009 08:44:27 -0500
Subject: fs: namespace preemption fix
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
On RT we cannot loop with preemption disabled here as
mnt_make_readonly() might have been preempted. We can safely enable

View File

@ -0,0 +1,139 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Thu, 15 Sep 2016 10:51:27 +0200
Subject: [PATCH] fs/nfs: turn rmdir_sem into a semaphore
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
The RW semaphore had a reader side which used the _non_owner version
because it most likely took the reader lock in one thread and released it
in another, which would cause lockdep to complain if the "regular"
version were used.
On -RT we need the owner because the rw lock is turned into an rtmutex.
Semaphores, on the other hand, are "plain simple" and should work as
expected. We can't have multiple readers, but on -RT we don't allow
multiple readers anyway, so that is not a loss.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
fs/nfs/dir.c | 8 ++++++++
fs/nfs/inode.c | 4 ++++
fs/nfs/unlink.c | 31 +++++++++++++++++++++++++++----
include/linux/nfs_fs.h | 4 ++++
4 files changed, 43 insertions(+), 4 deletions(-)
--- a/fs/nfs/dir.c
+++ b/fs/nfs/dir.c
@@ -1799,7 +1799,11 @@ int nfs_rmdir(struct inode *dir, struct
trace_nfs_rmdir_enter(dir, dentry);
if (d_really_is_positive(dentry)) {
+#ifdef CONFIG_PREEMPT_RT_BASE
+ down(&NFS_I(d_inode(dentry))->rmdir_sem);
+#else
down_write(&NFS_I(d_inode(dentry))->rmdir_sem);
+#endif
error = NFS_PROTO(dir)->rmdir(dir, &dentry->d_name);
/* Ensure the VFS deletes this inode */
switch (error) {
@@ -1809,7 +1813,11 @@ int nfs_rmdir(struct inode *dir, struct
case -ENOENT:
nfs_dentry_handle_enoent(dentry);
}
+#ifdef CONFIG_PREEMPT_RT_BASE
+ up(&NFS_I(d_inode(dentry))->rmdir_sem);
+#else
up_write(&NFS_I(d_inode(dentry))->rmdir_sem);
+#endif
} else
error = NFS_PROTO(dir)->rmdir(dir, &dentry->d_name);
trace_nfs_rmdir_exit(dir, dentry, error);
--- a/fs/nfs/inode.c
+++ b/fs/nfs/inode.c
@@ -1957,7 +1957,11 @@ static void init_once(void *foo)
nfsi->nrequests = 0;
nfsi->commit_info.ncommit = 0;
atomic_set(&nfsi->commit_info.rpcs_out, 0);
+#ifdef CONFIG_PREEMPT_RT_BASE
+ sema_init(&nfsi->rmdir_sem, 1);
+#else
init_rwsem(&nfsi->rmdir_sem);
+#endif
nfs4_init_once(nfsi);
}
--- a/fs/nfs/unlink.c
+++ b/fs/nfs/unlink.c
@@ -51,6 +51,29 @@ static void nfs_async_unlink_done(struct
rpc_restart_call_prepare(task);
}
+#ifdef CONFIG_PREEMPT_RT_BASE
+static void nfs_down_anon(struct semaphore *sema)
+{
+ down(sema);
+}
+
+static void nfs_up_anon(struct semaphore *sema)
+{
+ up(sema);
+}
+
+#else
+static void nfs_down_anon(struct rw_semaphore *rwsem)
+{
+ down_read_non_owner(rwsem);
+}
+
+static void nfs_up_anon(struct rw_semaphore *rwsem)
+{
+ up_read_non_owner(rwsem);
+}
+#endif
+
/**
* nfs_async_unlink_release - Release the sillydelete data.
* @task: rpc_task of the sillydelete
@@ -64,7 +87,7 @@ static void nfs_async_unlink_release(voi
struct dentry *dentry = data->dentry;
struct super_block *sb = dentry->d_sb;
- up_read_non_owner(&NFS_I(d_inode(dentry->d_parent))->rmdir_sem);
+ nfs_up_anon(&NFS_I(d_inode(dentry->d_parent))->rmdir_sem);
d_lookup_done(dentry);
nfs_free_unlinkdata(data);
dput(dentry);
@@ -117,10 +140,10 @@ static int nfs_call_unlink(struct dentry
struct inode *dir = d_inode(dentry->d_parent);
struct dentry *alias;
- down_read_non_owner(&NFS_I(dir)->rmdir_sem);
+ nfs_down_anon(&NFS_I(dir)->rmdir_sem);
alias = d_alloc_parallel(dentry->d_parent, &data->args.name, &data->wq);
if (IS_ERR(alias)) {
- up_read_non_owner(&NFS_I(dir)->rmdir_sem);
+ nfs_up_anon(&NFS_I(dir)->rmdir_sem);
return 0;
}
if (!d_in_lookup(alias)) {
@@ -142,7 +165,7 @@ static int nfs_call_unlink(struct dentry
ret = 0;
spin_unlock(&alias->d_lock);
dput(alias);
- up_read_non_owner(&NFS_I(dir)->rmdir_sem);
+ nfs_up_anon(&NFS_I(dir)->rmdir_sem);
/*
* If we'd displaced old cached devname, free it. At that
* point dentry is definitely not a root, so we won't need
--- a/include/linux/nfs_fs.h
+++ b/include/linux/nfs_fs.h
@@ -165,7 +165,11 @@ struct nfs_inode {
/* Readers: in-flight sillydelete RPC calls */
/* Writers: rmdir */
+#ifdef CONFIG_PREEMPT_RT_BASE
+ struct semaphore rmdir_sem;
+#else
struct rw_semaphore rmdir_sem;
+#endif
#if IS_ENABLED(CONFIG_NFS_V4)
struct nfs4_cached_acl *nfs4_acl;

View File

@ -1,7 +1,7 @@
From: Mike Galbraith <efault@gmx.de>
Date: Fri, 3 Jul 2009 08:44:12 -0500
Subject: fs: ntfs: disable interrupt only on !RT
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
On Sat, 2007-10-27 at 11:44 +0200, Ingo Molnar wrote:
> * Nick Piggin <nickpiggin@yahoo.com.au> wrote:

View File

@ -1,7 +1,7 @@
From: Thomas Gleixner <tglx@linutronix.de>
Date: Fri, 18 Mar 2011 09:18:52 +0100
Subject: buffer_head: Replace bh_uptodate_lock for -rt
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
Wrap the bit_spin_lock calls into a separate inline and add the RT
replacements with a real spinlock.
@ -15,7 +15,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -300,8 +300,7 @@ static void end_buffer_async_read(struct
@@ -301,8 +301,7 @@ static void end_buffer_async_read(struct
* decide that the page is now completely done.
*/
first = page_buffers(page);
@ -25,7 +25,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
clear_buffer_async_read(bh);
unlock_buffer(bh);
tmp = bh;
@@ -314,8 +313,7 @@ static void end_buffer_async_read(struct
@@ -315,8 +314,7 @@ static void end_buffer_async_read(struct
}
tmp = tmp->b_this_page;
} while (tmp != bh);
@ -35,7 +35,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
/*
* If none of the buffers had errors and they are all
@@ -327,9 +325,7 @@ static void end_buffer_async_read(struct
@@ -328,9 +326,7 @@ static void end_buffer_async_read(struct
return;
still_busy:
@ -46,7 +46,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
}
/*
@@ -357,8 +353,7 @@ void end_buffer_async_write(struct buffe
@@ -358,8 +354,7 @@ void end_buffer_async_write(struct buffe
}
first = page_buffers(page);
@ -56,7 +56,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
clear_buffer_async_write(bh);
unlock_buffer(bh);
@@ -370,15 +365,12 @@ void end_buffer_async_write(struct buffe
@@ -371,15 +366,12 @@ void end_buffer_async_write(struct buffe
}
tmp = tmp->b_this_page;
}
@ -74,7 +74,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
}
EXPORT_SYMBOL(end_buffer_async_write);
@@ -3314,6 +3306,7 @@ struct buffer_head *alloc_buffer_head(gf
@@ -3384,6 +3376,7 @@ struct buffer_head *alloc_buffer_head(gf
struct buffer_head *ret = kmem_cache_zalloc(bh_cachep, gfp_flags);
if (ret) {
INIT_LIST_HEAD(&ret->b_assoc_buffers);
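The new inlines live in include/linux/buffer_head.h, which the rendering does not show; a hedged sketch of their usual shape (the field name b_uptodate_lock is an assumption):

/* Hedged sketch: wrap the BH_Uptodate_Lock bit spinlock so -RT can
 * substitute a real, sleeping-safe spinlock in the buffer_head. */
static inline unsigned long bh_uptodate_lock_irqsave(struct buffer_head *bh)
{
	unsigned long flags;

#ifndef CONFIG_PREEMPT_RT_BASE
	local_irq_save(flags);
	bit_spin_lock(BH_Uptodate_Lock, &bh->b_state);
#else
	spin_lock_irqsave(&bh->b_uptodate_lock, flags);
#endif
	return flags;
}

static inline void bh_uptodate_unlock_irqrestore(struct buffer_head *bh,
						 unsigned long flags)
{
#ifndef CONFIG_PREEMPT_RT_BASE
	bit_spin_unlock(BH_Uptodate_Lock, &bh->b_state);
	local_irq_restore(flags);
#else
	spin_unlock_irqrestore(&bh->b_uptodate_lock, flags);
#endif
}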

View File

@ -1,7 +1,7 @@
From: Thomas Gleixner <tglx@linutronix.de>
Date: Sun, 17 Jul 2011 21:56:42 +0200
Subject: trace: Add migrate-disabled counter to tracing output
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
@ -24,7 +24,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
#define TRACE_EVENT_TYPE_MAX \
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -1669,6 +1669,8 @@ tracing_generic_entry_update(struct trac
@@ -1909,6 +1909,8 @@ tracing_generic_entry_update(struct trac
((pc & SOFTIRQ_MASK) ? TRACE_FLAG_SOFTIRQ : 0) |
(tif_need_resched() ? TRACE_FLAG_NEED_RESCHED : 0) |
(test_preempt_need_resched() ? TRACE_FLAG_PREEMPT_RESCHED : 0);
@ -33,7 +33,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
}
EXPORT_SYMBOL_GPL(tracing_generic_entry_update);
@@ -2566,9 +2568,10 @@ static void print_lat_help_header(struct
@@ -2897,9 +2899,10 @@ static void print_lat_help_header(struct
"# | / _----=> need-resched \n"
"# || / _---=> hardirq/softirq \n"
"# ||| / _--=> preempt-depth \n"
@ -49,7 +49,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
static void print_event_info(struct trace_buffer *buf, struct seq_file *m)
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -188,6 +188,8 @@ static int trace_define_common_fields(vo
@@ -187,6 +187,8 @@ static int trace_define_common_fields(vo
__common_field(unsigned char, flags);
__common_field(unsigned char, preempt_count);
__common_field(int, pid);
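The two added lines are elided above; presumably they register the new counter as a common field, along the lines of:

	/* Hedged sketch: expose the migrate-disable depth in every event. */
	__common_field(unsigned short, migrate_disable);
	__common_field(unsigned short, padding);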

View File

@ -0,0 +1,43 @@
From: Thomas Gleixner <tglx@linutronix.de>
Date: Fri, 1 Mar 2013 11:17:42 +0100
Subject: futex: Ensure lock/unlock symetry versus pi_lock and hash bucket lock
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
In exit_pi_state_list() we have the following locking construct:
spin_lock(&hb->lock);
raw_spin_lock_irq(&curr->pi_lock);
...
spin_unlock(&hb->lock);
In !RT this works, but on RT the migrate_enable() function which is
called from spin_unlock() sees atomic context due to the held pi_lock
and just decrements the migrate_disable_atomic counter of the
task. Now the next call to migrate_disable() sees the counter being
negative and issues a warning. That check should be in
migrate_enable() already.
Fix this by dropping pi_lock before unlocking hb->lock and reacquiring
pi_lock afterwards. This is safe as the loop code reevaluates
head again under the pi_lock.
Reported-by: Yong Zhang <yong.zhang@windriver.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
kernel/futex.c | 2 ++
1 file changed, 2 insertions(+)
--- a/kernel/futex.c
+++ b/kernel/futex.c
@@ -895,7 +895,9 @@ void exit_pi_state_list(struct task_stru
* task still owns the PI-state:
*/
if (head->next != next) {
+ raw_spin_unlock_irq(&curr->pi_lock);
spin_unlock(&hb->lock);
+ raw_spin_lock_irq(&curr->pi_lock);
continue;
}

View File

@ -1,7 +1,7 @@
From: Steven Rostedt <rostedt@goodmis.org>
Date: Tue, 14 Jul 2015 14:26:34 +0200
Subject: futex: Fix bug on when a requeued RT task times out
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
Requeue with timeout causes a bug with PREEMPT_RT_FULL.

View File

@ -1,7 +1,7 @@
From: Ingo Molnar <mingo@elte.hu>
Date: Fri, 3 Jul 2009 08:29:57 -0500
Subject: genirq: Disable irqpoll on -rt
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
Creates long latencies for no value

View File

@ -1,122 +1,72 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Wed, 21 Aug 2013 17:48:46 +0200
Subject: genirq: Do not invoke the affinity callback via a workqueue on RT
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
Joe Korty reported that __irq_set_affinity_locked() schedules a
workqueue while holding a raw lock, which results in a might_sleep()
warning.
This patch moves the invocation into process context so that we only
wake up a process while holding the lock.
This patch uses swork_queue() instead.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
include/linux/interrupt.h | 2 +
kernel/irq/manage.c | 79 ++++++++++++++++++++++++++++++++++++++++++++--
2 files changed, 78 insertions(+), 3 deletions(-)
drivers/scsi/qla2xxx/qla_isr.c | 4 +++
include/linux/interrupt.h | 5 ++++
kernel/irq/manage.c | 43 ++++++++++++++++++++++++++++++++++++++---
3 files changed, 49 insertions(+), 3 deletions(-)
--- a/drivers/scsi/qla2xxx/qla_isr.c
+++ b/drivers/scsi/qla2xxx/qla_isr.c
@@ -3125,7 +3125,11 @@ qla24xx_enable_msix(struct qla_hw_data *
* kref_put().
*/
kref_get(&qentry->irq_notify.kref);
+#ifdef CONFIG_PREEMPT_RT_BASE
+ swork_queue(&qentry->irq_notify.swork);
+#else
schedule_work(&qentry->irq_notify.work);
+#endif
}
/*
--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
@@ -217,6 +217,7 @@ extern void resume_device_irqs(void);
* @irq: Interrupt to which notification applies
* @kref: Reference count, for internal use
* @work: Work item, for internal use
+ * @list: List item for deferred callbacks
* @notify: Function to be called on change. This will be
* called in process context.
* @release: Function to be called on release. This will be
@@ -228,6 +229,7 @@ struct irq_affinity_notify {
@@ -14,6 +14,7 @@
#include <linux/hrtimer.h>
#include <linux/kref.h>
#include <linux/workqueue.h>
+#include <linux/swork.h>
#include <linux/atomic.h>
#include <asm/ptrace.h>
@@ -229,7 +230,11 @@ extern void resume_device_irqs(void);
struct irq_affinity_notify {
unsigned int irq;
struct kref kref;
+#ifdef CONFIG_PREEMPT_RT_BASE
+ struct swork_event swork;
+#else
struct work_struct work;
+ struct list_head list;
+#endif
void (*notify)(struct irq_affinity_notify *, const cpumask_t *mask);
void (*release)(struct kref *ref);
};
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -181,6 +181,62 @@ static inline void
irq_get_pending(struct cpumask *mask, struct irq_desc *desc) { }
#endif
+#ifdef CONFIG_PREEMPT_RT_FULL
+static void _irq_affinity_notify(struct irq_affinity_notify *notify);
+static struct task_struct *set_affinity_helper;
+static LIST_HEAD(affinity_list);
+static DEFINE_RAW_SPINLOCK(affinity_list_lock);
+
+static int set_affinity_thread(void *unused)
+{
+ while (1) {
+ struct irq_affinity_notify *notify;
+ int empty;
+
+ set_current_state(TASK_INTERRUPTIBLE);
+
+ raw_spin_lock_irq(&affinity_list_lock);
+ empty = list_empty(&affinity_list);
+ raw_spin_unlock_irq(&affinity_list_lock);
+
+ if (empty)
+ schedule();
+ if (kthread_should_stop())
+ break;
+ set_current_state(TASK_RUNNING);
+try_next:
+ notify = NULL;
+
+ raw_spin_lock_irq(&affinity_list_lock);
+ if (!list_empty(&affinity_list)) {
+ notify = list_first_entry(&affinity_list,
+ struct irq_affinity_notify, list);
+ list_del_init(&notify->list);
+ }
+ raw_spin_unlock_irq(&affinity_list_lock);
+
+ if (!notify)
+ continue;
+ _irq_affinity_notify(notify);
+ goto try_next;
+ }
+ return 0;
+}
+
+static void init_helper_thread(void)
+{
+ if (set_affinity_helper)
+ return;
+ set_affinity_helper = kthread_run(set_affinity_thread, NULL,
+ "affinity-cb");
+ WARN_ON(IS_ERR(set_affinity_helper));
+}
+#else
+
+static inline void init_helper_thread(void) { }
+
+#endif
+
int irq_do_set_affinity(struct irq_data *data, const struct cpumask *mask,
bool force)
{
@@ -220,7 +276,17 @@ int irq_set_affinity_locked(struct irq_d
@@ -235,7 +235,12 @@ int irq_set_affinity_locked(struct irq_d
if (desc->affinity_notify) {
kref_get(&desc->affinity_notify->kref);
+
+#ifdef CONFIG_PREEMPT_RT_FULL
+ raw_spin_lock(&affinity_list_lock);
+ if (list_empty(&desc->affinity_notify->list))
+ list_add_tail(&affinity_list,
+ &desc->affinity_notify->list);
+ raw_spin_unlock(&affinity_list_lock);
+ wake_up_process(set_affinity_helper);
+#ifdef CONFIG_PREEMPT_RT_BASE
+ swork_queue(&desc->affinity_notify->swork);
+#else
schedule_work(&desc->affinity_notify->work);
+#endif
}
irqd_set(data, IRQD_AFFINITY_SET);
@@ -258,10 +324,8 @@ int irq_set_affinity_hint(unsigned int i
@@ -273,10 +278,8 @@ int irq_set_affinity_hint(unsigned int i
}
EXPORT_SYMBOL_GPL(irq_set_affinity_hint);
@ -128,26 +78,52 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
struct irq_desc *desc = irq_to_desc(notify->irq);
cpumask_var_t cpumask;
unsigned long flags;
@@ -283,6 +347,13 @@ static void irq_affinity_notify(struct w
@@ -298,6 +301,35 @@ static void irq_affinity_notify(struct w
kref_put(&notify->kref, notify->release);
}
+#ifdef CONFIG_PREEMPT_RT_BASE
+static void init_helper_thread(void)
+{
+ static int init_sworker_once;
+
+ if (init_sworker_once)
+ return;
+ if (WARN_ON(swork_get()))
+ return;
+ init_sworker_once = 1;
+}
+
+static void irq_affinity_notify(struct swork_event *swork)
+{
+ struct irq_affinity_notify *notify =
+ container_of(swork, struct irq_affinity_notify, swork);
+ _irq_affinity_notify(notify);
+}
+
+#else
+
+static void irq_affinity_notify(struct work_struct *work)
+{
+ struct irq_affinity_notify *notify =
+ container_of(work, struct irq_affinity_notify, work);
+ _irq_affinity_notify(notify);
+}
+#endif
+
/**
* irq_set_affinity_notifier - control notification of IRQ affinity changes
* @irq: Interrupt for which to enable/disable notification
@@ -312,6 +383,8 @@ irq_set_affinity_notifier(unsigned int i
@@ -326,7 +358,12 @@ irq_set_affinity_notifier(unsigned int i
if (notify) {
notify->irq = irq;
kref_init(&notify->kref);
INIT_WORK(&notify->work, irq_affinity_notify);
+ INIT_LIST_HEAD(&notify->list);
+#ifdef CONFIG_PREEMPT_RT_BASE
+ INIT_SWORK(&notify->swork, irq_affinity_notify);
+ init_helper_thread();
+#else
INIT_WORK(&notify->work, irq_affinity_notify);
+#endif
}
raw_spin_lock_irqsave(&desc->lock, flags);
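For readers unfamiliar with swork (added by the simple-work patch elsewhere in this series), a hedged recap of the API as used above (my_event, my_callback and my_setup are placeholders):

/* Hedged usage sketch: swork defers a callback to a preemptible kernel
 * thread, and swork_queue() itself is safe from atomic context. */
static struct swork_event my_event;

static void my_callback(struct swork_event *sev)
{
	/* runs in process context, may sleep */
}

static int __init my_setup(void)
{
	int err = swork_get();	/* bring the worker thread up once */

	if (err)
		return err;
	INIT_SWORK(&my_event, my_callback);
	return 0;
}

/* later, e.g. under a raw spinlock: swork_queue(&my_event); */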

View File

@ -1,7 +1,7 @@
Subject: genirq: Force interrupt thread on RT
From: Thomas Gleixner <tglx@linutronix.de>
Date: Sun, 03 Apr 2011 11:57:29 +0200
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
Force threaded irqs and optimize the code (force_irqthreads) accordingly.
@ -14,7 +14,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
@@ -390,9 +390,13 @@ extern int irq_set_irqchip_state(unsigne
@@ -398,9 +398,13 @@ extern int irq_set_irqchip_state(unsigne
bool state);
#ifdef CONFIG_IRQ_FORCED_THREADING

View File

@ -1,7 +1,7 @@
From: Josh Cartwright <joshc@ni.com>
Date: Thu, 11 Feb 2016 11:54:00 -0600
Subject: genirq: update irq_set_irqchip_state documentation
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
On -rt kernels, the use of migrate_disable()/migrate_enable() is
sufficient to guarantee a task isn't moved to another CPU. Update the
@ -15,7 +15,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -2084,7 +2084,7 @@ EXPORT_SYMBOL_GPL(irq_get_irqchip_state)
@@ -2111,7 +2111,7 @@ EXPORT_SYMBOL_GPL(irq_get_irqchip_state)
* This call sets the internal irqchip state of an interrupt,
* depending on the value of @which.
*

View File

@ -0,0 +1,33 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Tue, 14 Jul 2015 14:26:34 +0200
Subject: gpu: don't check for the lock owner.
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
drivers/gpu/drm/i915/i915_gem_shrinker.c | 2 +-
drivers/gpu/drm/msm/msm_gem_shrinker.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
--- a/drivers/gpu/drm/i915/i915_gem_shrinker.c
+++ b/drivers/gpu/drm/i915/i915_gem_shrinker.c
@@ -40,7 +40,7 @@ static bool mutex_is_locked_by(struct mu
if (!mutex_is_locked(mutex))
return false;
-#if defined(CONFIG_DEBUG_MUTEXES) || defined(CONFIG_MUTEX_SPIN_ON_OWNER)
+#if (defined(CONFIG_DEBUG_MUTEXES) || defined(CONFIG_MUTEX_SPIN_ON_OWNER)) && !defined(CONFIG_PREEMPT_RT_BASE)
return mutex->owner == task;
#else
/* Since UP may be pre-empted, we cannot assume that we own the lock */
--- a/drivers/gpu/drm/msm/msm_gem_shrinker.c
+++ b/drivers/gpu/drm/msm/msm_gem_shrinker.c
@@ -23,7 +23,7 @@ static bool mutex_is_locked_by(struct mu
if (!mutex_is_locked(mutex))
return false;
-#if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_MUTEXES)
+#if (defined(CONFIG_SMP) || defined(CONFIG_DEBUG_MUTEXES)) && !defined(CONFIG_PREEMPT_RT_BASE)
return mutex->owner == task;
#else
/* Since UP may be pre-empted, we cannot assume that we own the lock */

View File

@ -1,7 +1,7 @@
From: Mike Galbraith <umgwanakikbuti@gmail.com>
Date: Tue, 24 Mar 2015 08:14:49 +0100
Subject: hotplug: Use set_cpus_allowed_ptr() in sync_unplug_thread()
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
do_set_cpus_allowed() is not safe vs ->sched_class change.

View File

@ -1,7 +1,7 @@
Subject: hotplug: Lightweight get online cpus
From: Thomas Gleixner <tglx@linutronix.de>
Date: Wed, 15 Jun 2011 12:36:06 +0200
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
get_online_cpus() is a heavyweight function which involves a global
mutex. migrate_disable() wants a simpler construct which prevents only
@ -19,7 +19,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
--- a/include/linux/cpu.h
+++ b/include/linux/cpu.h
@@ -221,9 +221,6 @@ static inline void cpu_notifier_register
@@ -192,9 +192,6 @@ static inline void cpu_notifier_register
#endif /* CONFIG_SMP */
extern struct bus_type cpu_subsys;
@ -29,7 +29,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
#ifdef CONFIG_HOTPLUG_CPU
/* Stop CPUs going up and down. */
@@ -233,6 +230,8 @@ extern void get_online_cpus(void);
@@ -204,6 +201,8 @@ extern void get_online_cpus(void);
extern void put_online_cpus(void);
extern void cpu_hotplug_disable(void);
extern void cpu_hotplug_enable(void);
@ -38,7 +38,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
#define hotcpu_notifier(fn, pri) cpu_notifier(fn, pri)
#define __hotcpu_notifier(fn, pri) __cpu_notifier(fn, pri)
#define register_hotcpu_notifier(nb) register_cpu_notifier(nb)
@@ -250,6 +249,8 @@ static inline void cpu_hotplug_done(void
@@ -221,6 +220,8 @@ static inline void cpu_hotplug_done(void
#define put_online_cpus() do { } while (0)
#define cpu_hotplug_disable() do { } while (0)
#define cpu_hotplug_enable() do { } while (0)
@ -150,7 +150,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
void get_online_cpus(void)
{
@@ -807,6 +901,8 @@ static int __ref _cpu_down(unsigned int
@@ -799,6 +893,8 @@ static int __ref _cpu_down(unsigned int
struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
int prev_state, ret = 0;
bool hasdied = false;
@ -159,7 +159,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
if (num_online_cpus() == 1)
return -EBUSY;
@@ -814,7 +910,27 @@ static int __ref _cpu_down(unsigned int
@@ -806,7 +902,27 @@ static int __ref _cpu_down(unsigned int
if (!cpu_present(cpu))
return -EINVAL;
@ -187,7 +187,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
cpuhp_tasks_frozen = tasks_frozen;
@@ -853,6 +969,8 @@ static int __ref _cpu_down(unsigned int
@@ -845,6 +961,8 @@ static int __ref _cpu_down(unsigned int
hasdied = prev_state != st->state && st->state == CPUHP_OFFLINE;
out:
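A rough sketch of how the new pin/unpin primitives are meant to pair up
inside migrate_disable()/migrate_enable() (an illustration of the idea,
not the exact implementation in this patch):

    void migrate_disable(void)
    {
        preempt_disable();
        if (current->migrate_disable++ == 0)
            pin_current_cpu();      /* blocks only this CPU's unplug,
                                     * no global hotplug mutex taken */
        preempt_enable();
    }

    void migrate_enable(void)
    {
        preempt_disable();
        if (--current->migrate_disable == 0)
            unpin_current_cpu();    /* let a pending unplug proceed */
        preempt_enable();
    }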

View File

@ -1,7 +1,7 @@
Subject: hotplug: sync_unplug: No "\n" in task name
From: Yong Zhang <yong.zhang0@gmail.com>
Date: Sun, 16 Oct 2011 18:56:43 +0800
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
Otherwise the output will look a little odd.

View File

@ -1,7 +1,7 @@
Subject: hotplug: Use migrate disable on unplug
From: Thomas Gleixner <tglx@linutronix.de>
Date: Sun, 17 Jul 2011 19:35:29 +0200
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
Migration needs to be disabled across the unplug handling to make
sure that the unplug thread is off the unplugged cpu.
@ -13,7 +13,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -918,14 +918,13 @@ static int __ref _cpu_down(unsigned int
@@ -910,14 +910,13 @@ static int __ref _cpu_down(unsigned int
cpumask_andnot(cpumask, cpu_online_mask, cpumask_of(cpu));
set_cpus_allowed_ptr(current, cpumask);
free_cpumask_var(cpumask);
@ -30,7 +30,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
cpu_hotplug_begin();
ret = cpu_unplug_begin(cpu);
@@ -974,6 +973,7 @@ static int __ref _cpu_down(unsigned int
@@ -966,6 +965,7 @@ static int __ref _cpu_down(unsigned int
cpu_unplug_done(cpu);
out_cancel:
cpu_hotplug_done();

View File

@ -1,7 +1,7 @@
From: Yang Shi <yang.shi@windriver.com>
Date: Mon, 16 Sep 2013 14:09:19 -0700
Subject: hrtimer: Move schedule_work call to helper thread
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
When running the ltp leapsec_timer test, the following call trace is caught:
@ -43,72 +43,46 @@ Reference upstream commit b68d61c705ef02384c0538b8d9374545097899ca
from git://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-stable-rt.git, which
makes a similar change.
add a helper thread which does the call to schedule_work and wake up that
thread instead of calling schedule_work directly.
Signed-off-by: Yang Shi <yang.shi@windriver.com>
[bigeasy: use swork_queue() instead of a helper thread]
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
kernel/time/hrtimer.c | 40 ++++++++++++++++++++++++++++++++++++++++
1 file changed, 40 insertions(+)
kernel/time/hrtimer.c | 24 ++++++++++++++++++++++++
1 file changed, 24 insertions(+)
--- a/kernel/time/hrtimer.c
+++ b/kernel/time/hrtimer.c
@@ -48,6 +48,7 @@
#include <linux/sched/rt.h>
#include <linux/sched/deadline.h>
#include <linux/timer.h>
+#include <linux/kthread.h>
#include <linux/freezer.h>
#include <asm/uaccess.h>
@@ -707,6 +708,44 @@ static void clock_was_set_work(struct wo
static DECLARE_WORK(hrtimer_work, clock_was_set_work);
@@ -696,6 +696,29 @@ static void hrtimer_switch_to_hres(void)
retrigger_next_event(NULL);
}
+#ifdef CONFIG_PREEMPT_RT_FULL
+/*
+ * RT can not call schedule_work from real interrupt context.
+ * Need to make a thread to do the real work.
+ */
+static struct task_struct *clock_set_delay_thread;
+static bool do_clock_set_delay;
+
+static int run_clock_set_delay(void *ignore)
+static struct swork_event clock_set_delay_work;
+
+static void run_clock_set_delay(struct swork_event *event)
+{
+ while (!kthread_should_stop()) {
+ set_current_state(TASK_INTERRUPTIBLE);
+ if (do_clock_set_delay) {
+ do_clock_set_delay = false;
+ schedule_work(&hrtimer_work);
+ }
+ schedule();
+ }
+ __set_current_state(TASK_RUNNING);
+ return 0;
+ clock_was_set();
+}
+
+void clock_was_set_delayed(void)
+{
+ do_clock_set_delay = true;
+ /* Make visible before waking up process */
+ smp_wmb();
+ wake_up_process(clock_set_delay_thread);
+ swork_queue(&clock_set_delay_work);
+}
+
+static __init int create_clock_set_delay_thread(void)
+{
+ clock_set_delay_thread = kthread_run(run_clock_set_delay, NULL, "kclksetdelayd");
+ BUG_ON(!clock_set_delay_thread);
+ WARN_ON(swork_get());
+ INIT_SWORK(&clock_set_delay_work, run_clock_set_delay);
+ return 0;
+}
+early_initcall(create_clock_set_delay_thread);
+#else /* PREEMPT_RT_FULL */
/*
* Called from timekeeping and resume code to reprogramm the hrtimer
* interrupt device on all cpus.
@@ -715,6 +754,7 @@ void clock_was_set_delayed(void)
+
static void clock_was_set_work(struct work_struct *work)
{
clock_was_set();
@@ -711,6 +734,7 @@ void clock_was_set_delayed(void)
{
schedule_work(&hrtimer_work);
}
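The swork ("simple work") API used above comes from the RT series itself;
a minimal usage sketch, with the hypothetical names my_event/my_handler:

    static struct swork_event my_event;

    static void my_handler(struct swork_event *ev)
    {
        pr_info("deferred to the swork helper thread\n");
    }

    static int __init my_setup(void)
    {
        WARN_ON(swork_get());       /* make sure the helper thread runs */
        INIT_SWORK(&my_event, my_handler);
        return 0;
    }
    early_initcall(my_setup);

    /* Unlike schedule_work(), usable from hard-irq context on RT: */
    static void my_trigger(void)
    {
        swork_queue(&my_event);
    }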

View File

@ -1,7 +1,7 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Wed, 23 Dec 2015 20:57:41 +0100
Subject: hrtimer: enforce 64 byte alignment
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
The patch "hrtimer: Fixup hrtimer callback changes for preempt-rt" adds
a list_head expired to struct hrtimer_clock_base and with it we run into

View File

@ -1,7 +1,7 @@
From: Thomas Gleixner <tglx@linutronix.de>
Date: Fri, 3 Jul 2009 08:44:31 -0500
Subject: hrtimer: Fixup hrtimer callback changes for preempt-rt
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
In preempt-rt we cannot call the callbacks which take sleeping locks
from the timer interrupt context.
@ -16,10 +16,10 @@ Signed-off-by: Ingo Molnar <mingo@elte.hu>
include/linux/hrtimer.h | 7 ++
kernel/sched/core.c | 1
kernel/sched/rt.c | 1
kernel/time/hrtimer.c | 137 +++++++++++++++++++++++++++++++++++++++++++----
kernel/time/hrtimer.c | 144 ++++++++++++++++++++++++++++++++++++++++++++---
kernel/time/tick-sched.c | 1
kernel/watchdog.c | 1
6 files changed, 139 insertions(+), 9 deletions(-)
6 files changed, 146 insertions(+), 9 deletions(-)
--- a/include/linux/hrtimer.h
+++ b/include/linux/hrtimer.h
@ -67,7 +67,7 @@ Signed-off-by: Ingo Molnar <mingo@elte.hu>
unsigned int clock_was_set_seq;
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -306,6 +306,7 @@ static void init_rq_hrtick(struct rq *rq
@@ -345,6 +345,7 @@ static void init_rq_hrtick(struct rq *rq
hrtimer_init(&rq->hrtick_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
rq->hrtick_timer.function = hrtick;
@ -87,7 +87,7 @@ Signed-off-by: Ingo Molnar <mingo@elte.hu>
--- a/kernel/time/hrtimer.c
+++ b/kernel/time/hrtimer.c
@@ -724,11 +724,8 @@ static inline int hrtimer_is_hres_enable
@@ -720,11 +720,8 @@ static inline int hrtimer_is_hres_enable
static inline void hrtimer_switch_to_hres(void) { }
static inline void
hrtimer_force_reprogram(struct hrtimer_cpu_base *base, int skip_equal) { }
@ -101,7 +101,7 @@ Signed-off-by: Ingo Molnar <mingo@elte.hu>
static inline void hrtimer_init_hres(struct hrtimer_cpu_base *base) { }
static inline void retrigger_next_event(void *arg) { }
@@ -877,7 +874,7 @@ void hrtimer_wait_for_timer(const struct
@@ -873,7 +870,7 @@ void hrtimer_wait_for_timer(const struct
{
struct hrtimer_clock_base *base = timer->base;
@ -110,7 +110,7 @@ Signed-off-by: Ingo Molnar <mingo@elte.hu>
wait_event(base->cpu_base->wait,
!(hrtimer_callback_running(timer)));
}
@@ -927,6 +924,11 @@ static void __remove_hrtimer(struct hrti
@@ -923,6 +920,11 @@ static void __remove_hrtimer(struct hrti
if (!(state & HRTIMER_STATE_ENQUEUED))
return;
@ -122,7 +122,7 @@ Signed-off-by: Ingo Molnar <mingo@elte.hu>
if (!timerqueue_del(&base->active, &timer->node))
cpu_base->active_bases &= ~(1 << base->index);
@@ -1167,6 +1169,7 @@ static void __hrtimer_init(struct hrtime
@@ -1163,6 +1165,7 @@ static void __hrtimer_init(struct hrtime
base = hrtimer_clockid_to_base(clock_id);
timer->base = &cpu_base->clock_base[base];
@ -130,7 +130,7 @@ Signed-off-by: Ingo Molnar <mingo@elte.hu>
timerqueue_init(&timer->node);
#ifdef CONFIG_TIMER_STATS
@@ -1207,6 +1210,7 @@ bool hrtimer_active(const struct hrtimer
@@ -1203,6 +1206,7 @@ bool hrtimer_active(const struct hrtimer
seq = raw_read_seqcount_begin(&cpu_base->seq);
if (timer->state != HRTIMER_STATE_INACTIVE ||
@ -138,7 +138,7 @@ Signed-off-by: Ingo Molnar <mingo@elte.hu>
cpu_base->running == timer)
return true;
@@ -1305,12 +1309,112 @@ static void __run_hrtimer(struct hrtimer
@@ -1301,12 +1305,112 @@ static void __run_hrtimer(struct hrtimer
cpu_base->running = NULL;
}
@ -251,7 +251,7 @@ Signed-off-by: Ingo Molnar <mingo@elte.hu>
for (; active; base++, active >>= 1) {
struct timerqueue_node *node;
@@ -1350,9 +1454,14 @@ static void __hrtimer_run_queues(struct
@@ -1346,9 +1450,14 @@ static void __hrtimer_run_queues(struct
if (basenow.tv64 < hrtimer_get_softexpires_tv64(timer))
break;
@ -267,7 +267,7 @@ Signed-off-by: Ingo Molnar <mingo@elte.hu>
}
#ifdef CONFIG_HIGH_RES_TIMERS
@@ -1494,8 +1603,6 @@ void hrtimer_run_queues(void)
@@ -1490,8 +1599,6 @@ void hrtimer_run_queues(void)
now = hrtimer_update_base(cpu_base);
__hrtimer_run_queues(cpu_base, now);
raw_spin_unlock(&cpu_base->lock);
@ -276,7 +276,7 @@ Signed-off-by: Ingo Molnar <mingo@elte.hu>
}
/*
@@ -1517,6 +1624,7 @@ static enum hrtimer_restart hrtimer_wake
@@ -1513,6 +1620,7 @@ static enum hrtimer_restart hrtimer_wake
void hrtimer_init_sleeper(struct hrtimer_sleeper *sl, struct task_struct *task)
{
sl->timer.function = hrtimer_wakeup;
@ -284,7 +284,7 @@ Signed-off-by: Ingo Molnar <mingo@elte.hu>
sl->task = task;
}
EXPORT_SYMBOL_GPL(hrtimer_init_sleeper);
@@ -1651,6 +1759,7 @@ static void init_hrtimers_cpu(int cpu)
@@ -1647,6 +1755,7 @@ int hrtimers_prepare_cpu(unsigned int cp
for (i = 0; i < HRTIMER_MAX_CLOCK_BASES; i++) {
cpu_base->clock_base[i].cpu_base = cpu_base;
timerqueue_init_head(&cpu_base->clock_base[i].active);
@ -292,38 +292,43 @@ Signed-off-by: Ingo Molnar <mingo@elte.hu>
}
cpu_base->cpu = cpu;
@@ -1755,11 +1864,21 @@ static struct notifier_block hrtimers_nb
.notifier_call = hrtimer_cpu_notify,
};
@@ -1723,9 +1832,26 @@ int hrtimers_dead_cpu(unsigned int scpu)
#endif /* CONFIG_HOTPLUG_CPU */
+#ifdef CONFIG_PREEMPT_RT_BASE
+
+static void run_hrtimer_softirq(struct softirq_action *h)
+{
+ hrtimer_rt_run_pending();
+}
+
+static void hrtimers_open_softirq(void)
+{
+ open_softirq(HRTIMER_SOFTIRQ, run_hrtimer_softirq);
+}
+
+#else
+static void hrtimers_open_softirq(void) { }
+#endif
+
void __init hrtimers_init(void)
{
hrtimer_cpu_notify(&hrtimers_nb, (unsigned long)CPU_UP_PREPARE,
(void *)(long)smp_processor_id());
register_cpu_notifier(&hrtimers_nb);
+#ifdef CONFIG_PREEMPT_RT_BASE
+ open_softirq(HRTIMER_SOFTIRQ, run_hrtimer_softirq);
+#endif
hrtimers_prepare_cpu(smp_processor_id());
+ hrtimers_open_softirq();
}
/**
--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
@@ -1213,6 +1213,7 @@ void tick_setup_sched_timer(void)
@@ -1195,6 +1195,7 @@ void tick_setup_sched_timer(void)
* Emulate tick processing via per-CPU hrtimers:
*/
hrtimer_init(&ts->sched_timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS);
+ ts->sched_timer.irqsafe = 1;
ts->sched_timer.function = tick_sched_timer;
/* Get the next period (per cpu) */
/* Get the next period (per-CPU) */
--- a/kernel/watchdog.c
+++ b/kernel/watchdog.c
@@ -523,6 +523,7 @@ static void watchdog_enable(unsigned int
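A hedged usage sketch of the irqsafe flag this series introduces: timers
whose callbacks are safe in hard-irq context set it and keep running from
the timer interrupt, everything else is deferred to HRTIMER_SOFTIRQ (the
timer and callback names here are hypothetical):

    static struct hrtimer my_timer;

    static enum hrtimer_restart my_cb(struct hrtimer *t)
    {
        /* must not take sleeping locks: runs from hard-irq context */
        return HRTIMER_NORESTART;
    }

    static void arm_my_timer(void)
    {
        hrtimer_init(&my_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
        my_timer.irqsafe = 1;           /* field added by this series */
        my_timer.function = my_cb;
        hrtimer_start(&my_timer, ms_to_ktime(10), HRTIMER_MODE_REL);
    }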

View File

@ -1,7 +1,7 @@
From: Ingo Molnar <mingo@elte.hu>
Date: Fri, 3 Jul 2009 08:29:34 -0500
Subject: hrtimers: Prepare full preemption
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
Make cancellation of a running callback in softirq context safe
against preemption.
@ -53,7 +53,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
}
--- a/kernel/time/hrtimer.c
+++ b/kernel/time/hrtimer.c
@@ -860,6 +860,32 @@ u64 hrtimer_forward(struct hrtimer *time
@@ -856,6 +856,32 @@ u64 hrtimer_forward(struct hrtimer *time
}
EXPORT_SYMBOL_GPL(hrtimer_forward);
@ -86,7 +86,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
/*
* enqueue_hrtimer - internal function to (re)start a timer
*
@@ -1077,7 +1103,7 @@ int hrtimer_cancel(struct hrtimer *timer
@@ -1073,7 +1099,7 @@ int hrtimer_cancel(struct hrtimer *timer
if (ret >= 0)
return ret;
@ -95,7 +95,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
}
}
EXPORT_SYMBOL_GPL(hrtimer_cancel);
@@ -1468,6 +1494,8 @@ void hrtimer_run_queues(void)
@@ -1464,6 +1490,8 @@ void hrtimer_run_queues(void)
now = hrtimer_update_base(cpu_base);
__hrtimer_run_queues(cpu_base, now);
raw_spin_unlock(&cpu_base->lock);
@ -104,16 +104,16 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
}
/*
@@ -1627,6 +1655,9 @@ static void init_hrtimers_cpu(int cpu)
@@ -1623,6 +1651,9 @@ int hrtimers_prepare_cpu(unsigned int cp
cpu_base->cpu = cpu;
hrtimer_init_hres(cpu_base);
+#ifdef CONFIG_PREEMPT_RT_BASE
+ init_waitqueue_head(&cpu_base->wait);
+#endif
return 0;
}
#ifdef CONFIG_HOTPLUG_CPU
--- a/kernel/time/itimer.c
+++ b/kernel/time/itimer.c
@@ -213,6 +213,7 @@ int do_setitimer(int which, struct itime
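The effect on hrtimer_cancel() can be sketched as follows, assuming the
hrtimer_wait_for_timer() helper shown earlier in this series (the name
rt_hrtimer_cancel is hypothetical):

    int rt_hrtimer_cancel(struct hrtimer *timer)
    {
        for (;;) {
            int ret = hrtimer_try_to_cancel(timer);

            if (ret >= 0)
                return ret;
            /* sleep on cpu_base->wait instead of cpu_relax() */
            hrtimer_wait_for_timer(timer);
        }
    }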

View File

@ -1,7 +1,7 @@
From: Mike Galbraith <bitbucket@online.de>
Date: Fri, 30 Aug 2013 07:57:25 +0200
Subject: hwlat-detector: Don't ignore threshold module parameter
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
If the user specified a threshold at module load time, use it.

View File

@ -1,7 +1,7 @@
From: Steven Rostedt <rostedt@goodmis.org>
Date: Mon, 19 Aug 2013 17:33:25 -0400
Subject: hwlat-detector: Update hwlat_detector to add outer loop detection
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
The hwlat_detector reads two timestamps in a row, then reports any
gap between those calls. The problem is, it misses everything between
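The sampling scheme with the added outer-loop detection can be sketched
like this (sample_once() is a hypothetical reduction of the detector's
loop body):

    static void sample_once(u64 *inner, u64 *outer)
    {
        static u64 last_t2;
        u64 t1, t2;

        t1 = trace_clock_local();
        if (last_t2)
            *outer = t1 - last_t2; /* gap since the previous iteration */
        t2 = trace_clock_local();
        *inner = t2 - t1;          /* the classic back-to-back gap */
        last_t2 = t2;
    }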

View File

@ -1,7 +1,7 @@
From: Steven Rostedt <rostedt@goodmis.org>
Date: Mon, 19 Aug 2013 17:33:27 -0400
Subject: hwlat-detector: Use thread instead of stop machine
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
There's no reason to use stop machine to search for hardware latency.
Simply disabling interrupts while running the loop will do enough to

View File

@ -1,7 +1,7 @@
From: Steven Rostedt <rostedt@goodmis.org>
Date: Mon, 19 Aug 2013 17:33:26 -0400
Subject: hwlat-detector: Use trace_clock_local if available
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
As ktime_get() calls into the timing code, which does a read_seq(), it
may be affected by other CPUs that touch that lock. To remove this

View File

@ -1,7 +1,7 @@
Subject: hwlatdetect.patch
From: Carsten Emde <C.Emde@osadl.org>
Date: Tue, 19 Jul 2011 13:53:12 +0100
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
Jon Masters developed this wonderful SMI detector. For details please
consult Documentation/hwlat_detector.txt. It could be ported to Linux
@ -123,7 +123,7 @@ Signed-off-by: Carsten Emde <C.Emde@osadl.org>
depends on PCI
--- a/drivers/misc/Makefile
+++ b/drivers/misc/Makefile
@@ -39,6 +39,7 @@ obj-$(CONFIG_C2PORT) += c2port/
@@ -38,6 +38,7 @@ obj-$(CONFIG_C2PORT) += c2port/
obj-$(CONFIG_HMC6352) += hmc6352.o
obj-y += eeprom/
obj-y += cb710/

View File

@ -1,7 +1,7 @@
From: Clark Williams <williams@redhat.com>
Date: Tue, 26 May 2015 10:43:43 -0500
Subject: i915: bogus warning from i915 when running on PREEMPT_RT
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
The i915 driver has a 'WARN_ON(!in_interrupt())' in the display
handler, which whines constantly on the RT kernel (since the interrupt
@ -19,9 +19,9 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -11475,7 +11475,7 @@ void intel_check_page_flip(struct drm_de
@@ -11613,7 +11613,7 @@ void intel_check_page_flip(struct drm_i9
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
struct intel_unpin_work *work;
struct intel_flip_work *work;
- WARN_ON(!in_interrupt());
+ WARN_ON_NONRT(!in_interrupt());
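WARN_ON_NONRT() is provided by the RT series' bug.h additions; a sketch
of the likely definition, assuming the series' PREEMPT_RT_BASE symbol:

    #ifdef CONFIG_PREEMPT_RT_BASE
    # define WARN_ON_NONRT(condition)   do { } while (0)
    #else
    # define WARN_ON_NONRT(condition)   WARN_ON(condition)
    #endif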

View File

@ -1,7 +1,7 @@
From: Ingo Molnar <mingo@elte.hu>
Date: Fri, 3 Jul 2009 08:30:16 -0500
Subject: ide: Do not disable interrupts for PREEMPT-RT
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
Use the local_irq_*_nort variants.
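The _nort() variants compile to the plain local_irq_*() operations on !RT
kernels and to no-ops on RT; roughly (a sketch, not the exact hunk from
the series):

    #ifdef CONFIG_PREEMPT_RT_FULL
    # define local_irq_disable_nort()      do { } while (0)
    # define local_irq_enable_nort()       do { } while (0)
    # define local_irq_save_nort(flags)    local_save_flags(flags)
    # define local_irq_restore_nort(flags) (void)(flags)
    #else
    # define local_irq_disable_nort()      local_irq_disable()
    # define local_irq_enable_nort()       local_irq_enable()
    # define local_irq_save_nort(flags)    local_irq_save(flags)
    # define local_irq_restore_nort(flags) local_irq_restore(flags)
    #endif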

View File

@ -1,7 +1,7 @@
From: Thomas Gleixner <tglx@linutronix.de>
Date: Tue, 14 Jul 2015 14:26:34 +0200
Subject: idr: Use local lock instead of preempt enable/disable
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
We need to protect the per cpu variable and prevent migration.
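The local-lock primitive referred to here comes from the RT series'
<linux/locallock.h>; a minimal usage sketch with hypothetical names:

    static DEFINE_LOCAL_IRQ_LOCK(my_lock);
    static DEFINE_PER_CPU(int, my_var);

    static void my_update(void)
    {
        local_lock(my_lock);    /* RT: per-CPU rtmutex pinning the task;
                                 * !RT: preempt_disable() */
        this_cpu_inc(my_var);
        local_unlock(my_lock);
    }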

View File

@ -1,7 +1,7 @@
From: Sven-Thorsten Dietrich <sdietrich@novell.com>
Date: Fri, 3 Jul 2009 08:30:35 -0500
Subject: infiniband: Mellanox IB driver patch use _nort() primitives
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
Fixes an in_atomic stack-dump when the Mellanox module is loaded into
the RT kernel.
@ -21,7 +21,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
--- a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
@@ -883,7 +883,7 @@ void ipoib_mcast_restart_task(struct wor
@@ -897,7 +897,7 @@ void ipoib_mcast_restart_task(struct wor
ipoib_dbg_mcast(priv, "restarting multicast task\n");
@ -30,7 +30,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
netif_addr_lock(dev);
spin_lock(&priv->lock);
@@ -965,7 +965,7 @@ void ipoib_mcast_restart_task(struct wor
@@ -979,7 +979,7 @@ void ipoib_mcast_restart_task(struct wor
spin_unlock(&priv->lock);
netif_addr_unlock(dev);

View File

@ -1,7 +1,7 @@
From: Ingo Molnar <mingo@elte.hu>
Date: Fri, 3 Jul 2009 08:30:16 -0500
Subject: input: gameport: Do not disable interrupts on PREEMPT_RT
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
Use the _nort() primitives.

View File

@ -1,7 +1,7 @@
Subject: Introduce migrate_disable() + cpu_light()
From: Thomas Gleixner <tglx@linutronix.de>
Date: Fri, 17 Jun 2011 15:42:38 +0200
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
Introduce migrate_disable(). The task can't be pushed to another CPU but can
be preempted.
@ -42,7 +42,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
--- a/include/linux/cpu.h
+++ b/include/linux/cpu.h
@@ -221,6 +221,9 @@ static inline void cpu_notifier_register
@@ -192,6 +192,9 @@ static inline void cpu_notifier_register
#endif /* CONFIG_SMP */
extern struct bus_type cpu_subsys;
@ -77,7 +77,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
#ifdef CONFIG_PREEMPT_NOTIFIERS
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1429,6 +1429,12 @@ struct task_struct {
@@ -1495,6 +1495,12 @@ struct task_struct {
#endif
unsigned int policy;
@ -90,7 +90,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
int nr_cpus_allowed;
cpumask_t cpus_allowed;
@@ -1875,14 +1881,6 @@ extern int arch_task_struct_size __read_
@@ -1946,14 +1952,6 @@ extern int arch_task_struct_size __read_
# define arch_task_struct_size (sizeof(struct task_struct))
#endif
@ -105,7 +105,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
#define TNF_MIGRATED 0x01
#define TNF_NO_GROUP 0x02
#define TNF_SHARED 0x04
@@ -3164,6 +3162,31 @@ static inline void set_task_cpu(struct t
@@ -3394,6 +3392,31 @@ static inline void set_task_cpu(struct t
#endif /* CONFIG_SMP */
@ -151,7 +151,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* boot command line:
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1051,6 +1051,11 @@ void do_set_cpus_allowed(struct task_str
@@ -1089,6 +1089,11 @@ void do_set_cpus_allowed(struct task_str
lockdep_assert_held(&p->pi_lock);
@ -163,16 +163,16 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
queued = task_on_rq_queued(p);
running = task_current(rq, p);
@@ -1112,7 +1117,7 @@ static int __set_cpus_allowed_ptr(struct
do_set_cpus_allowed(p, new_mask);
@@ -1168,7 +1173,7 @@ static int __set_cpus_allowed_ptr(struct
}
/* Can the task run on the task's current CPU? If so, we're done */
- if (cpumask_test_cpu(task_cpu(p), new_mask))
+ if (cpumask_test_cpu(task_cpu(p), new_mask) || __migrate_disabled(p))
goto out;
dest_cpu = cpumask_any_and(cpu_active_mask, new_mask);
@@ -3061,6 +3066,69 @@ static inline void schedule_debug(struct
dest_cpu = cpumask_any_and(cpu_valid_mask, new_mask);
@@ -3237,6 +3242,69 @@ static inline void schedule_debug(struct
schedstat_inc(this_rq(), sched_count);
}
@ -244,7 +244,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
*/
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -559,6 +559,9 @@ void print_rt_rq(struct seq_file *m, int
@@ -552,6 +552,9 @@ void print_rt_rq(struct seq_file *m, int
P(rt_throttled);
PN(rt_time);
PN(rt_runtime);
@ -254,7 +254,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
#undef PN
#undef P
@@ -954,6 +957,10 @@ void proc_sched_show_task(struct task_st
@@ -947,6 +950,10 @@ void proc_sched_show_task(struct task_st
#endif
P(policy);
P(prio);
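A minimal usage sketch (my_data and its lock are hypothetical): the task
may still be preempted inside the region, but it cannot change CPUs, so
per-CPU data plus a sleeping lock work on RT:

    struct my_data {
        spinlock_t lock;        /* a sleeping rtmutex on RT */
        unsigned long count;
    };
    static DEFINE_PER_CPU(struct my_data, my_data);

    static void per_cpu_update(void)
    {
        struct my_data *d;

        migrate_disable();              /* pinned, yet preemptible */
        d = this_cpu_ptr(&my_data);
        spin_lock(&d->lock);
        d->count++;
        spin_unlock(&d->lock);
        migrate_enable();
    }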

View File

@ -1,7 +1,7 @@
Subject: iommu/amd: Use WARN_ON_NORT in __attach_device()
From: Thomas Gleixner <tglx@linutronix.de>
Date: Sat, 27 Feb 2016 10:22:23 +0100
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
RT does not disable interrupts here, but the protection is still
correct. Fixup the WARN_ON so it won't yell on RT.
@ -17,7 +17,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -2165,10 +2165,10 @@ static int __attach_device(struct iommu_
@@ -1832,10 +1832,10 @@ static int __attach_device(struct iommu_
int ret;
/*
@ -31,7 +31,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
/* lock domain */
spin_lock(&domain->lock);
@@ -2331,10 +2331,10 @@ static void __detach_device(struct iommu
@@ -2003,10 +2003,10 @@ static void __detach_device(struct iommu
struct protection_domain *domain;
/*

View File

@ -0,0 +1,82 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Thu, 15 Sep 2016 16:58:19 +0200
Subject: [PATCH] iommu/iova: don't disable preempt around this_cpu_ptr()
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
Commit 583248e6620a ("iommu/iova: Disable preemption around use of
this_cpu_ptr()") disables preemption while accessing a per-CPU variable.
This does keep lockdep quiet. However, I don't see why it would be bad
if we got migrated to another CPU after the access.
__iova_rcache_insert() and __iova_rcache_get() immediately lock the
variable after obtaining it, before accessing its members.
_If_ we get migrated away after retrieving the address of cpu_rcache but
before taking the lock, then the *other* task on the same CPU will
retrieve the same address of cpu_rcache and will spin on the lock.
alloc_iova_fast() disables preemption while invoking
free_cpu_cached_iovas() on each CPU. The function itself uses
per_cpu_ptr(), which does not trigger a warning (like this_cpu_ptr()
does) because it assumes the caller knows what they are doing, since
they might access the data structure from a different CPU (which means
they need protection against concurrent access).
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
drivers/iommu/iova.c | 9 +++------
1 file changed, 3 insertions(+), 6 deletions(-)
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -22,6 +22,7 @@
#include <linux/slab.h>
#include <linux/smp.h>
#include <linux/bitops.h>
+#include <linux/cpu.h>
static bool iova_rcache_insert(struct iova_domain *iovad,
unsigned long pfn,
@@ -420,10 +421,8 @@ alloc_iova_fast(struct iova_domain *iova
/* Try replenishing IOVAs by flushing rcache. */
flushed_rcache = true;
- preempt_disable();
for_each_online_cpu(cpu)
free_cpu_cached_iovas(cpu, iovad);
- preempt_enable();
goto retry;
}
@@ -751,7 +750,7 @@ static bool __iova_rcache_insert(struct
bool can_insert = false;
unsigned long flags;
- cpu_rcache = get_cpu_ptr(rcache->cpu_rcaches);
+ cpu_rcache = raw_cpu_ptr(rcache->cpu_rcaches);
spin_lock_irqsave(&cpu_rcache->lock, flags);
if (!iova_magazine_full(cpu_rcache->loaded)) {
@@ -781,7 +780,6 @@ static bool __iova_rcache_insert(struct
iova_magazine_push(cpu_rcache->loaded, iova_pfn);
spin_unlock_irqrestore(&cpu_rcache->lock, flags);
- put_cpu_ptr(rcache->cpu_rcaches);
if (mag_to_free) {
iova_magazine_free_pfns(mag_to_free, iovad);
@@ -815,7 +813,7 @@ static unsigned long __iova_rcache_get(s
bool has_pfn = false;
unsigned long flags;
- cpu_rcache = get_cpu_ptr(rcache->cpu_rcaches);
+ cpu_rcache = raw_cpu_ptr(rcache->cpu_rcaches);
spin_lock_irqsave(&cpu_rcache->lock, flags);
if (!iova_magazine_empty(cpu_rcache->loaded)) {
@@ -837,7 +835,6 @@ static unsigned long __iova_rcache_get(s
iova_pfn = iova_magazine_pop(cpu_rcache->loaded, limit_pfn);
spin_unlock_irqrestore(&cpu_rcache->lock, flags);
- put_cpu_ptr(rcache->cpu_rcaches);
return iova_pfn;
}
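The general pattern applied here, as a hedged sketch with a hypothetical
per-CPU structure: the per-entry spinlock, not disabled preemption,
provides the exclusion, so raw_cpu_ptr() is sufficient:

    struct pcp_cache {
        spinlock_t lock;                /* initialized at boot */
        unsigned long val;
    };
    static DEFINE_PER_CPU(struct pcp_cache, pcp_cache);

    static void pcp_update(unsigned long v)
    {
        struct pcp_cache *c = raw_cpu_ptr(&pcp_cache);
        unsigned long flags;

        /* Worst case we migrated after raw_cpu_ptr() and now contend
         * on another CPU's lock; correctness is unaffected. */
        spin_lock_irqsave(&c->lock, flags);
        c->val = v;
        spin_unlock_irqrestore(&c->lock, flags);
    }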

View File

@ -0,0 +1,59 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Thu, 15 Sep 2016 17:16:44 +0200
Subject: [PATCH] iommu/vt-d: don't disable preemption while accessing
deferred_flush()
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
get_cpu() disables preemption and returns the current CPU number. The
CPU number is later used only once, while retrieving the address of the
local CPU's deferred_flush pointer.
We can instead use raw_cpu_ptr() while we remain preemptible. The worst
thing that can happen is that flush_unmaps_timeout() is invoked multiple
times: once by taskA after seeing HIGH_WATER_MARK, and again by taskB,
which saw HIGH_WATER_MARK on the same CPU after taskA was preempted to
another CPU. It is also possible that ->size dropped from HIGH_WATER_MARK
to 0 right after it was read, because another CPU invoked
flush_unmaps_timeout() for this CPU.
The access to flush_data is protected by a spinlock, so even if we get
migrated to another CPU or preempted, the data structure stays protected.
While at it, I marked deferred_flush static since I can't find a
reference to it outside of this file.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
drivers/iommu/intel-iommu.c | 8 ++------
1 file changed, 2 insertions(+), 6 deletions(-)
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -479,7 +479,7 @@ struct deferred_flush_data {
struct deferred_flush_table *tables;
};
-DEFINE_PER_CPU(struct deferred_flush_data, deferred_flush);
+static DEFINE_PER_CPU(struct deferred_flush_data, deferred_flush);
/* bitmap for indexing intel_iommus */
static int g_num_of_iommus;
@@ -3626,10 +3626,8 @@ static void add_unmap(struct dmar_domain
struct intel_iommu *iommu;
struct deferred_flush_entry *entry;
struct deferred_flush_data *flush_data;
- unsigned int cpuid;
- cpuid = get_cpu();
- flush_data = per_cpu_ptr(&deferred_flush, cpuid);
+ flush_data = raw_cpu_ptr(&deferred_flush);
/* Flush all CPUs' entries to avoid deferring too much. If
* this becomes a bottleneck, can just flush us, and rely on
@@ -3662,8 +3660,6 @@ static void add_unmap(struct dmar_domain
}
flush_data->size++;
spin_unlock_irqrestore(&flush_data->lock, flags);
-
- put_cpu();
}
static void intel_unmap(struct device *dev, dma_addr_t dev_addr, size_t size)

View File

@ -1,7 +1,7 @@
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Fri, 30 Oct 2015 11:59:07 +0100
Subject: ipc/msg: Implement lockless pipelined wakeups
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
This patch moves the wake_up_process() invocation so it is not done under
the perm->lock by making use of a lockless wake_q. With this change, the
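A minimal sketch of the wake_q pattern adopted here (the queue and helper
names are hypothetical; 4.8-era kernels spell the macro WAKE_Q, later
ones DEFINE_WAKE_Q):

    static void wake_all_waiters(struct my_queue *q)
    {
        WAKE_Q(wake_q);                 /* on-stack wakeup list */
        struct task_struct *t;

        spin_lock(&q->lock);
        while ((t = pop_waiter(q)) != NULL)     /* hypothetical helper */
            wake_q_add(&wake_q, t);     /* only records the task */
        spin_unlock(&q->lock);

        wake_up_q(&wake_q);             /* real wakeups, lock dropped */
    }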

View File

@ -1,7 +1,7 @@
Subject: ipc/sem: Rework semaphore wakeups
From: Peter Zijlstra <peterz@infradead.org>
Date: Wed, 14 Sep 2011 11:57:04 +0200
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
Current sysv sems have a weird ass wakeup scheme that involves keeping
preemption disabled over a potential O(n^2) loop and busy waiting on
@ -30,7 +30,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -697,6 +697,13 @@ static int perform_atomic_semop(struct s
@@ -686,6 +686,13 @@ static int perform_atomic_semop(struct s
static void wake_up_sem_queue_prepare(struct list_head *pt,
struct sem_queue *q, int error)
{
@ -44,7 +44,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
if (list_empty(pt)) {
/*
* Hold preempt off so that we don't get preempted and have the
@@ -708,6 +715,7 @@ static void wake_up_sem_queue_prepare(st
@@ -697,6 +704,7 @@ static void wake_up_sem_queue_prepare(st
q->pid = error;
list_add_tail(&q->list, pt);
@ -52,7 +52,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
}
/**
@@ -721,6 +729,7 @@ static void wake_up_sem_queue_prepare(st
@@ -710,6 +718,7 @@ static void wake_up_sem_queue_prepare(st
*/
static void wake_up_sem_queue_do(struct list_head *pt)
{
@ -60,7 +60,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
struct sem_queue *q, *t;
int did_something;
@@ -733,6 +742,7 @@ static void wake_up_sem_queue_do(struct
@@ -722,6 +731,7 @@ static void wake_up_sem_queue_do(struct
}
if (did_something)
preempt_enable();

View File

@ -1,7 +1,7 @@
Subject: genirq: Allow disabling of softirq processing in irq thread context
From: Thomas Gleixner <tglx@linutronix.de>
Date: Tue, 31 Jan 2012 13:01:27 +0100
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
The processing of softirqs in irq thread context is a performance gain
for the non-rt workloads of a system, but it's counterproductive for
@ -65,7 +65,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -938,7 +938,15 @@ irq_forced_thread_fn(struct irq_desc *de
@@ -881,7 +881,15 @@ irq_forced_thread_fn(struct irq_desc *de
local_bh_disable();
ret = action->thread_fn(action->irq, action->dev_id);
irq_finalize_oneshot(desc, action);
@ -82,7 +82,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
return ret;
}
@@ -1388,6 +1396,9 @@ static int
@@ -1338,6 +1346,9 @@ static int
irqd_set(&desc->irq_data, IRQD_NO_BALANCING);
}
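A hedged usage sketch of the IRQF_NO_SOFTIRQ_CALL flag this patch
introduces (driver names are hypothetical): a latency-critical threaded
handler opts out of softirq processing in its irq thread:

    static irqreturn_t my_thread_fn(int irq, void *dev)
    {
        /* time-critical work here */
        return IRQ_HANDLED;
    }

    static int my_setup_irq(unsigned int irq, void *dev)
    {
        return request_threaded_irq(irq, NULL, my_thread_fn,
                                    IRQF_NO_SOFTIRQ_CALL | IRQF_ONESHOT,
                                    "my-rt-dev", dev);
    }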

View File

@ -1,7 +1,7 @@
Subject: irqwork: Move irq safe work to irq context
From: Thomas Gleixner <tglx@linutronix.de>
Date: Sun, 15 Nov 2015 18:40:17 +0100
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.6/older/patches-4.6.2-rt5.tar.xz
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.8/older/patches-4.8-rt1.tar.xz
On architectures where arch_irq_work_has_interrupt() returns false, we
end up running the irq safe work from the softirq context. That
@ -56,7 +56,7 @@ Cc: stable-rt@vger.kernel.org
* Synchronize against the irq_work @entry, ensures the entry is not
--- a/kernel/time/timer.c
+++ b/kernel/time/timer.c
@@ -1484,7 +1484,7 @@ void update_process_times(int user_tick)
@@ -1630,7 +1630,7 @@ void update_process_times(int user_tick)
scheduler_tick();
run_local_timers();
rcu_check_callbacks(user_tick);
@ -65,14 +65,14 @@ Cc: stable-rt@vger.kernel.org
if (in_irq())
irq_work_tick();
#endif
@@ -1498,9 +1498,7 @@ static void run_timer_softirq(struct sof
@@ -1670,9 +1670,7 @@ static void run_timer_softirq(struct sof
{
struct tvec_base *base = this_cpu_ptr(&tvec_bases);
struct timer_base *base = this_cpu_ptr(&timer_bases[BASE_STD]);
-#if defined(CONFIG_IRQ_WORK) && defined(CONFIG_PREEMPT_RT_FULL)
- irq_work_tick();
-#endif
+ irq_work_tick_soft();
if (time_after_eq(jiffies, base->timer_jiffies))
__run_timers(base);
__run_timers(base);
if (IS_ENABLED(CONFIG_NO_HZ_COMMON) && base->nohz_active)
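For reference, a minimal sketch of the irq_work API whose RT split is
shown above (function names are hypothetical); lazy/soft work now runs
via irq_work_tick_soft() from the timer softirq, hard work stays in irq
context:

    static struct irq_work my_work;

    static void my_work_fn(struct irq_work *w)
    {
        pr_info("deferred out of NMI/hard-irq context\n");
    }

    static int __init my_work_setup(void)
    {
        init_irq_work(&my_work, my_work_fn);
        return 0;
    }

    /* callable from any context, even NMI: */
    static void my_trigger(void)
    {
        irq_work_queue(&my_work);
    }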

Some files were not shown because too many files have changed in this diff