| author | Yunhong Jiang <yunhong.jiang@intel.com> | 2015-08-18 11:07:48 -0700 |
|---|---|---|
| committer | Yunhong Jiang <yunhong.jiang@linux.intel.com> | 2016-07-21 17:48:26 -0700 |
| commit | ef4e798bc8761c401451649ed17a52e3e1c638e8 | |
| tree | 84993bd865ba6e65be526af0259ca25f33c9d662 | |
| parent | c715b6029fd5b4eaf323f5efde4ec5db5ba0a9b4 | |
Add the "timers: do not raise softirq unconditionally" temporarily
This patch enables nohz_full and is important for RT. Bring it back
temporarily while waiting for further work in the RT community.

Please refer to https://lkml.org/lkml/2015/3/17/783 for more information
about the revert.

A small rebase was needed because the revert was made against an older
code base.

Note that we change rt_mutex_trylock() so that we can take the tvec_base
lock there. This is certainly wrong and should be fixed cleanly; it is the
main reason the original patch was reverted in upstream RT Linux. We will
discuss with upstream how to arrive at a proper solution.
Upstream status: pending
Change-Id: I2747e087faf4145b69b800a60b8d9414bc71e206
Signed-off-by: Yunhong Jiang <yunhong.jiang@linux.intel.com>
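
To make the intent of the timer.c hunk below easier to follow, here is a minimal,
self-contained userspace sketch of the same decision logic: raise the timer softirq
only when irq work is pending, when the per-CPU base lock cannot be taken without
blocking, or when a queued timer has actually expired. The names (`struct fake_base`,
`raise_needed()`, and the pthread mutex standing in for the tvec_base spinlock) are
hypothetical illustrations, not kernel APIs; build with `gcc -pthread`.

```c
/*
 * Hypothetical userspace model of the run_local_timers() change.
 * It only illustrates the decision "raise the softirq or skip it";
 * a pthread trylock stands in for the tvec_base spinlock.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct fake_base {			/* stand-in for struct tvec_base */
	pthread_mutex_t lock;
	unsigned long next_timer;	/* expiry of the earliest timer  */
	int active_timers;		/* number of queued timers       */
};

static bool raise_needed(struct fake_base *base, unsigned long jiffies,
			 bool irq_work_pending)
{
	bool raise;

	/* on RT, pending irq work is flushed from the timer softirq */
	if (irq_work_pending)
		return true;

	/* never block in (what models) hard interrupt context */
	if (pthread_mutex_trylock(&base->lock) != 0)
		return true;

	/* raise only if a timer is queued and has already expired */
	raise = base->active_timers &&
		(long)(base->next_timer - jiffies) <= 0;

	pthread_mutex_unlock(&base->lock);
	return raise;
}

int main(void)
{
	static struct fake_base base = {
		.lock = PTHREAD_MUTEX_INITIALIZER,
		.next_timer = 100,
		.active_timers = 1,
	};

	printf("jiffies=50  -> raise=%d\n", raise_needed(&base, 50, false));	/* 0 */
	printf("jiffies=150 -> raise=%d\n", raise_needed(&base, 150, false));	/* 1 */
	return 0;
}
```

The key design point the sketch mirrors is that every early-exit path falls back to
raising the softirq, so the optimization can only skip redundant work, never lose a
timer expiry.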
-rw-r--r-- | kernel/kernel/locking/rtmutex.c |  7
-rw-r--r-- | kernel/kernel/time/timer.c      | 30
2 files changed, 29 insertions(+), 8 deletions(-)
diff --git a/kernel/kernel/locking/rtmutex.c b/kernel/kernel/locking/rtmutex.c
index 66971005c..30777e813 100644
--- a/kernel/kernel/locking/rtmutex.c
+++ b/kernel/kernel/locking/rtmutex.c
@@ -2058,13 +2058,6 @@ EXPORT_SYMBOL_GPL(rt_mutex_timed_lock);
  */
 int __sched rt_mutex_trylock(struct rt_mutex *lock)
 {
-#ifdef CONFIG_PREEMPT_RT_FULL
-	if (WARN_ON(in_irq() || in_nmi()))
-#else
-	if (WARN_ON(in_irq() || in_nmi() || in_serving_softirq()))
-#endif
-		return 0;
-
 	return rt_mutex_fasttrylock(lock, rt_mutex_slowtrylock);
 }
 EXPORT_SYMBOL_GPL(rt_mutex_trylock);
diff --git a/kernel/kernel/time/timer.c b/kernel/kernel/time/timer.c
index fee8682c2..76a301b24 100644
--- a/kernel/kernel/time/timer.c
+++ b/kernel/kernel/time/timer.c
@@ -1509,8 +1509,36 @@ static void run_timer_softirq(struct softirq_action *h)
  */
 void run_local_timers(void)
 {
+	struct tvec_base *base = this_cpu_ptr(&tvec_bases);
+
 	hrtimer_run_queues();
-	raise_softirq(TIMER_SOFTIRQ);
+	/*
+	 * We can access this lockless as we are in the timer
+	 * interrupt. If there are no timers queued, nothing to do in
+	 * the timer softirq.
+	 */
+#ifdef CONFIG_PREEMPT_RT_FULL
+	if (irq_work_needs_cpu()) {
+		raise_softirq(TIMER_SOFTIRQ);
+		return;
+	}
+	if (!spin_do_trylock(&base->lock)) {
+		raise_softirq(TIMER_SOFTIRQ);
+		return;
+	}
+#endif
+	if (!base->active_timers)
+		goto out;
+
+	/* Check whether the next pending timer has expired */
+	if (time_before_eq(base->next_timer, jiffies))
+		raise_softirq(TIMER_SOFTIRQ);
+out:
+#ifdef CONFIG_PREEMPT_RT_FULL
+	rt_spin_unlock(&base->lock);
+#endif
+	/* The ; ensures that gcc won't complain in the !RT case */
+	;
 }
 
 #ifdef __ARCH_WANT_SYS_ALARM
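
The spin_do_trylock()/rt_spin_unlock() pair in the RT branch, rather than a plain
spin_lock(), is the crux of the hack: run_local_timers() executes from the timer
interrupt, and on PREEMPT_RT the base lock is an rt_mutex-based sleeping lock, so
only a trylock is tolerable there, which is why the WARN_ON() in rt_mutex_trylock()
had to be dropped in the first hunk.

A side note on the expiry check: time_before_eq(base->next_timer, jiffies) relies on
the kernel's wraparound-safe jiffies comparison, which looks at the sign of the
unsigned difference. The short standalone program below demonstrates that idiom;
TIME_BEFORE_EQ here is a local re-implementation for illustration, not the kernel
header macro.

```c
/*
 * Standalone demo of the wraparound-safe comparison idiom behind
 * time_before_eq(), as used by "time_before_eq(base->next_timer, jiffies)".
 * TIME_BEFORE_EQ is a local illustration, not the kernel macro itself.
 */
#include <limits.h>
#include <stdio.h>

#define TIME_BEFORE_EQ(a, b)	((long)((a) - (b)) <= 0)

int main(void)
{
	unsigned long jiffies    = 16;		   /* counter just wrapped around        */
	unsigned long next_timer = ULONG_MAX - 15; /* expiry set shortly before the wrap */

	/* A plain "<=" claims the timer lies far in the future ... */
	printf("naive  : %d\n", next_timer <= jiffies);
	/* ... the signed-difference idiom correctly reports it as expired. */
	printf("wrapped: %d\n", TIME_BEFORE_EQ(next_timer, jiffies));
	return 0;
}
```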