Diffstat (limited to 'kernel/Documentation/RCU')
-rw-r--r--  kernel/Documentation/RCU/00-INDEX            |   36
-rw-r--r--  kernel/Documentation/RCU/NMI-RCU.txt         |  120
-rw-r--r--  kernel/Documentation/RCU/RTFP.txt            | 2812
-rw-r--r--  kernel/Documentation/RCU/UP.txt              |  135
-rw-r--r--  kernel/Documentation/RCU/arrayRCU.txt        |  141
-rw-r--r--  kernel/Documentation/RCU/checklist.txt       |  437
-rw-r--r--  kernel/Documentation/RCU/listRCU.txt         |  315
-rw-r--r--  kernel/Documentation/RCU/lockdep-splat.txt   |  110
-rw-r--r--  kernel/Documentation/RCU/lockdep.txt         |  112
-rw-r--r--  kernel/Documentation/RCU/rcu.txt             |   96
-rw-r--r--  kernel/Documentation/RCU/rcu_dereference.txt |  371
-rw-r--r--  kernel/Documentation/RCU/rcubarrier.txt      |  321
-rw-r--r--  kernel/Documentation/RCU/rculist_nulls.txt   |  172
-rw-r--r--  kernel/Documentation/RCU/rcuref.txt          |  132
-rw-r--r--  kernel/Documentation/RCU/stallwarn.txt       |  250
-rw-r--r--  kernel/Documentation/RCU/torture.txt         |  334
-rw-r--r--  kernel/Documentation/RCU/trace.txt           |  554
-rw-r--r--  kernel/Documentation/RCU/whatisRCU.txt       | 1031
18 files changed, 7479 insertions, 0 deletions
diff --git a/kernel/Documentation/RCU/00-INDEX b/kernel/Documentation/RCU/00-INDEX
new file mode 100644
index 000000000..f773a264a
--- /dev/null
+++ b/kernel/Documentation/RCU/00-INDEX
@@ -0,0 +1,36 @@
+00-INDEX
+ - This file
+arrayRCU.txt
+ - Using RCU to Protect Read-Mostly Arrays
+checklist.txt
+ - Review Checklist for RCU Patches
+listRCU.txt
+ - Using RCU to Protect Read-Mostly Linked Lists
+lockdep.txt
+ - RCU and lockdep checking
+lockdep-splat.txt
+ - RCU Lockdep splats explained.
+NMI-RCU.txt
+ - Using RCU to Protect Dynamic NMI Handlers
+rcu_dereference.txt
+ - Proper care and feeding of return values from rcu_dereference()
+rcubarrier.txt
+ - RCU and Unloadable Modules
+rculist_nulls.txt
+ - RCU list primitives for use with SLAB_DESTROY_BY_RCU
+rcuref.txt
+ - Reference-count design for elements of lists/arrays protected by RCU
+rcu.txt
+ - RCU Concepts
+RTFP.txt
+ - List of RCU papers (bibliography) going back to 1980.
+stallwarn.txt
+ - RCU CPU stall warnings (module parameter rcu_cpu_stall_suppress)
+torture.txt
+ - RCU Torture Test Operation (CONFIG_RCU_TORTURE_TEST)
+trace.txt
+ - CONFIG_RCU_TRACE debugfs files and formats
+UP.txt
+ - RCU on Uniprocessor Systems
+whatisRCU.txt
+ - What is RCU?
diff --git a/kernel/Documentation/RCU/NMI-RCU.txt b/kernel/Documentation/RCU/NMI-RCU.txt
new file mode 100644
index 000000000..687777f83
--- /dev/null
+++ b/kernel/Documentation/RCU/NMI-RCU.txt
@@ -0,0 +1,120 @@
+Using RCU to Protect Dynamic NMI Handlers
+
+
+Although RCU is usually used to protect read-mostly data structures,
+it is possible to use RCU to provide dynamic non-maskable interrupt
+handlers, as well as dynamic irq handlers. This document describes
+how to do this, drawing loosely from Zwane Mwaikambo's NMI-timer
+work in "arch/x86/oprofile/nmi_timer_int.c" and in
+"arch/x86/kernel/traps.c".
+
+The relevant pieces of code are listed below, each followed by a
+brief explanation.
+
+ static int dummy_nmi_callback(struct pt_regs *regs, int cpu)
+ {
+ return 0;
+ }
+
+The dummy_nmi_callback() function is a "dummy" NMI handler that does
+nothing but return zero, thereby indicating that it did not handle the
+NMI and allowing do_nmi() to take the default machine-specific action.
+
+ static nmi_callback_t nmi_callback = dummy_nmi_callback;
+
+This nmi_callback variable is a global function pointer to the current
+NMI handler.
+
+ void do_nmi(struct pt_regs * regs, long error_code)
+ {
+ int cpu;
+
+ nmi_enter();
+
+ cpu = smp_processor_id();
+ ++nmi_count(cpu);
+
+ if (!rcu_dereference_sched(nmi_callback)(regs, cpu))
+ default_do_nmi(regs);
+
+ nmi_exit();
+ }
+
+The do_nmi() function processes each NMI. It first disables preemption
+in the same way that a hardware irq would, then increments the per-CPU
+count of NMIs. It then invokes the NMI handler stored in the nmi_callback
+function pointer. If this handler returns zero, do_nmi() invokes the
+default_do_nmi() function to handle a machine-specific NMI. Finally,
+preemption is restored.
+
+In theory, rcu_dereference_sched() is not needed, since this code runs
+only on i386, which in theory does not need rcu_dereference_sched()
+anyway. However, in practice it is a good documentation aid, particularly
+for anyone attempting to do something similar on Alpha or on systems
+with aggressive optimizing compilers.
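+
+To see what rcu_dereference_sched() buys you, compare the following two
+loads of nmi_callback (a sketch; "cb" is a local variable introduced
+here only for illustration):
+
+	nmi_callback_t cb;
+
+	cb = nmi_callback;	/* Buggy: on Alpha, later loads of the
+				   data that the handler uses might see
+				   pre-initialization values. */
+
+	cb = rcu_dereference_sched(nmi_callback);
+				/* Safe: orders this load before any
+				   subsequent dereference based on cb,
+				   even on Alpha. */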
+
+Quick Quiz: Why might the rcu_dereference_sched() be necessary on Alpha,
+ given that the code referenced by the pointer is read-only?
+
+
+Back to the discussion of NMI and RCU...
+
+ void set_nmi_callback(nmi_callback_t callback)
+ {
+ rcu_assign_pointer(nmi_callback, callback);
+ }
+
+The set_nmi_callback() function registers an NMI handler. Note that any
+data that is to be used by the callback must be initialized -before-
+the call to set_nmi_callback(). On architectures that do not order
+writes, the rcu_assign_pointer() ensures that the NMI handler sees the
+initialized values.
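+
+For example, registration from a module's init function might look
+like this (a minimal sketch; my_nmi_data, my_nmi_handler(), and
+my_nmi_init() are hypothetical names, not part of any kernel API):
+
+	static struct my_nmi_data *my_nmi_data;
+
+	static int my_nmi_handler(struct pt_regs *regs, int cpu)
+	{
+		my_nmi_data->nmis[cpu]++;
+		return 1;	/* Handled, so skip default_do_nmi(). */
+	}
+
+	my_nmi_data = kzalloc(sizeof(*my_nmi_data), GFP_KERNEL);
+	my_nmi_init(my_nmi_data);	  /* Initialize -before- publishing... */
+	set_nmi_callback(my_nmi_handler); /* ...so NMI handlers see the
+					     initialized values. */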
+
+ void unset_nmi_callback(void)
+ {
+ rcu_assign_pointer(nmi_callback, dummy_nmi_callback);
+ }
+
+This function unregisters an NMI handler, restoring the original
+dummy_nmi_callback(). However, there may well be an NMI handler
+currently executing on some other CPU. We therefore cannot free
+up any data structures used by the old NMI handler until its
+execution completes on all other CPUs.
+
+One way to accomplish this is via synchronize_sched(), perhaps as
+follows:
+
+ unset_nmi_callback();
+ synchronize_sched();
+ kfree(my_nmi_data);
+
+This works because synchronize_sched() blocks until all CPUs complete
+any preemption-disabled segments of code that they were executing.
+Since NMI handlers disable preemption, synchronize_sched() is guaranteed
+not to return until all ongoing NMI handlers exit. It is therefore safe
+to free up the handler's data as soon as synchronize_sched() returns.
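+
+A module's exit path might therefore look as follows (a sketch reusing
+the hypothetical my_nmi_data from the registration example above):
+
+	static void my_module_exit(void)
+	{
+		unset_nmi_callback();	/* New NMIs get dummy_nmi_callback(). */
+		synchronize_sched();	/* Wait out handlers still running. */
+		kfree(my_nmi_data);	/* Now safe to free. */
+		my_nmi_data = NULL;
+	}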
+
+Important note: for this to work, the architecture in question must
+invoke nmi_enter() and nmi_exit() on NMI entry and exit, respectively.
+
+
+Answer to Quick Quiz
+
+ Why might the rcu_dereference_sched() be necessary on Alpha, given
+ that the code referenced by the pointer is read-only?
+
+	Answer: The caller of set_nmi_callback() might well have
+		initialized some data that is to be used by the new NMI
+		handler. In this case, the rcu_dereference_sched() would
+		be needed, because otherwise a CPU that received an NMI
+		just after the new handler was set might see the pointer
+		to the new NMI handler, but also see the old,
+		pre-initialization values of the handler's data.
+
+ This same sad story can happen on other CPUs when using
+ a compiler with aggressive pointer-value speculation
+ optimizations.
+
+ More important, the rcu_dereference_sched() makes it
+ clear to someone reading the code that the pointer is
+ being protected by RCU-sched.
diff --git a/kernel/Documentation/RCU/RTFP.txt b/kernel/Documentation/RCU/RTFP.txt
new file mode 100644
index 000000000..f29bcbc46
--- /dev/null
+++ b/kernel/Documentation/RCU/RTFP.txt
@@ -0,0 +1,2812 @@
+Read the Fscking Papers!
+
+
+This document describes RCU-related publications, and is followed by
+the corresponding bibtex entries. A number of the publications may
+be found at http://www.rdrop.com/users/paulmck/RCU/. For others, browsers
+and search engines will usually find what you are looking for.
+
+The first thing resembling RCU was published in 1980, when Kung and Lehman
+[Kung80] recommended use of a garbage collector to defer destruction
+of nodes in a parallel binary search tree in order to simplify its
+implementation. This works well in environments that have garbage
+collectors, but most production garbage collectors incur significant
+overhead.
+
+In 1982, Manber and Ladner [Manber82,Manber84] recommended deferring
+destruction until all threads running at that time have terminated, again
+for a parallel binary search tree. This approach works well in systems
+with short-lived threads, such as the K42 research operating system.
+However, Linux has long-lived tasks, so more is needed.
+
+In 1986, Hennessy, Osisek, and Seigh [Hennessy89] introduced passive
+serialization, which is an RCU-like mechanism that relies on the presence
+of "quiescent states" in the VM/XA hypervisor that are guaranteed not
+to be referencing the data structure. However, this mechanism was not
+optimized for modern computer systems, which is not surprising given
+that these overheads were not so expensive in the mid-80s. Nonetheless,
+passive serialization appears to be the first deferred-destruction
+mechanism to be used in production. Furthermore, the relevant patent
+has lapsed, so this approach may be used in non-GPL software, if desired.
+(In contrast, implementation of RCU is permitted only in software licensed
+under either GPL or LGPL. Sorry!!!)
+
+In 1987, Rashid et al. described lazy TLB-flush [RichardRashid87a].
+At first glance, this has nothing to do with RCU, but nevertheless
+this paper helped inspire the update-side batching used in the later
+RCU implementation in DYNIX/ptx. In 1988, Barbara Liskov published
+a description of Argus that noted that use of out-of-date values can
+be tolerated in some situations [BarbaraLiskov1988ArgusCACM]. Thus,
+this paper provides some early theoretical justification for use of
+stale data.
+
+In 1990, Pugh [Pugh90] noted that explicitly tracking which threads
+were reading a given data structure permitted deferred free to operate
+in the presence of non-terminating threads. However, this explicit
+tracking imposes significant read-side overhead, which is undesirable
+in read-mostly situations. This algorithm does take pains to avoid
+write-side contention and parallelize the other write-side overheads by
+providing a fine-grained locking design; however, it would be interesting
+to see how much of the performance advantage reported in 1990 remains
+today.
+
+At about this same time, Andrews [Andrews91textbook] described ``chaotic
+relaxation'', where the normal barriers between successive iterations
+of convergent numerical algorithms are relaxed, so that iteration $n$
+might use data from iteration $n-1$ or even $n-2$. This introduces
+error, which typically slows convergence and thus increases the number of
+iterations required. However, this increase is sometimes more than made
+up for by a reduction in the number of expensive barrier operations,
+which are otherwise required to synchronize the threads at the end
+of each iteration. Unfortunately, chaotic relaxation requires highly
+structured data, such as the matrices used in scientific programs, and
+is thus inapplicable to most data structures in operating-system kernels.
+
+In 1992, Henry (now Alexia) Massalin completed a dissertation advising
+parallel programmers to defer processing when feasible to simplify
+synchronization [HMassalinPhD]. RCU makes extremely heavy use of
+this advice.
+
+In 1993, Jacobson [Jacobson93] verbally described what is perhaps the
+simplest deferred-free technique: simply waiting a fixed amount of time
+before freeing blocks awaiting deferred free. Jacobson did not describe
+any write-side changes he might have made in this work using SGI's Irix
+kernel. Aju John published a similar technique in 1995 [AjuJohn95].
+This works well if there is a well-defined upper bound on the length of
+time that reading threads can hold references, as there might well be in
+hard real-time systems. However, if this time is exceeded, perhaps due
+to preemption, excessive interrupts, or larger-than-anticipated load,
+memory corruption can ensue, with no reasonable means of diagnosis.
+Jacobson's technique is therefore inappropriate for use in production
+operating-system kernels, except when such kernels can provide hard
+real-time response guarantees for all operations.
+
+Also in 1995, Pu et al. [Pu95a] applied a technique similar to Pugh's
+read-side tracking to permit replugging of algorithms within a commercial
+Unix operating system. However, this replugging permitted only a single
+reader at a time. The following year, this same group of researchers
+extended their technique to allow for multiple readers [Cowan96a].
+Their approach requires memory barriers (and thus pipeline stalls),
+but reduces memory latency, contention, and locking overheads.
+
+1995 also saw the first publication of DYNIX/ptx's RCU mechanism
+[Slingwine95], which was optimized for modern CPU architectures,
+and was successfully applied to a number of situations within the
+DYNIX/ptx kernel. The corresponding conference paper appeared in 1998
+[McKenney98].
+
+In 1999, the Tornado and K42 groups described their "generations"
+mechanism, which is quite similar to RCU [Gamsa99]. These operating
+systems made pervasive use of RCU in place of "existence locks", which
+greatly simplifies locking hierarchies and helps avoid deadlocks.
+
+The year 2000 saw an email exchange that would likely have
+led to yet another independent invention of something like RCU
+[RustyRussell2000a,RustyRussell2000b]. Instead, 2001 saw the first
+RCU presentation involving Linux [McKenney01a] at OLS. The resulting
+abundance of RCU patches was presented the following year [McKenney02a],
+and use of RCU in dcache was first described that same year [Linder02a].
+
+Also in 2002, Michael [Michael02b,Michael02a] presented "hazard-pointer"
+techniques that defer the destruction of data structures to simplify
+non-blocking synchronization (wait-free synchronization, lock-free
+synchronization, and obstruction-free synchronization are all examples of
+non-blocking synchronization). The corresponding journal article appeared
+in 2004 [MagedMichael04a]. This technique eliminates locking, reduces
+contention, reduces memory latency for readers, and parallelizes pipeline
+stalls and memory latency for writers. However, these techniques still
+impose significant read-side overhead in the form of memory barriers.
+Researchers at Sun worked along similar lines in the same timeframe
+[HerlihyLM02]. These techniques can be thought of as inside-out reference
+counts, where the count is represented by the number of hazard pointers
+referencing a given data structure rather than the more conventional
+counter field within the data structure itself. The key advantage
+of inside-out reference counts is that they can be stored in immortal
+variables, thus allowing races between access and deletion to be avoided.
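+
+The read side of the hazard-pointer technique can be sketched as
+follows (an illustrative fragment in Linux-kernel style; "me" indexes
+this thread's slot, and hazard_pointer[] and list_head are hypothetical
+names, not taken from the cited papers):
+
+	struct node *p;
+
+	do {
+		p = READ_ONCE(list_head);	   /* Candidate reference. */
+		WRITE_ONCE(hazard_pointer[me], p); /* Publish hazard pointer. */
+		smp_mb();	/* The read-side barrier mentioned above. */
+	} while (p != READ_ONCE(list_head));	/* Retry if p was removed. */
+
+	/* Reclaimers scan hazard_pointer[] and defer freeing any
+	   structure that some slot still references. */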
+
+By the same token, RCU can be thought of as a "bulk reference count",
+where some form of reference counter covers all references by a given CPU
+or thread during a set timeframe. This timeframe is related to, but
+not necessarily exactly the same as, an RCU grace period. In classic
+RCU, the reference counter is the per-CPU bit in the "bitmask" field,
+and each such bit covers all references that might have been made by
+the corresponding CPU during the prior grace period. Of course, RCU
+can be thought of in other terms as well.
+
+In 2003, the K42 group described how RCU could be used to create
+hot-pluggable implementations of operating-system functions [Appavoo03a].
+Later that year saw a paper describing an RCU implementation
+of System V IPC [Arcangeli03] (following up on a suggestion by
+Hugh Dickins [Dickins02a] and an implementation by Mingming Cao
+[MingmingCao2002IPCRCU]), and an introduction to RCU in Linux Journal
+[McKenney03a].
+
+2004 has seen a Linux-Journal article on use of RCU in dcache
+[McKenney04a], a performance comparison of locking to RCU on several
+different CPUs [McKenney04b], a dissertation describing use of RCU in a
+number of operating-system kernels [PaulEdwardMcKenneyPhD], a paper
+describing how to make RCU safe for soft-realtime applications [Sarma04c],
+and a paper describing SELinux performance with RCU [JamesMorris04b].
+
+2005 brought further adaptation of RCU to realtime use, permitting
+preemption of RCU realtime critical sections [PaulMcKenney05a,
+PaulMcKenney05b].
+
+2006 saw the first best-paper award for an RCU paper [ThomasEHart2006a],
+as well as further work on efficient implementations of preemptible
+RCU [PaulEMcKenney2006b], but priority-boosting of RCU read-side critical
+sections proved elusive. An RCU implementation permitting general
+blocking in read-side critical sections appeared [PaulEMcKenney2006c],
+and Robert Olsson described an RCU-protected trie-hash combination
+[RobertOlsson2006a].
+
+2007 saw the journal version of the award-winning RCU paper from 2006
+[ThomasEHart2007a], as well as a paper demonstrating use of Promela
+and Spin to mechanically verify an optimization to Oleg Nesterov's
+QRCU [PaulEMcKenney2007QRCUspin], a design document describing
+preemptible RCU [PaulEMcKenney2007PreemptibleRCU], and the three-part
+LWN "What is RCU?" series [PaulEMcKenney2007WhatIsRCUFundamentally,
+PaulEMcKenney2008WhatIsRCUUsage, and PaulEMcKenney2008WhatIsRCUAPI].
+
+2008 saw a journal paper on real-time RCU [DinakarGuniguntala2008IBMSysJ],
+a history of how Linux changed RCU more than RCU changed Linux
+[PaulEMcKenney2008RCUOSR], and a design overview of hierarchical RCU
+[PaulEMcKenney2008HierarchicalRCU].
+
+2009 introduced user-level RCU algorithms [PaulEMcKenney2009MaliciousURCU],
+which Mathieu Desnoyers is now maintaining [MathieuDesnoyers2009URCU]
+[MathieuDesnoyersPhD]. TINY_RCU [PaulEMcKenney2009BloatWatchRCU] made
+its appearance, as did expedited RCU [PaulEMcKenney2009expeditedRCU].
+The problem of resizeable RCU-protected hash tables may now be on a path
+to a solution [JoshTriplett2009RPHash]. A few academic researchers are now
+using RCU to solve their parallel problems [HariKannan2009DynamicAnalysisRCU].
+
+2010 produced a simpler preemptible-RCU implementation
+based on TREE_RCU [PaulEMcKenney2010SimpleOptRCU], lockdep-RCU
+[PaulEMcKenney2010LockdepRCU], another resizeable RCU-protected hash
+table [HerbertXu2010RCUResizeHash] (this one consuming more memory,
+but allowing arbitrary changes in hash function, as required for DoS
+avoidance in the networking code), realization of the 2009 RCU-protected
+hash table with atomic node move [JoshTriplett2010RPHash], and an update on
+the RCU API [PaulEMcKenney2010RCUAPI].
+
+2011 marked the inclusion of Nick Piggin's fully lockless dentry search
+[LinusTorvalds2011Linux2:6:38:rc1:NPigginVFS], an RCU-protected red-black
+tree using software transactional memory to protect concurrent updates
+(strange, but true!) [PhilHoward2011RCUTMRBTree], yet another variant of
+RCU-protected resizeable hash tables [Triplett:2011:RPHash], the 3.0 RCU
+trainwreck [PaulEMcKenney2011RCU3.0trainwreck], and Neil Brown's "Meet the
+Lockers" LWN article [NeilBrown2011MeetTheLockers]. Some academic
+work looked at debugging uses of RCU [Seyster:2011:RFA:2075416.2075425].
+
+In 2012, Josh Triplett received his Ph.D. with his dissertation
+covering RCU-protected resizable hash tables and the relationship
+between memory barriers and read-side traversal order: If the updater
+is making changes in the opposite direction from the read-side traversal
+order, the updater need only execute a memory-barrier instruction,
+but if in the same direction, the updater needs to wait for a grace
+period between the individual updates [JoshTriplettPhD]. Also in 2012,
+after seventeen years of attempts, an RCU paper made it into a top-flight
+academic journal, IEEE Transactions on Parallel and Distributed Systems
+[MathieuDesnoyers2012URCU]. A group of researchers in Spain applied
+user-level RCU to crowd simulation [GuillermoVigueras2012RCUCrowd], and
+another group of researchers in Europe produced a formal description of
+RCU based on separation logic [AlexeyGotsman2012VerifyGraceExtended],
+which was published in the 2013 European Symposium on Programming
+[AlexeyGotsman2013ESOPRCU].
+
+
+
+Bibtex Entries
+
+@article{Kung80
+,author="H. T. Kung and Q. Lehman"
+,title="Concurrent Manipulation of Binary Search Trees"
+,Year="1980"
+,Month="September"
+,journal="ACM Transactions on Database Systems"
+,volume="5"
+,number="3"
+,pages="354-382"
+,annotation={
+ Use garbage collector to clean up data after everyone is done with it.
+ .
+ Oldest use of something vaguely resembling RCU that I have found.
+ http://portal.acm.org/citation.cfm?id=320619&dl=GUIDE,
+ [Viewed December 3, 2007]
+}
+}
+
+@techreport{Manber82
+,author="Udi Manber and Richard E. Ladner"
+,title="Concurrency Control in a Dynamic Search Structure"
+,institution="Department of Computer Science, University of Washington"
+,address="Seattle, Washington"
+,year="1982"
+,number="82-01-01"
+,month="January"
+,pages="28"
+,annotation={
+ .
+ Superseded by Manber84.
+ .
+ Describes concurrent AVL tree implementation. Uses a
+ garbage-collection mechanism to handle concurrent use and deletion
+ of nodes in the tree, but lacks the summary-of-execution-history
+ concept of read-copy locking.
+ .
+ Keeps full list of processes that were active when a given
+ node was to be deleted, and waits until all such processes have
+ -terminated- before allowing this node to be reused. This is
+ not described in great detail -- one could imagine using process
+ IDs for this if the ID space was large enough that overlapping
+ never occurred.
+ .
+ This restriction makes this algorithm unsuitable for use in
+ systems comprised of long-lived processes. It also produces
+ completely unacceptable overhead in systems with large numbers
+ of processes. Finally, it is specific to AVL trees.
+ .
+ Cites Kung80, so not an independent invention, but the first
+ RCU-like usage that does not rely on an automatic garbage
+ collector.
+}
+}
+
+@article{Manber84
+,author="Udi Manber and Richard E. Ladner"
+,title="Concurrency Control in a Dynamic Search Structure"
+,Year="1984"
+,Month="September"
+,journal="ACM Transactions on Database Systems"
+,volume="9"
+,number="3"
+,pages="439-455"
+,annotation={
+ Describes concurrent AVL tree implementation. Uses a
+ garbage-collection mechanism to handle concurrent use and deletion
+ of nodes in the tree, but lacks the summary-of-execution-history
+ concept of read-copy locking.
+ .
+ Keeps full list of processes that were active when a given
+ node was to be deleted, and waits until all such processes have
+ -terminated- before allowing this node to be reused. This is
+ not described in great detail -- one could imagine using process
+ IDs for this if the ID space was large enough that overlapping
+ never occurred.
+ .
+ This restriction makes this algorithm unsuitable for use in
+ systems comprised of long-lived processes. It also produces
+ completely unacceptable overhead in systems with large numbers
+ of processes. Finally, it is specific to AVL trees.
+}
+}
+
+@Conference{RichardRashid87a
+,Author="Richard Rashid and Avadis Tevanian and Michael Young and
+David Golub and Robert Baron and David Black and William Bolosky and
+Jonathan Chew"
+,Title="Machine-Independent Virtual Memory Management for Paged
+Uniprocessor and Multiprocessor Architectures"
+,Booktitle="{2\textsuperscript{nd} Symposium on Architectural Support
+for Programming Languages and Operating Systems}"
+,Publisher="Association for Computing Machinery"
+,Month="October"
+,Year="1987"
+,pages="31-39"
+,Address="Palo Alto, CA"
+,note="Available:
+\url{http://www.cse.ucsc.edu/~randal/221/rashid-machvm.pdf}
+[Viewed February 17, 2005]"
+,annotation={
+ Describes lazy TLB flush, where one waits for each CPU to pass
+ through a scheduling-clock interrupt before reusing a given range
+ of virtual address. Does not describe how one determines that
+	all CPUs have in fact taken such an interrupt, though there is
+	no shortage of straightforward methods for accomplishing this.
+ .
+ Note that it does not make sense to just wait a fixed amount of
+ time, since a given CPU might have interrupts disabled for an
+ extended amount of time.
+}
+}
+
+@article{BarbaraLiskov1988ArgusCACM
+,author = {Barbara Liskov}
+,title = {Distributed programming in {Argus}}
+,journal = {Commun. ACM}
+,volume = {31}
+,number = {3}
+,year = {1988}
+,issn = {0001-0782}
+,pages = {300--312}
+,doi = {http://doi.acm.org/10.1145/42392.42399}
+,publisher = {ACM}
+,address = {New York, NY, USA}
+,annotation={
+ At the top of page 307: "Conflicts with deposits and withdrawals
+ are necessary if the reported total is to be up to date. They
+ could be avoided by having total return a sum that is slightly
+ out of date." Relies on semantics -- approximate numerical
+ values sometimes OK.
+}
+}
+
+@techreport{Hennessy89
+,author="James P. Hennessy and Damian L. Osisek and Joseph W. {Seigh II}"
+,title="Passive Serialization in a Multitasking Environment"
+,institution="US Patent and Trademark Office"
+,address="Washington, DC"
+,year="1989"
+,number="US Patent 4,809,168 (lapsed)"
+,month="February"
+,pages="11"
+}
+
+@techreport{Pugh90
+,author="William Pugh"
+,title="Concurrent Maintenance of Skip Lists"
+,institution="Institute of Advanced Computer Science Studies, Department of Computer Science, University of Maryland"
+,address="College Park, Maryland"
+,year="1990"
+,number="CS-TR-2222.1"
+,month="June"
+,annotation={
+ Concurrent access to skip lists. Has both weak and strong search.
+ Uses concept of ``garbage queue'', but has no real way of cleaning
+ the garbage efficiently.
+ .
+ Appears to be an independent invention of an RCU-like mechanism.
+}
+}
+
+# Was Adams91, see also syncrefs.bib.
+@Book{Andrews91textbook
+,Author="Gregory R. Andrews"
+,title="Concurrent Programming, Principles, and Practices"
+,Publisher="Benjamin Cummins"
+,Year="1991"
+,annotation={
+ Has a few paragraphs describing ``chaotic relaxation'', a
+ numerical analysis technique that allows multiprocessors to
+ avoid synchronization overhead by using possibly-stale data.
+ .
+ Seems like this is descended from yet another independent
+ invention of RCU-like function -- but this is restricted
+ in that reclamation is not necessary.
+}
+}
+
+@phdthesis{HMassalinPhD
+,author="H. Massalin"
+,title="Synthesis: An Efficient Implementation of Fundamental Operating
+System Services"
+,school="Columbia University"
+,address="New York, NY"
+,year="1992"
+,annotation={
+ Mondo optimizing compiler.
+ Wait-free stuff.
+ Good advice: defer work to avoid synchronization. See page 90
+ (PDF page 106), Section 5.4, fourth bullet point.
+}
+}
+
+@unpublished{Jacobson93
+,author="Van Jacobson"
+,title="Avoid Read-Side Locking Via Delayed Free"
+,year="1993"
+,month="September"
+,note="private communication"
+,annotation={
+ Use fixed time delay to approximate grace period. Very simple,
+ but subject to random memory corruption under heavy load.
+ .
+ Independent invention of RCU-like mechanism.
+}
+}
+
+@Conference{AjuJohn95
+,Author="Aju John"
+,Title="Dynamic vnodes -- Design and Implementation"
+,Booktitle="{USENIX Winter 1995}"
+,Publisher="USENIX Association"
+,Month="January"
+,Year="1995"
+,pages="11-23"
+,Address="New Orleans, LA"
+,note="Available:
+\url{https://www.usenix.org/publications/library/proceedings/neworl/full_papers/john.a}
+[Viewed October 1, 2010]"
+,annotation={
+ Age vnodes out of the cache, and have a fixed time set by a kernel
+ parameter. Not clear that all races were in fact correctly handled.
+ Used a 20-minute time by default, which would most definitely not
+ be suitable during DoS attacks or virus scans.
+ .
+ Apparently independent invention of RCU-like mechanism.
+}
+}
+
+@conference{Pu95a
+,Author = "Calton Pu and Tito Autrey and Andrew Black and Charles Consel and
+Crispin Cowan and Jon Inouye and Lakshmi Kethana and Jonathan Walpole and
+Ke Zhang"
+,Title = "Optimistic Incremental Specialization: Streamlining a Commercial
+,Operating System"
+,Booktitle = "15\textsuperscript{th} ACM Symposium on
+Operating Systems Principles (SOSP'95)"
+,address = "Copper Mountain, CO"
+,month="December"
+,year="1995"
+,pages="314-321"
+,annotation={
+ Uses a replugger, but with a flag to signal when people are
+ using the resource at hand. Only one reader at a time.
+}
+}
+
+@conference{Cowan96a
+,Author = "Crispin Cowan and Tito Autrey and Charles Krasic and
+Calton Pu and Jonathan Walpole"
+,Title = "Fast Concurrent Dynamic Linking for an Adaptive Operating System"
+,Booktitle = "International Conference on Configurable Distributed Systems
+(ICCDS'96)"
+,address = "Annapolis, MD"
+,month="May"
+,year="1996"
+,pages="108"
+,isbn="0-8186-7395-8"
+,annotation={
+ Uses a replugger, but with a counter to signal when people are
+ using the resource at hand. Allows multiple readers.
+}
+}
+
+@techreport{Slingwine95
+,author="John D. Slingwine and Paul E. McKenney"
+,title="Apparatus and Method for Achieving Reduced Overhead Mutual
+Exclusion and Maintaining Coherency in a Multiprocessor System
+Utilizing Execution History and Thread Monitoring"
+,institution="US Patent and Trademark Office"
+,address="Washington, DC"
+,year="1995"
+,number="US Patent 5,442,758"
+,month="August"
+,annotation={
+ Describes the parallel RCU infrastructure. Includes NUMA aspect
+ (structure of bitmap can reflect bus structure of computer system).
+ .
+ Another independent invention of an RCU-like mechanism, but the
+ "real" RCU this time!
+}
+}
+
+@techreport{Slingwine97
+,author="John D. Slingwine and Paul E. McKenney"
+,title="Method for Maintaining Data Coherency Using Thread Activity
+Summaries in a Multicomputer System"
+,institution="US Patent and Trademark Office"
+,address="Washington, DC"
+,year="1997"
+,number="US Patent 5,608,893"
+,month="March"
+,pages="19"
+,annotation={
+ Describes use of RCU to synchronize data between a pair of
+ SMP/NUMA computer systems.
+}
+}
+
+@techreport{Slingwine98
+,author="John D. Slingwine and Paul E. McKenney"
+,title="Apparatus and Method for Achieving Reduced Overhead Mutual
+Exclusion and Maintaining Coherency in a Multiprocessor System
+Utilizing Execution History and Thread Monitoring"
+,institution="US Patent and Trademark Office"
+,address="Washington, DC"
+,year="1998"
+,number="US Patent 5,727,209"
+,month="March"
+,annotation={
+ Describes doing an atomic update by copying the data item and
+ then substituting it into the data structure.
+}
+}
+
+@Conference{McKenney98
+,Author="Paul E. McKenney and John D. Slingwine"
+,Title="Read-Copy Update: Using Execution History to Solve Concurrency
+Problems"
+,Booktitle="{Parallel and Distributed Computing and Systems}"
+,Month="October"
+,Year="1998"
+,pages="509-518"
+,Address="Las Vegas, NV"
+,annotation={
+ Describes and analyzes RCU mechanism in DYNIX/ptx. Describes
+ application to linked list update and log-buffer flushing.
+ Defines 'quiescent state'. Includes both measured and analytic
+ evaluation.
+ http://www.rdrop.com/users/paulmck/RCU/rclockpdcsproof.pdf
+ [Viewed December 3, 2007]
+}
+}
+
+@Conference{Gamsa99
+,Author="Ben Gamsa and Orran Krieger and Jonathan Appavoo and Michael Stumm"
+,Title="Tornado: Maximizing Locality and Concurrency in a Shared Memory
+Multiprocessor Operating System"
+,Booktitle="{Proceedings of the 3\textsuperscript{rd} Symposium on
+Operating System Design and Implementation}"
+,Month="February"
+,Year="1999"
+,pages="87-100"
+,Address="New Orleans, LA"
+,annotation={
+ Use of RCU-like facility in K42/Tornado. Another independent
+ invention of RCU.
+ See especially pages 7-9 (Section 5).
+ http://www.usenix.org/events/osdi99/full_papers/gamsa/gamsa.pdf
+ [Viewed August 30, 2006]
+}
+}
+
+@unpublished{RustyRussell2000a
+,Author="Rusty Russell"
+,Title="Re: modular net drivers"
+,month="June"
+,year="2000"
+,day="23"
+,note="Available:
+\url{http://oss.sgi.com/projects/netdev/archive/2000-06/msg00250.html}
+[Viewed April 10, 2006]"
+,annotation={
+ Proto-RCU proposal from Phil Rumpf and Rusty Russell.
+ Yet another independent invention of RCU.
+ Outline of algorithm to unload modules...
+ .
+ Appeared on net-dev mailing list.
+}
+}
+
+@unpublished{RustyRussell2000b
+,Author="Rusty Russell"
+,Title="Re: modular net drivers"
+,month="June"
+,year="2000"
+,day="24"
+,note="Available:
+\url{http://oss.sgi.com/projects/netdev/archive/2000-06/msg00254.html}
+[Viewed April 10, 2006]"
+,annotation={
+ Proto-RCU proposal from Phil Rumpf and Rusty Russell.
+ .
+ Appeared on net-dev mailing list.
+}
+}
+
+@unpublished{McKenney01b
+,Author="Paul E. McKenney and Dipankar Sarma"
+,Title="Read-Copy Update Mutual Exclusion in {Linux}"
+,month="February"
+,year="2001"
+,note="Available:
+\url{http://lse.sourceforge.net/locking/rcu/rcupdate_doc.html}
+[Viewed October 18, 2004]"
+,annotation={
+ Prototypical Linux documentation for RCU.
+}
+}
+
+@techreport{Slingwine01
+,author="John D. Slingwine and Paul E. McKenney"
+,title="Apparatus and Method for Achieving Reduced Overhead Mutual
+Exclusion and Maintaining Coherency in a Multiprocessor System
+Utilizing Execution History and Thread Monitoring"
+,institution="US Patent and Trademark Office"
+,address="Washington, DC"
+,year="2001"
+,number="US Patent 6,219,690"
+,month="April"
+,annotation={
+ 'Change in mode' aspect of RCU. Can be thought of as a lazy barrier.
+}
+}
+
+@Conference{McKenney01a
+,Author="Paul E. McKenney and Jonathan Appavoo and Andi Kleen and
+Orran Krieger and Rusty Russell and Dipankar Sarma and Maneesh Soni"
+,Title="Read-Copy Update"
+,Booktitle="{Ottawa Linux Symposium}"
+,Month="July"
+,Year="2001"
+,note="Available:
+\url{http://www.linuxsymposium.org/2001/abstracts/readcopy.php}
+\url{http://www.rdrop.com/users/paulmck/RCU/rclock_OLS.2001.05.01c.pdf}
+[Viewed June 23, 2004]"
+,annotation={
+ Described RCU, and presented some patches implementing and using
+ it in the Linux kernel.
+}
+}
+
+@unpublished{McKenney01f
+,Author="Paul E. McKenney"
+,Title="{RFC:} patch to allow lock-free traversal of lists with insertion"
+,month="October"
+,year="2001"
+,note="Available:
+\url{http://marc.theaimsgroup.com/?l=linux-kernel&m=100259266316456&w=2}
+[Viewed June 23, 2004]"
+,annotation={
+ Memory-barrier and Alpha thread. 100 messages, not too bad...
+}
+}
+
+@unpublished{Spraul01
+,Author="Manfred Spraul"
+,Title="Re: {RFC:} patch to allow lock-free traversal of lists with insertion"
+,month="October"
+,year="2001"
+,note="Available:
+\url{http://marc.theaimsgroup.com/?l=linux-kernel&m=100264675012867&w=2}
+[Viewed June 23, 2004]"
+,annotation={
+ Suggested burying memory barriers in Linux's list-manipulation
+ primitives.
+}
+}
+
+@unpublished{LinusTorvalds2001a
+,Author="Linus Torvalds"
+,Title="{Re:} {[Lse-tech]} {Re:} {RFC:} patch to allow lock-free traversal of lists with insertion"
+,month="October"
+,year="2001"
+,note="Available:
+\url{http://lkml.org/lkml/2001/10/13/105}
+[Viewed August 21, 2004]"
+,annotation={
+}
+}
+
+@unpublished{Blanchard02a
+,Author="Anton Blanchard"
+,Title="some RCU dcache and ratcache results"
+,month="March"
+,year="2002"
+,note="Available:
+\url{http://marc.theaimsgroup.com/?l=linux-kernel&m=101637107412972&w=2}
+[Viewed October 18, 2004]"
+}
+
+@conference{Michael02b
+,author="Maged M. Michael"
+,title="High Performance Dynamic Lock-Free Hash Tables and List-Based Sets"
+,Year="2002"
+,Month="August"
+,booktitle="{Proceedings of the 14\textsuperscript{th} Annual ACM
+Symposium on Parallel
+Algorithms and Architecture}"
+,pages="73-82"
+,annotation={
+Like the title says...
+}
+}
+
+@Conference{Linder02a
+,Author="Hanna Linder and Dipankar Sarma and Maneesh Soni"
+,Title="Scalability of the Directory Entry Cache"
+,Booktitle="{Ottawa Linux Symposium}"
+,Month="June"
+,Year="2002"
+,pages="289-300"
+,annotation={
+ Measured scalability of Linux 2.4 kernel's directory-entry cache
+ (dcache), and measured some scalability enhancements.
+}
+}
+
+@Conference{McKenney02a
+,Author="Paul E. McKenney and Dipankar Sarma and
+Andrea Arcangeli and Andi Kleen and Orran Krieger and Rusty Russell"
+,Title="Read-Copy Update"
+,Booktitle="{Ottawa Linux Symposium}"
+,Month="June"
+,Year="2002"
+,pages="338-367"
+,note="Available:
+\url{http://www.linux.org.uk/~ajh/ols2002_proceedings.pdf.gz}
+[Viewed June 23, 2004]"
+,annotation={
+ Presented and compared a number of RCU implementations for the
+ Linux kernel.
+}
+}
+
+@unpublished{Sarma02a
+,Author="Dipankar Sarma"
+,Title="specweb99: dcache scalability results"
+,month="July"
+,year="2002"
+,note="Available:
+\url{http://marc.theaimsgroup.com/?l=linux-kernel&m=102645767914212&w=2}
+[Viewed June 23, 2004]"
+,annotation={
+ Compare fastwalk and RCU for dcache. RCU won.
+}
+}
+
+@unpublished{Barbieri02
+,Author="Luca Barbieri"
+,Title="Re: {[PATCH]} Initial support for struct {vfs\_cred}"
+,month="August"
+,year="2002"
+,note="Available:
+\url{http://marc.theaimsgroup.com/?l=linux-kernel&m=103082050621241&w=2}
+[Viewed: June 23, 2004]"
+,annotation={
+ Suggested RCU for vfs\_shared\_cred.
+}
+}
+
+@conference{Michael02a
+,author="Maged M. Michael"
+,title="Safe Memory Reclamation for Dynamic Lock-Free Objects Using Atomic
+Reads and Writes"
+,Year="2002"
+,Month="August"
+,booktitle="{Proceedings of the 21\textsuperscript{st} Annual ACM
+Symposium on Principles of Distributed Computing}"
+,pages="21-30"
+,annotation={
+ Each thread keeps an array of pointers to items that it is
+ currently referencing. Sort of an inside-out garbage collection
+ mechanism, but one that requires the accessing code to explicitly
+ state its needs. Also requires read-side memory barriers on
+ most architectures.
+}
+}
+
+@unpublished{Dickins02a
+,author="Hugh Dickins"
+,title="Use RCU for System-V IPC"
+,year="2002"
+,month="October"
+,note="private communication"
+}
+
+@InProceedings{HerlihyLM02
+,author={Maurice Herlihy and Victor Luchangco and Mark Moir}
+,title="The Repeat Offender Problem: A Mechanism for Supporting Dynamic-Sized,
+Lock-Free Data Structures"
+,booktitle={Proceedings of 16\textsuperscript{th} International
+Symposium on Distributed Computing}
+,year=2002
+,month="October"
+,pages="339-353"
+}
+
+@unpublished{Sarma02b
+,Author="Dipankar Sarma"
+,Title="Some dcache\_rcu benchmark numbers"
+,month="October"
+,year="2002"
+,note="Available:
+\url{http://marc.theaimsgroup.com/?l=linux-kernel&m=103462075416638&w=2}
+[Viewed June 23, 2004]"
+,annotation={
+ Performance of dcache RCU on kernbench for 16x NUMA-Q and 1x,
+ 2x, and 4x systems. RCU does no harm, and helps on 16x.
+}
+}
+
+@unpublished{MingmingCao2002IPCRCU
+,Author="Mingming Cao"
+,Title="[PATCH]updated ipc lock patch"
+,month="October"
+,year="2002"
+,note="Available:
+\url{https://lkml.org/lkml/2002/10/24/262}
+[Viewed February 15, 2014]"
+,annotation={
+ Mingming Cao's patch to introduce RCU to SysV IPC.
+}
+}
+
+@unpublished{LinusTorvalds2003a
+,Author="Linus Torvalds"
+,Title="Re: {[PATCH]} small fixes in brlock.h"
+,month="March"
+,year="2003"
+,note="Available:
+\url{http://lkml.org/lkml/2003/3/9/205}
+[Viewed March 13, 2006]"
+,annotation={
+ Linus suggests replacing brlock with RCU and/or seqlocks:
+ .
+ 'It's entirely possible that the current user could be replaced
+ by RCU and/or seqlocks, and we could get rid of brlocks entirely.'
+ .
+ Steve Hemminger responds by replacing them with RCU.
+}
+}
+
+@article{Appavoo03a
+,author="J. Appavoo and K. Hui and C. A. N. Soules and R. W. Wisniewski and
+D. M. {Da Silva} and O. Krieger and M. A. Auslander and D. J. Edelsohn and
+B. Gamsa and G. R. Ganger and P. McKenney and M. Ostrowski and
+B. Rosenburg and M. Stumm and J. Xenidis"
+,title="Enabling Autonomic Behavior in Systems Software With Hot Swapping"
+,Year="2003"
+,Month="January"
+,journal="IBM Systems Journal"
+,volume="42"
+,number="1"
+,pages="60-76"
+,annotation={
+ Use of RCU to enable hot-swapping for autonomic behavior in K42.
+}
+}
+
+@unpublished{Seigh03
+,author="Joseph W. {Seigh II}"
+,title="Read Copy Update"
+,Year="2003"
+,Month="March"
+,note="email correspondence"
+,annotation={
+ Described the relationship of the VM/XA passive serialization to RCU.
+}
+}
+
+@Conference{Arcangeli03
+,Author="Andrea Arcangeli and Mingming Cao and Paul E. McKenney and
+Dipankar Sarma"
+,Title="Using Read-Copy Update Techniques for {System V IPC} in the
+{Linux} 2.5 Kernel"
+,Booktitle="Proceedings of the 2003 USENIX Annual Technical Conference
+(FREENIX Track)"
+,Publisher="USENIX Association"
+,year="2003"
+,month="June"
+,pages="297-310"
+,annotation={
+ Compared updated RCU implementations for the Linux kernel, and
+ described System V IPC use of RCU, including order-of-magnitude
+ performance improvements.
+ http://www.rdrop.com/users/paulmck/RCU/rcu.FREENIX.2003.06.14.pdf
+}
+}
+
+@Conference{Soules03a
+,Author="Craig A. N. Soules and Jonathan Appavoo and Kevin Hui and
+Dilma {Da Silva} and Gregory R. Ganger and Orran Krieger and
+Michael Stumm and Robert W. Wisniewski and Marc Auslander and
+Michal Ostrowski and Bryan Rosenburg and Jimi Xenidis"
+,Title="System Support for Online Reconfiguration"
+,Booktitle="Proceedings of the 2003 USENIX Annual Technical Conference"
+,Publisher="USENIX Association"
+,year="2003"
+,month="June"
+,pages="141-154"
+}
+
+@article{McKenney03a
+,author="Paul E. McKenney"
+,title="Using {RCU} in the {Linux} 2.5 Kernel"
+,Year="2003"
+,Month="October"
+,journal="Linux Journal"
+,volume="1"
+,number="114"
+,pages="18-26"
+,note="Available:
+\url{http://www.linuxjournal.com/article/6993}
+[Viewed November 14, 2007]"
+,annotation={
+ Reader-friendly intro to RCU, with the infamous old-man-and-brat
+ cartoon.
+}
+}
+
+@unpublished{Sarma03a
+,Author="Dipankar Sarma"
+,Title="RCU low latency patches"
+,month="December"
+,year="2003"
+,note="Message ID: 20031222180114.GA2248@in.ibm.com"
+,annotation={
+ dipankar/ct.2004.03.27/RCUll.2003.12.22.patch
+}
+}
+
+@techreport{Friedberg03a
+,author="Stuart A. Friedberg"
+,title="Lock-Free Wild Card Search Data Structure and Method"
+,institution="US Patent and Trademark Office"
+,address="Washington, DC"
+,year="2003"
+,number="US Patent 6,662,184"
+,month="December"
+,pages="112"
+,annotation={
+ Applies RCU to a wildcard-search Patricia tree in order to permit
+ synchronization-free lookup. RCU is used to retain removed nodes
+ for a grace period before freeing them.
+}
+}
+
+@article{McKenney04a
+,author="Paul E. McKenney and Dipankar Sarma and Maneesh Soni"
+,title="Scaling dcache with {RCU}"
+,Year="2004"
+,Month="January"
+,journal="Linux Journal"
+,volume="1"
+,number="118"
+,pages="38-46"
+,annotation={
+ Reader friendly intro to dcache and RCU.
+ http://www.linuxjournal.com/node/7124
+ [Viewed December 26, 2010]
+}
+}
+
+@Conference{McKenney04b
+,Author="Paul E. McKenney"
+,Title="{RCU} vs. Locking Performance on Different {CPUs}"
+,Booktitle="{linux.conf.au}"
+,Month="January"
+,Year="2004"
+,Address="Adelaide, Australia"
+,note="Available:
+\url{http://www.linux.org.au/conf/2004/abstracts.html#90}
+\url{http://www.rdrop.com/users/paulmck/RCU/lockperf.2004.01.17a.pdf}
+[Viewed June 23, 2004]"
+,annotation={
+ Compares performance of RCU to that of other locking primitives
+ over a number of CPUs (x86, Opteron, Itanium, and PPC).
+}
+}
+
+@unpublished{Sarma04a
+,Author="Dipankar Sarma"
+,Title="{[PATCH]} {RCU} for low latency (experimental)"
+,month="March"
+,year="2004"
+,note="\url{http://marc.theaimsgroup.com/?l=linux-kernel&m=108003746402892&w=2}"
+,annotation={
+ Head of thread: dipankar/2004.03.23/rcu-low-lat.1.patch
+}
+}
+
+@unpublished{Sarma04b
+,Author="Dipankar Sarma"
+,Title="Re: {[PATCH]} {RCU} for low latency (experimental)"
+,month="March"
+,year="2004"
+,note="\url{http://marc.theaimsgroup.com/?l=linux-kernel&m=108016474829546&w=2}"
+,annotation={
+ dipankar/rcuth.2004.03.24/rcu-throttle.patch
+}
+}
+
+@unpublished{Spraul04a
+,Author="Manfred Spraul"
+,Title="[RFC] 0/5 rcu lock update"
+,month="May"
+,year="2004"
+,note="Available:
+\url{http://marc.theaimsgroup.com/?l=linux-kernel&m=108546407726602&w=2}
+[Viewed June 23, 2004]"
+,annotation={
+ Hierarchical-bitmap patch for RCU infrastructure.
+}
+}
+
+@unpublished{Steiner04a
+,Author="Jack Steiner"
+,Title="Re: [Lse-tech] [RFC, PATCH] 1/5 rcu lock update:
+Add per-cpu batch counter"
+,month="May"
+,year="2004"
+,note="Available:
+\url{http://marc.theaimsgroup.com/?l=linux-kernel&m=108551764515332&w=2}
+[Viewed June 23, 2004]"
+,annotation={
+ RCU runs reasonably on a 512-CPU SGI using Manfred Spraul's patches,
+ which may be found at:
+ https://lkml.org/lkml/2004/5/20/49 (split vars into cachelines)
+ https://lkml.org/lkml/2004/5/22/114 (cpu_quiet() patch)
+ https://lkml.org/lkml/2004/5/25/24 (0/5)
+ https://lkml.org/lkml/2004/5/25/23 (1/5)
+ https://lkml.org/lkml/2004/5/25/265 (works for Jack)
+ https://lkml.org/lkml/2004/5/25/20 (2/5)
+ https://lkml.org/lkml/2004/5/25/22 (3/5)
+ https://lkml.org/lkml/2004/5/25/19 (4/5)
+ https://lkml.org/lkml/2004/5/25/21 (5/5)
+}
+}
+
+@Conference{Sarma04c
+,Author="Dipankar Sarma and Paul E. McKenney"
+,Title="Making {RCU} Safe for Deep Sub-Millisecond Response
+Realtime Applications"
+,Booktitle="Proceedings of the 2004 USENIX Annual Technical Conference
+(FREENIX Track)"
+,Publisher="USENIX Association"
+,year="2004"
+,month="June"
+,pages="182-191"
+,annotation={
+ Describes and compares a number of modifications to the Linux RCU
+ implementation that make it friendly to realtime applications.
+ https://www.usenix.org/conference/2004-usenix-annual-technical-conference/making-rcu-safe-deep-sub-millisecond-response
+ [Viewed July 26, 2012]
+}
+}
+
+@article{MagedMichael04a
+,author="Maged M. Michael"
+,title="Hazard Pointers: Safe Memory Reclamation for Lock-Free Objects"
+,Year="2004"
+,Month="June"
+,journal="IEEE Transactions on Parallel and Distributed Systems"
+,volume="15"
+,number="6"
+,pages="491-504"
+,url="Available:
+\url{http://www.research.ibm.com/people/m/michael/ieeetpds-2004.pdf}
+[Viewed March 1, 2005]"
+,annotation={
+ New canonical hazard-pointer citation.
+}
+}
+
+@phdthesis{PaulEdwardMcKenneyPhD
+,author="Paul E. McKenney"
+,title="Exploiting Deferred Destruction:
+An Analysis of Read-Copy-Update Techniques
+in Operating System Kernels"
+,school="OGI School of Science and Engineering at
+Oregon Health and Sciences University"
+,year="2004"
+,annotation={
+ Describes RCU implementations and presents design patterns
+ corresponding to common uses of RCU in several operating-system
+ kernels.
+ http://www.rdrop.com/users/paulmck/RCU/RCUdissertation.2004.07.14e1.pdf
+ [Viewed October 15, 2004]
+}
+}
+
+@unpublished{PaulEMcKenney2004rcu:dereference
+,Author="Dipankar Sarma"
+,Title="{Re: RCU : Abstracted RCU dereferencing [5/5]}"
+,month="August"
+,year="2004"
+,note="Available:
+\url{http://lkml.org/lkml/2004/8/6/237}
+[Viewed June 8, 2010]"
+,annotation={
+ Introduce rcu_dereference().
+}
+}
+
+@unpublished{JimHouston04a
+,Author="Jim Houston"
+,Title="{[RFC\&PATCH] Alternative {RCU} implementation}"
+,month="August"
+,year="2004"
+,note="Available:
+\url{http://lkml.org/lkml/2004/8/30/87}
+[Viewed February 17, 2005]"
+,annotation={
+ Uses active code in rcu_read_lock() and rcu_read_unlock() to
+ make RCU happen, allowing RCU to function on CPUs that do not
+ receive a scheduling-clock interrupt.
+}
+}
+
+@unpublished{TomHart04a
+,Author="Thomas E. Hart"
+,Title="Master's Thesis: Applying Lock-free Techniques to the {Linux} Kernel"
+,month="October"
+,year="2004"
+,note="Available:
+\url{http://www.cs.toronto.edu/~tomhart/masters_thesis.html}
+[Viewed October 15, 2004]"
+,annotation={
+ Proposes comparing RCU to lock-free methods for the Linux kernel.
+}
+}
+
+@unpublished{Vaddagiri04a
+,Author="Srivatsa Vaddagiri"
+,Title="Subject: [RFC] Use RCU for tcp\_ehash lookup"
+,month="October"
+,year="2004"
+,note="Available:
+\url{http://marc.theaimsgroup.com/?t=109395731700004&r=1&w=2}
+[Viewed October 18, 2004]"
+,annotation={
+ Srivatsa's RCU patch for tcp_ehash lookup.
+}
+}
+
+@unpublished{Thirumalai04a
+,Author="Ravikiran Thirumalai"
+,Title="Subject: [patchset] Lockfree fd lookup 0 of 5"
+,month="October"
+,year="2004"
+,note="Available:
+\url{http://marc.theaimsgroup.com/?t=109144217400003&r=1&w=2}
+[Viewed October 18, 2004]"
+,annotation={
+ Ravikiran's lockfree FD patch.
+}
+}
+
+@unpublished{Thirumalai04b
+,Author="Ravikiran Thirumalai"
+,Title="Subject: Re: [patchset] Lockfree fd lookup 0 of 5"
+,month="October"
+,year="2004"
+,note="Available:
+\url{http://marc.theaimsgroup.com/?l=linux-kernel&m=109152521410459&w=2}
+[Viewed October 18, 2004]"
+,annotation={
+ Ravikiran's lockfree FD patch.
+}
+}
+
+@unpublished{PaulEMcKenney2004rcu:assign:pointer
+,Author="Paul E. McKenney"
+,Title="{[PATCH 1/3] RCU: \url{rcu_assign_pointer()} removal of memory barriers}"
+,month="October"
+,year="2004"
+,note="Available:
+\url{http://lkml.org/lkml/2004/10/23/241}
+[Viewed June 8, 2010]"
+,annotation={
+ Introduce rcu_assign_pointer().
+}
+}
+
+@unpublished{JamesMorris04a
+,Author="James Morris"
+,Title="{[PATCH 2/3] SELinux} scalability - convert {AVC} to {RCU}"
+,day="15"
+,month="November"
+,year="2004"
+,note="\url{http://marc.theaimsgroup.com/?l=linux-kernel&m=110054979416004&w=2}"
+,annotation={
+ James Morris posts Kaigai Kohei's patch to LKML.
+ [Viewed December 10, 2004]
+ Kaigai's patch is at https://lkml.org/lkml/2004/9/27/52
+}
+}
+
+@unpublished{JamesMorris04b
+,Author="James Morris"
+,Title="Recent Developments in {SELinux} Kernel Performance"
+,month="December"
+,year="2004"
+,note="Available:
+\url{http://www.livejournal.com/users/james_morris/2153.html}
+[Viewed December 10, 2004]"
+,annotation={
+ RCU helps SELinux performance. ;-) Made LWN.
+}
+}
+
+@unpublished{PaulMcKenney2005RCUSemantics
+,Author="Paul E. McKenney and Jonathan Walpole"
+,Title="{RCU} Semantics: A First Attempt"
+,month="January"
+,year="2005"
+,day="30"
+,note="Available:
+\url{http://www.rdrop.com/users/paulmck/RCU/rcu-semantics.2005.01.30a.pdf}
+[Viewed December 6, 2009]"
+,annotation={
+ Early derivation of RCU semantics.
+}
+}
+
+@unpublished{PaulMcKenney2005e
+,Author="Paul E. McKenney"
+,Title="Real-Time Preemption and {RCU}"
+,month="March"
+,year="2005"
+,day="17"
+,note="Available:
+\url{http://lkml.org/lkml/2005/3/17/199}
+[Viewed September 5, 2005]"
+,annotation={
+ First posting showing how RCU can be safely adapted for
+ preemptable RCU read side critical sections.
+}
+}
+
+@unpublished{EsbenNeilsen2005a
+,Author="Esben Neilsen"
+,Title="Re: Real-Time Preemption and {RCU}"
+,month="March"
+,year="2005"
+,day="18"
+,note="Available:
+\url{http://lkml.org/lkml/2005/3/18/122}
+[Viewed March 30, 2006]"
+,annotation={
+ Esben Neilsen suggests read-side suppression of grace-period
+ processing for crude-but-workable realtime RCU. The downside
+ is indefinite grace periods... But this is OK for experimentation
+ and testing.
+}
+}
+
+@unpublished{TomHart05a
+,Author="Thomas E. Hart and Paul E. McKenney and Angela Demke Brown"
+,Title="Efficient Memory Reclamation is Necessary for Fast Lock-Free
+Data Structures"
+,month="March"
+,year="2005"
+,note="Available:
+\url{ftp://ftp.cs.toronto.edu/csrg-technical-reports/515/}
+[Viewed March 4, 2005]"
+,annotation={
+ Comparison of RCU, QBSR, and EBSR. RCU wins for read-mostly
+ workloads. ;-)
+}
+}
+
+@unpublished{JonCorbet2005DeprecateSyncKernel
+,Author="Jonathan Corbet"
+,Title="API change: synchronize_kernel() deprecated"
+,month="May"
+,day="3"
+,year="2005"
+,note="Available:
+\url{http://lwn.net/Articles/134484/}
+[Viewed May 3, 2005]"
+,annotation={
+ Jon Corbet describes deprecation of synchronize_kernel()
+ in favor of synchronize_rcu() and synchronize_sched().
+}
+}
+
+@unpublished{PaulMcKenney05a
+,Author="Paul E. McKenney"
+,Title="{[RFC]} {RCU} and {CONFIG\_PREEMPT\_RT} progress"
+,month="May"
+,year="2005"
+,note="Available:
+\url{http://lkml.org/lkml/2005/5/9/185}
+[Viewed May 13, 2005]"
+,annotation={
+ First publication of working lock-based deferred free patches
+ for the CONFIG_PREEMPT_RT environment.
+}
+}
+
+@conference{PaulMcKenney05b
+,Author="Paul E. McKenney and Dipankar Sarma"
+,Title="Towards Hard Realtime Response from the {Linux} Kernel on {SMP} Hardware"
+,Booktitle="linux.conf.au 2005"
+,month="April"
+,year="2005"
+,address="Canberra, Australia"
+,note="Available:
+\url{http://www.rdrop.com/users/paulmck/RCU/realtimeRCU.2005.04.23a.pdf}
+[Viewed May 13, 2005]"
+,annotation={
+ Realtime turns into making RCU yet more realtime friendly.
+ http://lca2005.linux.org.au/Papers/Paul%20McKenney/Towards%20Hard%20Realtime%20Response%20from%20the%20Linux%20Kernel/LKS.2005.04.22a.pdf
+}
+}
+
+@unpublished{PaulEMcKenneyHomePage
+,Author="Paul E. McKenney"
+,Title="{Paul} {E.} {McKenney}"
+,month="May"
+,year="2005"
+,note="Available:
+\url{http://www.rdrop.com/users/paulmck/}
+[Viewed May 25, 2005]"
+,annotation={
+ Paul McKenney's home page.
+}
+}
+
+@unpublished{PaulEMcKenneyRCUPage
+,Author="Paul E. McKenney"
+,Title="Read-Copy Update {(RCU)}"
+,month="May"
+,year="2005"
+,note="Available:
+\url{http://www.rdrop.com/users/paulmck/RCU}
+[Viewed May 25, 2005]"
+,annotation={
+ Paul McKenney's RCU page.
+}
+}
+
+@unpublished{JosephSeigh2005a
+,Author="Joseph Seigh"
+,Title="{RCU}+{SMR} (hazard pointers)"
+,month="July"
+,year="2005"
+,note="Personal communication"
+,annotation={
+ Joe Seigh announcing his atomic-ptr-plus project.
+ http://sourceforge.net/projects/atomic-ptr-plus/
+}
+}
+
+@unpublished{JosephSeigh2005b
+,Author="Joseph Seigh"
+,Title="Lock-free synchronization primitives"
+,month="July"
+,day="6"
+,year="2005"
+,note="Available:
+\url{http://sourceforge.net/projects/atomic-ptr-plus/}
+[Viewed August 8, 2005]"
+,annotation={
+ Joe Seigh's atomic-ptr-plus project.
+}
+}
+
+@unpublished{PaulMcKenney2005c
+,Author="Paul E.McKenney"
+,Title="{[RFC,PATCH] RCU} and {CONFIG\_PREEMPT\_RT} sane patch"
+,month="August"
+,day="1"
+,year="2005"
+,note="Available:
+\url{http://lkml.org/lkml/2005/8/1/155}
+[Viewed March 14, 2006]"
+,annotation={
+ First operating counter-based realtime RCU patch posted to LKML.
+}
+}
+
+@unpublished{PaulMcKenney2005d
+,Author="Paul E. McKenney"
+,Title="Re: [Fwd: Re: [patch] Real-Time Preemption, -RT-2.6.13-rc4-V0.7.52-01]"
+,month="August"
+,day="8"
+,year="2005"
+,note="Available:
+\url{http://lkml.org/lkml/2005/8/8/108}
+[Viewed March 14, 2006]"
+,annotation={
+ First operating counter-based realtime RCU patch posted to LKML,
+ but fixed so that various unusual combinations of configuration
+ parameters all function properly.
+}
+}
+
+@unpublished{PaulMcKenney2005rcutorture
+,Author="Paul E. McKenney"
+,Title="{[PATCH]} {RCU} torture testing"
+,month="October"
+,day="1"
+,year="2005"
+,note="Available:
+\url{http://lkml.org/lkml/2005/10/1/70}
+[Viewed March 14, 2006]"
+,annotation={
+ First rcutorture patch.
+}
+}
+
+@unpublished{DavidSMiller2006HashedLocking
+,Author="David S. Miller"
+,Title="Re: [{PATCH}, {RFC}] {RCU} : {OOM} avoidance and lower latency"
+,month="January"
+,day="6"
+,year="2006"
+,note="Available:
+\url{https://lkml.org/lkml/2006/1/7/22}
+[Viewed February 29, 2012]"
+,annotation={
+ David Miller's view on hashed arrays of locks: used to really
+	like it, but every time he saw an opportunity for this technique,
+ something else always proved superior. Partitioning or RCU. ;-)
+}
+}
+
+@conference{ThomasEHart2006a
+,Author="Thomas E. Hart and Paul E. McKenney and Angela Demke Brown"
+,Title="Making Lockless Synchronization Fast: Performance Implications
+of Memory Reclamation"
+,Booktitle="20\textsuperscript{th} {IEEE} International Parallel and
+Distributed Processing Symposium"
+,month="April"
+,year="2006"
+,day="25-29"
+,address="Rhodes, Greece"
+,note="Available:
+\url{http://www.rdrop.com/users/paulmck/RCU/hart_ipdps06.pdf}
+[Viewed April 28, 2008]"
+,annotation={
+ Compares QSBR, HPBR, EBR, and lock-free reference counting.
+ http://www.cs.toronto.edu/~tomhart/perflab/ipdps06.tgz
+}
+}
+
+@unpublished{NickPiggin2006radixtree
+,Author="Nick Piggin"
+,Title="[patch 3/3] radix-tree: {RCU} lockless readside"
+,month="June"
+,day="20"
+,year="2006"
+,note="Available:
+\url{http://lkml.org/lkml/2006/6/20/238}
+[Viewed March 25, 2008]"
+,annotation={
+ RCU-protected radix tree.
+}
+}
+
+@Conference{PaulEMcKenney2006b
+,Author="Paul E. McKenney and Dipankar Sarma and Ingo Molnar and
+Suparna Bhattacharya"
+,Title="Extending {RCU} for Realtime and Embedded Workloads"
+,Booktitle="{Ottawa Linux Symposium}"
+,Month="July"
+,Year="2006"
+,pages="v2 123-138"
+,note="Available:
+\url{http://www.linuxsymposium.org/2006/view_abstract.php?content_key=184}
+\url{http://www.rdrop.com/users/paulmck/RCU/OLSrtRCU.2006.08.11a.pdf}
+[Viewed January 1, 2007]"
+,annotation={
+ Described how to improve the -rt implementation of realtime RCU.
+}
+}
+
+@unpublished{WikipediaRCU
+,Author="Paul E. McKenney and Chris Purcell and Algae and Ben Schumin and
+Gaius Cornelius and Qwertyus and Neil Conway and Sbw and Blainster and
+Canis Rufus and Zoicon5 and Anome and Hal Eisen"
+,Title="Read-Copy Update"
+,month="July"
+,day="8"
+,year="2006"
+,note="\url{http://en.wikipedia.org/wiki/Read-copy-update}"
+,annotation={
+ Wikipedia RCU page as of July 8 2006.
+ [Viewed August 21, 2006]
+}
+}
+
+@Conference{NickPiggin2006LocklessPageCache
+,Author="Nick Piggin"
+,Title="A Lockless Pagecache in Linux---Introduction, Progress, Performance"
+,Booktitle="{Ottawa Linux Symposium}"
+,Month="July"
+,Year="2006"
+,pages="v2 249-254"
+,note="Available:
+\url{http://www.linuxsymposium.org/2006/view_abstract.php?content_key=184}
+[Viewed January 11, 2009]"
+,annotation={
+ Uses RCU-protected radix tree for a lockless page cache.
+}
+}
+
+@unpublished{PaulEMcKenney2006c
+,Author="Paul E. McKenney"
+,Title="Sleepable {RCU}"
+,month="October"
+,day="9"
+,year="2006"
+,note="Available:
+\url{http://lwn.net/Articles/202847/}
+Revised:
+\url{http://www.rdrop.com/users/paulmck/RCU/srcu.2007.01.14a.pdf}
+[Viewed August 21, 2006]"
+,annotation={
+ LWN article introducing SRCU.
+}
+}
+
+@unpublished{RobertOlsson2006a
+,Author="Robert Olsson and Stefan Nilsson"
+,Title="{TRASH}: A dynamic {LC}-trie and hash data structure"
+,month="August"
+,day="18"
+,year="2006"
+,note="\url{http://www.nada.kth.se/~snilsson/publications/TRASH/trash.pdf}"
+,annotation={
+ RCU-protected dynamic trie-hash combination.
+ [Viewed March 4, 2011]
+}
+}
+
+@unpublished{ChristophHellwig2006RCU2SRCU
+,Author="Christoph Hellwig"
+,Title="Re: {[-mm PATCH 1/4]} {RCU}: split classic rcu"
+,month="September"
+,day="28"
+,year="2006"
+,note="Available:
+\url{http://lkml.org/lkml/2006/9/28/160}
+[Viewed March 27, 2008]"
+}
+
+@unpublished{PaulEMcKenneyRCUusagePage
+,Author="Paul E. McKenney"
+,Title="{RCU} {Linux} Usage"
+,month="October"
+,year="2006"
+,note="Available:
+\url{http://www.rdrop.com/users/paulmck/RCU/linuxusage.html}
+[Viewed January 14, 2007]"
+,annotation={
+ Paul McKenney's RCU page showing graphs plotting Linux-kernel
+ usage of RCU.
+}
+}
+
+@unpublished{PaulEMcKenneyRCUusageRawDataPage
+,Author="Paul E. McKenney"
+,Title="Read-Copy Update {(RCU)} Usage in {Linux} Kernel"
+,month="October"
+,year="2006"
+,note="Available:
+\url{http://www.rdrop.com/users/paulmck/RCU/linuxusage/rculocktab.html}
+[Viewed January 14, 2007]"
+,annotation={
+ Paul McKenney's RCU page showing Linux usage of RCU in tabular
+ form, with links to corresponding cscope databases.
+}
+}
+
+@unpublished{GauthamShenoy2006RCUrwlock
+,Author="Gautham R. Shenoy"
+,Title="[PATCH 4/5] lock\_cpu\_hotplug: Redesign - Lightweight implementation of lock\_cpu\_hotplug"
+,month="October"
+,year="2006"
+,day=26
+,note="Available:
+\url{http://lkml.org/lkml/2006/10/26/73}
+[Viewed January 26, 2009]"
+,annotation={
+ RCU-based reader-writer lock that allows readers to proceed with
+	no memory barriers or atomic instructions in the absence of
+	writers.  If writers do show up, readers must of course wait as
+	required by
+ the semantics of reader-writer locking. This is a recursive
+ lock.
+}
+}
+
+@unpublished{JensAxboe2006SlowSRCU
+,Author="Jens Axboe"
+,Title="Re: [patch] cpufreq: mark \url{cpufreq_tsc()} as
+\url{core_initcall_sync}"
+,month="November"
+,year="2006"
+,day=17
+,note="Available:
+\url{http://lkml.org/lkml/2006/11/17/56}
+[Viewed May 28, 2007]"
+,annotation={
+ SRCU's grace periods are too slow for Jens, even after a
+ factor-of-three speedup.
+ Sped-up version of SRCU at http://lkml.org/lkml/2006/11/17/359.
+}
+}
+
+@unpublished{OlegNesterov2006QRCU
+,Author="Oleg Nesterov"
+,Title="Re: [patch] cpufreq: mark {\tt cpufreq\_tsc()} as
+{\tt core\_initcall\_sync}"
+,month="November"
+,year="2006"
+,day=19
+,note="Available:
+\url{http://lkml.org/lkml/2006/11/19/69}
+[Viewed May 28, 2007]"
+,annotation={
+ First cut of QRCU. Expanded/corrected versions followed.
+ Used to be OlegNesterov2007QRCU, now time-corrected.
+}
+}
+
+@unpublished{OlegNesterov2006aQRCU
+,Author="Oleg Nesterov"
+,Title="Re: [RFC, PATCH 1/2] qrcu: {"quick"} srcu implementation"
+,month="November"
+,year="2006"
+,day=30
+,note="Available:
+\url{http://lkml.org/lkml/2006/11/29/330}
+[Viewed November 26, 2008]"
+,annotation={
+ Expanded/corrected version of QRCU.
+ Used to be OlegNesterov2007aQRCU, now time-corrected.
+}
+}
+
+@unpublished{EvgeniyPolyakov2006RCUslowdown
+,Author="Evgeniy Polyakov"
+,Title="Badness in postponing work"
+,month="December"
+,year="2006"
+,day=05
+,note="Available:
+\url{http://www.ioremap.net/node/41}
+[Viewed October 28, 2008]"
+,annotation={
+ Using RCU as a pure delay leads to a 2.5x slowdown in skbs in
+ the Linux kernel.
+}
+}
+
+@inproceedings{ChrisMatthews2006ClusteredObjectsRCU
+,author = {Matthews, Chris and Coady, Yvonne and Appavoo, Jonathan}
+,title = {Portability events: a programming model for scalable system infrastructures}
+,booktitle = {PLOS '06: Proceedings of the 3rd workshop on Programming languages and operating systems}
+,year = {2006}
+,isbn = {1-59593-577-0}
+,pages = {11}
+,location = {San Jose, California}
+,doi = {http://doi.acm.org/10.1145/1215995.1216006}
+,publisher = {ACM}
+,address = {New York, NY, USA}
+,annotation={
+ Uses K42's RCU-like functionality to manage clustered-object
+ lifetimes.
+}
+}
+
+@article{DilmaDaSilva2006K42
+,author = {Silva, Dilma Da and Krieger, Orran and Wisniewski, Robert W. and Waterland, Amos and Tam, David and Baumann, Andrew}
+,title = {K42: an infrastructure for operating system research}
+,journal = {SIGOPS Oper. Syst. Rev.}
+,volume = {40}
+,number = {2}
+,year = {2006}
+,issn = {0163-5980}
+,pages = {34--42}
+,doi = {http://doi.acm.org/10.1145/1131322.1131333}
+,publisher = {ACM}
+,address = {New York, NY, USA}
+,annotation={
+ Describes relationship of K42 generations to RCU.
+}
+}
+
+# CoreyMinyard2007list_splice_rcu
+@unpublished{CoreyMinyard2007list:splice:rcu
+,Author="Corey Minyard and Paul E. McKenney"
+,Title="{[PATCH]} add an {RCU} version of list splicing"
+,month="January"
+,year="2007"
+,day=3
+,note="Available:
+\url{http://lkml.org/lkml/2007/1/3/112}
+[Viewed May 28, 2007]"
+,annotation={
+ Patch for list_splice_rcu().
+}
+}
+
+@unpublished{PaulEMcKenney2007rcubarrier
+,Author="Paul E. McKenney"
+,Title="{RCU} and Unloadable Modules"
+,month="January"
+,day="14"
+,year="2007"
+,note="Available:
+\url{http://lwn.net/Articles/217484/}
+[Viewed November 22, 2007]"
+,annotation={
+ LWN article introducing the rcu_barrier() primitive.
+}
+}
+
+@unpublished{PeterZijlstra2007SyncBarrier
+,Author="Peter Zijlstra and Ingo Molnar"
+,Title="{[PATCH 3/7]} barrier: a scalable synchonisation barrier"
+,month="January"
+,year="2007"
+,day=28
+,note="Available:
+\url{http://lkml.org/lkml/2007/1/28/34}
+[Viewed March 27, 2008]"
+,annotation={
+ RCU-like implementation for frequent updaters and rare readers(!).
+ Subsumed into QRCU. Maybe...
+}
+}
+
+@unpublished{PaulEMcKenney2007BoostRCU
+,Author="Paul E. McKenney"
+,Title="Priority-Boosting {RCU} Read-Side Critical Sections"
+,month="February"
+,day="5"
+,year="2007"
+,note="\url{http://lwn.net/Articles/220677/}"
+,annotation={
+ LWN article introducing RCU priority boosting.
+ Revised:
+ http://www.rdrop.com/users/paulmck/RCU/RCUbooststate.2007.04.16a.pdf
+ [Viewed September 7, 2007]
+}
+}
+
+@unpublished{PaulMcKenney2007QRCUpatch
+,Author="Paul E. McKenney"
+,Title="{[PATCH]} {QRCU} with lockless fastpath"
+,month="February"
+,year="2007"
+,day=24
+,note="Available:
+\url{http://lkml.org/lkml/2007/2/25/18}
+[Viewed March 27, 2008]"
+,annotation={
+ Patch for QRCU supplying lock-free fast path.
+}
+}
+
+@article{JonathanAppavoo2007K42RCU
+,author = {Appavoo, Jonathan and Silva, Dilma Da and Krieger, Orran and Auslander, Marc and Ostrowski, Michal and Rosenburg, Bryan and Waterland, Amos and Wisniewski, Robert W. and Xenidis, Jimi and Stumm, Michael and Soares, Livio}
+,title = {Experience distributing objects in an SMMP OS}
+,journal = {ACM Trans. Comput. Syst.}
+,volume = {25}
+,number = {3}
+,year = {2007}
+,issn = {0734-2071}
+,pages = {6/1--6/52}
+,doi = {http://doi.acm.org/10.1145/1275517.1275518}
+,publisher = {ACM}
+,address = {New York, NY, USA}
+,annotation={
+ Role of RCU in K42.
+}
+}
+
+@conference{RobertOlsson2007Trash
+,Author="Robert Olsson and Stefan Nilsson"
+,Title="{TRASH}: A dynamic {LC}-trie and hash data structure"
+,booktitle="Workshop on High Performance Switching and Routing (HPSR'07)"
+,month="May"
+,year="2007"
+,note="Available:
+\url{http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=4281239}
+[Viewed October 1, 2010]"
+,annotation={
+ RCU-protected dynamic trie-hash combination.
+}
+}
+
+@conference{PeterZijlstra2007ConcurrentPagecacheRCU
+,Author="Peter Zijlstra"
+,Title="Concurrent Pagecache"
+,Booktitle="Linux Symposium"
+,month="June"
+,year="2007"
+,address="Ottawa, Canada"
+,note="Available:
+\url{http://ols.108.redhat.com/2007/Reprints/zijlstra-Reprint.pdf}
+[Viewed April 14, 2008]"
+,annotation={
+ Page-cache modifications permitting RCU readers and concurrent
+ updates.
+}
+}
+
+@unpublished{PaulEMcKenney2007whatisRCU
+,Author="Paul E. McKenney"
+,Title="What is {RCU}?"
+,year="2007"
+,month="07"
+,note="Available:
+\url{http://www.rdrop.com/users/paulmck/RCU/whatisRCU.html}
+[Viewed July 6, 2007]"
+,annotation={
+ Describes RCU in Linux kernel.
+}
+}
+
+@unpublished{PaulEMcKenney2007QRCUspin
+,Author="Paul E. McKenney"
+,Title="Using {Promela} and {Spin} to verify parallel algorithms"
+,month="August"
+,day="1"
+,year="2007"
+,note="Available:
+\url{http://lwn.net/Articles/243851/}
+[Viewed September 8, 2007]"
+,annotation={
+	LWN article describing Promela and Spin, and also using Oleg
+ Nesterov's QRCU as an example (with Paul McKenney's fastpath).
+ Merged patch at: http://lkml.org/lkml/2007/2/25/18
+}
+}
+
+@unpublished{PaulEMcKenney2007WG21DDOatomics
+,Author="Paul E. McKenney and Hans-J. Boehm and Lawrence Crowl"
+,Title="C++ Data-Dependency Ordering: Atomics and Memory Model"
+,month="August"
+,day="3"
+,year="2007"
+,note="Available:
+\url{http://open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2664.htm}
+[Viewed December 7, 2009]"
+,annotation={
+ RCU for C++, parts 1 and 2.
+}
+}
+
+@unpublished{PaulEMcKenney2007WG21DDOannotation
+,Author="Paul E. McKenney and Lawrence Crowl"
+,Title="C++ Data-Dependency Ordering: Function Annotation"
+,month="September"
+,day="18"
+,year="2008"
+,note="Available:
+\url{http://open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2782.htm}
+[Viewed December 7, 2009]"
+,annotation={
+ RCU for C++, part 2, updated many times.
+}
+}
+
+@unpublished{PaulEMcKenney2007PreemptibleRCUPatch
+,Author="Paul E. McKenney"
+,Title="[PATCH RFC 0/9] {RCU}: Preemptible {RCU}"
+,month="September"
+,day="10"
+,year="2007"
+,note="Available:
+\url{http://lkml.org/lkml/2007/9/10/213}
+[Viewed October 25, 2007]"
+,annotation={
+ Final patch for preemptable RCU to -rt. (Later patches were
+ to mainline, eventually incorporated.)
+}
+}
+
+@unpublished{PaulEMcKenney2007PreemptibleRCU
+,Author="Paul E. McKenney"
+,Title="The design of preemptible read-copy-update"
+,month="October"
+,day="8"
+,year="2007"
+,note="Available:
+\url{http://lwn.net/Articles/253651/}
+[Viewed October 25, 2007]"
+,annotation={
+ LWN article describing the design of preemptible RCU.
+}
+}
+
+@article{ThomasEHart2007a
+,Author="Thomas E. Hart and Paul E. McKenney and Angela Demke Brown and Jonathan Walpole"
+,Title="Performance of memory reclamation for lockless synchronization"
+,journal="J. Parallel Distrib. Comput."
+,volume={67}
+,number="12"
+,year="2007"
+,issn="0743-7315"
+,pages="1270--1285"
+,doi="http://dx.doi.org/10.1016/j.jpdc.2007.04.010"
+,publisher="Academic Press, Inc."
+,address="Orlando, FL, USA"
+,annotation={
+ Compares QSBR, HPBR, EBR, and lock-free reference counting.
+ Journal version of ThomasEHart2006a.
+}
+}
+
+# MathieuDesnoyers2007call_rcu_schedNeeded
+@unpublished{MathieuDesnoyers2007call:rcu:schedNeeded
+,Author="Mathieu Desnoyers"
+,Title="Re: [patch 1/2] {Linux} Kernel Markers - Support Multiple Probes"
+,month="December"
+,day="20"
+,year="2007"
+,note="Available:
+\url{http://lkml.org/lkml/2007/12/20/244}
+[Viewed March 27, 2008]"
+,annotation={
+ Request for call_rcu_sched() and rcu_barrier_sched().
+}
+}
+
+
+########################################################################
+#
+# "What is RCU?" LWN series.
+#
+# http://lwn.net/Articles/262464/ (What is RCU, Fundamentally?)
+# http://lwn.net/Articles/263130/ (What is RCU's Usage?)
+# http://lwn.net/Articles/264090/ (What is RCU's API?)
+
+@unpublished{PaulEMcKenney2007WhatIsRCUFundamentally
+,Author="Paul E. McKenney and Jonathan Walpole"
+,Title="What is {RCU}, Fundamentally?"
+,month="December"
+,day="17"
+,year="2007"
+,note="Available:
+\url{http://lwn.net/Articles/262464/}
+[Viewed December 27, 2007]"
+,annotation={
+ Lays out the three basic components of RCU: (1) publish-subscribe,
+	(2) wait for pre-existing readers to complete, and (3) maintain
+ multiple versions.
+}
+}
+
+@unpublished{PaulEMcKenney2008WhatIsRCUUsage
+,Author="Paul E. McKenney"
+,Title="What is {RCU}? Part 2: Usage"
+,month="January"
+,day="4"
+,year="2008"
+,note="Available:
+\url{http://lwn.net/Articles/263130/}
+[Viewed January 4, 2008]"
+,annotation={
+ Lays out six uses of RCU:
+ 1. RCU is a Reader-Writer Lock Replacement
+ 2. RCU is a Restricted Reference-Counting Mechanism
+ 3. RCU is a Bulk Reference-Counting Mechanism
+ 4. RCU is a Poor Man's Garbage Collector
+ 5. RCU is a Way of Providing Existence Guarantees
+ 6. RCU is a Way of Waiting for Things to Finish
+}
+}
+
+@unpublished{PaulEMcKenney2008WhatIsRCUAPI
+,Author="Paul E. McKenney"
+,Title="{RCU} part 3: the {RCU} {API}"
+,month="January"
+,day="17"
+,year="2008"
+,note="Available:
+\url{http://lwn.net/Articles/264090/}
+[Viewed January 10, 2008]"
+,annotation={
+ Gives an overview of the Linux-kernel RCU API and a brief annotated RCU
+ bibliography.
+}
+}
+
+#
+# "What is RCU?" LWN series.
+#
+########################################################################
+
+
+@unpublished{SteveRostedt2008dyntickRCUpatch
+,Author="Steven Rostedt and Paul E. McKenney"
+,Title="{[PATCH]} add support for dynamic ticks and preempt rcu"
+,month="January"
+,day="29"
+,year="2008"
+,note="Available:
+\url{http://lkml.org/lkml/2008/1/29/208}
+[Viewed March 27, 2008]"
+,annotation={
+ Patch that prevents preemptible RCU from unnecessarily waking
+ up dynticks-idle CPUs.
+}
+}
+
+@unpublished{PaulEMcKenney2008LKMLDependencyOrdering
+,Author="Paul E. McKenney"
+,Title="Re: [PATCH 02/22 -v7] Add basic support for gcc profiler instrumentation"
+,month="February"
+,day="1"
+,year="2008"
+,note="Available:
+\url{http://lkml.org/lkml/2008/2/2/255}
+[Viewed October 18, 2008]"
+,annotation={
+ Explanation of compilers violating dependency ordering.
+}
+}
+
+@Conference{PaulEMcKenney2008Beijing
+,Author="Paul E. McKenney"
+,Title="Introducing Technology Into {Linux} Or:
+Introducing your technology Into {Linux} will require introducing a
+lot of {Linux} into your technology!!!"
+,Booktitle="2008 Linux Developer Symposium - China"
+,Publisher="OSS China"
+,Month="February"
+,Year="2008"
+,Address="Beijing, China"
+,note="Available:
+\url{http://www.rdrop.com/users/paulmck/RCU/TechIntroLinux.2008.02.19a.pdf}
+[Viewed August 12, 2008]"
+}
+
+@unpublished{PaulEMcKenney2008dynticksRCU
+,Author="Paul E. McKenney and Steven Rostedt"
+,Title="Integrating and Validating dynticks and Preemptable RCU"
+,month="April"
+,day="24"
+,year="2008"
+,note="Available:
+\url{http://lwn.net/Articles/279077/}
+[Viewed April 24, 2008]"
+,annotation={
+ Describes use of Promela and Spin to validate (and fix!) the
+ dynticks/RCU interface.
+}
+}
+
+@article{DinakarGuniguntala2008IBMSysJ
+,author="D. Guniguntala and P. E. McKenney and J. Triplett and J. Walpole"
+,title="The read-copy-update mechanism for supporting real-time applications on shared-memory multiprocessor systems with {Linux}"
+,Year="2008"
+,Month="May"
+,journal="IBM Systems Journal"
+,volume="47"
+,number="2"
+,pages="221-236"
+,annotation={
+ RCU, realtime RCU, sleepable RCU, performance.
+ http://www.research.ibm.com/journal/sj/472/guniguntala.pdf
+ [Viewed April 24, 2008]
+}
+}
+
+@unpublished{LaiJiangshan2008NewClassicAlgorithm
+,Author="Lai Jiangshan"
+,Title="[{RFC}][{PATCH}] rcu classic: new algorithm for callbacks-processing"
+,month="June"
+,day="3"
+,year="2008"
+,note="Available:
+\url{http://lkml.org/lkml/2008/6/2/539}
+[Viewed December 10, 2008]"
+,annotation={
+ Updated RCU classic algorithm. Introduced multi-tailed list
+ for RCU callbacks and also pulling common code into
+ __call_rcu().
+}
+}
+
+@article{PaulEMcKenney2008RCUOSR
+,author="Paul E. McKenney and Jonathan Walpole"
+,title="Introducing technology into the {Linux} kernel: a case study"
+,Year="2008"
+,journal="SIGOPS Oper. Syst. Rev."
+,volume="42"
+,number="5"
+,pages="4--17"
+,issn="0163-5980"
+,doi={http://doi.acm.org/10.1145/1400097.1400099}
+,publisher="ACM"
+,address="New York, NY, USA"
+,annotation={
+ Linux changed RCU to a far greater degree than RCU has changed Linux.
+ http://portal.acm.org/citation.cfm?doid=1400097.1400099
+}
+}
+
+@unpublished{ManfredSpraul2008StateMachineRCU
+,Author="Manfred Spraul"
+,Title="[{RFC}, {PATCH}] state machine based rcu"
+,month="August"
+,day="21"
+,year="2008"
+,note="Available:
+\url{http://lkml.org/lkml/2008/8/21/336}
+[Viewed December 8, 2008]"
+,annotation={
+ State-based RCU. One key thing that this patch does is to
+ separate the dynticks handling of NMIs and IRQs.
+}
+}
+
+@unpublished{ManfredSpraul2008dyntickIRQNMI
+,Author="Manfred Spraul"
+,Title="Re: [{RFC}, {PATCH}] v4 scalable classic {RCU} implementation"
+,month="September"
+,day="6"
+,year="2008"
+,note="Available:
+\url{http://lkml.org/lkml/2008/9/6/86}
+[Viewed December 8, 2008]"
+,annotation={
+ Manfred notes a fix required to my attempt to separate irq
+ and NMI processing for hierarchical RCU's dynticks interface.
+}
+}
+
+# Was PaulEMcKenney2011cyclicRCU
+@techreport{PaulEMcKenney2008cyclicRCU
+,author="Paul E. McKenney"
+,title="Efficient Support of Consistent Cyclic Search With Read-Copy Update"
+,institution="US Patent and Trademark Office"
+,address="Washington, DC"
+,year="2008"
+,number="US Patent 7,426,511"
+,month="September"
+,pages="23"
+,annotation={
+ Maintains an additional level of indirection to allow
+ readers to confine themselves to the desired snapshot of the
+ data structure. Only permits one update at a time.
+}
+}
+
+@unpublished{PaulEMcKenney2008HierarchicalRCU
+,Author="Paul E. McKenney"
+,Title="Hierarchical {RCU}"
+,month="November"
+,day="3"
+,year="2008"
+,note="\url{http://lwn.net/Articles/305782/}"
+,annotation={
+ RCU with combining-tree-based grace-period detection,
+ permitting it to handle thousands of CPUs.
+ [Viewed November 6, 2008]
+}
+}
+
+@unpublished{PaulEMcKenney2009BloatwatchRCU
+,Author="Paul E. McKenney"
+,Title="Re: [PATCH fyi] RCU: the bloatwatch edition"
+,month="January"
+,day="14"
+,year="2009"
+,note="Available:
+\url{http://lkml.org/lkml/2009/1/14/449}
+[Viewed January 15, 2009]"
+,annotation={
+ Small-footprint implementation of RCU for uniprocessor
+ embedded applications -- and also for exposition purposes.
+}
+}
+
+@conference{PaulEMcKenney2009MaliciousURCU
+,Author="Paul E. McKenney"
+,Title="Using a Malicious User-Level {RCU} to Torture {RCU}-Based Algorithms"
+,Booktitle="linux.conf.au 2009"
+,month="January"
+,year="2009"
+,address="Hobart, Australia"
+,note="Available:
+\url{http://www.rdrop.com/users/paulmck/RCU/urcutorture.2009.01.22a.pdf}
+[Viewed February 2, 2009]"
+,annotation={
+ Realtime RCU and torture-testing RCU uses.
+}
+}
+
+@unpublished{MathieuDesnoyers2009URCU
+,Author="Mathieu Desnoyers"
+,Title="[{RFC} git tree] Userspace {RCU} (urcu) for {Linux}"
+,month="February"
+,day="5"
+,year="2009"
+,note="\url{http://lttng.org/urcu}"
+,annotation={
+ Mathieu Desnoyers's user-space RCU implementation.
+ git://lttng.org/userspace-rcu.git
+ http://lttng.org/cgi-bin/gitweb.cgi?p=userspace-rcu.git
+ http://lttng.org/urcu
+ http://lkml.org/lkml/2009/2/5/572
+}
+}
+
+@unpublished{PaulEMcKenney2009LWNBloatWatchRCU
+,Author="Paul E. McKenney"
+,Title="{RCU}: The {Bloatwatch} Edition"
+,month="March"
+,day="17"
+,year="2009"
+,note="Available:
+\url{http://lwn.net/Articles/323929/}
+[Viewed March 20, 2009]"
+,annotation={
+ Uniprocessor assumptions allow simplified RCU implementation.
+}
+}
+
+@unpublished{EvgeniyPolyakov2009EllipticsNetwork
+,Author="Evgeniy Polyakov"
+,Title="The Elliptics Network"
+,month="April"
+,day="17"
+,year="2009"
+,note="Available:
+\url{http://www.ioremap.net/projects/elliptics}
+[Viewed April 30, 2009]"
+,annotation={
+ Distributed hash table with transactions, using elliptic
+ hash functions to distribute data.
+}
+}
+
+@unpublished{PaulEMcKenney2009expeditedRCU
+,Author="Paul E. McKenney"
+,Title="[{PATCH} -tip 0/3] expedited 'big hammer' {RCU} grace periods"
+,month="June"
+,day="25"
+,year="2009"
+,note="Available:
+\url{http://lkml.org/lkml/2009/6/25/306}
+[Viewed August 16, 2009]"
+,annotation={
+ First posting of expedited RCU to be accepted into -tip.
+}
+}
+
+@unpublished{PaulEMcKenney2009fastRTRCU
+,Author="Paul E. McKenney"
+,Title="[{PATCH} {RFC} -tip 0/4] {RCU} cleanups and simplified preemptable {RCU}"
+,month="July"
+,day="23"
+,year="2009"
+,note="Available:
+\url{http://lkml.org/lkml/2009/7/23/294}
+[Viewed August 15, 2009]"
+,annotation={
+ First posting of simple and fast preemptable RCU.
+}
+}
+
+@unpublished{JoshTriplett2009RPHash
+,Author="Josh Triplett"
+,Title="Scalable concurrent hash tables via relativistic programming"
+,month="September"
+,year="2009"
+,note="Linux Plumbers Conference presentation"
+,annotation={
+ RP fun with hash tables.
+ Superseded by JoshTriplett2010RPHash
+}
+}
+
+@phdthesis{MathieuDesnoyersPhD
+, title = "Low-Impact Operating System Tracing"
+, author = "Mathieu Desnoyers"
+, school = "Ecole Polytechnique de Montr\'{e}al"
+, month = "December"
+, year = 2009
+,note="Available:
+\url{http://www.lttng.org/pub/thesis/desnoyers-dissertation-2009-12.pdf}
+[Viewed December 9, 2009]"
+,annotation={
+ Chapter 6 (page 97) covers user-level RCU.
+}
+}
+
+@unpublished{RelativisticProgrammingWiki
+,Author="Josh Triplett and Paul E. McKenney and Jonathan Walpole"
+,Title="Relativistic Programming"
+,month="September"
+,year="2009"
+,note="Available:
+\url{http://wiki.cs.pdx.edu/rp/}
+[Viewed December 9, 2009]"
+,annotation={
+ Main Relativistic Programming Wiki.
+}
+}
+
+@conference{PaulEMcKenney2009DeterministicRCU
+,Author="Paul E. McKenney"
+,Title="Deterministic Synchronization in Multicore Systems: the Role of {RCU}"
+,Booktitle="Eleventh Real Time Linux Workshop"
+,month="September"
+,year="2009"
+,address="Dresden, Germany"
+,note="Available:
+\url{http://www.rdrop.com/users/paulmck/realtime/paper/DetSyncRCU.2009.08.18a.pdf}
+[Viewed January 14, 2009]"
+}
+
+@unpublished{PaulEMcKenney2009HuntingHeisenbugs
+,Author="Paul E. McKenney"
+,Title="Hunting Heisenbugs"
+,month="November"
+,year="2009"
+,day="1"
+,note="Available:
+\url{http://paulmck.livejournal.com/14639.html}
+[Viewed June 4, 2010]"
+,annotation={
+ Day-one bug in Tree RCU that took forever to track down.
+}
+}
+
+@unpublished{MathieuDesnoyers2009defer:rcu
+,Author="Mathieu Desnoyers"
+,Title="Kernel RCU: shrink the size of the struct rcu\_head"
+,month="December"
+,year="2009"
+,note="Available:
+\url{http://lkml.org/lkml/2009/10/18/129}
+[Viewed December 29, 2009]"
+,annotation={
+ Mathieu proposed defer_rcu() with fixed-size per-thread pool
+ of RCU callbacks.
+}
+}
+
+@unpublished{MathieuDesnoyers2009VerifPrePub
+,Author="Mathieu Desnoyers and Paul E. McKenney and Michel R. Dagenais"
+,Title="Multi-Core Systems Modeling for Formal Verification of Parallel Algorithms"
+,month="December"
+,year="2009"
+,note="Submitted to IEEE TPDS"
+,annotation={
+ OOMem model for Mathieu's user-level RCU mechanical proof of
+ correctness.
+}
+}
+
+@unpublished{MathieuDesnoyers2009URCUPrePub
+,Author="Mathieu Desnoyers and Paul E. McKenney and Alan Stern and Michel R. Dagenais and Jonathan Walpole"
+,Title="User-Level Implementations of Read-Copy Update"
+,month="December"
+,year="2010"
+,url={\url{http://www.computer.org/csdl/trans/td/2012/02/ttd2012020375-abs.html}}
+,annotation={
+ RCU overview, desiderata, semi-formal semantics, user-level RCU
+ usage scenarios, three classes of RCU implementation, wait-free
+ RCU updates, RCU grace-period batching, update overhead,
+ http://www.rdrop.com/users/paulmck/RCU/urcu-main-accepted.2011.08.30a.pdf
+ http://www.rdrop.com/users/paulmck/RCU/urcu-supp-accepted.2011.08.30a.pdf
+ Superseded by MathieuDesnoyers2012URCU.
+}
+}
+
+@inproceedings{HariKannan2009DynamicAnalysisRCU
+,author = {Kannan, Hari}
+,title = {Ordering decoupled metadata accesses in multiprocessors}
+,booktitle = {MICRO 42: Proceedings of the 42nd Annual IEEE/ACM International Symposium on Microarchitecture}
+,year = {2009}
+,isbn = {978-1-60558-798-1}
+,pages = {381--390}
+,location = {New York, New York}
+,doi = {http://doi.acm.org/10.1145/1669112.1669161}
+,publisher = {ACM}
+,address = {New York, NY, USA}
+,annotation={
+ Uses RCU to protect metadata used in dynamic analysis.
+}
+}
+
+@conference{PaulEMcKenney2010SimpleOptRCU
+,Author="Paul E. McKenney"
+,Title="Simplicity Through Optimization"
+,Booktitle="linux.conf.au 2010"
+,month="January"
+,year="2010"
+,address="Wellington, New Zealand"
+,note="Available:
+\url{http://www.rdrop.com/users/paulmck/RCU/SimplicityThruOptimization.2010.01.21f.pdf}
+[Viewed October 10, 2010]"
+,annotation={
+ TREE_PREEMPT_RCU optimizations greatly simplified the old
+ PREEMPT_RCU implementation.
+}
+}
+
+@unpublished{PaulEMcKenney2010LockdepRCU
+,Author="Paul E. McKenney"
+,Title="Lockdep-{RCU}"
+,month="February"
+,year="2010"
+,day="1"
+,note="\url{https://lwn.net/Articles/371986/}"
+,annotation={
+ CONFIG_PROVE_RCU, or at least an early version.
+ [Viewed June 4, 2010]
+}
+}
+
+@unpublished{AviKivity2010KVM2RCU
+,Author="Avi Kivity"
+,Title="[{PATCH} 37/40] {KVM}: Bump maximum vcpu count to 64"
+,month="February"
+,year="2010"
+,note="Available:
+\url{http://www.mail-archive.com/kvm@vger.kernel.org/msg28640.html}
+[Viewed March 20, 2010]"
+,annotation={
+ Use of RCU permits KVM to increase the size of guest OSes from
+ 16 CPUs to 64 CPUs.
+}
+}
+
+@unpublished{HerbertXu2010RCUResizeHash
+,Author="Herbert Xu"
+,Title="bridge: Add core IGMP snooping support"
+,month="February"
+,year="2010"
+,note="Available:
+\url{http://thread.gmane.org/gmane.linux.network/153338}
+[Viewed June 9, 2014]"
+,annotation={
+ Use a pair of list_head structures to support RCU-protected
+ resizable hash tables.
+}
+}
+
+@mastersthesis{AbhinavDuggal2010Masters
+,author="Abhinav Duggal"
+,title="Stopping Data Races Using Redflag"
+,school="Stony Brook University"
+,year="2010"
+,annotation={
+ Data-race detector incorporating RCU.
+ http://www.filesystems.org/docs/abhinav-thesis/abhinav_thesis.pdf
+}
+}
+
+@article{JoshTriplett2010RPHash
+,author="Josh Triplett and Paul E. McKenney and Jonathan Walpole"
+,title="Scalable Concurrent Hash Tables via Relativistic Programming"
+,journal="ACM Operating Systems Review"
+,year=2010
+,volume=44
+,number=3
+,month="July"
+,annotation={
+ RP fun with hash tables.
+ http://portal.acm.org/citation.cfm?id=1842733.1842750
+}
+}
+
+@unpublished{PaulEMcKenney2010RCUAPI
+,Author="Paul E. McKenney"
+,Title="The {RCU} {API}, 2010 Edition"
+,month="December"
+,day="8"
+,year="2010"
+,note="\url{http://lwn.net/Articles/418853/}"
+,annotation={
+ Includes updated software-engineering features.
+ [Viewed December 8, 2010]
+}
+}
+
+@mastersthesis{AndrejPodzimek2010masters
+,author="Andrej Podzimek"
+,title="Read-Copy-Update for OpenSolaris"
+,school="Charles University in Prague"
+,year="2010"
+,note="Available:
+\url{https://andrej.podzimek.org/thesis.pdf}
+[Viewed January 31, 2011]"
+,annotation={
+ Reviews RCU implementations and creates a few for OpenSolaris.
+ Drives quiescent-state detection from RCU read-side primitives,
+ in a manner roughly similar to that of Jim Houston.
+}
+}
+
+@unpublished{LinusTorvalds2011Linux2:6:38:rc1:NPigginVFS
+,Author="Linus Torvalds"
+,Title="Linux 2.6.38-rc1"
+,month="January"
+,year="2011"
+,note="Available:
+\url{https://lkml.org/lkml/2011/1/18/322}
+[Viewed March 4, 2011]"
+,annotation={
+ "The RCU-based name lookup is at the other end of the spectrum - the
+ absolute anti-gimmick. It's some seriously good stuff, and gets rid of
+ the last main global lock that really tends to hurt some kernel loads.
+ The dentry lock is no longer a big serializing issue. What's really
+ nice about it is that it actually improves performance a lot even for
+ single-threaded loads (on an SMP kernel), because it gets rid of some
+ of the most expensive parts of path component lookup, which was the
+ d_lock on every component lookup. So I'm seeing improvements of 30-50%
+ on some seriously pathname-lookup intensive loads."
+}
+}
+
+@techreport{JoshTriplett2011RPScalableCorrectOrdering
+,author = {Josh Triplett and Philip W. Howard and Paul E. McKenney and Jonathan Walpole}
+,title = {Scalable Correct Memory Ordering via Relativistic Programming}
+,year = {2011}
+,number = {11-03}
+,institution = {Portland State University}
+,note = {\url{http://www.cs.pdx.edu/pdfs/tr1103.pdf}}
+}
+
+@inproceedings{PhilHoward2011RCUTMRBTree
+,author = {Philip W. Howard and Jonathan Walpole}
+,title = {A Relativistic Enhancement to Software Transactional Memory}
+,booktitle = {Proceedings of the 3rd USENIX conference on Hot topics in parallelism}
+,series = {HotPar'11}
+,year = {2011}
+,location = {Berkeley, CA}
+,pages = {1--6}
+,numpages = {6}
+,url = {http://www.usenix.org/event/hotpar11/tech/final_files/Howard.pdf}
+,publisher = {USENIX Association}
+,address = {Berkeley, CA, USA}
+}
+
+@techreport{PaulEMcKenney2011cyclicparallelRCU
+,author="Paul E. McKenney and Jonathan Walpole"
+,title="Efficient Support of Consistent Cyclic Search With Read-Copy Update and Parallel Updates"
+,institution="US Patent and Trademark Office"
+,address="Washington, DC"
+,year="2011"
+,number="US Patent 7,953,778"
+,month="May"
+,pages="34"
+,annotation={
+ Maintains an array of generation numbers to track in-flight
+ updates and keeps an additional level of indirection to allow
+ readers to confine themselves to the desired snapshot of the
+ data structure.
+}
+}
+
+@inproceedings{Triplett:2011:RPHash
+,author = {Triplett, Josh and McKenney, Paul E. and Walpole, Jonathan}
+,title = {Resizable, Scalable, Concurrent Hash Tables via Relativistic Programming}
+,booktitle = {Proceedings of the 2011 USENIX Annual Technical Conference}
+,month = {June}
+,year = {2011}
+,pages = {145--158}
+,numpages = {14}
+,url={http://www.usenix.org/event/atc11/tech/final_files/Triplett.pdf}
+,publisher = {The USENIX Association}
+,address = {Portland, OR USA}
+}
+
+@unpublished{PaulEMcKenney2011RCU3.0trainwreck
+,Author="Paul E. McKenney"
+,Title="3.0 and {RCU:} what went wrong"
+,month="July"
+,day="27"
+,year="2011"
+,note="\url{http://lwn.net/Articles/453002/}"
+,annotation={
+ Analysis of the RCU trainwreck in Linux kernel 3.0.
+ [Viewed July 27, 2011]
+}
+}
+
+@unpublished{NeilBrown2011MeetTheLockers
+,Author="Neil Brown"
+,Title="Meet the {Lockers}"
+,month="August"
+,day="3"
+,year="2011"
+,note="Available:
+\url{http://lwn.net/Articles/453685/}
+[Viewed September 2, 2011]"
+,annotation={
+ The Locker family as an analogy for locking, reference counting,
+ RCU, and seqlock.
+}
+}
+
+@inproceedings{Seyster:2011:RFA:2075416.2075425
+,author = {Seyster, Justin and Radhakrishnan, Prabakar and Katoch, Samriti and Duggal, Abhinav and Stoller, Scott D. and Zadok, Erez}
+,title = {Redflag: a framework for analysis of Kernel-level concurrency}
+,booktitle = {Proceedings of the 11th international conference on Algorithms and architectures for parallel processing - Volume Part I}
+,series = {ICA3PP'11}
+,year = {2011}
+,isbn = {978-3-642-24649-4}
+,location = {Melbourne, Australia}
+,pages = {66--79}
+,numpages = {14}
+,url = {http://dl.acm.org/citation.cfm?id=2075416.2075425}
+,acmid = {2075425}
+,publisher = {Springer-Verlag}
+,address = {Berlin, Heidelberg}
+}
+
+@phdthesis{JoshTriplettPhD
+,author="Josh Triplett"
+,title="Relativistic Causal Ordering: A Memory Model for Scalable Concurrent Data Structures"
+,school="Portland State University"
+,year="2012"
+,annotation={
+ RCU-protected hash tables, barriers vs. read-side traversal order.
+ .
+ If the updater is making changes in the opposite direction from
+	the read-side traversal order, the updater need only execute a
+ memory-barrier instruction, but if in the same direction, the
+ updater needs to wait for a grace period between the individual
+ updates.
+}
+}
+
+@article{MathieuDesnoyers2012URCU
+,Author="Mathieu Desnoyers and Paul E. McKenney and Alan Stern and Michel R. Dagenais and Jonathan Walpole"
+,Title="User-Level Implementations of Read-Copy Update"
+,journal="IEEE Transactions on Parallel and Distributed Systems"
+,volume={23}
+,year="2012"
+,issn="1045-9219"
+,pages="375-382"
+,doi="http://doi.ieeecomputersociety.org/10.1109/TPDS.2011.159"
+,publisher="IEEE Computer Society"
+,address="Los Alamitos, CA, USA"
+,annotation={
+ RCU overview, desiderata, semi-formal semantics, user-level RCU
+ usage scenarios, three classes of RCU implementation, wait-free
+ RCU updates, RCU grace-period batching, update overhead,
+ http://www.rdrop.com/users/paulmck/RCU/urcu-main-accepted.2011.08.30a.pdf
+ http://www.rdrop.com/users/paulmck/RCU/urcu-supp-accepted.2011.08.30a.pdf
+ http://www.computer.org/cms/Computer.org/dl/trans/td/2012/02/extras/ttd2012020375s.pdf
+}
+}
+
+@inproceedings{AustinClements2012RCULinux:mmapsem
+,author = {Austin Clements and Frans Kaashoek and Nickolai Zeldovich}
+,title = {Scalable Address Spaces Using {RCU} Balanced Trees}
+,booktitle = {Architectural Support for Programming Languages and Operating Systems (ASPLOS 2012)}
+,month = {March}
+,year = {2012}
+,pages = {199--210}
+,numpages = {12}
+,publisher = {ACM}
+,address = {London, UK}
+,url="http://people.csail.mit.edu/nickolai/papers/clements-bonsai.pdf"
+}
+
+@unpublished{PaulEMcKenney2012ELCbattery
+,Author="Paul E. McKenney"
+,Title="Making {RCU} Safe For Battery-Powered Devices"
+,month="February"
+,day="15"
+,year="2012"
+,note="Available:
+\url{http://www.rdrop.com/users/paulmck/RCU/RCUdynticks.2012.02.15b.pdf}
+[Viewed March 1, 2012]"
+,annotation={
+ RCU_FAST_NO_HZ, round 2.
+}
+}
+
+@article{GuillermoVigueras2012RCUCrowd
+,author = {Vigueras, Guillermo and Ordu\~{n}a, Juan M. and Lozano, Miguel}
+,day = {25}
+,doi = {10.1007/s11227-012-0766-x}
+,issn = {0920-8542}
+,journal = {The Journal of Supercomputing}
+,keywords = {linux, simulation}
+,month = apr
+,posted-at = {2012-05-03 09:12:04}
+,priority = {2}
+,title = {{A Read-Copy Update based parallel server for distributed crowd simulations}}
+,url = {http://dx.doi.org/10.1007/s11227-012-0766-x}
+,year = {2012}
+}
+
+
+@unpublished{JonCorbet2012ACCESS:ONCE
+,Author="Jon Corbet"
+,Title="{ACCESS\_ONCE()}"
+,month="August"
+,day="1"
+,year="2012"
+,note="\url{http://lwn.net/Articles/508991/}"
+,annotation={
+ A couple of simple specific compiler optimizations that motivate
+ ACCESS_ONCE().
+}
+}
+
+@unpublished{AlexeyGotsman2012VerifyGraceExtended
+,Author="Alexey Gotsman and Noam Rinetzky and Hongseok Yang"
+,Title="Verifying Highly Concurrent Algorithms with Grace (extended version)"
+,month="July"
+,day="10"
+,year="2012"
+,note="\url{http://software.imdea.org/~gotsman/papers/recycling-esop13-ext.pdf}"
+,annotation={
+ Separation-logic formulation of RCU uses.
+}
+}
+
+@unpublished{PaulMcKenney2012RCUUsage
+,Author="Paul E. McKenney and Silas Boyd-Wickizer and Jonathan Walpole"
+,Title="{RCU} Usage In the Linux Kernel: One Decade Later"
+,month="September"
+,day="17"
+,year="2012"
+,url="http://rdrop.com/users/paulmck/techreports/survey.2012.09.17a.pdf"
+,note="Technical report paulmck.2012.09.17"
+,annotation={
+	Usage of RCU within the Linux kernel.
+}
+}
+
+@unpublished{JonCorbet2012NOCB
+,Author="Jon Corbet"
+,Title="Relocating RCU callbacks"
+,month="October"
+,day="31"
+,year="2012"
+,note="\url{http://lwn.net/Articles/522262/}"
+,annotation={
+ Overview of the first variant of no-CBs CPUs for RCU.
+}
+}
+
+@phdthesis{JustinSeyster2012PhD
+,author="Justin Seyster"
+,title="Runtime Verification of Kernel-Level Concurrency Using Compiler-Based Instrumentation"
+,school="Stony Brook University"
+,year="2012"
+,annotation={
+ Looking for data races, including those involving RCU.
+ Proposal:
+ http://www.fsl.cs.sunysb.edu/docs/jseyster-proposal/redflag.pdf
+ Dissertation:
+ http://www.fsl.cs.sunysb.edu/docs/jseyster-dissertation/redflag.pdf
+}
+}
+
+@unpublished{PaulEMcKenney2013RCUUsage
+,Author="Paul E. McKenney and Silas Boyd-Wickizer and Jonathan Walpole"
+,Title="{RCU} Usage in the {Linux} Kernel: One Decade Later"
+,month="February"
+,day="24"
+,year="2013"
+,note="\url{http://rdrop.com/users/paulmck/techreports/RCUUsage.2013.02.24a.pdf}"
+,annotation={
+ Usage of RCU within the Linux kernel.
+}
+}
+
+@inproceedings{AlexeyGotsman2013ESOPRCU
+,author = {Alexey Gotsman and Noam Rinetzky and Hongseok Yang}
+,title = {Verifying concurrent memory reclamation algorithms with grace}
+,booktitle = {ESOP'13: European Symposium on Programming}
+,year = {2013}
+,pages = {249--269}
+,publisher = {Springer}
+,address = {Rome, Italy}
+,annotation={
+ http://software.imdea.org/~gotsman/papers/recycling-esop13.pdf
+}
+}
+
+@unpublished{PaulEMcKenney2013NoTinyPreempt
+,Author="Paul E. McKenney"
+,Title="Simplifying RCU"
+,month="March"
+,day="6"
+,year="2013"
+,note="\url{http://lwn.net/Articles/541037/}"
+,annotation={
+ Getting rid of TINY_PREEMPT_RCU.
+}
+}
diff --git a/kernel/Documentation/RCU/UP.txt b/kernel/Documentation/RCU/UP.txt
new file mode 100644
index 000000000..90ec5341e
--- /dev/null
+++ b/kernel/Documentation/RCU/UP.txt
@@ -0,0 +1,135 @@
+RCU on Uniprocessor Systems
+
+
+A common misconception is that, on UP systems, the call_rcu() primitive
+may immediately invoke its function. The basis of this misconception
+is that since there is only one CPU, it should not be necessary to
+wait for anything else to get done, since there are no other CPUs for
+anything else to be happening on. Although this approach will -sort- -of-
+work a surprising amount of the time, it is a very bad idea in general.
+This document presents three examples that demonstrate exactly how bad
+an idea this is.
+
+
+Example 1: softirq Suicide
+
+Suppose that an RCU-based algorithm scans a linked list containing
+elements A, B, and C in process context, and can delete elements from
+this same list in softirq context. Suppose that the process-context scan
+is referencing element B when it is interrupted by softirq processing,
+which deletes element B, and then invokes call_rcu() to free element B
+after a grace period.
+
+Now, if call_rcu() were to directly invoke its arguments, then upon return
+from softirq, the list scan would find itself referencing a newly freed
+element B. This situation can greatly decrease the life expectancy of
+your kernel.
+
+This same problem can occur if call_rcu() is invoked from a hardware
+interrupt handler.
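+
+For concreteness, here is a minimal sketch of this scenario.  All of
+the names (struct foo, foo_head, foo_lock, and the functions) are
+hypothetical, invented for this example:
+
+	struct foo {
+		struct list_head list;
+		int key;
+		struct rcu_head rcu;
+	};
+	static LIST_HEAD(foo_head);
+	static DEFINE_SPINLOCK(foo_lock);	/* guards updates only */
+
+	static void free_foo_rcu(struct rcu_head *head)
+	{
+		kfree(container_of(head, struct foo, rcu));
+	}
+
+	/* Process context: the scan that gets interrupted. */
+	void scan_foo_list(void)
+	{
+		struct foo *p;
+
+		rcu_read_lock();
+		list_for_each_entry_rcu(p, &foo_head, list)
+			do_something_with(p->key); /* softirq may fire here */
+		rcu_read_unlock();
+	}
+
+	/* Softirq context: deletes an element and defers its freeing. */
+	void delete_foo(struct foo *p)
+	{
+		spin_lock(&foo_lock);
+		list_del_rcu(&p->list);
+		spin_unlock(&foo_lock);
+		call_rcu(&p->rcu, free_foo_rcu); /* must defer, not free now */
+	}
+
+If call_rcu() were to invoke free_foo_rcu() immediately, the interrupted
+scan would resume with its list cursor pointing into freed memory.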
+
+
+Example 2: Function-Call Fatality
+
+Of course, one could avert the suicide described in the preceding example
+by having call_rcu() directly invoke its arguments only if it was called
+from process context. However, this can fail in a similar manner.
+
+Suppose that an RCU-based algorithm again scans a linked list containing
+elements A, B, and C in process context, but that it invokes a function
+on each element as it is scanned. Suppose further that this function
+deletes element B from the list, then passes it to call_rcu() for deferred
+freeing. This may be a bit unconventional, but it is perfectly legal
+RCU usage, since call_rcu() must wait for a grace period to elapse.
+Therefore, in this case, allowing call_rcu() to immediately invoke
+its arguments would cause it to fail to make the fundamental guarantee
+underlying RCU, namely that call_rcu() defers invoking its arguments until
+all RCU read-side critical sections currently executing have completed.
+
+Quick Quiz #1: why is it -not- legal to invoke synchronize_rcu() in
+ this case?
+
+
+Example 3: Death by Deadlock
+
+Suppose that call_rcu() is invoked while holding a lock, and that the
+callback function must acquire this same lock. In this case, if
+call_rcu() were to directly invoke the callback, the result would
+be self-deadlock.
+
+In some cases, it would be possible to restructure the code so that
+the call_rcu() is delayed until after the lock is released.  However,
+there are cases where this can be quite ugly:
+
+1. If a number of items need to be passed to call_rcu() within
+ the same critical section, then the code would need to create
+ a list of them, then traverse the list once the lock was
+ released.
+
+2. In some cases, the lock will be held across some kernel API,
+ so that delaying the call_rcu() until the lock is released
+ requires that the data item be passed up via a common API.
+ It is far better to guarantee that callbacks are invoked
+ with no locks held than to have to modify such APIs to allow
+ arbitrary data items to be passed back up through them.
+
+If call_rcu() were to directly invoke the callback, painful locking
+restrictions or API changes would be required.
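+
+A minimal sketch of this self-deadlock follows; the lock, element, and
+callback names are hypothetical:
+
+	static DEFINE_SPINLOCK(mylock);
+
+	static void my_rcu_callback(struct rcu_head *head)
+	{
+		spin_lock(&mylock);	/* spins forever if invoked directly */
+		/* ... clean up ... */
+		spin_unlock(&mylock);
+	}
+
+	void my_update(struct foo *p)
+	{
+		spin_lock(&mylock);
+		list_del_rcu(&p->list);
+		call_rcu(&p->rcu, my_rcu_callback); /* deadlock if immediate */
+		spin_unlock(&mylock);
+	}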
+
+Quick Quiz #2: What locking restriction must RCU callbacks respect?
+
+
+Summary
+
+Permitting call_rcu() to immediately invoke its arguments breaks RCU,
+even on a UP system. So do not do it! Even on a UP system, the RCU
+infrastructure -must- respect grace periods, and -must- invoke callbacks
+from a known environment in which no locks are held.
+
+It -is- safe for synchronize_sched() and synchronize_rcu_bh() to return
+immediately on a UP system.  It is also safe for synchronize_rcu()
+to return immediately on UP systems, except when running preemptable
+RCU.
+
+Quick Quiz #3: Why can't synchronize_rcu() return immediately on
+ UP systems running preemptable RCU?
+
+
+Answer to Quick Quiz #1:
+ Why is it -not- legal to invoke synchronize_rcu() in this case?
+
+ Because the calling function is scanning an RCU-protected linked
+ list, and is therefore within an RCU read-side critical section.
+ Therefore, the called function has been invoked within an RCU
+ read-side critical section, and is not permitted to block.
+
+Answer to Quick Quiz #2:
+ What locking restriction must RCU callbacks respect?
+
+ Any lock that is acquired within an RCU callback must be
+ acquired elsewhere using an _irq variant of the spinlock
+ primitive. For example, if "mylock" is acquired by an
+ RCU callback, then a process-context acquisition of this
+ lock must use something like spin_lock_irqsave() to
+ acquire the lock.
+
+ If the process-context code were to simply use spin_lock(),
+ then, since RCU callbacks can be invoked from softirq context,
+ the callback might be called from a softirq that interrupted
+ the process-context critical section. This would result in
+ self-deadlock.
+
+ This restriction might seem gratuitous, since very few RCU
+ callbacks acquire locks directly. However, a great many RCU
+ callbacks do acquire locks -indirectly-, for example, via
+ the kfree() primitive.
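+
+	A minimal sketch of the resulting pattern (with hypothetical
+	names) follows:
+
+		static DEFINE_SPINLOCK(mylock);
+
+		/* Process context: disable interrupts, and thus the
+		 * softirqs in which RCU callbacks run, while holding
+		 * the lock. */
+		void process_context_update(void)
+		{
+			unsigned long flags;
+
+			spin_lock_irqsave(&mylock, flags);
+			/* ... update state shared with the callback ... */
+			spin_unlock_irqrestore(&mylock, flags);
+		}
+
+		/* RCU callback: runs in softirq context, where a plain
+		 * spin_lock() suffices. */
+		static void my_rcu_callback(struct rcu_head *head)
+		{
+			spin_lock(&mylock);
+			/* ... */
+			spin_unlock(&mylock);
+		}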
+
+Answer to Quick Quiz #3:
+ Why can't synchronize_rcu() return immediately on UP systems
+ running preemptable RCU?
+
+ Because some other task might have been preempted in the middle
+ of an RCU read-side critical section. If synchronize_rcu()
+ simply immediately returned, it would prematurely signal the
+ end of the grace period, which would come as a nasty shock to
+ that other thread when it started running again.
diff --git a/kernel/Documentation/RCU/arrayRCU.txt b/kernel/Documentation/RCU/arrayRCU.txt
new file mode 100644
index 000000000..453ebe695
--- /dev/null
+++ b/kernel/Documentation/RCU/arrayRCU.txt
@@ -0,0 +1,141 @@
+Using RCU to Protect Read-Mostly Arrays
+
+
+Although RCU is more commonly used to protect linked lists, it can
+also be used to protect arrays. Three situations are as follows:
+
+1. Hash Tables
+
+2. Static Arrays
+
+3. Resizeable Arrays
+
+Each of these situations is discussed below.
+
+
+Situation 1: Hash Tables
+
+Hash tables are often implemented as an array, where each array entry
+has a linked-list hash chain. Each hash chain can be protected by RCU
+as described in the listRCU.txt document. This approach also applies
+to other array-of-list situations, such as radix trees.
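+
+For example (a minimal sketch, with hypothetical table, element, and
+hash-function names), each bucket's hash chain is searched under
+rcu_read_lock() just as a stand-alone RCU-protected list would be:
+
+	struct foo {
+		struct hlist_node node;
+		int key;
+		int value;
+	};
+	/* FOO_HASH_SIZE and foo_hash_fn() are assumed, not real APIs. */
+	static struct hlist_head foo_hash[FOO_HASH_SIZE];
+
+	int foo_lookup(int key, int *value)
+	{
+		struct foo *p;
+		int ret = -ENOENT;
+
+		rcu_read_lock();
+		hlist_for_each_entry_rcu(p, &foo_hash[foo_hash_fn(key)], node) {
+			if (p->key == key) {
+				*value = p->value; /* use while protected */
+				ret = 0;
+				break;
+			}
+		}
+		rcu_read_unlock();
+		return ret;
+	}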
+
+
+Situation 2: Static Arrays
+
+Static arrays, where the data (rather than a pointer to the data) is
+located in each array element, and where the array is never resized,
+have not been used with RCU. Rik van Riel recommends using seqlock in
+this situation, which would also have minimal read-side overhead as long
+as updates are rare.
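+
+A sketch of such a seqlock-based approach appears below (the array and
+all names are hypothetical); readers simply retry whenever an update
+slips in mid-copy:
+
+	static DEFINE_SEQLOCK(foo_seqlock);
+	static struct foo foo_array[FOO_NR_ENTRIES]; /* hypothetical */
+
+	struct foo foo_read(int i)
+	{
+		struct foo ret;
+		unsigned int seq;
+
+		do {
+			seq = read_seqbegin(&foo_seqlock);
+			ret = foo_array[i]; /* copy out under the seqlock */
+		} while (read_seqretry(&foo_seqlock, seq));
+		return ret;
+	}
+
+	void foo_write(int i, const struct foo *new)
+	{
+		write_seqlock(&foo_seqlock);
+		foo_array[i] = *new;
+		write_sequnlock(&foo_seqlock);
+	}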
+
+Quick Quiz: Why is it so important that updates be rare when
+ using seqlock?
+
+
+Situation 3: Resizeable Arrays
+
+Use of RCU for resizeable arrays is demonstrated by the grow_ary()
+function used by the System V IPC code. The array is used to map from
+semaphore, message-queue, and shared-memory IDs to the data structure
+that represents the corresponding IPC construct. The grow_ary()
+function does not acquire any locks; instead its caller must hold the
+ids->sem semaphore.
+
+The grow_ary() function, shown below, does some limit checks, allocates a
+new ipc_id_ary, copies the old to the new portion of the new, initializes
+the remainder of the new, updates the ids->entries pointer to point to
+the new array, and invokes ipc_rcu_putref() to free up the old array.
+Note that rcu_assign_pointer() is used to update the ids->entries pointer,
+which includes any memory barriers required on whatever architecture
+you are running on.
+
+ static int grow_ary(struct ipc_ids* ids, int newsize)
+ {
+ struct ipc_id_ary* new;
+ struct ipc_id_ary* old;
+ int i;
+ int size = ids->entries->size;
+
+		if (newsize > IPCMNI)
+			newsize = IPCMNI;
+		if (newsize <= size)
+			return newsize;
+
+		new = ipc_rcu_alloc(sizeof(struct kern_ipc_perm *)*newsize +
+				    sizeof(struct ipc_id_ary));
+		if (new == NULL)
+			return size;
+		new->size = newsize;
+		memcpy(new->p, ids->entries->p,
+		       sizeof(struct kern_ipc_perm *)*size +
+		       sizeof(struct ipc_id_ary));
+		for (i = size; i < newsize; i++) {
+ new->p[i] = NULL;
+ }
+ old = ids->entries;
+
+ /*
+ * Use rcu_assign_pointer() to make sure the memcpyed
+ * contents of the new array are visible before the new
+ * array becomes visible.
+ */
+ rcu_assign_pointer(ids->entries, new);
+
+ ipc_rcu_putref(old);
+ return newsize;
+ }
+
+The ipc_rcu_putref() function decrements the array's reference count
+and then, if the reference count has dropped to zero, uses call_rcu()
+to free the array after a grace period has elapsed.
+
+The array is traversed by the ipc_lock() function. This function
+indexes into the array under the protection of rcu_read_lock(),
+using rcu_dereference() to pick up the pointer to the array so
+that it may later safely be dereferenced -- memory barriers are
+required on the Alpha CPU. Since the size of the array is stored
+with the array itself, there can be no array-size mismatches, so
+a simple check suffices. The pointer to the structure corresponding
+to the desired IPC object is placed in "out", with NULL indicating
+a non-existent entry. After acquiring "out->lock", the "out->deleted"
+flag indicates whether the IPC object is in the process of being
+deleted, and, if not, the pointer is returned.
+
+ struct kern_ipc_perm* ipc_lock(struct ipc_ids* ids, int id)
+ {
+ struct kern_ipc_perm* out;
+ int lid = id % SEQ_MULTIPLIER;
+ struct ipc_id_ary* entries;
+
+ rcu_read_lock();
+ entries = rcu_dereference(ids->entries);
+		if (lid >= entries->size) {
+			rcu_read_unlock();
+			return NULL;
+		}
+		out = entries->p[lid];
+		if (out == NULL) {
+ rcu_read_unlock();
+ return NULL;
+ }
+ spin_lock(&out->lock);
+
+ /* ipc_rmid() may have already freed the ID while ipc_lock
+ * was spinning: here verify that the structure is still valid
+ */
+ if (out->deleted) {
+ spin_unlock(&out->lock);
+ rcu_read_unlock();
+ return NULL;
+ }
+ return out;
+ }
+
+
+Answer to Quick Quiz:
+
+ The reason that it is important that updates be rare when
+ using seqlock is that frequent updates can livelock readers.
+ One way to avoid this problem is to assign a seqlock for
+ each array entry rather than to the entire array.
diff --git a/kernel/Documentation/RCU/checklist.txt b/kernel/Documentation/RCU/checklist.txt
new file mode 100644
index 000000000..877947130
--- /dev/null
+++ b/kernel/Documentation/RCU/checklist.txt
@@ -0,0 +1,437 @@
+Review Checklist for RCU Patches
+
+
+This document contains a checklist for producing and reviewing patches
+that make use of RCU. Violating any of the rules listed below will
+result in the same sorts of problems that leaving out a locking primitive
+would cause. This list is based on experiences reviewing such patches
+over a rather long period of time, but improvements are always welcome!
+
+0. Is RCU being applied to a read-mostly situation? If the data
+ structure is updated more than about 10% of the time, then you
+ should strongly consider some other approach, unless detailed
+ performance measurements show that RCU is nonetheless the right
+ tool for the job. Yes, RCU does reduce read-side overhead by
+ increasing write-side overhead, which is exactly why normal uses
+ of RCU will do much more reading than updating.
+
+ Another exception is where performance is not an issue, and RCU
+ provides a simpler implementation. An example of this situation
+ is the dynamic NMI code in the Linux 2.6 kernel, at least on
+ architectures where NMIs are rare.
+
+ Yet another exception is where the low real-time latency of RCU's
+ read-side primitives is critically important.
+
+1. Does the update code have proper mutual exclusion?
+
+ RCU does allow -readers- to run (almost) naked, but -writers- must
+ still use some sort of mutual exclusion, such as:
+
+ a. locking,
+ b. atomic operations, or
+ c. restricting updates to a single task.
+
+ If you choose #b, be prepared to describe how you have handled
+ memory barriers on weakly ordered machines (pretty much all of
+ them -- even x86 allows later loads to be reordered to precede
+ earlier stores), and be prepared to explain why this added
+ complexity is worthwhile. If you choose #c, be prepared to
+ explain how this single task does not become a major bottleneck on
+ big multiprocessor machines (for example, if the task is updating
+ information relating to itself that other tasks can read, there
+ by definition can be no bottleneck).
+
+2. Do the RCU read-side critical sections make proper use of
+ rcu_read_lock() and friends? These primitives are needed
+ to prevent grace periods from ending prematurely, which
+ could result in data being unceremoniously freed out from
+ under your read-side code, which can greatly increase the
+ actuarial risk of your kernel.
+
+ As a rough rule of thumb, any dereference of an RCU-protected
+ pointer must be covered by rcu_read_lock(), rcu_read_lock_bh(),
+ rcu_read_lock_sched(), or by the appropriate update-side lock.
+ Disabling of preemption can serve as rcu_read_lock_sched(), but
+ is less readable.
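+
+	For example, a reader obeying this rule might look like the
+	following sketch, "gp" being a hypothetical RCU-protected
+	global pointer:
+
+		rcu_read_lock();
+		p = rcu_dereference(gp);	/* covered dereference */
+		if (p != NULL)
+			do_something_with(p->a);
+		rcu_read_unlock();
+		/* p must not be dereferenced after this point! */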
+
+3. Does the update code tolerate concurrent accesses?
+
+ The whole point of RCU is to permit readers to run without
+ any locks or atomic operations. This means that readers will
+ be running while updates are in progress. There are a number
+ of ways to handle this concurrency, depending on the situation:
+
+ a. Use the RCU variants of the list and hlist update
+ primitives to add, remove, and replace elements on
+ an RCU-protected list. Alternatively, use the other
+ RCU-protected data structures that have been added to
+ the Linux kernel.
+
+ This is almost always the best approach.
+
+ b. Proceed as in (a) above, but also maintain per-element
+ locks (that are acquired by both readers and writers)
+ that guard per-element state. Of course, fields that
+ the readers refrain from accessing can be guarded by
+ some other lock acquired only by updaters, if desired.
+
+ This works quite well, also.
+
+ c. Make updates appear atomic to readers. For example,
+ pointer updates to properly aligned fields will
+ appear atomic, as will individual atomic primitives.
+		Sequences of operations performed under a lock will -not-
+ appear to be atomic to RCU readers, nor will sequences
+ of multiple atomic primitives.
+
+ This can work, but is starting to get a bit tricky.
+
+ d. Carefully order the updates and the reads so that
+ readers see valid data at all phases of the update.
+ This is often more difficult than it sounds, especially
+ given modern CPUs' tendency to reorder memory references.
+ One must usually liberally sprinkle memory barriers
+ (smp_wmb(), smp_rmb(), smp_mb()) through the code,
+ making it difficult to understand and to test.
+
+ It is usually better to group the changing data into
+ a separate structure, so that the change may be made
+ to appear atomic by updating a pointer to reference
+ a new structure containing updated values.
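+
+	A sketch of this grouping technique follows; the structure,
+	lock, and pointer names are hypothetical.  Readers see either
+	the old pair of values or the new pair, never a mixture:
+
+		struct vals {
+			int a;
+			int b;
+			struct rcu_head rcu;
+		};
+		static struct vals __rcu *gp;
+		static DEFINE_SPINLOCK(vals_lock);
+
+		int update_vals(int new_a, int new_b)
+		{
+			struct vals *newp, *oldp;
+
+			newp = kmalloc(sizeof(*newp), GFP_KERNEL);
+			if (newp == NULL)
+				return -ENOMEM;
+			newp->a = new_a;	/* fully initialize... */
+			newp->b = new_b;	/* ...before publishing */
+			spin_lock(&vals_lock);
+			oldp = rcu_dereference_protected(gp,
+					lockdep_is_held(&vals_lock));
+			rcu_assign_pointer(gp, newp);
+			spin_unlock(&vals_lock);
+			if (oldp)
+				kfree_rcu(oldp, rcu);
+			return 0;
+		}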
+
+4. Weakly ordered CPUs pose special challenges. Almost all CPUs
+ are weakly ordered -- even x86 CPUs allow later loads to be
+ reordered to precede earlier stores. RCU code must take all of
+ the following measures to prevent memory-corruption problems:
+
+ a. Readers must maintain proper ordering of their memory
+ accesses. The rcu_dereference() primitive ensures that
+ the CPU picks up the pointer before it picks up the data
+ that the pointer points to. This really is necessary
+ on Alpha CPUs. If you don't believe me, see:
+
+ http://www.openvms.compaq.com/wizard/wiz_2637.html
+
+ The rcu_dereference() primitive is also an excellent
+ documentation aid, letting the person reading the
+ code know exactly which pointers are protected by RCU.
+ Please note that compilers can also reorder code, and
+ they are becoming increasingly aggressive about doing
+ just that. The rcu_dereference() primitive therefore also
+ prevents destructive compiler optimizations. However,
+ with a bit of devious creativity, it is possible to
+ mishandle the return value from rcu_dereference().
+ Please see rcu_dereference.txt in this directory for
+ more information.
+
+ The rcu_dereference() primitive is used by the
+ various "_rcu()" list-traversal primitives, such
+ as the list_for_each_entry_rcu(). Note that it is
+ perfectly legal (if redundant) for update-side code to
+ use rcu_dereference() and the "_rcu()" list-traversal
+ primitives. This is particularly useful in code that
+ is common to readers and updaters. However, lockdep
+		will complain if you use rcu_dereference() outside
+ of an RCU read-side critical section. See lockdep.txt
+ to learn what to do about this.
+
+ Of course, neither rcu_dereference() nor the "_rcu()"
+ list-traversal primitives can substitute for a good
+ concurrency design coordinating among multiple updaters.
+
+ b. If the list macros are being used, the list_add_tail_rcu()
+ and list_add_rcu() primitives must be used in order
+ to prevent weakly ordered machines from misordering
+ structure initialization and pointer planting.
+ Similarly, if the hlist macros are being used, the
+ hlist_add_head_rcu() primitive is required.
+
+ c. If the list macros are being used, the list_del_rcu()
+ primitive must be used to keep list_del()'s pointer
+ poisoning from inflicting toxic effects on concurrent
+ readers. Similarly, if the hlist macros are being used,
+ the hlist_del_rcu() primitive is required.
+
+ The list_replace_rcu() and hlist_replace_rcu() primitives
+ may be used to replace an old structure with a new one
+ in their respective types of RCU-protected lists.
+
+ d. Rules similar to (4b) and (4c) apply to the "hlist_nulls"
+ type of RCU-protected linked lists.
+
+ e. Updates must ensure that initialization of a given
+ structure happens before pointers to that structure are
+ publicized. Use the rcu_assign_pointer() primitive
+ when publicizing a pointer to a structure that can
+ be traversed by an RCU read-side critical section.
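+
+	Taken together, rules 4b-4e amount to insertion and deletion
+	patterns like the following sketch (all names hypothetical):
+
+		/* Insertion: initialize first, then plant the pointer
+		 * with the needed barriers (4b, 4e). */
+		p = kmalloc(sizeof(*p), GFP_KERNEL);
+		if (p != NULL) {
+			p->key = key;
+			p->data = data;
+			spin_lock(&foo_lock);
+			list_add_rcu(&p->list, &foo_head);
+			spin_unlock(&foo_lock);
+		}
+
+		/* Deletion: list_del_rcu() avoids list_del()'s pointer
+		 * poisoning (4c); the free must still be deferred. */
+		spin_lock(&foo_lock);
+		list_del_rcu(&p->list);
+		spin_unlock(&foo_lock);
+		call_rcu(&p->rcu, free_foo_rcu);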
+
+5. If call_rcu(), or a related primitive such as call_rcu_bh(),
+ call_rcu_sched(), or call_srcu() is used, the callback function
+ must be written to be called from softirq context. In particular,
+ it cannot block.
+
+6. Since synchronize_rcu() can block, it cannot be called from
+ any sort of irq context. The same rule applies for
+ synchronize_rcu_bh(), synchronize_sched(), synchronize_srcu(),
+ synchronize_rcu_expedited(), synchronize_rcu_bh_expedited(),
+	synchronize_sched_expedited(), and synchronize_srcu_expedited().
+
+ The expedited forms of these primitives have the same semantics
+ as the non-expedited forms, but expediting is both expensive
+ and unfriendly to real-time workloads. Use of the expedited
+ primitives should be restricted to rare configuration-change
+ operations that would not normally be undertaken while a real-time
+ workload is running.
+
+ In particular, if you find yourself invoking one of the expedited
+ primitives repeatedly in a loop, please do everyone a favor:
+ Restructure your code so that it batches the updates, allowing
+ a single non-expedited primitive to cover the entire batch.
+	This will very likely be faster than the loop containing the
+	expedited primitive, and will be much, much easier on the rest
+	of the system, especially on any real-time workloads running
+	there.
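+
+	For example, a sketch of this restructuring (do_update()
+	and nupdates are hypothetical):
+
+		int i;
+
+		/* Instead of one expedited grace period per update... */
+		for (i = 0; i < nupdates; i++) {
+			do_update(i);
+			synchronize_rcu_expedited();
+		}
+
+		/* ...batch the updates under a single grace period: */
+		for (i = 0; i < nupdates; i++)
+			do_update(i);
+		synchronize_rcu();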
+
+ In addition, it is illegal to call the expedited forms from
+ a CPU-hotplug notifier, or while holding a lock that is acquired
+ by a CPU-hotplug notifier. Failing to observe this restriction
+ will result in deadlock.
+
+7. If the updater uses call_rcu() or synchronize_rcu(), then the
+ corresponding readers must use rcu_read_lock() and
+ rcu_read_unlock(). If the updater uses call_rcu_bh() or
+ synchronize_rcu_bh(), then the corresponding readers must
+ use rcu_read_lock_bh() and rcu_read_unlock_bh(). If the
+ updater uses call_rcu_sched() or synchronize_sched(), then
+ the corresponding readers must disable preemption, possibly
+ by calling rcu_read_lock_sched() and rcu_read_unlock_sched().
+ If the updater uses synchronize_srcu() or call_srcu(), then
+ the corresponding readers must use srcu_read_lock() and
+ srcu_read_unlock(), and with the same srcu_struct. The rules for
+ the expedited primitives are the same as for their non-expedited
+ counterparts. Mixing things up will result in confusion and
+ broken kernels.
+
+ One exception to this rule: rcu_read_lock() and rcu_read_unlock()
+ may be substituted for rcu_read_lock_bh() and rcu_read_unlock_bh()
+ in cases where local bottom halves are already known to be
+ disabled, for example, in irq or softirq context. Commenting
+ such cases is a must, of course! And the jury is still out on
+ whether the increased speed is worth it.
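+
+	For example, a matched updater/reader pair for the RCU-bh
+	flavor (list, element, and callback names hypothetical):
+
+		/* Updater: */
+		list_del_rcu(&p->list);
+		call_rcu_bh(&p->rcu, p_free);
+
+		/* Matching readers: */
+		rcu_read_lock_bh();
+		list_for_each_entry_rcu(q, &mylist, list)
+			do_something_with(q);
+		rcu_read_unlock_bh();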
+
+8. Although synchronize_rcu() is slower than is call_rcu(), it
+ usually results in simpler code. So, unless update performance is
+ critically important, the updaters cannot block, or the latency of
+ synchronize_rcu() is visible from userspace, synchronize_rcu()
+ should be used in preference to call_rcu(). Furthermore,
+ kfree_rcu() usually results in even simpler code than does
+ synchronize_rcu() without synchronize_rcu()'s multi-millisecond
+ latency. So please take advantage of kfree_rcu()'s "fire and
+ forget" memory-freeing capabilities where it applies.
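+
+	For example, assuming a structure with an rcu_head field
+	named "rcu" (names hypothetical), the deferred free becomes
+	a single line:
+
+		kfree_rcu(old_p, rcu);	/* Free old_p after a grace period. */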
+
+ An especially important property of the synchronize_rcu()
+ primitive is that it automatically self-limits: if grace periods
+ are delayed for whatever reason, then the synchronize_rcu()
+ primitive will correspondingly delay updates. In contrast,
+ code using call_rcu() should explicitly limit update rate in
+ cases where grace periods are delayed, as failing to do so can
+ result in excessive realtime latencies or even OOM conditions.
+
+ Ways of gaining this self-limiting property when using call_rcu()
+ include:
+
+ a. Keeping a count of the number of data-structure elements
+ used by the RCU-protected data structure, including
+ those waiting for a grace period to elapse. Enforce a
+ limit on this number, stalling updates as needed to allow
+ previously deferred frees to complete. Alternatively,
+ limit only the number awaiting deferred free rather than
+ the total number of elements.
+
+ One way to stall the updates is to acquire the update-side
+ mutex. (Don't try this with a spinlock -- other CPUs
+ spinning on the lock could prevent the grace period
+ from ever ending.) Another way to stall the updates
+ is for the updates to use a wrapper function around
+ the memory allocator, so that this wrapper function
+ simulates OOM when there is too much memory awaiting an
+ RCU grace period. There are of course many other
+ variations on this theme.
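+
+	A minimal sketch of such an allocation wrapper (the names
+	and the limit are hypothetical):
+
+		static atomic_t nr_deferred; /* Awaiting a grace period. */
+
+		static struct foo *foo_alloc(void)
+		{
+			if (atomic_read(&nr_deferred) > FOO_MAX_DEFERRED)
+				return NULL; /* Simulate OOM, stalling updates. */
+			return kmalloc(sizeof(struct foo), GFP_KERNEL);
+		}
+
+		static void foo_rcu_free(struct rcu_head *rhp)
+		{
+			kfree(container_of(rhp, struct foo, rcu));
+			atomic_dec(&nr_deferred);
+		}
+
+		static void foo_defer_free(struct foo *p)
+		{
+			atomic_inc(&nr_deferred);
+			call_rcu(&p->rcu, foo_rcu_free);
+		}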
+
+ b. Limiting update rate. For example, if updates occur only
+ once per hour, then no explicit rate limiting is
+ required, unless your system is already badly broken.
+	Older versions of the dcache subsystem took this approach,
+	guarding updates with a global lock that limited their rate.
+
+ c. Trusted update -- if updates can only be done manually by
+ superuser or some other trusted user, then it might not
+ be necessary to automatically limit them. The theory
+ here is that superuser already has lots of ways to crash
+ the machine.
+
+ d. Use call_rcu_bh() rather than call_rcu(), in order to take
+ advantage of call_rcu_bh()'s faster grace periods. (This
+ is only a partial solution, though.)
+
+ e. Periodically invoke synchronize_rcu(), permitting a limited
+ number of updates per grace period.
+
+ The same cautions apply to call_rcu_bh(), call_rcu_sched(),
+ call_srcu(), and kfree_rcu().
+
+ Note that although these primitives do take action to avoid memory
+ exhaustion when any given CPU has too many callbacks, a determined
+ user could still exhaust memory. This is especially the case
+ if a system with a large number of CPUs has been configured to
+ offload all of its RCU callbacks onto a single CPU, or if the
+ system has relatively little free memory.
+
+9. All RCU list-traversal primitives, which include
+ rcu_dereference(), list_for_each_entry_rcu(), and
+ list_for_each_safe_rcu(), must be either within an RCU read-side
+ critical section or must be protected by appropriate update-side
+ locks. RCU read-side critical sections are delimited by
+ rcu_read_lock() and rcu_read_unlock(), or by similar primitives
+	such as rcu_read_lock_bh() and rcu_read_unlock_bh(), in which
+	case the matching variant of rcu_dereference() must be used
+	to keep lockdep happy; in this case, rcu_dereference_bh().
+
+ The reason that it is permissible to use RCU list-traversal
+ primitives when the update-side lock is held is that doing so
+ can be quite helpful in reducing code bloat when common code is
+ shared between readers and updaters. Additional primitives
+ are provided for this case, as discussed in lockdep.txt.
+
+10. Conversely, if you are in an RCU read-side critical section,
+ and you don't hold the appropriate update-side lock, you -must-
+ use the "_rcu()" variants of the list macros. Failing to do so
+ will break Alpha, cause aggressive compilers to generate bad code,
+ and confuse people trying to read your code.
+
+11. Note that synchronize_rcu() -only- guarantees to wait until
+ all currently executing rcu_read_lock()-protected RCU read-side
+ critical sections complete. It does -not- necessarily guarantee
+ that all currently running interrupts, NMIs, preempt_disable()
+ code, or idle loops will complete. Therefore, if your
+ read-side critical sections are protected by something other
+ than rcu_read_lock(), do -not- use synchronize_rcu().
+
+ Similarly, disabling preemption is not an acceptable substitute
+ for rcu_read_lock(). Code that attempts to use preemption
+ disabling where it should be using rcu_read_lock() will break
+ in real-time kernel builds.
+
+ If you want to wait for interrupt handlers, NMI handlers, and
+ code under the influence of preempt_disable(), you instead
+ need to use synchronize_irq() or synchronize_sched().
+
+ This same limitation also applies to synchronize_rcu_bh()
+ and synchronize_srcu(), as well as to the asynchronous and
+ expedited forms of the three primitives, namely call_rcu(),
+ call_rcu_bh(), call_srcu(), synchronize_rcu_expedited(),
+ synchronize_rcu_bh_expedited(), and synchronize_srcu_expedited().
+
+12. Any lock acquired by an RCU callback must be acquired elsewhere
+ with softirq disabled, e.g., via spin_lock_irqsave(),
+ spin_lock_bh(), etc. Failing to disable irq on a given
+ acquisition of that lock will result in deadlock as soon as
+ the RCU softirq handler happens to run your RCU callback while
+ interrupting that acquisition's critical section.
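+
+	For example, a sketch of the safe pattern (lock name
+	hypothetical):
+
+		/* In process context, exclude the softirq handler: */
+		spin_lock_bh(&mylock);
+		/* ... update the structure ... */
+		spin_unlock_bh(&mylock);
+
+		/* In the RCU callback, already in softirq context: */
+		spin_lock(&mylock);	/* Plain spin_lock() suffices here. */
+		/* ... access the structure ... */
+		spin_unlock(&mylock);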
+
+13.	RCU callbacks can be and are executed in parallel.  In many cases,
+	the callback code is simply a wrapper around kfree(), so this
+	is not an issue (or, more accurately, to the extent that it is
+	an issue, the memory-allocator locking handles it).  However,
+ if the callbacks do manipulate a shared data structure, they
+ must use whatever locking or other synchronization is required
+ to safely access and/or modify that data structure.
+
+ RCU callbacks are -usually- executed on the same CPU that executed
+ the corresponding call_rcu(), call_rcu_bh(), or call_rcu_sched(),
+ but are by -no- means guaranteed to be. For example, if a given
+ CPU goes offline while having an RCU callback pending, then that
+ RCU callback will execute on some surviving CPU. (If this was
+ not the case, a self-spawning RCU callback would prevent the
+ victim CPU from ever going offline.)
+
+14. SRCU (srcu_read_lock(), srcu_read_unlock(), srcu_dereference(),
+ synchronize_srcu(), synchronize_srcu_expedited(), and call_srcu())
+ may only be invoked from process context. Unlike other forms of
+ RCU, it -is- permissible to block in an SRCU read-side critical
+	section (demarcated by srcu_read_lock() and srcu_read_unlock()),
+ hence the "SRCU": "sleepable RCU". Please note that if you
+ don't need to sleep in read-side critical sections, you should be
+ using RCU rather than SRCU, because RCU is almost always faster
+ and easier to use than is SRCU.
+
+ Also unlike other forms of RCU, explicit initialization
+ and cleanup is required via init_srcu_struct() and
+ cleanup_srcu_struct(). These are passed a "struct srcu_struct"
+ that defines the scope of a given SRCU domain. Once initialized,
+	the srcu_struct is passed to srcu_read_lock(), srcu_read_unlock(),
+ synchronize_srcu(), synchronize_srcu_expedited(), and call_srcu().
+ A given synchronize_srcu() waits only for SRCU read-side critical
+ sections governed by srcu_read_lock() and srcu_read_unlock()
+ calls that have been passed the same srcu_struct. This property
+ is what makes sleeping read-side critical sections tolerable --
+ a given subsystem delays only its own updates, not those of other
+ subsystems using SRCU. Therefore, SRCU is less prone to OOM the
+ system than RCU would be if RCU's read-side critical sections
+ were permitted to sleep.
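+
+	A minimal sketch of an SRCU domain's lifecycle (names
+	hypothetical):
+
+		static struct srcu_struct my_srcu;
+
+		init_srcu_struct(&my_srcu);	/* Once, at init time. */
+
+		idx = srcu_read_lock(&my_srcu);	/* Reader, may block. */
+		p = srcu_dereference(gp, &my_srcu);
+		/* ... use p ... */
+		srcu_read_unlock(&my_srcu, idx);
+
+		synchronize_srcu(&my_srcu);	/* Updater. */
+
+		cleanup_srcu_struct(&my_srcu);	/* Once, at teardown. */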
+
+ The ability to sleep in read-side critical sections does not
+ come for free. First, corresponding srcu_read_lock() and
+ srcu_read_unlock() calls must be passed the same srcu_struct.
+ Second, grace-period-detection overhead is amortized only
+ over those updates sharing a given srcu_struct, rather than
+ being globally amortized as they are for other forms of RCU.
+ Therefore, SRCU should be used in preference to rw_semaphore
+ only in extremely read-intensive situations, or in situations
+ requiring SRCU's read-side deadlock immunity or low read-side
+ realtime latency.
+
+	Note that rcu_assign_pointer() relates to SRCU just as it does
+ to other forms of RCU.
+
+15. The whole point of call_rcu(), synchronize_rcu(), and friends
+ is to wait until all pre-existing readers have finished before
+ carrying out some otherwise-destructive operation. It is
+ therefore critically important to -first- remove any path
+ that readers can follow that could be affected by the
+ destructive operation, and -only- -then- invoke call_rcu(),
+ synchronize_rcu(), or friends.
+
+ Because these primitives only wait for pre-existing readers, it
+ is the caller's responsibility to guarantee that any subsequent
+ readers will execute safely.
+
+16. The various RCU read-side primitives do -not- necessarily contain
+ memory barriers. You should therefore plan for the CPU
+ and the compiler to freely reorder code into and out of RCU
+ read-side critical sections. It is the responsibility of the
+ RCU update-side primitives to deal with this.
+
+17. Use CONFIG_PROVE_RCU, CONFIG_DEBUG_OBJECTS_RCU_HEAD, and the
+ __rcu sparse checks (enabled by CONFIG_SPARSE_RCU_POINTER) to
+ validate your RCU code. These can help find problems as follows:
+
+ CONFIG_PROVE_RCU: check that accesses to RCU-protected data
+ structures are carried out under the proper RCU
+ read-side critical section, while holding the right
+ combination of locks, or whatever other conditions
+ are appropriate.
+
+ CONFIG_DEBUG_OBJECTS_RCU_HEAD: check that you don't pass the
+ same object to call_rcu() (or friends) before an RCU
+ grace period has elapsed since the last time that you
+ passed that same object to call_rcu() (or friends).
+
+ __rcu sparse checks: tag the pointer to the RCU-protected data
+ structure with __rcu, and sparse will warn you if you
+ access that pointer without the services of one of the
+ variants of rcu_dereference().
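+
+	For example (a minimal sketch, pointer name hypothetical):
+
+		struct foo __rcu *gp; /* sparse now checks accesses to gp. */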
+
+ These debugging aids can help you find problems that are
+ otherwise extremely difficult to spot.
diff --git a/kernel/Documentation/RCU/listRCU.txt b/kernel/Documentation/RCU/listRCU.txt
new file mode 100644
index 000000000..adb5a3782
--- /dev/null
+++ b/kernel/Documentation/RCU/listRCU.txt
@@ -0,0 +1,315 @@
+Using RCU to Protect Read-Mostly Linked Lists
+
+
+One of the best applications of RCU is to protect read-mostly linked lists
+("struct list_head" in list.h). One big advantage of this approach
+is that all of the required memory barriers are included for you in
+the list macros. This document describes several applications of RCU,
+with the best fits first.
+
+
+Example 1: Read-Side Action Taken Outside of Lock, No In-Place Updates
+
+The best applications are cases where, if reader-writer locking were
+used, the read-side lock would be dropped before taking any action
+based on the results of the search. The most celebrated example is
+the routing table. Because the routing table is tracking the state of
+equipment outside of the computer, it will at times contain stale data.
+Therefore, once the route has been computed, there is no need to hold
+the routing table static during transmission of the packet. After all,
+you can hold the routing table static all you want, but that won't keep
+the external Internet from changing, and it is the state of the external
+Internet that really matters. In addition, routing entries are typically
+added or deleted, rather than being modified in place.
+
+A straightforward example of this use of RCU may be found in the
+system-call auditing support. For example, a reader-writer locked
+implementation of audit_filter_task() might be as follows:
+
+ static enum audit_state audit_filter_task(struct task_struct *tsk)
+ {
+ struct audit_entry *e;
+ enum audit_state state;
+
+ read_lock(&auditsc_lock);
+ /* Note: audit_netlink_sem held by caller. */
+ list_for_each_entry(e, &audit_tsklist, list) {
+ if (audit_filter_rules(tsk, &e->rule, NULL, &state)) {
+ read_unlock(&auditsc_lock);
+ return state;
+ }
+ }
+ read_unlock(&auditsc_lock);
+ return AUDIT_BUILD_CONTEXT;
+ }
+
+Here the list is searched under the lock, but the lock is dropped before
+the corresponding value is returned. By the time that this value is acted
+on, the list may well have been modified. This makes sense, since if
+you are turning auditing off, it is OK to audit a few extra system calls.
+
+This means that RCU can be easily applied to the read side, as follows:
+
+ static enum audit_state audit_filter_task(struct task_struct *tsk)
+ {
+ struct audit_entry *e;
+ enum audit_state state;
+
+ rcu_read_lock();
+ /* Note: audit_netlink_sem held by caller. */
+ list_for_each_entry_rcu(e, &audit_tsklist, list) {
+ if (audit_filter_rules(tsk, &e->rule, NULL, &state)) {
+ rcu_read_unlock();
+ return state;
+ }
+ }
+ rcu_read_unlock();
+ return AUDIT_BUILD_CONTEXT;
+ }
+
+The read_lock() and read_unlock() calls have become rcu_read_lock()
+and rcu_read_unlock(), respectively, and the list_for_each_entry() has
+become list_for_each_entry_rcu(). The _rcu() list-traversal primitives
+insert the read-side memory barriers that are required on DEC Alpha CPUs.
+
+The changes to the update side are also straightforward. A reader-writer
+lock might be used as follows for deletion and insertion:
+
+ static inline int audit_del_rule(struct audit_rule *rule,
+ struct list_head *list)
+ {
+ struct audit_entry *e;
+
+ write_lock(&auditsc_lock);
+ list_for_each_entry(e, list, list) {
+ if (!audit_compare_rule(rule, &e->rule)) {
+ list_del(&e->list);
+ write_unlock(&auditsc_lock);
+ return 0;
+ }
+ }
+ write_unlock(&auditsc_lock);
+ return -EFAULT; /* No matching rule */
+ }
+
+ static inline int audit_add_rule(struct audit_entry *entry,
+ struct list_head *list)
+ {
+ write_lock(&auditsc_lock);
+ if (entry->rule.flags & AUDIT_PREPEND) {
+ entry->rule.flags &= ~AUDIT_PREPEND;
+ list_add(&entry->list, list);
+ } else {
+ list_add_tail(&entry->list, list);
+ }
+ write_unlock(&auditsc_lock);
+ return 0;
+ }
+
+Following are the RCU equivalents for these two functions:
+
+ static inline int audit_del_rule(struct audit_rule *rule,
+ struct list_head *list)
+ {
+ struct audit_entry *e;
+
+ /* Do not use the _rcu iterator here, since this is the only
+ * deletion routine. */
+ list_for_each_entry(e, list, list) {
+ if (!audit_compare_rule(rule, &e->rule)) {
+ list_del_rcu(&e->list);
+ call_rcu(&e->rcu, audit_free_rule);
+ return 0;
+ }
+ }
+ return -EFAULT; /* No matching rule */
+ }
+
+ static inline int audit_add_rule(struct audit_entry *entry,
+ struct list_head *list)
+ {
+ if (entry->rule.flags & AUDIT_PREPEND) {
+ entry->rule.flags &= ~AUDIT_PREPEND;
+ list_add_rcu(&entry->list, list);
+ } else {
+ list_add_tail_rcu(&entry->list, list);
+ }
+ return 0;
+ }
+
+Normally, the write_lock() and write_unlock() calls would be converted
+into spin_lock() and spin_unlock() calls, but in this case, all callers
+hold audit_netlink_sem, so no additional locking is required.  The
+auditsc_lock can therefore be eliminated entirely, since use of RCU
+eliminates the need for writers to exclude readers.
+
+The list_del(), list_add(), and list_add_tail() primitives have been
+replaced by list_del_rcu(), list_add_rcu(), and list_add_tail_rcu().
+The _rcu() list-manipulation primitives add memory barriers that are
+needed on weakly ordered CPUs (most of them!). The list_del_rcu()
+primitive omits the pointer poisoning debug-assist code that would
+otherwise cause concurrent readers to fail spectacularly.
+
+So, when readers can tolerate stale data and when entries are either added
+or deleted, without in-place modification, it is very easy to use RCU!
+
+
+Example 2: Handling In-Place Updates
+
+The system-call auditing code does not update auditing rules in place.
+However, if it did, reader-writer-locked code to do so might look as
+follows (presumably, the field_count is only permitted to decrease,
+otherwise, the added fields would need to be filled in):
+
+ static inline int audit_upd_rule(struct audit_rule *rule,
+ struct list_head *list,
+ __u32 newaction,
+ __u32 newfield_count)
+ {
+ struct audit_entry *e;
+
+ write_lock(&auditsc_lock);
+ /* Note: audit_netlink_sem held by caller. */
+ list_for_each_entry(e, list, list) {
+ if (!audit_compare_rule(rule, &e->rule)) {
+ e->rule.action = newaction;
+				e->rule.field_count = newfield_count;
+ write_unlock(&auditsc_lock);
+ return 0;
+ }
+ }
+ write_unlock(&auditsc_lock);
+ return -EFAULT; /* No matching rule */
+ }
+
+The RCU version creates a copy, updates the copy, then replaces the old
+entry with the newly updated entry. This sequence of actions, allowing
+concurrent reads while doing a copy to perform an update, is what gives
+RCU ("read-copy update") its name. The RCU code is as follows:
+
+ static inline int audit_upd_rule(struct audit_rule *rule,
+ struct list_head *list,
+ __u32 newaction,
+ __u32 newfield_count)
+ {
+ struct audit_entry *e;
+		struct audit_entry *ne;
+
+ list_for_each_entry(e, list, list) {
+ if (!audit_compare_rule(rule, &e->rule)) {
+				ne = kmalloc(sizeof(*ne), GFP_ATOMIC);
+ if (ne == NULL)
+ return -ENOMEM;
+ audit_copy_rule(&ne->rule, &e->rule);
+ ne->rule.action = newaction;
+				ne->rule.field_count = newfield_count;
+ list_replace_rcu(&e->list, &ne->list);
+ call_rcu(&e->rcu, audit_free_rule);
+ return 0;
+ }
+ }
+ return -EFAULT; /* No matching rule */
+ }
+
+Again, this assumes that the caller holds audit_netlink_sem. Normally,
+the reader-writer lock would become a spinlock in this sort of code.
+
+
+Example 3: Eliminating Stale Data
+
+The auditing examples above tolerate stale data, as do most algorithms
+that are tracking external state. Because there is a delay from the
+time the external state changes before Linux becomes aware of the change,
+additional RCU-induced staleness is normally not a problem.
+
+However, there are many examples where stale data cannot be tolerated.
+One example in the Linux kernel is the System V IPC (see the ipc_lock()
+function in ipc/util.c). This code checks a "deleted" flag under a
+per-entry spinlock, and, if the "deleted" flag is set, pretends that the
+entry does not exist. For this to be helpful, the search function must
+return holding the per-entry spinlock, as ipc_lock() does in fact do.
+
+Quick Quiz: Why does the search function need to return holding the
+ per-entry lock for this deleted-flag technique to be helpful?
+
+If the system-call audit module were to ever need to reject stale data,
+one way to accomplish this would be to add a "deleted" flag and a "lock"
+spinlock to the audit_entry structure, and modify audit_filter_task()
+as follows:
+
+ static enum audit_state audit_filter_task(struct task_struct *tsk)
+ {
+ struct audit_entry *e;
+ enum audit_state state;
+
+ rcu_read_lock();
+ list_for_each_entry_rcu(e, &audit_tsklist, list) {
+ if (audit_filter_rules(tsk, &e->rule, NULL, &state)) {
+ spin_lock(&e->lock);
+ if (e->deleted) {
+ spin_unlock(&e->lock);
+ rcu_read_unlock();
+ return AUDIT_BUILD_CONTEXT;
+ }
+ rcu_read_unlock();
+ return state;
+ }
+ }
+ rcu_read_unlock();
+ return AUDIT_BUILD_CONTEXT;
+ }
+
+Note that this example assumes that entries are only added and deleted.
+Additional mechanism is required to deal correctly with the
+update-in-place performed by audit_upd_rule(). For one thing,
+audit_upd_rule() would need additional memory barriers to ensure
+that the list_add_rcu() was really executed before the list_del_rcu().
+
+The audit_del_rule() function would need to set the "deleted"
+flag under the spinlock as follows:
+
+ static inline int audit_del_rule(struct audit_rule *rule,
+ struct list_head *list)
+ {
+ struct audit_entry *e;
+
+ /* Do not need to use the _rcu iterator here, since this
+ * is the only deletion routine. */
+ list_for_each_entry(e, list, list) {
+ if (!audit_compare_rule(rule, &e->rule)) {
+ spin_lock(&e->lock);
+ list_del_rcu(&e->list);
+ e->deleted = 1;
+ spin_unlock(&e->lock);
+ call_rcu(&e->rcu, audit_free_rule);
+ return 0;
+ }
+ }
+ return -EFAULT; /* No matching rule */
+ }
+
+
+Summary
+
+Read-mostly list-based data structures that can tolerate stale data are
+the most amenable to use of RCU. The simplest case is where entries are
+either added or deleted from the data structure (or atomically modified
+in place), but non-atomic in-place modifications can be handled by making
+a copy, updating the copy, then replacing the original with the copy.
+If stale data cannot be tolerated, then a "deleted" flag may be used
+in conjunction with a per-entry spinlock in order to allow the search
+function to reject newly deleted data.
+
+
+Answer to Quick Quiz
+ Why does the search function need to return holding the per-entry
+ lock for this deleted-flag technique to be helpful?
+
+ If the search function drops the per-entry lock before returning,
+ then the caller will be processing stale data in any case. If it
+ is really OK to be processing stale data, then you don't need a
+ "deleted" flag. If processing stale data really is a problem,
+ then you need to hold the per-entry lock across all of the code
+ that uses the value that was returned.
diff --git a/kernel/Documentation/RCU/lockdep-splat.txt b/kernel/Documentation/RCU/lockdep-splat.txt
new file mode 100644
index 000000000..bf9061142
--- /dev/null
+++ b/kernel/Documentation/RCU/lockdep-splat.txt
@@ -0,0 +1,110 @@
+Lockdep-RCU was added to the Linux kernel in early 2010
+(http://lwn.net/Articles/371986/). This facility checks for some common
+misuses of the RCU API, most notably using one of the rcu_dereference()
+family to access an RCU-protected pointer without the proper protection.
+When such misuse is detected, a lockdep-RCU splat is emitted.
+
+The usual cause of a lockdep-RCU splat is someone accessing an
+RCU-protected data structure without either (1) being in the right kind of
+RCU read-side critical section or (2) holding the right update-side lock.
+This problem can therefore be serious: it might result in random memory
+overwriting or worse. There can of course be false positives, this
+being the real world and all that.
+
+So let's look at an example RCU lockdep splat from 3.0-rc5, one that
+has long since been fixed:
+
+===============================
+[ INFO: suspicious RCU usage. ]
+-------------------------------
+block/cfq-iosched.c:2776 suspicious rcu_dereference_protected() usage!
+
+other info that might help us debug this:
+
+
+rcu_scheduler_active = 1, debug_locks = 0
+3 locks held by scsi_scan_6/1552:
+ #0: (&shost->scan_mutex){+.+.+.}, at: [<ffffffff8145efca>]
+scsi_scan_host_selected+0x5a/0x150
+ #1: (&eq->sysfs_lock){+.+...}, at: [<ffffffff812a5032>]
+elevator_exit+0x22/0x60
+ #2: (&(&q->__queue_lock)->rlock){-.-...}, at: [<ffffffff812b6233>]
+cfq_exit_queue+0x43/0x190
+
+stack backtrace:
+Pid: 1552, comm: scsi_scan_6 Not tainted 3.0.0-rc5 #17
+Call Trace:
+ [<ffffffff810abb9b>] lockdep_rcu_dereference+0xbb/0xc0
+ [<ffffffff812b6139>] __cfq_exit_single_io_context+0xe9/0x120
+ [<ffffffff812b626c>] cfq_exit_queue+0x7c/0x190
+ [<ffffffff812a5046>] elevator_exit+0x36/0x60
+ [<ffffffff812a802a>] blk_cleanup_queue+0x4a/0x60
+ [<ffffffff8145cc09>] scsi_free_queue+0x9/0x10
+ [<ffffffff81460944>] __scsi_remove_device+0x84/0xd0
+ [<ffffffff8145dca3>] scsi_probe_and_add_lun+0x353/0xb10
+ [<ffffffff817da069>] ? error_exit+0x29/0xb0
+ [<ffffffff817d98ed>] ? _raw_spin_unlock_irqrestore+0x3d/0x80
+ [<ffffffff8145e722>] __scsi_scan_target+0x112/0x680
+ [<ffffffff812c690d>] ? trace_hardirqs_off_thunk+0x3a/0x3c
+ [<ffffffff817da069>] ? error_exit+0x29/0xb0
+ [<ffffffff812bcc60>] ? kobject_del+0x40/0x40
+ [<ffffffff8145ed16>] scsi_scan_channel+0x86/0xb0
+ [<ffffffff8145f0b0>] scsi_scan_host_selected+0x140/0x150
+ [<ffffffff8145f149>] do_scsi_scan_host+0x89/0x90
+ [<ffffffff8145f170>] do_scan_async+0x20/0x160
+ [<ffffffff8145f150>] ? do_scsi_scan_host+0x90/0x90
+ [<ffffffff810975b6>] kthread+0xa6/0xb0
+ [<ffffffff817db154>] kernel_thread_helper+0x4/0x10
+ [<ffffffff81066430>] ? finish_task_switch+0x80/0x110
+ [<ffffffff817d9c04>] ? retint_restore_args+0xe/0xe
+ [<ffffffff81097510>] ? __init_kthread_worker+0x70/0x70
+ [<ffffffff817db150>] ? gs_change+0xb/0xb
+
+Line 2776 of block/cfq-iosched.c in v3.0-rc5 is as follows:
+
+ if (rcu_dereference(ioc->ioc_data) == cic) {
+
+This form says that it must be in a plain vanilla RCU read-side critical
+section, but the "other info" list above shows that this is not the
+case. Instead, we hold three locks, one of which might be RCU related.
+And maybe that lock really does protect this reference. If so, the fix
+is to inform RCU, perhaps by changing __cfq_exit_single_io_context() to
+take the struct request_queue "q" from cfq_exit_queue() as an argument,
+which would permit us to invoke rcu_dereference_protected() as follows:
+
+ if (rcu_dereference_protected(ioc->ioc_data,
+ lockdep_is_held(&q->queue_lock)) == cic) {
+
+With this change, there would be no lockdep-RCU splat emitted if this
+code was invoked either from within an RCU read-side critical section
+or with the ->queue_lock held. In particular, this would have suppressed
+the above lockdep-RCU splat because ->queue_lock is held (see #2 in the
+list above).
+
+On the other hand, perhaps we really do need an RCU read-side critical
+section. In this case, the critical section must span the use of the
+return value from rcu_dereference(), or at least until there is some
+reference count incremented or some such. One way to handle this is to
+add rcu_read_lock() and rcu_read_unlock() as follows:
+
+ rcu_read_lock();
+ if (rcu_dereference(ioc->ioc_data) == cic) {
+ spin_lock(&ioc->lock);
+ rcu_assign_pointer(ioc->ioc_data, NULL);
+ spin_unlock(&ioc->lock);
+ }
+ rcu_read_unlock();
+
+With this change, the rcu_dereference() is always within an RCU
+read-side critical section, which again would have suppressed the
+above lockdep-RCU splat.
+
+But in this particular case, we don't actually dereference the pointer
+returned from rcu_dereference(). Instead, that pointer is just compared
+to the cic pointer, which means that the rcu_dereference() can be replaced
+by rcu_access_pointer() as follows:
+
+ if (rcu_access_pointer(ioc->ioc_data) == cic) {
+
+Because it is legal to invoke rcu_access_pointer() without protection,
+this change would also suppress the above lockdep-RCU splat.
diff --git a/kernel/Documentation/RCU/lockdep.txt b/kernel/Documentation/RCU/lockdep.txt
new file mode 100644
index 000000000..cd83d2348
--- /dev/null
+++ b/kernel/Documentation/RCU/lockdep.txt
@@ -0,0 +1,112 @@
+RCU and lockdep checking
+
+All flavors of RCU have lockdep checking available, so that lockdep is
+aware of when each task enters and leaves any flavor of RCU read-side
+critical section. Each flavor of RCU is tracked separately (but note
+that this is not the case in 2.6.32 and earlier). This allows lockdep's
+tracking to include RCU state, which can sometimes help when debugging
+deadlocks and the like.
+
+In addition, RCU provides the following primitives that check lockdep's
+state:
+
+ rcu_read_lock_held() for normal RCU.
+ rcu_read_lock_bh_held() for RCU-bh.
+ rcu_read_lock_sched_held() for RCU-sched.
+ srcu_read_lock_held() for SRCU.
+
+These functions are conservative, and will therefore return 1 if they
+aren't certain (for example, if CONFIG_DEBUG_LOCK_ALLOC is not set).
+This prevents things like WARN_ON(!rcu_read_lock_held()) from giving false
+positives when lockdep is disabled.
+
+In addition, a separate kernel config parameter CONFIG_PROVE_RCU enables
+checking of rcu_dereference() primitives:
+
+ rcu_dereference(p):
+ Check for RCU read-side critical section.
+ rcu_dereference_bh(p):
+ Check for RCU-bh read-side critical section.
+ rcu_dereference_sched(p):
+ Check for RCU-sched read-side critical section.
+ srcu_dereference(p, sp):
+ Check for SRCU read-side critical section.
+ rcu_dereference_check(p, c):
+ Use explicit check expression "c" along with
+ rcu_read_lock_held(). This is useful in code that is
+ invoked by both RCU readers and updaters.
+ rcu_dereference_bh_check(p, c):
+ Use explicit check expression "c" along with
+ rcu_read_lock_bh_held(). This is useful in code that
+ is invoked by both RCU-bh readers and updaters.
+ rcu_dereference_sched_check(p, c):
+ Use explicit check expression "c" along with
+ rcu_read_lock_sched_held(). This is useful in code that
+ is invoked by both RCU-sched readers and updaters.
+ srcu_dereference_check(p, c):
+ Use explicit check expression "c" along with
+		srcu_read_lock_held().  This is useful in code that
+ is invoked by both SRCU readers and updaters.
+ rcu_dereference_index_check(p, c):
+ Use explicit check expression "c", but the caller
+ must supply one of the rcu_read_lock_held() functions.
+ This is useful in code that uses RCU-protected arrays
+ that is invoked by both RCU readers and updaters.
+ rcu_dereference_raw(p):
+ Don't check. (Use sparingly, if at all.)
+ rcu_dereference_protected(p, c):
+ Use explicit check expression "c", and omit all barriers
+ and compiler constraints. This is useful when the data
+ structure cannot change, for example, in code that is
+ invoked only by updaters.
+ rcu_access_pointer(p):
+ Return the value of the pointer and omit all barriers,
+ but retain the compiler constraints that prevent duplicating
+		or coalescing.  This is useful when testing the
+		value of the pointer itself, for example, against NULL.
+ rcu_access_index(idx):
+ Return the value of the index and omit all barriers, but
+ retain the compiler constraints that prevent duplicating
+		or coalescing.  This is useful when testing the
+		value of the index itself, for example, against -1.
+
+The rcu_dereference_check() check expression can be any boolean
+expression, but would normally include a lockdep expression.  For a
+moderately ornate example, consider the following:
+
+ file = rcu_dereference_check(fdt->fd[fd],
+ lockdep_is_held(&files->file_lock) ||
+ atomic_read(&files->count) == 1);
+
+This expression picks up the pointer "fdt->fd[fd]" in an RCU-safe manner,
+and, if CONFIG_PROVE_RCU is configured, verifies that this expression
+is used in:
+
+1. An RCU read-side critical section (implicit), or
+2. with files->file_lock held, or
+3. on an unshared files_struct.
+
+In case (1), the pointer is picked up in an RCU-safe manner for vanilla
+RCU read-side critical sections, in case (2) the ->file_lock prevents
+any change from taking place, and finally, in case (3) the current task
+is the only task accessing the files_struct, again preventing any change
+from taking place.  If the above statement were invoked only from updater
+code, it could instead be written as follows:
+
+ file = rcu_dereference_protected(fdt->fd[fd],
+ lockdep_is_held(&files->file_lock) ||
+ atomic_read(&files->count) == 1);
+
+This would verify cases #2 and #3 above, and furthermore lockdep would
+complain if this was used in an RCU read-side critical section unless one
+of these two cases held. Because rcu_dereference_protected() omits all
+barriers and compiler constraints, it generates better code than do the
+other flavors of rcu_dereference(). On the other hand, it is illegal
+to use rcu_dereference_protected() if either the RCU-protected pointer
+or the RCU-protected data that it points to can change concurrently.
+
+There are currently only "universal" versions of the rcu_assign_pointer()
+and RCU list-/tree-traversal primitives, which do not (yet) check for
+being in an RCU read-side critical section. In the future, separate
+versions of these primitives might be created.
diff --git a/kernel/Documentation/RCU/rcu.txt b/kernel/Documentation/RCU/rcu.txt
new file mode 100644
index 000000000..745f429fd
--- /dev/null
+++ b/kernel/Documentation/RCU/rcu.txt
@@ -0,0 +1,96 @@
+RCU Concepts
+
+
+The basic idea behind RCU (read-copy update) is to split destructive
+operations into two parts, one that prevents anyone from seeing the data
+item being destroyed, and one that actually carries out the destruction.
+A "grace period" must elapse between the two parts, and this grace period
+must be long enough that any readers accessing the item being deleted have
+since dropped their references. For example, an RCU-protected deletion
+from a linked list would first remove the item from the list, wait for
+a grace period to elapse, then free the element. See the listRCU.txt
+file for more information on using RCU with linked lists.
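+
+In code, the canonical sequence looks something like this (assuming an
+appropriate update-side lock is held, and a hypothetical list element
+"p" with a list_head named "list"):
+
+	list_del_rcu(&p->list);	/* Part one: readers can no longer find p. */
+	synchronize_rcu();	/* Wait for pre-existing readers to finish. */
+	kfree(p);		/* Part two: destroy. */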
+
+
+Frequently Asked Questions
+
+o Why would anyone want to use RCU?
+
+ The advantage of RCU's two-part approach is that RCU readers need
+ not acquire any locks, perform any atomic instructions, write to
+ shared memory, or (on CPUs other than Alpha) execute any memory
+ barriers. The fact that these operations are quite expensive
+ on modern CPUs is what gives RCU its performance advantages
+ in read-mostly situations. The fact that RCU readers need not
+ acquire locks can also greatly simplify deadlock-avoidance code.
+
+o How can the updater tell when a grace period has completed
+ if the RCU readers give no indication when they are done?
+
+ Just as with spinlocks, RCU readers are not permitted to
+ block, switch to user-mode execution, or enter the idle loop.
+ Therefore, as soon as a CPU is seen passing through any of these
+ three states, we know that that CPU has exited any previous RCU
+ read-side critical sections. So, if we remove an item from a
+ linked list, and then wait until all CPUs have switched context,
+ executed in user mode, or executed in the idle loop, we can
+ safely free up that item.
+
+ Preemptible variants of RCU (CONFIG_PREEMPT_RCU) get the
+ same effect, but require that the readers manipulate CPU-local
+ counters. These counters allow limited types of blocking within
+ RCU read-side critical sections. SRCU also uses CPU-local
+ counters, and permits general blocking within RCU read-side
+ critical sections. These variants of RCU detect grace periods
+ by sampling these counters.
+
+o If I am running on a uniprocessor kernel, which can only do one
+ thing at a time, why should I wait for a grace period?
+
+ See the UP.txt file in this directory.
+
+o How can I see where RCU is currently used in the Linux kernel?
+
+ Search for "rcu_read_lock", "rcu_read_unlock", "call_rcu",
+ "rcu_read_lock_bh", "rcu_read_unlock_bh", "call_rcu_bh",
+ "srcu_read_lock", "srcu_read_unlock", "synchronize_rcu",
+ "synchronize_net", "synchronize_srcu", and the other RCU
+ primitives. Or grab one of the cscope databases from:
+
+ http://www.rdrop.com/users/paulmck/RCU/linuxusage/rculocktab.html
+
+o What guidelines should I follow when writing code that uses RCU?
+
+ See the checklist.txt file in this directory.
+
+o Why the name "RCU"?
+
+ "RCU" stands for "read-copy update". The file listRCU.txt has
+ more information on where this name came from, search for
+ "read-copy update" to find it.
+
+o I hear that RCU is patented? What is with that?
+
+ Yes, it is. There are several known patents related to RCU,
+ search for the string "Patent" in RTFP.txt to find them.
+ Of these, one was allowed to lapse by the assignee, and the
+ others have been contributed to the Linux kernel under GPL.
+ There are now also LGPL implementations of user-level RCU
+ available (http://lttng.org/?q=node/18).
+
+o I hear that RCU needs work in order to support realtime kernels?
+
+ This work is largely completed. Realtime-friendly RCU can be
+ enabled via the CONFIG_PREEMPT_RCU kernel configuration
+ parameter. However, work is in progress for enabling priority
+ boosting of preempted RCU read-side critical sections. This is
+ needed if you have CPU-bound realtime threads.
+
+o Where can I find more information on RCU?
+
+ See the RTFP.txt file in this directory.
+ Or point your browser at http://www.rdrop.com/users/paulmck/RCU/.
+
+o What are all these files in this directory?
+
+ See 00-INDEX for the list.
diff --git a/kernel/Documentation/RCU/rcu_dereference.txt b/kernel/Documentation/RCU/rcu_dereference.txt
new file mode 100644
index 000000000..ceb05da5a
--- /dev/null
+++ b/kernel/Documentation/RCU/rcu_dereference.txt
@@ -0,0 +1,371 @@
+PROPER CARE AND FEEDING OF RETURN VALUES FROM rcu_dereference()
+
+Most of the time, you can use values from rcu_dereference() or one of
+the similar primitives without worries. Dereferencing (prefix "*"),
+field selection ("->"), assignment ("="), address-of ("&"), addition and
+subtraction of constants, and casts all work quite naturally and safely.
+
+It is nevertheless possible to get into trouble with other operations.
+Follow these rules to keep your RCU code working properly:
+
+o You must use one of the rcu_dereference() family of primitives
+ to load an RCU-protected pointer, otherwise CONFIG_PROVE_RCU
+ will complain. Worse yet, your code can see random memory-corruption
+ bugs due to games that compilers and DEC Alpha can play.
+ Without one of the rcu_dereference() primitives, compilers
+ can reload the value, and won't your code have fun with two
+ different values for a single pointer! Without rcu_dereference(),
+ DEC Alpha can load a pointer, dereference that pointer, and
+ return data preceding initialization that preceded the store of
+ the pointer.
+
+ In addition, the volatile cast in rcu_dereference() prevents the
+ compiler from deducing the resulting pointer value. Please see
+ the section entitled "EXAMPLE WHERE THE COMPILER KNOWS TOO MUCH"
+ for an example where the compiler can in fact deduce the exact
+ value of the pointer, and thus cause misordering.
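+
+	For example, given an RCU-protected global pointer "gp"
+	(hypothetical):
+
+		p = gp;				/* BUGGY: unordered load. */
+		p = rcu_dereference(gp);	/* Safe: ordered load. */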
+
+o Do not use single-element RCU-protected arrays. The compiler
+	is within its rights to assume that the value of an index into
+ such an array must necessarily evaluate to zero. The compiler
+ could then substitute the constant zero for the computation, so
+ that the array index no longer depended on the value returned
+ by rcu_dereference(). If the array index no longer depends
+ on rcu_dereference(), then both the compiler and the CPU
+ are within their rights to order the array access before the
+ rcu_dereference(), which can cause the array access to return
+ garbage.
+
+o Avoid cancellation when using the "+" and "-" infix arithmetic
+ operators. For example, for a given variable "x", avoid
+ "(x-x)". There are similar arithmetic pitfalls from other
+	arithmetic operators, such as "(x*0)", "(x/(x+1))" or "(x%1)".
+ The compiler is within its rights to substitute zero for all of
+ these expressions, so that subsequent accesses no longer depend
+ on the rcu_dereference(), again possibly resulting in bugs due
+ to misordering.
+
+ Of course, if "p" is a pointer from rcu_dereference(), and "a"
+ and "b" are integers that happen to be equal, the expression
+ "p+a-b" is safe because its value still necessarily depends on
+ the rcu_dereference(), thus maintaining proper ordering.
+
+o Avoid all-zero operands to the bitwise "&" operator, and
+ similarly avoid all-ones operands to the bitwise "|" operator.
+ If the compiler is able to deduce the value of such operands,
+ it is within its rights to substitute the corresponding constant
+ for the bitwise operation. Once again, this causes subsequent
+ accesses to no longer depend on the rcu_dereference(), causing
+ bugs due to misordering.
+
+ Please note that single-bit operands to bitwise "&" can also
+ be dangerous. At this point, the compiler knows that the
+ resulting value can only take on one of two possible values.
+ Therefore, a very small amount of additional information will
+ allow the compiler to deduce the exact value, which again can
+ result in misordering.
+
+o If you are using RCU to protect JITed functions, so that the
+ "()" function-invocation operator is applied to a value obtained
+ (directly or indirectly) from rcu_dereference(), you may need to
+ interact directly with the hardware to flush instruction caches.
+ This issue arises on some systems when a newly JITed function is
+ using the same memory that was used by an earlier JITed function.
+
+o Do not use the results from the boolean "&&" and "||" when
+ dereferencing. For example, the following (rather improbable)
+ code is buggy:
+
+ int a[2];
+	int r1, r2;
+ int force_zero_index = 1;
+
+ ...
+
+	r1 = rcu_dereference(i1);
+ r2 = a[r1 && force_zero_index]; /* BUGGY!!! */
+
+ The reason this is buggy is that "&&" and "||" are often compiled
+ using branches. While weak-memory machines such as ARM or PowerPC
+ do order stores after such branches, they can speculate loads,
+ which can result in misordering bugs.
+
+o Do not use the results from relational operators ("==", "!=",
+ ">", ">=", "<", or "<=") when dereferencing. For example,
+ the following (quite strange) code is buggy:
+
+ int a[2];
+	int r1, r2;
+ int flip_index = 0;
+
+ ...
+
+	r1 = rcu_dereference(i1);
+ r2 = a[r1 != flip_index]; /* BUGGY!!! */
+
+	As before, the reason this is buggy is that relational operators
+	are often compiled using branches.  And as before, although
+	weak-memory machines such as ARM or PowerPC do order stores
+	after such branches, they can speculate loads, which can again
+	result in misordering bugs.
+
+o Be very careful about comparing pointers obtained from
+ rcu_dereference() against non-NULL values. As Linus Torvalds
+ explained, if the two pointers are equal, the compiler could
+ substitute the pointer you are comparing against for the pointer
+ obtained from rcu_dereference(). For example:
+
+ p = rcu_dereference(gp);
+ if (p == &default_struct)
+ do_default(p->a);
+
+ Because the compiler now knows that the value of "p" is exactly
+ the address of the variable "default_struct", it is free to
+ transform this code into the following:
+
+ p = rcu_dereference(gp);
+ if (p == &default_struct)
+ do_default(default_struct.a);
+
+ On ARM and Power hardware, the load from "default_struct.a"
+ can now be speculated, such that it might happen before the
+ rcu_dereference(). This could result in bugs due to misordering.
+
+ However, comparisons are OK in the following cases:
+
+ o The comparison was against the NULL pointer. If the
+ compiler knows that the pointer is NULL, you had better
+ not be dereferencing it anyway. If the comparison is
+ non-equal, the compiler is none the wiser. Therefore,
+ it is safe to compare pointers from rcu_dereference()
+ against NULL pointers.
+
+ o The pointer is never dereferenced after being compared.
+ Since there are no subsequent dereferences, the compiler
+ cannot use anything it learned from the comparison
+ to reorder the non-existent subsequent dereferences.
+ This sort of comparison occurs frequently when scanning
+ RCU-protected circular linked lists.
+
+ o The comparison is against a pointer that references memory
+ that was initialized "a long time ago." The reason
+ this is safe is that even if misordering occurs, the
+ misordering will not affect the accesses that follow
+ the comparison. So exactly how long ago is "a long
+ time ago"? Here are some possibilities:
+
+ o Compile time.
+
+ o Boot time.
+
+ o Module-init time for module code.
+
+ o Prior to kthread creation for kthread code.
+
+ o During some prior acquisition of the lock that
+ we now hold.
+
+ o Before mod_timer() time for a timer handler.
+
+ There are many other possibilities involving the Linux
+ kernel's wide array of primitives that cause code to
+ be invoked at a later time.
+
+ o The pointer being compared against also came from
+ rcu_dereference(). In this case, both pointers depend
+ on one rcu_dereference() or another, so you get proper
+ ordering either way.
+
+ That said, this situation can make certain RCU usage
+ bugs more likely to happen. Which can be a good thing,
+ at least if they happen during testing. An example
+ of such an RCU usage bug is shown in the section titled
+ "EXAMPLE OF AMPLIFIED RCU-USAGE BUG".
+
+ o All of the accesses following the comparison are stores,
+ so that a control dependency preserves the needed ordering.
+ That said, it is easy to get control dependencies wrong.
+ Please see the "CONTROL DEPENDENCIES" section of
+ Documentation/memory-barriers.txt for more details.
+
+ o The pointers are not equal -and- the compiler does
+ not have enough information to deduce the value of the
+ pointer. Note that the volatile cast in rcu_dereference()
+ will normally prevent the compiler from knowing too much.
+
+o Disable any value-speculation optimizations that your compiler
+ might provide, especially if you are making use of feedback-based
+ optimizations that take data collected from prior runs. Such
+ value-speculation optimizations reorder operations by design.
+
+ There is one exception to this rule: Value-speculation
+ optimizations that leverage the branch-prediction hardware are
+ safe on strongly ordered systems (such as x86), but not on weakly
+ ordered systems (such as ARM or Power). Choose your compiler
+ command-line options wisely!
+
+
+EXAMPLE OF AMPLIFIED RCU-USAGE BUG
+
+Because updaters can run concurrently with RCU readers, RCU readers can
+see stale and/or inconsistent values. If RCU readers need fresh or
+consistent values, which they sometimes do, they need to take proper
+precautions. To see this, consider the following code fragment:
+
+ struct foo {
+ int a;
+ int b;
+ int c;
+ };
+ struct foo *gp1;
+ struct foo *gp2;
+
+ void updater(void)
+ {
+ struct foo *p;
+
+ p = kmalloc(...);
+ if (p == NULL)
+ deal_with_it();
+ p->a = 42; /* Each field in its own cache line. */
+ p->b = 43;
+ p->c = 44;
+ rcu_assign_pointer(gp1, p);
+ p->b = 143;
+ p->c = 144;
+ rcu_assign_pointer(gp2, p);
+ }
+
+ void reader(void)
+ {
+ struct foo *p;
+ struct foo *q;
+ int r1, r2;
+
+ p = rcu_dereference(gp2);
+ if (p == NULL)
+ return;
+ r1 = p->b; /* Guaranteed to get 143. */
+ q = rcu_dereference(gp1); /* Guaranteed non-NULL. */
+ if (p == q) {
+ /* The compiler decides that q->c is same as p->c. */
+			r2 = p->c; /* Could get 44 on weakly ordered system. */
+ }
+ do_something_with(r1, r2);
+ }
+
+You might be surprised that the outcome (r1 == 143 && r2 == 44) is possible,
+but you should not be. After all, the updater might have been invoked
+a second time between the time reader() loaded into "r1" and the time
+that it loaded into "r2". The fact that this same result can occur due
+to some reordering from the compiler and CPUs is beside the point.
+
+But suppose that the reader needs a consistent view?
+
+Then one approach is to use locking, for example, as follows:
+
+ struct foo {
+ int a;
+ int b;
+ int c;
+ spinlock_t lock;
+ };
+ struct foo *gp1;
+ struct foo *gp2;
+
+ void updater(void)
+ {
+ struct foo *p;
+
+ p = kmalloc(...);
+ if (p == NULL)
+ deal_with_it();
+ spin_lock(&p->lock);
+ p->a = 42; /* Each field in its own cache line. */
+ p->b = 43;
+ p->c = 44;
+ spin_unlock(&p->lock);
+ rcu_assign_pointer(gp1, p);
+ spin_lock(&p->lock);
+ p->b = 143;
+ p->c = 144;
+ spin_unlock(&p->lock);
+ rcu_assign_pointer(gp2, p);
+ }
+
+ void reader(void)
+ {
+ struct foo *p;
+ struct foo *q;
+ int r1, r2;
+
+ p = rcu_dereference(gp2);
+ if (p == NULL)
+ return;
+ spin_lock(&p->lock);
+ r1 = p->b; /* Guaranteed to get 143. */
+ q = rcu_dereference(gp1); /* Guaranteed non-NULL. */
+ if (p == q) {
+ /* The compiler decides that q->c is same as p->c. */
+ r2 = p->c; /* Locking guarantees r2 == 144. */
+ }
+ spin_unlock(&p->lock);
+ do_something_with(r1, r2);
+ }
+
+As always, use the right tool for the job!
+
+
+EXAMPLE WHERE THE COMPILER KNOWS TOO MUCH
+
+If a pointer obtained from rcu_dereference() compares not-equal to some
+other pointer, the compiler normally has no clue what the value of the
+first pointer might be. This lack of knowledge prevents the compiler
+from carrying out optimizations that otherwise might destroy the ordering
+guarantees that RCU depends on. And the volatile cast in rcu_dereference()
+should prevent the compiler from guessing the value.
+
+But without rcu_dereference(), the compiler knows more than you might
+expect. Consider the following code fragment:
+
+ struct foo {
+ int a;
+ int b;
+ };
+ static struct foo variable1;
+ static struct foo variable2;
+ static struct foo *gp = &variable1;
+
+ void updater(void)
+ {
+ initialize_foo(&variable2);
+ rcu_assign_pointer(gp, &variable2);
+ /*
+ * The above is the only store to gp in this translation unit,
+ * and the address of gp is not exported in any way.
+ */
+ }
+
+ int reader(void)
+ {
+ struct foo *p;
+
+ p = gp;
+ barrier();
+ if (p == &variable1)
+ return p->a; /* Must be variable1.a. */
+ else
+ return p->b; /* Must be variable2.b. */
+ }
+
+Because the compiler can see all stores to "gp", it knows that the only
+possible values of "gp" are "variable1" on the one hand and "variable2"
+on the other. The comparison in reader() therefore tells the compiler
+the exact value of "p" even in the not-equals case. This allows the
+compiler to make the return values independent of the load from "gp",
+in turn destroying the ordering between this load and the loads of the
+return values. This can result in "p->b" returning pre-initialization
+garbage values.
+
+In short, rcu_dereference() is -not- optional when you are going to
+dereference the resulting pointer.
diff --git a/kernel/Documentation/RCU/rcubarrier.txt b/kernel/Documentation/RCU/rcubarrier.txt
new file mode 100644
index 000000000..b10cfe711
--- /dev/null
+++ b/kernel/Documentation/RCU/rcubarrier.txt
@@ -0,0 +1,321 @@
+RCU and Unloadable Modules
+
+[Originally published in LWN Jan. 14, 2007: http://lwn.net/Articles/217484/]
+
+RCU (read-copy update) is a synchronization mechanism that can be thought
+of as a replacement for reader-writer locking (among other things), but with
+very low-overhead readers that are immune to deadlock, priority inversion,
+and unbounded latency. RCU read-side critical sections are delimited
+by rcu_read_lock() and rcu_read_unlock(), which, in non-CONFIG_PREEMPT
+kernels, generate no code whatsoever.
+
+This means that RCU writers are unaware of the presence of concurrent
+readers, so that RCU updates to shared data must be undertaken quite
+carefully, leaving an old version of the data structure in place until all
+pre-existing readers have finished. These old versions are needed because
+such readers might hold a reference to them. RCU updates can therefore be
+rather expensive, and RCU is thus best suited for read-mostly situations.
+
+How can an RCU writer possibly determine when all readers are finished,
+given that readers might well leave absolutely no trace of their
+presence? There is a synchronize_rcu() primitive that blocks until all
+pre-existing readers have completed. An updater wishing to delete an
+element p from a linked list might do the following, while holding an
+appropriate lock, of course:
+
+ list_del_rcu(p);
+ synchronize_rcu();
+ kfree(p);
+
+But the above code cannot be used in IRQ context -- the call_rcu()
+primitive must be used instead. This primitive takes a pointer to an
+rcu_head struct placed within the RCU-protected data structure and
+another pointer to a function that may be invoked later to free that
+structure. Code to delete an element p from the linked list from IRQ
+context might then be as follows:
+
+ list_del_rcu(p);
+ call_rcu(&p->rcu, p_callback);
+
+Since call_rcu() never blocks, this code can safely be used from within
+IRQ context. The function p_callback() might be defined as follows:
+
+ static void p_callback(struct rcu_head *rp)
+ {
+ struct pstruct *p = container_of(rp, struct pstruct, rcu);
+
+ kfree(p);
+ }
+
+
+Unloading Modules That Use call_rcu()
+
+But what if p_callback is defined in an unloadable module?
+
+If we unload the module while some RCU callbacks are pending,
+the CPUs executing these callbacks are going to be severely
+disappointed when they are later invoked, as fancifully depicted at
+http://lwn.net/images/ns/kernel/rcu-drop.jpg.
+
+We could try placing a synchronize_rcu() in the module-exit code path,
+but this is not sufficient. Although synchronize_rcu() does wait for a
+grace period to elapse, it does not wait for the callbacks to complete.
+
+One might be tempted to try several back-to-back synchronize_rcu()
+calls, but this is still not guaranteed to work. If there is a very
+heavy RCU-callback load, then some of the callbacks might be deferred
+in order to allow other processing to proceed. Such deferral is required
+in realtime kernels in order to avoid excessive scheduling latencies.
+
+
+rcu_barrier()
+
+We instead need the rcu_barrier() primitive. Rather than waiting for
+a grace period to elapse, rcu_barrier() waits for all outstanding RCU
+callbacks to complete. Please note that rcu_barrier() does -not- imply
+synchronize_rcu(), in particular, if there are no RCU callbacks queued
+anywhere, rcu_barrier() is within its rights to return immediately,
+without waiting for a grace period to elapse.
+
+Pseudo-code using rcu_barrier() is as follows:
+
+ 1. Prevent any new RCU callbacks from being posted.
+ 2. Execute rcu_barrier().
+ 3. Allow the module to be unloaded.
+
+There are also rcu_barrier_bh(), rcu_barrier_sched(), and srcu_barrier()
+functions for the other flavors of RCU, and you of course must match
+the flavor of rcu_barrier() with that of call_rcu(). If your module
+uses multiple flavors of call_rcu(), then it must also use multiple
+flavors of rcu_barrier() when unloading that module. For example, if
+it uses call_rcu_bh(), call_srcu() on srcu_struct_1, and call_srcu() on
+srcu_struct_2, then the following three lines of code will be required
+when unloading:
+
+ 1 rcu_barrier_bh();
+ 2 srcu_barrier(&srcu_struct_1);
+ 3 srcu_barrier(&srcu_struct_2);
+
+The rcutorture module makes use of rcu_barrier() in its exit function
+as follows:
+
+ 1 static void
+ 2 rcu_torture_cleanup(void)
+ 3 {
+ 4 int i;
+ 5
+ 6 fullstop = 1;
+ 7 if (shuffler_task != NULL) {
+ 8 VERBOSE_PRINTK_STRING("Stopping rcu_torture_shuffle task");
+ 9 kthread_stop(shuffler_task);
+10 }
+11 shuffler_task = NULL;
+12
+13 if (writer_task != NULL) {
+14 VERBOSE_PRINTK_STRING("Stopping rcu_torture_writer task");
+15 kthread_stop(writer_task);
+16 }
+17 writer_task = NULL;
+18
+19 if (reader_tasks != NULL) {
+20 for (i = 0; i < nrealreaders; i++) {
+21 if (reader_tasks[i] != NULL) {
+22 VERBOSE_PRINTK_STRING(
+23 "Stopping rcu_torture_reader task");
+24 kthread_stop(reader_tasks[i]);
+25 }
+26 reader_tasks[i] = NULL;
+27 }
+28 kfree(reader_tasks);
+29 reader_tasks = NULL;
+30 }
+31 rcu_torture_current = NULL;
+32
+33 if (fakewriter_tasks != NULL) {
+34 for (i = 0; i < nfakewriters; i++) {
+35 if (fakewriter_tasks[i] != NULL) {
+36 VERBOSE_PRINTK_STRING(
+37 "Stopping rcu_torture_fakewriter task");
+38 kthread_stop(fakewriter_tasks[i]);
+39 }
+40 fakewriter_tasks[i] = NULL;
+41 }
+42 kfree(fakewriter_tasks);
+43 fakewriter_tasks = NULL;
+44 }
+45
+46 if (stats_task != NULL) {
+47 VERBOSE_PRINTK_STRING("Stopping rcu_torture_stats task");
+48 kthread_stop(stats_task);
+49 }
+50 stats_task = NULL;
+51
+52 /* Wait for all RCU callbacks to fire. */
+53 rcu_barrier();
+54
+55 rcu_torture_stats_print(); /* -After- the stats thread is stopped! */
+56
+57 if (cur_ops->cleanup != NULL)
+58 cur_ops->cleanup();
+59 if (atomic_read(&n_rcu_torture_error))
+60 rcu_torture_print_module_parms("End of test: FAILURE");
+61 else
+62 rcu_torture_print_module_parms("End of test: SUCCESS");
+63 }
+
+Line 6 sets a global variable that prevents any RCU callbacks from
+re-posting themselves. This will not be necessary in most cases, since
+RCU callbacks rarely include calls to call_rcu(). However, the rcutorture
+module is an exception to this rule, and therefore needs to set this
+global variable.
+
+Lines 7-50 stop all the kernel tasks associated with the rcutorture
+module. Therefore, once execution reaches line 53, no more rcutorture
+RCU callbacks will be posted. The rcu_barrier() call on line 53 waits
+for any pre-existing callbacks to complete.
+
+Then lines 55-62 print status and do operation-specific cleanup, and
+then return, permitting the module-unload operation to be completed.
+
+Quick Quiz #1: Is there any other situation where rcu_barrier() might
+ be required?
+
+Your module might have additional complications. For example, if your
+module invokes call_rcu() from timers, you will need to first cancel all
+the timers, and only then invoke rcu_barrier() to wait for any remaining
+RCU callbacks to complete.
+
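+For example (a sketch, assuming a hypothetical struct timer_list
+my_timer whose handler might invoke call_rcu()):
+
+ del_timer_sync(&my_timer); /* no new callbacks posted by the timer */
+ rcu_barrier();             /* wait for the already-posted callbacks */
+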
+Of course, if your module uses call_rcu_bh(), you will need to invoke
+rcu_barrier_bh() before unloading. Similarly, if your module uses
+call_rcu_sched(), you will need to invoke rcu_barrier_sched() before
+unloading. If your module uses call_rcu(), call_rcu_bh(), -and-
+call_rcu_sched(), then you will need to invoke each of rcu_barrier(),
+rcu_barrier_bh(), and rcu_barrier_sched().
+
+
+Implementing rcu_barrier()
+
+Dipankar Sarma's implementation of rcu_barrier() makes use of the fact
+that RCU callbacks are never reordered once queued on one of the per-CPU
+queues. His implementation queues an RCU callback on each of the per-CPU
+callback queues, and then waits until they have all started executing, at
+which point, all earlier RCU callbacks are guaranteed to have completed.
+
+The original code for rcu_barrier() was as follows:
+
+ 1 void rcu_barrier(void)
+ 2 {
+ 3 BUG_ON(in_interrupt());
+ 4 /* Take cpucontrol mutex to protect against CPU hotplug */
+ 5 mutex_lock(&rcu_barrier_mutex);
+ 6 init_completion(&rcu_barrier_completion);
+ 7 atomic_set(&rcu_barrier_cpu_count, 0);
+ 8 on_each_cpu(rcu_barrier_func, NULL, 0, 1);
+ 9 wait_for_completion(&rcu_barrier_completion);
+10 mutex_unlock(&rcu_barrier_mutex);
+11 }
+
+Line 3 verifies that the caller is in process context, and lines 5 and 10
+use rcu_barrier_mutex to ensure that only one rcu_barrier() is using the
+global completion and counters at a time, which are initialized on lines
+6 and 7. Line 8 causes each CPU to invoke rcu_barrier_func(), which is
+shown below. Note that the final "1" in on_each_cpu()'s argument list
+ensures that all the calls to rcu_barrier_func() will have completed
+before on_each_cpu() returns. Line 9 then waits for the completion.
+
+This code was rewritten in 2008 to support rcu_barrier_bh() and
+rcu_barrier_sched() in addition to the original rcu_barrier().
+
+The rcu_barrier_func() function runs on each CPU, where it invokes call_rcu()
+to post an RCU callback, as follows:
+
+ 1 static void rcu_barrier_func(void *notused)
+ 2 {
+ 3 int cpu = smp_processor_id();
+ 4 struct rcu_data *rdp = &per_cpu(rcu_data, cpu);
+ 5 struct rcu_head *head;
+ 6
+ 7 head = &rdp->barrier;
+ 8 atomic_inc(&rcu_barrier_cpu_count);
+ 9 call_rcu(head, rcu_barrier_callback);
+10 }
+
+Lines 3 and 4 locate RCU's internal per-CPU rcu_data structure,
+which contains the struct rcu_head needed for the later call to
+call_rcu(). Line 7 picks up a pointer to this struct rcu_head, and line
+8 increments a global counter. This counter will later be decremented
+by the callback. Line 9 then registers the rcu_barrier_callback() on
+the current CPU's queue.
+
+The rcu_barrier_callback() function simply atomically decrements the
+rcu_barrier_cpu_count variable and finalizes the completion when it
+reaches zero, as follows:
+
+ static void rcu_barrier_callback(struct rcu_head *notused)
+ {
+     if (atomic_dec_and_test(&rcu_barrier_cpu_count))
+         complete(&rcu_barrier_completion);
+ }
+
+Quick Quiz #2: What happens if CPU 0's rcu_barrier_func() executes
+ immediately (thus incrementing rcu_barrier_cpu_count to the
+ value one), but the other CPUs' rcu_barrier_func() invocations
+ are delayed for a full grace period? Couldn't this result in
+ rcu_barrier() returning prematurely?
+
+
+rcu_barrier() Summary
+
+The rcu_barrier() primitive has seen relatively little use, since most
+code using RCU is in the core kernel rather than in modules. However, if
+you are using RCU from an unloadable module, you need to use rcu_barrier()
+so that your module may be safely unloaded.
+
+
+Answers to Quick Quizzes
+
+Quick Quiz #1: Is there any other situation where rcu_barrier() might
+ be required?
+
+Answer: Interestingly enough, rcu_barrier() was not originally
+ implemented for module unloading. Nikita Danilov was using
+ RCU in a filesystem, which resulted in a similar situation at
+ filesystem-unmount time. Dipankar Sarma coded up rcu_barrier()
+ in response, so that Nikita could invoke it during the
+ filesystem-unmount process.
+
+ Much later, yours truly hit the RCU module-unload problem when
+ implementing rcutorture, and found that rcu_barrier() solves
+ this problem as well.
+
+Quick Quiz #2: What happens if CPU 0's rcu_barrier_func() executes
+ immediately (thus incrementing rcu_barrier_cpu_count to the
+ value one), but the other CPUs' rcu_barrier_func() invocations
+ are delayed for a full grace period? Couldn't this result in
+ rcu_barrier() returning prematurely?
+
+Answer: This cannot happen. The reason is that on_each_cpu() has its last
+ argument, the wait flag, set to "1". This flag is passed through
+ to smp_call_function() and further to smp_call_function_on_cpu(),
+ causing this latter function to spin until the cross-CPU invocation of
+ rcu_barrier_func() has completed. This by itself would prevent
+ a grace period from completing on non-CONFIG_PREEMPT kernels,
+ since each CPU must undergo a context switch (or other quiescent
+ state) before the grace period can complete. However, this is
+ of no use in CONFIG_PREEMPT kernels.
+
+ Therefore, on_each_cpu() disables preemption across its call
+ to smp_call_function() and also across the local call to
+ rcu_barrier_func(). This prevents the local CPU from context
+ switching, again preventing grace periods from completing. This
+ means that all CPUs have executed rcu_barrier_func() before
+ the first rcu_barrier_callback() can possibly execute, in turn
+ preventing rcu_barrier_cpu_count from prematurely reaching zero.
+
+ Currently, -rt implementations of RCU keep but a single global
+ queue for RCU callbacks, and thus do not suffer from this
+ problem. However, when the -rt RCU eventually does have per-CPU
+ callback queues, things will have to change. One simple change
+ is to add an rcu_read_lock() before line 8 of rcu_barrier()
+ and an rcu_read_unlock() after line 8 of this same function. If
+ you can think of a better change, please let me know!
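+
+ For reference, that suggested change might look like the
+ following sketch of rcu_barrier()'s middle lines:
+
+  init_completion(&rcu_barrier_completion);
+  atomic_set(&rcu_barrier_cpu_count, 0);
+  rcu_read_lock();  /* prevent grace periods from completing */
+  on_each_cpu(rcu_barrier_func, NULL, 0, 1);
+  rcu_read_unlock();
+  wait_for_completion(&rcu_barrier_completion);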
diff --git a/kernel/Documentation/RCU/rculist_nulls.txt b/kernel/Documentation/RCU/rculist_nulls.txt
new file mode 100644
index 000000000..18f9651ff
--- /dev/null
+++ b/kernel/Documentation/RCU/rculist_nulls.txt
@@ -0,0 +1,172 @@
+Using hlist_nulls to protect read-mostly linked lists and
+objects using SLAB_DESTROY_BY_RCU allocations.
+
+Please read the basics in Documentation/RCU/listRCU.txt
+
+Using special markers (called 'nulls') is a convenient way
+to solve the following problem.
+
+A typical RCU linked list managing objects which are
+allocated from a SLAB_DESTROY_BY_RCU kmem_cache can
+use the algorithms below.
+
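+For concreteness, the examples below assume an object of roughly the
+following shape, allocated from a SLAB_DESTROY_BY_RCU cache (all names
+are illustrative):
+
+ struct obj {
+     struct hlist_node obj_node; /* chains object into a hash slot */
+     atomic_t refcnt;            /* zero when the object is free */
+     unsigned int key;           /* lookup key */
+ };
+
+ cachep = kmem_cache_create("my_objs", sizeof(struct obj), 0,
+                            SLAB_DESTROY_BY_RCU, NULL);
+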
+1) Lookup algorithm
+-------------------
+rcu_read_lock()
+begin:
+obj = lockless_lookup(key);
+if (obj) {
+ if (!try_get_ref(obj)) // might fail for free objects
+ goto begin;
+ /*
+ * Because a writer could delete the object, and another writer
+ * could then reuse it before the RCU grace period expires, we
+ * must check the key after getting the reference on the object.
+ */
+ if (obj->key != key) { // not the object we expected
+ put_ref(obj);
+ goto begin;
+ }
+}
+rcu_read_unlock();
+
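+Note that try_get_ref() is not a kernel primitive; it stands for
+something like the following sketch, which fails for objects whose
+refcnt has already dropped to zero (and which might therefore have
+been freed or reused):
+
+ static inline bool try_get_ref(struct obj *obj)
+ {
+     return atomic_inc_not_zero(&obj->refcnt);
+ }
+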
+Beware that lockless_lookup(key) cannot use the traditional
+hlist_for_each_entry_rcu(), but instead needs a version with an
+additional memory barrier (smp_rmb()):
+
+lockless_lookup(key)
+{
+ struct hlist_node *node, *next;
+ for (pos = rcu_dereference((head)->first);
+ pos && ({ next = pos->next; smp_rmb(); prefetch(next); 1; }) &&
+ ({ tpos = hlist_entry(pos, typeof(*tpos), member); 1; });
+ pos = rcu_dereference(next))
+ if (obj->key == key)
+ return obj;
+ return NULL;
+}
+
+And note that the traditional hlist_for_each_entry_rcu() misses this smp_rmb():
+
+ struct hlist_node *node;
+ for (pos = rcu_dereference((head)->first);
+ pos && ({ prefetch(pos->next); 1; }) &&
+ ({ tpos = hlist_entry(pos, typeof(*tpos), member); 1; });
+ pos = rcu_dereference(pos->next))
+ if (obj->key == key)
+ return obj;
+ return NULL;
+
+Quoting Corey Minyard:
+
+"If the object is moved from one list to another list in-between the
+ time the hash is calculated and the next field is accessed, and the
+ object has moved to the end of a new list, the traversal will not
+ complete properly on the list it should have, since the object will
+ be on the end of the new list and there's not a way to tell it's on a
+ new list and restart the list traversal. I think that this can be
+ solved by pre-fetching the "next" field (with proper barriers) before
+ checking the key."
+
+2) Insert algorithm
+-------------------
+
+We need to make sure a reader cannot see the new 'obj->obj_next' value
+together with the previous value of 'obj->key'. Otherwise, an item
+could be deleted from one chain and inserted into another chain. If the
+new chain was empty before the move, the 'next' pointer is NULL, and a
+lockless reader cannot detect that it missed the following items in the
+original chain.
+
+/*
+ * Please note that new inserts are done at the head of list,
+ * not in the middle or end.
+ */
+obj = kmem_cache_alloc(...);
+lock_chain(); // typically a spin_lock()
+obj->key = key;
+/*
+ * we need to make sure obj->key is updated before obj->next
+ * or obj->refcnt
+ */
+smp_wmb();
+atomic_set(&obj->refcnt, 1);
+hlist_add_head_rcu(&obj->obj_node, list);
+unlock_chain(); // typically a spin_unlock()
+
+
+3) Remove algorithm
+-------------------
+Nothing special here, we can use a standard RCU hlist deletion.
+But because of SLAB_DESTROY_BY_RCU, beware that a deleted object can be
+reused very quickly (before the end of the RCU grace period):
+
+if (put_last_reference_on(obj)) {
+ lock_chain(); // typically a spin_lock()
+ hlist_del_init_rcu(&obj->obj_node);
+ unlock_chain(); // typically a spin_unlock()
+ kmem_cache_free(cachep, obj);
+}
+
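+Here put_last_reference_on() is again shorthand, for example the
+following sketch:
+
+ static inline bool put_last_reference_on(struct obj *obj)
+ {
+     return atomic_dec_and_test(&obj->refcnt);
+ }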
+
+
+--------------------------------------------------------------------------
+With hlist_nulls we can avoid the extra smp_rmb() in lockless_lookup()
+and the extra smp_wmb() in the insert function.
+
+For example, if we choose to store the slot number as the 'nulls'
+end-of-list marker for each slot of the hash table, we can detect
+a race (some writer did a delete and/or a move of an object
+to another chain) by checking the final 'nulls' value when
+the lookup meets the end of a chain. If the final 'nulls' value
+is not the slot number, then we must restart the lookup at
+the beginning. If the object was moved to the same chain,
+then the reader doesn't care: it might eventually
+scan the list again without harm.
+
+
+1) Lookup algorithm
+
+ head = &table[slot];
+ rcu_read_lock();
+begin:
+ hlist_nulls_for_each_entry_rcu(obj, node, head, member) {
+ if (obj->key == key) {
+ if (!try_get_ref(obj)) // might fail for free objects
+ goto begin;
+ if (obj->key != key) { // not the object we expected
+ put_ref(obj);
+ goto begin;
+ }
+ goto out;
+   }
+ }
+/*
+ * if the nulls value we got at the end of this lookup is
+ * not the expected one, we must restart lookup.
+ * We probably met an item that was moved to another chain.
+ */
+ if (get_nulls_value(node) != slot)
+ goto begin;
+ obj = NULL;
+
+out:
+ rcu_read_unlock();
+
+2) Insert function
+------------------
+
+/*
+ * Please note that new inserts are done at the head of list,
+ * not in the middle or end.
+ */
+obj = kmem_cache_alloc(cachep);
+lock_chain(); // typically a spin_lock()
+obj->key = key;
+/*
+ * changes to obj->key must be visible before the refcnt change
+ */
+smp_wmb();
+atomic_set(&obj->refcnt, 1);
+/*
+ * insert obj in RCU way (readers might be traversing chain)
+ */
+hlist_nulls_add_head_rcu(&obj->obj_node, list);
+unlock_chain(); // typically a spin_unlock()
diff --git a/kernel/Documentation/RCU/rcuref.txt b/kernel/Documentation/RCU/rcuref.txt
new file mode 100644
index 000000000..613033ff2
--- /dev/null
+++ b/kernel/Documentation/RCU/rcuref.txt
@@ -0,0 +1,132 @@
+Reference-count design for elements of lists/arrays protected by RCU.
+
+
+Please note that the percpu-ref feature is likely your first
+stop if you need to combine reference counts and RCU. Please see
+include/linux/percpu-refcount.h for more information. However, in
+those unusual cases where percpu-ref would consume too much memory,
+please read on.
+
+------------------------------------------------------------------------
+
+Reference counting on elements of lists which are protected by traditional
+reader/writer spinlocks or semaphores is straightforward:
+
+1.                                          2.
+add()                                       search_and_reference()
+{                                           {
+    alloc_object                                read_lock(&list_lock);
+    ...                                         search_for_element
+    atomic_set(&el->rc, 1);                     atomic_inc(&el->rc);
+    write_lock(&list_lock);                     ...
+    add_element                                 read_unlock(&list_lock);
+    ...                                         ...
+    write_unlock(&list_lock);               }
+}
+
+3.                                          4.
+release_referenced()                        delete()
+{                                           {
+    ...                                         write_lock(&list_lock);
+    if (atomic_dec_and_test(&el->rc))           ...
+        kfree(el);                              remove_element
+    ...                                         write_unlock(&list_lock);
+}                                               ...
+                                                if (atomic_dec_and_test(&el->rc))
+                                                    kfree(el);
+                                                ...
+                                            }
+
+If this list/array is made lock-free using RCU, by changing the
+write_lock() in add() and delete() to spin_lock() and changing the
+read_lock() in search_and_reference() to rcu_read_lock(), then the
+atomic_inc() in search_and_reference() could potentially acquire a
+reference to an element that has already been deleted from the
+list/array. Use atomic_inc_not_zero() in this scenario as follows:
+
+1.                                          2.
+add()                                       search_and_reference()
+{                                           {
+    alloc_object                                rcu_read_lock();
+    ...                                         search_for_element
+    atomic_set(&el->rc, 1);                     if (!atomic_inc_not_zero(&el->rc)) {
+    spin_lock(&list_lock);                          rcu_read_unlock();
+                                                    return FAIL;
+    add_element                                 }
+    ...                                         ...
+    spin_unlock(&list_lock);                    rcu_read_unlock();
+}                                           }
+
+3.                                          4.
+release_referenced()                        delete()
+{                                           {
+    ...                                         spin_lock(&list_lock);
+    if (atomic_dec_and_test(&el->rc))           ...
+        call_rcu(&el->head, el_free);           remove_element
+    ...                                         spin_unlock(&list_lock);
+}                                               ...
+                                                if (atomic_dec_and_test(&el->rc))
+                                                    call_rcu(&el->head, el_free);
+                                                ...
+                                            }
+
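+In this example, el_free() simply frees the element once the grace
+period has elapsed, along the lines of the following sketch (assuming
+that the rcu_head is the 'head' field of a hypothetical struct element):
+
+ static void el_free(struct rcu_head *rhp)
+ {
+     kfree(container_of(rhp, struct element, head));
+ }
+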
+Sometimes, a reference to the element needs to be obtained in the
+update (write) stream. In such cases, atomic_inc_not_zero() might be
+overkill, since we hold the update-side spinlock. One might instead
+simply use atomic_inc().
+
+It is not always convenient to deal with "FAIL" in the
+search_and_reference() code path. In such cases, the
+atomic_dec_and_test() may be moved from delete() to el_free()
+as follows:
+
+1.                                          2.
+add()                                       search_and_reference()
+{                                           {
+    alloc_object                                rcu_read_lock();
+    ...                                         search_for_element
+    atomic_set(&el->rc, 1);                     atomic_inc(&el->rc);
+    spin_lock(&list_lock);                      ...
+
+    add_element                                 rcu_read_unlock();
+    ...                                     }
+    spin_unlock(&list_lock);                4.
+}                                           delete()
+3.                                          {
+release_referenced()                            spin_lock(&list_lock);
+{                                               ...
+    ...                                         remove_element
+    if (atomic_dec_and_test(&el->rc))           spin_unlock(&list_lock);
+        kfree(el);                              ...
+    ...                                         call_rcu(&el->head, el_free);
+}                                               ...
+5.                                          }
+void el_free(struct rcu_head *rhp)
+{
+    release_referenced();
+}
+
+The key point is that the initial reference added by add() is not removed
+until after a grace period has elapsed following removal. This means that
+search_and_reference() cannot find this element, which means that the value
+of el->rc cannot increase. Thus, once it reaches zero, there are no
+readers that can or ever will be able to reference the element. The
+element can therefore safely be freed. This in turn guarantees that if
+any reader finds the element, that reader may safely acquire a reference
+without checking the value of the reference counter.
+
+In cases where delete() can sleep, synchronize_rcu() can be called from
+delete(), so that el_free() can be subsumed into delete() as follows:
+
+4.
+delete()
+{
+ spin_lock(&list_lock);
+ ...
+ remove_element
+ spin_unlock(&list_lock);
+ ...
+ synchronize_rcu();
+ if (atomic_dec_and_test(&el->rc))
+ kfree(el);
+ ...
+}
diff --git a/kernel/Documentation/RCU/stallwarn.txt b/kernel/Documentation/RCU/stallwarn.txt
new file mode 100644
index 000000000..b57c0c1cd
--- /dev/null
+++ b/kernel/Documentation/RCU/stallwarn.txt
@@ -0,0 +1,250 @@
+Using RCU's CPU Stall Detector
+
+The rcu_cpu_stall_suppress module parameter controls RCU's CPU stall
+detector, which detects conditions that unduly delay RCU grace periods.
+Stall detection is enabled by default, but may be suppressed via this
+boot-time parameter or at runtime via sysfs.
+The stall detector's idea of what constitutes "unduly delayed" is
+controlled by a set of kernel configuration variables and cpp macros:
+
+CONFIG_RCU_CPU_STALL_TIMEOUT
+
+ This kernel configuration parameter defines the period of time
+ that RCU will wait from the beginning of a grace period until it
+ issues an RCU CPU stall warning. This time period is normally
+ 21 seconds.
+
+ This configuration parameter may be changed at runtime via the
+ /sys/module/rcupdate/parameters/rcu_cpu_stall_timeout file;
+ however, this parameter is checked only at the beginning of a
+ cycle. So if you are 10 seconds into a 40-second stall, setting
+ this sysfs parameter to (say) five will shorten the timeout for
+ the -next- stall, or for the following warning for the current
+ stall (assuming the stall lasts long enough). It will not affect
+ the timing of the next warning for the current stall.
+
+ Stall-warning messages may be enabled and disabled completely via
+ /sys/module/rcupdate/parameters/rcu_cpu_stall_suppress.
+
+CONFIG_RCU_CPU_STALL_INFO
+
+ This kernel configuration parameter causes the stall warning to
+ print out additional per-CPU diagnostic information, including
+ information on scheduling-clock ticks and RCU's idle-CPU tracking.
+
+RCU_STALL_DELAY_DELTA
+
+ Although the lockdep facility is extremely useful, it does add
+ some overhead. Therefore, under CONFIG_PROVE_RCU, the
+ RCU_STALL_DELAY_DELTA macro allows five extra seconds before
+ giving an RCU CPU stall warning message. (This is a cpp
+ macro, not a kernel configuration parameter.)
+
+RCU_STALL_RAT_DELAY
+
+ The CPU stall detector tries to make the offending CPU print its
+ own warnings, as this often gives better-quality stack traces.
+ However, if the offending CPU does not detect its own stall in
+ the number of jiffies specified by RCU_STALL_RAT_DELAY, then
+ some other CPU will complain. This delay is normally set to
+ two jiffies. (This is a cpp macro, not a kernel configuration
+ parameter.)
+
+rcupdate.rcu_task_stall_timeout
+
+ This boot/sysfs parameter controls the RCU-tasks stall warning
+ interval. A value of zero or less suppresses RCU-tasks stall
+ warnings. A positive value sets the stall-warning interval
+ in jiffies. An RCU-tasks stall warning starts with the line:
+
+ INFO: rcu_tasks detected stalls on tasks:
+
+ and continues with the output of sched_show_task() for each
+ task stalling the current RCU-tasks grace period.
+
+For non-RCU-tasks flavors of RCU, when a CPU detects that it is stalling,
+it will print a message similar to the following:
+
+INFO: rcu_sched_state detected stall on CPU 5 (t=2500 jiffies)
+
+This message indicates that CPU 5 detected that it was causing a stall,
+and that the stall was affecting RCU-sched. This message will normally be
+followed by a stack dump of the offending CPU. On TREE_RCU kernel builds,
+RCU and RCU-sched are implemented by the same underlying mechanism,
+while on PREEMPT_RCU kernel builds, RCU is instead implemented
+by rcu_preempt_state.
+
+On the other hand, if the offending CPU fails to print out a stall-warning
+message quickly enough, some other CPU will print a message similar to
+the following:
+
+INFO: rcu_bh_state detected stalls on CPUs/tasks: { 3 5 } (detected by 2, 2502 jiffies)
+
+This message indicates that CPU 2 detected that CPUs 3 and 5 were both
+causing stalls, and that the stall was affecting RCU-bh. This message
+will normally be followed by stack dumps for each CPU. Please note that
+PREEMPT_RCU builds can be stalled by tasks as well as by CPUs,
+and that the tasks will be indicated by PID, for example, "P3421".
+It is even possible for a rcu_preempt_state stall to be caused by both
+CPUs -and- tasks, in which case the offending CPUs and tasks will all
+be called out in the list.
+
+Finally, if the grace period ends just as the stall warning starts
+printing, there will be a spurious stall-warning message:
+
+INFO: rcu_bh_state detected stalls on CPUs/tasks: { } (detected by 4, 2502 jiffies)
+
+This is rare, but does happen from time to time in real life. It is also
+possible for a zero-jiffy stall to be flagged in this case, depending
+on how the stall warning and the grace-period initialization happen to
+interact. Please note that it is not possible to entirely eliminate this
+sort of false positive without resorting to things like stop_machine(),
+which is overkill for this sort of problem.
+
+If the CONFIG_RCU_CPU_STALL_INFO kernel configuration parameter is set,
+more information is printed with the stall-warning message, for example:
+
+ INFO: rcu_preempt detected stall on CPU
+ 0: (63959 ticks this GP) idle=241/3fffffffffffffff/0 softirq=82/543
+ (t=65000 jiffies)
+
+In kernels with CONFIG_RCU_FAST_NO_HZ, even more information is
+printed:
+
+ INFO: rcu_preempt detected stall on CPU
+ 0: (64628 ticks this GP) idle=dd5/3fffffffffffffff/0 softirq=82/543 last_accelerate: a345/d342 nonlazy_posted: 25 .D
+ (t=65000 jiffies)
+
+The "(64628 ticks this GP)" indicates that this CPU has taken more
+than 64,000 scheduling-clock interrupts during the current stalled
+grace period. If the CPU was not yet aware of the current grace
+period (for example, if it was offline), then this part of the message
+indicates how many grace periods behind the CPU is.
+
+The "idle=" portion of the message prints the dyntick-idle state.
+The hex number before the first "/" is the low-order 12 bits of the
+dynticks counter, which will have an even-numbered value if the CPU is
+in dyntick-idle mode and an odd-numbered value otherwise. The hex
+number between the two "/"s is the nesting depth, which will
+be a small positive number if in the idle loop and a very large positive
+number (as shown above) otherwise.
+
+The "softirq=" portion of the message tracks the number of RCU softirq
+handlers that the stalled CPU has executed. The number before the "/"
+is the number that had executed since boot at the time that this CPU
+last noted the beginning of a grace period, which might be the current
+(stalled) grace period, or it might be some earlier grace period (for
+example, if the CPU was in dyntick-idle mode for an extended time
+period). The number after the "/" is the number that have executed
+since boot until the current time. If this latter number stays constant
+across repeated stall-warning messages, it is possible that RCU's softirq
+handlers are no longer able to execute on this CPU. This can happen if
+the stalled CPU is spinning with interrupts disabled, or, in -rt
+kernels, if a high-priority process is starving RCU's softirq handler.
+
+For CONFIG_RCU_FAST_NO_HZ kernels, the "last_accelerate:" prints the
+low-order 16 bits (in hex) of the jiffies counter when this CPU last
+invoked rcu_try_advance_all_cbs() from rcu_needs_cpu() or last invoked
+rcu_accelerate_cbs() from rcu_prepare_for_idle(). The "nonlazy_posted:"
+prints the number of non-lazy callbacks posted since the last call to
+rcu_needs_cpu(). Finally, an "L" indicates that there are currently
+no non-lazy callbacks ("." is printed otherwise, as shown above) and
+"D" indicates that dyntick-idle processing is enabled ("." is printed
+otherwise, for example, if disabled via the "nohz=" kernel boot parameter).
+
+If the relevant grace-period kthread has been unable to run prior to
+the stall warning, the following additional line is printed:
+
+ rcu_preempt kthread starved for 2023 jiffies!
+
+Starving the grace-period kthreads of CPU time can of course result in
+RCU CPU stall warnings even when all CPUs and tasks have passed through
+the required quiescent states.
+
+
+Multiple Warnings From One Stall
+
+If a stall lasts long enough, multiple stall-warning messages will be
+printed for it. The second and subsequent messages are printed at
+longer intervals, so that the time between (say) the first and second
+message will be about three times the interval between the beginning
+of the stall and the first message.
+
+
+What Causes RCU CPU Stall Warnings?
+
+So your kernel printed an RCU CPU stall warning. The next question is
+"What caused it?" The following problems can result in RCU CPU stall
+warnings:
+
+o A CPU looping in an RCU read-side critical section.
+
+o A CPU looping with interrupts disabled. This condition can
+ result in RCU-sched and RCU-bh stalls.
+
+o A CPU looping with preemption disabled. This condition can
+ result in RCU-sched stalls and, if ksoftirqd is in use, RCU-bh
+ stalls.
+
+o A CPU looping with bottom halves disabled. This condition can
+ result in RCU-sched and RCU-bh stalls.
+
+o For !CONFIG_PREEMPT kernels, a CPU looping anywhere in the
+ kernel without invoking schedule(). Note that cond_resched()
+ does not necessarily prevent RCU CPU stall warnings. Therefore,
+ if the looping in the kernel is really expected and desirable
+ behavior, you might need to replace some of the cond_resched()
+ calls with calls to cond_resched_rcu_qs() (see the sketch
+ following this list).
+
+o Anything that prevents RCU's grace-period kthreads from running.
+ This can result in the "All QSes seen" console-log message.
+ This message will include information on when the kthread last
+ ran and how often it should be expected to run.
+
+o A CPU-bound real-time task in a CONFIG_PREEMPT kernel, which might
+ happen to preempt a low-priority task in the middle of an RCU
+ read-side critical section. This is especially damaging if
+ that low-priority task is not permitted to run on any other CPU,
+ in which case the next RCU grace period can never complete, which
+ will eventually cause the system to run out of memory and hang.
+ While the system is in the process of running itself out of
+ memory, you might see stall-warning messages.
+
+o A CPU-bound real-time task in a CONFIG_PREEMPT_RT kernel that
+ is running at a higher priority than the RCU softirq threads.
+ This will prevent RCU callbacks from ever being invoked,
+ and in a CONFIG_PREEMPT_RCU kernel will further prevent
+ RCU grace periods from ever completing. Either way, the
+ system will eventually run out of memory and hang. In the
+ CONFIG_PREEMPT_RCU case, you might see stall-warning
+ messages.
+
+o A hardware or software issue shuts off the scheduler-clock
+ interrupt on a CPU that is not in dyntick-idle mode. This
+ problem really has happened, and seems to be most likely to
+ result in RCU CPU stall warnings for CONFIG_NO_HZ_COMMON=n kernels.
+
+o A bug in the RCU implementation.
+
+o A hardware failure. This is quite unlikely, but has occurred
+ at least once in real life. A CPU failed in a running system,
+ becoming unresponsive, but not causing an immediate crash.
+ This resulted in a series of RCU CPU stall warnings, eventually
+ leading to the realization that the CPU had failed.
+
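+As an example of the cond_resched_rcu_qs() item above, a long-running
+loop in the kernel might avoid RCU CPU stall warnings along the
+following lines (a sketch; do_unit_of_work() is hypothetical):
+
+ for (i = 0; i < huge_number_of_units; i++) {
+     do_unit_of_work(i);
+     cond_resched_rcu_qs(); /* report a quiescent state to RCU */
+ }
+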
+The RCU, RCU-sched, RCU-bh, and RCU-tasks implementations have CPU stall
+warnings. Note that SRCU does -not- have CPU stall warnings. Please note
+that RCU only detects CPU stalls when there is a grace period in progress.
+No grace period, no CPU stall warnings.
+
+To diagnose the cause of the stall, inspect the stack traces.
+The offending function will usually be near the top of the stack.
+If you have a series of stall warnings from a single extended stall,
+comparing the stack traces can often help determine where the stall
+is occurring, which will usually be in the function nearest the top of
+that portion of the stack which remains the same from trace to trace.
+If you can reliably trigger the stall, ftrace can be quite helpful.
+
+RCU bugs can often be debugged with the help of CONFIG_RCU_TRACE
+and with RCU's event tracing. For information on RCU's event tracing,
+see include/trace/events/rcu.h.
diff --git a/kernel/Documentation/RCU/torture.txt b/kernel/Documentation/RCU/torture.txt
new file mode 100644
index 000000000..dac02a621
--- /dev/null
+++ b/kernel/Documentation/RCU/torture.txt
@@ -0,0 +1,334 @@
+RCU Torture Test Operation
+
+
+CONFIG_RCU_TORTURE_TEST
+
+The CONFIG_RCU_TORTURE_TEST config option is available for all RCU
+implementations. It creates an rcutorture kernel module that can
+be loaded to run a torture test. The test periodically outputs
+status messages via printk(), which can be examined via the dmesg
+command (perhaps grepping for "torture"). The test is started
+when the module is loaded, and stops when the module is unloaded.
+
+CONFIG_RCU_TORTURE_TEST_RUNNABLE
+
+It is also possible to specify CONFIG_RCU_TORTURE_TEST=y, which will
+result in the tests being loaded into the base kernel. In this case,
+the CONFIG_RCU_TORTURE_TEST_RUNNABLE config option is used to specify
+whether the RCU torture tests are to be started immediately during
+boot or whether the /proc/sys/kernel/rcutorture_runnable file is used
+to enable them. This /proc file can be used to repeatedly pause and
+restart the tests, regardless of the initial state specified by the
+CONFIG_RCU_TORTURE_TEST_RUNNABLE config option.
+
+You will normally -not- want to start the RCU torture tests during boot
+(and thus the default is CONFIG_RCU_TORTURE_TEST_RUNNABLE=n), but doing
+this can sometimes be useful in finding boot-time bugs.
+
+
+MODULE PARAMETERS
+
+This module has the following parameters:
+
+fqs_duration Duration (in microseconds) of artificially induced bursts
+ of force_quiescent_state() invocations. In RCU
+ implementations having force_quiescent_state(), these
+ bursts help force races between forcing a given grace
+ period and that grace period ending on its own.
+
+fqs_holdoff Holdoff time (in microseconds) between consecutive calls
+ to force_quiescent_state() within a burst.
+
+fqs_stutter Wait time (in seconds) between consecutive bursts
+ of calls to force_quiescent_state().
+
+gp_normal Make the fake writers use normal synchronous grace-period
+ primitives.
+
+gp_exp Make the fake writers use expedited synchronous grace-period
+ primitives. If both gp_normal and gp_exp are set, or
+ if neither gp_normal nor gp_exp are set, then randomly
+ choose the primitive so that about 50% are normal and
+ 50% expedited. By default, neither are set, which
+ gives best overall test coverage.
+
+irqreader Says to invoke RCU readers from irq level. This is currently
+ done via timers. Defaults to "1" for variants of RCU that
+ permit this. (Or, more accurately, variants of RCU that do
+ -not- permit this know to ignore this variable.)
+
+n_barrier_cbs If this is nonzero, RCU barrier testing will be conducted,
+ in which case n_barrier_cbs specifies the number of
+ RCU callbacks (and corresponding kthreads) to use for
+ this testing. The value cannot be negative. If you
+ specify this to be non-zero when torture_type indicates a
+ synchronous RCU implementation (one for which a member of
+ the synchronize_rcu() rather than the call_rcu() family is
+ used -- see the documentation for torture_type below), an
+ error will be reported and no testing will be carried out.
+
+nfakewriters This is the number of RCU fake writer threads to run. Fake
+ writer threads repeatedly use the synchronous "wait for
+ current readers" function of the interface selected by
+ torture_type, with a delay between calls so that varying
+ numbers of writers run in parallel.
+ nfakewriters defaults to 4, which provides enough parallelism
+ to trigger special cases caused by multiple writers, such as
+ the synchronize_srcu() early return optimization.
+
+nreaders This is the number of RCU reading threads supported.
+ The default is twice the number of CPUs. Why twice?
+ To properly exercise RCU implementations with preemptible
+ read-side critical sections.
+
+onoff_interval
+ The number of seconds between each attempt to execute a
+ randomly selected CPU-hotplug operation. Defaults to
+ zero, which disables CPU hotplugging. In HOTPLUG_CPU=n
+ kernels, rcutorture will silently refuse to do any
+ CPU-hotplug operations regardless of what value is
+ specified for onoff_interval.
+
+onoff_holdoff The number of seconds to wait until starting CPU-hotplug
+ operations. This would normally only be used when
+ rcutorture was built into the kernel and started
+ automatically at boot time, in which case it is useful
+ in order to avoid confusing boot-time code with CPUs
+ coming and going.
+
+shuffle_interval
+ The number of seconds to keep the test threads affinitied
+ to a particular subset of the CPUs; defaults to 3 seconds.
+ Used in conjunction with test_no_idle_hz.
+
+shutdown_secs The number of seconds to run the test before terminating
+ the test and powering off the system. The default is
+ zero, which disables test termination and system shutdown.
+ This capability is useful for automated testing.
+
+stall_cpu The number of seconds that a CPU should be stalled while
+ within both an rcu_read_lock() and a preempt_disable().
+ This stall happens only once per rcutorture run.
+ If you need multiple stalls, use modprobe and rmmod to
+ repeatedly run rcutorture. The default for stall_cpu
+ is zero, which prevents rcutorture from stalling a CPU.
+
+ Note that attempts to rmmod rcutorture while the stall
+ is ongoing will hang, so be careful what value you
+ choose for this module parameter! In addition, too-large
+ values for stall_cpu might well induce failures and
+ warnings in other parts of the kernel. You have been
+ warned!
+
+stall_cpu_holdoff
+ The number of seconds to wait after rcutorture starts
+ before stalling a CPU. Defaults to 10 seconds.
+
+stat_interval The number of seconds between output of torture
+ statistics (via printk()). Regardless of the interval,
+ statistics are printed when the module is unloaded.
+ Setting the interval to zero causes the statistics to
+ be printed -only- when the module is unloaded, and this
+ is the default.
+
+stutter The length of time to run the test before pausing for this
+ same period of time. Defaults to "stutter=5", so as
+ to run and pause for (roughly) five-second intervals.
+ Specifying "stutter=0" causes the test to run continuously
+ without pausing, which is the old default behavior.
+
+test_boost Whether or not to test the ability of RCU to do priority
+ boosting. Defaults to "test_boost=1", which performs
+ RCU priority-inversion testing only if the selected
+ RCU implementation supports priority boosting. Specifying
+ "test_boost=0" never performs RCU priority-inversion
+ testing. Specifying "test_boost=2" performs RCU
+ priority-inversion testing even if the selected RCU
+ implementation does not support RCU priority boosting,
+ which can be used to test rcutorture's ability to
+ carry out RCU priority-inversion testing.
+
+test_boost_interval
+ The number of seconds in an RCU priority-inversion test
+ cycle. Defaults to "test_boost_interval=7". It is
+ usually wise for this value to be relatively prime to
+ the value selected for "stutter".
+
+test_boost_duration
+ The number of seconds to do RCU priority-inversion testing
+ within any given "test_boost_interval". Defaults to
+ "test_boost_duration=4".
+
+test_no_idle_hz Whether or not to test the ability of RCU to operate in
+ a kernel that disables the scheduling-clock interrupt to
+ idle CPUs. Boolean parameter, "1" to test, "0" otherwise.
+ Defaults to omitting this test.
+
+torture_type The type of RCU to test, with string values as follows:
+
+ "rcu": rcu_read_lock(), rcu_read_unlock() and call_rcu().
+
+ "rcu_sync": rcu_read_lock(), rcu_read_unlock(), and
+ synchronize_rcu().
+
+ "rcu_expedited": rcu_read_lock(), rcu_read_unlock(), and
+ synchronize_rcu_expedited().
+
+ "rcu_bh": rcu_read_lock_bh(), rcu_read_unlock_bh(), and
+ call_rcu_bh().
+
+ "rcu_bh_sync": rcu_read_lock_bh(), rcu_read_unlock_bh(),
+ and synchronize_rcu_bh().
+
+ "rcu_bh_expedited": rcu_read_lock_bh(), rcu_read_unlock_bh(),
+ and synchronize_rcu_bh_expedited().
+
+ "srcu": srcu_read_lock(), srcu_read_unlock() and
+ call_srcu().
+
+ "srcu_sync": srcu_read_lock(), srcu_read_unlock() and
+ synchronize_srcu().
+
+ "srcu_expedited": srcu_read_lock(), srcu_read_unlock() and
+ synchronize_srcu_expedited().
+
+ "sched": preempt_disable(), preempt_enable(), and
+ call_rcu_sched().
+
+ "sched_sync": preempt_disable(), preempt_enable(), and
+ synchronize_sched().
+
+ "sched_expedited": preempt_disable(), preempt_enable(), and
+ synchronize_sched_expedited().
+
+ Defaults to "rcu".
+
+verbose Enable debug printk()s. Default is disabled.
+
+
+OUTPUT
+
+The statistics output is as follows:
+
+ rcu-torture:--- Start of test: nreaders=16 nfakewriters=4 stat_interval=30 verbose=0 test_no_idle_hz=1 shuffle_interval=3 stutter=5 irqreader=1 fqs_duration=0 fqs_holdoff=0 fqs_stutter=3 test_boost=1/0 test_boost_interval=7 test_boost_duration=4
+ rcu-torture: rtc: (null) ver: 155441 tfle: 0 rta: 155441 rtaf: 8884 rtf: 155440 rtmbe: 0 rtbe: 0 rtbke: 0 rtbre: 0 rtbf: 0 rtb: 0 nt: 3055767
+ rcu-torture: Reader Pipe: 727860534 34213 0 0 0 0 0 0 0 0 0
+ rcu-torture: Reader Batch: 727877838 17003 0 0 0 0 0 0 0 0 0
+ rcu-torture: Free-Block Circulation: 155440 155440 155440 155440 155440 155440 155440 155440 155440 155440 0
+ rcu-torture:--- End of test: SUCCESS: nreaders=16 nfakewriters=4 stat_interval=30 verbose=0 test_no_idle_hz=1 shuffle_interval=3 stutter=5 irqreader=1 fqs_duration=0 fqs_holdoff=0 fqs_stutter=3 test_boost=1/0 test_boost_interval=7 test_boost_duration=4
+
+The command "dmesg | grep torture:" will extract this information on
+most systems. On more esoteric configurations, it may be necessary to
+use other commands to access the output of the printk()s used by
+the RCU torture test. The printk()s use KERN_ALERT, so they should
+be evident. ;-)
+
+The first and last lines show the rcutorture module parameters, and the
+last line shows either "SUCCESS" or "FAILURE", based on rcutorture's
+automatic determination as to whether RCU operated correctly.
+
+The entries are as follows:
+
+o "rtc": The hexadecimal address of the structure currently visible
+ to readers.
+
+o "ver": The number of times since boot that the RCU writer task
+ has changed the structure visible to readers.
+
+o "tfle": If non-zero, indicates that the "torture freelist"
+ containing structures to be placed into the "rtc" area is empty.
+ This condition is important, since it can fool you into thinking
+ that RCU is working when it is not. :-/
+
+o "rta": Number of structures allocated from the torture freelist.
+
+o "rtaf": Number of allocations from the torture freelist that have
+ failed due to the list being empty. It is not unusual for this
+ to be non-zero, but it is bad for it to be a large fraction of
+ the value indicated by "rta".
+
+o "rtf": Number of frees into the torture freelist.
+
+o "rtmbe": A non-zero value indicates that rcutorture believes that
+ rcu_assign_pointer() and rcu_dereference() are not working
+ correctly. This value should be zero.
+
+o "rtbe": A non-zero value indicates that one of the rcu_barrier()
+ family of functions is not working correctly.
+
+o "rtbke": rcutorture was unable to create the real-time kthreads
+ used to force RCU priority inversion. This value should be zero.
+
+o "rtbre": Although rcutorture successfully created the kthreads
+ used to force RCU priority inversion, it was unable to set them
+ to the real-time priority level of 1. This value should be zero.
+
+o "rtbf": The number of times that RCU priority boosting failed
+ to resolve RCU priority inversion.
+
+o "rtb": The number of times that rcutorture attempted to force
+ an RCU priority inversion condition. If you are testing RCU
+ priority boosting via the "test_boost" module parameter, this
+ value should be non-zero.
+
+o "nt": The number of times rcutorture ran RCU read-side code from
+ within a timer handler. This value should be non-zero only
+ if you specified the "irqreader" module parameter.
+
+o "Reader Pipe": Histogram of "ages" of structures seen by readers.
+ If any entries past the first two are non-zero, RCU is broken.
+ And rcutorture prints the error flag string "!!!" to make sure
+ you notice. The age of a newly allocated structure is zero;
+ it becomes one when removed from reader visibility, and is
+ incremented once per grace period subsequently -- and is freed
+ after passing through (RCU_TORTURE_PIPE_LEN-2) grace periods.
+
+ The output displayed above was taken from a correctly working
+ RCU. If you want to see what it looks like when broken, break
+ it yourself. ;-)
+
+o "Reader Batch": Another histogram of "ages" of structures seen
+ by readers, but in terms of counter flips (or batches) rather
+ than in terms of grace periods. The legal number of non-zero
+ entries is again two. The reason for this separate view is that
+ it is sometimes easier to get the third entry to show up in the
+ "Reader Batch" list than in the "Reader Pipe" list.
+
+o "Free-Block Circulation": Shows the number of torture structures
+ that have reached a given point in the pipeline. The first element
+ should closely correspond to the number of structures allocated,
+ the second to the number that have been removed from reader view,
+ and all but the last remaining to the corresponding number of
+ passes through a grace period. The last entry should be zero,
+ as it is only incremented if a torture structure's counter
+ somehow gets incremented farther than it should.
+
+Different implementations of RCU can provide implementation-specific
+additional information. For example, SRCU provides the following
+additional line:
+
+ srcu-torture: per-CPU(idx=1): 0(0,1) 1(0,1) 2(0,0) 3(0,1)
+
+This line shows the per-CPU counter state. The numbers in parentheses are
+the values of the "old" and "current" counters for the corresponding CPU.
+The "idx" value maps the "old" and "current" values to the underlying
+array, and is useful for debugging.
+
+
+USAGE
+
+The following script may be used to torture RCU:
+
+ #!/bin/sh
+
+ modprobe rcutorture
+ sleep 3600
+ rmmod rcutorture
+ dmesg | grep torture:
+
+The output can be manually inspected for the error flag of "!!!".
+One could of course create a more elaborate script that automatically
+checked for such errors. The "rmmod" command forces a "SUCCESS",
+"FAILURE", or "RCU_HOTPLUG" indication to be printk()ed. The first
+two are self-explanatory, while the last indicates that while there
+were no RCU failures, CPU-hotplug problems were detected.
diff --git a/kernel/Documentation/RCU/trace.txt b/kernel/Documentation/RCU/trace.txt
new file mode 100644
index 000000000..08651da15
--- /dev/null
+++ b/kernel/Documentation/RCU/trace.txt
@@ -0,0 +1,554 @@
+CONFIG_RCU_TRACE debugfs Files and Formats
+
+
+The rcutree and rcutiny implementations of RCU provide debugfs trace
+output that summarizes counters and state. This information is useful for
+debugging RCU itself, and can sometimes also help to debug abuses of RCU.
+The following sections describe the debugfs files and formats, first
+for rcutree and next for rcutiny.
+
+
+CONFIG_TREE_RCU and CONFIG_PREEMPT_RCU debugfs Files and Formats
+
+These implementations of RCU provide several debugfs directories under the
+top-level directory "rcu":
+
+rcu/rcu_bh
+rcu/rcu_preempt
+rcu/rcu_sched
+
+Each directory contains files for the corresponding flavor of RCU.
+Note that rcu/rcu_preempt is only present for CONFIG_PREEMPT_RCU.
+For CONFIG_TREE_RCU, the RCU flavor maps onto the RCU-sched flavor,
+so that activity for both appears in rcu/rcu_sched.
+
+In addition, the following file appears in the top-level directory:
+rcu/rcutorture. This file displays rcutorture test progress. The output
+of "cat rcu/rcutorture" looks as follows:
+
+rcutorture test sequence: 0 (test in progress)
+rcutorture update version number: 615
+
+The first line shows the number of rcutorture tests that have completed
+since boot. If a test is currently running, the "(test in progress)"
+string will appear as shown above. The second line shows the number of
+update cycles that the current test has started, or zero if there is
+no test in progress.
+
+
+Within each flavor directory (rcu/rcu_bh, rcu/rcu_sched, and possibly
+also rcu/rcu_preempt) the following files will be present:
+
+rcudata:
+ Displays fields in struct rcu_data.
+rcuexp:
+ Displays statistics for expedited grace periods.
+rcugp:
+ Displays grace-period counters.
+rcuhier:
+ Displays the struct rcu_node hierarchy.
+rcu_pending:
+ Displays counts of the reasons rcu_pending() decided that RCU had
+ work to do.
+rcuboost:
+ Displays RCU boosting statistics. Only present if
+ CONFIG_RCU_BOOST=y.
+
+The output of "cat rcu/rcu_preempt/rcudata" looks as follows:
+
+ 0!c=30455 g=30456 pq=1/0 qp=1 dt=126535/140000000000000/0 df=2002 of=4 ql=0/0 qs=N... b=10 ci=74572 nci=0 co=1131 ca=716
+ 1!c=30719 g=30720 pq=1/0 qp=0 dt=132007/140000000000000/0 df=1874 of=10 ql=0/0 qs=N... b=10 ci=123209 nci=0 co=685 ca=982
+ 2!c=30150 g=30151 pq=1/1 qp=1 dt=138537/140000000000000/0 df=1707 of=8 ql=0/0 qs=N... b=10 ci=80132 nci=0 co=1328 ca=1458
+ 3 c=31249 g=31250 pq=1/1 qp=0 dt=107255/140000000000000/0 df=1749 of=6 ql=0/450 qs=NRW. b=10 ci=151700 nci=0 co=509 ca=622
+ 4!c=29502 g=29503 pq=1/0 qp=1 dt=83647/140000000000000/0 df=965 of=5 ql=0/0 qs=N... b=10 ci=65643 nci=0 co=1373 ca=1521
+ 5 c=31201 g=31202 pq=1/0 qp=1 dt=70422/0/0 df=535 of=7 ql=0/0 qs=.... b=10 ci=58500 nci=0 co=764 ca=698
+ 6!c=30253 g=30254 pq=1/0 qp=1 dt=95363/140000000000000/0 df=780 of=5 ql=0/0 qs=N... b=10 ci=100607 nci=0 co=1414 ca=1353
+ 7 c=31178 g=31178 pq=1/0 qp=0 dt=91536/0/0 df=547 of=4 ql=0/0 qs=.... b=10 ci=109819 nci=0 co=1115 ca=969
+
+This file has one line per CPU, or eight for this 8-CPU system.
+The fields are as follows:
+
+o The number at the beginning of each line is the CPU number.
+ CPU numbers followed by an exclamation mark are offline,
+ but have been online at least once since boot. There will be
+ no output for CPUs that have never been online, which can be
+ a good thing in the surprisingly common case where NR_CPUS is
+ substantially larger than the number of actual CPUs.
+
+o "c" is the count of grace periods that this CPU believes have
+ completed. Offlined CPUs and CPUs in dynticks idle mode may lag
+ quite a ways behind, for example, offline CPU 4 in the rcu_preempt
+ output above, which lags CPU 3 by more than a thousand grace
+ periods. It is not
+ unusual to see offline CPUs lagging by thousands of grace periods.
+ Note that although the grace-period number is an unsigned long,
+ it is printed out as a signed long to allow more human-friendly
+ representation near boot time.
+
+o "g" is the count of grace periods that this CPU believes have
+ started. Again, offlined CPUs and CPUs in dynticks idle mode
+ may lag behind. If the "c" and "g" values are equal, this CPU
+ has already reported a quiescent state for the last RCU grace
+ period that it is aware of, otherwise, the CPU believes that it
+ owes RCU a quiescent state.
+
+o "pq" indicates that this CPU has passed through a quiescent state
+ for the current grace period. It is possible for "pq" to be
+ "1" and "c" different than "g", which indicates that although
+ the CPU has passed through a quiescent state, either (1) this
+ CPU has not yet reported that fact, (2) some other CPU has not
+ yet reported for this grace period, or (3) both.
+
+o "qp" indicates that RCU still expects a quiescent state from
+ this CPU. Offlined CPUs and CPUs in dyntick idle mode might
+ well have qp=1, which is OK: RCU is still ignoring them.
+
+o "dt" is the current value of the dyntick counter that is incremented
+ when entering or leaving idle, either due to a context switch or
+ due to an interrupt. This number is even if the CPU is in idle
+ from RCU's viewpoint and odd otherwise. The number after the
+ first "/" is the interrupt nesting depth when in idle state,
+ or a large number added to the interrupt-nesting depth when
+ running a non-idle task. Some architectures do not accurately
+ count interrupt nesting when running in non-idle kernel context,
+ which can result in interesting anomalies such as negative
+ interrupt-nesting levels. The number after the second "/"
+ is the NMI nesting depth.
+
+o "df" is the number of times that some other CPU has forced a
+ quiescent state on behalf of this CPU due to this CPU being in
+ idle state.
+
+o "of" is the number of times that some other CPU has forced a
+ quiescent state on behalf of this CPU due to this CPU being
+ offline. In a perfect world, this might never happen, but it
+ turns out that offlining and onlining a CPU can take several grace
+ periods, and so there is likely to be an extended period of time
+ when RCU believes that the CPU is online when it really is not.
+ Please note that erring in the other direction (RCU believing a
+ CPU is offline when it is really alive and kicking) is a fatal
+ error, so it makes sense to err conservatively.
+
+o "ql" is the number of RCU callbacks currently residing on
+ this CPU. The first number is the number of "lazy" callbacks
+ that are known to RCU to only be freeing memory, and the number
+ after the "/" is the total number of callbacks, lazy or not.
+ These counters count callbacks regardless of what phase of
+ grace-period processing that they are in (new, waiting for
+ grace period to start, waiting for grace period to end, ready
+ to invoke).
+
+o "qs" gives an indication of the state of the callback queue
+ with four characters:
+
+ "N" Indicates that there are callbacks queued that are not
+ ready to be handled by the next grace period, and thus
+ will be handled by the grace period following the next
+ one.
+
+ "R" Indicates that there are callbacks queued that are
+ ready to be handled by the next grace period.
+
+ "W" Indicates that there are callbacks queued that are
+ waiting on the current grace period.
+
+ "D" Indicates that there are callbacks queued that have
+ already been handled by a prior grace period, and are
+ thus waiting to be invoked. Note that callbacks in
+ the process of being invoked are not counted here.
+ Callbacks in the process of being invoked are those
+ that have been removed from the rcu_data structures
+ queues by rcu_do_batch(), but which have not yet been
+ invoked.
+
+ If there are no callbacks in a given one of the above states,
+ the corresponding character is replaced by ".".
+
+o "b" is the batch limit for this CPU. If more than this number
+ of RCU callbacks is ready to invoke, then the remainder will
+ be deferred.
+
+o "ci" is the number of RCU callbacks that have been invoked for
+ this CPU. Note that ci+nci+ql is the number of callbacks that have
+ been registered in the absence of CPU-hotplug activity.
+
+o "nci" is the number of RCU callbacks that have been offloaded from
+ this CPU. This will always be zero unless the kernel was built
+ with CONFIG_RCU_NOCB_CPU=y and the "rcu_nocbs=" kernel boot
+ parameter was specified.
+
+o "co" is the number of RCU callbacks that have been orphaned due to
+ this CPU going offline. These orphaned callbacks have been moved
+ to an arbitrarily chosen online CPU.
+
+o "ca" is the number of RCU callbacks that have been adopted by this
+ CPU due to other CPUs going offline. Note that ci+co-ca+ql is
+ the number of RCU callbacks registered on this CPU.
+
+
+Kernels compiled with CONFIG_RCU_BOOST=y display the following from
+/debug/rcu/rcu_preempt/rcudata:
+
+ 0!c=12865 g=12866 pq=1/0 qp=1 dt=83113/140000000000000/0 df=288 of=11 ql=0/0 qs=N... kt=0/O ktl=944 b=10 ci=60709 nci=0 co=748 ca=871
+ 1 c=14407 g=14408 pq=1/0 qp=0 dt=100679/140000000000000/0 df=378 of=7 ql=0/119 qs=NRW. kt=0/W ktl=9b6 b=10 ci=109740 nci=0 co=589 ca=485
+ 2 c=14407 g=14408 pq=1/0 qp=0 dt=105486/0/0 df=90 of=9 ql=0/89 qs=NRW. kt=0/W ktl=c0c b=10 ci=83113 nci=0 co=533 ca=490
+ 3 c=14407 g=14408 pq=1/0 qp=0 dt=107138/0/0 df=142 of=8 ql=0/188 qs=NRW. kt=0/W ktl=b96 b=10 ci=121114 nci=0 co=426 ca=290
+ 4 c=14405 g=14406 pq=1/0 qp=1 dt=50238/0/0 df=706 of=7 ql=0/0 qs=.... kt=0/W ktl=812 b=10 ci=34929 nci=0 co=643 ca=114
+ 5!c=14168 g=14169 pq=1/0 qp=0 dt=45465/140000000000000/0 df=161 of=11 ql=0/0 qs=N... kt=0/O ktl=b4d b=10 ci=47712 nci=0 co=677 ca=722
+ 6 c=14404 g=14405 pq=1/0 qp=0 dt=59454/0/0 df=94 of=6 ql=0/0 qs=.... kt=0/W ktl=e57 b=10 ci=55597 nci=0 co=701 ca=811
+ 7 c=14407 g=14408 pq=1/0 qp=1 dt=68850/0/0 df=31 of=8 ql=0/0 qs=.... kt=0/W ktl=14bd b=10 ci=77475 nci=0 co=508 ca=1042
+
+This is similar to the output discussed above, but contains the following
+additional fields:
+
+o "kt" is the per-CPU kernel-thread state. The digit preceding
+ the first slash is zero if there is no work pending and 1
+ otherwise. The character between the first pair of slashes is
+ as follows:
+
+ "S" The kernel thread is stopped, in other words, all
+ CPUs corresponding to this rcu_node structure are
+ offline.
+
+ "R" The kernel thread is running.
+
+ "W" The kernel thread is waiting because there is no work
+ for it to do.
+
+ "O" The kernel thread is waiting because it has been
+ forced off of its designated CPU or because its
+ ->cpus_allowed mask permits it to run on other than
+ its designated CPU.
+
+ "Y" The kernel thread is yielding to avoid hogging CPU.
+
+ "?" Unknown value, indicates a bug.
+
+ The number after the final slash is the CPU that the kthread
+ is actually running on.
+
+ This field is displayed only for CONFIG_RCU_BOOST kernels.
+
+o "ktl" is the low-order 16 bits (in hexadecimal) of the count of
+ the number of times that this CPU's per-CPU kthread has gone
+ through its loop servicing invoke_rcu_cpu_kthread() requests.
+
+ This field is displayed only for CONFIG_RCU_BOOST kernels.
+
+
+The output of "cat rcu/rcu_preempt/rcuexp" looks as follows:
+
+s=21872 d=21872 w=0 tf=0 wd1=0 wd2=0 n=0 sc=21872 dt=21872 dl=0 dx=21872
+
+These fields are as follows:
+
+o "s" is the starting sequence number.
+
+o "d" is the ending sequence number. When the starting and ending
+ numbers differ, there is an expedited grace period in progress.
+
+o "w" is the number of times that the sequence numbers have been
+ in danger of wrapping.
+
+o "tf" is the number of times that contention has resulted in a
+ failure to begin an expedited grace period.
+
+o "wd1" and "wd2" are the number of times that an attempt to
+ start an expedited grace period found that someone else had
+ completed an expedited grace period that satisfies the
+ attempted request. "Our work is done."
+
+o "n" is number of times that contention was so great that
+ the request was demoted from an expedited grace period to
+ a normal grace period.
+
+o "sc" is the number of times that the attempt to start a
+ new expedited grace period succeeded.
+
+o "dt" is the number of times that we attempted to update
+ the "d" counter.
+
+o "dl" is the number of times that we failed to update the "d"
+ counter.
+
+o "dx" is the number of times that we succeeded in updating
+ the "d" counter.
+
+
+The output of "cat rcu/rcu_preempt/rcugp" looks as follows:
+
+completed=31249 gpnum=31250 age=1 max=18
+
+These fields are taken from the rcu_state structure, and are as follows:
+
+o "completed" is the number of grace periods that have completed.
+ It is comparable to the "c" field from rcu/rcudata in that a
+ CPU whose "c" field matches the value of "completed" is aware
+ that the corresponding RCU grace period has completed.
+
+o "gpnum" is the number of grace periods that have started. It is
+ similarly comparable to the "g" field from rcu/rcudata in that
+ a CPU whose "g" field matches the value of "gpnum" is aware that
+ the corresponding RCU grace period has started.
+
+ If these two fields are equal, then there is no grace period
+ in progress, in other words, RCU is idle. On the other hand,
+ if the two fields differ (as they are above), then an RCU grace
+ period is in progress.
+
+o	"age" is the number of jiffies for which the current grace
+	period has extended, or zero if there is no grace period currently
+ in effect.
+
+o "max" is the age in jiffies of the longest-duration grace period
+ thus far.
+
+The output of "cat rcu/rcu_preempt/rcuhier" looks as follows:
+
+c=14407 g=14408 s=0 jfq=2 j=c863 nfqs=12040/nfqsng=0(12040) fqlh=1051 oqlen=0/0
+3/3 ..>. 0:7 ^0
+e/e ..>. 0:3 ^0 d/d ..>. 4:7 ^1
+
+The fields are as follows:
+
+o "c" is exactly the same as "completed" under rcu/rcu_preempt/rcugp.
+
+o "g" is exactly the same as "gpnum" under rcu/rcu_preempt/rcugp.
+
+o "s" is the current state of the force_quiescent_state()
+ state machine.
+
+o	"jfq" is the number of jiffies remaining for this grace period
+	before force_quiescent_state() is invoked to help push things
+	along.  Note that CPUs in idle mode throughout the grace period
+	will not report on their own, but rather must be checked by some
+	other CPU via force_quiescent_state().
+
+o "j" is the low-order four hex digits of the jiffies counter.
+ Yes, Paul did run into a number of problems that turned out to
+ be due to the jiffies counter no longer counting. Why do you ask?
+
+o "nfqs" is the number of calls to force_quiescent_state() since
+ boot.
+
+o "nfqsng" is the number of useless calls to force_quiescent_state(),
+ where there wasn't actually a grace period active. This can
+ no longer happen due to grace-period processing being pushed
+ into a kthread. The number in parentheses is the difference
+ between "nfqs" and "nfqsng", or the number of times that
+ force_quiescent_state() actually did some real work.
+
+o "fqlh" is the number of calls to force_quiescent_state() that
+ exited immediately (without even being counted in nfqs above)
+ due to contention on ->fqslock.
+
+o Each element of the form "3/3 ..>. 0:7 ^0" represents one rcu_node
+ structure. Each line represents one level of the hierarchy,
+ from root to leaves. It is best to think of the rcu_data
+ structures as forming yet another level after the leaves.
+ Note that there might be either one, two, three, or even four
+ levels of rcu_node structures, depending on the relationship
+ between CONFIG_RCU_FANOUT, CONFIG_RCU_FANOUT_LEAF (possibly
+ adjusted using the rcu_fanout_leaf kernel boot parameter), and
+ CONFIG_NR_CPUS (possibly adjusted using the nr_cpu_ids count of
+ possible CPUs for the booting hardware).
+
+ o The numbers separated by the "/" are the qsmask followed
+ by the qsmaskinit. The qsmask will have one bit
+ set for each entity in the next lower level that has
+		not yet checked in for the current grace period ("e"
+		indicating CPUs 1, 2, and 3 in the example above).
+ The qsmaskinit will have one bit for each entity that is
+ currently expected to check in during each grace period.
+ The value of qsmaskinit is assigned to that of qsmask
+ at the beginning of each grace period.
+
+ o The characters separated by the ">" indicate the state
+ of the blocked-tasks lists. A "G" preceding the ">"
+ indicates that at least one task blocked in an RCU
+ read-side critical section blocks the current grace
+		period, while an "E" preceding the ">" indicates that
+ at least one task blocked in an RCU read-side critical
+ section blocks the current expedited grace period.
+ A "T" character following the ">" indicates that at
+ least one task is blocked within an RCU read-side
+ critical section, regardless of whether any current
+ grace period (expedited or normal) is inconvenienced.
+ A "." character appears if the corresponding condition
+ does not hold, so that "..>." indicates that no tasks
+ are blocked. In contrast, "GE>T" indicates maximal
+ inconvenience from blocked tasks. CONFIG_TREE_RCU
+ builds of the kernel will always show "..>.".
+
+ o The numbers separated by the ":" are the range of CPUs
+ served by this struct rcu_node. This can be helpful
+ in working out how the hierarchy is wired together.
+
+ For example, the example rcu_node structure shown above
+ has "0:7", indicating that it covers CPUs 0 through 7.
+
+ o The number after the "^" indicates the bit in the
+ next higher level rcu_node structure that this rcu_node
+ structure corresponds to. For example, the "d/d ..>. 4:7
+ ^1" has a "1" in this position, indicating that it
+ corresponds to the "1" bit in the "3" shown in the
+ "3/3 ..>. 0:7 ^0" entry on the next level up.
+
+
+The output of "cat rcu/rcu_sched/rcu_pending" looks as follows:
+
+ 0!np=26111 qsp=29 rpq=5386 cbr=1 cng=570 gpc=3674 gps=577 nn=15903 ndw=0
+ 1!np=28913 qsp=35 rpq=6097 cbr=1 cng=448 gpc=3700 gps=554 nn=18113 ndw=0
+ 2!np=32740 qsp=37 rpq=6202 cbr=0 cng=476 gpc=4627 gps=546 nn=20889 ndw=0
+ 3 np=23679 qsp=22 rpq=5044 cbr=1 cng=415 gpc=3403 gps=347 nn=14469 ndw=0
+ 4!np=30714 qsp=4 rpq=5574 cbr=0 cng=528 gpc=3931 gps=639 nn=20042 ndw=0
+ 5 np=28910 qsp=2 rpq=5246 cbr=0 cng=428 gpc=4105 gps=709 nn=18422 ndw=0
+ 6!np=38648 qsp=5 rpq=7076 cbr=0 cng=840 gpc=4072 gps=961 nn=25699 ndw=0
+ 7 np=37275 qsp=2 rpq=6873 cbr=0 cng=868 gpc=3416 gps=971 nn=25147 ndw=0
+
+The fields are as follows:
+
+o The leading number is the CPU number, with "!" indicating
+ an offline CPU.
+
+o "np" is the number of times that __rcu_pending() has been invoked
+ for the corresponding flavor of RCU.
+
+o	"qsp" is the number of times that RCU was waiting for a
+ quiescent state from this CPU.
+
+o "rpq" is the number of times that the CPU had passed through
+ a quiescent state, but not yet reported it to RCU.
+
+o "cbr" is the number of times that this CPU had RCU callbacks
+ that had passed through a grace period, and were thus ready
+ to be invoked.
+
+o "cng" is the number of times that this CPU needed another
+ grace period while RCU was idle.
+
+o "gpc" is the number of times that an old grace period had
+ completed, but this CPU was not yet aware of it.
+
+o "gps" is the number of times that a new grace period had started,
+ but this CPU was not yet aware of it.
+
+o "ndw" is the number of times that a wakeup of an rcuo
+ callback-offload kthread had to be deferred in order to avoid
+ deadlock.
+
+o "nn" is the number of times that this CPU needed nothing.
+
+
+The output of "cat rcu/rcuboost" looks as follows:
+
+0:3 tasks=.... kt=W ntb=0 neb=0 nnb=0 j=c864 bt=c894
+ balk: nt=0 egt=4695 bt=0 nb=0 ny=56 nos=0
+4:7 tasks=.... kt=W ntb=0 neb=0 nnb=0 j=c864 bt=c894
+ balk: nt=0 egt=6541 bt=0 nb=0 ny=126 nos=0
+
+This information is output only for rcu_preempt. Each two-line entry
+corresponds to a leaf rcu_node structure. The fields are as follows:
+
+o "n:m" is the CPU-number range for the corresponding two-line
+ entry. In the sample output above, the first entry covers
+ CPUs zero through three and the second entry covers CPUs four
+ through seven.
+
+o "tasks=TNEB" gives the state of the various segments of the
+ rnp->blocked_tasks list:
+
+ "T" This indicates that there are some tasks that blocked
+ while running on one of the corresponding CPUs while
+ in an RCU read-side critical section.
+
+ "N" This indicates that some of the blocked tasks are preventing
+ the current normal (non-expedited) grace period from
+ completing.
+
+ "E" This indicates that some of the blocked tasks are preventing
+ the current expedited grace period from completing.
+
+ "B" This indicates that some of the blocked tasks are in
+ need of RCU priority boosting.
+
+ Each character is replaced with "." if the corresponding
+ condition does not hold.
+
+o "kt" is the state of the RCU priority-boosting kernel
+ thread associated with the corresponding rcu_node structure.
+ The state can be one of the following:
+
+ "S" The kernel thread is stopped, in other words, all
+ CPUs corresponding to this rcu_node structure are
+ offline.
+
+ "R" The kernel thread is running.
+
+ "W" The kernel thread is waiting because there is no work
+ for it to do.
+
+ "Y" The kernel thread is yielding to avoid hogging CPU.
+
+ "?" Unknown value, indicates a bug.
+
+o "ntb" is the number of tasks boosted.
+
+o "neb" is the number of tasks boosted in order to complete an
+ expedited grace period.
+
+o "nnb" is the number of tasks boosted in order to complete a
+ normal (non-expedited) grace period. When boosting a task
+ that was blocking both an expedited and a normal grace period,
+ it is counted against the expedited total above.
+
+o "j" is the low-order 16 bits of the jiffies counter in
+ hexadecimal.
+
+o "bt" is the low-order 16 bits of the value that the jiffies
+ counter will have when we next start boosting, assuming that
+ the current grace period does not end beforehand. This is
+ also in hexadecimal.
+
+o	"balk: nt" counts the number of times we didn't boost (in
+	other words, we balked), even though it was time to boost,
+	because there were no blocked tasks to boost.  This situation
+	occurs when there is one blocked task on one rcu_node structure
+	and none on some other rcu_node structure.
+
+o "egt" counts the number of times we balked because although
+ there were blocked tasks, none of them were blocking the
+ current grace period, whether expedited or otherwise.
+
+o "bt" counts the number of times we balked because boosting
+ had already been initiated for the current grace period.
+
+o	"nb" counts the number of times we balked because there
+	was at least one task blocking the current non-expedited grace
+	period that had never blocked.  If such a task is already
+	running, boosting its priority just won't help!
+
+o "ny" counts the number of times we balked because it was
+ not yet time to start boosting.
+
+o "nos" counts the number of times we balked for other
+ reasons, e.g., the grace period ended first.
+
+
+CONFIG_TINY_RCU debugfs Files and Formats
+
+These implementations of RCU provide a single debugfs file under the
+top-level directory RCU, namely rcu/rcudata, which displays fields in
+rcu_bh_ctrlblk and rcu_sched_ctrlblk.
+
+The output of "cat rcu/rcudata" is as follows:
+
+rcu_sched: qlen: 0
+rcu_bh: qlen: 0
+
+This is split into rcu_sched and rcu_bh sections. The field is as
+follows:
+
+o	"qlen" is the number of RCU callbacks currently waiting either
+	for an RCU grace period or to be invoked.  This is the
+	only field present for rcu_sched and rcu_bh, due to the
+	short-circuiting of grace periods in those two cases.
diff --git a/kernel/Documentation/RCU/whatisRCU.txt b/kernel/Documentation/RCU/whatisRCU.txt
new file mode 100644
index 000000000..88dfce182
--- /dev/null
+++ b/kernel/Documentation/RCU/whatisRCU.txt
@@ -0,0 +1,1031 @@
+Please note that the "What is RCU?" LWN series is an excellent place
+to start learning about RCU:
+
+1. What is RCU, Fundamentally? http://lwn.net/Articles/262464/
+2. What is RCU? Part 2: Usage http://lwn.net/Articles/263130/
+3. RCU part 3: the RCU API http://lwn.net/Articles/264090/
+4. The RCU API, 2010 Edition http://lwn.net/Articles/418853/
+
+
+What is RCU?
+
+RCU is a synchronization mechanism that was added to the Linux kernel
+during the 2.5 development effort and that is optimized for read-mostly
+situations. Although RCU is actually quite simple once you understand it,
+getting there can sometimes be a challenge. Part of the problem is that
+most of the past descriptions of RCU have been written with the mistaken
+assumption that there is "one true way" to describe RCU. Instead,
+the experience has been that different people must take different paths
+to arrive at an understanding of RCU. This document provides several
+different paths, as follows:
+
+1. RCU OVERVIEW
+2. WHAT IS RCU'S CORE API?
+3. WHAT ARE SOME EXAMPLE USES OF CORE RCU API?
+4. WHAT IF MY UPDATING THREAD CANNOT BLOCK?
+5. WHAT ARE SOME SIMPLE IMPLEMENTATIONS OF RCU?
+6. ANALOGY WITH READER-WRITER LOCKING
+7. FULL LIST OF RCU APIs
+8. ANSWERS TO QUICK QUIZZES
+
+People who prefer starting with a conceptual overview should focus on
+Section 1, though most readers will profit by reading this section at
+some point. People who prefer to start with an API that they can then
+experiment with should focus on Section 2. People who prefer to start
+with example uses should focus on Sections 3 and 4. People who need to
+understand the RCU implementation should focus on Section 5, then dive
+into the kernel source code. People who reason best by analogy should
+focus on Section 6. Section 7 serves as an index to the docbook API
+documentation, and Section 8 is the traditional answer key.
+
+So, start with the section that makes the most sense to you and your
+preferred method of learning. If you need to know everything about
+everything, feel free to read the whole thing -- but if you are really
+that type of person, you have perused the source code and will therefore
+never need this document anyway. ;-)
+
+
+1. RCU OVERVIEW
+
+The basic idea behind RCU is to split updates into "removal" and
+"reclamation" phases. The removal phase removes references to data items
+within a data structure (possibly by replacing them with references to
+new versions of these data items), and can run concurrently with readers.
+The reason that it is safe to run the removal phase concurrently with
+readers is that the semantics of modern CPUs guarantee that readers
+will see either the old or the new version of the data structure
+rather than a partially updated reference.  The reclamation phase does
+the work of reclaiming
+(e.g., freeing) the data items removed from the data structure during the
+removal phase. Because reclaiming data items can disrupt any readers
+concurrently referencing those data items, the reclamation phase must
+not start until readers no longer hold references to those data items.
+
+Splitting the update into removal and reclamation phases permits the
+updater to perform the removal phase immediately, and to defer the
+reclamation phase until all readers active during the removal phase have
+completed, either by blocking until they finish or by registering a
+callback that is invoked after they finish. Only readers that are active
+during the removal phase need be considered, because any reader starting
+after the removal phase will be unable to gain a reference to the removed
+data items, and therefore cannot be disrupted by the reclamation phase.
+
+So the typical RCU update sequence goes something like the following:
+
+a. Remove pointers to a data structure, so that subsequent
+ readers cannot gain a reference to it.
+
+b. Wait for all previous readers to complete their RCU read-side
+ critical sections.
+
+c. At this point, there cannot be any readers who hold references
+ to the data structure, so it now may safely be reclaimed
+ (e.g., kfree()d).
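+
+In kernel code, this sequence often takes the following minimal form
+(a sketch, assuming that "p" references an element of an RCU-protected
+linked list):
+
+	list_del_rcu(&p->list);		/* Step (a): remove pointers. */
+	synchronize_rcu();		/* Step (b): wait for readers. */
+	kfree(p);			/* Step (c): reclaim. */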
+
+Step (b) above is the key idea underlying RCU's deferred destruction.
+The ability to wait until all readers are done allows RCU readers to
+use much lighter-weight synchronization, in some cases, absolutely no
+synchronization at all. In contrast, in more conventional lock-based
+schemes, readers must use heavy-weight synchronization in order to
+prevent an updater from deleting the data structure out from under them.
+This is because lock-based updaters typically update data items in place,
+and must therefore exclude readers. In contrast, RCU-based updaters
+typically take advantage of the fact that writes to single aligned
+pointers are atomic on modern CPUs, allowing atomic insertion, removal,
+and replacement of data items in a linked structure without disrupting
+readers. Concurrent RCU readers can then continue accessing the old
+versions, and can dispense with the atomic operations, memory barriers,
+and communications cache misses that are so expensive on present-day
+SMP computer systems, even in the absence of lock contention.
+
+In the three-step procedure shown above, the updater is performing both
+the removal and the reclamation step, but it is often helpful for an
+entirely different thread to do the reclamation, as is in fact the case
+in the Linux kernel's directory-entry cache (dcache). Even if the same
+thread performs both the update step (step (a) above) and the reclamation
+step (step (c) above), it is often helpful to think of them separately.
+For example, RCU readers and updaters need not communicate at all,
+but RCU provides implicit low-overhead communication between readers
+and reclaimers, namely, in step (b) above.
+
+So how the heck can a reclaimer tell when a reader is done, given
+that readers are not doing any sort of synchronization operations???
+Read on to learn about how RCU's API makes this easy.
+
+
+2. WHAT IS RCU'S CORE API?
+
+The core RCU API is quite small:
+
+a. rcu_read_lock()
+b. rcu_read_unlock()
+c. synchronize_rcu() / call_rcu()
+d. rcu_assign_pointer()
+e. rcu_dereference()
+
+There are many other members of the RCU API, but the rest can be
+expressed in terms of these five, though most implementations instead
+express synchronize_rcu() in terms of the call_rcu() callback API.
+
+The five core RCU APIs are described below; the remaining APIs are
+enumerated in Section 7.  See the kernel docbook documentation for
+more info, or look directly
+at the function header comments.
+
+rcu_read_lock()
+
+ void rcu_read_lock(void);
+
+ Used by a reader to inform the reclaimer that the reader is
+ entering an RCU read-side critical section. It is illegal
+ to block while in an RCU read-side critical section, though
+ kernels built with CONFIG_PREEMPT_RCU can preempt RCU
+ read-side critical sections. Any RCU-protected data structure
+ accessed during an RCU read-side critical section is guaranteed to
+ remain unreclaimed for the full duration of that critical section.
+ Reference counts may be used in conjunction with RCU to maintain
+ longer-term references to data structures.
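+
+	For example, a reader might combine RCU with a reference count
+	in order to hold onto a structure after exiting the critical
+	section (a sketch; the "gbl_foo" pointer and its atomic_t
+	"refcnt" field are hypothetical):
+
+		rcu_read_lock();
+		p = rcu_dereference(gbl_foo);
+		if (p && !atomic_inc_not_zero(&p->refcnt))
+			p = NULL;	/* Structure is being freed. */
+		rcu_read_unlock();
+		/* Any non-NULL p may now be used until its reference
+		   count is dropped. */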
+
+rcu_read_unlock()
+
+ void rcu_read_unlock(void);
+
+ Used by a reader to inform the reclaimer that the reader is
+ exiting an RCU read-side critical section. Note that RCU
+ read-side critical sections may be nested and/or overlapping.
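+
+	For example, the following nesting is perfectly legal (a
+	sketch; the do_*() calls are placeholders):
+
+		rcu_read_lock();	/* Outer critical section. */
+		do_something();
+		rcu_read_lock();	/* Nesting is legal, and this... */
+		do_something_else();
+		rcu_read_unlock();	/* ...does not end the outer... */
+		do_yet_more();		/* ...critical section, */
+		rcu_read_unlock();	/* which ends here. */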
+
+synchronize_rcu()
+
+ void synchronize_rcu(void);
+
+ Marks the end of updater code and the beginning of reclaimer
+ code. It does this by blocking until all pre-existing RCU
+ read-side critical sections on all CPUs have completed.
+ Note that synchronize_rcu() will -not- necessarily wait for
+ any subsequent RCU read-side critical sections to complete.
+ For example, consider the following sequence of events:
+
+ CPU 0 CPU 1 CPU 2
+ ----------------- ------------------------- ---------------
+ 1. rcu_read_lock()
+ 2. enters synchronize_rcu()
+ 3. rcu_read_lock()
+ 4. rcu_read_unlock()
+ 5. exits synchronize_rcu()
+ 6. rcu_read_unlock()
+
+ To reiterate, synchronize_rcu() waits only for ongoing RCU
+ read-side critical sections to complete, not necessarily for
+ any that begin after synchronize_rcu() is invoked.
+
+ Of course, synchronize_rcu() does not necessarily return
+ -immediately- after the last pre-existing RCU read-side critical
+ section completes. For one thing, there might well be scheduling
+ delays. For another thing, many RCU implementations process
+	requests in batches in order to improve efficiency, which can
+ further delay synchronize_rcu().
+
+ Since synchronize_rcu() is the API that must figure out when
+ readers are done, its implementation is key to RCU. For RCU
+ to be useful in all but the most read-intensive situations,
+ synchronize_rcu()'s overhead must also be quite small.
+
+ The call_rcu() API is a callback form of synchronize_rcu(),
+ and is described in more detail in a later section. Instead of
+ blocking, it registers a function and argument which are invoked
+ after all ongoing RCU read-side critical sections have completed.
+ This callback variant is particularly useful in situations where
+ it is illegal to block or where update-side performance is
+ critically important.
+
+ However, the call_rcu() API should not be used lightly, as use
+ of the synchronize_rcu() API generally results in simpler code.
+ In addition, the synchronize_rcu() API has the nice property
+ of automatically limiting update rate should grace periods
+ be delayed. This property results in system resilience in face
+ of denial-of-service attacks. Code using call_rcu() should limit
+ update rate in order to gain this same sort of resilience. See
+ checklist.txt for some approaches to limiting the update rate.
+
+rcu_assign_pointer()
+
+ typeof(p) rcu_assign_pointer(p, typeof(p) v);
+
+ Yes, rcu_assign_pointer() -is- implemented as a macro, though it
+ would be cool to be able to declare a function in this manner.
+ (Compiler experts will no doubt disagree.)
+
+ The updater uses this function to assign a new value to an
+ RCU-protected pointer, in order to safely communicate the change
+ in value from the updater to the reader. This function returns
+ the new value, and also executes any memory-barrier instructions
+ required for a given CPU architecture.
+
+ Perhaps just as important, it serves to document (1) which
+ pointers are protected by RCU and (2) the point at which a
+ given structure becomes accessible to other CPUs. That said,
+ rcu_assign_pointer() is most frequently used indirectly, via
+ the _rcu list-manipulation primitives such as list_add_rcu().
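+
+	When used directly, the typical pattern is to fully initialize
+	a structure before publishing a pointer to it (a sketch; the
+	"gbl_foo" pointer and field "a" are hypothetical):
+
+		p = kmalloc(sizeof(*p), GFP_KERNEL);
+		if (p) {
+			p->a = 42;			/* Initialize... */
+			rcu_assign_pointer(gbl_foo, p);	/* ...then publish. */
+		}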
+
+rcu_dereference()
+
+ typeof(p) rcu_dereference(p);
+
+ Like rcu_assign_pointer(), rcu_dereference() must be implemented
+ as a macro.
+
+ The reader uses rcu_dereference() to fetch an RCU-protected
+ pointer, which returns a value that may then be safely
+	dereferenced.  Note that rcu_dereference() does not actually
+	dereference the pointer; instead, it protects the pointer for
+ later dereferencing. It also executes any needed memory-barrier
+ instructions for a given CPU architecture. Currently, only Alpha
+ needs memory barriers within rcu_dereference() -- on other CPUs,
+ it compiles to nothing, not even a compiler directive.
+
+ Common coding practice uses rcu_dereference() to copy an
+ RCU-protected pointer to a local variable, then dereferences
+ this local variable, for example as follows:
+
+ p = rcu_dereference(head.next);
+ return p->data;
+
+ However, in this case, one could just as easily combine these
+ into one statement:
+
+ return rcu_dereference(head.next)->data;
+
+ If you are going to be fetching multiple fields from the
+ RCU-protected structure, using the local variable is of
+ course preferred. Repeated rcu_dereference() calls look
+ ugly and incur unnecessary overhead on Alpha CPUs.
+
+ Note that the value returned by rcu_dereference() is valid
+ only within the enclosing RCU read-side critical section.
+ For example, the following is -not- legal:
+
+ rcu_read_lock();
+ p = rcu_dereference(head.next);
+ rcu_read_unlock();
+ x = p->address; /* BUG!!! */
+ rcu_read_lock();
+ y = p->data; /* BUG!!! */
+ rcu_read_unlock();
+
+ Holding a reference from one RCU read-side critical section
+ to another is just as illegal as holding a reference from
+ one lock-based critical section to another! Similarly,
+ using a reference outside of the critical section in which
+ it was acquired is just as illegal as doing so with normal
+ locking.
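+
+	The legal approach is to keep all uses of the pointer within a
+	single enclosing RCU read-side critical section, for example:
+
+		rcu_read_lock();
+		p = rcu_dereference(head.next);
+		x = p->address;
+		y = p->data;
+		rcu_read_unlock();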
+
+ As with rcu_assign_pointer(), an important function of
+ rcu_dereference() is to document which pointers are protected by
+ RCU, in particular, flagging a pointer that is subject to changing
+ at any time, including immediately after the rcu_dereference().
+ And, again like rcu_assign_pointer(), rcu_dereference() is
+ typically used indirectly, via the _rcu list-manipulation
+ primitives, such as list_for_each_entry_rcu().
+
+The following diagram shows how each API communicates among the
+reader, updater, and reclaimer.
+
+
+ rcu_assign_pointer()
+ +--------+
+ +---------------------->| reader |---------+
+ | +--------+ |
+ | | |
+ | | | Protect:
+ | | | rcu_read_lock()
+ | | | rcu_read_unlock()
+ | rcu_dereference() | |
+ +---------+ | |
+ | updater |<---------------------+ |
+ +---------+ V
+ | +-----------+
+ +----------------------------------->| reclaimer |
+ +-----------+
+ Defer:
+ synchronize_rcu() & call_rcu()
+
+
+The RCU infrastructure observes the time sequence of rcu_read_lock(),
+rcu_read_unlock(), synchronize_rcu(), and call_rcu() invocations in
+order to determine when (1) synchronize_rcu() invocations may return
+to their callers and (2) call_rcu() callbacks may be invoked. Efficient
+implementations of the RCU infrastructure make heavy use of batching in
+order to amortize their overhead over many uses of the corresponding APIs.
+
+There are no fewer than three RCU mechanisms in the Linux kernel; the
+diagram above shows the first one, which is by far the most commonly used.
+The rcu_dereference() and rcu_assign_pointer() primitives are used for
+all three mechanisms, but different defer and protect primitives are
+used as follows:
+
+ Defer Protect
+
+a. synchronize_rcu() rcu_read_lock() / rcu_read_unlock()
+ call_rcu() rcu_dereference()
+
+b. synchronize_rcu_bh() rcu_read_lock_bh() / rcu_read_unlock_bh()
+ call_rcu_bh() rcu_dereference_bh()
+
+c. synchronize_sched() rcu_read_lock_sched() / rcu_read_unlock_sched()
+ call_rcu_sched() preempt_disable() / preempt_enable()
+ local_irq_save() / local_irq_restore()
+ hardirq enter / hardirq exit
+ NMI enter / NMI exit
+ rcu_dereference_sched()
+
+These three mechanisms are used as follows:
+
+a. RCU applied to normal data structures.
+
+b. RCU applied to networking data structures that may be subjected
+ to remote denial-of-service attacks.
+
+c. RCU applied to scheduler and interrupt/NMI-handler tasks.
+
+Again, most uses will be of (a). The (b) and (c) cases are important
+for specialized uses, but are relatively uncommon.
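+
+For example, a reader using mechanism (b) might look as follows (a
+sketch; "gbl_p" and do_something_with() are hypothetical):
+
+	rcu_read_lock_bh();
+	p = rcu_dereference_bh(gbl_p);
+	if (p)
+		do_something_with(p);
+	rcu_read_unlock_bh();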
+
+
+3. WHAT ARE SOME EXAMPLE USES OF CORE RCU API?
+
+This section shows a simple use of the core RCU API to protect a
+global pointer to a dynamically allocated structure. More-typical
+uses of RCU may be found in listRCU.txt, arrayRCU.txt, and NMI-RCU.txt.
+
+ struct foo {
+ int a;
+ char b;
+ long c;
+ };
+ DEFINE_SPINLOCK(foo_mutex);
+
+ struct foo *gbl_foo;
+
+ /*
+ * Create a new struct foo that is the same as the one currently
+ * pointed to by gbl_foo, except that field "a" is replaced
+ * with "new_a". Points gbl_foo to the new structure, and
+ * frees up the old structure after a grace period.
+ *
+ * Uses rcu_assign_pointer() to ensure that concurrent readers
+ * see the initialized version of the new structure.
+ *
+ * Uses synchronize_rcu() to ensure that any readers that might
+ * have references to the old structure complete before freeing
+ * the old structure.
+ */
+ void foo_update_a(int new_a)
+ {
+ struct foo *new_fp;
+ struct foo *old_fp;
+
+ new_fp = kmalloc(sizeof(*new_fp), GFP_KERNEL);
+ spin_lock(&foo_mutex);
+ old_fp = gbl_foo;
+ *new_fp = *old_fp;
+ new_fp->a = new_a;
+ rcu_assign_pointer(gbl_foo, new_fp);
+ spin_unlock(&foo_mutex);
+ synchronize_rcu();
+ kfree(old_fp);
+ }
+
+ /*
+ * Return the value of field "a" of the current gbl_foo
+ * structure. Use rcu_read_lock() and rcu_read_unlock()
+ * to ensure that the structure does not get deleted out
+ * from under us, and use rcu_dereference() to ensure that
+ * we see the initialized version of the structure (important
+ * for DEC Alpha and for people reading the code).
+ */
+ int foo_get_a(void)
+ {
+ int retval;
+
+ rcu_read_lock();
+ retval = rcu_dereference(gbl_foo)->a;
+ rcu_read_unlock();
+ return retval;
+ }
+
+So, to sum up:
+
+o Use rcu_read_lock() and rcu_read_unlock() to guard RCU
+ read-side critical sections.
+
+o Within an RCU read-side critical section, use rcu_dereference()
+ to dereference RCU-protected pointers.
+
+o Use some solid scheme (such as locks or semaphores) to
+ keep concurrent updates from interfering with each other.
+
+o Use rcu_assign_pointer() to update an RCU-protected pointer.
+ This primitive protects concurrent readers from the updater,
+ -not- concurrent updates from each other! You therefore still
+ need to use locking (or something similar) to keep concurrent
+ rcu_assign_pointer() primitives from interfering with each other.
+
+o Use synchronize_rcu() -after- removing a data element from an
+ RCU-protected data structure, but -before- reclaiming/freeing
+ the data element, in order to wait for the completion of all
+ RCU read-side critical sections that might be referencing that
+ data item.
+
+See checklist.txt for additional rules to follow when using RCU.
+And again, more-typical uses of RCU may be found in listRCU.txt,
+arrayRCU.txt, and NMI-RCU.txt.
+
+
+4. WHAT IF MY UPDATING THREAD CANNOT BLOCK?
+
+In the example above, foo_update_a() blocks until a grace period elapses.
+This is quite simple, but in some cases one cannot afford to wait so
+long -- there might be other high-priority work to be done.
+
+In such cases, one uses call_rcu() rather than synchronize_rcu().
+The call_rcu() API is as follows:
+
+ void call_rcu(struct rcu_head * head,
+ void (*func)(struct rcu_head *head));
+
+This function invokes func(head) after a grace period has elapsed.
+This invocation might happen from either softirq or process context,
+so the function is not permitted to block. The foo struct needs to
+have an rcu_head structure added, perhaps as follows:
+
+ struct foo {
+ int a;
+ char b;
+ long c;
+ struct rcu_head rcu;
+ };
+
+The foo_update_a() function might then be written as follows:
+
+ /*
+ * Create a new struct foo that is the same as the one currently
+ * pointed to by gbl_foo, except that field "a" is replaced
+ * with "new_a". Points gbl_foo to the new structure, and
+ * frees up the old structure after a grace period.
+ *
+ * Uses rcu_assign_pointer() to ensure that concurrent readers
+ * see the initialized version of the new structure.
+ *
+ * Uses call_rcu() to ensure that any readers that might have
+ * references to the old structure complete before freeing the
+ * old structure.
+ */
+ void foo_update_a(int new_a)
+ {
+ struct foo *new_fp;
+ struct foo *old_fp;
+
+ new_fp = kmalloc(sizeof(*new_fp), GFP_KERNEL);
+ spin_lock(&foo_mutex);
+ old_fp = gbl_foo;
+ *new_fp = *old_fp;
+ new_fp->a = new_a;
+ rcu_assign_pointer(gbl_foo, new_fp);
+ spin_unlock(&foo_mutex);
+ call_rcu(&old_fp->rcu, foo_reclaim);
+ }
+
+The foo_reclaim() function might appear as follows:
+
+ void foo_reclaim(struct rcu_head *rp)
+ {
+ struct foo *fp = container_of(rp, struct foo, rcu);
+
+ foo_cleanup(fp->a);
+
+ kfree(fp);
+ }
+
+The container_of() primitive is a macro that, given a pointer into a
+struct, the type of the struct, and the pointed-to field within the
+struct, returns a pointer to the beginning of the struct.
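+
+For example, given a pointer fp to one of the struct foo instances
+defined above, the following recovers fp from a pointer to its
+rcu field:
+
+	struct foo *fp2 = container_of(&fp->rcu, struct foo, rcu);
+	/* fp2 == fp */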
+
+The use of call_rcu() permits the caller of foo_update_a() to
+immediately regain control, without needing to worry further about the
+old version of the newly updated element. It also clearly shows the
+RCU distinction between updater, namely foo_update_a(), and reclaimer,
+namely foo_reclaim().
+
+The summary of advice is the same as for the previous section, except
+that we are now using call_rcu() rather than synchronize_rcu():
+
+o Use call_rcu() -after- removing a data element from an
+ RCU-protected data structure in order to register a callback
+ function that will be invoked after the completion of all RCU
+ read-side critical sections that might be referencing that
+ data item.
+
+If the callback for call_rcu() is not doing anything more than calling
+kfree() on the structure, you can use kfree_rcu() instead of call_rcu()
+to avoid having to write your own callback:
+
+ kfree_rcu(old_fp, rcu);
+
+Again, see checklist.txt for additional rules governing the use of RCU.
+
+
+5. WHAT ARE SOME SIMPLE IMPLEMENTATIONS OF RCU?
+
+One of the nice things about RCU is that it has extremely simple "toy"
+implementations that are a good first step towards understanding the
+production-quality implementations in the Linux kernel. This section
+presents two such "toy" implementations of RCU, one that is implemented
+in terms of familiar locking primitives, and another that more closely
+resembles "classic" RCU. Both are way too simple for real-world use,
+lacking both functionality and performance. However, they are useful
+in getting a feel for how RCU works. See kernel/rcupdate.c for a
+production-quality implementation, and see:
+
+ http://www.rdrop.com/users/paulmck/RCU
+
+for papers describing the Linux kernel RCU implementation. The OLS'01
+and OLS'02 papers are a good introduction, and the dissertation provides
+more details on the current implementation as of early 2004.
+
+
+5A. "TOY" IMPLEMENTATION #1: LOCKING
+
+This section presents a "toy" RCU implementation that is based on
+familiar locking primitives. Its overhead makes it a non-starter for
+real-life use, as does its lack of scalability. It is also unsuitable
+for realtime use, since it allows scheduling latency to "bleed" from
+one read-side critical section to another.
+
+However, it is probably the easiest implementation to relate to, so it
+is a good starting point.
+
+It is extremely simple:
+
+ static DEFINE_RWLOCK(rcu_gp_mutex);
+
+ void rcu_read_lock(void)
+ {
+ read_lock(&rcu_gp_mutex);
+ }
+
+ void rcu_read_unlock(void)
+ {
+ read_unlock(&rcu_gp_mutex);
+ }
+
+ void synchronize_rcu(void)
+ {
+ write_lock(&rcu_gp_mutex);
+ write_unlock(&rcu_gp_mutex);
+ }
+
+[You can ignore rcu_assign_pointer() and rcu_dereference() without
+missing much. But here they are anyway. And whatever you do, don't
+forget about them when submitting patches making use of RCU!]
+
+ #define rcu_assign_pointer(p, v) ({ \
+ smp_wmb(); \
+ (p) = (v); \
+ })
+
+ #define rcu_dereference(p) ({ \
+ typeof(p) _________p1 = p; \
+ smp_read_barrier_depends(); \
+ (_________p1); \
+ })
+
+
+The rcu_read_lock() and rcu_read_unlock() primitives read-acquire
+and release a global reader-writer lock. The synchronize_rcu()
+primitive write-acquires this same lock, then immediately releases
+it. This means that once synchronize_rcu() exits, all RCU read-side
+critical sections that were in progress before synchronize_rcu() was
+called are guaranteed to have completed -- there is no way that
+synchronize_rcu() would have been able to write-acquire the lock
+otherwise.
+
+It is possible to nest rcu_read_lock(), since reader-writer locks may
+be recursively acquired. Note also that rcu_read_lock() is immune
+from deadlock (an important property of RCU). The reason for this is
+that the only thing that can block rcu_read_lock() is a synchronize_rcu().
+But synchronize_rcu() does not acquire any locks while holding rcu_gp_mutex,
+so there can be no deadlock cycle.
+
+Quick Quiz #1: Why is this argument naive? How could a deadlock
+ occur when using this algorithm in a real-world Linux
+ kernel? How could this deadlock be avoided?
+
+
+5B. "TOY" EXAMPLE #2: CLASSIC RCU
+
+This section presents a "toy" RCU implementation that is based on
+"classic RCU". It is also short on performance (but only for updates) and
+on features such as hotplug CPU and the ability to run in CONFIG_PREEMPT
+kernels. The definitions of rcu_dereference() and rcu_assign_pointer()
+are the same as those shown in the preceding section, so they are omitted.
+
+ void rcu_read_lock(void) { }
+
+ void rcu_read_unlock(void) { }
+
+ void synchronize_rcu(void)
+ {
+ int cpu;
+
+ for_each_possible_cpu(cpu)
+ run_on(cpu);
+ }
+
+Note that rcu_read_lock() and rcu_read_unlock() do absolutely nothing.
+This is the great strength of classic RCU in a non-preemptive kernel:
+read-side overhead is precisely zero, at least on non-Alpha CPUs.
+And there is absolutely no way that rcu_read_lock() can possibly
+participate in a deadlock cycle!
+
+The implementation of synchronize_rcu() simply schedules itself on each
+CPU in turn. The run_on() primitive can be implemented straightforwardly
+in terms of the sched_setaffinity() primitive. Of course, a somewhat less
+"toy" implementation would restore the affinity upon completion rather
+than just leaving all tasks running on the last CPU, but when I said
+"toy", I meant -toy-!
+
+So how the heck is this supposed to work???
+
+Remember that it is illegal to block while in an RCU read-side critical
+section. Therefore, if a given CPU executes a context switch, we know
+that it must have completed all preceding RCU read-side critical sections.
+Once -all- CPUs have executed a context switch, then -all- preceding
+RCU read-side critical sections will have completed.
+
+So, suppose that we remove a data item from its structure and then invoke
+synchronize_rcu(). Once synchronize_rcu() returns, we are guaranteed
+that there are no RCU read-side critical sections holding a reference
+to that data item, so we can safely reclaim it.
+
+Quick Quiz #2: Give an example where Classic RCU's read-side
+ overhead is -negative-.
+
+Quick Quiz #3: If it is illegal to block in an RCU read-side
+ critical section, what the heck do you do in
+ PREEMPT_RT, where normal spinlocks can block???
+
+
+6. ANALOGY WITH READER-WRITER LOCKING
+
+Although RCU can be used in many different ways, a very common use of
+RCU is analogous to reader-writer locking. The following unified
+diff shows how closely related RCU and reader-writer locking can be.
+
+ @@ -13,15 +14,15 @@
+ struct list_head *lp;
+ struct el *p;
+
+ - read_lock();
+ - list_for_each_entry(p, head, lp) {
+ + rcu_read_lock();
+ + list_for_each_entry_rcu(p, head, lp) {
+ if (p->key == key) {
+ *result = p->data;
+ - read_unlock();
+ + rcu_read_unlock();
+ return 1;
+ }
+ }
+ - read_unlock();
+ + rcu_read_unlock();
+ return 0;
+ }
+
+ @@ -29,15 +30,16 @@
+ {
+ struct el *p;
+
+ - write_lock(&listmutex);
+ + spin_lock(&listmutex);
+ list_for_each_entry(p, head, lp) {
+ if (p->key == key) {
+ - list_del(&p->list);
+ - write_unlock(&listmutex);
+ + list_del_rcu(&p->list);
+ + spin_unlock(&listmutex);
+ + synchronize_rcu();
+ kfree(p);
+ return 1;
+ }
+ }
+ - write_unlock(&listmutex);
+ + spin_unlock(&listmutex);
+ return 0;
+ }
+
+Or, for those who prefer a side-by-side listing:
+
+ 1 struct el { 1 struct el {
+ 2 struct list_head list; 2 struct list_head list;
+ 3 long key; 3 long key;
+ 4 spinlock_t mutex; 4 spinlock_t mutex;
+ 5 int data; 5 int data;
+ 6 /* Other data fields */ 6 /* Other data fields */
+ 7 }; 7 };
+ 8 spinlock_t listmutex; 8 spinlock_t listmutex;
+ 9 struct el head; 9 struct el head;
+
+ 1 int search(long key, int *result) 1 int search(long key, int *result)
+ 2 { 2 {
+ 3 struct list_head *lp; 3 struct list_head *lp;
+ 4 struct el *p; 4 struct el *p;
+ 5 5
+ 6 read_lock(); 6 rcu_read_lock();
+ 7 list_for_each_entry(p, head, lp) { 7 list_for_each_entry_rcu(p, head, lp) {
+ 8 if (p->key == key) { 8 if (p->key == key) {
+ 9 *result = p->data; 9 *result = p->data;
+10 read_unlock(); 10 rcu_read_unlock();
+11 return 1; 11 return 1;
+12 } 12 }
+13 } 13 }
+14 read_unlock(); 14 rcu_read_unlock();
+15 return 0; 15 return 0;
+16 } 16 }
+
+ 1 int delete(long key) 1 int delete(long key)
+ 2 { 2 {
+ 3 struct el *p; 3 struct el *p;
+ 4 4
+ 5 write_lock(&listmutex); 5 spin_lock(&listmutex);
+ 6 list_for_each_entry(p, head, lp) { 6 list_for_each_entry(p, head, lp) {
+ 7 if (p->key == key) { 7 if (p->key == key) {
+ 8 list_del(&p->list); 8 list_del_rcu(&p->list);
+ 9 write_unlock(&listmutex); 9 spin_unlock(&listmutex);
+ 10 synchronize_rcu();
+10 kfree(p); 11 kfree(p);
+11 return 1; 12 return 1;
+12 } 13 }
+13 } 14 }
+14 write_unlock(&listmutex); 15 spin_unlock(&listmutex);
+15 return 0; 16 return 0;
+16 } 17 }
+
+Either way, the differences are quite small. Read-side locking moves
+to rcu_read_lock() and rcu_read_unlock(), update-side locking moves from
+a reader-writer lock to a simple spinlock, and a synchronize_rcu()
+precedes the kfree().
+
+However, there is one potential catch: the read-side and update-side
+critical sections can now run concurrently. In many cases, this will
+not be a problem, but it is necessary to check carefully regardless.
+For example, if multiple independent list updates must be seen as
+a single atomic update, converting to RCU will require special care.
+
+Also, the presence of synchronize_rcu() means that the RCU version of
+delete() can now block. If this is a problem, there is a callback-based
+mechanism that never blocks, namely call_rcu() or kfree_rcu(), that can
+be used in place of synchronize_rcu().
+
+
+7. FULL LIST OF RCU APIs
+
+The RCU APIs are documented in docbook-format header comments in the
+Linux-kernel source code, but it helps to have a full list of the
+APIs, since there does not appear to be a way to categorize them
+in docbook. Here is the list, by category.
+
+RCU list traversal:
+
+ list_entry_rcu
+ list_first_entry_rcu
+ list_next_rcu
+ list_for_each_entry_rcu
+ list_for_each_entry_continue_rcu
+ hlist_first_rcu
+ hlist_next_rcu
+ hlist_pprev_rcu
+ hlist_for_each_entry_rcu
+ hlist_for_each_entry_rcu_bh
+ hlist_for_each_entry_continue_rcu
+ hlist_for_each_entry_continue_rcu_bh
+ hlist_nulls_first_rcu
+ hlist_nulls_for_each_entry_rcu
+ hlist_bl_first_rcu
+ hlist_bl_for_each_entry_rcu
+
+RCU pointer/list update:
+
+ rcu_assign_pointer
+ list_add_rcu
+ list_add_tail_rcu
+ list_del_rcu
+ list_replace_rcu
+ hlist_add_behind_rcu
+ hlist_add_before_rcu
+ hlist_add_head_rcu
+ hlist_del_rcu
+ hlist_del_init_rcu
+ hlist_replace_rcu
+ list_splice_init_rcu()
+ hlist_nulls_del_init_rcu
+ hlist_nulls_del_rcu
+ hlist_nulls_add_head_rcu
+ hlist_bl_add_head_rcu
+ hlist_bl_del_init_rcu
+ hlist_bl_del_rcu
+ hlist_bl_set_first_rcu
+
+RCU: Critical sections Grace period Barrier
+
+ rcu_read_lock synchronize_net rcu_barrier
+ rcu_read_unlock synchronize_rcu
+ rcu_dereference synchronize_rcu_expedited
+ rcu_read_lock_held call_rcu
+ rcu_dereference_check kfree_rcu
+ rcu_dereference_protected
+
+bh: Critical sections Grace period Barrier
+
+ rcu_read_lock_bh call_rcu_bh rcu_barrier_bh
+ rcu_read_unlock_bh synchronize_rcu_bh
+ rcu_dereference_bh synchronize_rcu_bh_expedited
+ rcu_dereference_bh_check
+ rcu_dereference_bh_protected
+ rcu_read_lock_bh_held
+
+sched: Critical sections Grace period Barrier
+
+ rcu_read_lock_sched synchronize_sched rcu_barrier_sched
+ rcu_read_unlock_sched call_rcu_sched
+ [preempt_disable] synchronize_sched_expedited
+ [and friends]
+ rcu_read_lock_sched_notrace
+ rcu_read_unlock_sched_notrace
+ rcu_dereference_sched
+ rcu_dereference_sched_check
+ rcu_dereference_sched_protected
+ rcu_read_lock_sched_held
+
+
+SRCU: Critical sections Grace period Barrier
+
+ srcu_read_lock synchronize_srcu srcu_barrier
+ srcu_read_unlock call_srcu
+ srcu_dereference synchronize_srcu_expedited
+ srcu_dereference_check
+ srcu_read_lock_held
+
+SRCU: Initialization/cleanup
+ init_srcu_struct
+ cleanup_srcu_struct
+
+All: lockdep-checked RCU-protected pointer access
+
+ rcu_access_index
+ rcu_access_pointer
+ rcu_dereference_index_check
+ rcu_dereference_raw
+ rcu_lockdep_assert
+ rcu_sleep_check
+ RCU_NONIDLE
+
+See the comment headers in the source code (or the docbook generated
+from them) for more information.
+
+However, given that there are no fewer than four families of RCU APIs
+in the Linux kernel, how do you choose which one to use? The following
+list can be helpful:
+
+a. Will readers need to block? If so, you need SRCU.
+
+b.	What about the -rt patchset?  If readers would need to block
+	in a non-rt kernel, you need SRCU.  If readers would block
+ in a -rt kernel, but not in a non-rt kernel, SRCU is not
+ necessary.
+
+c. Do you need to treat NMI handlers, hardirq handlers,
+ and code segments with preemption disabled (whether
+ via preempt_disable(), local_irq_save(), local_bh_disable(),
+ or some other mechanism) as if they were explicit RCU readers?
+ If so, RCU-sched is the only choice that will work for you.
+
+d. Do you need RCU grace periods to complete even in the face
+ of softirq monopolization of one or more of the CPUs? For
+ example, is your code subject to network-based denial-of-service
+ attacks? If so, you need RCU-bh.
+
+e. Is your workload too update-intensive for normal use of
+ RCU, but inappropriate for other synchronization mechanisms?
+ If so, consider SLAB_DESTROY_BY_RCU. But please be careful!
+
+f. Do you need read-side critical sections that are respected
+ even though they are in the middle of the idle loop, during
+ user-mode execution, or on an offlined CPU? If so, SRCU is the
+ only choice that will work for you.
+
+g. Otherwise, use RCU.
+
+Of course, this all assumes that you have determined that RCU is in fact
+the right tool for your job.
+
+
+8. ANSWERS TO QUICK QUIZZES
+
+Quick Quiz #1: Why is this argument naive? How could a deadlock
+ occur when using this algorithm in a real-world Linux
+ kernel? [Referring to the lock-based "toy" RCU
+ algorithm.]
+
+Answer: Consider the following sequence of events:
+
+ 1. CPU 0 acquires some unrelated lock, call it
+ "problematic_lock", disabling irq via
+ spin_lock_irqsave().
+
+ 2. CPU 1 enters synchronize_rcu(), write-acquiring
+ rcu_gp_mutex.
+
+ 3. CPU 0 enters rcu_read_lock(), but must wait
+ because CPU 1 holds rcu_gp_mutex.
+
+ 4. CPU 1 is interrupted, and the irq handler
+ attempts to acquire problematic_lock.
+
+ The system is now deadlocked.
+
+ One way to avoid this deadlock is to use an approach like
+ that of CONFIG_PREEMPT_RT, where all normal spinlocks
+ become blocking locks, and all irq handlers execute in
+ the context of special tasks. In this case, in step 4
+ above, the irq handler would block, allowing CPU 1 to
+ release rcu_gp_mutex, avoiding the deadlock.
+
+ Even in the absence of deadlock, this RCU implementation
+ allows latency to "bleed" from readers to other
+ readers through synchronize_rcu(). To see this,
+ consider task A in an RCU read-side critical section
+ (thus read-holding rcu_gp_mutex), task B blocked
+ attempting to write-acquire rcu_gp_mutex, and
+ task C blocked in rcu_read_lock() attempting to
+	read-acquire rcu_gp_mutex.  Task A's RCU read-side
+ latency is holding up task C, albeit indirectly via
+ task B.
+
+ Realtime RCU implementations therefore use a counter-based
+ approach where tasks in RCU read-side critical sections
+ cannot be blocked by tasks executing synchronize_rcu().
+
+Quick Quiz #2: Give an example where Classic RCU's read-side
+ overhead is -negative-.
+
+Answer: Imagine a single-CPU system with a non-CONFIG_PREEMPT
+ kernel where a routing table is used by process-context
+ code, but can be updated by irq-context code (for example,
+ by an "ICMP REDIRECT" packet). The usual way of handling
+ this would be to have the process-context code disable
+ interrupts while searching the routing table. Use of
+ RCU allows such interrupt-disabling to be dispensed with.
+ Thus, without RCU, you pay the cost of disabling interrupts,
+ and with RCU you don't.
+
+ One can argue that the overhead of RCU in this
+ case is negative with respect to the single-CPU
+ interrupt-disabling approach. Others might argue that
+ the overhead of RCU is merely zero, and that replacing
+ the positive overhead of the interrupt-disabling scheme
+ with the zero-overhead RCU scheme does not constitute
+ negative overhead.
+
+ In real life, of course, things are more complex. But
+ even the theoretical possibility of negative overhead for
+ a synchronization primitive is a bit unexpected. ;-)
+
+Quick Quiz #3: If it is illegal to block in an RCU read-side
+ critical section, what the heck do you do in
+ PREEMPT_RT, where normal spinlocks can block???
+
+Answer: Just as PREEMPT_RT permits preemption of spinlock
+ critical sections, it permits preemption of RCU
+ read-side critical sections. It also permits
+ spinlocks blocking while in RCU read-side critical
+ sections.
+
+	Why the apparent inconsistency?  Because it is
+ possible to use priority boosting to keep the RCU
+ grace periods short if need be (for example, if running
+ short of memory). In contrast, if blocking waiting
+ for (say) network reception, there is no way to know
+ what should be boosted. Especially given that the
+ process we need to boost might well be a human being
+ who just went out for a pizza or something. And although
+ a computer-operated cattle prod might arouse serious
+ interest, it might also provoke serious objections.
+ Besides, how does the computer know what pizza parlor
+ the human being went to???
+
+
+ACKNOWLEDGEMENTS
+
+My thanks to the people who helped make this human-readable, including
+Jon Walpole, Josh Triplett, Serge Hallyn, Suzanne Wood, and Alan Stern.
+
+
+For more information, see http://www.rdrop.com/users/paulmck/RCU.