delays - Information on the various kernel delay / sleep mechanisms
-------------------------------------------------------------------

This document seeks to answer the common question: "What is the
RightWay (TM) to insert a delay?"

This question is most often faced by driver writers who have to
deal with hardware delays and who may not be the most intimately
familiar with the inner workings of the Linux Kernel.


Inserting Delays
----------------

The first, and most important, question you need to ask is "Is my
code in an atomic context?"  This should be followed closely by "Does
it really need to delay in atomic context?" If so...

ATOMIC CONTEXT:
	You must use the *delay family of functions. These
	functions use the jiffy estimation of clock speed
	and will busy wait for enough loop cycles to achieve
	the desired delay:

	ndelay(unsigned long nsecs)
	udelay(unsigned long usecs)
	mdelay(unsigned long msecs)

	udelay is the generally preferred API; ndelay-level
	precision may not actually exist on many non-PC devices.

	mdelay is a macro wrapper around udelay, to account for
	possible overflow when passing large arguments to udelay.
	In general, use of mdelay is discouraged and code should
	be refactored to allow for the use of msleep.
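
	Example (hypothetical): a driver holding a spinlock might wait
	out a short, datasheet-specified settling time like this (the
	device structure, register offsets and command bits below are
	invented for illustration):

	spin_lock_irqsave(&dev->lock, flags);
	writel(CMD_RESET, dev->regs + REG_CMD);    /* hypothetical register */
	udelay(10);                                /* ~10us settling time */
	status = readl(dev->regs + REG_STATUS);
	spin_unlock_irqrestore(&dev->lock, flags);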

NON-ATOMIC CONTEXT:
	You should use the *sleep[_range] family of functions.
	There are a few more options here; while any of them may
	work correctly, using the "right" sleep function will
	help the scheduler and power management, and simply make
	your driver better :)

	-- Backed by busy-wait loop:
		udelay(unsigned long usecs)
	-- Backed by hrtimers:
		usleep_range(unsigned long min, unsigned long max)
	-- Backed by jiffies / legacy_timers:
		msleep(unsigned long msecs)
		msleep_interruptible(unsigned long msecs)

	Unlike the *delay family, the underlying mechanism
	driving each of these calls varies, thus there are
	quirks you should be aware of.


	SLEEPING FOR "A FEW" USECS ( < ~10us? ):
		* Use udelay

		- Why not usleep?
			On slower systems (embedded, or perhaps a speed-
			stepped PC!) the overhead of setting up the hrtimers
			for usleep *may* not be worth it. Such an evaluation
			will obviously depend on your specific situation, but
			it is something to be aware of.
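
		- Example (hypothetical):
			A minimal sketch; the GPIO handle and the 2us pulse
			width are made up for illustration. Even in process
			context, a wait this short is usually cheaper to
			busy-wait than to schedule:

			gpiod_set_value(dev->strobe_gpio, 1);
			udelay(2);                 /* ~2us strobe pulse */
			gpiod_set_value(dev->strobe_gpio, 0);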

	SLEEPING FOR ~USECS OR SMALL MSECS ( 10us - 20ms):
		* Use usleep_range

		- Why not msleep for (1ms - 20ms)?
			Explained originally here:
				http://lkml.org/lkml/2007/8/3/250
			msleep(1~20) may not do what the caller intends, and
			will often sleep longer (~20 ms actual sleep for any
			value given in the 1~20ms range). In many cases this
			is not the desired behavior.

		- Why is there no "usleep" / What is a good range?
			Since usleep_range is built on top of hrtimers, the
			wakeup will be very precise (ish), thus a simple
			usleep function would likely introduce a large number
			of undesired interrupts.

			With the introduction of a range, the scheduler is
			free to coalesce your wakeup with any other wakeup
			that may have happened for other reasons, or at the
			worst case, fire an interrupt for your upper bound.

			The larger a range you supply, the greater a chance
			that you will not trigger an interrupt; this should
			be balanced with what is an acceptable upper bound on
			delay / performance for your specific code path. Exact
			tolerances here are very situation specific, thus it
			is left to the caller to determine a reasonable range.
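
		- Example (hypothetical):
			A driver expecting a conversion to take roughly 100us
			might allow the wakeup to slip as far as 200us so it
			can be coalesced (the command, registers and timings
			here are invented for illustration):

			writeb(CMD_CONVERT, dev->regs + REG_CMD);
			usleep_range(100, 200);    /* min 100us, max 200us */
			val = readw(dev->regs + REG_DATA);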

	SLEEPING FOR LARGER MSECS ( 10ms+ ):
		* Use msleep or possibly msleep_interruptible

		- What's the difference?
			msleep sets the current task to TASK_UNINTERRUPTIBLE
			whereas msleep_interruptible sets the current task to
			TASK_INTERRUPTIBLE before scheduling the sleep. In
			short, the difference is whether the sleep can be ended
			early by a signal. In general, just use msleep unless
			you know you have a need for the interruptible variant.
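
		- Example (hypothetical):
			A long poll loop sleeping 50ms per pass; a nonzero
			return from msleep_interruptible means a signal cut
			the sleep short (the readiness register and retry
			count are invented for illustration):

			for (i = 0; i < 100; i++) {
				if (readl(dev->regs + REG_READY))
					return 0;
				if (msleep_interruptible(50))
					return -ERESTARTSYS;
			}
			return -ETIMEDOUT;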