=================
Prometheus plugin
=================
Provides a Prometheus exporter to pass on Ceph performance counters
from the collection point in ceph-mgr. Ceph-mgr receives MMgrReport
messages from all MgrClient processes (mons and OSDs, for instance)
with performance counter schema data and actual counter data, and keeps
a circular buffer of the last N samples. This plugin creates an HTTP
endpoint (like all Prometheus exporters) and retrieves the latest sample
of every counter when polled (or "scraped" in Prometheus terminology).
The HTTP path and query parameters are ignored; all extant counters
for all reporting entities are returned in text exposition format.
(See the Prometheus `documentation <https://prometheus.io/docs/instrumenting/exposition_formats/#text-format-details>`_.)
Enabling prometheus output
==========================
The *prometheus* module is enabled with::

    ceph mgr module enable prometheus
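To confirm that the module is running and serving metrics, you can list the
enabled modules and fetch the endpoint directly. This is only a sketch: the
hostname ``myhost`` is illustrative, and the port shown is the default
described below::

    ceph mgr module ls
    curl http://myhost:9283/metrics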
Configuration
-------------
By default the module will accept HTTP requests on port ``9283`` on all
IPv4 and IPv6 addresses on the host. The port and listen address are both
configurable with ``ceph config-key set``, with keys
``mgr/prometheus/server_addr`` and ``mgr/prometheus/server_port``.
This port is registered with Prometheus's `registry <https://github.com/prometheus/prometheus/wiki/Default-port-allocations>`_.
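For example, to change the listen address and port you might run commands
like the following (the values shown are illustrative)::

    ceph config-key set mgr/prometheus/server_addr 192.168.0.10
    ceph config-key set mgr/prometheus/server_port 9283

The module reads these keys when it starts, so you may need to disable and
re-enable the module for a change to take effect.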
Statistic names and labels
==========================
The names of the stats are exactly as Ceph names them, with
illegal characters ``.``, ``-`` and ``::`` translated to ``_``,
and ``ceph_`` prefixed to all names.
All *daemon* statistics have a ``ceph_daemon`` label such as "osd.123"
that identifies the type and ID of the daemon they come from. Some
statistics can come from more than one type of daemon, so when querying
e.g. an OSD's RocksDB stats, you would probably want to filter
on ``ceph_daemon`` values starting with "osd" to avoid mixing in the
monitors' RocksDB stats, as in the example below.
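A query filtered in that way might look like the following (the metric name
here is purely illustrative; substitute whichever RocksDB counter your
cluster actually reports)::

    ceph_rocksdb_submit_latency{ceph_daemon=~"osd.*"}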
The *cluster* statistics (i.e. those global to the Ceph cluster)
have labels appropriate to what they report on. For example,
metrics relating to pools have a ``pool_id`` label.
Pool and OSD metadata series
----------------------------
Special series are output to enable displaying and querying on
certain metadata fields.
Pools have a ``ceph_pool_metadata`` field like this:
::

    ceph_pool_metadata{pool_id="2",name="cephfs_metadata_a"} 0.0
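Because the metadata series carries a value of 0, it can be added onto other
per-pool series to copy the human-readable ``name`` label across without
changing their values. A sketch, assuming a per-pool statistic such as
``ceph_pool_objects``::

    ceph_pool_objects + on (pool_id) group_left(name) ceph_pool_metadata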
OSDs have a ``ceph_osd_metadata`` field like this:
::

    ceph_osd_metadata{cluster_addr="172.21.9.34:6802/19096",device_class="ssd",id="0",public_addr="172.21.9.34:6801/19096",weight="1.0"} 0.0
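This series can also be queried on its own, for example to count OSDs by
device class::

    count by (device_class) (ceph_osd_metadata)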
Correlating drive statistics with node_exporter
-----------------------------------------------
The prometheus output from Ceph is designed to be used in conjunction
with the generic host monitoring from the Prometheus node_exporter.
To enable correlation of Ceph OSD statistics with node_exporter's
drive statistics, special series are output like this:
::

    ceph_disk_occupation{ceph_daemon="osd.0",device="sdd",instance="myhost",job="ceph"}
To use this to get disk statistics by OSD ID, use the ``and on`` syntax
in your prometheus query like this:
::

    rate(node_disk_bytes_written[30s]) and on (device,instance) ceph_disk_occupation{ceph_daemon="osd.0"}
See the prometheus documentation for more information about constructing
queries.
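The same join can also be wrapped in an aggregation, for example to get a
rough total write rate across every device that backs an OSD (again assuming
the node_exporter metric name used above)::

    sum(rate(node_disk_bytes_written[30s]) and on (device,instance) ceph_disk_occupation)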
Note that for this mechanism to work, Ceph and node_exporter must agree
about the values of the ``instance`` label. See the following section
for guidance on how to set up Prometheus in a way that sets
``instance`` properly.
Configuring Prometheus server
=============================
See the prometheus documentation for full details of how to add
scrape endpoints: the notes
in this section are tips on how to configure Prometheus to capture
the Ceph statistics in the most usefully-labelled form.
This configuration is necessary because Ceph reports metrics
from many hosts and services via a single endpoint, and because some
metrics (such as pool statistics) relate to no physical host at all.
honor_labels
------------
To enable Ceph to output properly-labelled data relating to any host,
use the ``honor_labels`` setting when adding the ceph-mgr endpoints
to your prometheus configuration.
Without this setting, any ``instance`` labels that Ceph outputs, such
as those in ``ceph_disk_occupation`` series, will be overridden
by Prometheus.
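In the scrape configuration this is a single per-job flag; the relevant
excerpt from the full example later in this document looks like this::

    - job_name: 'ceph'
      honor_labels: true
      file_sd_configs:
        - files:
          - ceph_targets.yml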
Ceph instance label
-------------------
By default, Prometheus applies an ``instance`` label that includes
the hostname and port of the endpoint that the series came from. Because
Ceph clusters have multiple manager daemons, this results in an ``instance``
label that changes spuriously when the active manager daemon changes.
Set a custom ``instance`` label in your Prometheus target configuration:
you might wish to set it to the hostname of your first monitor, or something
completely arbitrary like "ceph_cluster".
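If you use static targets rather than file-based service discovery, the same
label can be attached inline in the scrape configuration; a sketch, reusing
the hostname from the example below::

    - job_name: 'ceph'
      honor_labels: true
      static_configs:
        - targets: [ 'senta04.mydomain.com:9283' ]
          labels:
            instance: 'ceph_cluster'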
node_exporter instance labels
-----------------------------
Set your ``instance`` labels to match what appears in Ceph's OSD metadata
in the ``hostname`` field. This is generally the short hostname of the node.
This is only necessary if you want to correlate Ceph stats with host stats,
but you may find it useful to set the labels this way in all cases, in case
you want to add the correlation in the future.
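To see what hostname Ceph has recorded for a given OSD, you can inspect its
metadata directly (``0`` here is just an example OSD ID)::

    ceph osd metadata 0 | grep hostname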
Example configuration
---------------------
This example shows a single node configuration running ceph-mgr and
node_exporter on a server called ``senta04``.
This is just an example: there are other ways to configure prometheus
scrape targets and label rewrite rules.
prometheus.yml
~~~~~~~~~~~~~~
::

    global:
      scrape_interval: 15s
      evaluation_interval: 15s

    scrape_configs:
      - job_name: 'node'
        file_sd_configs:
          - files:
            - node_targets.yml

      - job_name: 'ceph'
        honor_labels: true
        file_sd_configs:
          - files:
            - ceph_targets.yml
ceph_targets.yml
~~~~~~~~~~~~~~~~
::

    [
        {
            "targets": [ "senta04.mydomain.com:9283" ],
            "labels": {
                "instance": "ceph_cluster"
            }
        }
    ]
node_targets.yml
~~~~~~~~~~~~~~~~
::

    [
        {
            "targets": [ "senta04.mydomain.com:9100" ],
            "labels": {
                "instance": "senta04"
            }
        }
    ]
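Once Prometheus has loaded this configuration, you can check that both jobs
are being scraped by evaluating the built-in ``up`` series in the expression
browser, for example::

    up{job=~"ceph|node"}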
Notes
=====
Counters and gauges are exported; currently histograms and long-running
averages are not. It's possible that Ceph's 2-D histograms could be
reduced to two separate 1-D histograms, and that long-running averages
could be exported as Prometheus' Summary type.
Timestamps, as with many Prometheus exporters, are established by
the server's scrape time (Prometheus expects that it is polling the
actual counter process synchronously). It is possible to supply a
timestamp along with the stat report, but the Prometheus team strongly
advises against this. This means that timestamps will be delayed by
an unpredictable amount; it's not clear if this will be problematic,
but it's worth knowing about.