===================
 Monitor bootstrap
===================

Terminology:

* ``cluster``: a set of monitors
* ``quorum``: an active set of monitors consisting of a majority of the cluster

In order to initialize a new monitor, it must always be fed:

#. a logical name
#. secret keys
#. a cluster fsid (uuid)

In addition, a monitor needs to know two things:

#. what address to bind to
#. who its peers are (if any)

There is a range of ways to do both.

Logical id
==========

The logical id should be unique across the cluster.  It will be
appended to ``mon.`` to logically describe the monitor in the Ceph
cluster.  For example, if the logical id is ``foo``, the monitor's
name will be ``mon.foo``.

For most users, there is no more than one monitor per host, which
makes the short hostname a logical choice.
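
For instance, with one monitor per host, a sketch using the short
hostname as the logical id (the other arguments are described in the
sections below)::

        ceph-mon --mkfs -i $(hostname -s) --monmap <initial_monmap> --keyring <initial_keyring>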

Secret keys
===========

The ``mon.`` secret key is stored in a ``keyring`` file in the ``mon data``
directory.  It can be generated with a command like::

        ceph-authtool --create-keyring /path/to/keyring --gen-key -n mon.

When creating a new monitor cluster, the keyring should also contain a ``client.admin`` key that can be used
to administer the system::

        ceph-authtool /path/to/keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'

The resulting keyring is fed to ``ceph-mon --mkfs`` with the ``--keyring <keyring>`` command-line argument.
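
Putting these together, a minimal sketch (the keyring path is just a
placeholder)::

        ceph-authtool --create-keyring /tmp/keyring --gen-key -n mon.
        ceph-authtool /tmp/keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'
        ceph-mon --mkfs -i <name> --monmap <initial_monmap> --keyring /tmp/keyring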

Cluster fsid
============

The cluster fsid is a normal uuid, like that generated by the ``uuidgen`` command.  It
can be provided to the monitor in two ways:

#. via the ``--fsid <uuid>`` command-line argument (or config file option)
#. via a monmap provided to the new monitor via the ``--monmap <path>`` command-line argument.
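
For example, a sketch of the first method, using ``uuidgen`` to
generate the fsid::

        FSID=$(uuidgen)
        ceph-mon --mkfs -i <name> --fsid $FSID --keyring <initial_keyring>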

Monitor address
===============

The monitor address can be provided in several ways.

#. via the ``--public-addr <ip[:port]>`` command-line option (or config file option)
#. via the ``--public-network <cidr>`` command-line option (or config file option)
#. via the monmap provided via ``--monmap <path>``, if it includes a monitor with our name
#. via the bootstrap monmap (provided via ``--inject-monmap <path>`` or generated from ``--mon-host <list>``) if it includes a monitor with no name (``noname-<something>``) and an address configured on the local host.
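
As a sketch of the first two methods (the address, network, and port
are placeholders; 6789 is the conventional monitor port)::

        ceph-mon --mkfs -i <name> --fsid <uuid> --keyring <initial_keyring> --public-addr 10.0.0.2:6789
        ceph-mon --mkfs -i <name> --fsid <uuid> --keyring <initial_keyring> --public-network 10.0.0.0/24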

Peers
=====

The monitor peers are provided in several ways:

#. via the initial monmap, provided via ``--monmap <filename>``
#. via the bootstrap monmap generated from ``--mon-host <list>``
#. via the bootstrap monmap generated from ``[mon.*]`` sections with ``mon addr`` in the config file
#. dynamically via the admin socket
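
For example, the config file method (#3 above) might look like this
sketch, with placeholder addresses::

        [mon.foo]
                mon addr = 10.0.0.2:6789
        [mon.bar]
                mon addr = 10.0.0.3:6789
        [mon.baz]
                mon addr = 10.0.0.4:6789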

However, these methods are not completely interchangeable because of
the complexity of creating a new monitor cluster without danger of
races.

Cluster creation
================

There are three basic approaches to creating a cluster:

#. Create a new cluster by specifying the monitor names and addresses ahead of time.
#. Create a new cluster by specifying the monitor names ahead of time, and dynamically setting the addresses as ``ceph-mon`` daemons configure themselves.
#. Create a new cluster by specifying the monitor addresses ahead of time.


Names and addresses
-------------------

Generate a monmap using ``monmaptool`` with the names and addresses of the initial
monitors.  The generated monmap will also include a cluster fsid.  Feed that monmap
to each monitor daemon::

        ceph-mon --mkfs -i <name> --monmap <initial_monmap> --keyring <initial_keyring>

When the daemons start, they will know exactly who they are and who their peers are.
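
Concretely, the monmap generation step might look like this sketch
(names and addresses are placeholders; ``--create`` also generates a
fresh fsid)::

        monmaptool --create --add foo 10.0.0.2:6789 --add bar 10.0.0.3:6789 --add baz 10.0.0.4:6789 /tmp/monmap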


Addresses only
--------------

The initial monitor addresses can be specified with the ``mon host`` configuration value,
either in a config file or as a command-line argument.  This method has the advantage that
a single global config file for the cluster can have a line like::

     mon host = a.foo.com, b.foo.com, c.foo.com

and will also serve to inform any Ceph clients or daemons who the monitors are.

The ``ceph-mon`` daemons will need to be fed the initial keyring and cluster
fsid to initialize themselves::

     ceph-mon --mkfs -i <name> --fsid <uuid> --keyring <initial_keyring>

When the daemons first start up, they will share their names with each other and form a
new cluster.
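
For example, a minimal global config sketch for this method (the fsid
value is a placeholder)::

     [global]
             fsid = <uuid>
             mon host = a.foo.com, b.foo.com, c.foo.com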

Names only
----------

In dynamic "cloud" environments, the cluster creator may not (yet)
know what the addresses of the monitors are going to be.  Instead,
they may want machines to configure and start themselves in parallel
and, as they come up, form a new cluster on their own.  The problem is
that the monitor cluster relies on strict majorities to keep itself
consistent, and in order to "create" a new cluster, it needs to know
what the *initial* set of monitors will be.

This can be done with the ``mon initial members`` config option, which
should list the ids of the initial monitors that are allowed to create
the cluster::

     mon initial members = foo, bar, baz

The monitors can then be initialized by providing the other pieces of
information (the keyring, cluster fsid, and a way of determining
their own address).  For example::

     ceph-mon --mkfs -i <name> --mon-initial-members 'foo,bar,baz' --keyring <initial_keyring> --public-addr <ip>

When these daemons are started, they will know their own address, but
not those of their peers.  They can learn them via the admin socket::

     ceph daemon mon.<id> add_bootstrap_peer_hint <peer ip>

Once they know the addresses of enough peers from the initial member
set (a majority of it), they will be able to create the cluster.
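
Putting it together, a sketch of what the host for ``foo`` might run
(its peers do the same in parallel; addresses are placeholders)::

     ceph-mon --mkfs -i foo --mon-initial-members 'foo,bar,baz' --keyring <initial_keyring> --public-addr 10.0.0.2
     ceph-mon -i foo
     ceph daemon mon.foo add_bootstrap_peer_hint 10.0.0.3
     ceph daemon mon.foo add_bootstrap_peer_hint 10.0.0.4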


Cluster expansion
=================

Cluster expansion is slightly less demanding than creation, because
the creation of the initial quorum is not an issue and there is no
danger of forming separate, independent clusters.

New nodes can be forced to join an existing cluster in two ways:

#. by providing no initial monitor peer addresses, and feeding them dynamically.
#. by specifying the ``mon initial members`` config option to prevent the new nodes from forming a new, independent cluster, and feeding some existing monitors via any available method.

Initially peerless expansion
----------------------------

Create a new monitor and give it no peer addresses other than its own.  For
example::

     ceph-mon --mkfs -i <myid> --fsid <fsid> --keyring <mon secret key> --public-addr <ip>

Once the daemon starts, you can give it one or more peer addresses to join with::

     ceph daemon mon.<id> add_bootstrap_peer_hint <peer ip>

This monitor will never participate in cluster creation; it can only join an existing
cluster.
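
A sketch of the whole sequence, with placeholder values (the daemon
must be running before the admin socket command will work)::

     ceph-mon --mkfs -i <myid> --fsid <fsid> --keyring <mon secret key> --public-addr <ip>
     ceph-mon -i <myid>
     ceph daemon mon.<myid> add_bootstrap_peer_hint <peer ip>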

Expanding with initial members
------------------------------

You can feed the new monitor some peer addresses initially and avoid the risk
of forming a separate cluster by also setting ``mon initial members``.  For example::

     ceph-mon --mkfs -i <myid> --fsid <fsid> --keyring <mon secret key> --public-addr <ip> --mon-host foo,bar,baz

When the daemon is started, ``mon initial members`` must be set via the command line or config file::

     ceph-mon -i <myid> --mon-initial-members foo,bar,baz

to prevent any risk of split-brain.