author     Qiaowei Ren <qiaowei.ren@intel.com>  2018-01-04 13:43:33 +0800
committer  Qiaowei Ren <qiaowei.ren@intel.com>  2018-01-05 11:59:39 +0800
commit     812ff6ca9fcd3e629e49d4328905f33eee8ca3f5 (patch)
tree       04ece7b4da00d9d2f98093774594f4057ae561d4 /src/ceph/doc/rados/deployment/ceph-deploy-osd.rst
parent     15280273faafb77777eab341909a3f495cf248d9 (diff)
initial code repo
This patch creates initial code repo. For ceph, luminous stable release will be
used for base code, and next changes and optimization for ceph will be added to
it. For opensds, currently any changes can be upstreamed into original opensds
repo (https://github.com/opensds/opensds), and so stor4nfv will directly clone
opensds code to deploy stor4nfv environment. And the scripts for deployment
based on ceph and opensds will be put into 'ci' directory.

Change-Id: I46a32218884c75dda2936337604ff03c554648e4
Signed-off-by: Qiaowei Ren <qiaowei.ren@intel.com>
Diffstat (limited to 'src/ceph/doc/rados/deployment/ceph-deploy-osd.rst')
-rw-r--r--  src/ceph/doc/rados/deployment/ceph-deploy-osd.rst  121
1 file changed, 121 insertions, 0 deletions
diff --git a/src/ceph/doc/rados/deployment/ceph-deploy-osd.rst b/src/ceph/doc/rados/deployment/ceph-deploy-osd.rst
new file mode 100644
index 0000000..a4eb4d1
--- /dev/null
+++ b/src/ceph/doc/rados/deployment/ceph-deploy-osd.rst
@@ -0,0 +1,121 @@
+=================
+ Add/Remove OSDs
+=================
+
+Adding and removing Ceph OSD Daemons to your cluster may involve a few more
+steps than adding and removing other Ceph daemons. Ceph OSD Daemons write
+data to the disk and to journals, so you need to provide a disk for the
+OSD and a path to the journal partition (this is the most common
+configuration, but you may configure your system to your own needs).
+
+In Ceph v0.60 and later releases, Ceph supports on-disk encryption via ``dm-crypt``.
+You may specify the ``--dmcrypt`` argument when preparing an OSD to tell
+``ceph-deploy`` that you want to use encryption. You may also specify the
+``--dmcrypt-key-dir`` argument to specify the location of ``dm-crypt``
+encryption keys.
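+
+For example, the following is a minimal sketch of preparing an encrypted OSD;
+the node name ``osdserver1`` and the device names are illustrative placeholders. ::
+
+	ceph-deploy osd prepare --dmcrypt --dmcrypt-key-dir /etc/ceph/dmcrypt-keys osdserver1:sdb:/dev/ssd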
+
+You should test various drive configurations to gauge their throughput before
+building out a large cluster. See `Data Storage`_ for additional details.
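+
+For example, one rough way to measure a drive's sequential write throughput is
+a direct ``dd`` write. The sketch below assumes ``/dev/sdb`` is a spare, unused
+disk; it will overwrite data on that device. ::
+
+	sudo dd if=/dev/zero of=/dev/sdb bs=1G count=1 oflag=direct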
+
+
+List Disks
+==========
+
+To list the disks on a node, execute the following command::
+
+ ceph-deploy disk list {node-name [node-name]...}
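+
+For example, to list the disks on a hypothetical node named ``osdserver1``::
+
+	ceph-deploy disk list osdserver1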
+
+
+Zap Disks
+=========
+
+To zap a disk (delete its partition table) in preparation for use with Ceph,
+execute the following::
+
+ ceph-deploy disk zap {osd-server-name}:{disk-name}
+ ceph-deploy disk zap osdserver1:sdb
+
+.. important:: This will delete all data.
+
+
+Prepare OSDs
+============
+
+Once you create a cluster, install Ceph packages, and gather keys, you
+may prepare the OSDs and deploy them to the OSD node(s). If you need to
+identify a disk or zap it prior to preparing it for use as an OSD,
+see `List Disks`_ and `Zap Disks`_. ::
+
+ ceph-deploy osd prepare {node-name}:{data-disk}[:{journal-disk}]
+ ceph-deploy osd prepare osdserver1:sdb:/dev/ssd
+ ceph-deploy osd prepare osdserver1:sdc:/dev/ssd
+
+The ``prepare`` command only prepares the OSD. On most operating
+systems, the ``activate`` phase will automatically run when the
+partitions are created on the disk (using Ceph ``udev`` rules). If not,
+use the ``activate`` command. See `Activate OSDs`_ for
+details.
+
+The foregoing example assumes a disk dedicated to one Ceph OSD Daemon, and
+a path to an SSD journal partition. We recommend storing the journal on
+a separate drive to maximize throughput. You may dedicate a single drive
+for the journal too (which may be expensive) or place the journal on the
+same disk as the OSD (not recommended as it impairs performance). In the
+foregoing example we store the journal on a partitioned solid state drive.
+
+You can use the ``--fs-type`` or ``--bluestore`` option to choose which file
+system or object store backend to use on the OSD drive. (For more information,
+run ``ceph-deploy osd prepare --help``.)
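+
+For example, the following sketches prepare one OSD with an XFS file system and
+another with the BlueStore backend; the node and device names are placeholders. ::
+
+	ceph-deploy osd prepare --fs-type xfs osdserver1:sdb:/dev/ssd
+	ceph-deploy osd prepare --bluestore osdserver1:sdc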
+
+.. note:: When running multiple Ceph OSD daemons on a single node, and
+ sharing a partitioned journal with each OSD daemon, you should consider
+ the entire node the minimum failure domain for CRUSH purposes, because
+ if the SSD drive fails, all of the Ceph OSD daemons that journal to it
+ will fail too.
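+
+If you are not sure which OSD daemons share a node (and thus a failure domain),
+a quick, read-only check is to run ``ceph osd tree`` from a node with an admin
+keyring; it groups OSDs by host. This is only a verification step, not part of
+the deployment itself. ::
+
+	ceph osd tree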
+
+
+Activate OSDs
+=============
+
+Once you prepare an OSD, you may activate it with the following command. ::
+
+ ceph-deploy osd activate {node-name}:{data-disk-partition}[:{journal-disk-partition}]
+ ceph-deploy osd activate osdserver1:/dev/sdb1:/dev/ssd1
+ ceph-deploy osd activate osdserver1:/dev/sdc1:/dev/ssd2
+
+The ``activate`` command will cause your OSD to come ``up`` and be placed
+``in`` the cluster. The ``activate`` command uses the path to the partition
+created when running the ``prepare`` command.
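+
+As a quick check, you can confirm that the new OSDs are reported ``up`` and
+``in`` from any node with an admin keyring, for example::
+
+	ceph osd stat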
+
+
+Create OSDs
+===========
+
+You may prepare OSDs, deploy them to the OSD node(s) and activate them in one
+step with the ``create`` command. The ``create`` command is a convenience method
+for executing the ``prepare`` and ``activate`` commands sequentially. ::
+
+ ceph-deploy osd create {node-name}:{disk}[:{path/to/journal}]
+ ceph-deploy osd create osdserver1:sdb:/dev/ssd1
+
+.. List OSDs
+.. =========
+
+.. To list the OSDs deployed on a node(s), execute the following command::
+
+.. ceph-deploy osd list {node-name}
+
+
+Destroy OSDs
+============
+
+.. note:: Coming soon. See `Remove OSDs`_ for manual procedures.
+
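+Until a ``destroy`` subcommand is available, removal is a manual procedure (see
+`Remove OSDs`_ for the full steps). The following is a condensed sketch, assuming
+the OSD to remove is ``osd.1`` and its host uses ``systemd``::
+
+	ceph osd out 1                    # wait for rebalancing to finish
+	sudo systemctl stop ceph-osd@1    # run on the OSD's host
+	ceph osd crush remove osd.1
+	ceph auth del osd.1
+	ceph osd rm 1
+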
+.. To destroy an OSD, execute the following command::
+
+.. ceph-deploy osd destroy {node-name}:{path-to-disk}[:{path/to/journal}]
+
+.. Destroying an OSD will take it ``down`` and ``out`` of the cluster.
+
+.. _Data Storage: ../../../start/hardware-recommendations#data-storage
+.. _Remove OSDs: ../../operations/add-or-rm-osds#removing-osds-manual