Diffstat (limited to 'src/ceph/doc/ceph-volume/lvm')
-rw-r--r--  src/ceph/doc/ceph-volume/lvm/activate.rst   88
-rw-r--r--  src/ceph/doc/ceph-volume/lvm/create.rst     24
-rw-r--r--  src/ceph/doc/ceph-volume/lvm/index.rst      28
-rw-r--r--  src/ceph/doc/ceph-volume/lvm/list.rst      173
-rw-r--r--  src/ceph/doc/ceph-volume/lvm/prepare.rst   241
-rw-r--r--  src/ceph/doc/ceph-volume/lvm/scan.rst        9
-rw-r--r--  src/ceph/doc/ceph-volume/lvm/systemd.rst    28
-rw-r--r--  src/ceph/doc/ceph-volume/lvm/zap.rst        19
8 files changed, 0 insertions, 610 deletions
diff --git a/src/ceph/doc/ceph-volume/lvm/activate.rst b/src/ceph/doc/ceph-volume/lvm/activate.rst
deleted file mode 100644
index 956a62a..0000000
--- a/src/ceph/doc/ceph-volume/lvm/activate.rst
+++ /dev/null
@@ -1,88 +0,0 @@
-.. _ceph-volume-lvm-activate:
-
-``activate``
-============
-Once :ref:`ceph-volume-lvm-prepare` is completed, and all the various steps
-it entails are done, the volume is ready to get "activated".
-
-This activation process enables a systemd unit that persists the OSD ID and its
-UUID (also called ``fsid`` in Ceph CLI tools), so that at boot time it can
-understand what OSD is enabled and needs to be mounted.
-
-.. note:: The execution of this call is fully idempotent, and running it
-   multiple times has no side effects.
-
-New OSDs
---------
-To activate newly prepared OSDs both the :term:`OSD id` and :term:`OSD uuid`
-need to be supplied. For example::
-
- ceph-volume lvm activate --bluestore 0 0263644D-0BF1-4D6D-BC34-28BD98AE3BC8
-
-.. note:: The UUID is stored in the ``osd_fsid`` file in the OSD path, which is
- generated when :ref:`ceph-volume-lvm-prepare` is used.
-
-Requiring UUIDs
-^^^^^^^^^^^^^^^
-The :term:`OSD uuid` is required as an extra step to ensure that the right
-OSD is being activated. It is entirely possible that a previous OSD with the
-same id exists, and without the UUID the incorrect one could end up activated.
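-
-If the UUID is not readily available, it can be recovered from the report
-described in :ref:`ceph-volume-lvm-list`. As a minimal sketch (the pattern
-matches the ``pretty`` output shown in that section)::
-
-    ceph-volume lvm list | grep 'osd fsid'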
-
-
-Discovery
----------
-With either existing OSDs or new ones being activated, a *discovery* process is
-performed using :term:`LVM tags` to enable the systemd units.
-
-The systemd unit will capture the :term:`OSD id` and :term:`OSD uuid` and
-persist them. Internally, the activation will enable the unit like::
-
- systemctl enable ceph-volume@$id-$uuid-lvm
-
-For example::
-
- systemctl enable ceph-volume@0-8715BEB4-15C5-49DE-BA6F-401086EC7B41-lvm
-
-This would start the discovery process for the OSD with an id of ``0`` and a UUID of
-``8715BEB4-15C5-49DE-BA6F-401086EC7B41``.
-
-.. note:: For more details on the systemd workflow see :ref:`ceph-volume-lvm-systemd`
-
-The systemd unit will look for the matching OSD device, and by looking at its
-:term:`LVM tags` will proceed to:
-
-#. Mount the device in the corresponding location (by convention this is
-   ``/var/lib/ceph/osd/<cluster name>-<osd id>/``)
-
-#. Ensure that all required devices are ready for that OSD
-
-#. Start the ``ceph-osd@0`` systemd unit
-
-.. note:: The system infers the objectstore type (filestore or bluestore) by
- inspecting the LVM tags applied to the OSD devices
-
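-The tags that this discovery relies on can be inspected manually with stock
-LVM tooling. As a sketch (the volume group and logical volume names below are
-illustrative)::
-
-    lvs -o lv_name,lv_tags ceph-block-vg/osd-block-lv
-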
-Existing OSDs
--------------
-For existing OSDs that have been deployed with different tooling, the only way
-to port them over to the new mechanism is to prepare them again (losing data).
-See :ref:`ceph-volume-lvm-existing-osds` for details on how to proceed.
-
-Summary
--------
-To recap the ``activate`` process for :term:`bluestore`:
-
-#. Require both :term:`OSD id` and :term:`OSD uuid`
-#. Enable the systemd unit with the matching id and uuid
-#. Create the ``tmpfs`` mount at the OSD directory in
-   ``/var/lib/ceph/osd/$cluster-$id/``
-#. Recreate all the files needed with ``ceph-bluestore-tool prime-osd-dir`` by
-   pointing it to the OSD ``block`` device
-#. The systemd unit will ensure all devices are ready and linked
-#. The matching ``ceph-osd`` systemd unit will get started
-
-And for :term:`filestore`:
-
-#. Require both :term:`OSD id` and :term:`OSD uuid`
-#. Enable the systemd unit with the matching id and uuid
-#. The systemd unit will ensure all devices are ready and mounted (if needed)
-#. The matching ``ceph-osd`` systemd unit will get started
diff --git a/src/ceph/doc/ceph-volume/lvm/create.rst b/src/ceph/doc/ceph-volume/lvm/create.rst
deleted file mode 100644
index c90d1f6..0000000
--- a/src/ceph/doc/ceph-volume/lvm/create.rst
+++ /dev/null
@@ -1,24 +0,0 @@
-.. _ceph-volume-lvm-create:
-
-``create``
-===========
-This subcommand wraps the two-step process to provision a new OSD (calling
-``prepare`` first and then ``activate``) into a single one. The reason to
-prefer ``prepare`` and then ``activate`` separately is to gradually introduce
-new OSDs into a cluster, avoiding large amounts of data being rebalanced at
-once.
-
-The single-call process unifies exactly what :ref:`ceph-volume-lvm-prepare` and
-:ref:`ceph-volume-lvm-activate` do, with the convenience of doing it all at
-once.
-
-There is nothing different about the process except that the OSD will become
-``up`` and ``in`` immediately after completion.
-
-The backing objectstore can be specified with:
-
-* :ref:`--filestore <ceph-volume-lvm-prepare_filestore>`
-* :ref:`--bluestore <ceph-volume-lvm-prepare_bluestore>`
-
-All command line flags and options are the same as ``ceph-volume lvm prepare``.
-Please refer to :ref:`ceph-volume-lvm-prepare` for details.
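-
-As a minimal sketch, creating a bluestore OSD from a pre-provisioned logical
-volume in a single call (the volume group and lv names are placeholders)
-would look like::
-
-    ceph-volume lvm create --bluestore --data vg/lv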
diff --git a/src/ceph/doc/ceph-volume/lvm/index.rst b/src/ceph/doc/ceph-volume/lvm/index.rst
deleted file mode 100644
index 9a2191f..0000000
--- a/src/ceph/doc/ceph-volume/lvm/index.rst
+++ /dev/null
@@ -1,28 +0,0 @@
-.. _ceph-volume-lvm:
-
-``lvm``
-=======
-Implements the functionality needed to deploy OSDs from the ``lvm`` subcommand:
-``ceph-volume lvm``
-
-**Command Line Subcommands**
-
-* :ref:`ceph-volume-lvm-prepare`
-
-* :ref:`ceph-volume-lvm-activate`
-
-* :ref:`ceph-volume-lvm-create`
-
-* :ref:`ceph-volume-lvm-list`
-
-.. not yet implemented
-.. * :ref:`ceph-volume-lvm-scan`
-
-**Internal functionality**
-
-There are other aspects of the ``lvm`` subcommand that are internal and not
-exposed to the user. These sections explain how those pieces work together,
-clarifying the workflows of the tool.
-
-:ref:`Systemd Units <ceph-volume-lvm-systemd>` |
-:ref:`lvm <ceph-volume-lvm-api>`
diff --git a/src/ceph/doc/ceph-volume/lvm/list.rst b/src/ceph/doc/ceph-volume/lvm/list.rst
deleted file mode 100644
index 19e0600..0000000
--- a/src/ceph/doc/ceph-volume/lvm/list.rst
+++ /dev/null
@@ -1,173 +0,0 @@
-.. _ceph-volume-lvm-list:
-
-``list``
-========
-This subcommand will list any devices (logical and physical) that may be
-associated with a Ceph cluster, as long as they contain enough metadata to
-allow for that discovery.
-
-Output is grouped by the OSD ID associated with the devices, and unlike
-``ceph-disk`` it does not provide any information for devices that aren't
-associated with Ceph.
-
-Command line options:
-
-* ``--format`` Allows a ``json`` or ``pretty`` value. Defaults to ``pretty``
- which will group the device information in a human-readable format.
-
-Full Reporting
---------------
-When no positional arguments are used, a full report is presented. This means
-that all devices and logical volumes found in the system will be displayed.
-
-Full ``pretty`` reporting for two OSDs, one with an lv as a journal and
-another one with a physical device, may look similar to::
-
- # ceph-volume lvm list
-
-
- ====== osd.1 =======
-
- [journal] /dev/journals/journal1
-
- journal uuid C65n7d-B1gy-cqX3-vZKY-ZoE0-IEYM-HnIJzs
- osd id 1
- cluster fsid ce454d91-d748-4751-a318-ff7f7aa18ffd
- type journal
- osd fsid 661b24f8-e062-482b-8110-826ffe7f13fa
- data uuid SlEgHe-jX1H-QBQk-Sce0-RUls-8KlY-g8HgcZ
- journal device /dev/journals/journal1
- data device /dev/test_group/data-lv2
-
- [data] /dev/test_group/data-lv2
-
- journal uuid C65n7d-B1gy-cqX3-vZKY-ZoE0-IEYM-HnIJzs
- osd id 1
- cluster fsid ce454d91-d748-4751-a318-ff7f7aa18ffd
- type data
- osd fsid 661b24f8-e062-482b-8110-826ffe7f13fa
- data uuid SlEgHe-jX1H-QBQk-Sce0-RUls-8KlY-g8HgcZ
- journal device /dev/journals/journal1
- data device /dev/test_group/data-lv2
-
- ====== osd.0 =======
-
- [data] /dev/test_group/data-lv1
-
- journal uuid cd72bd28-002a-48da-bdf6-d5b993e84f3f
- osd id 0
- cluster fsid ce454d91-d748-4751-a318-ff7f7aa18ffd
- type data
- osd fsid 943949f0-ce37-47ca-a33c-3413d46ee9ec
- data uuid TUpfel-Q5ZT-eFph-bdGW-SiNW-l0ag-f5kh00
- journal device /dev/sdd1
- data device /dev/test_group/data-lv1
-
- [journal] /dev/sdd1
-
- PARTUUID cd72bd28-002a-48da-bdf6-d5b993e84f3f
-
-.. note:: Tags are displayed in a readable format. The ``osd id`` key is stored
- as a ``ceph.osd_id`` tag. For more information on lvm tag conventions
- see :ref:`ceph-volume-lvm-tag-api`
-
-Single Reporting
-----------------
-Single reporting can consume both devices and logical volumes as input
-(positional parameters). For logical volumes, it is required to use the volume
-group name as well as the logical volume name.
-
-For example the ``data-lv2`` logical volume, in the ``test_group`` volume group
-can be listed in the following way::
-
- # ceph-volume lvm list test_group/data-lv2
-
-
- ====== osd.1 =======
-
- [data] /dev/test_group/data-lv2
-
- journal uuid C65n7d-B1gy-cqX3-vZKY-ZoE0-IEYM-HnIJzs
- osd id 1
- cluster fsid ce454d91-d748-4751-a318-ff7f7aa18ffd
- type data
- osd fsid 661b24f8-e062-482b-8110-826ffe7f13fa
- data uuid SlEgHe-jX1H-QBQk-Sce0-RUls-8KlY-g8HgcZ
- journal device /dev/journals/journal1
- data device /dev/test_group/data-lv2
-
-
-.. note:: Tags are displayed in a readable format. The ``osd id`` key is stored
- as a ``ceph.osd_id`` tag. For more information on lvm tag conventions
- see :ref:`ceph-volume-lvm-tag-api`
-
-
-For plain disks, the full path to the device is required. For example, for
-a device like ``/dev/sdd1`` it can look like::
-
-
- # ceph-volume lvm list /dev/sdd1
-
-
- ====== osd.0 =======
-
- [journal] /dev/sdd1
-
- PARTUUID cd72bd28-002a-48da-bdf6-d5b993e84f3f
-
-
-
-``json`` output
----------------
-All output using ``--format=json`` will show everything the system has stored
-as metadata for the devices, including tags.
-
-No changes for readability are done with ``json`` reporting, and all
-information is presented as-is. Full output as well as single devices can be
-listed.
-
-For brevity, this is how a single logical volume would look with ``json``
-output (note how tags aren't modified)::
-
- # ceph-volume lvm list --format=json test_group/data-lv1
- {
- "0": [
- {
- "lv_name": "data-lv1",
- "lv_path": "/dev/test_group/data-lv1",
- "lv_tags": "ceph.cluster_fsid=ce454d91-d748-4751-a318-ff7f7aa18ffd,ceph.data_device=/dev/test_group/data-lv1,ceph.data_uuid=TUpfel-Q5ZT-eFph-bdGW-SiNW-l0ag-f5kh00,ceph.journal_device=/dev/sdd1,ceph.journal_uuid=cd72bd28-002a-48da-bdf6-d5b993e84f3f,ceph.osd_fsid=943949f0-ce37-47ca-a33c-3413d46ee9ec,ceph.osd_id=0,ceph.type=data",
- "lv_uuid": "TUpfel-Q5ZT-eFph-bdGW-SiNW-l0ag-f5kh00",
- "name": "data-lv1",
- "path": "/dev/test_group/data-lv1",
- "tags": {
- "ceph.cluster_fsid": "ce454d91-d748-4751-a318-ff7f7aa18ffd",
- "ceph.data_device": "/dev/test_group/data-lv1",
- "ceph.data_uuid": "TUpfel-Q5ZT-eFph-bdGW-SiNW-l0ag-f5kh00",
- "ceph.journal_device": "/dev/sdd1",
- "ceph.journal_uuid": "cd72bd28-002a-48da-bdf6-d5b993e84f3f",
- "ceph.osd_fsid": "943949f0-ce37-47ca-a33c-3413d46ee9ec",
- "ceph.osd_id": "0",
- "ceph.type": "data"
- },
- "type": "data",
- "vg_name": "test_group"
- }
- ]
- }
-
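-Since nothing is modified, the ``json`` report is suitable for scripting. As
-a sketch, listing just the OSD IDs (assuming ``jq`` is available)::
-
-    ceph-volume lvm list --format=json | jq -r 'keys[]'
-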
-
-Synchronized information
-------------------------
-Before any listing type, the lvm API is queried to ensure that physical
-devices that may be in use haven't changed names. It is possible that
-non-persistent device names like ``/dev/sda1`` could change to ``/dev/sdb1``.
-
-The detection is possible because the ``PARTUUID`` is stored as part of the
-metadata in the logical volume for the data lv. Even in the case of a journal
-that is a physical device, this information is still stored on the data logical
-volume associated with it.
-
-If the name is no longer the same (as reported by ``blkid`` when using the
-``PARTUUID``), the tag will get updated and the report will use the newly
-refreshed information.
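-
-That lookup can be reproduced manually with ``blkid``. As a sketch, reusing
-the ``PARTUUID`` from the earlier examples::
-
-    blkid -t PARTUUID="cd72bd28-002a-48da-bdf6-d5b993e84f3f" -o device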
diff --git a/src/ceph/doc/ceph-volume/lvm/prepare.rst b/src/ceph/doc/ceph-volume/lvm/prepare.rst
deleted file mode 100644
index 27ebb55..0000000
--- a/src/ceph/doc/ceph-volume/lvm/prepare.rst
+++ /dev/null
@@ -1,241 +0,0 @@
-.. _ceph-volume-lvm-prepare:
-
-``prepare``
-===========
-This subcommand allows a :term:`filestore` or :term:`bluestore` setup. It is
-recommended to pre-provision a logical volume before using it with
-``ceph-volume lvm``.
-
-Logical volumes are not altered except for adding extra metadata.
-
-.. note:: This is part of a two step process to deploy an OSD. If looking for
- a single-call way, please see :ref:`ceph-volume-lvm-create`
-
-As part of the process of preparing a volume (or volumes) to work with Ceph,
-the tool will assign a few pieces of metadata using :term:`LVM tags` to help
-identify the volumes later.
-
-:term:`LVM tags` make volumes easy to discover, and help identify them as
-part of a Ceph system and what role they have (journal, filestore, bluestore,
-etc...)
-
-Although :term:`filestore` is initially supported (and is the default), the
-backend can be specified explicitly with:
-
-
-* :ref:`--filestore <ceph-volume-lvm-prepare_filestore>`
-* :ref:`--bluestore <ceph-volume-lvm-prepare_bluestore>`
-
-.. _ceph-volume-lvm-prepare_filestore:
-
-``filestore``
--------------
-This is the OSD backend that allows preparation of logical volumes for
-a :term:`filestore` objectstore OSD.
-
-It can use a logical volume for the OSD data and a partitioned physical device
-or logical volume for the journal. No special preparation is needed for these
-volumes other than following the minimum size requirements for data and
-journal.
-
-The API call looks like::
-
-    ceph-volume lvm prepare --filestore --data data --journal journal
-
-There is flexibility to use a raw device or partition as well for ``--data``
-that will be converted to a logical volume. This is not ideal in all situations
-since ``ceph-volume`` is just going to create a unique volume group and
-a logical volume from that device.
-
-When using logical volumes for ``--data``, the value *must* be a volume group
-name and a logical volume name separated by a ``/``. Since logical volume names
-are not enforced for uniqueness, this prevents using the wrong volume. The
-``--journal`` can be either a logical volume *or* a partition.
-
-When using a partition, it *must* contain a ``PARTUUID`` discoverable by
-``blkid``, so that it can later be identified correctly regardless of the
-device name (or path).
-
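-A quick way to check that a partition exposes a ``PARTUUID`` is to query it
-directly (the device name is illustrative)::
-
-    blkid /dev/sdc1
-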
-When using a partition, this is how it would look for ``/dev/sdc1``::
-
-    ceph-volume lvm prepare --filestore --data volume_group/lv_name --journal /dev/sdc1
-
-For a logical volume, just like for ``--data``, a volume group and logical
-volume name are required::
-
-    ceph-volume lvm prepare --filestore --data volume_group/lv_name --journal volume_group/journal_lv
-
-A generated UUID is used to ask the cluster for a new OSD ID. These two pieces
-of information are crucial for identifying an OSD and will later be used
-throughout the :ref:`ceph-volume-lvm-activate` process.
-
-The OSD data directory is created using the following convention::
-
- /var/lib/ceph/osd/<cluster name>-<osd id>
-
-At this point the data volume is mounted at this location, and the journal
-volume is linked::
-
- ln -s /path/to/journal /var/lib/ceph/osd/<cluster_name>-<osd-id>/journal
-
-The monmap is fetched using the bootstrap key from the OSD::
-
- /usr/bin/ceph --cluster ceph --name client.bootstrap-osd
- --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
- mon getmap -o /var/lib/ceph/osd/<cluster name>-<osd id>/activate.monmap
-
-``ceph-osd`` will be called to populate the OSD directory (which is already
-mounted), reusing all the pieces of information from the initial steps::
-
- ceph-osd --cluster ceph --mkfs --mkkey -i <osd id> \
- --monmap /var/lib/ceph/osd/<cluster name>-<osd id>/activate.monmap --osd-data \
- /var/lib/ceph/osd/<cluster name>-<osd id> --osd-journal /var/lib/ceph/osd/<cluster name>-<osd id>/journal \
- --osd-uuid <osd uuid> --keyring /var/lib/ceph/osd/<cluster name>-<osd id>/keyring \
- --setuser ceph --setgroup ceph
-
-.. _ceph-volume-lvm-existing-osds:
-
-Existing OSDs
--------------
-For existing clusters that want to use this new system and have OSDs that are
-already running, there are a few things to take into account:
-
-.. warning:: this process will forcefully format the data device, destroying
- existing data, if any.
-
-* OSD paths should follow this convention::
-
- /var/lib/ceph/osd/<cluster name>-<osd id>
-
-* Preferably, no other mechanisms to mount the volume should exist, and any
-  that do should be removed (like fstab mount points)
-* There is currently no support for encrypted volumes
-
-The one-time process for an existing OSD, with an ID of 0, and using
-a ``"ceph"`` cluster name would look like::
-
- ceph-volume lvm prepare --filestore --osd-id 0 --osd-fsid E3D291C1-E7BF-4984-9794-B60D9FA139CB
-
-The command line tool will not contact the monitor to generate an OSD ID. It
-will format the LVM device in addition to storing the metadata on it so that
-it can later be started (for a detailed metadata description see
-:ref:`ceph-volume-lvm-tags`).
-
-
-.. _ceph-volume-lvm-prepare_bluestore:
-
-``bluestore``
--------------
-The :term:`bluestore` objectstore is the default for new OSDs. It offers a bit
-more flexibility for devices. Bluestore supports the following configurations:
-
-* A block device, a block.wal, and a block.db device
-* A block device and a block.wal device
-* A block device and a block.db device
-* A single block device
-
-It can accept a whole device (or partition), or a logical volume for
-``block``. If a physical device is provided it will then be turned into
-a logical volume. This allows a simpler approach to using LVM, but at the cost
-of flexibility: there are no options or configurations to change how the LV is
-created.
-
-The ``block`` is specified with the ``--data`` flag, and in its simplest use
-case it looks like::
-
- ceph-volume lvm prepare --bluestore --data vg/lv
-
-A raw device can be specified in the same way::
-
- ceph-volume lvm prepare --bluestore --data /path/to/device
-
-
-If a ``block.db`` or a ``block.wal`` is needed (they are optional for
-bluestore) they can be specified with ``--block.db`` and ``--block.wal``
-accordingly. These can be a physical device (which **must** be a partition)
-or a logical volume.
-
-For both ``block.db`` and ``block.wal``, partitions are not made into logical
-volumes because they can be used as-is. Logical volumes are also allowed.
-
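-As a sketch, a prepare call using a logical volume for ``block``, another one
-for ``block.wal``, and a partition for ``block.db`` (all names below are
-placeholders) would look like::
-
-    ceph-volume lvm prepare --bluestore --data vg/lv \
-        --block.wal vg/wal-lv --block.db /dev/sdx1
-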
-While creating the OSD directory, the process will use a ``tmpfs`` mount to
-place all the files needed for the OSD. These files are initially created by
-``ceph-osd --mkfs`` and are fully ephemeral.
-
-A symlink is always created for the ``block`` device, and optionally for
-``block.db`` and ``block.wal``. For a cluster with a default name, and an OSD
-id of 0, the directory could look like::
-
- # ls -l /var/lib/ceph/osd/ceph-0
- lrwxrwxrwx. 1 ceph ceph 93 Oct 20 13:05 block -> /dev/ceph-be2b6fbd-bcf2-4c51-b35d-a35a162a02f0/osd-block-25cf0a05-2bc6-44ef-9137-79d65bd7ad62
- lrwxrwxrwx. 1 ceph ceph 93 Oct 20 13:05 block.db -> /dev/sda1
- lrwxrwxrwx. 1 ceph ceph 93 Oct 20 13:05 block.wal -> /dev/ceph/osd-wal-0
- -rw-------. 1 ceph ceph 37 Oct 20 13:05 ceph_fsid
- -rw-------. 1 ceph ceph 37 Oct 20 13:05 fsid
- -rw-------. 1 ceph ceph 55 Oct 20 13:05 keyring
- -rw-------. 1 ceph ceph 6 Oct 20 13:05 ready
- -rw-------. 1 ceph ceph 10 Oct 20 13:05 type
- -rw-------. 1 ceph ceph 2 Oct 20 13:05 whoami
-
-In the above case, a device was used for ``block`` so ``ceph-volume`` created
-a volume group and a logical volume using the following convention:
-
-* volume group name: ``ceph-{cluster fsid}``, or if the vg already exists,
-  ``ceph-{random uuid}``
-
-* logical volume name: ``osd-block-{osd_fsid}``
-
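-The resulting names can be confirmed with stock LVM tooling. As a sketch,
-following the convention above (the fsid values reuse earlier examples)::
-
-    # lvs -o vg_name,lv_name
-    VG                                        LV
-    ceph-ce454d91-d748-4751-a318-ff7f7aa18ffd osd-block-661b24f8-e062-482b-8110-826ffe7f13fa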
-
-Storing metadata
-----------------
-The following tags will get applied as part of the preparation process
-regardless of the type of volume (journal or data) or OSD objectstore:
-
-* ``cluster_fsid``
-* ``encrypted``
-* ``osd_fsid``
-* ``osd_id``
-
-For :term:`filestore` these tags will be added:
-
-* ``journal_device``
-* ``journal_uuid``
-
-For :term:`bluestore` these tags will be added:
-
-* ``block_device``
-* ``block_uuid``
-* ``db_device``
-* ``db_uuid``
-* ``wal_device``
-* ``wal_uuid``
-
-.. note:: For the complete lvm tag conventions see :ref:`ceph-volume-lvm-tag-api`
-
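-These are plain LVM tags, so conceptually the equivalent stock LVM call would
-be (the tag value and lv path are illustrative)::
-
-    lvchange --addtag ceph.osd_id=0 /dev/vg/lv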
-
-Summary
--------
-To recap the ``prepare`` process for :term:`bluestore`:
-
-#. Accept a logical volume for block or a raw device (that will get converted
-   to an lv)
-#. Accept partitions or logical volumes for ``block.wal`` or ``block.db``
-#. Generate a UUID for the OSD
-#. Ask the monitor for an OSD ID, reusing the generated UUID
-#. The OSD data directory is created on a ``tmpfs`` mount
-#. ``block``, ``block.wal``, and ``block.db`` are symlinked if defined
-#. The monmap is fetched for activation
-#. The data directory is populated by ``ceph-osd``
-#. The logical volumes are assigned all the Ceph metadata using lvm tags
-
-
-And the ``prepare`` process for :term:`filestore`:
-
-#. Accept only logical volumes for data and journal (both required)
-#. Generate a UUID for the OSD
-#. Ask the monitor for an OSD ID, reusing the generated UUID
-#. The OSD data directory is created and the data volume mounted
-#. The journal is symlinked from the data volume to the journal location
-#. The monmap is fetched for activation
-#. Devices are mounted and the data directory is populated by ``ceph-osd``
-#. The data and journal volumes are assigned all the Ceph metadata using lvm
-   tags
diff --git a/src/ceph/doc/ceph-volume/lvm/scan.rst b/src/ceph/doc/ceph-volume/lvm/scan.rst
deleted file mode 100644
index 96d2719..0000000
--- a/src/ceph/doc/ceph-volume/lvm/scan.rst
+++ /dev/null
@@ -1,9 +0,0 @@
-``scan``
-========
-This sub-command will allow discovering Ceph volumes previously set up by the
-tool by looking into the system's logical volumes and their tags.
-
-As part of the :ref:`ceph-volume-lvm-prepare` process, the logical volumes
-are assigned a few tags with important pieces of information.
-
-.. note:: This sub-command is not yet implemented
diff --git a/src/ceph/doc/ceph-volume/lvm/systemd.rst b/src/ceph/doc/ceph-volume/lvm/systemd.rst
deleted file mode 100644
index 30260de..0000000
--- a/src/ceph/doc/ceph-volume/lvm/systemd.rst
+++ /dev/null
@@ -1,28 +0,0 @@
-.. _ceph-volume-lvm-systemd:
-
-systemd
-=======
-Upon startup, the systemd unit will identify the logical volume using
-:term:`LVM tags`, finding a matching ID and later ensuring it is the right
-one with the :term:`OSD uuid`.
-
-After identifying the correct volume it will then proceed to mount it by using
-the OSD destination conventions, that is::
-
- /var/lib/ceph/osd/<cluster name>-<osd id>
-
-For our example OSD with an id of ``0``, that means the identified device will
-be mounted at::
-
-
- /var/lib/ceph/osd/ceph-0
-
-
-Once that process is complete, a call will be made to start the OSD::
-
- systemctl start ceph-osd@0
-
-The systemd portion of this process is handled by the ``ceph-volume lvm
-trigger`` sub-command, which is only in charge of parsing the metadata coming
-from systemd at startup, and then dispatching to ``ceph-volume lvm activate``,
-which proceeds with activation.
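-
-As a sketch, the trigger call carries the same id and UUID used in the
-:ref:`ceph-volume-lvm-activate` examples, similar to::
-
-    ceph-volume lvm trigger 0-8715BEB4-15C5-49DE-BA6F-401086EC7B41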
diff --git a/src/ceph/doc/ceph-volume/lvm/zap.rst b/src/ceph/doc/ceph-volume/lvm/zap.rst
deleted file mode 100644
index 8d42a90..0000000
--- a/src/ceph/doc/ceph-volume/lvm/zap.rst
+++ /dev/null
@@ -1,19 +0,0 @@
-.. _ceph-volume-lvm-zap:
-
-``zap``
-=======
-
-This subcommand is used to zap lvs or partitions that have been used by Ceph
-OSDs so that they may be reused. If given a path to a logical volume it must
-be in the format ``vg/lv``. Any filesystems present on the given lv or
-partition will be removed and all data will be purged.
-
-.. note:: The lv or partition itself will be kept intact.
-
-Zapping a logical volume::
-
- ceph-volume lvm zap {vg name/lv name}
-
-Zapping a partition::
-
- ceph-volume lvm zap /dev/sdc1