.. _ceph-volume-lvm-list:

``list``
========
This subcommand will list any devices (logical and physical) that may be
associated with a Ceph cluster, as long as they contain enough metadata to
allow for that discovery.

Output is grouped by the OSD ID associated with the devices, and unlike
``ceph-disk`` it does not provide any information for devices that aren't
associated with Ceph.

Command line options:

* ``--format`` Accepts a ``json`` or ``pretty`` value. Defaults to ``pretty``,
  which groups the device information in a human-readable format.

Full Reporting
--------------
When no positional arguments are used, a full report is presented. This
means that all Ceph-associated devices and logical volumes found on the system
will be displayed.

Full ``pretty`` reporting for two OSDs, one using a logical volume as a journal
and the other a physical device, may look similar to::

    # ceph-volume lvm list


    ====== osd.1 =======

      [journal]    /dev/journals/journal1

          journal uuid              C65n7d-B1gy-cqX3-vZKY-ZoE0-IEYM-HnIJzs
          osd id                    1
          cluster fsid              ce454d91-d748-4751-a318-ff7f7aa18ffd
          type                      journal
          osd fsid                  661b24f8-e062-482b-8110-826ffe7f13fa
          data uuid                 SlEgHe-jX1H-QBQk-Sce0-RUls-8KlY-g8HgcZ
          journal device            /dev/journals/journal1
          data device               /dev/test_group/data-lv2

      [data]    /dev/test_group/data-lv2

          journal uuid              C65n7d-B1gy-cqX3-vZKY-ZoE0-IEYM-HnIJzs
          osd id                    1
          cluster fsid              ce454d91-d748-4751-a318-ff7f7aa18ffd
          type                      data
          osd fsid                  661b24f8-e062-482b-8110-826ffe7f13fa
          data uuid                 SlEgHe-jX1H-QBQk-Sce0-RUls-8KlY-g8HgcZ
          journal device            /dev/journals/journal1
          data device               /dev/test_group/data-lv2

    ====== osd.0 =======

      [data]    /dev/test_group/data-lv1

          journal uuid              cd72bd28-002a-48da-bdf6-d5b993e84f3f
          osd id                    0
          cluster fsid              ce454d91-d748-4751-a318-ff7f7aa18ffd
          type                      data
          osd fsid                  943949f0-ce37-47ca-a33c-3413d46ee9ec
          data uuid                 TUpfel-Q5ZT-eFph-bdGW-SiNW-l0ag-f5kh00
          journal device            /dev/sdd1
          data device               /dev/test_group/data-lv1

      [journal]    /dev/sdd1

          PARTUUID                  cd72bd28-002a-48da-bdf6-d5b993e84f3f

.. note:: Tags are displayed in a readable format. The ``osd id`` key is stored
          as a ``ceph.osd_id`` tag. For more information on lvm tag conventions
          see :ref:`ceph-volume-lvm-tag-api`.
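
The mapping from stored lvm tags to the readable keys shown above can be
sketched in Python. The tag string format (``key=value`` pairs joined by
commas) is taken from the ``lv_tags`` field of the ``json`` report; the
``parse_lv_tags`` helper is illustrative, not part of ceph-volume's actual
implementation:

```python
def parse_lv_tags(lv_tags):
    """Turn 'ceph.k1=v1,ceph.k2=v2' into {'ceph.k1': 'v1', ...}.

    Hypothetical helper: mirrors how the comma-separated ``lv_tags``
    string maps to the ``tags`` dictionary in the json report.
    """
    if not lv_tags:
        return {}
    return dict(pair.split('=', 1) for pair in lv_tags.split(','))


tags = parse_lv_tags(
    "ceph.osd_id=1,ceph.type=journal,"
    "ceph.journal_device=/dev/journals/journal1"
)
# The ``osd id`` line in the pretty report comes from the
# ``ceph.osd_id`` tag:
print(tags["ceph.osd_id"])
```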

Single Reporting
----------------
Single reporting can consume both devices and logical volumes as input
(positional parameters). For logical volumes, both the volume group name and
the logical volume name are required.

For example, the ``data-lv2`` logical volume in the ``test_group`` volume group
can be listed in the following way::

    # ceph-volume lvm list test_group/data-lv2


    ====== osd.1 =======

      [data]    /dev/test_group/data-lv2

          journal uuid              C65n7d-B1gy-cqX3-vZKY-ZoE0-IEYM-HnIJzs
          osd id                    1
          cluster fsid              ce454d91-d748-4751-a318-ff7f7aa18ffd
          type                      data
          osd fsid                  661b24f8-e062-482b-8110-826ffe7f13fa
          data uuid                 SlEgHe-jX1H-QBQk-Sce0-RUls-8KlY-g8HgcZ
          journal device            /dev/journals/journal1
          data device               /dev/test_group/data-lv2


For plain disks, the full path to the device is required. For example, the
output for a device like ``/dev/sdd1`` may look like::


    # ceph-volume lvm list /dev/sdd1


    ====== osd.0 =======

      [journal]    /dev/sdd1

          PARTUUID                  cd72bd28-002a-48da-bdf6-d5b993e84f3f



``json`` output
---------------
All output using ``--format=json`` will show everything the system has stored
as metadata for the devices, including tags.

No readability changes are made to the ``json`` report; all information is
presented as-is. Both the full report and single devices can be listed.

For brevity, this is how a single logical volume would look with ``json``
output (note how tags aren't modified)::

    # ceph-volume lvm list --format=json test_group/data-lv1
    {
        "0": [
            {
                "lv_name": "data-lv1",
                "lv_path": "/dev/test_group/data-lv1",
                "lv_tags": "ceph.cluster_fsid=ce454d91-d748-4751-a318-ff7f7aa18ffd,ceph.data_device=/dev/test_group/data-lv1,ceph.data_uuid=TUpfel-Q5ZT-eFph-bdGW-SiNW-l0ag-f5kh00,ceph.journal_device=/dev/sdd1,ceph.journal_uuid=cd72bd28-002a-48da-bdf6-d5b993e84f3f,ceph.osd_fsid=943949f0-ce37-47ca-a33c-3413d46ee9ec,ceph.osd_id=0,ceph.type=data",
                "lv_uuid": "TUpfel-Q5ZT-eFph-bdGW-SiNW-l0ag-f5kh00",
                "name": "data-lv1",
                "path": "/dev/test_group/data-lv1",
                "tags": {
                    "ceph.cluster_fsid": "ce454d91-d748-4751-a318-ff7f7aa18ffd",
                    "ceph.data_device": "/dev/test_group/data-lv1",
                    "ceph.data_uuid": "TUpfel-Q5ZT-eFph-bdGW-SiNW-l0ag-f5kh00",
                    "ceph.journal_device": "/dev/sdd1",
                    "ceph.journal_uuid": "cd72bd28-002a-48da-bdf6-d5b993e84f3f",
                    "ceph.osd_fsid": "943949f0-ce37-47ca-a33c-3413d46ee9ec",
                    "ceph.osd_id": "0",
                    "ceph.type": "data"
                },
                "type": "data",
                "vg_name": "test_group"
            }
        ]
    }
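
Because the report is plain JSON, it can also be consumed programmatically.
A minimal sketch, using a trimmed copy of the report above (the iteration
logic is illustrative, not part of ceph-volume itself):

```python
import json

# A trimmed version of the ``--format=json`` report shown above.
report = json.loads("""
{
    "0": [
        {
            "lv_path": "/dev/test_group/data-lv1",
            "type": "data",
            "tags": {
                "ceph.osd_id": "0",
                "ceph.journal_device": "/dev/sdd1"
            }
        }
    ]
}
""")

# The top-level keys are OSD IDs; each maps to a list of devices.
for osd_id, devices in report.items():
    for dev in devices:
        if dev["type"] == "data":
            print("osd.%s data device: %s" % (osd_id, dev["lv_path"]))
```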


Synchronized information
------------------------
Before any type of listing is produced, the lvm API is queried to ensure that
physical devices that may be in use haven't changed names. Non-persistent
device names like ``/dev/sda1`` can change to ``/dev/sdb1`` across reboots.

This detection is possible because the ``PARTUUID`` is stored as part of the
metadata in the data logical volume. Even when the journal is a physical
device, this information is still stored on the data logical volume associated
with it.

If the device name is no longer the same (as reported by ``blkid`` when
queried with the ``PARTUUID``), the tag is updated and the report uses the
newly refreshed information.
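
The refresh step described above can be sketched as follows.
``refresh_journal_device`` is a hypothetical name, and a plain dictionary
stands in for the real ``blkid`` query that resolves a ``PARTUUID`` to the
current device path:

```python
def refresh_journal_device(tags, blkid_lookup):
    """Refresh the stored journal device path from its stable PARTUUID.

    Sketch of the logic described above: the PARTUUID in the lv tags is
    the stable identifier; the device path is refreshed from whatever
    blkid currently reports. ``blkid_lookup`` is a stand-in for a real
    blkid query (e.g. ``blkid -t PARTUUID=<uuid> -o device``).
    """
    current = blkid_lookup(tags["ceph.journal_uuid"])
    if current and current != tags["ceph.journal_device"]:
        # Return an updated copy; the real tool would rewrite the lvm tag.
        return dict(tags, **{"ceph.journal_device": current})
    return tags


tags = {
    "ceph.journal_uuid": "cd72bd28-002a-48da-bdf6-d5b993e84f3f",
    "ceph.journal_device": "/dev/sdd1",
}
# Pretend the kernel renamed the device after a reboot:
lookup = {"cd72bd28-002a-48da-bdf6-d5b993e84f3f": "/dev/sdb1"}
updated = refresh_journal_device(tags, lookup.get)
print(updated["ceph.journal_device"])
```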