author    Qiaowei Ren <qiaowei.ren@intel.com>  2018-01-04 13:43:33 +0800
committer Qiaowei Ren <qiaowei.ren@intel.com>  2018-01-05 11:59:39 +0800
commit    812ff6ca9fcd3e629e49d4328905f33eee8ca3f5 (patch)
tree      04ece7b4da00d9d2f98093774594f4057ae561d4 /src/ceph/doc/rados/operations/pg-states.rst
parent    15280273faafb77777eab341909a3f495cf248d9 (diff)
initial code repo
This patch creates the initial code repo. For Ceph, the Luminous stable
release will be used as the base code, and subsequent Ceph changes and
optimizations will be added on top of it. For OpenSDS, any changes can
currently be upstreamed into the original opensds repo
(https://github.com/opensds/opensds), so stor4nfv will directly clone the
opensds code to deploy the stor4nfv environment. The deployment scripts based
on ceph and opensds will be placed in the 'ci' directory.

Change-Id: I46a32218884c75dda2936337604ff03c554648e4
Signed-off-by: Qiaowei Ren <qiaowei.ren@intel.com>
Diffstat (limited to 'src/ceph/doc/rados/operations/pg-states.rst')
-rw-r--r--  src/ceph/doc/rados/operations/pg-states.rst | 80
1 file changed, 80 insertions(+), 0 deletions(-)
diff --git a/src/ceph/doc/rados/operations/pg-states.rst b/src/ceph/doc/rados/operations/pg-states.rst
new file mode 100644
index 0000000..0fbd3dc
--- /dev/null
+++ b/src/ceph/doc/rados/operations/pg-states.rst
@@ -0,0 +1,80 @@
+========================
+ Placement Group States
+========================
+
+When checking a cluster's status (e.g., running ``ceph -w`` or ``ceph -s``),
+Ceph will report on the status of the placement groups. A placement group has
+one or more states. The optimum state for placement groups in the placement group
+map is ``active + clean``.
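+
+For example, a one-line summary of PG states can be obtained with ``ceph pg
+stat`` (the output below is hypothetical; its exact format varies by release)::
+
+    ceph pg stat
+    # 128 pgs: 128 active+clean; 1.5 GB data, 4.2 GB used, 25 GB avail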
+
+*Creating*
+ Ceph is still creating the placement group.
+
+*Active*
+ Ceph will process requests to the placement group.
+
+*Clean*
+ Ceph replicated all objects in the placement group the correct number of times.
+
+*Down*
+ A replica with necessary data is down, so the placement group is offline.
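+
+ To investigate why a PG is down, one can query it (``1.4`` is a hypothetical
+ PG id; look for the ``recovery_state`` section in the output)::
+
+     ceph pg 1.4 query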
+
+*Scrubbing*
+ Ceph is checking the placement group for inconsistencies.
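+
+ A scrub can also be initiated manually on a hypothetical PG ``2.5``::
+
+     ceph pg scrub 2.5          # light scrub: compare object metadata
+     ceph pg deep-scrub 2.5     # deep scrub: also read and checksum data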
+
+*Degraded*
+ Ceph has not replicated some objects in the placement group the correct number of times yet.
+
+*Inconsistent*
+ Ceph detects inconsistencies in one or more replicas of an object in the placement group
+ (e.g. objects are the wrong size, objects are missing from one replica *after* recovery finished, etc.).
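+
+ The inconsistencies found by a deep scrub can be listed with ``rados``
+ (``2.5`` is again a hypothetical PG id)::
+
+     rados list-inconsistent-obj 2.5 --format=json-pretty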
+
+*Peering*
+ The placement group is undergoing the peering process.
+
+*Repair*
+ Ceph is checking the placement group and repairing any inconsistencies it finds (if possible).
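+
+ A repair is normally requested explicitly by the operator, for example::
+
+     ceph pg repair 2.5     # 2.5 is a hypothetical PG id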
+
+*Recovering*
+ Ceph is migrating/synchronizing objects and their replicas.
+
+*Forced-Recovery*
+ High recovery priority of that PG is enforced by the user.
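+
+ For example, recovery of specific PGs can be prioritized (hypothetical ids)::
+
+     ceph pg force-recovery 2.5 2.6
+     ceph pg cancel-force-recovery 2.5 2.6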
+
+*Backfill*
+ Ceph is scanning and synchronizing the entire contents of a placement group
+ instead of inferring what contents need to be synchronized from the logs of
+ recent operations. *Backfill* is a special case of recovery.
+
+*Forced-Backfill*
+ High backfill priority of that PG is enforced by the user.
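+
+ For example (hypothetical id)::
+
+     ceph pg force-backfill 2.5
+     ceph pg cancel-force-backfill 2.5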
+
+*Wait-backfill*
+ The placement group is waiting in line to start backfill.
+
+*Backfill-toofull*
+ A backfill operation is waiting because the destination OSD is over its
+ full ratio.
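+
+ The current ratios can be inspected, and the backfill threshold raised if
+ appropriate (0.90 is only an illustrative value)::
+
+     ceph osd dump | grep ratio
+     ceph osd set-backfillfull-ratio 0.90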
+
+*Incomplete*
+ Ceph detects that a placement group is missing information about
+ writes that may have occurred, or does not have any healthy
+ copies. If you see this state, try to start any failed OSDs that may
+ contain the needed information. In the case of an erasure coded pool,
+ temporarily reducing ``min_size`` may allow recovery.
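+
+ A sketch of the ``min_size`` workaround mentioned above, assuming a
+ hypothetical erasure coded pool ``ecpool`` (restore the original value once
+ recovery completes)::
+
+     ceph osd pool set ecpool min_size 4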
+
+*Stale*
+ The placement group is in an unknown state - the monitors have not received
+ an update for it since the placement group mapping changed.
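+
+ Stale placement groups can be listed with::
+
+     ceph pg dump_stuck stale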
+
+*Remapped*
+ The placement group is temporarily mapped to a different set of OSDs from what
+ CRUSH specified.
+
+*Undersized*
+ The placement group has fewer copies than the configured pool replication level.
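+
+ Undersized placement groups can likewise be listed with::
+
+     ceph pg dump_stuck undersized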
+
+*Peered*
+ The placement group has peered, but cannot serve client IO because it does
+ not have enough copies to reach the pool's configured ``min_size``. Recovery
+ may occur in this state, so the PG may heal up to ``min_size`` eventually.