author    Qiaowei Ren <qiaowei.ren@intel.com>  2018-01-04 13:43:33 +0800
committer Qiaowei Ren <qiaowei.ren@intel.com>  2018-01-05 11:59:39 +0800
commit    812ff6ca9fcd3e629e49d4328905f33eee8ca3f5 (patch)
tree      04ece7b4da00d9d2f98093774594f4057ae561d4 /src/ceph/doc/rados/index.rst
parent    15280273faafb77777eab341909a3f495cf248d9 (diff)
initial code repo

This patch creates the initial code repo. For Ceph, the Luminous stable
release will be used as the base code, and subsequent changes and
optimizations for Ceph will be added on top of it. For OpenSDS, any changes
can currently be upstreamed into the original opensds repo
(https://github.com/opensds/opensds), so stor4nfv will clone the opensds
code directly to deploy the stor4nfv environment. The scripts for
deployment based on Ceph and OpenSDS will be put into the 'ci' directory.

Change-Id: I46a32218884c75dda2936337604ff03c554648e4
Signed-off-by: Qiaowei Ren <qiaowei.ren@intel.com>
Diffstat (limited to 'src/ceph/doc/rados/index.rst')
-rw-r--r--  src/ceph/doc/rados/index.rst  76
1 file changed, 76 insertions(+), 0 deletions(-)
diff --git a/src/ceph/doc/rados/index.rst b/src/ceph/doc/rados/index.rst
new file mode 100644
index 0000000..929bb7e
--- /dev/null
+++ b/src/ceph/doc/rados/index.rst
@@ -0,0 +1,76 @@
+======================
+ Ceph Storage Cluster
+======================
+
+The :term:`Ceph Storage Cluster` is the foundation for all Ceph deployments.
+Based upon :abbr:`RADOS (Reliable Autonomic Distributed Object Store)`, Ceph
+Storage Clusters consist of two types of daemons: a :term:`Ceph OSD Daemon`
+(OSD), which stores data as objects on a storage node, and a
+:term:`Ceph Monitor` (MON), which maintains a master copy of the cluster map.
+A Ceph Storage Cluster may contain
+thousands of storage nodes. A minimal system will have at least one
+Ceph Monitor and two Ceph OSD Daemons for data replication.
+
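+A minimal sketch of how a client sees this layout, assuming the librados
+Python bindings (the ``rados`` module) are installed and
+``/etc/ceph/ceph.conf`` points at a reachable monitor (the path is an
+assumption):
+
+.. code-block:: python
+
+   import rados
+
+   # Connecting through a monitor hands the client the current cluster map.
+   cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
+   cluster.connect()
+   try:
+       print("cluster fsid:", cluster.get_fsid())
+       # Aggregate usage across all OSDs in the cluster.
+       stats = cluster.get_cluster_stats()
+       print("kb used:", stats['kb_used'], "objects:", stats['num_objects'])
+   finally:
+       cluster.shutdown()
+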
+The Ceph Filesystem, Ceph Object Storage and Ceph Block Devices read data from
+and write data to the Ceph Storage Cluster.
+
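+Any librados client follows the same read/write path. As a hedged
+illustration (the pool name ``data`` and the object name are hypothetical,
+and the pool must already exist):
+
+.. code-block:: python
+
+   import rados
+
+   cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
+   cluster.connect()
+   try:
+       # An I/O context binds subsequent reads and writes to one pool.
+       ioctx = cluster.open_ioctx('data')   # hypothetical, pre-existing pool
+       try:
+           ioctx.write_full('hello', b'Hello, RADOS')  # store an object
+           print(ioctx.read('hello'))                  # read it back
+       finally:
+           ioctx.close()
+   finally:
+       cluster.shutdown()
+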
+.. raw:: html
+
+ <style type="text/css">div.body h3{margin:5px 0px 0px 0px;}</style>
+ <table cellpadding="10"><colgroup><col width="33%"><col width="33%"><col width="33%"></colgroup><tbody valign="top"><tr><td><h3>Config and Deploy</h3>
+
+Ceph Storage Clusters have a few required settings, but most configuration
+settings have default values. A typical deployment uses a deployment tool
+to define a cluster and bootstrap a monitor. See `Deployment`_ for details
+on ``ceph-deploy``.
+
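+As an illustration of those defaults, a minimal sketch using the librados
+Python bindings: ``mon host`` and ``client_mount_timeout`` are standard
+option names, but the config path and the override value are assumptions.
+
+.. code-block:: python
+
+   import rados
+
+   # Options not set in ceph.conf fall back to their compiled-in defaults.
+   cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
+   print("mon host:", cluster.conf_get('mon host'))
+
+   # Settings can also be overridden per client, before connecting.
+   cluster.conf_set('client_mount_timeout', '30')   # illustrative value
+   cluster.connect()
+   cluster.shutdown()
+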
+.. toctree::
+ :maxdepth: 2
+
+ Configuration <configuration/index>
+ Deployment <deployment/index>
+
+.. raw:: html
+
+ </td><td><h3>Operations</h3>
+
+Once you have deployed a Ceph Storage Cluster, you may begin operating
+your cluster.
+
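+Routine checks can also be scripted. A sketch using the librados Python
+bindings' ``mon_command``, which issues the same request as the
+``ceph osd stat`` CLI (the five-second timeout is an arbitrary choice):
+
+.. code-block:: python
+
+   import json
+   import rados
+
+   cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
+   cluster.connect()
+   try:
+       # Ask the monitors for OSD status as structured JSON.
+       ret, outbuf, outs = cluster.mon_command(
+           json.dumps({'prefix': 'osd stat', 'format': 'json'}), b'', timeout=5)
+       if ret == 0:
+           print(json.loads(outbuf))
+       else:
+           print("monitor returned error:", outs)
+   finally:
+       cluster.shutdown()
+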
+.. toctree::
+ :maxdepth: 2
+
+ Operations <operations/index>
+
+.. toctree::
+ :maxdepth: 1
+
+ Man Pages <man/index>
+
+
+.. toctree::
+ :hidden:
+
+ troubleshooting/index
+
+.. raw:: html
+
+ </td><td><h3>APIs</h3>
+
+Most Ceph deployments use `Ceph Block Devices`_, `Ceph Object Storage`_ and/or the
+`Ceph Filesystem`_. You may also develop applications that talk directly to
+the Ceph Storage Cluster.
+
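+For a taste of that direct path, a hedged sketch against the librados
+Python bindings (the pool and object names are hypothetical, and creating
+a pool requires the corresponding capabilities):
+
+.. code-block:: python
+
+   import rados
+
+   cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
+   cluster.connect()
+   try:
+       print("pools:", cluster.list_pools())
+       if not cluster.pool_exists('app-data'):      # hypothetical pool name
+           cluster.create_pool('app-data')
+       ioctx = cluster.open_ioctx('app-data')
+       try:
+           ioctx.write_full('greeting', b'hello')
+           # Objects can carry extended attributes alongside their data.
+           ioctx.set_xattr('greeting', 'lang', b'en_US')
+       finally:
+           ioctx.close()
+   finally:
+       cluster.shutdown()
+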
+.. toctree::
+ :maxdepth: 2
+
+ APIs <api/index>
+
+.. raw:: html
+
+ </td></tr></tbody></table>
+
+.. _Ceph Block Devices: ../rbd/
+.. _Ceph Filesystem: ../cephfs/
+.. _Ceph Object Storage: ../radosgw/
+.. _Deployment: ../rados/deployment/