From 7da45d65be36d36b880cc55c5036e96c24b53f00 Mon Sep 17 00:00:00 2001
From: Qiaowei Ren
Date: Thu, 1 Mar 2018 14:38:11 +0800
Subject: remove ceph code

This patch removes the initial Ceph code due to a license issue.

Change-Id: I092d44f601cdf34aed92300fe13214925563081c
Signed-off-by: Qiaowei Ren
---
 src/ceph/doc/start/intro.rst | 87 --------------------------------------------
 1 file changed, 87 deletions(-)
 delete mode 100644 src/ceph/doc/start/intro.rst

diff --git a/src/ceph/doc/start/intro.rst b/src/ceph/doc/start/intro.rst
deleted file mode 100644
index 95b51dd..0000000
--- a/src/ceph/doc/start/intro.rst
+++ /dev/null
@@ -1,87 +0,0 @@
-===============
- Intro to Ceph
-===============
-
-Whether you want to provide :term:`Ceph Object Storage` and/or
-:term:`Ceph Block Device` services to :term:`Cloud Platforms`, deploy
-a :term:`Ceph Filesystem`, or use Ceph for another purpose, all
-:term:`Ceph Storage Cluster` deployments begin with setting up each
-:term:`Ceph Node`, your network, and the Ceph Storage Cluster. A Ceph
-Storage Cluster requires at least one Ceph Monitor, Ceph Manager, and
-Ceph OSD (Object Storage Daemon). The Ceph Metadata Server is also
-required when running Ceph Filesystem clients.
-
-.. ditaa:: +---------------+ +------------+ +------------+ +---------------+
-           |      OSDs     | |  Monitors  | |  Managers  | |      MDSs     |
-           +---------------+ +------------+ +------------+ +---------------+
-
-- **Monitors**: A :term:`Ceph Monitor` (``ceph-mon``) maintains maps
-  of the cluster state, including the monitor map, manager map, the
-  OSD map, and the CRUSH map. These maps are critical cluster state
-  required for Ceph daemons to coordinate with each other. Monitors
-  are also responsible for managing authentication between daemons and
-  clients. At least three monitors are normally required for
-  redundancy and high availability.
-
-- **Managers**: A :term:`Ceph Manager` daemon (``ceph-mgr``) is
-  responsible for keeping track of runtime metrics and the current
-  state of the Ceph cluster, including storage utilization, current
-  performance metrics, and system load. The Ceph Manager daemons also
-  host python-based plugins to manage and expose Ceph cluster
-  information, including a web-based `dashboard`_ and `REST API`_. At
-  least two managers are normally required for high availability.
-
-- **Ceph OSDs**: A :term:`Ceph OSD` (object storage daemon,
-  ``ceph-osd``) stores data, handles data replication, recovery, and
-  rebalancing, and provides some monitoring information to Ceph
-  Monitors and Managers by checking other Ceph OSD Daemons for a
-  heartbeat. At least three Ceph OSDs are normally required for
-  redundancy and high availability.
-
-- **MDSs**: A :term:`Ceph Metadata Server` (MDS, ``ceph-mds``) stores
-  metadata on behalf of the :term:`Ceph Filesystem` (i.e., Ceph Block
-  Devices and Ceph Object Storage do not use MDS). Ceph Metadata
-  Servers allow POSIX file system users to execute basic commands (like
-  ``ls``, ``find``, etc.) without placing an enormous burden on the
-  Ceph Storage Cluster.
-
-Ceph stores data as objects within logical storage pools. Using the
-:term:`CRUSH` algorithm, Ceph calculates which placement group should
-contain the object, and further calculates which Ceph OSD Daemon
-should store the placement group. The CRUSH algorithm enables the
-Ceph Storage Cluster to scale, rebalance, and recover dynamically.
-
-.. _dashboard: ../../mgr/dashboard
-.. _REST API: ../../mgr/restful
-
-.. raw:: html
-
-	<h3>Recommendations</h3>
-
-To begin using Ceph in production, you should review our hardware
-recommendations and operating system recommendations.
-
-.. toctree::
-   :maxdepth: 2
-
-	Hardware Recommendations <hardware-recommendations>
-	OS Recommendations <os-recommendations>
-
-
-.. raw:: html
-
-	<h3>Get Involved</h3>
-
-You can avail yourself of help or contribute documentation, source
-code, or bug reports by getting involved in the Ceph community.
-
-.. toctree::
-   :maxdepth: 2
-
-	get-involved
-	documenting-ceph
-
-.. raw:: html
-
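
The removed intro states concrete minimums for a working cluster: at least three monitors, at least two managers, at least three OSDs, and an MDS only when Ceph Filesystem clients are served. As a minimal sketch assuming nothing beyond those numbers, the hypothetical ``ClusterPlan`` helper below encodes the same rules as a pre-deployment sanity check; it is illustrative only and not part of Ceph or any of its tooling::

    # Hypothetical helper (not part of Ceph) encoding the minimum daemon
    # counts stated in the removed intro.rst: >= 3 ceph-mon, >= 2 ceph-mgr,
    # >= 3 ceph-osd, and at least one ceph-mds only when CephFS is served.

    from dataclasses import dataclass


    @dataclass
    class ClusterPlan:
        monitors: int
        managers: int
        osds: int
        mds: int = 0               # only needed for Ceph Filesystem clients
        serves_cephfs: bool = False


    def check_minimums(plan: ClusterPlan) -> list[str]:
        """Return warnings for daemon counts below the documented minimums."""
        warnings = []
        if plan.monitors < 3:
            warnings.append("fewer than 3 ceph-mon daemons: no HA quorum")
        if plan.managers < 2:
            warnings.append("fewer than 2 ceph-mgr daemons: no standby manager")
        if plan.osds < 3:
            warnings.append("fewer than 3 ceph-osd daemons: not enough hosts for redundancy")
        if plan.serves_cephfs and plan.mds < 1:
            warnings.append("CephFS clients need at least one ceph-mds daemon")
        return warnings


    if __name__ == "__main__":
        plan = ClusterPlan(monitors=3, managers=2, osds=3)
        print(check_minimums(plan) or "plan meets the documented minimums")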
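
The removed text also explains that placement in Ceph is computed rather than looked up: each object is mapped to a placement group, and CRUSH then computes which OSDs store that placement group. The sketch below illustrates that two-step mapping under simplifying assumptions; the MD5 hash and the rendezvous-style OSD selection are stand-ins for Ceph's real hashing and CRUSH bucket/rule logic, and all names here are hypothetical::

    # Simplified sketch of the two-step placement described in the removed
    # intro: object name -> placement group (PG) -> ordered list of OSDs.
    # This is NOT Ceph's CRUSH implementation; the hash function and the
    # highest-random-weight selection below are illustrative stand-ins.

    import hashlib


    def object_to_pg(object_name: str, pg_num: int) -> int:
        """Map an object name to a placement group id (stable hash mod pg_num)."""
        digest = hashlib.md5(object_name.encode("utf-8")).digest()
        return int.from_bytes(digest[:4], "little") % pg_num


    def pg_to_osds(pg_id: int, osd_ids: list[int], replicas: int = 3) -> list[int]:
        """Pick `replicas` OSDs for a PG deterministically.

        Real CRUSH walks a hierarchy of buckets (hosts, racks, ...) under
        failure-domain rules; this stand-in only captures the idea that the
        mapping is computed, not stored in a central table.
        """
        def score(osd: int) -> int:
            h = hashlib.md5(f"{pg_id}:{osd}".encode("utf-8")).digest()
            return int.from_bytes(h[:4], "little")

        return sorted(osd_ids, key=score, reverse=True)[:replicas]


    if __name__ == "__main__":
        osds = [0, 1, 2, 3, 4, 5]
        pg = object_to_pg("rbd_data.1234", pg_num=128)
        acting_set = pg_to_osds(pg, osds, replicas=3)
        print(f"object -> pg {pg} -> OSDs {acting_set}")

Because the mapping is a pure function of the object name, the PG count, and the OSD set, any client can compute data locations independently, which is what allows the cluster to scale, rebalance, and recover dynamically as the removed text describes.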