From 7da45d65be36d36b880cc55c5036e96c24b53f00 Mon Sep 17 00:00:00 2001
From: Qiaowei Ren
Date: Thu, 1 Mar 2018 14:38:11 +0800
Subject: remove ceph code

This patch removes the initial ceph code due to a license issue.

Change-Id: I092d44f601cdf34aed92300fe13214925563081c
Signed-off-by: Qiaowei Ren
---
 src/ceph/doc/cephfs/index.rst | 116 ------------------------------------------
 1 file changed, 116 deletions(-)
 delete mode 100644 src/ceph/doc/cephfs/index.rst

diff --git a/src/ceph/doc/cephfs/index.rst b/src/ceph/doc/cephfs/index.rst
deleted file mode 100644
index c63364f..0000000
--- a/src/ceph/doc/cephfs/index.rst
+++ /dev/null
@@ -1,116 +0,0 @@
-=================
- Ceph Filesystem
-=================
-
-The :term:`Ceph Filesystem` (Ceph FS) is a POSIX-compliant filesystem that uses
-a Ceph Storage Cluster to store its data. The Ceph filesystem uses the same Ceph
-Storage Cluster system as Ceph Block Devices, Ceph Object Storage with its S3
-and Swift APIs, or native bindings (librados).
-
-.. note:: If you are evaluating CephFS for the first time, please review
-          the best practices for deployment: :doc:`/cephfs/best-practices`
-
-.. ditaa::
-            +-----------------------+  +------------------------+
-            |                       |  |      CephFS FUSE       |
-            |                       |  +------------------------+
-            |                       |
-            |                       |  +------------------------+
-            |  CephFS Kernel Object |  |     CephFS Library     |
-            |                       |  +------------------------+
-            |                       |
-            |                       |  +------------------------+
-            |                       |  |        librados        |
-            +-----------------------+  +------------------------+
-
-            +---------------+ +---------------+ +---------------+
-            |      OSDs     | |      MDSs     | |    Monitors   |
-            +---------------+ +---------------+ +---------------+
-
-
-Using CephFS
-============
-
-Using the Ceph Filesystem requires at least one :term:`Ceph Metadata Server` in
-your Ceph Storage Cluster.
-
-
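The client stack pictured above (kernel client, FUSE client, or the CephFS library on top of librados) can also be exercised directly through the libcephfs Python binding that ships with Ceph as python-cephfs. The following is a minimal sketch only, assuming a reachable cluster, a default /etc/ceph/ceph.conf, and an already-created CephFS filesystem; the directory name is purely illustrative.

.. code-block:: python

   # Minimal sketch: talk to CephFS through the libcephfs Python binding.
   # Assumes python-cephfs is installed, /etc/ceph/ceph.conf exists, and a
   # CephFS filesystem has already been created on the cluster.
   import cephfs

   fs = cephfs.LibCephFS(conffile='/etc/ceph/ceph.conf')
   fs.mount()                       # attach to the filesystem root
   try:
       fs.mkdir('/demo', 0o755)     # POSIX-style directory creation
       print(fs.stat('/demo'))      # stat the new directory via the MDS
   finally:
       fs.unmount()
       fs.shutdown()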
-
-.. raw:: html
-
-    <h3>Step 1: Metadata Server</h3>
-
-To run the Ceph Filesystem, you must have a running Ceph Storage Cluster with at
-least one :term:`Ceph Metadata Server` running.
-
-
-.. toctree::
-    :maxdepth: 1
-
-    Add/Remove MDS(s) <../../rados/deployment/ceph-deploy-mds>
-    MDS failover and standby configuration
-    MDS Configuration Settings
-    Client Configuration Settings
-    Journaler Configuration
-    Manpage ceph-mds <../../man/8/ceph-mds>
-
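Before moving on to mounting, it can be useful to confirm from a client that a filesystem and its metadata server are actually available. One way, sketched below with the python-rados binding, is to send the equivalent of ``ceph fs ls`` as a monitor command; the admin credentials and the ``name`` field used here are assumptions based on the command's standard JSON output and may vary between releases.

.. code-block:: python

   # Sketch: ask the monitors which CephFS filesystems exist, using the
   # python-rados binding. Assumes client.admin credentials in the default
   # keyring and a readable /etc/ceph/ceph.conf.
   import json
   import rados

   cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
   cluster.connect()
   try:
       cmd = json.dumps({'prefix': 'fs ls', 'format': 'json'})
       ret, outbuf, errs = cluster.mon_command(cmd, b'')
       if ret == 0:
           for fs_info in json.loads(outbuf.decode('utf-8')):
               print(fs_info.get('name'))   # one line per CephFS filesystem
       else:
           print('fs ls failed:', errs)
   finally:
       cluster.shutdown()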
-.. raw:: html
-
-    <h3>Step 2: Mount CephFS</h3>
-
-Once you have a healthy Ceph Storage Cluster with at least
-one Ceph Metadata Server, you may create and mount your Ceph Filesystem.
-Ensure that your client has network connectivity and the proper
-authentication keyring.
-
-.. toctree::
-    :maxdepth: 1
-
-    Create CephFS
-    Mount CephFS
-    Mount CephFS as FUSE
-    Mount CephFS in fstab
-    Manpage ceph-fuse <../../man/8/ceph-fuse>
-    Manpage mount.ceph <../../man/8/mount.ceph>
-
-
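The keyring requirement mentioned above applies equally when mounting through the library rather than the kernel or FUSE clients. The sketch below passes an explicit CephX identity and keyring path to libcephfs; the ``admin`` identity and the keyring location are placeholder assumptions, so substitute whatever client your cluster actually uses.

.. code-block:: python

   # Sketch: mount CephFS with an explicit CephX identity and keyring.
   # The auth_id and keyring path below are assumptions for illustration.
   import cephfs

   fs = cephfs.LibCephFS(
       conffile='/etc/ceph/ceph.conf',
       conf={'keyring': '/etc/ceph/ceph.client.admin.keyring'},
       auth_id='admin',
   )
   fs.mount()               # fails if the monitors are unreachable or the
   try:                     # keyring does not grant access to CephFS
       print(fs.getcwd())   # prints the mount root, typically '/'
   finally:
       fs.unmount()
       fs.shutdown()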
-.. raw:: html
-
-    <h3>Additional Details</h3>
-
-.. toctree::
-    :maxdepth: 1
-
-    Deployment best practices
-    Administrative commands
-    POSIX compatibility
-    Experimental Features
-    CephFS Quotas
-    Using Ceph with Hadoop
-    cephfs-journal-tool
-    File layouts
-    Client eviction
-    Handling full filesystems
-    Health messages
-    Troubleshooting
-    Disaster recovery
-    Client authentication
-    Upgrading old filesystems
-    Configuring directory fragmentation
-    Configuring multiple active MDS daemons
-
-.. raw:: html
-
-
-For developers
-==============
-
-.. toctree::
-    :maxdepth: 1
-
-    Client's Capabilities
-    libcephfs <../../api/libcephfs-java/>
-    Mantle
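For developers working against libcephfs directly, a basic file round trip looks roughly like the sketch below (shown with the Python binding rather than the Java binding linked above); the path and payload are illustrative and error handling is omitted.

.. code-block:: python

   # Sketch: basic file I/O through libcephfs (python-cephfs). The path and
   # payload are illustrative; real code should handle cephfs.Error cases.
   import cephfs

   fs = cephfs.LibCephFS(conffile='/etc/ceph/ceph.conf')
   fs.mount()
   try:
       fd = fs.open('/hello.txt', 'w', 0o644)   # create/truncate for write
       fs.write(fd, b'hello cephfs\n', 0)       # write at offset 0
       fs.close(fd)

       fd = fs.open('/hello.txt', 'r')
       print(fs.read(fd, 0, 64))                # read up to 64 bytes at offset 0
       fs.close(fd)
   finally:
       fs.unmount()
       fs.shutdown()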