path: root/src/ceph/doc/cephfs/index.rst
author    Qiaowei Ren <qiaowei.ren@intel.com>  2018-01-04 13:43:33 +0800
committer Qiaowei Ren <qiaowei.ren@intel.com>  2018-01-05 11:59:39 +0800
commit    812ff6ca9fcd3e629e49d4328905f33eee8ca3f5 (patch)
tree      04ece7b4da00d9d2f98093774594f4057ae561d4 /src/ceph/doc/cephfs/index.rst
parent    15280273faafb77777eab341909a3f495cf248d9 (diff)
initial code repo
This patch creates the initial code repo. For ceph, the luminous stable release will be used as the base code, and subsequent changes and optimizations for ceph will be added on top of it. For opensds, any changes can currently be upstreamed into the original opensds repo (https://github.com/opensds/opensds), so stor4nfv will directly clone the opensds code to deploy the stor4nfv environment. The deployment scripts based on ceph and opensds will be put into the 'ci' directory.

Change-Id: I46a32218884c75dda2936337604ff03c554648e4
Signed-off-by: Qiaowei Ren <qiaowei.ren@intel.com>
Diffstat (limited to 'src/ceph/doc/cephfs/index.rst')
-rw-r--r--  src/ceph/doc/cephfs/index.rst  |  116
1 file changed, 116 insertions(+), 0 deletions(-)
diff --git a/src/ceph/doc/cephfs/index.rst b/src/ceph/doc/cephfs/index.rst
new file mode 100644
index 0000000..c63364f
--- /dev/null
+++ b/src/ceph/doc/cephfs/index.rst
@@ -0,0 +1,116 @@
+=================
+ Ceph Filesystem
+=================
+
+The :term:`Ceph Filesystem` (Ceph FS) is a POSIX-compliant filesystem that uses
+a Ceph Storage Cluster to store its data. The Ceph filesystem uses the same Ceph
+Storage Cluster system as Ceph Block Devices, Ceph Object Storage with its S3
+and Swift APIs, or native bindings (librados).
+
+.. note:: If you are evaluating CephFS for the first time, please review
+ the best practices for deployment: :doc:`/cephfs/best-practices`
+
+.. ditaa::
+ +-----------------------+ +------------------------+
+ | | | CephFS FUSE |
+ | | +------------------------+
+ | |
+ | | +------------------------+
+ | CephFS Kernel Object | | CephFS Library |
+ | | +------------------------+
+ | |
+ | | +------------------------+
+ | | | librados |
+ +-----------------------+ +------------------------+
+
+ +---------------+ +---------------+ +---------------+
+ | OSDs | | MDSs | | Monitors |
+ +---------------+ +---------------+ +---------------+
+
+
+Using CephFS
+============
+
+Using the Ceph Filesystem requires at least one :term:`Ceph Metadata Server` in
+your Ceph Storage Cluster.
+
+
+
+.. raw:: html
+
+ <style type="text/css">div.body h3{margin:5px 0px 0px 0px;}</style>
+ <table cellpadding="10"><colgroup><col width="33%"><col width="33%"><col width="33%"></colgroup><tbody valign="top"><tr><td><h3>Step 1: Metadata Server</h3>
+
+To run the Ceph Filesystem, you must have a running Ceph Storage Cluster with at
+least one :term:`Ceph Metadata Server` running.
+
+
+.. toctree::
+ :maxdepth: 1
+
+ Add/Remove MDS(s) <../../rados/deployment/ceph-deploy-mds>
+ MDS failover and standby configuration <standby>
+ MDS Configuration Settings <mds-config-ref>
+ Client Configuration Settings <client-config-ref>
+ Journaler Configuration <journaler>
+ Manpage ceph-mds <../../man/8/ceph-mds>
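As a hedged sketch of this step (the hostname ``mds-node1`` is a placeholder, and ``ceph-deploy`` is assumed since the toctree above links to its MDS page), adding and verifying a metadata server might look like:

```shell
# From the admin node where ceph-deploy was originally run,
# deploy a metadata server daemon on the node "mds-node1".
ceph-deploy mds create mds-node1

# Confirm that the MDS has registered with the cluster.
ceph mds stat
```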
+
+.. raw:: html
+
+ </td><td><h3>Step 2: Mount CephFS</h3>
+
+Once you have a healthy Ceph Storage Cluster with at least
+one Ceph Metadata Server, you may create and mount your Ceph Filesystem.
+Ensure that your client has network connectivity and the proper
+authentication keyring.
+
+.. toctree::
+ :maxdepth: 1
+
+ Create CephFS <createfs>
+ Mount CephFS <kernel>
+ Mount CephFS as FUSE <fuse>
+ Mount CephFS in fstab <fstab>
+ Manpage ceph-fuse <../../man/8/ceph-fuse>
+ Manpage mount.ceph <../../man/8/mount.ceph>
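Pulling the pages above together into one hedged end-to-end sketch (pool names, the placement-group count of 64, the monitor host ``mon-host``, and the mount point are all illustrative assumptions, not prescriptions):

```shell
# Create the data and metadata pools, then the filesystem itself.
ceph osd pool create cephfs_data 64
ceph osd pool create cephfs_metadata 64
ceph fs new cephfs cephfs_metadata cephfs_data

# Mount with the kernel client, authenticating as client.admin
# with a secret file.
sudo mkdir -p /mnt/cephfs
sudo mount -t ceph mon-host:6789:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret

# Or mount the same filesystem with the FUSE client instead.
sudo ceph-fuse /mnt/cephfs
```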
+
+
+.. raw:: html
+
+ </td><td><h3>Additional Details</h3>
+
+.. toctree::
+ :maxdepth: 1
+
+ Deployment best practices <best-practices>
+ Administrative commands <administration>
+ POSIX compatibility <posix>
+ Experimental Features <experimental-features>
+ CephFS Quotas <quota>
+ Using Ceph with Hadoop <hadoop>
+ cephfs-journal-tool <cephfs-journal-tool>
+ File layouts <file-layouts>
+ Client eviction <eviction>
+ Handling full filesystems <full>
+ Health messages <health-messages>
+ Troubleshooting <troubleshooting>
+ Disaster recovery <disaster-recovery>
+ Client authentication <client-auth>
+ Upgrading old filesystems <upgrading>
+ Configuring directory fragmentation <dirfrags>
+ Configuring multiple active MDS daemons <multimds>
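To make two of the items above concrete, quotas and file layouts are both managed through virtual extended attributes on a mounted filesystem; the directory and file paths here are hypothetical:

```shell
# Limit a directory subtree to 100 MB and 10,000 files.
setfattr -n ceph.quota.max_bytes -v 100000000 /mnt/cephfs/projects
setfattr -n ceph.quota.max_files -v 10000 /mnt/cephfs/projects

# Inspect the layout (striping parameters and pool) of a file.
getfattr -n ceph.file.layout /mnt/cephfs/projects/somefile
```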
+
+.. raw:: html
+
+ </td></tr></tbody></table>
+
+For developers
+==============
+
+.. toctree::
+ :maxdepth: 1
+
+ Client's Capabilities <capabilities>
+ libcephfs <../../api/libcephfs-java/>
+ Mantle <mantle>
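As a minimal sketch of driving libcephfs from its Python binding (this assumes the ``python-cephfs`` package that ships with Ceph, a reachable cluster, and a conventional conffile path):

```python
# Requires the python-cephfs binding and a running cluster with CephFS.
import cephfs

fs = cephfs.LibCephFS(conffile='/etc/ceph/ceph.conf')
fs.mount()                              # attach to the filesystem

fd = fs.open('/hello.txt', 'w', 0o644)  # create a file at the root
fs.write(fd, b'hello cephfs', 0)        # write at offset 0
fs.close(fd)

fs.unmount()
fs.shutdown()
```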
+