author    Qiaowei Ren <qiaowei.ren@intel.com>    2018-01-04 13:43:33 +0800
committer Qiaowei Ren <qiaowei.ren@intel.com>    2018-01-05 11:59:39 +0800
commit    812ff6ca9fcd3e629e49d4328905f33eee8ca3f5 (patch)
tree      04ece7b4da00d9d2f98093774594f4057ae561d4 /src/ceph/doc/cephfs/administration.rst
parent    15280273faafb77777eab341909a3f495cf248d9 (diff)
initial code repo
This patch creates initial code repo. For ceph, luminous stable release
will be used for base code, and next changes and optimization for ceph
will be added to it. For opensds, currently any changes can be upstreamed
into original opensds repo (https://github.com/opensds/opensds), and so
stor4nfv will directly clone opensds code to deploy stor4nfv environment.
And the scripts for deployment based on ceph and opensds will be put into
'ci' directory.

Change-Id: I46a32218884c75dda2936337604ff03c554648e4
Signed-off-by: Qiaowei Ren <qiaowei.ren@intel.com>
Diffstat (limited to 'src/ceph/doc/cephfs/administration.rst')
-rw-r--r-- src/ceph/doc/cephfs/administration.rst | 198
1 files changed, 198 insertions, 0 deletions
diff --git a/src/ceph/doc/cephfs/administration.rst b/src/ceph/doc/cephfs/administration.rst
new file mode 100644
index 0000000..e9d9195
--- /dev/null
+++ b/src/ceph/doc/cephfs/administration.rst
@@ -0,0 +1,198 @@
+
+CephFS Administrative commands
+==============================
+
+Filesystems
+-----------
+
+These commands operate on the CephFS filesystems in your Ceph cluster.
+Note that by default only one filesystem is permitted: to enable
+creation of multiple filesystems use ``ceph fs flag set enable_multiple true``.
+A typical creation sequence is sketched after the command listing below.
+
+::
+
+ fs new <filesystem name> <metadata pool name> <data pool name>
+
+::
+
+ fs ls
+
+::
+
+ fs rm <filesystem name> [--yes-i-really-mean-it]
+
+::
+
+ fs reset <filesystem name>
+
+::
+
+ fs get <filesystem name>
+
+::
+
+ fs set <filesystem name> <var> <val>
+
+::
+
+ fs add_data_pool <filesystem name> <pool name/id>
+
+::
+
+ fs rm_data_pool <filesystem name> <pool name/id>
+
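+As a minimal sketch of the commands above, a filesystem could be created
+and given an extra data pool as follows. The filesystem name ``cephfs``
+and the pool names are illustrative, and the pools are assumed to exist
+already:
+
+::
+
+ ceph fs new cephfs cephfs_metadata cephfs_data
+ ceph fs ls
+ ceph fs add_data_pool cephfs cephfs_data_ssd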
+
+Settings
+--------
+
+::
+
+ fs set <fs name> max_file_size <size in bytes>
+
+CephFS has a configurable maximum file size, which defaults to 1TB.
+You may wish to raise this limit if you expect to store large files
+in CephFS. The value is a 64-bit field.
+
+Setting ``max_file_size`` to 0 does not disable the limit. It merely
+restricts clients to creating empty files.
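+
+As an illustration, the limit could be raised to 16TB on a filesystem
+hypothetically named ``cephfs`` (both the name and the size are examples
+only):
+
+::
+
+ ceph fs set cephfs max_file_size 17592186044416   # 16 TiB in bytes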
+
+
+Maximum file sizes and performance
+----------------------------------
+
+CephFS enforces the maximum file size limit at the point of appending to
+files or setting their size. It does not affect how anything is stored.
+
+When users create a file of an enormous size (without necessarily
+writing any data to it), some operations (such as deletes) force the MDS
+to perform a large number of operations to check whether any of the
+RADOS objects that could exist within that range (according to the file
+size) actually exist.
+
+The ``max_file_size`` setting prevents users from creating files that
+appear to be, for example, exabytes in size, which would otherwise load
+the MDS heavily as it enumerates the objects during operations such as
+stats or deletes.
+
+
+Daemons
+-------
+
+These commands act on specific MDS daemons or ranks.
+
+::
+
+ mds fail <gid/name/role>
+
+Mark an MDS daemon as failed. This is equivalent to what the cluster
+would do if an MDS daemon had failed to send a message to the mon
+for ``mds_beacon_grace`` seconds. If the daemon was active and a suitable
+standby is available, using ``mds fail`` will force a failover to the standby.
+
+If the MDS daemon was in reality still running, then using ``mds fail``
+will cause the daemon to restart. If it was active and a standby was
+available, then the "failed" daemon will return as a standby.
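+
+A minimal sketch, assuming rank ``0`` of the default filesystem and a
+daemon hypothetically named ``a``:
+
+::
+
+ ceph mds fail 0   # by rank
+ ceph mds fail a   # by daemon name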
+
+::
+
+ mds deactivate <role>
+
+Deactivate an MDS, causing it to flush its entire journal to
+backing RADOS objects and close all open client sessions. Deactivating an
+MDS is primarily intended for bringing down a rank after reducing the
+number of active MDS daemons (``max_mds``). Once the rank is deactivated,
+the MDS daemon will rejoin the cluster as a standby.
+
+``<role>`` can take one of three forms:
+
+::
+
+ <fs_name>:<rank>
+ <fs_id>:<rank>
+ <rank>
+
+Use ``mds deactivate`` in conjunction with adjustments to ``max_mds`` to
+shrink an MDS cluster. See :doc:`/cephfs/multimds`.
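+
+For example, shrinking a filesystem hypothetically named ``cephfs`` from
+two active ranks to one might look like this (a sketch, not a complete
+procedure):
+
+::
+
+ ceph fs set cephfs max_mds 1
+ ceph mds deactivate cephfs:1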
+
+::
+
+ tell mds.<daemon name>
+
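+As an illustration of ``tell``, configuration options can be injected
+into a running daemon; the daemon name ``a`` and the debug values here
+are examples only:
+
+::
+
+ ceph tell mds.a injectargs '--debug_mds 10 --debug_ms 1'
+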
+::
+
+ mds metadata <gid/name/role>
+
+::
+
+ mds repaired <role>
+
+
+Global settings
+---------------
+
+::
+
+ fs dump
+
+::
+
+ fs flag set <flag name> <flag val> [<confirmation string>]
+
+"flag name" must be one of ['enable_multiple']
+
+Some flags require you to confirm your intentions with ``--yes-i-really-mean-it``
+or a similar string that they will prompt you with. Consider these actions
+carefully before proceeding; such confirmations guard especially dangerous
+activities.
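+
+For example, creation of multiple filesystems (referenced earlier in this
+document) could be enabled as follows; the confirmation string may be
+required depending on the release:
+
+::
+
+ ceph fs flag set enable_multiple true --yes-i-really-mean-it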
+
+
+Advanced
+--------
+
+These commands are not required in normal operation, and exist
+for use in exceptional circumstances. Incorrect use of these
+commands may cause serious problems, such as an inaccessible
+filesystem.
+
+::
+
+ mds compat rm_compat
+
+::
+
+ mds compat rm_incompat
+
+::
+
+ mds compat show
+
+::
+
+ mds getmap
+
+::
+
+ mds set_state
+
+::
+
+ mds rmfailed
+
+Legacy
+------
+
+The ``ceph mds set`` command is the deprecated version of ``ceph fs set``,
+dating from before there could be more than one filesystem per cluster. It
+operates on whichever filesystem is marked as the default (see ``ceph fs
+set-default``).
+
+::
+
+ mds stat
+ mds dump # replaced by "fs get"
+ mds stop # replaced by "mds deactivate"
+ mds set_max_mds # replaced by "fs set max_mds"
+ mds set # replaced by "fs set"
+ mds cluster_down # replaced by "fs set cluster_down"
+ mds cluster_up # replaced by "fs set cluster_up"
+ mds newfs # replaced by "fs new"
+ mds add_data_pool # replaced by "fs add_data_pool"
+ mds remove_data_pool # replaced by "fs rm_data_pool"
+