author    Qiaowei Ren <qiaowei.ren@intel.com>  2018-01-04 13:43:33 +0800
committer Qiaowei Ren <qiaowei.ren@intel.com>  2018-01-05 11:59:39 +0800
commit    812ff6ca9fcd3e629e49d4328905f33eee8ca3f5 (patch)
tree      04ece7b4da00d9d2f98093774594f4057ae561d4 /src/ceph/doc/cephfs/createfs.rst
parent    15280273faafb77777eab341909a3f495cf248d9 (diff)
initial code repo
This patch creates the initial code repository.
For Ceph, the Luminous stable release will be used as the base code,
and subsequent changes and optimizations for Ceph will be added on top of it.
For opensds, any changes can currently be upstreamed into the original
opensds repo (https://github.com/opensds/opensds), so stor4nfv
will clone the opensds code directly to deploy the stor4nfv environment.
The scripts for deployment based on Ceph and opensds will be
put into the 'ci' directory.
Change-Id: I46a32218884c75dda2936337604ff03c554648e4
Signed-off-by: Qiaowei Ren <qiaowei.ren@intel.com>
Diffstat (limited to 'src/ceph/doc/cephfs/createfs.rst')
-rw-r--r--  src/ceph/doc/cephfs/createfs.rst  62
1 file changed, 62 insertions(+), 0 deletions(-)
diff --git a/src/ceph/doc/cephfs/createfs.rst b/src/ceph/doc/cephfs/createfs.rst
new file mode 100644
index 0000000..005ede8
--- /dev/null
+++ b/src/ceph/doc/cephfs/createfs.rst
@@ -0,0 +1,62 @@

========================
Create a Ceph filesystem
========================

.. tip::

   The ``ceph fs new`` command was introduced in Ceph 0.84. Prior to this
   release, no manual steps were required to create a filesystem; pools named
   ``data`` and ``metadata`` existed by default.

   The Ceph command line now includes commands for creating and removing
   filesystems, but at present only one filesystem may exist at a time.

A Ceph filesystem requires at least two RADOS pools, one for data and one for
metadata. When configuring these pools, you might consider:

- Using a higher replication level for the metadata pool, as any data
  loss in this pool can render the whole filesystem inaccessible.
- Using lower-latency storage such as SSDs for the metadata pool, as this
  will directly affect the observed latency of filesystem operations on
  clients.

Refer to :doc:`/rados/operations/pools` to learn more about managing pools.
For example, to create two pools with default settings for use with a
filesystem, you might run the following commands:

.. code:: bash

    $ ceph osd pool create cephfs_data <pg_num>
    $ ceph osd pool create cephfs_metadata <pg_num>

Once the pools are created, you may enable the filesystem using the
``fs new`` command:

.. code:: bash

    $ ceph fs new <fs_name> <metadata> <data>

For example:

.. code:: bash

    $ ceph fs new cephfs cephfs_metadata cephfs_data
    $ ceph fs ls
    name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]

Once a filesystem has been created, your MDS(s) will be able to enter an
*active* state. For example, in a single MDS system:

.. code:: bash

    $ ceph mds stat
    e5: 1/1/1 up {0=a=up:active}

Once the filesystem is created and the MDS is active, you are ready to mount
the filesystem. If you have created more than one filesystem, you will choose
which to use when mounting.

- `Mount CephFS`_
- `Mount CephFS as FUSE`_

.. _Mount CephFS: ../../cephfs/kernel
.. _Mount CephFS as FUSE: ../../cephfs/fuse
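The steps in the file above can be sketched end-to-end as a single shell
session. This is a hedged illustration, not part of the patch: the pool names
match the document's examples, ``pg_num`` of 64 is an assumed placeholder
value, and the ``ceph osd pool set ... size`` call is one optional way to act
on the document's advice about replicating metadata more strongly.

```shell
# Create the two required RADOS pools. pg_num=64 is an illustrative
# value only -- choose a placement-group count suited to your cluster.
ceph osd pool create cephfs_data 64
ceph osd pool create cephfs_metadata 64

# Optional: raise the replication level of the metadata pool, since
# metadata loss can make the whole filesystem inaccessible.
ceph osd pool set cephfs_metadata size 3

# Tie the pools together into a filesystem (metadata pool is named first).
ceph fs new cephfs cephfs_metadata cephfs_data

# Verify: the filesystem is listed and an MDS has entered the active state.
ceph fs ls
ceph mds stat
```

These commands must run against a live cluster with ``client.admin``
credentials; on a cluster that already has a filesystem, ``ceph fs new``
will fail, since only one filesystem may exist at a time in this release.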