author     Qiaowei Ren <qiaowei.ren@intel.com>  2018-01-04 13:43:33 +0800
committer  Qiaowei Ren <qiaowei.ren@intel.com>  2018-01-05 11:59:39 +0800
commit     812ff6ca9fcd3e629e49d4328905f33eee8ca3f5 (patch)
tree       04ece7b4da00d9d2f98093774594f4057ae561d4 /src/ceph/doc/install
parent     15280273faafb77777eab341909a3f495cf248d9 (diff)
initial code repo
This patch creates the initial code repo. For ceph, the luminous stable release will be used as the base code, and subsequent changes and optimizations for ceph will be added on top of it. For opensds, any changes can currently be upstreamed into the original opensds repo (https://github.com/opensds/opensds), so stor4nfv will directly clone the opensds code to deploy the stor4nfv environment. The scripts for deployment based on ceph and opensds will be put into the 'ci' directory.

Change-Id: I46a32218884c75dda2936337604ff03c554648e4
Signed-off-by: Qiaowei Ren <qiaowei.ren@intel.com>
Diffstat (limited to 'src/ceph/doc/install')
-rw-r--r-- src/ceph/doc/install/build-ceph.rst | 102
-rw-r--r-- src/ceph/doc/install/clone-source.rst | 101
-rw-r--r-- src/ceph/doc/install/get-packages.rst | 436
-rw-r--r-- src/ceph/doc/install/get-tarballs.rst | 14
-rw-r--r-- src/ceph/doc/install/index.rst | 71
-rw-r--r-- src/ceph/doc/install/install-ceph-deploy.rst | 23
-rw-r--r-- src/ceph/doc/install/install-ceph-gateway.rst | 605
-rw-r--r-- src/ceph/doc/install/install-storage-cluster.rst | 91
-rw-r--r-- src/ceph/doc/install/install-vm-cloud.rst | 128
-rw-r--r-- src/ceph/doc/install/manual-deployment.rst | 488
-rw-r--r-- src/ceph/doc/install/manual-freebsd-deployment.rst | 623
-rw-r--r-- src/ceph/doc/install/mirrors.rst | 67
-rw-r--r-- src/ceph/doc/install/upgrading-ceph.rst | 746
13 files changed, 3495 insertions, 0 deletions
diff --git a/src/ceph/doc/install/build-ceph.rst b/src/ceph/doc/install/build-ceph.rst
new file mode 100644
index 0000000..9c834b9
--- /dev/null
+++ b/src/ceph/doc/install/build-ceph.rst
@@ -0,0 +1,102 @@
+============
+ Build Ceph
+============
+
+You can get Ceph software by retrieving Ceph source code and building it yourself.
+To build Ceph, you need to set up a development environment, compile Ceph,
+and then either install in user space or build packages and install the packages.
+
+Build Prerequisites
+===================
+
+
+.. tip:: Check this section to see if there are specific prerequisites for your
+ Linux/Unix distribution.
+
+Before you can build Ceph source code, you need to install several libraries
+and tools::
+
+ ./install-deps.sh
+
+.. note:: Some distributions that support Google's memory profiler tool may use
+ a different package name (e.g., ``libgoogle-perftools4``).
+
+Build Ceph
+==========
+
+Ceph is built using cmake. To build Ceph, navigate to your cloned Ceph
+repository and execute the following::
+
+ cd ceph
+ ./do_cmake.sh
+ cd build
+ make
+
+.. topic:: Hyperthreading
+
+ You can use ``make -j`` to run multiple build jobs in parallel, depending upon
+ your system. For example, ``make -j4`` may build faster on a dual-core processor.
+
+See `Installing a Build`_ to install a build in user space.
+
+Build Ceph Packages
+===================
+
+To build packages, you must clone the `Ceph`_ repository. You can create
+installation packages from the latest code using ``dpkg-buildpackage`` for
+Debian/Ubuntu or ``rpmbuild`` for the RPM Package Manager.
+
+.. tip:: When building on a multi-core CPU, use the ``-j`` option with twice
+ the number of cores. For example, use ``-j4`` on a dual-core processor to
+ accelerate the build.
+
+
+Advanced Package Tool (APT)
+---------------------------
+
+To create ``.deb`` packages for Debian/Ubuntu, ensure that you have cloned the
+`Ceph`_ repository, installed the `Build Prerequisites`_ and installed
+``debhelper``::
+
+ sudo apt-get install debhelper
+
+Once you have installed debhelper, you can build the packages::
+
+ sudo dpkg-buildpackage
+
+For multi-processor CPUs use the ``-j`` option to accelerate the build.
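+
+A minimal sketch of that, assuming a dual-core machine (adjust the job count to
+your hardware)::
+
+ sudo dpkg-buildpackage -j4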
+
+
+RPM Package Manager
+-------------------
+
+To create ``.rpm`` packages, ensure that you have cloned the `Ceph`_ repository,
+installed the `Build Prerequisites`_ and installed ``rpm-build`` and
+``rpmdevtools``::
+
+ yum install rpm-build rpmdevtools
+
+Once you have installed the tools, setup an RPM compilation environment::
+
+ rpmdev-setuptree
+
+Fetch the source tarball for the RPM compilation environment::
+
+ wget -P ~/rpmbuild/SOURCES/ http://ceph.com/download/ceph-<version>.tar.bz2
+
+Or from the EU mirror::
+
+ wget -P ~/rpmbuild/SOURCES/ http://eu.ceph.com/download/ceph-<version>.tar.bz2
+
+Extract the specfile::
+
+ tar --strip-components=1 -C ~/rpmbuild/SPECS/ --no-anchored -xvjf ~/rpmbuild/SOURCES/ceph-<version>.tar.bz2 "ceph.spec"
+
+Build the RPM packages::
+
+ rpmbuild -ba ~/rpmbuild/SPECS/ceph.spec
+
+Note that ``rpmbuild`` itself does not take a ``-j`` option; on multi-processor
+CPUs, build parallelism is controlled by the ``%{?_smp_mflags}`` macro used in
+the spec file.
+
+.. _Ceph: ../clone-source
+.. _Installing a Build: ../install-storage-cluster#installing-a-build
diff --git a/src/ceph/doc/install/clone-source.rst b/src/ceph/doc/install/clone-source.rst
new file mode 100644
index 0000000..9ed32c0
--- /dev/null
+++ b/src/ceph/doc/install/clone-source.rst
@@ -0,0 +1,101 @@
+=========================================
+ Cloning the Ceph Source Code Repository
+=========================================
+
+You may obtain a snapshot of a branch of the Ceph source code by going to the
+`github Ceph Repository`_, selecting a branch (``master`` by default), and
+clicking the **Download ZIP** button.
+
+.. _github Ceph Repository: https://github.com/ceph/ceph
+
+
+To clone the entire git repository, install and configure ``git``.
+
+
+Install Git
+===========
+
+To install ``git`` on Debian/Ubuntu, execute::
+
+ sudo apt-get install git
+
+
+To install ``git`` on CentOS/RHEL, execute::
+
+ sudo yum install git
+
+
+You must also have a ``github`` account. If you do not have a
+``github`` account, go to `github.com`_ and register.
+Follow the directions for setting up git at
+`Set Up Git`_.
+
+.. _github.com: http://github.com
+.. _Set Up Git: http://help.github.com/linux-set-up-git
+
+
+Add SSH Keys (Optional)
+=======================
+
+If you intend to commit code to Ceph or to clone using SSH
+(``git@github.com:ceph/ceph.git``), you must generate SSH keys for github.
+
+.. tip:: If you only intend to clone the repository, you may
+ use ``git clone --recursive https://github.com/ceph/ceph.git``
+ without generating SSH keys.
+
+To generate SSH keys for ``github``, execute::
+
+ ssh-keygen
+
+Get the key to add to your ``github`` account (the following example
+assumes you used the default file path)::
+
+ cat .ssh/id_rsa.pub
+
+Copy the public key.
+
+Go to your ``github`` account, click on "Account Settings" (i.e., the
+'tools' icon); then, click "SSH Keys" on the left side navbar.
+
+Click "Add SSH key" in the "SSH Keys" list, enter a name for the key, paste the
+key you generated, and press the "Add key" button.
+
+
+Clone the Source
+================
+
+To clone the Ceph source code repository, execute::
+
+ git clone --recursive https://github.com/ceph/ceph.git
+
+Once ``git clone`` executes, you should have a full copy of the Ceph
+repository.
+
+.. tip:: Make sure you maintain the latest copies of the submodules
+ included in the repository. Running ``git status`` will tell you if
+ the submodules are out of date.
+
+::
+
+ cd ceph
+ git status
+
+If your submodules are out of date, run::
+
+ git submodule update --force --init --recursive
+
+Choose a Branch
+===============
+
+Once you clone the source code and submodules, your Ceph repository
+will be on the ``master`` branch by default, which is the unstable
+development branch. You may choose other branches too.
+
+- ``master``: The unstable development branch.
+- ``stable``: The bugfix branch.
+- ``next``: The release candidate branch.
+
+::
+
+ git checkout master
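+
+For example, to switch to a named stable release branch instead (the
+``luminous`` branch is used here purely as an illustration)::
+
+ git checkout luminous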
diff --git a/src/ceph/doc/install/get-packages.rst b/src/ceph/doc/install/get-packages.rst
new file mode 100644
index 0000000..02a24cd
--- /dev/null
+++ b/src/ceph/doc/install/get-packages.rst
@@ -0,0 +1,436 @@
+==============
+ Get Packages
+==============
+
+To install Ceph and other enabling software, you need to retrieve packages from
+the Ceph repository. Follow this guide to get packages; then, proceed to the
+`Install Ceph Object Storage`_.
+
+
+Getting Packages
+================
+
+There are two ways to get packages:
+
+- **Add Repositories:** Adding repositories is the easiest way to get packages,
+ because package management tools will retrieve the packages and all enabling
+ software for you in most cases. However, to use this approach, each
+ :term:`Ceph Node` in your cluster must have internet access.
+
+- **Download Packages Manually:** Downloading packages manually is a convenient
+ way to install Ceph if your environment does not allow a :term:`Ceph Node` to
+ access the internet.
+
+
+Requirements
+============
+
+All Ceph deployments require Ceph packages (except for development). You should
+also add keys and recommended packages.
+
+- **Keys: (Recommended)** Whether you add repositories or download packages
+ manually, you should download keys to verify the packages. If you do not get
+ the keys, you may encounter security warnings. There are two keys: one for
+ releases (common) and one for development (programmers and QA only). Choose
+ the key that suits your needs. See `Add Keys`_ for details.
+
+- **Ceph: (Required)** All Ceph deployments require Ceph release packages,
+ except for deployments that use development packages (development, QA, and
+ bleeding edge deployments only). See `Add Ceph`_ for details.
+
+- **Ceph Development: (Optional)** If you are developing for Ceph, testing Ceph
+ development builds, or if you want features from the bleeding edge of Ceph
+ development, you may get Ceph development packages. See
+ `Add Ceph Development`_ for details.
+
+- **Apache/FastCGI: (Optional)** If you are deploying a
+ :term:`Ceph Object Storage` service, you must install Apache and FastCGI.
+ Ceph provides Apache and FastCGI builds that are identical to those available
+ from Apache, but with 100-continue support. If you want to enable
+ :term:`Ceph Object Gateway` daemons with 100-continue support, you must
+ retrieve Apache/FastCGI packages from the Ceph repository.
+ See `Add Apache/FastCGI`_ for details.
+
+
+If you intend to download packages manually, see Section `Download Packages`_.
+
+
+Add Keys
+========
+
+Add a key to your system's list of trusted keys to avoid a security warning. For
+major releases (e.g., ``hammer``, ``jewel``) and development releases
+(``release-name-rc1``, ``release-name-rc2``), use the ``release.asc`` key. For
+development testing packages, use the ``autobuild.asc`` key (developers and
+QA).
+
+
+APT
+---
+
+To install the ``release.asc`` key, execute the following::
+
+ wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
+
+
+To install the ``autobuild.asc`` key, execute the following
+(QA and developers only)::
+
+ wget -q -O- 'https://download.ceph.com/keys/autobuild.asc' | sudo apt-key add -
+
+
+RPM
+---
+
+To install the ``release.asc`` key, execute the following::
+
+ sudo rpm --import 'https://download.ceph.com/keys/release.asc'
+
+To install the ``autobuild.asc`` key, execute the following
+(QA and developers only)::
+
+ sudo rpm --import 'https://download.ceph.com/keys/autobuild.asc'
+
+
+Add Ceph
+========
+
+Release repositories use the ``release.asc`` key to verify packages.
+To install Ceph packages with the Advanced Package Tool (APT) or
+Yellowdog Updater, Modified (YUM), you must add Ceph repositories.
+
+You may find releases for Debian/Ubuntu (installed with APT) at::
+
+ https://download.ceph.com/debian-{release-name}
+
+You may find releases for CentOS/RHEL and others (installed with YUM) at::
+
+ https://download.ceph.com/rpm-{release-name}
+
+The major releases of Ceph are summarized at: :doc:`/releases`.
+
+Every second major release is considered Long Term Stable (LTS). Critical
+bugfixes are backported to LTS releases until their retirement. Since retired
+releases are no longer maintained, we recommend that users upgrade their
+clusters regularly - preferably to the latest LTS release.
+
+The most recent LTS release is Jewel (10.2.x).
+
+.. tip:: For international users: There might be a mirror close to you from which to download Ceph. For more information see: `Ceph Mirrors`_.
+
+Debian Packages
+---------------
+
+Add a Ceph package repository to your system's list of APT sources. For newer
+versions of Debian/Ubuntu, call ``lsb_release -sc`` on the command line to
+get the short codename, and replace ``{codename}`` in the following command. ::
+
+ sudo apt-add-repository 'deb https://download.ceph.com/debian-jewel/ {codename} main'
+
+For older Linux distributions, you may execute the following command::
+
+ echo deb https://download.ceph.com/debian-jewel/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
+
+For earlier Ceph releases, replace ``{release-name}`` with the name of the
+Ceph release. You may call ``lsb_release -sc`` on the command line
+to get the short codename, and replace ``{codename}`` in the following command.
+::
+
+ sudo apt-add-repository 'deb https://download.ceph.com/debian-{release-name}/ {codename} main'
+
+For older Linux distributions, replace ``{release-name}`` with the name of the
+release::
+
+ echo deb https://download.ceph.com/debian-{release-name}/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
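+
+For example, substituting the ``luminous`` release and the Ubuntu ``xenial``
+codename (shown only to illustrate the substitution) gives::
+
+ echo deb https://download.ceph.com/debian-luminous/ xenial main | sudo tee /etc/apt/sources.list.d/ceph.list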
+
+Ceph on ARM processors requires Google's memory profiling tools (``google-perftools``).
+The Ceph repository should have a copy at
+https://download.ceph.com/packages/google-perftools/debian. ::
+
+ echo deb https://download.ceph.com/packages/google-perftools/debian $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/google-perftools.list
+
+
+For development release packages, add our package repository to your system's
+list of APT sources. See `the testing Debian repository`_ for a complete list
+of Debian and Ubuntu releases supported. ::
+
+ echo deb https://download.ceph.com/debian-testing/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
+
+.. tip:: For international users: There might be a mirror close to you from which to download Ceph. For more information see: `Ceph Mirrors`_.
+
+RPM Packages
+------------
+
+For major releases, you may add a Ceph entry to the ``/etc/yum.repos.d``
+directory. Create a ``ceph.repo`` file. In the example below, replace
+``{ceph-release}`` with a major release of Ceph (e.g., ``hammer``, ``jewel``,
+etc.) and ``{distro}`` with your Linux distribution (e.g., ``el7``, etc.). You
+may view the https://download.ceph.com/rpm-{ceph-release}/ directory to see
+which distributions Ceph supports. Ceph packages must take priority over
+standard packages from other repositories (e.g., EPEL), so you must ensure that
+you set ``priority=2``. ::
+
+ [ceph]
+ name=Ceph packages for $basearch
+ baseurl=https://download.ceph.com/rpm-{ceph-release}/{distro}/$basearch
+ enabled=1
+ priority=2
+ gpgcheck=1
+ gpgkey=https://download.ceph.com/keys/release.asc
+
+ [ceph-noarch]
+ name=Ceph noarch packages
+ baseurl=https://download.ceph.com/rpm-{ceph-release}/{distro}/noarch
+ enabled=1
+ priority=2
+ gpgcheck=1
+ gpgkey=https://download.ceph.com/keys/release.asc
+
+ [ceph-source]
+ name=Ceph source packages
+ baseurl=https://download.ceph.com/rpm-{ceph-release}/{distro}/SRPMS
+ enabled=0
+ priority=2
+ gpgcheck=1
+ gpgkey=https://download.ceph.com/keys/release.asc
+
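+For illustration only, a filled-in ``baseurl`` for the Jewel release on CentOS 7
+(``el7``), assuming the standard ``download.ceph.com`` layout, would look like
+this::
+
+ baseurl=https://download.ceph.com/rpm-jewel/el7/$basearch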
+
+For development release packages, you may specify the repository
+for development releases instead. ::
+
+ [ceph]
+ name=Ceph packages for $basearch/$releasever
+ baseurl=https://download.ceph.com/rpm-testing/{distro}/$basearch
+ enabled=1
+ priority=2
+ gpgcheck=1
+ gpgkey=https://download.ceph.com/keys/release.asc
+
+ [ceph-noarch]
+ name=Ceph noarch packages
+ baseurl=https://download.ceph.com/rpm-testing/{distro}/noarch
+ enabled=1
+ priority=2
+ gpgcheck=1
+ gpgkey=https://download.ceph.com/keys/release.asc
+
+ [ceph-source]
+ name=Ceph source packages
+ baseurl=https://download.ceph.com/rpm-testing/{distro}/SRPMS
+ enabled=0
+ priority=2
+ gpgcheck=1
+ gpgkey=https://download.ceph.com/keys/release.asc
+
+
+To retrieve specific packages, you may download the release package by name.
+Our development process generates a new release of Ceph
+every 3-4 weeks. These packages are faster-moving than the major releases.
+Development packages have new features integrated quickly, while still
+undergoing several weeks of QA prior to release.
+
+The repository package installs the repository details on your local system for
+use with ``yum``. Replace ``{distro}`` with your Linux distribution, and
+``{release}`` with the specific release of Ceph::
+
+ su -c 'rpm -Uvh https://download.ceph.com/rpms/{distro}/x86_64/ceph-{release}.el7.noarch.rpm'
+
+You can download the RPMs directly from::
+
+ https://download.ceph.com/rpm-testing
+
+.. tip:: For international users: There might be a mirror close to you from which to download Ceph. For more information see: `Ceph Mirrors`_.
+
+
+Add Ceph Development
+====================
+
+Development repositories use the ``autobuild.asc`` key to verify packages.
+If you are developing Ceph and need to deploy and test specific Ceph branches,
+ensure that you remove repository entries for major releases first.
+
+
+Debian Packages
+---------------
+
+We automatically build Debian and Ubuntu packages for current
+development branches in the Ceph source code repository. These
+packages are intended for developers and QA only.
+
+Add our package repository to your system's list of APT sources, but
+replace ``{BRANCH}`` with the branch you'd like to use (e.g., chef-3,
+wip-hack, master). See `the gitbuilder page`_ for a complete
+list of distributions we build. ::
+
+ echo deb http://gitbuilder.ceph.com/ceph-deb-$(lsb_release -sc)-x86_64-basic/ref/{BRANCH} $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
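+
+For example, with ``{BRANCH}`` set to ``master`` on Ubuntu ``xenial`` (an
+illustrative substitution only), the command expands to::
+
+ echo deb http://gitbuilder.ceph.com/ceph-deb-xenial-x86_64-basic/ref/master xenial main | sudo tee /etc/apt/sources.list.d/ceph.list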
+
+
+RPM Packages
+------------
+
+For current development branches, you may add a Ceph entry to the
+``/etc/yum.repos.d`` directory. Create a ``ceph.repo`` file. In the example
+below, replace ``{distro}`` with your Linux distribution (e.g., ``el7``), and
+``{branch}`` with the name of the branch you want to install. ::
+
+
+ [ceph-source]
+ name=Ceph source packages
+ baseurl=http://gitbuilder.ceph.com/ceph-rpm-{distro}-x86_64-basic/ref/{branch}/SRPMS
+ enabled=0
+ gpgcheck=1
+ gpgkey=https://download.ceph.com/keys/autobuild.asc
+
+
+You may view the http://gitbuilder.ceph.com directory to see which
+distributions Ceph supports.
+
+
+Add Apache/FastCGI
+==================
+
+Ceph Object Gateway works with ordinary Apache and FastCGI libraries. However,
+Ceph builds Apache and FastCGI packages that support 100-continue. To use the
+Ceph Apache and FastCGI packages, add them to your repository.
+
+
+Debian Packages
+---------------
+
+Add our Apache and FastCGI packages to your system's list of APT sources if you intend to
+use 100-continue. ::
+
+ echo deb http://gitbuilder.ceph.com/apache2-deb-$(lsb_release -sc)-x86_64-basic/ref/master $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph-apache.list
+ echo deb http://gitbuilder.ceph.com/libapache-mod-fastcgi-deb-$(lsb_release -sc)-x86_64-basic/ref/master $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph-fastcgi.list
+
+
+RPM Packages
+------------
+
+You may add a Ceph entry to the ``/etc/yum.repos.d`` directory. Create a
+``ceph-apache.repo`` file. In the example below, replace ``{distro}`` with your
+Linux distribution (e.g., ``el7``). You may view the http://gitbuilder.ceph.com
+directory to see which distributions Ceph supports.
+::
+
+
+ [apache2-ceph-noarch]
+ name=Apache noarch packages for Ceph
+ baseurl=http://gitbuilder.ceph.com/apache2-rpm-{distro}-x86_64-basic/ref/master
+ enabled=1
+ priority=2
+ gpgcheck=1
+ gpgkey=https://download.ceph.com/keys/autobuild.asc
+
+ [apache2-ceph-source]
+ name=Apache source packages for Ceph
+ baseurl=http://gitbuilder.ceph.com/apache2-rpm-{distro}-x86_64-basic/ref/master
+ enabled=0
+ priority=2
+ gpgcheck=1
+ gpgkey=https://download.ceph.com/keys/autobuild.asc
+
+
+Repeat the foregoing process by creating a ``ceph-fastcgi.repo`` file. ::
+
+ [fastcgi-ceph-basearch]
+ name=FastCGI basearch packages for Ceph
+ baseurl=http://gitbuilder.ceph.com/mod_fastcgi-rpm-{distro}-x86_64-basic/ref/master
+ enabled=1
+ priority=2
+ gpgcheck=1
+ gpgkey=https://download.ceph.com/keys/autobuild.asc
+
+ [fastcgi-ceph-noarch]
+ name=FastCGI noarch packages for Ceph
+ baseurl=http://gitbuilder.ceph.com/mod_fastcgi-rpm-{distro}-x86_64-basic/ref/master
+ enabled=1
+ priority=2
+ gpgcheck=1
+ gpgkey=https://download.ceph.com/keys/autobuild.asc
+
+ [fastcgi-ceph-source]
+ name=FastCGI source packages for Ceph
+ baseurl=http://gitbuilder.ceph.com/mod_fastcgi-rpm-{distro}-x86_64-basic/ref/master
+ enabled=0
+ priority=2
+ gpgcheck=1
+ gpgkey=https://download.ceph.com/keys/autobuild.asc
+
+
+Download Packages
+=================
+
+If you are attempting to install behind a firewall in an environment without internet
+access, you must retrieve the packages (mirrored with all the necessary dependencies)
+before attempting an install.
+
+Debian Packages
+---------------
+
+Ceph requires the following additional third-party libraries:
+
+- libaio1
+- libsnappy1
+- libcurl3
+- curl
+- libgoogle-perftools4
+- google-perftools
+- libleveldb1
+
+
+The repository package installs the repository details on your local system for
+use with ``apt``. Replace ``{release}`` with the latest Ceph release. Replace
+``{version}`` with the latest Ceph version number. Replace ``{distro}`` with
+your Linux distribution codename. Replace ``{arch}`` with the CPU architecture.
+
+::
+
+ wget -q https://download.ceph.com/debian-{release}/pool/main/c/ceph/ceph_{version}{distro}_{arch}.deb
+
+
+RPM Packages
+------------
+
+Ceph requires additional third-party libraries.
+To add the EPEL repository, execute the following::
+
+ sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
+
+Ceph requires the following packages:
+
+- snappy
+- leveldb
+- gdisk
+- python-argparse
+- gperftools-libs
+
+
+Packages are currently built for the RHEL/CentOS7 (``el7``) platforms. The
+repository package installs the repository details on your local system for use
+with ``yum``. Replace ``{distro}`` with your distribution. ::
+
+ su -c 'rpm -Uvh https://download.ceph.com/rpm-jewel/{distro}/noarch/ceph-{version}.{distro}.noarch.rpm'
+
+For example, for CentOS 7 (``el7``)::
+
+ su -c 'rpm -Uvh https://download.ceph.com/rpm-jewel/el7/noarch/ceph-release-1-0.el7.noarch.rpm'
+
+You can download the RPMs directly from::
+
+ https://download.ceph.com/rpm-jewel
+
+
+For earlier Ceph releases, replace ``{release-name}`` with the name of the
+Ceph release. You may call ``lsb_release -sc`` on the command
+line to get the short codename. ::
+
+ su -c 'rpm -Uvh https://download.ceph.com/rpm-{release-name}/{distro}/noarch/ceph-{version}.{distro}.noarch.rpm'
+
+
+
+
+.. _Install Ceph Object Storage: ../install-storage-cluster
+.. _the testing Debian repository: https://download.ceph.com/debian-testing/dists
+.. _the gitbuilder page: http://gitbuilder.ceph.com
+.. _Ceph Mirrors: ../mirrors
diff --git a/src/ceph/doc/install/get-tarballs.rst b/src/ceph/doc/install/get-tarballs.rst
new file mode 100644
index 0000000..175d039
--- /dev/null
+++ b/src/ceph/doc/install/get-tarballs.rst
@@ -0,0 +1,14 @@
+====================================
+ Downloading a Ceph Release Tarball
+====================================
+
+As Ceph development progresses, the Ceph team releases new versions of the
+source code. You may download source code tarballs for Ceph releases here:
+
+`Ceph Release Tarballs`_
+
+.. tip:: For international users: There might be a mirror close to you from which to download Ceph. For more information see: `Ceph Mirrors`_.
+
+
+.. _Ceph Release Tarballs: https://download.ceph.com/tarballs/
+.. _Ceph Mirrors: ../mirrors
diff --git a/src/ceph/doc/install/index.rst b/src/ceph/doc/install/index.rst
new file mode 100644
index 0000000..d9dde72
--- /dev/null
+++ b/src/ceph/doc/install/index.rst
@@ -0,0 +1,71 @@
+=======================
+ Installation (Manual)
+=======================
+
+
+Get Software
+============
+
+There are several methods for getting Ceph software. The easiest and most common
+method is to `get packages`_ by adding repositories for use with package
+management tools such as the Advanced Package Tool (APT) or Yellowdog Updater,
+Modified (YUM). You may also retrieve pre-compiled packages from the Ceph
+repository. Finally, you can retrieve tarballs or clone the Ceph source code
+repository and build Ceph yourself.
+
+
+.. toctree::
+ :maxdepth: 1
+
+ Get Packages <get-packages>
+ Get Tarballs <get-tarballs>
+ Clone Source <clone-source>
+ Build Ceph <build-ceph>
+ Ceph Mirrors <mirrors>
+
+
+Install Software
+================
+
+Once you have the Ceph software (or have added repositories), installing the
+software is easy. Install packages on each :term:`Ceph Node` in your cluster.
+You may use ``ceph-deploy`` to install Ceph for your storage cluster, or use
+package management tools. You should install Yum Priorities for RHEL/CentOS and
+other distributions that use Yum if you intend to install the Ceph Object
+Gateway or QEMU.
+
+.. toctree::
+ :maxdepth: 1
+
+ Install ceph-deploy <install-ceph-deploy>
+ Install Ceph Storage Cluster <install-storage-cluster>
+ Install Ceph Object Gateway <install-ceph-gateway>
+ Install Virtualization for Block <install-vm-cloud>
+
+
+Deploy a Cluster Manually
+=========================
+
+Once you have Ceph installed on your nodes, you can deploy a cluster manually.
+The manual procedure is intended primarily as a reference for those developing
+deployment scripts with Chef, Juju, Puppet, etc.
+
+.. toctree::
+
+ Manual Deployment <manual-deployment>
+ Manual Deployment on FreeBSD <manual-freebsd-deployment>
+
+Upgrade Software
+================
+
+As new versions of Ceph become available, you may upgrade your cluster to take
+advantage of new functionality. Read the upgrade documentation before you
+upgrade your cluster. Sometimes upgrading Ceph requires you to follow an upgrade
+sequence.
+
+.. toctree::
+ :maxdepth: 2
+
+ Upgrading Ceph <upgrading-ceph>
+
+.. _get packages: ../install/get-packages
diff --git a/src/ceph/doc/install/install-ceph-deploy.rst b/src/ceph/doc/install/install-ceph-deploy.rst
new file mode 100644
index 0000000..d6516ad
--- /dev/null
+++ b/src/ceph/doc/install/install-ceph-deploy.rst
@@ -0,0 +1,23 @@
+=====================
+ Install Ceph Deploy
+=====================
+
+The ``ceph-deploy`` tool enables you to set up and tear down Ceph clusters
+for development, testing and proof-of-concept projects.
+
+
+APT
+---
+
+To install ``ceph-deploy`` with ``apt``, execute the following::
+
+ sudo apt-get update && sudo apt-get install ceph-deploy
+
+
+RPM
+---
+
+To install ``ceph-deploy`` with ``yum``, execute the following::
+
+ sudo yum install ceph-deploy
+
diff --git a/src/ceph/doc/install/install-ceph-gateway.rst b/src/ceph/doc/install/install-ceph-gateway.rst
new file mode 100644
index 0000000..cf599cd
--- /dev/null
+++ b/src/ceph/doc/install/install-ceph-gateway.rst
@@ -0,0 +1,605 @@
+===========================
+Install Ceph Object Gateway
+===========================
+
+As of `firefly` (v0.80), the Ceph Object Gateway runs on Civetweb (embedded
+into the ``ceph-radosgw`` daemon) instead of Apache and FastCGI. Using Civetweb
+simplifies Ceph Object Gateway installation and configuration.
+
+.. note:: To run the Ceph Object Gateway service, you should have a running
+ Ceph storage cluster, and the gateway host should have access to the
+ public network.
+
+.. note:: In version 0.80, the Ceph Object Gateway does not support SSL. You
+ may set up a reverse proxy server with SSL to dispatch HTTPS requests
+ as HTTP requests to Civetweb.
+
+Execute the Pre-Installation Procedure
+--------------------------------------
+
+See Preflight_ and execute the pre-installation procedures on your Ceph Object
+Gateway node. Specifically, you should disable ``requiretty`` on your Ceph
+Deploy user, set SELinux to ``Permissive`` and set up a Ceph Deploy user with
+password-less ``sudo``. For Ceph Object Gateways, you will need to open the
+port that Civetweb will use in production.
+
+.. note:: Civetweb runs on port ``7480`` by default.
+
+Install Ceph Object Gateway
+---------------------------
+
+From the working directory of your administration server, install the Ceph
+Object Gateway package on the Ceph Object Gateway node. For example::
+
+ ceph-deploy install --rgw <gateway-node1> [<gateway-node2> ...]
+
+The ``ceph-common`` package is a dependency, so ``ceph-deploy`` will install
+this too. The ``ceph`` CLI tools are intended for administrators. To make your
+Ceph Object Gateway node an administrator node, execute the following from the
+working directory of your administration server::
+
+ ceph-deploy admin <node-name>
+
+Create a Gateway Instance
+-------------------------
+
+From the working directory of your administration server, create an instance of
+the Ceph Object Gateway on the Ceph Object Gateway node. For example::
+
+ ceph-deploy rgw create <gateway-node1>
+
+Once the gateway is running, you should be able to access it on port ``7480``
+with an unauthenticated request like this::
+
+ http://client-node:7480
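+
+If you prefer the command line, a quick way to issue that unauthenticated
+request is with ``curl`` (assuming ``curl`` is installed and ``client-node``
+resolves to your gateway host)::
+
+ curl http://client-node:7480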
+
+If the gateway instance is working properly, you should receive a response like
+this::
+
+ <?xml version="1.0" encoding="UTF-8"?>
+ <ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
+ <Owner>
+ <ID>anonymous</ID>
+ <DisplayName></DisplayName>
+ </Owner>
+ <Buckets>
+ </Buckets>
+ </ListAllMyBucketsResult>
+
+If at any point you run into trouble and you want to start over, execute the
+following to purge the configuration::
+
+ ceph-deploy purge <gateway-node1> [<gateway-node2>]
+ ceph-deploy purgedata <gateway-node1> [<gateway-node2>]
+
+If you execute ``purge``, you must re-install Ceph.
+
+Change the Default Port
+-----------------------
+
+Civetweb runs on port ``7480`` by default. To change the default port (e.g., to
+port ``80``), modify your Ceph configuration file in the working directory of
+your administration server. Add a section entitled
+``[client.rgw.<gateway-node>]``, replacing ``<gateway-node>`` with the short
+node name of your Ceph Object Gateway node (i.e., ``hostname -s``).
+
+.. note:: As of version 11.0.1, the Ceph Object Gateway **does** support SSL.
+ See `Using SSL with Civetweb`_ for information on how to set that up.
+
+For example, if your node name is ``gateway-node1``, add a section like this
+after the ``[global]`` section::
+
+ [client.rgw.gateway-node1]
+ rgw_frontends = "civetweb port=80"
+
+.. note:: Ensure that you leave no whitespace between ``port=`` and the port
+ number in the ``rgw_frontends`` key/value pair. The ``[client.rgw.gateway-node1]``
+ heading identifies this portion of the Ceph configuration file as
+ configuring a Ceph Storage Cluster client where the client type is a Ceph
+ Object Gateway (i.e., ``rgw``), and the name of the instance is
+ ``gateway-node1``.
+
+Push the updated configuration file to your Ceph Object Gateway node
+(and other Ceph nodes)::
+
+ ceph-deploy --overwrite-conf config push <gateway-node> [<other-nodes>]
+
+To make the new port setting take effect, restart the Ceph Object
+Gateway::
+
+ sudo systemctl restart ceph-radosgw.service
+
+Finally, check to ensure that the port you selected is open on the node's
+firewall (e.g., port ``80``). If it is not open, add the port and reload the
+firewall configuration. If you use the ``firewalld`` daemon, execute::
+
+ sudo firewall-cmd --list-all
+ sudo firewall-cmd --zone=public --add-port 80/tcp --permanent
+ sudo firewall-cmd --reload
+
+If you use ``iptables``, execute::
+
+ sudo iptables --list
+ sudo iptables -I INPUT 1 -i <iface> -p tcp -s <ip-address>/<netmask> --dport 80 -j ACCEPT
+
+Replace ``<iface>`` and ``<ip-address>/<netmask>`` with the relevant values for
+your Ceph Object Gateway node.
+
+Once you have finished configuring ``iptables``, ensure that you make the
+change persistent so that it will be in effect when your Ceph Object Gateway
+node reboots. Execute::
+
+ sudo apt-get install iptables-persistent
+
+A terminal UI will open up. Select ``yes`` for the prompts to save current
+``IPv4`` iptables rules to ``/etc/iptables/rules.v4`` and current ``IPv6``
+iptables rules to ``/etc/iptables/rules.v6``.
+
+The ``IPv4`` iptables rule that you set in the earlier step will be loaded in
+``/etc/iptables/rules.v4`` and will be persistent across reboots.
+
+If you add a new ``IPv4`` iptables rule after installing
+``iptables-persistent`` you will have to add it to the rule file. In such case,
+execute the following as the ``root`` user::
+
+ iptables-save > /etc/iptables/rules.v4
+
+Using SSL with Civetweb
+-----------------------
+.. _Using SSL with Civetweb:
+
+Before using SSL with civetweb, you will need a certificate that will match
+the host name that will be used to access the Ceph Object Gateway.
+You may wish to obtain one that has `subject alternate name` fields for
+more flexibility. If you intend to use S3-style subdomains
+(`Add Wildcard to DNS`_), you will need a `wildcard` certificate.
+
+Civetweb requires that the server key, server certificate, and any other
+CA or intermediate certificates be supplied in one file. Each of these
+items must be in `pem` form. Because the combined file contains the
+secret key, it should be protected from unauthorized access.
+
+To configure SSL operation, append ``s`` to the port number. Currently it is
+not possible to configure the radosgw to listen on both HTTP and HTTPS; you
+must pick only one. For example::
+
+ [client.rgw.gateway-node1]
+ rgw_frontends = civetweb port=443s ssl_certificate=/etc/ceph/private/keyandcert.pem
+
+Migrating from Apache to Civetweb
+---------------------------------
+
+If you are running the Ceph Object Gateway on Apache and FastCGI with Ceph
+Storage v0.80 or above, you are already running Civetweb: it starts with the
+``ceph-radosgw`` daemon and runs on port 7480 by default so that it does not
+conflict with your Apache and FastCGI installation and other commonly used web
+service ports. Migrating to Civetweb basically involves removing your Apache
+installation. Then, remove the Apache and FastCGI settings from your Ceph
+configuration file and reset ``rgw_frontends`` to Civetweb.
+
+Referring back to the description for installing a Ceph Object Gateway with
+``ceph-deploy``, notice that the configuration file only has one setting
+``rgw_frontends`` (and that's assuming you elected to change the default port).
+The ``ceph-deploy`` utility generates the data directory and the keyring for
+you, placing the keyring in ``/var/lib/ceph/radosgw/{rgw-instance}``. The daemon
+looks in default locations, whereas you may have specified different settings
+in your Ceph configuration file. Since you already have keys and a data
+directory, you will want to maintain those paths in your Ceph configuration
+file if you used something other than default paths.
+
+A typical Ceph Object Gateway configuration file for an Apache-based deployment
+looks something like the following:
+
+On Red Hat Enterprise Linux::
+
+ [client.radosgw.gateway-node1]
+ host = {hostname}
+ keyring = /etc/ceph/ceph.client.radosgw.keyring
+ rgw socket path = ""
+ log file = /var/log/radosgw/client.radosgw.gateway-node1.log
+ rgw frontends = fastcgi socket_port=9000 socket_host=0.0.0.0
+ rgw print continue = false
+
+On Ubuntu::
+
+ [client.radosgw.gateway-node]
+ host = {hostname}
+ keyring = /etc/ceph/ceph.client.radosgw.keyring
+ rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
+ log file = /var/log/radosgw/client.radosgw.gateway-node1.log
+
+To modify it for use with Civetweb, simply remove the Apache-specific settings
+such as ``rgw_socket_path`` and ``rgw_print_continue``. Then, change the
+``rgw_frontends`` setting to reflect Civetweb rather than the Apache FastCGI
+front end and specify the port number you intend to use. For example::
+
+ [client.radosgw.gateway-node1]
+ host = {hostname}
+ keyring = /etc/ceph/ceph.client.radosgw.keyring
+ log file = /var/log/radosgw/client.radosgw.gateway-node1.log
+ rgw_frontends = civetweb port=80
+
+Finally, restart the Ceph Object Gateway. On Red Hat Enterprise Linux execute::
+
+ sudo systemctl restart ceph-radosgw.service
+
+On Ubuntu execute::
+
+ sudo service radosgw restart id=rgw.<short-hostname>
+
+If you used a port number that is not open, you will also need to open that
+port on your firewall.
+
+Configure Bucket Sharding
+-------------------------
+
+A Ceph Object Gateway stores bucket index data in the ``index_pool``, which
+defaults to ``.rgw.buckets.index``. Sometimes users like to put many objects
+(hundreds of thousands to millions of objects) in a single bucket. If you do
+not use the gateway administration interface to set quotas for the maximum
+number of objects per bucket, the bucket index can suffer significant
+performance degradation when users place large numbers of objects into a
+bucket.
+
+In Ceph 0.94, you may shard bucket indices to help prevent performance
+bottlenecks when you allow a high number of objects per bucket. The
+``rgw_override_bucket_index_max_shards`` setting allows you to set a maximum
+number of shards per bucket. The default value is ``0``, which means bucket
+index sharding is off by default.
+
+To turn bucket index sharding on, set ``rgw_override_bucket_index_max_shards``
+to a value greater than ``0``.
+
+For simple configurations, you may add ``rgw_override_bucket_index_max_shards``
+to your Ceph configuration file. Add it under ``[global]`` to create a
+system-wide value. You can also set it for each instance in your Ceph
+configuration file.
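+
+For example, a minimal sketch (the shard count of ``16`` and the instance name
+``gateway-node1`` are illustrative assumptions, not tuning recommendations)::
+
+ [global]
+ rgw_override_bucket_index_max_shards = 16
+
+ # or, to set it for a single instance only:
+ [client.rgw.gateway-node1]
+ rgw_override_bucket_index_max_shards = 16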
+
+Once you have changed your bucket sharding configuration in your Ceph
+configuration file, restart your gateway. On Red Hat Enterprise Linux execute::
+
+ sudo systemctl restart ceph-radosgw.service
+
+On Ubuntu execute::
+
+ sudo service radosgw restart id=rgw.<short-hostname>
+
+For federated configurations, each zone may have a different ``index_pool``
+setting for failover. To make the value consistent for a region's zones, you
+may set ``rgw_override_bucket_index_max_shards`` in a gateway's region
+configuration. For example::
+
+ radosgw-admin region get > region.json
+
+Open the ``region.json`` file and edit the ``bucket_index_max_shards`` setting
+for each named zone. Save the ``region.json`` file and reset the region. For
+example::
+
+ radosgw-admin region set < region.json
+
+Once you have updated your region, update the region map. For example::
+
+ radosgw-admin regionmap update --name client.rgw.ceph-client
+
+Where ``client.rgw.ceph-client`` is the name of the gateway user.
+
+.. note:: Mapping the index pool (for each zone, if applicable) to a CRUSH
+ ruleset of SSD-based OSDs may also help with bucket index performance.
+
+Add Wildcard to DNS
+-------------------
+.. _Add Wildcard to DNS:
+
+To use Ceph with S3-style subdomains (e.g., bucket-name.domain-name.com), you
+need to add a wildcard to the DNS record of the DNS server you use with the
+``ceph-radosgw`` daemon.
+
+The address of the DNS must also be specified in the Ceph configuration file
+with the ``rgw dns name = {hostname}`` setting.
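+
+For example, a minimal sketch using the ``gateway-node1`` host name from the
+examples in this guide (adjust the section name and host name to match your own
+gateway instance)::
+
+ [client.rgw.gateway-node1]
+ rgw dns name = gateway-node1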
+
+For ``dnsmasq``, add the following address setting with a dot (.) prepended to
+the host name::
+
+ address=/.{hostname-or-fqdn}/{host-ip-address}
+
+For example::
+
+ address=/.gateway-node1/192.168.122.75
+
+
+For ``bind``, add a wildcard to the DNS record. For example::
+
+ $TTL 604800
+ @ IN SOA gateway-node1. root.gateway-node1. (
+ 2 ; Serial
+ 604800 ; Refresh
+ 86400 ; Retry
+ 2419200 ; Expire
+ 604800 ) ; Negative Cache TTL
+ ;
+ @ IN NS gateway-node1.
+ @ IN A 192.168.122.113
+ * IN CNAME @
+
+Restart your DNS server and ping your server with a subdomain to ensure that
+your DNS configuration works as expected::
+
+ ping mybucket.{hostname}
+
+For example::
+
+ ping mybucket.gateway-node1
+
+Add Debugging (if needed)
+-------------------------
+
+Once you finish the setup procedure, if you encounter issues with your
+configuration, you can add debugging to the ``[global]`` section of your Ceph
+configuration file and restart the gateway(s) to help troubleshoot any
+configuration issues. For example::
+
+ [global]
+ #append the following in the global section.
+ debug ms = 1
+ debug rgw = 20
+
+Using the Gateway
+-----------------
+
+To use the REST interfaces, first create an initial Ceph Object Gateway user
+for the S3 interface. Then, create a subuser for the Swift interface. You then
+need to verify that the created users are able to access the gateway.
+
+Create a RADOSGW User for S3 Access
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+A ``radosgw`` user needs to be created and granted access. The command ``man
+radosgw-admin`` will provide information on additional command options.
+
+To create the user, execute the following on the ``gateway host``::
+
+ sudo radosgw-admin user create --uid="testuser" --display-name="First User"
+
+The output of the command will be something like the following::
+
+ {
+ "user_id": "testuser",
+ "display_name": "First User",
+ "email": "",
+ "suspended": 0,
+ "max_buckets": 1000,
+ "auid": 0,
+ "subusers": [],
+ "keys": [{
+ "user": "testuser",
+ "access_key": "I0PJDPCIYZ665MW88W9R",
+ "secret_key": "dxaXZ8U90SXydYzyS5ivamEP20hkLSUViiaR+ZDA"
+ }],
+ "swift_keys": [],
+ "caps": [],
+ "op_mask": "read, write, delete",
+ "default_placement": "",
+ "placement_tags": [],
+ "bucket_quota": {
+ "enabled": false,
+ "max_size_kb": -1,
+ "max_objects": -1
+ },
+ "user_quota": {
+ "enabled": false,
+ "max_size_kb": -1,
+ "max_objects": -1
+ },
+ "temp_url_keys": []
+ }
+
+.. note:: The values of ``keys->access_key`` and ``keys->secret_key`` are
+ needed for access validation.
+
+.. important:: Check the key output. Sometimes ``radosgw-admin`` generates a
+ JSON escape character ``\`` in ``access_key`` or ``secret_key``
+ and some clients do not know how to handle JSON escape
+ characters. Remedies include removing the JSON escape character
+ ``\``, encapsulating the string in quotes, regenerating the key
+ and ensuring that it does not have a JSON escape character, or
+ specifying the key and secret manually. Also, if ``radosgw-admin``
+ generates a JSON escape character ``\`` and a forward slash ``/``
+ together in a key, like ``\/``, only remove the JSON escape
+ character ``\``. Do not remove the forward slash ``/`` as it is
+ a valid character in the key.
+
+Create a Swift User
+^^^^^^^^^^^^^^^^^^^
+
+A Swift subuser needs to be created if this kind of access is needed. Creating
+a Swift user is a two step process. The first step is to create the user. The
+second is to create the secret key.
+
+Execute the following steps on the ``gateway host``:
+
+Create the Swift user::
+
+ sudo radosgw-admin subuser create --uid=testuser --subuser=testuser:swift --access=full
+
+The output will be something like the following::
+
+ {
+ "user_id": "testuser",
+ "display_name": "First User",
+ "email": "",
+ "suspended": 0,
+ "max_buckets": 1000,
+ "auid": 0,
+ "subusers": [{
+ "id": "testuser:swift",
+ "permissions": "full-control"
+ }],
+ "keys": [{
+ "user": "testuser:swift",
+ "access_key": "3Y1LNW4Q6X0Y53A52DET",
+ "secret_key": ""
+ }, {
+ "user": "testuser",
+ "access_key": "I0PJDPCIYZ665MW88W9R",
+ "secret_key": "dxaXZ8U90SXydYzyS5ivamEP20hkLSUViiaR+ZDA"
+ }],
+ "swift_keys": [],
+ "caps": [],
+ "op_mask": "read, write, delete",
+ "default_placement": "",
+ "placement_tags": [],
+ "bucket_quota": {
+ "enabled": false,
+ "max_size_kb": -1,
+ "max_objects": -1
+ },
+ "user_quota": {
+ "enabled": false,
+ "max_size_kb": -1,
+ "max_objects": -1
+ },
+ "temp_url_keys": []
+ }
+
+Create the secret key::
+
+ sudo radosgw-admin key create --subuser=testuser:swift --key-type=swift --gen-secret
+
+The output will be something like the following::
+
+ {
+ "user_id": "testuser",
+ "display_name": "First User",
+ "email": "",
+ "suspended": 0,
+ "max_buckets": 1000,
+ "auid": 0,
+ "subusers": [{
+ "id": "testuser:swift",
+ "permissions": "full-control"
+ }],
+ "keys": [{
+ "user": "testuser:swift",
+ "access_key": "3Y1LNW4Q6X0Y53A52DET",
+ "secret_key": ""
+ }, {
+ "user": "testuser",
+ "access_key": "I0PJDPCIYZ665MW88W9R",
+ "secret_key": "dxaXZ8U90SXydYzyS5ivamEP20hkLSUViiaR+ZDA"
+ }],
+ "swift_keys": [{
+ "user": "testuser:swift",
+ "secret_key": "244+fz2gSqoHwR3lYtSbIyomyPHf3i7rgSJrF\/IA"
+ }],
+ "caps": [],
+ "op_mask": "read, write, delete",
+ "default_placement": "",
+ "placement_tags": [],
+ "bucket_quota": {
+ "enabled": false,
+ "max_size_kb": -1,
+ "max_objects": -1
+ },
+ "user_quota": {
+ "enabled": false,
+ "max_size_kb": -1,
+ "max_objects": -1
+ },
+ "temp_url_keys": []
+ }
+
+Access Verification
+^^^^^^^^^^^^^^^^^^^
+
+Test S3 Access
+""""""""""""""
+
+You need to write and run a Python test script for verifying S3 access. The S3
+access test script will connect to the ``radosgw``, create a new bucket and
+list all buckets. The values for ``aws_access_key_id`` and
+``aws_secret_access_key`` are taken from the values of ``access_key`` and
+``secret_key`` returned by the ``radosgw-admin`` command.
+
+Execute the following steps:
+
+#. You will need to install the ``python-boto`` package::
+
+ sudo yum install python-boto
+
+#. Create the Python script::
+
+ vi s3test.py
+
+#. Add the following contents to the file::
+
+ import boto.s3.connection
+
+ access_key = 'I0PJDPCIYZ665MW88W9R'
+ secret_key = 'dxaXZ8U90SXydYzyS5ivamEP20hkLSUViiaR+ZDA'
+ conn = boto.connect_s3(
+ aws_access_key_id=access_key,
+ aws_secret_access_key=secret_key,
+ host='{hostname}', port={port},
+ is_secure=False, calling_format=boto.s3.connection.OrdinaryCallingFormat(),
+ )
+
+ bucket = conn.create_bucket('my-new-bucket')
+ for bucket in conn.get_all_buckets():
+ print "{name} {created}".format(
+ name=bucket.name,
+ created=bucket.creation_date,
+ )
+
+
+ Replace ``{hostname}`` with the hostname of the host where you have
+ configured the gateway service i.e., the ``gateway host``. Replace ``{port}``
+ with the port number you are using with Civetweb.
+
+#. Run the script::
+
+ python s3test.py
+
+ The output will be something like the following::
+
+ my-new-bucket 2015-02-16T17:09:10.000Z
+
+Test swift access
+"""""""""""""""""
+
+Swift access can be verified via the ``swift`` command line client. The command
+``man swift`` will provide more information on available command line options.
+
+To install ``swift`` client, execute the following commands. On Red Hat
+Enterprise Linux::
+
+ sudo yum install python-setuptools
+ sudo easy_install pip
+ sudo pip install --upgrade setuptools
+ sudo pip install --upgrade python-swiftclient
+
+On Debian-based distributions::
+
+ sudo apt-get install python-setuptools
+ sudo easy_install pip
+ sudo pip install --upgrade setuptools
+ sudo pip install --upgrade python-swiftclient
+
+To test swift access, execute the following::
+
+ swift -A http://{IP ADDRESS}:{port}/auth/1.0 -U testuser:swift -K '{swift_secret_key}' list
+
+Replace ``{IP ADDRESS}`` with the public IP address of the gateway server and
+``{swift_secret_key}`` with its value from the output of the ``radosgw-admin key
+create`` command executed for the ``swift`` user. Replace ``{port}`` with the port
+number you are using with Civetweb (e.g., ``7480`` is the default). If you
+don't replace the port, it will default to port ``80``.
+
+For example::
+
+ swift -A http://10.19.143.116:7480/auth/1.0 -U testuser:swift -K '244+fz2gSqoHwR3lYtSbIyomyPHf3i7rgSJrF/IA' list
+
+The output should be::
+
+ my-new-bucket
+
+.. _Preflight: ../../start/quick-start-preflight
diff --git a/src/ceph/doc/install/install-storage-cluster.rst b/src/ceph/doc/install/install-storage-cluster.rst
new file mode 100644
index 0000000..5da2c68
--- /dev/null
+++ b/src/ceph/doc/install/install-storage-cluster.rst
@@ -0,0 +1,91 @@
+==============================
+ Install Ceph Storage Cluster
+==============================
+
+This guide describes installing Ceph packages manually. This procedure
+is only for users who are not installing with a deployment tool such as
+``ceph-deploy``, ``chef``, ``juju``, etc.
+
+.. tip:: You can also use ``ceph-deploy`` to install Ceph packages, which may
+ be more convenient since you can install ``ceph`` on multiple hosts with
+ a single command.
+
+
+Installing with APT
+===================
+
+Once you have added either release or development packages to APT, you should
+update APT's database and install Ceph::
+
+ sudo apt-get update && sudo apt-get install ceph ceph-mds
+
+
+Installing with RPM
+===================
+
+To install Ceph with RPMs, execute the following steps:
+
+
+#. Install ``yum-plugin-priorities``. ::
+
+ sudo yum install yum-plugin-priorities
+
+#. Ensure ``/etc/yum/pluginconf.d/priorities.conf`` exists.
+
+#. Ensure ``priorities.conf`` enables the plugin. ::
+
+ [main]
+ enabled = 1
+
+#. Ensure your YUM ``ceph.repo`` entry includes ``priority=2``. See
+ `Get Packages`_ for details::
+
+ [ceph]
+ name=Ceph packages for $basearch
+ baseurl=https://download.ceph.com/rpm-{ceph-release}/{distro}/$basearch
+ enabled=1
+ priority=2
+ gpgcheck=1
+ gpgkey=https://download.ceph.com/keys/release.asc
+
+ [ceph-noarch]
+ name=Ceph noarch packages
+ baseurl=https://download.ceph.com/rpm-{ceph-release}/{distro}/noarch
+ enabled=1
+ priority=2
+ gpgcheck=1
+ gpgkey=https://download.ceph.com/keys/release.asc
+
+ [ceph-source]
+ name=Ceph source packages
+ baseurl=https://download.ceph.com/rpm-{ceph-release}/{distro}/SRPMS
+ enabled=0
+ priority=2
+ gpgcheck=1
+ gpgkey=https://download.ceph.com/keys/release.asc
+
+
+#. Install pre-requisite packages::
+
+ sudo yum install snappy leveldb gdisk python-argparse gperftools-libs
+
+
+Once you have added either release or development packages, or added a
+``ceph.repo`` file to ``/etc/yum.repos.d``, you can install Ceph packages. ::
+
+ sudo yum install ceph
+
+
+Installing a Build
+==================
+
+If you build Ceph from source code, you may install Ceph in user space
+by executing the following::
+
+ sudo make install
+
+If you install Ceph locally, ``make`` will place the executables in
+``/usr/local/bin``. You may add the Ceph configuration file to the
+``/usr/local/bin`` directory to run Ceph from a single directory.
+
+.. _Get Packages: ../get-packages
diff --git a/src/ceph/doc/install/install-vm-cloud.rst b/src/ceph/doc/install/install-vm-cloud.rst
new file mode 100644
index 0000000..a28a76a
--- /dev/null
+++ b/src/ceph/doc/install/install-vm-cloud.rst
@@ -0,0 +1,128 @@
+=========================================
+ Install Virtualization for Block Device
+=========================================
+
+If you intend to use Ceph Block Devices and the Ceph Storage Cluster as a
+backend for Virtual Machines (VMs) or :term:`Cloud Platforms`, the QEMU/KVM and
+``libvirt`` packages are important for enabling VMs and cloud platforms.
+Examples of VMs include: QEMU/KVM, Xen, VMware, LXC, VirtualBox, etc. Examples
+of Cloud Platforms include OpenStack, CloudStack, OpenNebula, etc.
+
+
+.. ditaa:: +---------------------------------------------------+
+ | libvirt |
+ +------------------------+--------------------------+
+ |
+ | configures
+ v
+ +---------------------------------------------------+
+ | QEMU |
+ +---------------------------------------------------+
+ | librbd |
+ +------------------------+-+------------------------+
+ | OSDs | | Monitors |
+ +------------------------+ +------------------------+
+
+
+Install QEMU
+============
+
+QEMU KVM can interact with Ceph Block Devices via ``librbd``, which is an
+important feature for using Ceph with cloud platforms. Once you install QEMU,
+see `QEMU and Block Devices`_ for usage.
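+
+As a quick, illustrative usage sketch (the ``data`` pool and ``foo`` image names
+are assumptions), you can ask ``qemu-img`` to create an image directly in a Ceph
+pool once QEMU is built with ``librbd`` support::
+
+ qemu-img create -f raw rbd:data/foo 10G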
+
+
+Debian Packages
+---------------
+
+QEMU packages are incorporated into Ubuntu 12.04 Precise Pangolin and later
+versions. To install QEMU, execute the following::
+
+ sudo apt-get install qemu
+
+
+RPM Packages
+------------
+
+To install QEMU, execute the following:
+
+
+#. Update your repositories. ::
+
+ sudo yum update
+
+#. Install QEMU for Ceph. ::
+
+ sudo yum install qemu-kvm qemu-kvm-tools qemu-img
+
+#. Install additional QEMU packages (optional)::
+
+ sudo yum install qemu-guest-agent qemu-guest-agent-win32
+
+
+Building QEMU
+-------------
+
+To build QEMU from source, use the following procedure::
+
+ cd {your-development-directory}
+ git clone git://git.qemu.org/qemu.git
+ cd qemu
+ ./configure --enable-rbd
+ make; make install
+
+
+
+Install libvirt
+===============
+
+To use ``libvirt`` with Ceph, you must have a running Ceph Storage Cluster, and
+you must have installed and configured QEMU. See `Using libvirt with Ceph Block
+Device`_ for usage.
+
+
+Debian Packages
+---------------
+
+``libvirt`` packages are incorporated into Ubuntu 12.04 Precise Pangolin and
+later versions of Ubuntu. To install ``libvirt`` on these distributions,
+execute the following::
+
+ sudo apt-get update && sudo apt-get install libvirt-bin
+
+
+RPM Packages
+------------
+
+To use ``libvirt`` with a Ceph Storage Cluster, you must have a running Ceph
+Storage Cluster and you must also install a version of QEMU with ``rbd`` format
+support. See `Install QEMU`_ for details.
+
+
+``libvirt`` packages are incorporated into the recent CentOS/RHEL distributions.
+To install ``libvirt``, execute the following::
+
+ sudo yum install libvirt
+
+
+Building ``libvirt``
+--------------------
+
+To build ``libvirt`` from source, clone the ``libvirt`` repository and use
+`AutoGen`_ to generate the build. Then, execute ``make`` and ``make install`` to
+complete the installation. For example::
+
+ git clone git://libvirt.org/libvirt.git
+ cd libvirt
+ ./autogen.sh
+ make
+ sudo make install
+
+See `libvirt Installation`_ for details.
+
+
+
+.. _libvirt Installation: http://www.libvirt.org/compiling.html
+.. _AutoGen: http://www.gnu.org/software/autogen/
+.. _QEMU and Block Devices: ../../rbd/qemu-rbd
+.. _Using libvirt with Ceph Block Device: ../../rbd/libvirt
diff --git a/src/ceph/doc/install/manual-deployment.rst b/src/ceph/doc/install/manual-deployment.rst
new file mode 100644
index 0000000..0d789a6
--- /dev/null
+++ b/src/ceph/doc/install/manual-deployment.rst
@@ -0,0 +1,488 @@
+===================
+ Manual Deployment
+===================
+
+All Ceph clusters require at least one monitor, and at least as many OSDs as
+copies of an object stored on the cluster. Bootstrapping the initial monitor(s)
+is the first step in deploying a Ceph Storage Cluster. Monitor deployment also
+sets important criteria for the entire cluster, such as the number of replicas
+for pools, the number of placement groups per OSD, the heartbeat intervals,
+whether authentication is required, etc. Most of these values are set by
+default, so it's useful to know about them when setting up your cluster for
+production.
+
+Following the same configuration as `Installation (Quick)`_, we will set up a
+cluster with ``node1`` as the monitor node, and ``node2`` and ``node3`` for
+OSD nodes.
+
+
+
+.. ditaa::
+ /------------------\ /----------------\
+ | Admin Node | | node1 |
+ | +-------->+ |
+ | | | cCCC |
+ \---------+--------/ \----------------/
+ |
+ | /----------------\
+ | | node2 |
+ +----------------->+ |
+ | | cCCC |
+ | \----------------/
+ |
+ | /----------------\
+ | | node3 |
+ +----------------->| |
+ | cCCC |
+ \----------------/
+
+
+Monitor Bootstrapping
+=====================
+
+Bootstrapping a monitor (a Ceph Storage Cluster, in theory) requires
+a number of things:
+
+- **Unique Identifier:** The ``fsid`` is a unique identifier for the cluster,
+ and stands for File System ID from the days when the Ceph Storage Cluster was
+ principally for the Ceph Filesystem. Ceph now supports native interfaces,
+ block devices, and object storage gateway interfaces too, so ``fsid`` is a
+ bit of a misnomer.
+
+- **Cluster Name:** Ceph clusters have a cluster name, which is a simple string
+ without spaces. The default cluster name is ``ceph``, but you may specify
+ a different cluster name. Overriding the default cluster name is
+ especially useful when you are working with multiple clusters and you need to
+clearly understand which cluster you are working with.
+
+ For example, when you run multiple clusters in a `federated architecture`_,
+ the cluster name (e.g., ``us-west``, ``us-east``) identifies the cluster for
+ the current CLI session. **Note:** To identify the cluster name on the
+ command line interface, specify the Ceph configuration file with the
+ cluster name (e.g., ``ceph.conf``, ``us-west.conf``, ``us-east.conf``, etc.).
+ Also see CLI usage (``ceph --cluster {cluster-name}``).
+
+- **Monitor Name:** Each monitor instance within a cluster has a unique name.
+ In common practice, the Ceph Monitor name is the host name (we recommend one
+ Ceph Monitor per host, and no commingling of Ceph OSD Daemons with
+ Ceph Monitors). You may retrieve the short hostname with ``hostname -s``.
+
+- **Monitor Map:** Bootstrapping the initial monitor(s) requires you to
+ generate a monitor map. The monitor map requires the ``fsid``, the cluster
+ name (or uses the default), and at least one host name and its IP address.
+
+- **Monitor Keyring**: Monitors communicate with each other via a
+ secret key. You must generate a keyring with a monitor secret and provide
+ it when bootstrapping the initial monitor(s).
+
+- **Administrator Keyring**: To use the ``ceph`` CLI tools, you must have
+ a ``client.admin`` user. So you must generate the admin user and keyring,
+ and you must also add the ``client.admin`` user to the monitor keyring.
+
+The foregoing requirements do not imply the creation of a Ceph Configuration
+file. However, as a best practice, we recommend creating a Ceph configuration
+file and populating it with the ``fsid``, the ``mon initial members`` and the
+``mon host`` settings.
+
+You can get and set all of the monitor settings at runtime as well. However,
+a Ceph Configuration file may contain only those settings that override the
+default values. When you add settings to a Ceph configuration file, these
+settings override the default settings. Maintaining those settings in a
+Ceph configuration file makes it easier to maintain your cluster.
+
+The procedure is as follows:
+
+
+#. Log in to the initial monitor node(s)::
+
+ ssh {hostname}
+
+ For example::
+
+ ssh node1
+
+
+#. Ensure you have a directory for the Ceph configuration file. By default,
+ Ceph uses ``/etc/ceph``. When you install ``ceph``, the installer will
+ create the ``/etc/ceph`` directory automatically. ::
+
+ ls /etc/ceph
+
+ **Note:** Deployment tools may remove this directory when purging a
+ cluster (e.g., ``ceph-deploy purgedata {node-name}``, ``ceph-deploy purge
+ {node-name}``).
+
+#. Create a Ceph configuration file. By default, Ceph uses
+ ``ceph.conf``, where ``ceph`` reflects the cluster name. ::
+
+ sudo vim /etc/ceph/ceph.conf
+
+
+#. Generate a unique ID (i.e., ``fsid``) for your cluster. ::
+
+ uuidgen
+
+
+#. Add the unique ID to your Ceph configuration file. ::
+
+ fsid = {UUID}
+
+ For example::
+
+ fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
+
+
+#. Add the initial monitor(s) to your Ceph configuration file. ::
+
+ mon initial members = {hostname}[,{hostname}]
+
+ For example::
+
+ mon initial members = node1
+
+
+#. Add the IP address(es) of the initial monitor(s) to your Ceph configuration
+ file and save the file. ::
+
+ mon host = {ip-address}[,{ip-address}]
+
+ For example::
+
+ mon host = 192.168.0.1
+
+ **Note:** You may use IPv6 addresses instead of IPv4 addresses, but
+ you must set ``ms bind ipv6`` to ``true``. See `Network Configuration
+ Reference`_ for details about network configuration.
+
+#. Create a keyring for your cluster and generate a monitor secret key. ::
+
+ ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
+
+
+#. Generate an administrator keyring, generate a ``client.admin`` user and add
+ the user to the keyring. ::
+
+ sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
+
+
+#. Add the ``client.admin`` key to the ``ceph.mon.keyring``. ::
+
+ ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
+
+
+#. Generate a monitor map using the hostname(s), host IP address(es) and the FSID.
+ Save it as ``/tmp/monmap``::
+
+ monmaptool --create --add {hostname} {ip-address} --fsid {uuid} /tmp/monmap
+
+ For example::
+
+ monmaptool --create --add node1 192.168.0.1 --fsid a7f64266-0894-4f1e-a635-d0aeaca0e993 /tmp/monmap
+
+
+#. Create a default data directory (or directories) on the monitor host(s). ::
+
+ sudo mkdir /var/lib/ceph/mon/{cluster-name}-{hostname}
+
+ For example::
+
+ sudo mkdir /var/lib/ceph/mon/ceph-node1
+
+ See `Monitor Config Reference - Data`_ for details.
+
+#. Populate the monitor daemon(s) with the monitor map and keyring. ::
+
+ sudo -u ceph ceph-mon [--cluster {cluster-name}] --mkfs -i {hostname} --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
+
+ For example::
+
+ sudo -u ceph ceph-mon --mkfs -i node1 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
+
+
+#. Consider settings for a Ceph configuration file. Common settings include
+ the following::
+
+ [global]
+ fsid = {cluster-id}
+ mon initial members = {hostname}[, {hostname}]
+ mon host = {ip-address}[, {ip-address}]
+ public network = {network}[, {network}]
+ cluster network = {network}[, {network}]
+ auth cluster required = cephx
+ auth service required = cephx
+ auth client required = cephx
+ osd journal size = {n}
+ osd pool default size = {n} # Write an object n times.
+        osd pool default min size = {n} # Allow writing n copies in a degraded state.
+ osd pool default pg num = {n}
+ osd pool default pgp num = {n}
+ osd crush chooseleaf type = {n}
+
+ In the foregoing example, the ``[global]`` section of the configuration might
+ look like this::
+
+ [global]
+ fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
+ mon initial members = node1
+ mon host = 192.168.0.1
+ public network = 192.168.0.0/24
+ auth cluster required = cephx
+ auth service required = cephx
+ auth client required = cephx
+ osd journal size = 1024
+ osd pool default size = 2
+ osd pool default min size = 1
+ osd pool default pg num = 333
+ osd pool default pgp num = 333
+ osd crush chooseleaf type = 1
+
+#. Touch the ``done`` file.
+
+ Mark that the monitor is created and ready to be started::
+
+ sudo touch /var/lib/ceph/mon/ceph-node1/done
+
+#. Start the monitor(s).
+
+ For Ubuntu, use Upstart::
+
+ sudo start ceph-mon id=node1 [cluster={cluster-name}]
+
+   In this case, to allow the start of the daemon at each reboot you
+   must create an empty file like this::
+
+ sudo touch /var/lib/ceph/mon/{cluster-name}-{hostname}/upstart
+
+ For example::
+
+ sudo touch /var/lib/ceph/mon/ceph-node1/upstart
+
+ For Debian/CentOS/RHEL, use sysvinit::
+
+ sudo /etc/init.d/ceph start mon.node1
+
+
+#. Verify that Ceph created the default pools. ::
+
+ ceph osd lspools
+
+ You should see output like this::
+
+ 0 data,1 metadata,2 rbd,
+
+
+#. Verify that the monitor is running. ::
+
+ ceph -s
+
+ You should see output that the monitor you started is up and running, and
+ you should see a health error indicating that placement groups are stuck
+ inactive. It should look something like this::
+
+ cluster a7f64266-0894-4f1e-a635-d0aeaca0e993
+ health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds
+ monmap e1: 1 mons at {node1=192.168.0.1:6789/0}, election epoch 1, quorum 0 node1
+ osdmap e1: 0 osds: 0 up, 0 in
+ pgmap v2: 192 pgs, 3 pools, 0 bytes data, 0 objects
+ 0 kB used, 0 kB / 0 kB avail
+ 192 creating
+
+ **Note:** Once you add OSDs and start them, the placement group health errors
+ should disappear. See the next section for details.
+
+Manager daemon configuration
+============================
+
+On each node where you run a ceph-mon daemon, you should also set up a ceph-mgr daemon.
+
+See :ref:`mgr-administrator-guide` for details.
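+
+A minimal sketch of what that involves, assuming the standard ``mgr``
+capability profile and the default data path (see the guide linked above for
+the authoritative steps), is to create a key for the daemon, give it a data
+directory, and start it::
+
+    ceph auth get-or-create mgr.{id} mon 'allow profile mgr' osd 'allow *' mds 'allow *'
+    sudo -u ceph mkdir /var/lib/ceph/mgr/ceph-{id}
+    # save the key printed by the first command into
+    # /var/lib/ceph/mgr/ceph-{id}/keyring, then start the daemon
+    sudo -u ceph ceph-mgr -i {id}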
+
+Adding OSDs
+===========
+
+Once you have your initial monitor(s) running, you should add OSDs. Your cluster
+cannot reach an ``active + clean`` state until you have enough OSDs to handle the
+number of copies of an object (e.g., ``osd pool default size = 2`` requires at
+least two OSDs). After bootstrapping your monitor, your cluster has a default
+CRUSH map; however, the CRUSH map doesn't have any Ceph OSD Daemons mapped to
+a Ceph Node.
+
+
+Short Form
+----------
+
+Ceph provides the ``ceph-disk`` utility, which can prepare a disk, partition or
+directory for use with Ceph. The ``ceph-disk`` utility creates the OSD ID by
+incrementing the index. Additionally, ``ceph-disk`` will add the new OSD to the
+CRUSH map under the host for you. Execute ``ceph-disk -h`` for CLI details.
+The ``ceph-disk`` utility automates the steps of the `Long Form`_ below. To
+create the first two OSDs with the short form procedure, execute the following
+on ``node2`` and ``node3``:
+
+
+#. Prepare the OSD. ::
+
+ ssh {node-name}
+ sudo ceph-disk prepare --cluster {cluster-name} --cluster-uuid {uuid} {data-path} [{journal-path}]
+
+ For example::
+
+ ssh node1
+ sudo ceph-disk prepare --cluster ceph --cluster-uuid a7f64266-0894-4f1e-a635-d0aeaca0e993 --fs-type ext4 /dev/hdd1
+
+
+#. Activate the OSD::
+
+ sudo ceph-disk activate {data-path} [--activate-key {path}]
+
+ For example::
+
+ sudo ceph-disk activate /dev/hdd1
+
+ **Note:** Use the ``--activate-key`` argument if you do not have a copy
+ of ``/var/lib/ceph/bootstrap-osd/{cluster}.keyring`` on the Ceph Node.
+
+
+Long Form
+---------
+
+Without the benefit of any helper utilities, create an OSD and add it to the
+cluster and CRUSH map with the following procedure. To create the first two
+OSDs with the long form procedure, execute the following steps for each OSD.
+
+.. note:: This procedure does not describe deployment on top of dm-crypt
+ making use of the dm-crypt 'lockbox'.
+
+#. Connect to the OSD host and become root. ::
+
+ ssh {node-name}
+ sudo bash
+
+#. Generate a UUID for the OSD. ::
+
+ UUID=$(uuidgen)
+
+#. Generate a cephx key for the OSD. ::
+
+ OSD_SECRET=$(ceph-authtool --gen-print-key)
+
+#. Create the OSD. Note that an OSD ID can be provided as an
+ additional argument to ``ceph osd new`` if you need to reuse a
+ previously-destroyed OSD id. We assume that the
+ ``client.bootstrap-osd`` key is present on the machine. You may
+ alternatively execute this command as ``client.admin`` on a
+ different host where that key is present.::
+
+ ID=$(echo "{\"cephx_secret\": \"$OSD_SECRET\"}" | \
+ ceph osd new $UUID -i - \
+ -n client.bootstrap-osd -k /var/lib/ceph/bootstrap-osd/ceph.keyring)
+
+#. Create the default directory on your new OSD. ::
+
+ mkdir /var/lib/ceph/osd/ceph-$ID
+
+#. If the OSD is for a drive other than the OS drive, prepare it
+ for use with Ceph, and mount it to the directory you just created. ::
+
+ mkfs.xfs /dev/{DEV}
+ mount /dev/{DEV} /var/lib/ceph/osd/ceph-$ID
+
+#. Write the secret to the OSD keyring file. ::
+
+ ceph-authtool --create-keyring /var/lib/ceph/osd/ceph-$ID/keyring \
+ --name osd.$ID --add-key $OSD_SECRET
+
+#. Initialize the OSD data directory. ::
+
+ ceph-osd -i $ID --mkfs --osd-uuid $UUID
+
+#. Fix ownership. ::
+
+ chown -R ceph:ceph /var/lib/ceph/osd/ceph-$ID
+
+#. After you add an OSD to Ceph, the OSD is in your configuration. However,
+ it is not yet running. You must start
+ your new OSD before it can begin receiving data.
+
+ For modern systemd distributions::
+
+ systemctl enable ceph-osd@$ID
+ systemctl start ceph-osd@$ID
+
+ For example::
+
+ systemctl enable ceph-osd@12
+ systemctl start ceph-osd@12
+
+
+Adding MDS
+==========
+
+In the below instructions, ``{id}`` is an arbitrary name, such as the hostname of the machine.
+
+#. Create the mds data directory.::
+
+ mkdir -p /var/lib/ceph/mds/{cluster-name}-{id}
+
+#. Create a keyring.::
+
+ ceph-authtool --create-keyring /var/lib/ceph/mds/{cluster-name}-{id}/keyring --gen-key -n mds.{id}
+
+#. Import the keyring and set caps.::
+
+ ceph auth add mds.{id} osd "allow rwx" mds "allow" mon "allow profile mds" -i /var/lib/ceph/mds/{cluster}-{id}/keyring
+
+#. Add to ceph.conf.::
+
+ [mds.{id}]
+ host = {id}
+
+#. Start the daemon the manual way.::
+
+ ceph-mds --cluster {cluster-name} -i {id} -m {mon-hostname}:{mon-port} [-f]
+
+#. Start the daemon the right way (using ceph.conf entry).::
+
+ service ceph start
+
+#. If starting the daemon fails with this error::
+
+ mds.-1.0 ERROR: failed to authenticate: (22) Invalid argument
+
+ Then make sure you do not have a keyring set in ceph.conf in the global section; move it to the client section; or add a keyring setting specific to this mds daemon. And verify that you see the same key in the mds data directory and ``ceph auth get mds.{id}`` output.
+
+#. Now you are ready to `create a Ceph filesystem`_.
+
+
+Summary
+=======
+
+Once you have your monitor and two OSDs up and running, you can watch the
+placement groups peer by executing the following::
+
+ ceph -w
+
+To view the tree, execute the following::
+
+ ceph osd tree
+
+You should see output that looks something like this::
+
+ # id weight type name up/down reweight
+ -1 2 root default
+ -2 2 host node1
+ 0 1 osd.0 up 1
+ -3 1 host node2
+ 1 1 osd.1 up 1
+
+To add (or remove) additional monitors, see `Add/Remove Monitors`_.
+To add (or remove) additional Ceph OSD Daemons, see `Add/Remove OSDs`_.
+
+
+.. _federated architecture: ../../radosgw/federated-config
+.. _Installation (Quick): ../../start
+.. _Add/Remove Monitors: ../../rados/operations/add-or-rm-mons
+.. _Add/Remove OSDs: ../../rados/operations/add-or-rm-osds
+.. _Network Configuration Reference: ../../rados/configuration/network-config-ref
+.. _Monitor Config Reference - Data: ../../rados/configuration/mon-config-ref#data
+.. _create a Ceph filesystem: ../../cephfs/createfs
diff --git a/src/ceph/doc/install/manual-freebsd-deployment.rst b/src/ceph/doc/install/manual-freebsd-deployment.rst
new file mode 100644
index 0000000..99386ae
--- /dev/null
+++ b/src/ceph/doc/install/manual-freebsd-deployment.rst
@@ -0,0 +1,623 @@
+==============================
+ Manual Deployment on FreeBSD
+==============================
+
+This is largely a copy of the regular Manual Deployment, with FreeBSD specifics.
+The difference lies in two parts: the underlying disk format, and the way the
+tools are used.
+
+All Ceph clusters require at least one monitor, and at least as many OSDs as
+copies of an object stored on the cluster. Bootstrapping the initial monitor(s)
+is the first step in deploying a Ceph Storage Cluster. Monitor deployment also
+sets important criteria for the entire cluster, such as the number of replicas
+for pools, the number of placement groups per OSD, the heartbeat intervals,
+whether authentication is required, etc. Most of these values are set by
+default, so it's useful to know about them when setting up your cluster for
+production.
+
+Following the same configuration as `Installation (Quick)`_, we will set up a
+cluster with ``node1`` as the monitor node, and ``node2`` and ``node3`` for
+OSD nodes.
+
+
+
+.. ditaa::
+ /------------------\ /----------------\
+ | Admin Node | | node1 |
+ | +-------->+ |
+ | | | cCCC |
+ \---------+--------/ \----------------/
+ |
+ | /----------------\
+ | | node2 |
+ +----------------->+ |
+ | | cCCC |
+ | \----------------/
+ |
+ | /----------------\
+ | | node3 |
+ +----------------->| |
+ | cCCC |
+ \----------------/
+
+
+
+Disklayout on FreeBSD
+=====================
+
+The current implementation works on ZFS pools:
+
+* All Ceph data is created in /var/lib/ceph
+* Log files go into /var/log/ceph
+* PID files go into /var/run
+* One ZFS pool is allocated per OSD, like::
+
+    gpart create -s GPT ada1
+    gpart add -t freebsd-zfs -l osd1 ada1
+    zpool create -o mountpoint=/var/lib/ceph/osd/osd.1 osd1 gpt/osd1
+
+* Some cache and log (ZIL) devices can be attached.
+  Please note that this is different from the Ceph journals. Cache and log are
+  completely transparent to Ceph; they only help the filesystem keep the system
+  consistent and improve performance.
+ Assuming that ada2 is an SSD::
+
+ gpart create -s GPT ada2
+ gpart add -t freebsd-zfs -l osd1-log -s 1G ada2
+ zpool add osd1 log gpt/osd1-log
+ gpart add -t freebsd-zfs -l osd1-cache -s 10G ada2
+    zpool add osd1 cache gpt/osd1-cache
+
+* Note: *UFS2 does not allow large xattrs*
+
+
+Configuration
+-------------
+
+By FreeBSD convention, extra software is installed under ``/usr/local/``. This
+means that the default location of the Ceph configuration file is
+``/usr/local/etc/ceph/ceph.conf`` rather than ``/etc/ceph/ceph.conf``. The
+simplest approach is to create a symlink from ``/etc/ceph`` to
+``/usr/local/etc/ceph``::
+
+ ln -s /usr/local/etc/ceph /etc/ceph
+
+A sample file is provided in ``/usr/local/share/doc/ceph/sample.ceph.conf``.
+Note that ``/usr/local/etc/ceph/ceph.conf`` will be found by most tools;
+linking it to ``/etc/ceph/ceph.conf`` helps with scripts, extra tools, and
+examples found on discussion lists.
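+
+For example (assuming you keep the default cluster name ``ceph``), you could
+start from the provided sample file and edit it in place::
+
+    cp /usr/local/share/doc/ceph/sample.ceph.conf /usr/local/etc/ceph/ceph.conf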
+
+Monitor Bootstrapping
+=====================
+
+Bootstrapping a monitor (a Ceph Storage Cluster, in theory) requires
+a number of things:
+
+- **Unique Identifier:** The ``fsid`` is a unique identifier for the cluster,
+ and stands for File System ID from the days when the Ceph Storage Cluster was
+ principally for the Ceph Filesystem. Ceph now supports native interfaces,
+ block devices, and object storage gateway interfaces too, so ``fsid`` is a
+ bit of a misnomer.
+
+- **Cluster Name:** Ceph clusters have a cluster name, which is a simple string
+ without spaces. The default cluster name is ``ceph``, but you may specify
+ a different cluster name. Overriding the default cluster name is
+ especially useful when you are working with multiple clusters and you need to
+  clearly understand which cluster you are working with.
+
+ For example, when you run multiple clusters in a `federated architecture`_,
+ the cluster name (e.g., ``us-west``, ``us-east``) identifies the cluster for
+ the current CLI session. **Note:** To identify the cluster name on the
+  command line interface, specify the Ceph configuration file with the
+ cluster name (e.g., ``ceph.conf``, ``us-west.conf``, ``us-east.conf``, etc.).
+ Also see CLI usage (``ceph --cluster {cluster-name}``).
+
+- **Monitor Name:** Each monitor instance within a cluster has a unique name.
+ In common practice, the Ceph Monitor name is the host name (we recommend one
+ Ceph Monitor per host, and no commingling of Ceph OSD Daemons with
+ Ceph Monitors). You may retrieve the short hostname with ``hostname -s``.
+
+- **Monitor Map:** Bootstrapping the initial monitor(s) requires you to
+ generate a monitor map. The monitor map requires the ``fsid``, the cluster
+ name (or uses the default), and at least one host name and its IP address.
+
+- **Monitor Keyring**: Monitors communicate with each other via a
+ secret key. You must generate a keyring with a monitor secret and provide
+ it when bootstrapping the initial monitor(s).
+
+- **Administrator Keyring**: To use the ``ceph`` CLI tools, you must have
+ a ``client.admin`` user. So you must generate the admin user and keyring,
+ and you must also add the ``client.admin`` user to the monitor keyring.
+
+The foregoing requirements do not imply the creation of a Ceph Configuration
+file. However, as a best practice, we recommend creating a Ceph configuration
+file and populating it with the ``fsid``, the ``mon initial members`` and the
+``mon host`` settings.
+
+You can get and set all of the monitor settings at runtime as well. However,
+a Ceph Configuration file may contain only those settings that override the
+default values. When you add settings to a Ceph configuration file, these
+settings override the default settings. Maintaining those settings in a
+Ceph configuration file makes it easier to maintain your cluster.
+
+The procedure is as follows:
+
+
+#. Log in to the initial monitor node(s)::
+
+ ssh {hostname}
+
+ For example::
+
+ ssh node1
+
+
+#. Ensure you have a directory for the Ceph configuration file. By default,
+ Ceph uses ``/etc/ceph``. When you install ``ceph``, the installer will
+ create the ``/etc/ceph`` directory automatically. ::
+
+ ls /etc/ceph
+
+ **Note:** Deployment tools may remove this directory when purging a
+ cluster (e.g., ``ceph-deploy purgedata {node-name}``, ``ceph-deploy purge
+ {node-name}``).
+
+#. Create a Ceph configuration file. By default, Ceph uses
+ ``ceph.conf``, where ``ceph`` reflects the cluster name. ::
+
+ sudo vim /etc/ceph/ceph.conf
+
+
+#. Generate a unique ID (i.e., ``fsid``) for your cluster. ::
+
+ uuidgen
+
+
+#. Add the unique ID to your Ceph configuration file. ::
+
+ fsid = {UUID}
+
+ For example::
+
+ fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
+
+
+#. Add the initial monitor(s) to your Ceph configuration file. ::
+
+ mon initial members = {hostname}[,{hostname}]
+
+ For example::
+
+ mon initial members = node1
+
+
+#. Add the IP address(es) of the initial monitor(s) to your Ceph configuration
+ file and save the file. ::
+
+ mon host = {ip-address}[,{ip-address}]
+
+ For example::
+
+ mon host = 192.168.0.1
+
+ **Note:** You may use IPv6 addresses instead of IPv4 addresses, but
+ you must set ``ms bind ipv6`` to ``true``. See `Network Configuration
+ Reference`_ for details about network configuration.
+
+#. Create a keyring for your cluster and generate a monitor secret key. ::
+
+ ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
+
+
+#. Generate an administrator keyring, generate a ``client.admin`` user and add
+ the user to the keyring. ::
+
+ sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
+
+
+#. Add the ``client.admin`` key to the ``ceph.mon.keyring``. ::
+
+ ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
+
+
+#. Generate a monitor map using the hostname(s), host IP address(es) and the FSID.
+ Save it as ``/tmp/monmap``::
+
+ monmaptool --create --add {hostname} {ip-address} --fsid {uuid} /tmp/monmap
+
+ For example::
+
+ monmaptool --create --add node1 192.168.0.1 --fsid a7f64266-0894-4f1e-a635-d0aeaca0e993 /tmp/monmap
+
+
+#. Create a default data directory (or directories) on the monitor host(s). ::
+
+ sudo mkdir /var/lib/ceph/mon/{cluster-name}-{hostname}
+
+ For example::
+
+ sudo mkdir /var/lib/ceph/mon/ceph-node1
+
+ See `Monitor Config Reference - Data`_ for details.
+
+#. Populate the monitor daemon(s) with the monitor map and keyring. ::
+
+ sudo -u ceph ceph-mon [--cluster {cluster-name}] --mkfs -i {hostname} --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
+
+ For example::
+
+ sudo -u ceph ceph-mon --mkfs -i node1 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
+
+
+#. Consider settings for a Ceph configuration file. Common settings include
+ the following::
+
+ [global]
+ fsid = {cluster-id}
+ mon initial members = {hostname}[, {hostname}]
+ mon host = {ip-address}[, {ip-address}]
+ public network = {network}[, {network}]
+ cluster network = {network}[, {network}]
+ auth cluster required = cephx
+ auth service required = cephx
+ auth client required = cephx
+ osd journal size = {n}
+ osd pool default size = {n} # Write an object n times.
+        osd pool default min size = {n} # Allow writing n copies in a degraded state.
+ osd pool default pg num = {n}
+ osd pool default pgp num = {n}
+ osd crush chooseleaf type = {n}
+
+ In the foregoing example, the ``[global]`` section of the configuration might
+ look like this::
+
+ [global]
+ fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
+ mon initial members = node1
+ mon host = 192.168.0.1
+ public network = 192.168.0.0/24
+ auth cluster required = cephx
+ auth service required = cephx
+ auth client required = cephx
+ osd journal size = 1024
+ osd pool default size = 2
+ osd pool default min size = 1
+ osd pool default pg num = 333
+ osd pool default pgp num = 333
+ osd crush chooseleaf type = 1
+
+#. Touch the ``done`` file.
+
+ Mark that the monitor is created and ready to be started::
+
+ sudo touch /var/lib/ceph/mon/ceph-node1/done
+
+#. For FreeBSD, an entry for every monitor needs to be added to the config
+   file. (This requirement will be removed in a future release.)
+
+ The entry should look like::
+
+ [mon]
+ [mon.node1]
+     host = node1     # this name must be resolvable
+
+
+#. Start the monitor(s).
+
+ For Ubuntu, use Upstart::
+
+ sudo start ceph-mon id=node1 [cluster={cluster-name}]
+
+   In this case, to allow the start of the daemon at each reboot you
+   must create an empty file like this::
+
+ sudo touch /var/lib/ceph/mon/{cluster-name}-{hostname}/upstart
+
+ For example::
+
+ sudo touch /var/lib/ceph/mon/ceph-node1/upstart
+
+ For Debian/CentOS/RHEL, use sysvinit::
+
+ sudo /etc/init.d/ceph start mon.node1
+
+   For FreeBSD, use the rc.d init scripts (called bsdrc in Ceph)::
+
+      sudo service ceph start mon.node1
+
+   For this to work, /etc/rc.conf also needs an entry to enable Ceph::
+
+      echo 'ceph_enable="YES"' >> /etc/rc.conf
+
+
+#. Verify that Ceph created the default pools. ::
+
+ ceph osd lspools
+
+ You should see output like this::
+
+ 0 data,1 metadata,2 rbd,
+
+
+#. Verify that the monitor is running. ::
+
+ ceph -s
+
+ You should see output that the monitor you started is up and running, and
+ you should see a health error indicating that placement groups are stuck
+ inactive. It should look something like this::
+
+ cluster a7f64266-0894-4f1e-a635-d0aeaca0e993
+ health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds
+ monmap e1: 1 mons at {node1=192.168.0.1:6789/0}, election epoch 1, quorum 0 node1
+ osdmap e1: 0 osds: 0 up, 0 in
+ pgmap v2: 192 pgs, 3 pools, 0 bytes data, 0 objects
+ 0 kB used, 0 kB / 0 kB avail
+ 192 creating
+
+ **Note:** Once you add OSDs and start them, the placement group health errors
+ should disappear. See the next section for details.
+
+
+Adding OSDs
+===========
+
+Once you have your initial monitor(s) running, you should add OSDs. Your cluster
+cannot reach an ``active + clean`` state until you have enough OSDs to handle the
+number of copies of an object (e.g., ``osd pool default size = 2`` requires at
+least two OSDs). After bootstrapping your monitor, your cluster has a default
+CRUSH map; however, the CRUSH map doesn't have any Ceph OSD Daemons mapped to
+a Ceph Node.
+
+
+Short Form
+----------
+
+Ceph provides the ``ceph-disk`` utility, which can prepare a disk, partition or
+directory for use with Ceph. The ``ceph-disk`` utility creates the OSD ID by
+incrementing the index. Additionally, ``ceph-disk`` will add the new OSD to the
+CRUSH map under the host for you. Execute ``ceph-disk -h`` for CLI details.
+The ``ceph-disk`` utility automates the steps of the `Long Form`_ below. To
+create the first two OSDs with the short form procedure, execute the following
+on ``node2`` and ``node3``:
+
+
+#. Prepare the OSD.
+
+   On FreeBSD, only existing directories can be used to create OSDs::
+
+ ssh {node-name}
+ sudo ceph-disk prepare --cluster {cluster-name} --cluster-uuid {uuid} {path-to-ceph-osd-directory}
+
+ For example::
+
+ ssh node1
+ sudo ceph-disk prepare --cluster ceph --cluster-uuid a7f64266-0894-4f1e-a635-d0aeaca0e993 /var/lib/ceph/osd/osd.1
+
+
+#. Activate the OSD::
+
+ sudo ceph-disk activate {data-path} [--activate-key {path}]
+
+ For example::
+
+ sudo ceph-disk activate /var/lib/ceph/osd/osd.1
+
+ **Note:** Use the ``--activate-key`` argument if you do not have a copy
+ of ``/var/lib/ceph/bootstrap-osd/{cluster}.keyring`` on the Ceph Node.
+
+   FreeBSD does not auto-start the OSDs and also requires an entry in
+   ``ceph.conf``, one for each OSD::
+
+ [osd]
+ [osd.1]
+     host = node1     # this name must be resolvable
+
+
+Long Form
+---------
+
+Without the benefit of any helper utilities, create an OSD and add it to the
+cluster and CRUSH map with the following procedure. To create the first two
+OSDs with the long form procedure, execute the following on ``node2`` and
+``node3``:
+
+#. Connect to the OSD host. ::
+
+ ssh {node-name}
+
+#. Generate a UUID for the OSD. ::
+
+ uuidgen
+
+
+#. Create the OSD. If no UUID is given, it will be set automatically when the
+ OSD starts up. The following command will output the OSD number, which you
+ will need for subsequent steps. ::
+
+ ceph osd create [{uuid} [{id}]]
+
+
+#. Create the default directory on your new OSD. ::
+
+ ssh {new-osd-host}
+ sudo mkdir /var/lib/ceph/osd/{cluster-name}-{osd-number}
+
+   The ZFS instructions under `Disklayout on FreeBSD`_ above show how to do
+   this on FreeBSD.
+
+
+#. If the OSD is for a drive other than the OS drive, prepare it
+ for use with Ceph, and mount it to the directory you just created.
+
+
+#. Initialize the OSD data directory. ::
+
+ ssh {new-osd-host}
+ sudo ceph-osd -i {osd-num} --mkfs --mkkey --osd-uuid [{uuid}]
+
+   The directory must be empty before you can run ``ceph-osd`` with the
+   ``--mkkey`` option. In addition, if you use a custom cluster name, the
+   ``ceph-osd`` tool requires that it be specified with the ``--cluster`` option.
+
+
+#. Register the OSD authentication key. The value of ``ceph`` for
+ ``ceph-{osd-num}`` in the path is the ``$cluster-$id``. If your
+ cluster name differs from ``ceph``, use your cluster name instead.::
+
+ sudo ceph auth add osd.{osd-num} osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/{cluster-name}-{osd-num}/keyring
+
+
+#. Add your Ceph Node to the CRUSH map. ::
+
+ ceph [--cluster {cluster-name}] osd crush add-bucket {hostname} host
+
+ For example::
+
+ ceph osd crush add-bucket node1 host
+
+
+#. Place the Ceph Node under the root ``default``. ::
+
+ ceph osd crush move node1 root=default
+
+
+#. Add the OSD to the CRUSH map so that it can begin receiving data. You may
+ also decompile the CRUSH map, add the OSD to the device list, add the host as a
+ bucket (if it's not already in the CRUSH map), add the device as an item in the
+ host, assign it a weight, recompile it and set it. ::
+
+ ceph [--cluster {cluster-name}] osd crush add {id-or-name} {weight} [{bucket-type}={bucket-name} ...]
+
+ For example::
+
+ ceph osd crush add osd.0 1.0 host=node1
+
+
+#. After you add an OSD to Ceph, the OSD is in your configuration. However,
+ it is not yet running. The OSD is ``down`` and ``in``. You must start
+ your new OSD before it can begin receiving data.
+
+ For Ubuntu, use Upstart::
+
+ sudo start ceph-osd id={osd-num} [cluster={cluster-name}]
+
+ For example::
+
+ sudo start ceph-osd id=0
+ sudo start ceph-osd id=1
+
+ For Debian/CentOS/RHEL, use sysvinit::
+
+ sudo /etc/init.d/ceph start osd.{osd-num} [--cluster {cluster-name}]
+
+ For example::
+
+ sudo /etc/init.d/ceph start osd.0
+ sudo /etc/init.d/ceph start osd.1
+
+ In this case, to allow the start of the daemon at each reboot you
+ must create an empty file like this::
+
+ sudo touch /var/lib/ceph/osd/{cluster-name}-{osd-num}/sysvinit
+
+ For example::
+
+ sudo touch /var/lib/ceph/osd/ceph-0/sysvinit
+ sudo touch /var/lib/ceph/osd/ceph-1/sysvinit
+
+ Once you start your OSD, it is ``up`` and ``in``.
+
+ For FreeBSD using rc.d init.
+
+ After adding the OSD to ``ceph.conf``::
+
+ sudo service ceph start osd.{osd-num}
+
+ For example::
+
+ sudo service ceph start osd.0
+ sudo service ceph start osd.1
+
+ In this case, to allow the start of the daemon at each reboot you
+ must create an empty file like this::
+
+ sudo touch /var/lib/ceph/osd/{cluster-name}-{osd-num}/bsdrc
+
+ For example::
+
+ sudo touch /var/lib/ceph/osd/ceph-0/bsdrc
+ sudo touch /var/lib/ceph/osd/ceph-1/bsdrc
+
+ Once you start your OSD, it is ``up`` and ``in``.
+
+
+
+Adding MDS
+==========
+
+In the below instructions, ``{id}`` is an arbitrary name, such as the hostname of the machine.
+
+#. Create the mds data directory.::
+
+ mkdir -p /var/lib/ceph/mds/{cluster-name}-{id}
+
+#. Create a keyring.::
+
+ ceph-authtool --create-keyring /var/lib/ceph/mds/{cluster-name}-{id}/keyring --gen-key -n mds.{id}
+
+#. Import the keyring and set caps.::
+
+ ceph auth add mds.{id} osd "allow rwx" mds "allow" mon "allow profile mds" -i /var/lib/ceph/mds/{cluster}-{id}/keyring
+
+#. Add to ceph.conf.::
+
+ [mds.{id}]
+ host = {id}
+
+#. Start the daemon the manual way.::
+
+ ceph-mds --cluster {cluster-name} -i {id} -m {mon-hostname}:{mon-port} [-f]
+
+#. Start the daemon the right way (using ceph.conf entry).::
+
+ service ceph start
+
+#. If starting the daemon fails with this error::
+
+ mds.-1.0 ERROR: failed to authenticate: (22) Invalid argument
+
+ Then make sure you do not have a keyring set in ceph.conf in the global section; move it to the client section; or add a keyring setting specific to this mds daemon. And verify that you see the same key in the mds data directory and ``ceph auth get mds.{id}`` output.
+
+#. Now you are ready to `create a Ceph filesystem`_.
+
+
+Summary
+=======
+
+Once you have your monitor and two OSDs up and running, you can watch the
+placement groups peer by executing the following::
+
+ ceph -w
+
+To view the tree, execute the following::
+
+ ceph osd tree
+
+You should see output that looks something like this::
+
+ # id weight type name up/down reweight
+ -1 2 root default
+ -2 2 host node1
+ 0 1 osd.0 up 1
+ -3 1 host node2
+ 1 1 osd.1 up 1
+
+To add (or remove) additional monitors, see `Add/Remove Monitors`_.
+To add (or remove) additional Ceph OSD Daemons, see `Add/Remove OSDs`_.
+
+
+.. _federated architecture: ../../radosgw/federated-config
+.. _Installation (Quick): ../../start
+.. _Add/Remove Monitors: ../../rados/operations/add-or-rm-mons
+.. _Add/Remove OSDs: ../../rados/operations/add-or-rm-osds
+.. _Network Configuration Reference: ../../rados/configuration/network-config-ref
+.. _Monitor Config Reference - Data: ../../rados/configuration/mon-config-ref#data
+.. _create a Ceph filesystem: ../../cephfs/createfs
diff --git a/src/ceph/doc/install/mirrors.rst b/src/ceph/doc/install/mirrors.rst
new file mode 100644
index 0000000..49742d1
--- /dev/null
+++ b/src/ceph/doc/install/mirrors.rst
@@ -0,0 +1,67 @@
+=============
+ Ceph Mirrors
+=============
+
+For improved user experience, multiple mirrors for Ceph are available around the
+world.
+
+These mirrors are kindly sponsored by various companies who want to support the
+Ceph project.
+
+
+Locations
+=========
+
+These mirrors are available at the following locations:
+
+- **EU: Netherlands**: http://eu.ceph.com/
+- **AU: Australia**: http://au.ceph.com/
+- **CZ: Czech Republic**: http://cz.ceph.com/
+- **SE: Sweden**: http://se.ceph.com/
+- **DE: Germany**: http://de.ceph.com/
+- **HK: Hong Kong**: http://hk.ceph.com/
+- **FR: France**: http://fr.ceph.com/
+- **UK: UK**: http://uk.ceph.com
+- **US-East: US East Coast**: http://us-east.ceph.com/
+- **US-West: US West Coast**: http://us-west.ceph.com/
+- **CN: China**: http://cn.ceph.com/
+
+You can replace all download.ceph.com URLs with any of the mirrors, for example:
+
+- http://download.ceph.com/tarballs/
+- http://download.ceph.com/debian-hammer/
+- http://download.ceph.com/rpm-hammer/
+
+Change this to:
+
+- http://eu.ceph.com/tarballs/
+- http://eu.ceph.com/debian-hammer/
+- http://eu.ceph.com/rpm-hammer/
+
+
+Mirroring
+=========
+
+You can easily mirror Ceph yourself using a Bash script and rsync. An
+easy-to-use script can be found at `Github`_.
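+
+As an illustration only (assuming the upstream rsync module is named ``ceph``
+and that ``/srv/mirror/ceph`` is your local target directory; the script on
+Github adds locking and logging on top of this), a single sync pass could look
+like::
+
+    rsync -av --delete download.ceph.com::ceph /srv/mirror/ceph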
+
+When mirroring Ceph, please keep the following guidelines in mind:
+
+- Choose a mirror close to you
+- Do not sync at an interval shorter than 3 hours
+- Avoid syncing at minute 0 of the hour; use something between 0 and 59
+
+
+Becoming a mirror
+=================
+
+If you want to provide a public mirror for other users of Ceph, you can opt to
+become an official mirror.
+
+To make sure all mirrors meet the same standards, some requirements have been
+set for all mirrors. These can be found on `Github`_.
+
+If you want to apply for an official mirror, please contact the ceph-users
+mailing list.
+
+
+.. _Github: https://github.com/ceph/ceph/tree/master/mirroring
diff --git a/src/ceph/doc/install/upgrading-ceph.rst b/src/ceph/doc/install/upgrading-ceph.rst
new file mode 100644
index 0000000..962b90c
--- /dev/null
+++ b/src/ceph/doc/install/upgrading-ceph.rst
@@ -0,0 +1,746 @@
+================
+ Upgrading Ceph
+================
+
+Each release of Ceph may have additional steps. Refer to the release-specific
+sections in this document and the `release notes`_ document to identify
+release-specific procedures for your cluster before using the upgrade
+procedures.
+
+
+Summary
+=======
+
+You can upgrade daemons in your Ceph cluster while the cluster is online and in
+service! Certain types of daemons depend upon others. For example, Ceph Metadata
+Servers and Ceph Object Gateways depend upon Ceph Monitors and Ceph OSD Daemons.
+We recommend upgrading in this order:
+
+#. `Ceph Deploy`_
+#. Ceph Monitors
+#. Ceph OSD Daemons
+#. Ceph Metadata Servers
+#. Ceph Object Gateways
+
+As a general rule, we recommend upgrading all the daemons of a specific type
+(e.g., all ``ceph-mon`` daemons, all ``ceph-osd`` daemons, etc.) to ensure that
+they are all on the same release. We also recommend that you upgrade all the
+daemons in your cluster before you try to exercise new functionality in a
+release.
+
+The `Upgrade Procedures`_ are relatively simple, but please look at
+distribution-specific sections before upgrading. The basic process involves
+three steps:
+
+#. Use ``ceph-deploy`` on your admin node to upgrade the packages for
+   multiple hosts (using the ``ceph-deploy install`` command), or log in to each
+ host and upgrade the Ceph package `manually`_. For example, when
+ `Upgrading Monitors`_, the ``ceph-deploy`` syntax might look like this::
+
+ ceph-deploy install --release {release-name} ceph-node1[ ceph-node2]
+ ceph-deploy install --release firefly mon1 mon2 mon3
+
+ **Note:** The ``ceph-deploy install`` command will upgrade the packages
+ in the specified node(s) from the old release to the release you specify.
+ There is no ``ceph-deploy upgrade`` command.
+
+#. Log in to each Ceph node and restart each Ceph daemon.
+ See `Operating a Cluster`_ for details.
+
+#. Ensure your cluster is healthy. See `Monitoring a Cluster`_ for details.
+
+.. important:: Once you upgrade a daemon, you cannot downgrade it.
+
+
+Ceph Deploy
+===========
+
+Before upgrading Ceph daemons, upgrade the ``ceph-deploy`` tool. ::
+
+ sudo pip install -U ceph-deploy
+
+Or::
+
+ sudo apt-get install ceph-deploy
+
+Or::
+
+ sudo yum install ceph-deploy python-pushy
+
+
+Argonaut to Bobtail
+===================
+
+When upgrading from Argonaut to Bobtail, you need to be aware of several things:
+
+#. Authentication now defaults to **ON**, but used to default to **OFF**.
+#. Monitors use a new internal on-wire protocol.
+#. RBD ``format 2`` images require upgrading all OSDs before they can be used.
+
+Ensure that you update package repository paths. For example::
+
+ sudo rm /etc/apt/sources.list.d/ceph.list
+ echo deb http://download.ceph.com/debian-bobtail/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
+
+See the following sections for additional details.
+
+Authentication
+--------------
+
+The Ceph Bobtail release enables authentication by default. Bobtail also has
+finer-grained authentication configuration settings. In previous versions of
+Ceph (v0.55 and earlier), you could simply specify::
+
+ auth supported = [cephx | none]
+
+This option still works, but is deprecated. New releases support
+``cluster``, ``service`` and ``client`` authentication settings as
+follows::
+
+ auth cluster required = [cephx | none] # default cephx
+ auth service required = [cephx | none] # default cephx
+ auth client required = [cephx | none] # default cephx,none
+
+.. important:: If your cluster does not currently have an ``auth
+ supported`` line that enables authentication, you must explicitly
+ turn it off in Bobtail using the settings below.::
+
+ auth cluster required = none
+ auth service required = none
+
+ This will disable authentication on the cluster, but still leave
+ clients with the default configuration where they can talk to a
+ cluster that does enable it, but do not require it.
+
+.. important:: If your cluster already has an ``auth supported`` option defined in
+ the configuration file, no changes are necessary.
+
+See `User Management - Backward Compatibility`_ for details.
+
+
+Monitor On-wire Protocol
+------------------------
+
+We recommend upgrading all monitors to Bobtail. A mixture of Bobtail and
+Argonaut monitors will not be able to use the new on-wire protocol, as the
+protocol requires all monitors to be Bobtail or greater. Upgrading only a
+majority of the nodes (e.g., two out of three) may expose the cluster to a
+situation where a single additional failure may compromise availability (because
+the non-upgraded daemon cannot participate in the new protocol). We recommend
+not waiting for an extended period of time between ``ceph-mon`` upgrades.
+
+
+RBD Images
+----------
+
+The Bobtail release supports ``format 2`` images! However, you should not create
+or use ``format 2`` RBD images until after all ``ceph-osd`` daemons have been
+upgraded. Note that ``format 1`` is still the default. You can use the new
+``ceph osd ls`` and ``ceph tell osd.N version`` commands to double-check your
+cluster. ``ceph osd ls`` will give a list of all OSD IDs that are part of the
+cluster, and you can use that to write a simple shell loop to display all the
+OSD version strings: ::
+
+ for i in $(ceph osd ls); do
+ ceph tell osd.${i} version
+ done
+
+
+Argonaut to Cuttlefish
+======================
+
+To upgrade your cluster from Argonaut to Cuttlefish, please read this
+section, and the sections on upgrading from Argonaut to Bobtail and
+upgrading from Bobtail to Cuttlefish carefully. When upgrading from
+Argonaut to Cuttlefish, **YOU MUST UPGRADE YOUR MONITORS FROM ARGONAUT
+TO BOBTAIL v0.56.5 FIRST!!!**. All other Ceph daemons can upgrade from
+Argonaut to Cuttlefish without the intermediate upgrade to Bobtail.
+
+.. important:: Ensure that the repository specified points to Bobtail, not
+ Cuttlefish.
+
+For example::
+
+ sudo rm /etc/apt/sources.list.d/ceph.list
+ echo deb http://download.ceph.com/debian-bobtail/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
+
+We recommend upgrading all monitors to Bobtail before proceeding with the
+upgrade of the monitors to Cuttlefish. A mixture of Bobtail and Argonaut
+monitors will not be able to use the new on-wire protocol, as the protocol
+requires all monitors to be Bobtail or greater. Upgrading only a majority of the
+nodes (e.g., two out of three) may expose the cluster to a situation where a
+single additional failure may compromise availability (because the non-upgraded
+daemon cannot participate in the new protocol). We recommend not waiting for an
+extended period of time between ``ceph-mon`` upgrades. See `Upgrading
+Monitors`_ for details.
+
+.. note:: See the `Authentication`_ section and the
+ `User Management - Backward Compatibility`_ for additional information
+ on authentication backward compatibility settings for Bobtail.
+
+Once you complete the upgrade of your monitors from Argonaut to
+Bobtail, and have restarted the monitor daemons, you must upgrade the
+monitors from Bobtail to Cuttlefish. Ensure that you have a quorum
+before beginning this upgrade procedure. Before upgrading, remember to
+replace the reference to the Bobtail repository with a reference to
+the Cuttlefish repository. For example::
+
+ sudo rm /etc/apt/sources.list.d/ceph.list
+ echo deb http://download.ceph.com/debian-cuttlefish/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
+
+See `Upgrading Monitors`_ for details.
+
+The architecture of the monitors changed significantly from Argonaut to
+Cuttlefish. See `Monitor Config Reference`_ and `Joao's blog post`_ for details.
+Once you complete the monitor upgrade, you can upgrade the OSD daemons and the
+MDS daemons using the generic procedures. See `Upgrading an OSD`_ and `Upgrading
+a Metadata Server`_ for details.
+
+
+Bobtail to Cuttlefish
+=====================
+
+Upgrading your cluster from Bobtail to Cuttlefish has a few important
+considerations. First, the monitor uses a new architecture, so you should
+upgrade the full set of monitors to use Cuttlefish. Second, if you run multiple
+metadata servers in a cluster, ensure the metadata servers have unique names.
+See the following sections for details.
+
+Replace any ``apt`` reference to older repositories with a reference to the
+Cuttlefish repository. For example::
+
+ sudo rm /etc/apt/sources.list.d/ceph.list
+ echo deb http://download.ceph.com/debian-cuttlefish/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
+
+
+Monitor
+-------
+
+The architecture of the monitors changed significantly from Bobtail to
+Cuttlefish. See `Monitor Config Reference`_ and `Joao's blog post`_ for
+details. This means that v0.59 and pre-v0.59 monitors do not talk to each other
+(Cuttlefish is v.0.61). When you upgrade each monitor, it will convert its
+local data store to the new format. Once you upgrade a majority of monitors,
+the monitors form a quorum using the new protocol and the old monitors will be
+blocked until they get upgraded. For this reason, we recommend upgrading the
+monitors in immediate succession.
+
+.. important:: Do not run a mixed-version cluster for an extended period.
+
+
+MDS Unique Names
+----------------
+
+The monitor now enforces that MDS names be unique. If you have multiple metadata
+server daemons that start with the same ID (e.g., mds.a), the second
+metadata server will implicitly mark the first metadata server as ``failed``.
+Multi-MDS configurations with identical names must be adjusted accordingly to
+give daemons unique names. If you run your cluster with one metadata server,
+you can disregard this notice for now.
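+
+For illustration, a hypothetical ``ceph.conf`` fragment that gives two
+metadata servers distinct names might look like this (the names and hosts
+below are examples only)::
+
+    [mds.alpha]
+    host = mds-host-1
+
+    [mds.beta]
+    host = mds-host-2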
+
+
+ceph-deploy
+-----------
+
+The ``ceph-deploy`` tool is now the preferred method of provisioning new clusters.
+For existing clusters created via the obsolete ``mkcephfs`` tool, there is a
+migration path to the new tool, documented at `Transitioning to ceph-deploy`_.
+
+Cuttlefish to Dumpling
+======================
+
+When upgrading from Cuttlefish (v0.61-v0.61.7) you may perform a rolling
+upgrade. However, there are a few important considerations. First, you must
+upgrade the ``ceph`` command line utility, because it has changed significantly.
+Second, you must upgrade the full set of monitors to use Dumpling, because of a
+protocol change.
+
+Replace any reference to older repositories with a reference to the
+Dumpling repository. For example, with ``apt`` perform the following::
+
+ sudo rm /etc/apt/sources.list.d/ceph.list
+ echo deb http://download.ceph.com/debian-dumpling/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
+
+With CentOS/Red Hat distributions, remove the old repository. ::
+
+ sudo rm /etc/yum.repos.d/ceph.repo
+
+Then add a new ``ceph.repo`` repository entry with the following contents.
+
+.. code-block:: ini
+
+ [ceph]
+ name=Ceph Packages and Backports $basearch
+ baseurl=http://download.ceph.com/rpm/el6/$basearch
+ enabled=1
+ gpgcheck=1
+ gpgkey=https://download.ceph.com/keys/release.asc
+
+
+.. note:: Ensure you use the correct URL for your distribution. Check the
+ http://download.ceph.com/rpm directory for your distribution.
+
+.. note:: Since you can upgrade using ``ceph-deploy`` you will only need to add
+ the repository on Ceph Client nodes where you use the ``ceph`` command line
+ interface or the ``ceph-deploy`` tool.
+
+
+Dumpling to Emperor
+===================
+
+When upgrading from Dumpling (v0.67) you may perform a rolling
+upgrade.
+
+Replace any reference to older repositories with a reference to the
+Emperor repository. For example, with ``apt`` perform the following::
+
+ sudo rm /etc/apt/sources.list.d/ceph.list
+ echo deb http://download.ceph.com/debian-emperor/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
+
+With CentOS/Red Hat distributions, remove the old repository. ::
+
+ sudo rm /etc/yum.repos.d/ceph.repo
+
+Then add a new ``ceph.repo`` repository entry with the following contents and
+replace ``{distro}`` with your distribution (e.g., ``el6``, ``rhel6``, etc).
+
+.. code-block:: ini
+
+ [ceph]
+ name=Ceph Packages and Backports $basearch
+ baseurl=http://download.ceph.com/rpm-emperor/{distro}/$basearch
+ enabled=1
+ gpgcheck=1
+ gpgkey=https://download.ceph.com/keys/release.asc
+
+
+.. note:: Ensure you use the correct URL for your distribution. Check the
+ http://download.ceph.com/rpm directory for your distribution.
+
+.. note:: Since you can upgrade using ``ceph-deploy`` you will only need to add
+ the repository on Ceph Client nodes where you use the ``ceph`` command line
+ interface or the ``ceph-deploy`` tool.
+
+
+Command Line Utility
+--------------------
+
+In v0.65, the ``ceph`` command line interface (CLI) utility changed
+significantly. You will not be able to use the old CLI with Dumpling. This means
+that you must upgrade the ``ceph-common`` library on all nodes that access the
+Ceph Storage Cluster with the ``ceph`` CLI before upgrading Ceph daemons. ::
+
+ sudo apt-get update && sudo apt-get install ceph-common
+
+Ensure that you have the latest version (v0.67 or later). If you do not,
+you may need to uninstall, auto-remove dependencies, and reinstall.
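+
+You can confirm the installed version on a node with::
+
+    ceph --version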
+
+See `v0.65`_ for details on the new command line interface.
+
+.. _v0.65: http://docs.ceph.com/docs/master/release-notes/#v0-65
+
+
+Monitor
+-------
+
+Dumpling (v0.67) ``ceph-mon`` daemons have an internal protocol change. This
+means that v0.67 daemons cannot talk to v0.66 or older daemons. Once you
+upgrade a majority of monitors, the monitors form a quorum using the new
+protocol and the old monitors will be blocked until they get upgraded. For this
+reason, we recommend upgrading all monitors at once (or in relatively quick
+succession) to minimize the possibility of downtime.
+
+.. important:: Do not run a mixed-version cluster for an extended period.
+
+
+
+Dumpling to Firefly
+===================
+
+If your existing cluster is running a version older than v0.67 Dumpling, please
+first upgrade to the latest Dumpling release before upgrading to v0.80 Firefly.
+
+
+Monitor
+-------
+
+Dumpling (v0.67) ``ceph-mon`` daemons have an internal protocol change. This
+means that v0.67 daemons cannot talk to v0.66 or older daemons. Once you
+upgrade a majority of monitors, the monitors form a quorum using the new
+protocol and the old monitors will be blocked until they get upgraded. For this
+reason, we recommend upgrading all monitors at once (or in relatively quick
+succession) to minimize the possibility of downtime.
+
+.. important:: Do not run a mixed-version cluster for an extended period.
+
+
+Ceph Config File Changes
+------------------------
+
+We recommend adding the following to the ``[mon]`` section of your
+``ceph.conf`` prior to upgrade::
+
+ mon warn on legacy crush tunables = false
+
+This will prevent health warnings due to the use of legacy CRUSH placement.
+Although it is possible to rebalance existing data across your cluster, we do
+not normally recommend it for production environments as a large amount of data
+will move and there is a significant performance impact from the rebalancing.
+
+
+Command Line Utility
+--------------------
+
+In v0.65, the ``ceph`` command line interface (CLI) utility changed
+significantly. You will not be able to use the old CLI with Firefly. This means
+that you must upgrade the ``ceph-common`` library on all nodes that access the
+Ceph Storage Cluster with the ``ceph`` CLI before upgrading Ceph daemons.
+
+For Debian/Ubuntu, execute::
+
+ sudo apt-get update && sudo apt-get install ceph-common
+
+For CentOS/RHEL, execute::
+
+ sudo yum install ceph-common
+
+Ensure that you have the latest version. If you do not,
+you may need to uninstall, auto-remove dependencies, and reinstall.
+
+See `v0.65`_ for details on the new command line interface.
+
+.. _v0.65: http://docs.ceph.com/docs/master/release-notes/#v0-65
+
+
+Upgrade Sequence
+----------------
+
+Replace any reference to older repositories with a reference to the
+Firefly repository. For example, with ``apt`` perform the following::
+
+ sudo rm /etc/apt/sources.list.d/ceph.list
+ echo deb http://download.ceph.com/debian-firefly/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
+
+With CentOS/Red Hat distributions, remove the old repository. ::
+
+ sudo rm /etc/yum.repos.d/ceph.repo
+
+Then add a new ``ceph.repo`` repository entry with the following contents and
+replace ``{distro}`` with your distribution (e.g., ``el6``, ``rhel6``,
+``rhel7``, etc.).
+
+.. code-block:: ini
+
+ [ceph]
+ name=Ceph Packages and Backports $basearch
+ baseurl=http://download.ceph.com/rpm-firefly/{distro}/$basearch
+ enabled=1
+ gpgcheck=1
+ gpgkey=https://download.ceph.com/keys/release.asc
+
+
+Upgrade daemons in the following order:
+
+#. **Monitors:** If the ``ceph-mon`` daemons are not restarted prior to the
+ ``ceph-osd`` daemons, the monitors will not correctly register their new
+ capabilities with the cluster and new features may not be usable until
+ the monitors are restarted a second time.
+
+#. **OSDs**
+
+#. **MDSs:** If the ``ceph-mds`` daemon is restarted first, it will wait until
+ all OSDs have been upgraded before finishing its startup sequence.
+
+#. **Gateways:** Upgrade ``radosgw`` daemons together. There is a subtle change
+ in behavior for multipart uploads that prevents a multipart request that
+ was initiated with a new ``radosgw`` from being completed by an old
+ ``radosgw``.
+
+.. note:: Make sure you upgrade **ALL** of your Ceph monitors **AND**
+ restart them **BEFORE** upgrading and restarting OSDs, MDSs, and gateways!
+
+
+Emperor to Firefly
+==================
+
+If your existing cluster is running a version older than v0.67 Dumpling, please
+first upgrade to the latest Dumpling release before upgrading to v0.80 Firefly.
+Please refer to `Cuttlefish to Dumpling`_ and the `Firefly release notes`_ for
+details. To upgrade from a post-Emperor point release, see the `Firefly release
+notes`_ for details.
+
+
+Ceph Config File Changes
+------------------------
+
+We recommend adding the following to the ``[mon]`` section of your
+``ceph.conf`` prior to upgrade::
+
+ mon warn on legacy crush tunables = false
+
+This will prevent health warnings due to the use of legacy CRUSH placement.
+Although it is possible to rebalance existing data across your cluster, we do
+not normally recommend it for production environments as a large amount of data
+will move and there is a significant performance impact from the rebalancing.
+
+
+Upgrade Sequence
+----------------
+
+Replace any reference to older repositories with a reference to the
+Firefly repository. For example, with ``apt`` perform the following::
+
+ sudo rm /etc/apt/sources.list.d/ceph.list
+ echo deb http://download.ceph.com/debian-firefly/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
+
+With CentOS/Red Hat distributions, remove the old repository. ::
+
+ sudo rm /etc/yum.repos.d/ceph.repo
+
+Then add a new ``ceph.repo`` repository entry with the following contents, but
+replace ``{distro}`` with your distribution (e.g., ``el6``, ``rhel6``,
+``rhel7``, etc.).
+
+.. code-block:: ini
+
+ [ceph]
+ name=Ceph Packages and Backports $basearch
+ baseurl=http://download.ceph.com/rpm/{distro}/$basearch
+ enabled=1
+ gpgcheck=1
+ gpgkey=https://download.ceph.com/keys/release.asc
+
+
+.. note:: Ensure you use the correct URL for your distribution. Check the
+ http://download.ceph.com/rpm directory for your distribution.
+
+.. note:: Since you can upgrade using ``ceph-deploy`` you will only need to add
+ the repository on Ceph Client nodes where you use the ``ceph`` command line
+ interface or the ``ceph-deploy`` tool.
+
+
+Upgrade daemons in the following order:
+
+#. **Monitors:** If the ``ceph-mon`` daemons are not restarted prior to the
+ ``ceph-osd`` daemons, the monitors will not correctly register their new
+ capabilities with the cluster and new features may not be usable until
+ the monitors are restarted a second time.
+
+#. **OSDs**
+
+#. **MDSs:** If the ``ceph-mds`` daemon is restarted first, it will wait until
+ all OSDs have been upgraded before finishing its startup sequence.
+
+#. **Gateways:** Upgrade ``radosgw`` daemons together. There is a subtle change
+ in behavior for multipart uploads that prevents a multipart request that
+ was initiated with a new ``radosgw`` from being completed by an old
+ ``radosgw``.
+
+
+Upgrade Procedures
+==================
+
+The following sections describe the upgrade process.
+
+.. important:: Each release of Ceph may have some additional steps. Refer to
+ release-specific sections for details **BEFORE** you begin upgrading daemons.
+
+
+Upgrading Monitors
+------------------
+
+To upgrade monitors, perform the following steps:
+
+#. Upgrade the Ceph package for each daemon instance.
+
+ You may use ``ceph-deploy`` to address all monitor nodes at once.
+ For example::
+
+ ceph-deploy install --release {release-name} ceph-node1[ ceph-node2]
+ ceph-deploy install --release hammer mon1 mon2 mon3
+
+   You may also use the package manager for your Linux distribution on
+   each individual node. To upgrade packages manually on each Debian/Ubuntu
+   host, perform the following steps::
+
+ ssh {mon-host}
+ sudo apt-get update && sudo apt-get install ceph
+
+ On CentOS/Red Hat hosts, perform the following steps::
+
+ ssh {mon-host}
+ sudo yum update && sudo yum install ceph
+
+
+#. Restart each monitor. For Ubuntu distributions, use::
+
+ sudo restart ceph-mon id={hostname}
+
+ For CentOS/Red Hat/Debian distributions, use::
+
+ sudo /etc/init.d/ceph restart {mon-id}
+
+   For clusters deployed with ``ceph-deploy``, the monitor ID is usually
+   ``mon.{hostname}``.
+
+#. Ensure each monitor has rejoined the quorum. ::
+
+ ceph mon stat
+
+Ensure that you have completed the upgrade cycle for all of your Ceph Monitors.
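+
+For example, a quick sanity check (the hostname below is a placeholder) is to
+confirm the installed version on each monitor host and that every monitor
+appears in the quorum::
+
+	ssh {mon-host} ceph --version
+	ceph quorum_status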
+
+
+Upgrading an OSD
+----------------
+
+To upgrade a Ceph OSD Daemon, perform the following steps:
+
+#. Upgrade the Ceph OSD Daemon package.
+
+ You may use ``ceph-deploy`` to address all Ceph OSD Daemon nodes at
+ once. For example::
+
+ ceph-deploy install --release {release-name} ceph-node1[ ceph-node2]
+ ceph-deploy install --release hammer osd1 osd2 osd3
+
+ You may also use the package manager on each node to upgrade packages
+ manually. For Debian/Ubuntu hosts, perform the following steps on each
+ host. ::
+
+ ssh {osd-host}
+ sudo apt-get update && sudo apt-get install ceph
+
+ For CentOS/Red Hat hosts, perform the following steps::
+
+ ssh {osd-host}
+ sudo yum update && sudo yum install ceph
+
+
+#. Restart the OSD, where ``N`` is the OSD number. For Ubuntu, use::
+
+ sudo restart ceph-osd id=N
+
+ For multiple OSDs on a host, you may restart all of them with Upstart. ::
+
+ sudo restart ceph-osd-all
+
+ For CentOS/Red Hat/Debian distributions, use::
+
+	sudo /etc/init.d/ceph restart osd.N
+
+
+#. Ensure each upgraded Ceph OSD Daemon has rejoined the cluster::
+
+ ceph osd stat
+
+Ensure that you have completed the upgrade cycle for all of your
+Ceph OSD Daemons.
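+
+For example, a quick way to confirm that every OSD is ``up`` and ``in`` after
+the restarts is::
+
+	ceph osd tree
+
+Any OSD still reported as ``down`` has not rejoined the cluster and should be
+investigated before you proceed.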
+
+
+Upgrading a Metadata Server
+---------------------------
+
+To upgrade a Ceph Metadata Server, perform the following steps:
+
+#. Upgrade the Ceph Metadata Server package. You may use ``ceph-deploy`` to
+ address all Ceph Metadata Server nodes at once, or use the package manager
+ on each node. For example::
+
+ ceph-deploy install --release {release-name} ceph-node1
+ ceph-deploy install --release hammer mds1
+
+ To upgrade packages manually, perform the following steps on each
+ Debian/Ubuntu host. ::
+
+	ssh {mds-host}
+	sudo apt-get update && sudo apt-get install ceph-mds
+
+ Or the following steps on CentOS/Red Hat hosts::
+
+	ssh {mds-host}
+	sudo yum update && sudo yum install ceph-mds
+
+
+#. Restart the metadata server. For Ubuntu, use::
+
+ sudo restart ceph-mds id={hostname}
+
+ For CentOS/Red Hat/Debian distributions, use::
+
+ sudo /etc/init.d/ceph restart mds.{hostname}
+
+ For clusters deployed with ``ceph-deploy``, the name is usually either
+ the name you specified on creation or the hostname.
+
+#. Ensure the metadata server is up and running::
+
+ ceph mds stat
+
+
+Upgrading a Client
+------------------
+
+Once you have upgraded the packages and restarted daemons on your Ceph
+cluster, we recommend upgrading ``ceph-common`` and client libraries
+(``librbd1`` and ``librados2``) on your client nodes too.
+
+#. Upgrade the package::
+
+	ssh {client-host}
+	sudo apt-get update && sudo apt-get install ceph-common librados2 librbd1 python-rados python-rbd
+
+#. Ensure that you have the latest version::
+
+ ceph --version
+
+If you do not have the latest version, you may need to uninstall the packages,
+autoremove their dependencies, and reinstall.
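+
+On Debian/Ubuntu clients, a minimal sketch of that recovery path (assuming the
+same package set as above) is::
+
+	sudo apt-get purge ceph-common librados2 librbd1 python-rados python-rbd
+	sudo apt-get autoremove
+	sudo apt-get update && sudo apt-get install ceph-common librados2 librbd1 python-rados python-rbd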
+
+
+Transitioning to ceph-deploy
+============================
+
+If you have an existing cluster that you deployed with ``mkcephfs`` (usually
+Argonaut or Bobtail releases), you will need to make a few changes to your
+configuration to ensure that your cluster will work with ``ceph-deploy``.
+
+
+Monitor Keyring
+---------------
+
+You will need to add ``caps mon = "allow *"`` to your monitor keyring if it is
+not already in the keyring. By default, the monitor keyring is located under
+``/var/lib/ceph/mon/ceph-$id/keyring``. When you have added the ``caps``
+setting, your monitor keyring should look something like this::
+
+ [mon.]
+ key = AQBJIHhRuHCwDRAAZjBTSJcIBIoGpdOR9ToiyQ==
+ caps mon = "allow *"
+
+Adding ``caps mon = "allow *"`` will ease the transition from ``mkcephfs`` to
+``ceph-deploy`` by allowing ``ceph-create-keys`` to use the ``mon.`` keyring
+file in ``$mon_data`` and get the caps it needs.
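+
+Rather than editing the keyring by hand, you can add the capability with
+``ceph-authtool``; for example, for a monitor whose ID is ``$id`` (adjust the
+path if your ``$mon_data`` is non-default)::
+
+	sudo ceph-authtool /var/lib/ceph/mon/ceph-$id/keyring -n mon. --cap mon 'allow *'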
+
+
+Use Default Paths
+-----------------
+
+Under the ``/var/lib/ceph`` directory, the ``mon`` and ``osd`` directories need
+to use the default paths.
+
+- **OSDs**: The path should be ``/var/lib/ceph/osd/ceph-$id``
+- **MON**: The path should be ``/var/lib/ceph/mon/ceph-$id``
+
+Under those directories, the keyring should be in a file named ``keyring``.
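+
+As an illustrative sketch (the old location ``/srv/osd.0`` is hypothetical),
+an OSD created in a non-default location could be moved into place like so,
+after which any custom ``osd data`` or ``keyring`` settings should be removed
+from ``ceph.conf`` so the defaults apply::
+
+	sudo /etc/init.d/ceph stop osd.0
+	sudo mkdir -p /var/lib/ceph/osd
+	sudo mv /srv/osd.0 /var/lib/ceph/osd/ceph-0
+	sudo /etc/init.d/ceph start osd.0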
+
+
+
+
+.. _Monitor Config Reference: ../../rados/configuration/mon-config-ref
+.. _Joao's blog post: http://ceph.com/dev-notes/cephs-new-monitor-changes
+.. _User Management - Backward Compatibility: ../../rados/configuration/auth-config-ref/#backward-compatibility
+.. _manually: ../install-storage-cluster/
+.. _Operating a Cluster: ../../rados/operations/operating
+.. _Monitoring a Cluster: ../../rados/operations/monitoring
+.. _Firefly release notes: ../../release-notes/#v0-80-firefly
+.. _release notes: ../../release-notes