author    Qiaowei Ren <qiaowei.ren@intel.com>    2018-01-04 13:43:33 +0800
committer Qiaowei Ren <qiaowei.ren@intel.com>    2018-01-05 11:59:39 +0800
commit    812ff6ca9fcd3e629e49d4328905f33eee8ca3f5 (patch)
tree      04ece7b4da00d9d2f98093774594f4057ae561d4 /src/ceph/doc/rados/deployment
parent    15280273faafb77777eab341909a3f495cf248d9 (diff)
initial code repo
This patch creates the initial code repo. For Ceph, the Luminous stable release will be used as the base code, and subsequent changes and optimizations for Ceph will be added on top of it. For OpenSDS, any changes can currently be upstreamed into the original opensds repo (https://github.com/opensds/opensds), so stor4nfv will directly clone the opensds code to deploy the stor4nfv environment. The scripts for deployment based on Ceph and OpenSDS will be put into the 'ci' directory.

Change-Id: I46a32218884c75dda2936337604ff03c554648e4
Signed-off-by: Qiaowei Ren <qiaowei.ren@intel.com>
Diffstat (limited to 'src/ceph/doc/rados/deployment')
-rw-r--r--  src/ceph/doc/rados/deployment/ceph-deploy-admin.rst        38
-rw-r--r--  src/ceph/doc/rados/deployment/ceph-deploy-install.rst      46
-rw-r--r--  src/ceph/doc/rados/deployment/ceph-deploy-keys.rst         32
-rw-r--r--  src/ceph/doc/rados/deployment/ceph-deploy-mds.rst          46
-rw-r--r--  src/ceph/doc/rados/deployment/ceph-deploy-mon.rst          56
-rw-r--r--  src/ceph/doc/rados/deployment/ceph-deploy-new.rst          66
-rw-r--r--  src/ceph/doc/rados/deployment/ceph-deploy-osd.rst         121
-rw-r--r--  src/ceph/doc/rados/deployment/ceph-deploy-purge.rst        25
-rw-r--r--  src/ceph/doc/rados/deployment/index.rst                    58
-rw-r--r--  src/ceph/doc/rados/deployment/preflight-checklist.rst     109
10 files changed, 597 insertions, 0 deletions
diff --git a/src/ceph/doc/rados/deployment/ceph-deploy-admin.rst b/src/ceph/doc/rados/deployment/ceph-deploy-admin.rst
new file mode 100644
index 0000000..a91f69c
--- /dev/null
+++ b/src/ceph/doc/rados/deployment/ceph-deploy-admin.rst
@@ -0,0 +1,38 @@
+=============
+ Admin Tasks
+=============
+
+Once you have set up a cluster with ``ceph-deploy``, you may
+provide the client admin key and the Ceph configuration file
+to another host so that a user on the host may use the ``ceph``
+command line as an administrative user.
+
+
+Create an Admin Host
+====================
+
+To enable a host to execute ``ceph`` commands with administrator
+privileges, use the ``admin`` command. ::
+
+ ceph-deploy admin {host-name [host-name]...}
+
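+For example, to allow a hypothetical host named ``admin-node`` to administer
+the cluster::
+
+ ceph-deploy admin admin-node
+
+This pushes the cluster's ``ceph.conf`` and the ``client.admin`` keyring to
+``/etc/ceph`` on that host.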
+
+Deploy Config File
+==================
+
+To send an updated copy of the Ceph configuration file to hosts
+in your cluster, use the ``config push`` command. ::
+
+ ceph-deploy config push {host-name [host-name]...}
+
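+For example, to push the current configuration file to three hypothetical
+hosts::
+
+ ceph-deploy config push host1 host2 host3
+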
+.. tip:: With a base name and an incrementing host-naming convention,
+ it is easy to deploy configuration files via simple scripts
+ (e.g., ``ceph-deploy config push hostname{1,2,3,4,5}``).
+
+Retrieve Config File
+====================
+
+To retrieve a copy of the Ceph configuration file from a host
+in your cluster, use the ``config pull`` command. ::
+
+ ceph-deploy config pull {host-name [host-name]...}
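+
+For example, to pull the configuration file from a hypothetical host::
+
+ ceph-deploy config pull host1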
diff --git a/src/ceph/doc/rados/deployment/ceph-deploy-install.rst b/src/ceph/doc/rados/deployment/ceph-deploy-install.rst
new file mode 100644
index 0000000..849d68e
--- /dev/null
+++ b/src/ceph/doc/rados/deployment/ceph-deploy-install.rst
@@ -0,0 +1,46 @@
+====================
+ Package Management
+====================
+
+Install
+=======
+
+To install Ceph packages on your cluster hosts, open a command line on your
+client machine and type the following::
+
+ ceph-deploy install {hostname [hostname] ...}
+
+Without additional arguments, ``ceph-deploy`` will install the most recent
+major release of Ceph to the cluster host(s). To specify a particular package,
+you may select from the following:
+
+- ``--release <code-name>``
+- ``--testing``
+- ``--dev <branch-or-tag>``
+
+For example::
+
+ ceph-deploy install --release cuttlefish hostname1
+ ceph-deploy install --testing hostname2
+ ceph-deploy install --dev wip-some-branch hostname{1,2,3,4,5}
+
+For additional usage, execute::
+
+ ceph-deploy install -h
+
+
+Uninstall
+=========
+
+To uninstall Ceph packages from your cluster hosts, open a terminal on
+your admin host and type the following::
+
+ ceph-deploy uninstall {hostname [hostname] ...}
+
+On a Debian or Ubuntu system, you may also::
+
+ ceph-deploy purge {hostname [hostname] ...}
+
+The tool will uninstall ``ceph`` packages from the specified hosts. ``purge``
+additionally removes configuration files.
+
diff --git a/src/ceph/doc/rados/deployment/ceph-deploy-keys.rst b/src/ceph/doc/rados/deployment/ceph-deploy-keys.rst
new file mode 100644
index 0000000..3e106c9
--- /dev/null
+++ b/src/ceph/doc/rados/deployment/ceph-deploy-keys.rst
@@ -0,0 +1,32 @@
+=================
+ Keys Management
+=================
+
+
+Gather Keys
+===========
+
+Before you can provision a host to run OSDs or metadata servers, you must gather
+monitor keys and the OSD and MDS bootstrap keyrings. To gather keys, enter the
+following::
+
+ ceph-deploy gatherkeys {monitor-host}
+
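+For example, assuming a hypothetical monitor host named ``mon1``::
+
+ ceph-deploy gatherkeys mon1
+
+This retrieves the monitor keyring along with the admin and bootstrap keyrings
+(e.g., ``ceph.client.admin.keyring``, ``ceph.bootstrap-osd.keyring``,
+``ceph.bootstrap-mds.keyring``) into the local working directory.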
+
+.. note:: To retrieve the keys, you specify a host that has a
+ Ceph monitor.
+
+.. note:: If you have specified multiple monitors in the setup of the cluster,
+ make sure that all monitors are up and running. If the monitors have not
+ formed a quorum, ``ceph-create-keys`` will not finish and the keys will not
+ be generated.
+
+Forget Keys
+===========
+
+When you are no longer using ``ceph-deploy`` (or if you are recreating a
+cluster), you should delete the keys in the local directory of your admin host.
+To delete keys, enter the following::
+
+ ceph-deploy forgetkeys
+
diff --git a/src/ceph/doc/rados/deployment/ceph-deploy-mds.rst b/src/ceph/doc/rados/deployment/ceph-deploy-mds.rst
new file mode 100644
index 0000000..d2afaec
--- /dev/null
+++ b/src/ceph/doc/rados/deployment/ceph-deploy-mds.rst
@@ -0,0 +1,46 @@
+============================
+ Add/Remove Metadata Server
+============================
+
+With ``ceph-deploy``, adding and removing metadata servers is a simple task. You
+just add or remove one or more metadata servers on the command line with one
+command.
+
+.. important:: You must deploy at least one metadata server to use CephFS.
+ There is experimental support for running multiple metadata servers.
+ Do not run multiple active metadata servers in production.
+
+See `MDS Config Reference`_ for details on configuring metadata servers.
+
+
+Add a Metadata Server
+=====================
+
+Once you deploy monitors and OSDs, you may deploy the metadata server(s). ::
+
+ ceph-deploy mds create {host-name}[:{daemon-name}] [{host-name}[:{daemon-name}] ...]
+
+You may optionally specify a daemon instance name if you would like to run
+multiple daemons on a single server.
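+
+For example, using hypothetical host and daemon names, with and without an
+explicit daemon name::
+
+ ceph-deploy mds create mds-node1
+ ceph-deploy mds create mds-node1:mds-a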
+
+
+Remove a Metadata Server
+========================
+
+Coming soon...
+
+.. If you have a metadata server in your cluster that you'd like to remove, you may use
+.. the ``destroy`` option. ::
+
+.. ceph-deploy mds destroy {host-name}[:{daemon-name}] [{host-name}[:{daemon-name}] ...]
+
+.. You may specify a daemon instance a name (optional) if you would like to destroy
+.. a particular daemon that runs on a single server with multiple MDS daemons.
+
+.. .. note:: Ensure that if you remove a metadata server, the remaining metadata
+ servers will be able to service requests from CephFS clients. If that is not
+ possible, consider adding a metadata server before destroying the metadata
+ server you would like to take offline.
+
+
+.. _MDS Config Reference: ../../../cephfs/mds-config-ref
diff --git a/src/ceph/doc/rados/deployment/ceph-deploy-mon.rst b/src/ceph/doc/rados/deployment/ceph-deploy-mon.rst
new file mode 100644
index 0000000..bda34fe
--- /dev/null
+++ b/src/ceph/doc/rados/deployment/ceph-deploy-mon.rst
@@ -0,0 +1,56 @@
+=====================
+ Add/Remove Monitors
+=====================
+
+With ``ceph-deploy``, adding and removing monitors is a simple task. You just
+add or remove one or more monitors on the command line with one command. Before
+``ceph-deploy``, the process of `adding and removing monitors`_ involved
+numerous manual steps. Using ``ceph-deploy`` imposes a restriction: **you may
+only install one monitor per host.**
+
+.. note:: We do not recommend commingling monitors and OSDs on
+ the same host.
+
+For high availability, you should run a production Ceph cluster with **AT
+LEAST** three monitors. Ceph uses the Paxos algorithm, which requires a
+consensus among the majority of monitors in a quorum. With Paxos, the monitors
+cannot determine a majority for establishing a quorum with only two monitors. A
+majority of monitors can only be formed as follows: 1 of 1, 2 of 3, 3 of 4,
+3 of 5, 4 of 6, and so on.
+
+See `Monitor Config Reference`_ for details on configuring monitors.
+
+
+Add a Monitor
+=============
+
+Once you create a cluster and install Ceph packages on the monitor host(s), you
+may deploy the monitor(s) to the monitor host(s). When using ``ceph-deploy``,
+the tool enforces a single monitor per host. ::
+
+ ceph-deploy mon create {host-name [host-name]...}
+
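+For example, to deploy monitors to three hypothetical hosts::
+
+ ceph-deploy mon create mon-node1 mon-node2 mon-node3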
+
+.. note:: Ensure that you add monitors such that they may arrive at a consensus
+ among a majority of monitors; otherwise, other steps (like ``ceph-deploy gatherkeys``)
+ will fail.
+
+.. note:: When adding a monitor on a host that was not among the hosts
+ initially defined with the ``ceph-deploy new`` command, a ``public network``
+ statement needs to be added to the ``ceph.conf`` file.
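+
+A minimal, hypothetical example of such a statement in ``ceph.conf`` (adjust
+the subnet to match your environment)::
+
+ [global]
+ public network = 10.0.0.0/24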
+
+Remove a Monitor
+================
+
+If you have a monitor in your cluster that you'd like to remove, you may use
+the ``destroy`` option. ::
+
+ ceph-deploy mon destroy {host-name [host-name]...}
+
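+For example, to remove the monitor from a hypothetical host::
+
+ ceph-deploy mon destroy mon-node3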
+
+.. note:: Ensure that if you remove a monitor, the remaining monitors will be
+ able to establish a consensus. If that is not possible, consider adding a
+ monitor before removing the monitor you would like to take offline.
+
+
+.. _adding and removing monitors: ../../operations/add-or-rm-mons
+.. _Monitor Config Reference: ../../configuration/mon-config-ref
diff --git a/src/ceph/doc/rados/deployment/ceph-deploy-new.rst b/src/ceph/doc/rados/deployment/ceph-deploy-new.rst
new file mode 100644
index 0000000..5eb37a9
--- /dev/null
+++ b/src/ceph/doc/rados/deployment/ceph-deploy-new.rst
@@ -0,0 +1,66 @@
+==================
+ Create a Cluster
+==================
+
+The first step in using Ceph with ``ceph-deploy`` is to create a new Ceph
+cluster. A new Ceph cluster has:
+
+- A Ceph configuration file, and
+- A monitor keyring.
+
+The Ceph configuration file consists of at least:
+
+- Its own filesystem ID (``fsid``),
+- The initial monitor hostname(s), and
+- The initial monitor IP address(es).
+
+For additional details, see the `Monitor Configuration Reference`_.
+
+The ``ceph-deploy`` tool also creates a monitor keyring and populates it with a
+``[mon.]`` key. For additional details, see the `Cephx Guide`_.
+
+
+Usage
+-----
+
+To create a cluster with ``ceph-deploy``, use the ``new`` command and specify
+the host(s) that will be initial members of the monitor quorum. ::
+
+ ceph-deploy new {host [host], ...}
+
+For example::
+
+ ceph-deploy new mon1.foo.com
+ ceph-deploy new mon{1,2,3}
+
+The ``ceph-deploy`` utility will use DNS to resolve hostnames to IP
+addresses. The monitors will be named using the first component of
+the name (e.g., ``mon1`` above). It will add the specified host names
+to the Ceph configuration file. For additional details, execute::
+
+ ceph-deploy new -h
+
+
+Naming a Cluster
+----------------
+
+By default, Ceph clusters have a cluster name of ``ceph``. You can specify
+a cluster name if you want to run multiple clusters on the same hardware. For
+example, if you want to optimize a cluster for use with block devices, and
+another for use with the gateway, you can run two different clusters on the same
+hardware if they have a different ``fsid`` and cluster name. ::
+
+ ceph-deploy --cluster {cluster-name} new {host [host], ...}
+
+For example::
+
+ ceph-deploy --cluster rbdcluster new ceph-mon1
+ ceph-deploy --cluster rbdcluster new ceph-mon{1,2,3}
+
+.. note:: If you run multiple clusters, ensure you adjust the default
+ port settings and open ports for your additional cluster(s) so that
+ the networks of the two different clusters don't conflict with each other.
+
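+For example, a hypothetical way to move the second cluster's monitor off the
+default port in its configuration file (host name, address and port are
+placeholders)::
+
+ [mon.ceph-mon1]
+ mon addr = 192.168.0.10:6790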
+
+.. _Monitor Configuration Reference: ../../configuration/mon-config-ref
+.. _Cephx Guide: ../../../dev/mon-bootstrap#secret-keys
diff --git a/src/ceph/doc/rados/deployment/ceph-deploy-osd.rst b/src/ceph/doc/rados/deployment/ceph-deploy-osd.rst
new file mode 100644
index 0000000..a4eb4d1
--- /dev/null
+++ b/src/ceph/doc/rados/deployment/ceph-deploy-osd.rst
@@ -0,0 +1,121 @@
+=================
+ Add/Remove OSDs
+=================
+
+Adding Ceph OSD Daemons to your cluster and removing them may involve a few
+more steps than adding and removing other Ceph daemons. Ceph OSD Daemons write
+data to the disk and to journals, so you need to provide a disk for the OSD and
+a path to the journal partition (this is the most common configuration, but you
+may configure your system to your own needs).
+
+In Ceph v0.60 and later releases, Ceph supports ``dm-crypt`` on-disk encryption.
+You may specify the ``--dmcrypt`` argument when preparing an OSD to tell
+``ceph-deploy`` that you want to use encryption. You may also specify the
+``--dmcrypt-key-dir`` argument to specify the location of ``dm-crypt``
+encryption keys.
+
+You should test various drive configurations to gauge their throughput before
+building out a large cluster. See `Data Storage`_ for additional details.
+
+
+List Disks
+==========
+
+To list the disks on a node, execute the following command::
+
+ ceph-deploy disk list {node-name [node-name]...}
+
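+For example, to list the disks on a hypothetical node named ``osdserver1``::
+
+ ceph-deploy disk list osdserver1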
+
+Zap Disks
+=========
+
+To zap a disk (delete its partition table) in preparation for use with Ceph,
+execute the following::
+
+ ceph-deploy disk zap {osd-server-name}:{disk-name}
+ ceph-deploy disk zap osdserver1:sdb
+
+.. important:: This will delete all data.
+
+
+Prepare OSDs
+============
+
+Once you create a cluster, install Ceph packages, and gather keys, you
+may prepare the OSDs and deploy them to the OSD node(s). If you need to
+identify a disk or zap it prior to preparing it for use as an OSD,
+see `List Disks`_ and `Zap Disks`_. ::
+
+ ceph-deploy osd prepare {node-name}:{data-disk}[:{journal-disk}]
+ ceph-deploy osd prepare osdserver1:sdb:/dev/ssd
+ ceph-deploy osd prepare osdserver1:sdc:/dev/ssd
+
+The ``prepare`` command only prepares the OSD. On most operating
+systems, the ``activate`` phase will automatically run when the
+partitions are created on the disk (using Ceph ``udev`` rules). If not,
+use the ``activate`` command. See `Activate OSDs`_ for
+details.
+
+The foregoing example assumes a disk dedicated to one Ceph OSD Daemon, and
+a path to an SSD journal partition. We recommend storing the journal on
+a separate drive to maximize throughput. You may dedicate a single drive
+for the journal too (which may be expensive) or place the journal on the
+same disk as the OSD (not recommended as it impairs performance). In the
+foregoing example we store the journal on a partitioned solid state drive.
+
+You can use the ``--fs-type`` or ``--bluestore`` option to choose the backing
+file system or object store for the OSD drive (for more information, run
+``ceph-deploy osd prepare --help``).
+
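+For example, depending on your ``ceph-deploy`` version, hypothetical BlueStore
+and XFS Filestore invocations might look like this::
+
+ ceph-deploy osd prepare --bluestore osdserver1:sdb
+ ceph-deploy osd prepare --fs-type xfs osdserver1:sdc:/dev/ssd
+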
+.. note:: When running multiple Ceph OSD daemons on a single node, and
+ sharing a partitioned journal with each OSD daemon, you should consider
+ the entire node the minimum failure domain for CRUSH purposes, because
+ if the SSD drive fails, all of the Ceph OSD daemons that journal to it
+ will fail too.
+
+
+Activate OSDs
+=============
+
+Once you prepare an OSD, you may activate it with the following command. ::
+
+ ceph-deploy osd activate {node-name}:{data-disk-partition}[:{journal-disk-partition}]
+ ceph-deploy osd activate osdserver1:/dev/sdb1:/dev/ssd1
+ ceph-deploy osd activate osdserver1:/dev/sdc1:/dev/ssd2
+
+The ``activate`` command will cause your OSD to come ``up`` and be placed
+``in`` the cluster. The ``activate`` command uses the path to the partition
+created when running the ``prepare`` command.
+
+
+Create OSDs
+===========
+
+You may prepare OSDs, deploy them to the OSD node(s) and activate them in one
+step with the ``create`` command. The ``create`` command is a convenience method
+for executing the ``prepare`` and ``activate`` commands sequentially. ::
+
+ ceph-deploy osd create {node-name}:{disk}[:{path/to/journal}]
+ ceph-deploy osd create osdserver1:sdb:/dev/ssd1
+
+.. List OSDs
+.. =========
+
+.. To list the OSDs deployed on a node(s), execute the following command::
+
+.. ceph-deploy osd list {node-name}
+
+
+Destroy OSDs
+============
+
+.. note:: Coming soon. See `Remove OSDs`_ for manual procedures.
+
+.. To destroy an OSD, execute the following command::
+
+.. ceph-deploy osd destroy {node-name}:{path-to-disk}[:{path/to/journal}]
+
+.. Destroying an OSD will take it ``down`` and ``out`` of the cluster.
+
+.. _Data Storage: ../../../start/hardware-recommendations#data-storage
+.. _Remove OSDs: ../../operations/add-or-rm-osds#removing-osds-manual
diff --git a/src/ceph/doc/rados/deployment/ceph-deploy-purge.rst b/src/ceph/doc/rados/deployment/ceph-deploy-purge.rst
new file mode 100644
index 0000000..685c3c4
--- /dev/null
+++ b/src/ceph/doc/rados/deployment/ceph-deploy-purge.rst
@@ -0,0 +1,25 @@
+==============
+ Purge a Host
+==============
+
+When you remove Ceph daemons and uninstall Ceph, there may still be extraneous
+data from the cluster on your server. The ``purge`` and ``purgedata`` commands
+provide a convenient means of cleaning up a host.
+
+
+Purge Data
+==========
+
+To remove all data from ``/var/lib/ceph`` (but leave Ceph packages intact),
+execute the ``purgedata`` command. ::
+
+ ceph-deploy purgedata {hostname} [{hostname} ...]
+
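+For example, to clean up a hypothetical former Ceph host::
+
+ ceph-deploy purgedata osdserver1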
+
+Purge
+=====
+
+To remove all data from ``/var/lib/ceph`` and uninstall Ceph packages, execute
+the ``purge`` command. ::
+
+ ceph-deploy purge {hostname} [{hostname} ...]
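+
+For example, to purge a hypothetical host entirely::
+
+ ceph-deploy purge osdserver1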
diff --git a/src/ceph/doc/rados/deployment/index.rst b/src/ceph/doc/rados/deployment/index.rst
new file mode 100644
index 0000000..0853e4a
--- /dev/null
+++ b/src/ceph/doc/rados/deployment/index.rst
@@ -0,0 +1,58 @@
+=================
+ Ceph Deployment
+=================
+
+The ``ceph-deploy`` tool is a way to deploy Ceph relying only upon SSH access to
+the servers, ``sudo``, and some Python. It runs on your workstation, and does
+not require servers, databases, or any other tools. If you set up and
+tear down Ceph clusters a lot, and want minimal extra bureaucracy,
+``ceph-deploy`` is an ideal tool. The ``ceph-deploy`` tool is not a generic
+deployment system. It was designed exclusively for Ceph users who want to get
+Ceph up and running quickly with sensible initial configuration settings without
+the overhead of installing Chef, Puppet, or Juju. Users who want fine-grained
+control over security settings, partitions, or directory locations should use a
+tool such as Juju, Puppet, `Chef`_ or Crowbar.
+
+
+With ``ceph-deploy``, you can develop scripts to install Ceph packages on remote
+hosts, create a cluster, add monitors, gather (or forget) keys, add OSDs and
+metadata servers, configure admin hosts, and tear down the clusters.
+
+.. raw:: html
+
+ <table cellpadding="10"><tbody valign="top"><tr><td>
+
+.. toctree::
+
+ Preflight Checklist <preflight-checklist>
+ Install Ceph <ceph-deploy-install>
+
+.. raw:: html
+
+ </td><td>
+
+.. toctree::
+
+ Create a Cluster <ceph-deploy-new>
+ Add/Remove Monitor(s) <ceph-deploy-mon>
+ Key Management <ceph-deploy-keys>
+ Add/Remove OSD(s) <ceph-deploy-osd>
+ Add/Remove MDS(s) <ceph-deploy-mds>
+
+
+.. raw:: html
+
+ </td><td>
+
+.. toctree::
+
+ Purge Hosts <ceph-deploy-purge>
+ Admin Tasks <ceph-deploy-admin>
+
+
+.. raw:: html
+
+ </td></tr></tbody></table>
+
+
+.. _Chef: http://tracker.ceph.com/projects/ceph/wiki/Deploying_Ceph_with_Chef
diff --git a/src/ceph/doc/rados/deployment/preflight-checklist.rst b/src/ceph/doc/rados/deployment/preflight-checklist.rst
new file mode 100644
index 0000000..64a669f
--- /dev/null
+++ b/src/ceph/doc/rados/deployment/preflight-checklist.rst
@@ -0,0 +1,109 @@
+=====================
+ Preflight Checklist
+=====================
+
+.. versionadded:: 0.60
+
+This **Preflight Checklist** will help you prepare an admin node for use with
+``ceph-deploy``, and server nodes for use with passwordless ``ssh`` and
+``sudo``.
+
+Before you can deploy Ceph using ``ceph-deploy``, you need to ensure that you
+have a few things set up first on your admin node and on nodes running Ceph
+daemons.
+
+
+Install an Operating System
+===========================
+
+Install a recent release of Debian or Ubuntu (e.g., 12.04 LTS, 14.04 LTS) on
+your nodes. For additional details, or to use operating systems other than
+Debian or Ubuntu, see `OS Recommendations`_.
+
+
+Install an SSH Server
+=====================
+
+The ``ceph-deploy`` utility requires ``ssh``, so your server node(s) require an
+SSH server. ::
+
+ sudo apt-get install openssh-server
+
+
+Create a User
+=============
+
+Create a user on nodes running Ceph daemons.
+
+.. tip:: We recommend a username that brute force attackers won't
+ guess easily (e.g., something other than ``root``, ``ceph``, etc).
+
+::
+
+ ssh user@ceph-server
+ sudo useradd -d /home/ceph -m ceph
+ sudo passwd ceph
+
+
+``ceph-deploy`` installs packages onto your nodes. This means that
+the user you create requires passwordless ``sudo`` privileges.
+
+.. note:: We **DO NOT** recommend enabling the ``root`` password
+ for security reasons.
+
+To provide full privileges to the user, create ``/etc/sudoers.d/ceph`` as
+follows. ::
+
+ echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
+ sudo chmod 0440 /etc/sudoers.d/ceph
+
+
+Configure SSH
+=============
+
+Configure your admin machine with password-less SSH access to each node
+running Ceph daemons (leave the passphrase empty). ::
+
+ ssh-keygen
+ Generating public/private key pair.
+ Enter file in which to save the key (/ceph-client/.ssh/id_rsa):
+ Enter passphrase (empty for no passphrase):
+ Enter same passphrase again:
+ Your identification has been saved in /ceph-client/.ssh/id_rsa.
+ Your public key has been saved in /ceph-client/.ssh/id_rsa.pub.
+
+Copy the key to each node running Ceph daemons::
+
+ ssh-copy-id ceph@ceph-server
+
+Modify the ``~/.ssh/config`` file on your admin node so that it defaults
+to logging in as the user you created when no username is specified. ::
+
+ Host ceph-server
+ Hostname ceph-server.fqdn-or-ip-address.com
+ User ceph
+
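+You can then verify that password-less login works and defaults to the new
+user::
+
+ ssh ceph-server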
+
+Install ceph-deploy
+===================
+
+To install ``ceph-deploy``, execute the following::
+
+ wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
+ echo deb http://ceph.com/debian-dumpling/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
+ sudo apt-get update
+ sudo apt-get install ceph-deploy
+
+
+Ensure Connectivity
+===================
+
+Ensure that your admin node has connectivity to the network and to your server
+node(s) (e.g., configure ``iptables``, ``ufw``, or other tools that may prevent
+connections or traffic forwarding so that they allow what you need).
+
+
+Once you have completed this pre-flight checklist, you are ready to begin using
+``ceph-deploy``.
+
+.. _OS Recommendations: ../../../start/os-recommendations