author    Tim Rozet <trozet@redhat.com>    2017-03-30 22:08:05 -0400
committer Dan Radez <dradez@redhat.com>    2017-03-31 02:33:27 +0000
commit    55427ec21a17cd601a4abb2cab4de4694ba1a702 (patch)
tree      a218adbac3b93b0b71463a7c2b674850725403f4
parent    b4af8e13fd656423fe7275d6f37b7cfee10cd1aa (diff)
More doc updates for Danube
Change-Id: I125548acde034d8c4c85cce93fb001ba06c8ab33
Signed-off-by: Tim Rozet <trozet@redhat.com>
(cherry picked from commit 1d655538cfe9a665c5e12dc8b910ddaaae21bae7)
-rw-r--r--    docs/release/installation/architecture.rst     18
-rw-r--r--    docs/release/installation/baremetal.rst          6
-rw-r--r--    docs/release/installation/virtualinstall.rst    23
-rw-r--r--    docs/release/release-notes/release-notes.rst     3
4 files changed, 38 insertions, 12 deletions
diff --git a/docs/release/installation/architecture.rst b/docs/release/installation/architecture.rst
index ae634cc4..eff5d35d 100644
--- a/docs/release/installation/architecture.rst
+++ b/docs/release/installation/architecture.rst
@@ -30,7 +30,7 @@ Undercloud
----------
The undercloud is not Highly Available. End users do not depend on the
-underloud. It is only for management purposes.
+undercloud. It is only for management purposes.
Overcloud
---------
@@ -47,7 +47,7 @@ will run the following services:
- Ceph Monitors and OSDs
Stateless OpenStack services
- All running statesless OpenStack services are load balanced by HA Proxy.
+ All running stateless OpenStack services are load balanced by HA Proxy.
Pacemaker monitors the services and ensures that they are running.
Stateful OpenStack services
@@ -65,9 +65,13 @@ RabbitMQ
establishment of clustering across cluster members.
OpenDaylight
- OpenDaylight is currently installed on all three control nodes but only
- started on the first control node. OpenDaylight's HA capabilities are not yet
- mature enough to be enabled.
+ OpenDaylight is currently installed on all three control nodes and started as
+ an HA cluster unless otherwise noted for that scenario. OpenDaylight's
+ database, known as MD-SAL, is broken up into "shards". Each shard holds its
+ own election to determine which OpenDaylight node is the leader for that
+ shard; the other OpenDaylight nodes in the cluster remain in standby for
+ that shard. Every Open vSwitch node connects to every OpenDaylight node to
+ enable HA.
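As a rough illustration of the shard elections described above, each shard's
Raft role can be queried through OpenDaylight's Jolokia interface. This is a
sketch only: the port, the ``admin:admin`` credentials, and the ``member-1``
name are common defaults and depend on the deployment's configuration:

    # substitute a real controller address for <controller-ip>
    curl -s -u admin:admin http://<controller-ip>:8181/jolokia/read/org.opendaylight.controller:Category=Shards,name=member-1-shard-default-config,type=DistributedConfigDatastore

    # the "RaftState" field in the JSON reply reads "Leader" on the node
    # that won the election for this shard and "Follower" on the standbys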
HA Proxy
HA Proxy is monitored by Pacemaker to ensure it is running across all nodes
@@ -127,9 +131,9 @@ issues per scenario. The following scenarios correspond to a supported
+-------------------------+-------------+---------------+
| os-nosdn-ovs-noha       | OVS for NFV | Yes           |
+-------------------------+-------------+---------------+
-| os-nosdn-fdio-ha        | FDS         | Yes           |
+| os-nosdn-fdio-ha        | FDS         | No            |
+-------------------------+-------------+---------------+
-| os-nosdn-fdio-noha      | FDS         | Yes           |
+| os-nosdn-fdio-noha      | FDS         | No            |
+-------------------------+-------------+---------------+
| os-nosdn-kvm-ha         | KVM for NFV | Yes           |
+-------------------------+-------------+---------------+
diff --git a/docs/release/installation/baremetal.rst b/docs/release/installation/baremetal.rst
index 83cda326..dcee817a 100644
--- a/docs/release/installation/baremetal.rst
+++ b/docs/release/installation/baremetal.rst
@@ -50,7 +50,7 @@ images provided by the undercloud. These disk images include all the necessary
packages and configuration for an OPNFV deployment to execute. Once the disk
images have been written to the nodes' disks, the nodes will boot locally and
execute cloud-init which will execute the final node configuration. This
-configuration is largly completed by executing a puppet apply on each node.
+configuration is largely completed by executing a puppet apply on each node.
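To spot-check this final stage on a deployed node, one possibility (log
locations assumed typical of the CentOS-based overcloud images) is:

    # on an overcloud node: confirm cloud-init completed and puppet applied
    sudo tail -n 20 /var/log/cloud-init.log
    sudo journalctl --no-pager | grep -i 'puppet' | tail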
Installation High-Level Overview - VM Deployment
================================================
@@ -62,7 +62,7 @@ VM a collection of VMs (3 control nodes + 2 compute for an HA deployment or 1
control node and 1 or more compute nodes for a Non-HA Deployment) will be
defined for the target OPNFV deployment. The part of the toolchain that
executes IPMI power instructions calls into libvirt instead of the IPMI
-interfaces on baremetal servers to operate the power managment. These VMs are
+interfaces on baremetal servers to operate the power management. These VMs are
then provisioned with the same disk images and configuration that baremetal
would be.
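To see this indirection in practice, a sketch assuming a standard TripleO
undercloud (driver names and command output vary by release):

    # on the jumphost: the VMs standing in for baremetal nodes
    sudo virsh list --all

    # on the undercloud VM: the same nodes as the deployment toolchain sees them
    source stackrc
    openstack baremetal node list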
@@ -235,7 +235,7 @@ help you customize them.
(``/etc/opnfv-apex/``). These files are named with the naming convention
os-sdn_controller-enabled_feature-[no]ha.yaml. These files can be used in
place of the (``/etc/opnfv-apex/deploy_settings.yaml``) file if one suits
- your deployment needs. If a pre-built deploy_settings file is choosen there
+ your deployment needs. If a pre-built deploy_settings file is chosen there
is no need to customize (``/etc/opnfv-apex/deploy_settings.yaml``); the
pre-built file is simply used in its place.
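For example, a deployment driven by one of the pre-built files might look
like this (the scenario file name is illustrative; list ``/etc/opnfv-apex/``
to see the set shipped with your release):

    # list the pre-built deploy settings files
    ls /etc/opnfv-apex/os-*.yaml

    # use one of them in place of deploy_settings.yaml
    sudo opnfv-deploy -d /etc/opnfv-apex/os-nosdn-nofeature-ha.yaml \
         -n /etc/opnfv-apex/network_settings.yaml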
diff --git a/docs/release/installation/virtualinstall.rst b/docs/release/installation/virtualinstall.rst
index 5da2ee3c..61fc4be6 100644
--- a/docs/release/installation/virtualinstall.rst
+++ b/docs/release/installation/virtualinstall.rst
@@ -8,7 +8,7 @@ undercloud VM. In addition to the undercloud VM a collection of VMs
or more compute nodes for a non-HA Deployment) will be defined for the target
OPNFV deployment. The part of the toolchain that executes IPMI power
instructions calls into libvirt instead of the IPMI interfaces on baremetal
-servers to operate the power managment. These VMs are then provisioned with
+servers to operate the power management. These VMs are then provisioned with
the same disk images and configuration that baremetal would be. To TripleO
these nodes look as if they had just been built and registered the same way
as bare metal nodes; the main difference is the use of a libvirt driver for
the power
@@ -22,6 +22,25 @@ Installation Guide - Virtual Deployment
This section goes step-by-step on how to correctly install and provision the
OPNFV target system to VM nodes.
+Special Requirements for Virtual Deployments
+--------------------------------------------
+
+In scenarios where advanced performance options or features are used, such
+as huge pages with nova instances, DPDK, or IOMMU, nested KVM support must
+be enabled. This passes the hardware virtualization extensions through to
+the overcloud VMs, which allows the overcloud compute nodes to bring up
+nova instances as KVM guests rather than emulating them with QEMU. Nested
+KVM also provides a significant performance increase in scenarios that do
+not strictly require it, so enabling it is recommended in general.
+
+During deployment the Apex installer will detect whether nested KVM is
+enabled; if it is not, the installer will attempt to enable it, printing a
+warning message if it cannot. Before deployment, check that nested
+virtualization is enabled in the BIOS and that the output of ``cat
+/sys/module/kvm_intel/parameters/nested`` returns "Y". Also verify using
+``lsmod`` that the kvm_intel module is loaded on Intel machines, or that
+kvm_amd is loaded on AMD machines.
+
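As a sketch of that pre-deployment check on an Intel machine (AMD machines
use the kvm_amd module and ``/sys/module/kvm_amd/parameters/nested`` instead):

    # verify nested virtualization is available
    cat /sys/module/kvm_intel/parameters/nested   # expect "Y"
    lsmod | grep kvm                              # expect kvm_intel (or kvm_amd)

    # if nesting is off, reload the module with nesting enabled
    # (only works while no VMs are using the module)
    sudo modprobe -r kvm_intel
    sudo modprobe kvm_intel nested=1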
Install Jumphost
----------------
@@ -32,7 +51,7 @@ Running ``opnfv-deploy``
You are now ready to deploy OPNFV!
``opnfv-deploy`` has virtual deployment capability that includes all of
-the configuration nessesary to deploy OPNFV with no modifications.
+the configuration necessary to deploy OPNFV with no modifications.
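For instance, a stock virtual HA deployment might be launched as follows
(flag spellings per the installer's usage text; confirm with
``opnfv-deploy -h``):

    # virtual deployment using the shipped settings files
    sudo opnfv-deploy -v \
         -n /etc/opnfv-apex/network_settings.yaml \
         -d /etc/opnfv-apex/os-nosdn-nofeature-ha.yaml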
If no modifications are made to the included configurations the target
environment will deploy with the following architecture:
diff --git a/docs/release/release-notes/release-notes.rst b/docs/release/release-notes/release-notes.rst
index 9bb51551..5a292e3c 100644
--- a/docs/release/release-notes/release-notes.rst
+++ b/docs/release/release-notes/release-notes.rst
@@ -215,6 +215,9 @@ Bug Corrections
| JIRA: APEX-406                       | ODL FDIO neutron patches to all      |
|                                      | scenarios                            |
+--------------------------------------+--------------------------------------+
+| JIRA: APEX-407                       | VPP service does not start upon      |
+|                                      | reboot                               |
++--------------------------------------+--------------------------------------+
| JIRA: APEX-408                       | Quagga's bgpd cannot start due to    |
|                                      | permissions                          |
+--------------------------------------+--------------------------------------+