Diffstat (limited to 'docs/release/scenarios/kvmfornfv.scenarios.description.rst')
-rw-r--r--  docs/release/scenarios/kvmfornfv.scenarios.description.rst | 423
1 file changed, 395 insertions(+), 28 deletions(-)
diff --git a/docs/release/scenarios/kvmfornfv.scenarios.description.rst b/docs/release/scenarios/kvmfornfv.scenarios.description.rst
index 5a5328666..bc8c585e7 100644
--- a/docs/release/scenarios/kvmfornfv.scenarios.description.rst
+++ b/docs/release/scenarios/kvmfornfv.scenarios.description.rst
@@ -29,10 +29,10 @@ Version Features
+-----------------------------+---------------------------------------------+
| | - High Availability/No-High Availability |
| | deployment configuration of KVM4NFV |
-| | software suite |
-| Danube | - Multi-node setup with 3 controller and |
+| | software suite using Fuel |
+| | - Multi-node setup with 3 controller and |
| | 2 compute nodes are deployed for HA |
-| | - Multi-node setup with 1 controller and |
+| Danube | - Multi-node setup with 1 controller and |
| | 3 compute nodes are deployed for NO-HA |
| | - Scenarios os-nosdn-kvm_ovs_dpdk-ha, |
| | os-nosdn-kvm_ovs_dpdk_bar-ha, |
@@ -40,6 +40,18 @@ Version Features
| | os-nosdn-kvm_ovs_dpdk_bar-noha |
| | are supported |
+-----------------------------+---------------------------------------------+
+| | - High Availability/No-High Availability |
+| | deployment configuration of KVM4NFV |
+| | software suite using Apex |
+| | - Multi-node setup with 3 controller and |
+| Euphrates | 2 compute nodes are deployed for HA |
+| | - Multi-node setup with 1 controller and |
+| | 1 compute node are deployed for NO-HA |
+| | - Scenarios os-nosdn-kvm_ovs_dpdk-ha, |
+| | os-nosdn-kvm_ovs_dpdk-noha, |
+| | are supported |
++-----------------------------+---------------------------------------------+
+
Introduction
@@ -53,14 +65,18 @@ This OPNFV software suite includes OPNFV KVM4NFV latest software packages
for Linux Kernel and QEMU patches for achieving low latency and also OPNFV Barometer for traffic,
performance and platform monitoring.
-High Availability feature is achieved by deploying OpenStack
-multi-node setup with 1 Fuel-Master,3 controllers and 2 computes nodes.
+When using the Fuel installer, the High Availability feature is achieved by deploying an OpenStack
+multi-node setup with 1 Fuel-Master, 3 controllers and 2 compute nodes. The No-High Availability
+feature is achieved by deploying an OpenStack multi-node setup with 1 Fuel-Master, 1 controller
+and 3 compute nodes.
-No-High Availability feature is achieved by deploying OpenStack
-multi-node setup with 1 Fuel-Master,1 controllers and 3 computes nodes.
+When using the Apex installer, the High Availability feature is achieved by deploying an OpenStack
+multi-node setup with 1 undercloud, 3 overcloud controllers and 2 overcloud compute nodes.
+The No-High Availability feature is achieved by deploying an OpenStack multi-node setup with
+1 undercloud, 1 overcloud controller and 1 overcloud compute node.
KVM4NFV packages will be installed on compute nodes as part of deployment.
-The scenario testcase deploys a multi-node setup by using OPNFV Fuel deployer.
+The scenario testcase deploys a multi-node setup by using the OPNFV Fuel and Apex deployers.
System pre-requisites
---------------------
@@ -103,11 +119,26 @@ If Nested virtualization is disabled, enable it by,
Environment Setup
-----------------
+**Enable network access after the installation**
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+For **CentOS**,
+log in as the "root" user. After the installation completes, the Ethernet interfaces are not enabled
+by default in CentOS 7; you need to change the line "ONBOOT=no" to "ONBOOT=yes" in the network
+interface configuration file (such as ifcfg-enp6s0f0 or ifcfg-em1 … whichever you want to connect)
+in the /etc/sysconfig/network-scripts sub-directory. The default BOOTPROTO is dhcp in the network
+interface configuration file. Then use the following command to enable network access:
+
+.. code:: bash
+
+ systemctl restart network
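+
+If preferred, the ONBOOT change itself can be scripted; the following is a minimal sketch, assuming
+the interface file is ifcfg-enp6s0f0 (substitute the name of your own interface):
+
+.. code:: bash
+
+ # hypothetical example: enable the interface at boot (interface name is an assumption)
+ sed -i 's/^ONBOOT=no/ONBOOT=yes/' /etc/sysconfig/network-scripts/ifcfg-enp6s0f0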
+
**Configuring Proxy**
~~~~~~~~~~~~~~~~~~~~~
For **Ubuntu**,
-Create an apt.conf file in /etc/apt if it doesn't exist. Used to set proxy for apt-get if working behind a proxy server.
+Create an apt.conf file in /etc/apt if it doesn't exist. It is used to set a proxy for apt-get when
+working behind a proxy server.
.. code:: bash
@@ -123,6 +154,124 @@ Edit /etc/yum.conf to work behind a proxy server by adding the below line.
$ echo "proxy=http://<username>:<password>@<proxy>:<port>/" >> /etc/yum.conf
+**Install redsocks**
+~~~~~~~~~~~~~~~~~~~~
+
+For **CentOS**,
+since there is no redsocks package for CentOS Linux release 7.2.1511, you need to build redsocks from
+source yourself. Use the following commands to create a “proxy_redsocks” sub-directory at /root:
+
+.. code:: bash
+
+ cd ~
+ mkdir proxy_redsocks
+
+Since you can’t download files on your CentOS system yet, use another CentOS or Ubuntu system and the
+following command to download the redsocks source for CentOS into a file “redsocks-src”:
+
+.. code:: bash
+
+ wget -O redsocks-src --no-check-certificate https://github.com/darkk/redsocks/zipball/master
+
+Also download libevent-devel-2.0.21-4.el7.x86_64.rpm by:
+
+.. code:: bash
+
+ wget ftp://fr2.rpmfind.net/linux/centos/7.2.1511/os/x86_64/Packages/libevent-devel-2.0.21-4.el7.x86_64.rpm
+
+Copy both the redsocks-src and libevent-devel-2.0.21-4.el7.x86_64.rpm files into ~/proxy_redsocks on
+your CentOS system using “scp”.
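+
+A minimal sketch of that copy, run from the machine that downloaded the files and assuming the CentOS
+system is reachable as root at a hypothetical host <centos-host>:
+
+.. code:: bash
+
+ scp redsocks-src libevent-devel-2.0.21-4.el7.x86_64.rpm root@<centos-host>:/root/proxy_redsocks/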
+
+Back on your CentOS system, first install libevent-devel using libevent-devel-2.0.21-4.el7.x86_64.rpm
+as below:
+
+.. code:: bash
+
+ cd ~/proxy_redsocks
+ yum install -y libevent-devel-2.0.21-4.el7.x86_64.rpm
+
+Build redsocks by:
+
+.. code:: bash
+
+ cd ~/proxy_redsocks
+ unzip redsocks-src
+ cd darkk-redsocks-78a73fc
+ yum -y install gcc
+ make
+ cp redsocks ~/proxy_redsocks/.
+
+Create a redsocks.conf in ~/proxy_redsocks with the following contents:
+
+.. code:: bash
+
+ base {
+ log_debug = on;
+ log_info = on;
+ log = "file:/root/proxy.log";
+ daemon = on;
+ redirector = iptables;
+ }
+ redsocks {
+ local_ip = 0.0.0.0;
+ local_port = 6666;
+ // socks5 proxy server
+ ip = <proxy>;
+ port = 1080;
+ type = socks5;
+ }
+ redudp {
+ local_ip = 0.0.0.0;
+ local_port = 8888;
+ ip = <proxy>;
+ port = 1080;
+ }
+ dnstc {
+ local_ip = 127.0.0.1;
+ local_port = 5300;
+ }
+
+Start the redsocks service by:
+
+.. code:: bash
+
+ cd ~/proxy_redsocks
+ ./redsocks -c redsocks.conf
+
+*Note*
+The redsocks service is not persistent and you need to execute the above-mentioned commands after
+every reboot.
+
+Create intc-proxy.sh in ~/proxy_redsocks with the following contents and make it executable with
+“chmod +x intc-proxy.sh”:
+
+.. code:: bash
+
+ iptables -t nat -N REDSOCKS
+ iptables -t nat -A REDSOCKS -d 0.0.0.0/8 -j RETURN
+ iptables -t nat -A REDSOCKS -d 10.0.0.0/8 -j RETURN
+ iptables -t nat -A REDSOCKS -d 127.0.0.0/8 -j RETURN
+ iptables -t nat -A REDSOCKS -d 169.254.0.0/16 -j RETURN
+ iptables -t nat -A REDSOCKS -d 172.16.0.0/12 -j RETURN
+ iptables -t nat -A REDSOCKS -d 192.168.0.0/16 -j RETURN
+ iptables -t nat -A REDSOCKS -d 224.0.0.0/4 -j RETURN
+ iptables -t nat -A REDSOCKS -d 240.0.0.0/4 -j RETURN
+ iptables -t nat -A REDSOCKS -p tcp -j REDIRECT --to-ports 6666
+ iptables -t nat -A REDSOCKS -p udp -j REDIRECT --to-ports 8888
+ iptables -t nat -A OUTPUT -p tcp -j REDSOCKS
+ iptables -t nat -A PREROUTING -p tcp -j REDSOCKS
+
+Enable the REDSOCKS nat chain rules by:
+
+.. code:: bash
+
+ cd ~/proxy_redsocks
+ ./intc-proxy.sh
+
+*Note*
+These REDSOCKS nat chain rules are not persistent and you need to execute the above-mentioned
+commands after every reboot.
+
**Network Time Protocol (NTP) setup and configuration**
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -133,7 +282,8 @@ Install ntp by:
$ sudo apt-get update
$ sudo apt-get install -y ntp
-Insert the following two lines after “server ntp.ubuntu.com” line and before “ # Access control configuration; see `link`_ for” line in /etc/ntp.conf file:
+Insert the following two lines after “server ntp.ubuntu.com” line and before “ # Access control
+configuration; see `link`_ for” line in /etc/ntp.conf file:
.. _link: /usr/share/doc/ntp-doc/html/accopt.html
@@ -153,8 +303,9 @@ Scenario Testing
-There are three ways of performing scenario testing,
+There are four ways of performing scenario testing,
- 1 Fuel
- - 2 OPNFV-Playground
- - 3 Jenkins Project
+ - 2 Apex
+ - 3 OPNFV-Playground
+ - 4 Jenkins Project
Fuel
~~~~
@@ -235,14 +386,16 @@ To include fuel plugins in the deployment configuration file, use the “stack-e
#module-config overrides
**Note:**
-The “module-config-name” and “module-config-version” should be same as the name of plugin configuration file.
+The “module-config-name” and “module-config-version” should be the same as the name of the plugin
+configuration file.
The “module-config-override” is used to configure the plugin by overriding the corresponding keys in
the plugin config yaml file present in ~/fuel/deploy/config/plugins/.
``(iv). “dea-override-config” Module``
-To configure the HA/No-HA mode, network segmentation types and role to node assignments, use the “dea-override-config” key.
+To configure the HA/No-HA mode, network segmentation types and role to node assignments, use the
+“dea-override-config” key.
.. code:: bash
@@ -271,16 +424,22 @@ To configure the HA/No-HA mode, network segmentation types and role to node assi
editable:
storage:
ephemeral_ceph:
- description: Configures Nova to store ephemeral volumes in RBD. This works best if Ceph is enabled for volumes and images, too. Enables live migration of all types of Ceph backed VMs (without this option, live migration will only work with VMs launched from Cinder volumes).
+ description: Configures Nova to store ephemeral volumes in RBD.
+ This works best if Ceph is enabled for volumes and images, too.
+ Enables live migration of all types of Ceph backed VMs (without this
+ option, live migration will only work with VMs launched from
+ Cinder volumes).
label: Ceph RBD for ephemeral volumes (Nova)
type: checkbox
value: true
weight: 75
images_ceph:
- description: Configures Glance to use the Ceph RBD backend to store images.If enabled, this option will prevent Swift from installing.
+ description: Configures Glance to use the Ceph RBD backend to store
+ images.If enabled, this option will prevent Swift from installing.
label: Ceph RBD for images (Glance)
restrictions:
- - settings:storage.images_vcenter.value == true: Only one Glance backend could be selected.
+ - settings:storage.images_vcenter.value == true: Only one Glance
+ backend could be selected.
type: checkbox
value: true
weight: 30
This is an optional key present at the end of the scenario file.
``(vi). Mapping to short scenario name``
-The scenario.yaml file is used to map the short names of scenario's to the one or more deployment scenario configuration yaml files.
+The scenario.yaml file is used to map the short names of scenarios to one or more deployment
+scenario configuration yaml files.
The short scenario names should follow the scheme below:
.. code:: bash
@@ -306,7 +466,8 @@ The short scenario names should follow the scheme below:
[os]: mandatory
possible value: os
-Please note that this field is needed in order to select parent jobs to list and do blocking relations between them.
+Please note that this field is needed in order to select parent jobs to list and to set up blocking
+relations between them.
.. code:: bash
@@ -349,7 +510,8 @@ Command to deploy the os-nosdn-kvm_ovs_dpdk-ha scenario:
.. code:: bash
$ cd ~/fuel/ci/
- $ sudo ./deploy.sh -f -b file:///tmp/opnfv-fuel/deploy/config -l devel-pipeline -p default -s ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml -i file:///tmp/opnfv.iso
+ $ sudo ./deploy.sh -f -b file:///tmp/opnfv-fuel/deploy/config -l devel-pipeline -p default \
+ -s ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml -i file:///tmp/opnfv.iso
where,
``-b`` is used to specify the configuration directory
@@ -370,6 +532,203 @@ where,
Check $ sudo ./deploy.sh -h for further information.
+Apex
+~~~~
+
+The Apex installer uses CentOS as the platform.
+
+**1 Install Packages :**
+
+Install the necessary packages by running the following:
+
+.. code:: bash
+
+ cd ~
+ yum install -y git rpm-build python-setuptools python-setuptools-devel
+ yum install -y epel-release gcc
+ curl -O https://bootstrap.pypa.io/get-pip.py
+ yum install -y python3 python34
+ /usr/bin/python3.4 get-pip.py
+ yum install -y python34-devel python34-setuptools
+ yum install -y libffi-devel python-devel openssl-devel
+ yum -y install libxslt-devel libxml2-devel
+
+Then you can use “dev_deploy_check.sh” in the Apex installer source to install the remaining necessary
+packages by running the following:
+
+.. code:: bash
+
+ cd ~
+ git clone https://gerrit.opnfv.org/gerrit/p/apex.git
+ export CONFIG=$(pwd)/apex/build
+ export LIB=$(pwd)/apex/lib
+ export PYTHONPATH=$PYTHONPATH:$(pwd)/apex/lib/python
+ cd apex/ci
+ ./dev_deploy_check.sh
+ yum install -y python2-oslo-config python2-debtcollector
+
+
+**2 Create ssh key :**
+
+Use the following commands to create an ssh key; when asked for a passphrase, just press Enter for an
+empty passphrase:
+
+.. code:: bash
+
+ cd ~
+ ssh-keygen -t rsa
+
+Then prepare the authorized_keys for Apex scenario deployment:
+
+.. code:: bash
+
+ cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
+
+**3 Create default pool :**
+
+Use the following command to define the default pool device:
+
+.. code:: bash
+
+ cd ~
+ virsh pool-define /dev/stdin <<EOF
+ <pool type='dir'>
+ <name>default</name>
+ <target>
+ <path>/var/lib/libvirt/images</path>
+ </target>
+ </pool>
+ EOF
+
+Use the following commands to start the default pool device and set it to autostart:
+
+.. code:: bash
+
+ virsh pool-start default
+ virsh pool-autostart default
+
+Use the following commands to verify that the default pool device was created, started, and set to
+autostart:
+
+.. code:: bash
+
+ virsh pool-list
+ virsh pool-info default
+
+**4 Get Apex source code :**
+
+Get Apex installer source code:
+
+.. code:: bash
+
+ git clone https://gerrit.opnfv.org/gerrit/p/apex.git
+ cd apex
+
+**5 Modify code to work behind proxy :**
+
+In the “lib” sub-directory of the Apex source, change line 284 “if ping -c 2 www.google.com > /dev/null;
+then” to “if curl www.google.com > /dev/null; then” in the “common-functions.sh” file, since we can’t
+ping www.google.com behind the Intel proxy.
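+
+Assuming the quoted line appears verbatim in lib/common-functions.sh, the edit could be made with a
+one-liner such as the following sketch:
+
+.. code:: bash
+
+ cd ~/apex/lib
+ # replace the ping-based connectivity check with a curl-based one, as described above
+ sed -i 's|if ping -c 2 www.google.com > /dev/null; then|if curl www.google.com > /dev/null; then|' common-functions.sh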
+
+**6 Setup build environment :**
+
+Set up the build environment by:
+
+.. code:: bash
+
+ cd ~
+ export BASE=$(pwd)/apex/build
+ export LIB=$(pwd)/apex/lib
+ export PYTHONPATH=$PYTHONPATH:$(pwd)/apex/lib/python
+ export IMAGES=$(pwd)/apex/.build
+
+**7 Build Apex installer :**
+
+Build undercloud image by:
+
+.. code:: bash
+
+ cd ~/apex/build
+ make images-clean
+ make undercloud
+
+You can look at the targets in ~/apex/build/Makefile to build an image for a specific feature.
+The following shows how to build the vanilla ODL image (this can be used as the overcloud image for
+the basic (nosdn-nofeature) and opendaylight test scenarios):
+
+.. code:: bash
+
+ cd ~/apex/build
+ make overcloud-opendaylight
+
+You can build the complete set of images (undercloud, overcloud-full, overcloud-opendaylight,
+overcloud-onos) by:
+
+.. code:: bash
+
+ cd ~/apex/build
+ make images
+
+**8 Modification of network_settings.yaml :**
+
+Since we are working behind a proxy, we need to modify the network_settings.yaml in ~/apex/config/network
+to make the deployment work properly. In order to avoid checking our modifications into the repo
+accidentally, it is recommended that you copy “network_settings.yaml” to “intc_network_settings.yaml”
+in ~/apex/config/network and then make the modifications below in intc_network_settings.yaml.
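+
+A minimal sketch of the copy step:
+
+.. code:: bash
+
+ cd ~/apex/config/network
+ cp network_settings.yaml intc_network_settings.yaml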
+
+Change the dns_servers setting from
+
+.. code:: bash
+
+ dns_servers: ["8.8.8.8", "8.8.4.4"]
+
+to
+
+.. code:: bash
+
+ dns_servers: ["<ip-address>"]
+
+Also, you need to modify deploy.sh in apex/ci from “ntp_server="pool.ntp.org"” to
+“ntp_server="<ip-address>"” to reflect the fact that we can’t reach an outside NTP server and just
+use the local time.
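+
+A minimal sketch of that change, keeping the <ip-address> placeholder for your local NTP server:
+
+.. code:: bash
+
+ # hypothetical one-liner; substitute <ip-address> before running
+ sed -i 's|ntp_server="pool.ntp.org"|ntp_server="<ip-address>"|' ~/apex/ci/deploy.sh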
+
+**9 Commands to deploy scenario :**
+
+The following commands deploy the os-nosdn-kvm_ovs_dpdk-noha scenario behind the proxy:
+
+.. code:: bash
+
+ cd ~/apex/ci
+ ./clean.sh
+ ./dev_deploy_check.sh
+ ./deploy.sh -v --ping-site <ping_ip-address> --dnslookup-site <dns_ip-address> -n \
+ ~/apex/config/network/intc_network_settings.yaml -d \
+ ~/apex/config/deploy/os-nosdn-kvm_ovs_dpdk-noha.yaml
+
+**10 Accessing the Overcloud dashboard :**
+
+If the deployment completes successfully, the last few output lines from the deployment will look
+like the following:
+
+.. code:: bash
+
+ INFO: Undercloud VM has been setup to NAT Overcloud public network
+ Undercloud IP: <ip-address>, please connect by doing 'opnfv-util undercloud'
+ Overcloud dashboard available at http://<ip-address>/dashboard
+ INFO: Post Install Configuration Complete
+
+**11 Accessing the Undercloud and Overcloud through command line :**
+
+At the end of the deployment we obtain the Undercloud IP. One can log in to the Undercloud and obtain
+the Overcloud IP as follows:
+
+.. code:: bash
+
+ cd ~/apex/ci/
+ ./util.sh undercloud
+ source stackrc
+ nova list
+ ssh heat-admin@<overcloud-ip>
+
OPNFV-Playground
~~~~~~~~~~~~~~~~
@@ -382,8 +741,10 @@ Install OPNFV-playground (the tool chain to deploy/test CI scenarios in fuel@opn
$ git clone https://github.com/jonasbjurel/OPNFV-Playground.git
$ cd OPNFV-Playground/ci_fuel_opnfv/
-- Follow the README.rst in this ~/OPNFV-Playground/ci_fuel_opnfv sub-holder to complete all necessary installation and setup.
-- Section “RUNNING THE PIPELINE” in README.rst explain how to use this ci_pipeline to deploy/test CI test scenarios, you can also use
+- Follow the README.rst in the ~/OPNFV-Playground/ci_fuel_opnfv sub-folder to complete all necessary
+  installation and setup.
+- Section “RUNNING THE PIPELINE” in README.rst explains how to use this ci_pipeline to deploy/test CI
+  test scenarios; you can also use
.. code:: bash
@@ -393,14 +754,16 @@ Install OPNFV-playground (the tool chain to deploy/test CI scenarios in fuel@opn
``1 Downgrade paramiko package from 2.x.x to 1.10.0``
-The paramiko package 2.x.x doesn’t work with OPNFV-playground tool chain now, Jira ticket FUEL - 188 has been raised for the same.
+The paramiko package 2.x.x doesn’t work with the OPNFV-Playground tool chain at present; Jira ticket
+FUEL-188 has been raised for this issue.
Check the paramiko package version by following the steps below on your system:
.. code:: bash
$ python
- Python 2.7.6 (default, Jun 22 2015, 17:58:13) [GCC 4.8.2] on linux2 Type "help", "copyright", "credits" or "license" for more information.
+ Python 2.7.6 (default, Jun 22 2015, 17:58:13) [GCC 4.8.2] on linux2 Type "help", "copyright",
+ "credits" or "license" for more information.
>>> import paramiko
>>> print paramiko.__version__
@@ -448,7 +811,8 @@ Implement the scenario file as described in 3.1.4
``4 Deploying the scenario``
-You can use the following command to deploy/test os-nosdn kvm_ovs_dpdk-(no)ha and os-nosdn-kvm_ovs_dpdk_bar-(no)ha scenario
+You can use the following command to deploy/test the os-nosdn-kvm_ovs_dpdk-(no)ha and
+os-nosdn-kvm_ovs_dpdk_bar-(no)ha scenarios
.. code:: bash
@@ -482,12 +846,15 @@ Note:
Jenkins Project
~~~~~~~~~~~~~~~
-os-nosdn-kvm_ovs_dpdk-(no)ha and os-nosdn-kvm_ovs_dpdk_bar-(no)ha scenario can be executed from the jenkins project :
+The os-nosdn-kvm_ovs_dpdk-(no)ha and os-nosdn-kvm_ovs_dpdk_bar-(no)ha scenarios can be executed from
+the following Jenkins projects:
``HA scenarios:``
1. "fuel-os-nosdn-kvm_ovs_dpdk-ha-baremetal-daily-master" (os-nosdn-kvm_ovs_dpdk-ha)
2. "fuel-os-nosdn-kvm_ovs_dpdk_bar-ha-baremetal-daily-master" (os-nosdn-kvm_ovs_dpdk_bar-ha)
+ 3. "apex-os-nosdn-kvm_ovs_dpdk-ha-baremetal-master" (os-nosdn-kvm_ovs_dpdk-ha)
``NOHA scenarios:``
- 1. "fuel-os-nosdn-kvm_ovs_dpdk-noha-virtual-daily-master" (os-nosdn-kvm_ovs_dpdk-noha)
- 2. "fuel-os-nosdn-kvm_ovs_dpdk_bar-noha-virtual-daily-master" (os-nosdn-kvm_ovs_dpdk_bar-noha)
+ 1. "fuel-os-nosdn-kvm_ovs_dpdk-noha-virtual-daily-master" (os-nosdn-kvm_ovs_dpdk-noha)
+ 2. "fuel-os-nosdn-kvm_ovs_dpdk_bar-noha-virtual-daily-master" (os-nosdn-kvm_ovs_dpdk_bar-noha)
+ 3. "apex-os-nosdn-kvm_ovs_dpdk-noha-baremetal-master" (os-nosdn-kvm_ovs_dpdk-noha)