-rw-r--r--  docs/release/installation/index.rst             144
-rw-r--r--  docs/release/release-notes/release-notes.rst     10
-rw-r--r--  requirements.txt                                  2
-rw-r--r--  sdnvpn/artifacts/quagga_setup.sh                  4
-rw-r--r--  sdnvpn/artifacts/testcase_1bis.yaml             234
-rw-r--r--  sdnvpn/artifacts/testcase_2bis.yaml             289
-rw-r--r--  sdnvpn/artifacts/testcase_4bis.yaml             247
-rw-r--r--  sdnvpn/artifacts/testcase_8bis.yaml             173
-rw-r--r--  sdnvpn/artifacts/testcase_8bis_upd.yaml          17
-rw-r--r--  sdnvpn/lib/openstack_utils.py                    82
-rw-r--r--  sdnvpn/lib/quagga.py                             16
-rw-r--r--  sdnvpn/lib/utils.py                             347
-rwxr-xr-x  sdnvpn/sh_utils/fetch-log-script.sh              12
-rw-r--r--  sdnvpn/test/functest/config.yaml                112
-rw-r--r--  sdnvpn/test/functest/testcase_1bis.py           209
-rw-r--r--  sdnvpn/test/functest/testcase_2bis.py           188
-rw-r--r--  sdnvpn/test/functest/testcase_3.py              112
-rw-r--r--  sdnvpn/test/functest/testcase_4bis.py           215
-rw-r--r--  sdnvpn/test/functest/testcase_8bis.py           176
19 files changed, 2274 insertions, 315 deletions
diff --git a/docs/release/installation/index.rst b/docs/release/installation/index.rst
index ded6a78..089bc55 100644
--- a/docs/release/installation/index.rst
+++ b/docs/release/installation/index.rst
@@ -31,7 +31,9 @@ spec>.
When ODL is used as an SDN Controller in an OPNFV virtual deployment, ODL is
running on the OpenStack Controller VMs. It is therefore recommended to
-increase the amount of resources for these VMs.
+increase the amount of resources for these VMs. In the case of Fuel, ODL runs
+in a separate VM; thus, the recommendation below is not applicable when
+deploying the scenario with the Fuel installer.
Our recommendation is to have 2 additional virtual cores and 8GB
additional virtual memory on top of the normally recommended
@@ -50,11 +52,11 @@ Installation using Fuel installer
Preparing the host to install Fuel by script
============================================
-.. Not all of these options are relevant for all scenarios. I advise following the
+.. Not all of these options are relevant for all scenarios. I advise following the
.. instructions applicable to the deploy tool used in the scenario.
-Before starting the installation of the os-odl-bgpnvp scenario some
-preparation of the machine that will host the Fuel VM must be done.
+Before starting the installation of the os-odl-bgpvpn-noha scenario, the
+following preparation must be done on the machine that will host the Fuel VM.
Installation of required packages
@@ -64,17 +66,8 @@ Jumphost (or the host which serves the VMs for the virtual deployment) needs to
install the following packages:
::
- sudo apt-get install -y git make curl libvirt-bin libpq-dev qemu-kvm \
- qemu-system tightvncserver virt-manager sshpass \
- fuseiso genisoimage blackbox xterm python-pip \
- python-git python-dev python-oslo.config \
- python-pip python-dev libffi-dev libxml2-dev \
- libxslt1-dev libffi-dev libxml2-dev libxslt1-dev \
- expect curl python-netaddr p7zip-full
-
- sudo pip install GitPython pyyaml netaddr paramiko lxml scp \
- python-novaclient python-neutronclient python-glanceclient \
- python-keystoneclient debtcollector netifaces enum
+ sudo apt-get install -y git make curl libvirt-bin qemu-kvm \
+ python-pip python-dev
Download the source code and artifact
-------------------------------------
@@ -91,131 +84,32 @@ To check out a specific version of OPNFV, checkout the appropriate branch:
cd fuel
git checkout stable/gambia
-Now download the corresponding OPNFV Fuel ISO into an appropriate folder from
-the website https://www.opnfv.org/software/downloads/release-archives
-
-Have in mind that the fuel repo version needs to map with the downloaded
-artifact. Note: it is also possible to build the Fuel image using the
-tools found in the fuel git repository, but this is out of scope of the
-procedure described here. Check the Fuel project documentation for more
-information on building the Fuel ISO.
-
-
Simplified scenario deployment procedure using Fuel
===================================================
-This section describes the installation of the os-odl-bgpvpn-ha or
+This section describes the installation of the
os-odl-bgpvpn-noha OPNFV reference platform stack across a server cluster
or a single host as a virtual deployment.
-Scenario Preparation
---------------------
-dea.yaml and dha.yaml need to be copied and changed according to the lab-name/host
-where you deploy.
-Copy the full lab config from:
-::
-
- cp <path-to-opnfv-fuel-repo>/deploy/config/labs/devel-pipeline/elx \
- <path-to-opnfv-fuel-repo>/deploy/config/labs/devel-pipeline/<your-lab-name>
-
-Add at the bottom of dha.yaml
-::
-
- disks:
- fuel: 100G
- controller: 100G
- compute: 100G
-
- define_vms:
- controller:
- vcpu:
- value: 4
- memory:
- attribute_equlas:
- unit: KiB
- value: 16388608
- currentMemory:
- attribute_equlas:
- unit: KiB
- value: 16388608
-
-
-Check if the default settings in dea.yaml are in line with your intentions
-and make changes as required.
-
Installation procedures
-----------------------
-We describe several alternative procedures in the following.
-First, we describe several methods that are based on the deploy.sh script,
-which is also used by the OPNFV CI system.
-It can be found in the Fuel repository.
-
-In addition, the SDNVPN feature can also be configured manually in the Fuel GUI.
-This is described in the last subsection.
-
-Before starting any of the following procedures, go to
-::
-
- cd <opnfv-fuel-repo>/ci
-
-Full automatic virtual deployment High Availablity Mode
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-The following command will deploy the high-availability flavor of SDNVPN scenario os-odl-bgpvpn-ha
-in a fully automatic way, i.e. all installation steps (Fuel server installation, configuration,
-node discovery and platform deployment) will take place without any further prompt for user input.
-::
-
- sudo bash ./deploy.sh -b file://<path-to-opnfv-fuel-repo>/config/ -l devel-pipeline -p <your-lab-name> -s os-odl_l2-bgpvpn-ha -i file://<path-to-fuel-iso>
+This chapter describes how to deploy the scenario using the deploy.sh script,
+which is also used by the OPNFV CI system. The script can be found in the Fuel
+repository.
Full automatic virtual deployment NO High Availability Mode
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-The following command will deploy the SDNVPN scenario in its non-high-availability flavor (note the
-different scenario name for the -s switch). Otherwise it does the same as described above.
-::
-
- sudo bash ./deploy.sh -b file://<path-to-opnfv-fuel-repo>/config/ -l devel-pipeline -p <your-lab-name> -s os-odl_l2-bgpvpn-noha -i file://<path-to-fuel-iso>
-
-Automatic Fuel installation and manual scenario deployment
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-A useful alternative to the full automatic procedure is to only autodeploy the Fuel host and to run host selection, role assignment and SDNVPN scenario configuration manually.
+The following command will deploy the SDNVPN scenario in its non-high-availability flavor.
::
- sudo bash ./deploy.sh -b file://<path-to-opnfv-fuel-repo>/config/ -l devel-pipeline -p <your-lab-name> -s os-odl_l2-bgpvpn-ha -i file://<path-to-fuel-iso> -e
-
-With -e option the installer does not launch environment deployment, so
-a user can do some modification before the scenario is really deployed.
-Another interesting option is the -f option which deploys the scenario using an existing Fuel host.
-
-The result of this installation is a fuel sever with the right config for
-BGPVPN. Now the deploy button on fuel dashboard can be used to deploy the environment.
-It is as well possible to do the configuration manuell.
-
-Feature configuration on existing Fuel
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-If a Fuel server is already provided but the fuel plugins for Opendaylight, Openvswitch
-and BGPVPN are not provided install them by:
-::
-
- cd /opt/opnfv/
- fuel plugins --install fuel-plugin-ovs-*.noarch.rpm
- fuel plugins --install opendaylight-*.noarch.rpm
- fuel plugins --install bgpvpn-*.noarch.rpm
-
-If plugins are installed and you want to update them use --force flag.
-
-Now the feature can be configured. Create a new environment with "Neutron with ML2 plugin" and
-in there "Neutron with tunneling segmentation".
-Go to Networks/Settings/Other and check "Assign public network to all nodes". This is required for
-features such as floating IP, which require the Compute hosts to have public interfaces.
-Then go to settings/other and check "OpenDaylight plugin", "Use ODL to manage L3 traffic",
-"BGPVPN plugin" and set the OpenDaylight package version to "5.2.0-1". Then you should
-be able to check "BGPVPN extensions" in OpenDaylight plugin section.
-
-Now the deploy button on fuel dashboard can be used to deploy the environment.
+ ci/deploy.sh -l <lab_name> \
+ -p <pod_name> \
+ -b <URI to configuration repo containing the PDF file> \
+ -s os-odl-bgpvpn-noha \
+ -D \
+ -S <Storage directory for disk images> |& tee deploy.log
Virtual deployment using Apex installer
=======================================
diff --git a/docs/release/release-notes/release-notes.rst b/docs/release/release-notes/release-notes.rst
index 54ada98..a5b671c 100644
--- a/docs/release/release-notes/release-notes.rst
+++ b/docs/release/release-notes/release-notes.rst
@@ -39,16 +39,16 @@ Release Data
| **Project** | sdnvpn |
| | |
+--------------------------------------+-------------------------------------------+
-| **Repo/tag** | opnfv-7.0.0 |
+| **Repo/tag** | opnfv-7.1.0 |
| | |
+--------------------------------------+-------------------------------------------+
-| **Release designation** | Gambia 7.0 |
+| **Release designation** | Gambia 7.1 |
| | |
+--------------------------------------+-------------------------------------------+
-| **Release date** | Nov 02 2018 |
+| **Release date** | Dec 14, 2018 |
| | |
+--------------------------------------+-------------------------------------------+
-| **Purpose of the delivery** | New test cases |
+| **Purpose of the delivery** | OPNFV Gambia 7.1 release |
| | |
+--------------------------------------+-------------------------------------------+
@@ -156,4 +156,4 @@ References
==========
.. [0] https://jira.opnfv.org/projects/SDNVPN/issues/SDNVPN-94
.. [1] https://jira.opnfv.org/projects/SDNVPN/issues/SDNVPN-99
-.. [2] https://jira.opendaylight.org/browse/NETVIRT-932 \ No newline at end of file
+.. [2] https://jira.opendaylight.org/browse/NETVIRT-932
diff --git a/requirements.txt b/requirements.txt
index 1979004..252b214 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -6,8 +6,6 @@ requests # Apache-2.0
opnfv
PyYAML # MIT
networking-bgpvpn>=7.0.0 # Apache-2.0
-python-cinderclient!=4.0.0 # Apache-2.0
-python-heatclient # Apache-2.0
python-keystoneclient!=2.1.0 # Apache-2.0
python-neutronclient # Apache-2.0
xtesting # Apache-2.0
diff --git a/sdnvpn/artifacts/quagga_setup.sh b/sdnvpn/artifacts/quagga_setup.sh
index b3bf786..c6e6a9c 100644
--- a/sdnvpn/artifacts/quagga_setup.sh
+++ b/sdnvpn/artifacts/quagga_setup.sh
@@ -35,10 +35,10 @@ done
if [ -z "$quagga_int" ]
then
echo 'No available network interface'
-fi
-
+else
ip link set $quagga_int up
ip addr add $OWN_IP/$EXT_NET_MASK dev $quagga_int
+fi
# Download quagga/zrpc rpms
cd /root
diff --git a/sdnvpn/artifacts/testcase_1bis.yaml b/sdnvpn/artifacts/testcase_1bis.yaml
new file mode 100644
index 0000000..f269943
--- /dev/null
+++ b/sdnvpn/artifacts/testcase_1bis.yaml
@@ -0,0 +1,234 @@
+heat_template_version: 2013-05-23
+
+description: >
+ Template for SDNVPN testcase 1
+ VPN provides connectivity between subnets
+
+parameters:
+ flavor:
+ type: string
+ description: flavor for the servers to be created
+ constraints:
+ - custom_constraint: nova.flavor
+ image_n:
+ type: string
+ description: image for the servers to be created
+ constraints:
+ - custom_constraint: glance.image
+ av_zone_1:
+ type: string
+ description: availability zone 1
+ av_zone_2:
+ type: string
+ description: availability zone 2
+
+ net_1_name:
+ type: string
+ description: network 1
+ subnet_1_name:
+ type: string
+ description: subnet 1 name
+ subnet_1_cidr:
+ type: string
+ description: subnet 1 cidr
+ net_2_name:
+ type: string
+ description: network 2
+ subnet_2_name:
+ type: string
+ description: subnet 2 name
+ subnet_2_cidr:
+ type: string
+    description: subnet 2 cidr
+
+ secgroup_name:
+ type: string
+ description: security group name
+ secgroup_descr:
+ type: string
+ description: security group slogan
+
+ instance_1_name:
+ type: string
+ description: instance name
+ instance_2_name:
+ type: string
+ description: instance name
+ instance_3_name:
+ type: string
+ description: instance name
+ instance_4_name:
+ type: string
+ description: instance name
+ instance_5_name:
+ type: string
+ description: instance name
+
+ ping_count:
+ type: string
+ description: ping count for user data script
+ default: 10
+
+resources:
+ net_1:
+ type: OS::Neutron::Net
+ properties:
+ name: { get_param: net_1_name }
+ subnet_1:
+ type: OS::Neutron::Subnet
+ properties:
+ name: { get_param: subnet_1_name }
+ network: { get_resource: net_1 }
+ cidr: { get_param: subnet_1_cidr }
+ net_2:
+ type: OS::Neutron::Net
+ properties:
+ name: { get_param: net_2_name }
+ subnet_2:
+ type: OS::Neutron::Subnet
+ properties:
+ name: { get_param: subnet_2_name }
+ network: { get_resource: net_2 }
+ cidr: { get_param: subnet_2_cidr }
+
+ sec_group:
+ type: OS::Neutron::SecurityGroup
+ properties:
+ name: { get_param: secgroup_name }
+ description: { get_param: secgroup_descr }
+ rules:
+ - protocol: icmp
+ remote_ip_prefix: 0.0.0.0/0
+ - protocol: tcp
+ port_range_min: 22
+ port_range_max: 22
+ remote_ip_prefix: 0.0.0.0/0
+
+ vm1:
+ type: OS::Nova::Server
+ depends_on: [ vm2, vm3, vm4, vm5 ]
+ properties:
+ name: { get_param: instance_1_name }
+ image: { get_param: image_n }
+ flavor: { get_param: flavor }
+ availability_zone: { get_param: av_zone_1 }
+ security_groups:
+ - { get_resource: sec_group }
+ networks:
+ - subnet: { get_resource: subnet_1 }
+ config_drive: True
+ user_data_format: RAW
+ user_data:
+ str_replace:
+ template: |
+ #!/bin/sh
+ set $IP_VM2 $IP_VM3 $IP_VM4 $IP_VM5
+ while true; do
+ for i do
+ ip=$i
+ ping -c $COUNT $ip 2>&1 >/dev/null
+ RES=$?
+ if [ \"Z$RES\" = \"Z0\" ] ; then
+ echo ping $ip OK
+ else echo ping $ip KO
+ fi
+ done
+ sleep 1
+ done
+ params:
+ $IP_VM2: { get_attr: [vm2, addresses, { get_resource: net_1}, 0, addr] }
+ $IP_VM3: { get_attr: [vm3, addresses, { get_resource: net_1}, 0, addr] }
+ $IP_VM4: { get_attr: [vm4, addresses, { get_resource: net_2}, 0, addr] }
+ $IP_VM5: { get_attr: [vm5, addresses, { get_resource: net_2}, 0, addr] }
+ $COUNT: { get_param: ping_count }
+ vm2:
+ type: OS::Nova::Server
+ properties:
+ name: { get_param: instance_2_name }
+ image: { get_param: image_n }
+ flavor: { get_param: flavor }
+ availability_zone: { get_param: av_zone_1 }
+ security_groups:
+ - { get_resource: sec_group }
+ networks:
+ - subnet: { get_resource: subnet_1 }
+ vm3:
+ type: OS::Nova::Server
+ properties:
+ name: { get_param: instance_3_name }
+ image: { get_param: image_n }
+ flavor: { get_param: flavor }
+ availability_zone: { get_param: av_zone_2 }
+ security_groups:
+ - { get_resource: sec_group }
+ networks:
+ - subnet: { get_resource: subnet_1 }
+ vm4:
+ type: OS::Nova::Server
+ depends_on: vm5
+ properties:
+ name: { get_param: instance_4_name }
+ image: { get_param: image_n }
+ flavor: { get_param: flavor }
+ availability_zone: { get_param: av_zone_1 }
+ security_groups:
+ - { get_resource: sec_group }
+ networks:
+ - subnet: { get_resource: subnet_2 }
+ config_drive: True
+ user_data_format: RAW
+ user_data:
+ str_replace:
+ template: |
+ #!/bin/sh
+ set $IP_VM5
+ while true; do
+ for i do
+ ip=$i
+ ping -c $COUNT $ip 2>&1 >/dev/null
+ RES=$?
+ if [ \"Z$RES\" = \"Z0\" ] ; then
+ echo ping $ip OK
+ else echo ping $ip KO
+ fi
+ done
+ sleep 1
+ done
+ params:
+ $IP_VM5: { get_attr: [vm5, addresses, { get_resource: net_2}, 0, addr] }
+ $COUNT: { get_param: ping_count }
+
+ vm5:
+ type: OS::Nova::Server
+ properties:
+ name: { get_param: instance_5_name }
+ image: { get_param: image_n }
+ flavor: { get_param: flavor }
+ availability_zone: { get_param: av_zone_2 }
+ security_groups:
+ - { get_resource: sec_group }
+ networks:
+ - subnet: { get_resource: subnet_2 }
+
+outputs:
+ net_1_o:
+ description: the id of network 1
+ value: { get_attr: [net_1, show, id] }
+ net_2_o:
+ description: the id of network 2
+ value: { get_attr: [net_2, show, id] }
+ vm1_o:
+ description: the deployed vm resource
+ value: { get_attr: [vm1, show, name] }
+ vm2_o:
+ description: the deployed vm resource
+ value: { get_attr: [vm2, show, name] }
+ vm3_o:
+ description: the deployed vm resource
+ value: { get_attr: [vm3, show, name] }
+ vm4_o:
+ description: the deployed vm resource
+ value: { get_attr: [vm4, show, name] }
+ vm5_o:
+ description: the deployed vm resource
+ value: { get_attr: [vm5, show, name] }
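For orientation outside the functest harness, the template above can also be
launched directly with the openstacksdk cloud layer. A minimal sketch, assuming
OS_* credentials are exported; every parameter value here is illustrative
(sdnvpn/test/functest/config.yaml is the authoritative source in the suite):
::

    import openstack

    conn = openstack.connect()  # reads OS_* credentials from the environment

    # Illustrative values only; the functest code fills these from config.yaml.
    stack = conn.create_stack(
        'stack-1bis',
        template_file='sdnvpn/artifacts/testcase_1bis.yaml',
        wait=True,
        flavor='m1.small',
        image_n='sdnvpn-image',
        av_zone_1='nova',
        av_zone_2='nova',
        net_1_name='sdnvpn-net-1',
        subnet_1_name='sdnvpn-subnet-1',
        subnet_1_cidr='10.10.10.0/24',
        net_2_name='sdnvpn-net-2',
        subnet_2_name='sdnvpn-subnet-2',
        subnet_2_cidr='10.10.11.0/24',
        secgroup_name='sdnvpn-sg',
        secgroup_descr='Security group for SDNVPN testcase 1bis',
        instance_1_name='sdnvpn-1-1',
        instance_2_name='sdnvpn-1-2',
        instance_3_name='sdnvpn-1-3',
        instance_4_name='sdnvpn-1-4',
        instance_5_name='sdnvpn-1-5')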
diff --git a/sdnvpn/artifacts/testcase_2bis.yaml b/sdnvpn/artifacts/testcase_2bis.yaml
new file mode 100644
index 0000000..0319a6d
--- /dev/null
+++ b/sdnvpn/artifacts/testcase_2bis.yaml
@@ -0,0 +1,289 @@
+heat_template_version: 2013-05-23
+
+description: >
+ Template for SDNVPN testcase 2
+ tenant separation
+
+parameters:
+ flavor:
+ type: string
+ description: flavor for the servers to be created
+ constraints:
+ - custom_constraint: nova.flavor
+ image_n:
+ type: string
+ description: image for the servers to be created
+ constraints:
+ - custom_constraint: glance.image
+ av_zone_1:
+ type: string
+ description: availability zone 1
+ id_rsa_key:
+ type: string
+ description: id_rsa file contents for the vms
+
+ net_1_name:
+ type: string
+ description: network 1
+ subnet_1a_name:
+ type: string
+ description: subnet 1a name
+ subnet_1a_cidr:
+ type: string
+ description: subnet 1a cidr
+ subnet_1b_name:
+ type: string
+ description: subnet 1b name
+ subnet_1b_cidr:
+ type: string
+ description: subnet 1b cidr
+ router_1_name:
+ type: string
+ description: router 1 name
+ net_2_name:
+ type: string
+ description: network 2
+ subnet_2a_name:
+ type: string
+ description: subnet 2a name
+ subnet_2a_cidr:
+ type: string
+ description: subnet 2a cidr
+ subnet_2b_name:
+ type: string
+ description: subnet 2b name
+ subnet_2b_cidr:
+ type: string
+ description: subnet 2b cidr
+ router_2_name:
+ type: string
+ description: router 2 name
+
+ secgroup_name:
+ type: string
+ description: security group name
+ secgroup_descr:
+ type: string
+ description: security group slogan
+
+ instance_1_name:
+ type: string
+ description: instance name
+ instance_2_name:
+ type: string
+ description: instance name
+ instance_3_name:
+ type: string
+ description: instance name
+ instance_4_name:
+ type: string
+ description: instance name
+ instance_5_name:
+ type: string
+ description: instance name
+
+ instance_1_ip:
+ type: string
+ description: instance fixed ip
+ instance_2_ip:
+ type: string
+ description: instance fixed ip
+ instance_3_ip:
+ type: string
+ description: instance fixed ip
+ instance_4_ip:
+ type: string
+ description: instance fixed ip
+ instance_5_ip:
+ type: string
+ description: instance fixed ip
+
+resources:
+ net_1:
+ type: OS::Neutron::Net
+ properties:
+ name: { get_param: net_1_name }
+ subnet_1a:
+ type: OS::Neutron::Subnet
+ properties:
+ name: { get_param: subnet_1a_name }
+ network: { get_resource: net_1 }
+ cidr: { get_param: subnet_1a_cidr }
+ net_2:
+ type: OS::Neutron::Net
+ properties:
+ name: { get_param: net_2_name }
+ subnet_2b:
+ type: OS::Neutron::Subnet
+ properties:
+ name: { get_param: subnet_2b_name }
+ network: { get_resource: net_2 }
+ cidr: { get_param: subnet_2b_cidr }
+
+ sec_group:
+ type: OS::Neutron::SecurityGroup
+ properties:
+ name: { get_param: secgroup_name }
+ description: { get_param: secgroup_descr }
+ rules:
+ - protocol: icmp
+ remote_ip_prefix: 0.0.0.0/0
+ - protocol: tcp
+ port_range_min: 22
+ port_range_max: 22
+ remote_ip_prefix: 0.0.0.0/0
+
+ vm1:
+ type: OS::Nova::Server
+ depends_on: [ vm2, vm4 ]
+ properties:
+ name: { get_param: instance_1_name }
+ image: { get_param: image_n }
+ flavor: { get_param: flavor }
+ availability_zone: { get_param: av_zone_1 }
+ security_groups:
+ - { get_resource: sec_group }
+ networks:
+ - network: { get_resource: net_1 }
+ fixed_ip: { get_param: instance_1_ip }
+ user_data_format: RAW
+ user_data:
+ str_replace:
+ template: |
+ #!/bin/sh
+ sudo mkdir -p /home/cirros/.ssh/
+ sudo chown cirros:cirros /home/cirros/.ssh/
+ sudo echo $ID_RSA > /home/cirros/.ssh/id_rsa.enc
+ sudo base64 -d /home/cirros/.ssh/id_rsa.enc > /home/cirros/.ssh/id_rsa
+ sudo chown cirros:cirros /home/cirros/.ssh/id_rsa
+ sudo echo $AUTH_KEYS > /home/cirros/.ssh/authorized_keys
+ sudo chown cirros:cirros /home/cirros/.ssh/authorized_keys
+ chmod 700 /home/cirros/.ssh
+ chmod 644 /home/cirros/.ssh/authorized_keys
+ chmod 600 /home/cirros/.ssh/id_rsa
+ echo gocubsgo > cirros_passwd
+ set $IP_VM2 $IP_VM4
+ echo will try to ssh to $IP_VM2 and $IP_VM4
+ while true; do
+ for i do
+ ip=$i
+ hostname=$(ssh -y -i /home/cirros/.ssh/id_rsa cirros@$ip 'hostname' </dev/zero 2>/dev/null)
+ RES=$?
+ echo $RES
+ if [ \"Z$RES\" = \"Z0\" ]; then echo $ip $hostname;
+ else echo $ip 'not reachable';fi;
+ done
+ sleep 1
+ done
+ params:
+ $IP_VM2: { get_param: instance_2_ip }
+ $IP_VM4: { get_param: instance_4_ip }
+ $ID_RSA: { get_param: id_rsa_key }
+ $AUTH_KEYS: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgnWtSS98Am516e\
+ stBsq0jbyOB4eLMUYDdgzsUHsnxFQCtACwwAg9/2uq3FoGUBUWeHZNsT6jcK9\
+ sCMEYiS479CUCzbrxcd8XaIlK38HECcDVglgBNwNzX/WDfMejXpKzZG61s98rU\
+ ElNvZ0YDqhaqZGqxIV4ejalqLjYrQkoly3R+2k= cirros@test1"
+ vm2:
+ type: OS::Nova::Server
+ properties:
+ name: { get_param: instance_2_name }
+ image: { get_param: image_n }
+ flavor: { get_param: flavor }
+ availability_zone: { get_param: av_zone_1 }
+ security_groups:
+ - { get_resource: sec_group }
+ networks:
+ - network: { get_resource: net_1 }
+ fixed_ip: { get_param: instance_2_ip }
+ user_data_format: RAW
+ user_data:
+ str_replace:
+ template: |
+ #!/bin/sh
+ sudo mkdir -p /home/cirros/.ssh/
+ sudo chown cirros:cirros /home/cirros/.ssh/
+ sudo echo $ID_RSA > /home/cirros/.ssh/id_rsa.enc
+ sudo base64 -d /home/cirros/.ssh/id_rsa.enc > /home/cirros/.ssh/id_rsa
+ sudo chown cirros:cirros /home/cirros/.ssh/id_rsa
+ sudo echo $AUTH_KEYS > /home/cirros/.ssh/authorized_keys
+ sudo chown cirros:cirros /home/cirros/.ssh/authorized_keys
+ chmod 700 /home/cirros/.ssh
+ chmod 644 /home/cirros/.ssh/authorized_keys
+ chmod 600 /home/cirros/.ssh/id_rsa
+ params:
+ $ID_RSA: { get_param: id_rsa_key }
+ $AUTH_KEYS: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgnWtSS98Am516e\
+ stBsq0jbyOB4eLMUYDdgzsUHsnxFQCtACwwAg9/2uq3FoGUBUWeHZNsT6jcK9\
+ sCMEYiS479CUCzbrxcd8XaIlK38HECcDVglgBNwNzX/WDfMejXpKzZG61s98rU\
+ ElNvZ0YDqhaqZGqxIV4ejalqLjYrQkoly3R+2k= cirros@test1"
+ vm4:
+ type: OS::Nova::Server
+ depends_on: vm2
+ properties:
+ name: { get_param: instance_4_name }
+ image: { get_param: image_n }
+ flavor: { get_param: flavor }
+ availability_zone: { get_param: av_zone_1 }
+ security_groups:
+ - { get_resource: sec_group }
+ networks:
+ - network: { get_resource: net_2 }
+ fixed_ip: { get_param: instance_4_ip }
+ user_data_format: RAW
+ user_data:
+ str_replace:
+ template: |
+ #!/bin/sh
+ sudo mkdir -p /home/cirros/.ssh/
+ sudo chown cirros:cirros /home/cirros/.ssh/
+ sudo echo $ID_RSA > /home/cirros/.ssh/id_rsa.enc
+ sudo base64 -d /home/cirros/.ssh/id_rsa.enc > /home/cirros/.ssh/id_rsa
+ sudo chown cirros:cirros /home/cirros/.ssh/id_rsa
+ sudo echo $AUTH_KEYS > /home/cirros/.ssh/authorized_keys
+ sudo chown cirros:cirros /home/cirros/.ssh/authorized_keys
+ chmod 700 /home/cirros/.ssh
+ chmod 644 /home/cirros/.ssh/authorized_keys
+ chmod 600 /home/cirros/.ssh/id_rsa
+ set $IP_VM1
+ echo will try to ssh to $IP_VM1
+ while true; do
+ for i do
+ ip=$i
+ hostname=$(ssh -y -i /home/cirros/.ssh/id_rsa cirros@$ip 'hostname' </dev/zero 2>/dev/null)
+ RES=$?
+ if [ \"Z$RES\" = \"Z0\" ]; then echo $ip $hostname;
+ else echo $ip 'not reachable';fi;
+ done
+ sleep 1
+ done
+ params:
+ $IP_VM1: { get_param: instance_1_ip }
+ $ID_RSA: { get_param: id_rsa_key }
+ $AUTH_KEYS: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgnWtSS98Am516e\
+ stBsq0jbyOB4eLMUYDdgzsUHsnxFQCtACwwAg9/2uq3FoGUBUWeHZNsT6jcK9\
+ sCMEYiS479CUCzbrxcd8XaIlK38HECcDVglgBNwNzX/WDfMejXpKzZG61s98rU\
+ ElNvZ0YDqhaqZGqxIV4ejalqLjYrQkoly3R+2k= cirros@test1"
+ $DROPBEAR_PASSWORD: gocubsgo
+outputs:
+ net_1_o:
+ description: the id of network 1
+ value: { get_attr: [net_1, show, id] }
+ net_2_o:
+ description: the id of network 2
+ value: { get_attr: [net_2, show, id] }
+
+ vm1_o:
+ description: the deployed vm resource
+ value: { get_attr: [vm1, show, name] }
+ vm2_o:
+ description: the deployed vm resource
+ value: { get_attr: [vm2, show, name] }
+ vm3_o:
+ description: dummy
+ value: { get_attr: [vm2, show, name] }
+ vm4_o:
+ description: the deployed vm resource
+ value: { get_attr: [vm4, show, name] }
+ vm5_o:
+ description: dummy
+ value: { get_attr: [vm2, show, name] }
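Note how the guests in this template bootstrap their ssh identity: the
id_rsa_key parameter must carry the private key base64-encoded, since the
user_data above decodes it with `base64 -d` before writing id_rsa. A sketch of
producing that parameter value (the key path is hypothetical):
::

    import base64

    # Hypothetical key location; the test code manages its own keypair.
    with open('/tmp/sdnvpn_id_rsa', 'rb') as f:
        id_rsa_key = base64.b64encode(f.read()).decode('ascii')

    heat_parameters = {'id_rsa_key': id_rsa_key}  # merged with other params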
diff --git a/sdnvpn/artifacts/testcase_4bis.yaml b/sdnvpn/artifacts/testcase_4bis.yaml
new file mode 100644
index 0000000..ee59e1d
--- /dev/null
+++ b/sdnvpn/artifacts/testcase_4bis.yaml
@@ -0,0 +1,247 @@
+heat_template_version: 2013-05-23
+
+description: >
+ Template for SDNVPN testcase 4
+ VPN provides connectivity between subnets using router association
+
+parameters:
+ flavor:
+ type: string
+ description: flavor for the servers to be created
+ constraints:
+ - custom_constraint: nova.flavor
+ image_n:
+ type: string
+ description: image for the servers to be created
+ constraints:
+ - custom_constraint: glance.image
+ av_zone_1:
+ type: string
+ description: availability zone 1
+ av_zone_2:
+ type: string
+ description: availability zone 2
+
+ net_1_name:
+ type: string
+ description: network 1
+ subnet_1_name:
+ type: string
+ description: subnet 1 name
+ subnet_1_cidr:
+ type: string
+ description: subnet 1 cidr
+ router_1_name:
+ type: string
+    description: router 1 name
+ net_2_name:
+ type: string
+ description: network 2
+ subnet_2_name:
+ type: string
+ description: subnet 2 name
+ subnet_2_cidr:
+ type: string
+    description: subnet 2 cidr
+
+ secgroup_name:
+ type: string
+ description: security group name
+ secgroup_descr:
+ type: string
+ description: security group slogan
+
+ instance_1_name:
+ type: string
+ description: instance name
+ instance_2_name:
+ type: string
+ description: instance name
+ instance_3_name:
+ type: string
+ description: instance name
+ instance_4_name:
+ type: string
+ description: instance name
+ instance_5_name:
+ type: string
+ description: instance name
+
+ ping_count:
+ type: string
+ description: ping count for user data script
+ default: 10
+
+resources:
+ net_1:
+ type: OS::Neutron::Net
+ properties:
+ name: { get_param: net_1_name }
+ subnet_1:
+ type: OS::Neutron::Subnet
+ properties:
+ name: { get_param: subnet_1_name }
+ network: { get_resource: net_1 }
+ cidr: { get_param: subnet_1_cidr }
+ router_1:
+ type: OS::Neutron::Router
+ properties:
+ name: { get_param: router_1_name }
+ routerinterface_1:
+ type: OS::Neutron::RouterInterface
+ properties:
+ router_id: { get_resource: router_1 }
+ subnet_id: { get_resource: subnet_1 }
+
+ net_2:
+ type: OS::Neutron::Net
+ properties:
+ name: { get_param: net_2_name }
+ subnet_2:
+ type: OS::Neutron::Subnet
+ properties:
+ name: { get_param: subnet_2_name }
+ network: { get_resource: net_2 }
+ cidr: { get_param: subnet_2_cidr }
+
+ sec_group:
+ type: OS::Neutron::SecurityGroup
+ properties:
+ name: { get_param: secgroup_name }
+ description: { get_param: secgroup_descr }
+ rules:
+ - protocol: icmp
+ remote_ip_prefix: 0.0.0.0/0
+ - protocol: tcp
+ port_range_min: 22
+ port_range_max: 22
+ remote_ip_prefix: 0.0.0.0/0
+
+ vm1:
+ type: OS::Nova::Server
+ depends_on: [ vm2, vm3, vm4, vm5 ]
+ properties:
+ name: { get_param: instance_1_name }
+ image: { get_param: image_n }
+ flavor: { get_param: flavor }
+ availability_zone: { get_param: av_zone_1 }
+ security_groups:
+ - { get_resource: sec_group }
+ networks:
+ - subnet: { get_resource: subnet_1 }
+ config_drive: True
+ user_data_format: RAW
+ user_data:
+ str_replace:
+ template: |
+ #!/bin/sh
+ set $IP_VM2 $IP_VM3 $IP_VM4 $IP_VM5
+ while true; do
+ for i do
+ ip=$i
+ ping -c $COUNT $ip 2>&1 >/dev/null
+ RES=$?
+ if [ \"Z$RES\" = \"Z0\" ] ; then
+ echo ping $ip OK
+ else echo ping $ip KO
+ fi
+ done
+ sleep 1
+ done
+ params:
+ $IP_VM2: { get_attr: [vm2, addresses, { get_resource: net_1}, 0, addr] }
+ $IP_VM3: { get_attr: [vm3, addresses, { get_resource: net_1}, 0, addr] }
+ $IP_VM4: { get_attr: [vm4, addresses, { get_resource: net_2}, 0, addr] }
+ $IP_VM5: { get_attr: [vm5, addresses, { get_resource: net_2}, 0, addr] }
+ $COUNT: { get_param: ping_count }
+ vm2:
+ type: OS::Nova::Server
+ properties:
+ name: { get_param: instance_2_name }
+ image: { get_param: image_n }
+ flavor: { get_param: flavor }
+ availability_zone: { get_param: av_zone_1 }
+ security_groups:
+ - { get_resource: sec_group }
+ networks:
+ - subnet: { get_resource: subnet_1 }
+ vm3:
+ type: OS::Nova::Server
+ properties:
+ name: { get_param: instance_3_name }
+ image: { get_param: image_n }
+ flavor: { get_param: flavor }
+ availability_zone: { get_param: av_zone_2 }
+ security_groups:
+ - { get_resource: sec_group }
+ networks:
+ - subnet: { get_resource: subnet_1 }
+ vm4:
+ type: OS::Nova::Server
+ depends_on: vm5
+ properties:
+ name: { get_param: instance_4_name }
+ image: { get_param: image_n }
+ flavor: { get_param: flavor }
+ availability_zone: { get_param: av_zone_1 }
+ security_groups:
+ - { get_resource: sec_group }
+ networks:
+ - subnet: { get_resource: subnet_2 }
+ config_drive: True
+ user_data_format: RAW
+ user_data:
+ str_replace:
+ template: |
+ #!/bin/sh
+ set $IP_VM5
+ while true; do
+ for i do
+ ip=$i
+ ping -c $COUNT $ip 2>&1 >/dev/null
+ RES=$?
+ if [ \"Z$RES\" = \"Z0\" ] ; then
+ echo ping $ip OK
+ else echo ping $ip KO
+ fi
+ done
+ sleep 1
+ done
+ params:
+ $IP_VM5: { get_attr: [vm5, addresses, { get_resource: net_2}, 0, addr] }
+ $COUNT: { get_param: ping_count }
+
+ vm5:
+ type: OS::Nova::Server
+ properties:
+ name: { get_param: instance_5_name }
+ image: { get_param: image_n }
+ flavor: { get_param: flavor }
+ availability_zone: { get_param: av_zone_2 }
+ security_groups:
+ - { get_resource: sec_group }
+ networks:
+ - subnet: { get_resource: subnet_2 }
+
+outputs:
+ router_1_o:
+    description: the id of router 1
+ value: { get_attr: [router_1, show, id] }
+ net_2_o:
+ description: the id of network 2
+ value: { get_attr: [net_2, show, id] }
+ vm1_o:
+ description: the deployed vm resource
+ value: { get_attr: [vm1, show, name] }
+ vm2_o:
+ description: the deployed vm resource
+ value: { get_attr: [vm2, show, name] }
+ vm3_o:
+ description: the deployed vm resource
+ value: { get_attr: [vm3, show, name] }
+ vm4_o:
+ description: the deployed vm resource
+ value: { get_attr: [vm4, show, name] }
+ vm5_o:
+ description: the deployed vm resource
+ value: { get_attr: [vm5, show, name] }
diff --git a/sdnvpn/artifacts/testcase_8bis.yaml b/sdnvpn/artifacts/testcase_8bis.yaml
new file mode 100644
index 0000000..94853c3
--- /dev/null
+++ b/sdnvpn/artifacts/testcase_8bis.yaml
@@ -0,0 +1,173 @@
+heat_template_version: 2013-05-23
+
+description: >
+ Template for SDNVPN testcase 8
+ Test floating IP and router assoc coexistence
+
+parameters:
+ flavor:
+ type: string
+ description: flavor for the servers to be created
+ constraints:
+ - custom_constraint: nova.flavor
+ image_n:
+ type: string
+ description: image for the servers to be created
+ constraints:
+ - custom_constraint: glance.image
+ av_zone_1:
+ type: string
+ description: availability zone 1
+
+ external_nw:
+ type: string
+ description: the external network
+ net_1_name:
+ type: string
+ description: network 1
+ subnet_1_name:
+ type: string
+ description: subnet 1 name
+ subnet_1_cidr:
+ type: string
+ description: subnet 1 cidr
+ router_1_name:
+ type: string
+    description: router 1 name
+ net_2_name:
+ type: string
+ description: network 2
+ subnet_2_name:
+ type: string
+ description: subnet 2 name
+ subnet_2_cidr:
+ type: string
+    description: subnet 2 cidr
+
+ secgroup_name:
+ type: string
+ description: security group name
+ secgroup_descr:
+ type: string
+ description: security group slogan
+
+ instance_1_name:
+ type: string
+ description: instance name
+ instance_2_name:
+ type: string
+ description: instance name
+
+ ping_count:
+ type: string
+ description: ping count for user data script
+ default: 10
+
+resources:
+ router_1:
+ type: OS::Neutron::Router
+ properties:
+ name: { get_param: router_1_name }
+ external_gateway_info:
+ network: { get_param: external_nw }
+
+ net_1:
+ type: OS::Neutron::Net
+ properties:
+ name: { get_param: net_1_name }
+ subnet_1:
+ type: OS::Neutron::Subnet
+ properties:
+ name: { get_param: subnet_1_name }
+ network: { get_resource: net_1 }
+ cidr: { get_param: subnet_1_cidr }
+ routerinterface_1:
+ type: OS::Neutron::RouterInterface
+ properties:
+ router_id: { get_resource: router_1 }
+ subnet_id: { get_resource: subnet_1 }
+
+ net_2:
+ type: OS::Neutron::Net
+ properties:
+ name: { get_param: net_2_name }
+ subnet_2:
+ type: OS::Neutron::Subnet
+ properties:
+ name: { get_param: subnet_2_name }
+ network: { get_resource: net_2 }
+ cidr: { get_param: subnet_2_cidr }
+
+ sec_group:
+ type: OS::Neutron::SecurityGroup
+ properties:
+ name: { get_param: secgroup_name }
+ description: { get_param: secgroup_descr }
+ rules:
+ - protocol: icmp
+ remote_ip_prefix: 0.0.0.0/0
+ - protocol: tcp
+ port_range_min: 22
+ port_range_max: 22
+ remote_ip_prefix: 0.0.0.0/0
+
+ vm1:
+ type: OS::Nova::Server
+ depends_on: [ vm2 ]
+ properties:
+ name: { get_param: instance_1_name }
+ image: { get_param: image_n }
+ flavor: { get_param: flavor }
+ availability_zone: { get_param: av_zone_1 }
+ security_groups:
+ - { get_resource: sec_group }
+ networks:
+ - subnet: { get_resource: subnet_1 }
+ config_drive: True
+ user_data_format: RAW
+ user_data:
+ str_replace:
+ template: |
+ #!/bin/sh
+ set $IP_VM2
+ while true; do
+ for i do
+ ip=$i
+ ping -c $COUNT $ip 2>&1 >/dev/null
+ RES=$?
+ if [ \"Z$RES\" = \"Z0\" ] ; then
+ echo ping $ip OK
+ else echo ping $ip KO
+ fi
+ done
+ sleep 1
+ done
+ params:
+ $IP_VM2: { get_attr: [vm2, addresses, { get_resource: net_1}, 0, addr] }
+ $COUNT: { get_param: ping_count }
+ vm2:
+ type: OS::Nova::Server
+ properties:
+ name: { get_param: instance_2_name }
+ image: { get_param: image_n }
+ flavor: { get_param: flavor }
+ availability_zone: { get_param: av_zone_1 }
+ security_groups:
+ - { get_resource: sec_group }
+ networks:
+ - subnet: { get_resource: subnet_2 }
+
+
+outputs:
+ router_1_o:
+    description: the id of router 1
+ value: { get_attr: [router_1, show, id] }
+ net_2_o:
+ description: the id of network 2
+ value: { get_attr: [net_2, show, id] }
+ vm1_o:
+ description: the deployed vm resource
+ value: { get_attr: [vm1, show, name] }
+ vm2_o:
+ description: the deployed vm resource
+ value: { get_attr: [vm2, show, name] }
diff --git a/sdnvpn/artifacts/testcase_8bis_upd.yaml b/sdnvpn/artifacts/testcase_8bis_upd.yaml
new file mode 100644
index 0000000..4661e8a
--- /dev/null
+++ b/sdnvpn/artifacts/testcase_8bis_upd.yaml
@@ -0,0 +1,17 @@
+heat_template_version: 2013-05-23
+
+resources:
+ fip_1:
+ type: OS::Neutron::FloatingIP
+ properties:
+ floating_network: { get_param: external_nw }
+ fip_1_assoc:
+ type: OS::Neutron::FloatingIPAssociation
+ properties:
+ floatingip_id: { get_resource: fip_1 }
+ port_id: {get_attr: [vm1, addresses, {get_resource: net_1}, 0, port]}
+
+outputs:
+ fip_1_o:
+ description: the floating IP for vm1
+ value: { get_attr: [fip_1, show, floating_ip_address] }
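This fragment is not a standalone template: it refers to external_nw, vm1 and
net_1 from testcase_8bis.yaml and is meant to be merged into that template,
after which the existing stack is updated in place. A sketch of that flow,
using the merge_yaml, update_stack and wait_stack_for_status helpers
introduced later in this change (the update_stack kwargs are an assumption
mirroring the Heat update API, and the stack id is a placeholder):
::

    import openstack
    import yaml

    from sdnvpn.lib import openstack_utils as os_utils
    from sdnvpn.lib import utils as test_utils

    conn = openstack.connect()
    stack_id = 'example-stack-id'  # returned by the initial create_stack call

    with open('sdnvpn/artifacts/testcase_8bis.yaml') as base:
        with open('sdnvpn/artifacts/testcase_8bis_upd.yaml') as upd:
            merged = test_utils.merge_yaml(base.read(), upd.read())

    # Assumed kwargs: the Heat update API accepts a new template body.
    if os_utils.update_stack(conn, stack_id, template=yaml.safe_load(merged)):
        test_utils.wait_stack_for_status(conn, stack_id, 'UPDATE_COMPLETE')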
diff --git a/sdnvpn/lib/openstack_utils.py b/sdnvpn/lib/openstack_utils.py
index f0b9ba7..5fc1e49 100644
--- a/sdnvpn/lib/openstack_utils.py
+++ b/sdnvpn/lib/openstack_utils.py
@@ -18,19 +18,17 @@ import urllib
from keystoneauth1 import loading
from keystoneauth1 import session
-from cinderclient import client as cinderclient
-from heatclient import client as heatclient
from keystoneclient import client as keystoneclient
from neutronclient.neutron import client as neutronclient
from openstack import connection
from openstack import cloud as os_cloud
+from openstack.exceptions import ResourceNotFound
from functest.utils import env
logger = logging.getLogger(__name__)
DEFAULT_API_VERSION = '2'
-DEFAULT_HEAT_API_VERSION = '1'
# *********************************************
@@ -165,20 +163,6 @@ def get_keystone_client(other_creds={}):
interface=os.getenv('OS_INTERFACE', 'admin'))
-def get_cinder_client_version():
- api_version = os.getenv('OS_VOLUME_API_VERSION')
- if api_version is not None:
- logger.info("OS_VOLUME_API_VERSION is set in env as '%s'",
- api_version)
- return api_version
- return DEFAULT_API_VERSION
-
-
-def get_cinder_client(other_creds={}):
- sess = get_session(other_creds)
- return cinderclient.Client(get_cinder_client_version(), session=sess)
-
-
def get_neutron_client_version():
api_version = os.getenv('OS_NETWORK_API_VERSION')
if api_version is not None:
@@ -193,20 +177,6 @@ def get_neutron_client(other_creds={}):
return neutronclient.Client(get_neutron_client_version(), session=sess)
-def get_heat_client_version():
- api_version = os.getenv('OS_ORCHESTRATION_API_VERSION')
- if api_version is not None:
- logger.info("OS_ORCHESTRATION_API_VERSION is set in env as '%s'",
- api_version)
- return api_version
- return DEFAULT_HEAT_API_VERSION
-
-
-def get_heat_client(other_creds={}):
- sess = get_session(other_creds)
- return heatclient.Client(get_heat_client_version(), session=sess)
-
-
def download_url(url, dest_path):
"""
Download a file to a destination path given a URL
@@ -1434,10 +1404,52 @@ def delete_user(keystone_client, user_id):
# *********************************************
# HEAT
# *********************************************
-def get_resource(heat_client, stack_id, resource):
+def create_stack(conn, **kwargs):
+ try:
+ stack = conn.orchestration.create_stack(**kwargs)
+ stack_id = stack.id
+ if stack_id is None:
+ logger.error("Stack create start failed")
+ raise SystemError("Stack create start failed")
+ return stack_id
+ except Exception as e:
+ logger.error("Error [create_stack(orchestration)]: %s" % e)
+ return None
+
+
+def update_stack(conn, stack_id, **kwargs):
+ try:
+ conn.orchestration.update_stack(stack_id, **kwargs)
+ return True
+ except Exception as e:
+ logger.error("Error [update_stack(orchestration)]: %s" % e)
+ return False
+
+
+def delete_stack(conn, stack_id):
try:
- resources = heat_client.resources.get(stack_id, resource)
- return resources
+ conn.orchestration.delete_stack(stack_id)
+ return True
except Exception as e:
- logger.error("Error [get_resource]: %s" % e)
+ logger.error("Error [delete_stack(orchestration)]: %s" % e)
+ return False
+
+
+def list_stacks(conn, **kwargs):
+ try:
+ result = conn.orchestration.stacks(**kwargs)
+ return result
+ except Exception as e:
+ logger.error("Error [list_stack(orchestration)]: %s" % e)
return None
+
+
+def get_output(conn, stack_id, output_key):
+ try:
+ stack = conn.orchestration.get_stack(stack_id)
+ for output in stack.outputs:
+ if output['output_key'] == output_key:
+ return output
+ except ResourceNotFound as e:
+ logger.error("Error [get_output(orchestration)]: %s" % e)
+ return None
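Together these helpers replace the removed heatclient-based get_resource. A
minimal usage sketch; the kwargs are forwarded verbatim to
conn.orchestration.create_stack(), and the inline empty template is just a
placeholder for this sketch:
::

    import openstack

    from sdnvpn.lib import openstack_utils as os_utils

    conn = openstack.connect()

    # Passing the template body as a kwarg is an assumption of this sketch;
    # the new testcase modules are the reference usage.
    stack_id = os_utils.create_stack(
        conn,
        name='sdnvpn-demo',
        template={'heat_template_version': '2013-05-23', 'resources': {}})
    if stack_id is not None:
        output = os_utils.get_output(conn, stack_id, 'vm1_o')  # None here
        os_utils.delete_stack(conn, stack_id)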
diff --git a/sdnvpn/lib/quagga.py b/sdnvpn/lib/quagga.py
index f11e188..6efd6a9 100644
--- a/sdnvpn/lib/quagga.py
+++ b/sdnvpn/lib/quagga.py
@@ -22,12 +22,12 @@ logger = logging.getLogger('sdnvpn-quagga')
COMMON_CONFIG = config.CommonConfig()
-def odl_add_neighbor(neighbor_ip, controller_ip, controller):
- # Explicitly pass controller_ip because controller.ip
+def odl_add_neighbor(neighbor_ip, odl_ip, odl_node):
+ # Explicitly pass odl_ip because odl_node.ip
# Might not be accessible from the Quagga instance
command = 'configure-bgp -op add-neighbor --as-num 200'
- command += ' --ip %s --use-source-ip %s' % (neighbor_ip, controller_ip)
- success = run_odl_cmd(controller, command)
+ command += ' --ip %s --use-source-ip %s' % (neighbor_ip, odl_ip)
+ success = run_odl_cmd(odl_node, command)
# The run_cmd api is really whimsical
logger.info("Maybe stdout of %s: %s", command, success)
return success
@@ -42,20 +42,20 @@ def bootstrap_quagga(fip_addr, controller_ip):
return rc == 0
-def gen_quagga_setup_script(controller_ip,
+def gen_quagga_setup_script(odl_ip,
fake_floating_ip,
ext_net_mask,
ip_prefix, rd, irt, ert):
with open(COMMON_CONFIG.quagga_setup_script_path) as f:
template = f.read()
- script = template.format(controller_ip,
+ script = template.format(odl_ip,
fake_floating_ip,
ext_net_mask,
ip_prefix, rd, irt, ert)
return script
-def check_for_peering(controller):
+def check_for_peering(odl_node):
cmd = 'show-bgp --cmd \\"ip bgp neighbors\\"'
tries = 20
neighbors = None
@@ -64,7 +64,7 @@ def check_for_peering(controller):
while tries > 0:
if neighbors and 'Established' in neighbors:
break
- neighbors = run_odl_cmd(controller, cmd)
+ neighbors = run_odl_cmd(odl_node, cmd)
logger.info("Output of %s: %s", cmd, neighbors)
if neighbors:
opens = opens_regex.search(neighbors)
diff --git a/sdnvpn/lib/utils.py b/sdnvpn/lib/utils.py
index 9a5e181..4c35edc 100644
--- a/sdnvpn/lib/utils.py
+++ b/sdnvpn/lib/utils.py
@@ -14,10 +14,12 @@ import time
import requests
import re
import subprocess
+import yaml
from concurrent.futures import ThreadPoolExecutor
-from openstack.exceptions import ResourceNotFound
+from openstack.exceptions import ResourceNotFound, NotFoundException
from requests.auth import HTTPBasicAuth
+from functest.utils import env
from opnfv.deployment.factory import Factory as DeploymentFactory
from sdnvpn.lib import config as sdnvpn_config
@@ -27,8 +29,10 @@ logger = logging.getLogger('sdnvpn_test_utils')
common_config = sdnvpn_config.CommonConfig()
-ODL_USER = 'admin'
-ODL_PASS = 'admin'
+ODL_USER = env.get('SDN_CONTROLLER_USER')
+ODL_PASSWORD = env.get('SDN_CONTROLLER_PASSWORD')
+ODL_IP = env.get('SDN_CONTROLLER_IP')
+ODL_PORT = env.get('SDN_CONTROLLER_RESTCONFPORT')
executor = ThreadPoolExecutor(5)
@@ -296,18 +300,16 @@ def get_installerHandler():
return None
else:
if installer_type in ["apex"]:
- developHandler = DeploymentFactory.get_handler(
- installer_type,
- installer_ip,
- 'root',
- pkey_file="/root/.ssh/id_rsa")
-
- if installer_type in ["fuel"]:
- developHandler = DeploymentFactory.get_handler(
- installer_type,
- installer_ip,
- 'root',
- 'r00tme')
+ installer_user = "root"
+ elif installer_type in ["fuel"]:
+ installer_user = "ubuntu"
+
+ developHandler = DeploymentFactory.get_handler(
+ installer_type,
+ installer_ip,
+ installer_user,
+ pkey_file="/root/.ssh/id_rsa")
+
return developHandler
@@ -535,17 +537,19 @@ def exec_cmd(cmd, verbose):
return output, success
-def check_odl_fib(ip, controller_ip):
+def check_odl_fib(ip):
"""Check that there is an entry in the ODL Fib for `ip`"""
- url = "http://" + controller_ip + \
- ":8181/restconf/config/odl-fib:fibEntries/"
+ url = ("http://{user}:{password}@{ip}:{port}/restconf/config/"
+ "odl-fib:fibEntries/".format(user=ODL_USER,
+ password=ODL_PASSWORD, ip=ODL_IP,
+ port=ODL_PORT))
logger.debug("Querring '%s' for FIB entries", url)
- res = requests.get(url, auth=(ODL_USER, ODL_PASS))
+ res = requests.get(url, auth=(ODL_USER, ODL_PASSWORD))
if res.status_code != 200:
logger.error("OpenDaylight response status code: %s", res.status_code)
return False
logger.debug("Checking whether '%s' is in the OpenDaylight FIB"
- % controller_ip)
+ % ODL_IP)
logger.debug("OpenDaylight FIB: \n%s" % res.text)
return ip in res.text
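For interactive debugging, the same FIB query can be reproduced outside the
test run; a sketch assuming the SDN_CONTROLLER_* variables are exported (the
fallback values below are placeholders, not defaults of the harness):
::

    import os

    import requests

    user = os.getenv('SDN_CONTROLLER_USER', 'admin')
    password = os.getenv('SDN_CONTROLLER_PASSWORD', 'admin')
    odl_ip = os.getenv('SDN_CONTROLLER_IP', '192.0.2.10')
    port = os.getenv('SDN_CONTROLLER_RESTCONFPORT', '8181')

    url = ('http://{ip}:{port}/restconf/config/'
           'odl-fib:fibEntries/'.format(ip=odl_ip, port=port))
    res = requests.get(url, auth=(user, password))
    print(res.status_code, '10.10.10.5' in res.text)  # example prefix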
@@ -597,34 +601,50 @@ def wait_for_cloud_init(conn, instance):
def attach_instance_to_ext_br(instance, compute_node):
libvirt_instance_name = instance.instance_name
installer_type = str(os.environ['INSTALLER_TYPE'].lower())
- if installer_type == "fuel":
+ # In Apex, br-ex (or br-floating for Fuel) is an ovs bridge and virsh
+ # attach-interface won't just work. We work around it by creating a linux
+ # bridge, attaching that to br-ex (or br-floating for Fuel) with a
+ # veth pair and virsh-attaching the instance to the linux-bridge
+ if installer_type in ["fuel"]:
+ bridge = "br-floating"
+ elif installer_type in ["apex"]:
bridge = "br-ex"
- elif installer_type == "apex":
- # In Apex, br-ex is an ovs bridge and virsh attach-interface
- # won't just work. We work around it by creating a linux
- # bridge, attaching that to br-ex with a veth pair
- # and virsh-attaching the instance to the linux-bridge
- bridge = "br-quagga"
- cmd = """
- set -e
- if ! sudo brctl show |grep -q ^{bridge};then
- sudo brctl addbr {bridge}
- sudo ip link set {bridge} up
- sudo ip link add quagga-tap type veth peer name ovs-quagga-tap
- sudo ip link set dev ovs-quagga-tap up
- sudo ip link set dev quagga-tap up
- sudo ovs-vsctl add-port br-ex ovs-quagga-tap
- sudo brctl addif {bridge} quagga-tap
- fi
- """
- compute_node.run_cmd(cmd.format(bridge=bridge))
+ else:
+ logger.warn("installer type %s is neither fuel nor apex."
+ % installer_type)
+ return
+
+ cmd = """
+ set -e
+ if ! sudo brctl show |grep -q ^br-quagga;then
+ sudo brctl addbr br-quagga
+ sudo ip link set br-quagga up
+ sudo ip link add quagga-tap type veth peer name ovs-quagga-tap
+ sudo ip link set dev ovs-quagga-tap up
+ sudo ip link set dev quagga-tap up
+ sudo ovs-vsctl add-port {bridge} ovs-quagga-tap
+ sudo brctl addif br-quagga quagga-tap
+ fi
+ """
+ compute_node.run_cmd(cmd.format(bridge=bridge))
compute_node.run_cmd("sudo virsh attach-interface %s"
- " bridge %s" % (libvirt_instance_name, bridge))
+ " bridge br-quagga" % (libvirt_instance_name))
def detach_instance_from_ext_br(instance, compute_node):
libvirt_instance_name = instance.instance_name
+ installer_type = str(os.environ['INSTALLER_TYPE'].lower())
+ # This function undoes all the actions performed by
+ # attach_instance_to_ext_br on Fuel and Apex installers.
+ if installer_type in ["fuel"]:
+ bridge = "br-floating"
+ elif installer_type in ["apex"]:
+ bridge = "br-ex"
+ else:
+ logger.warn("installer type %s is neither fuel nor apex."
+ % installer_type)
+ return
mac = compute_node.run_cmd("for vm in $(sudo virsh list | "
"grep running | awk '{print $2}'); "
"do echo -n ; sudo virsh dumpxml $vm| "
@@ -633,25 +653,16 @@ def detach_instance_from_ext_br(instance, compute_node):
" --type bridge --mac %s"
% (libvirt_instance_name, mac))
- installer_type = str(os.environ['INSTALLER_TYPE'].lower())
- if installer_type == "fuel":
- bridge = "br-ex"
- elif installer_type == "apex":
- # In Apex, br-ex is an ovs bridge and virsh attach-interface
- # won't just work. We work around it by creating a linux
- # bridge, attaching that to br-ex with a veth pair
- # and virsh-attaching the instance to the linux-bridge
- bridge = "br-quagga"
- cmd = """
- sudo brctl delif {bridge} quagga-tap &&
- sudo ovs-vsctl del-port br-ex ovs-quagga-tap &&
- sudo ip link set dev quagga-tap down &&
- sudo ip link set dev ovs-quagga-tap down &&
- sudo ip link del quagga-tap type veth peer name ovs-quagga-tap &&
- sudo ip link set {bridge} down &&
- sudo brctl delbr {bridge}
- """
- compute_node.run_cmd(cmd.format(bridge=bridge))
+ cmd = """
+ sudo brctl delif br-quagga quagga-tap &&
+ sudo ovs-vsctl del-port {bridge} ovs-quagga-tap &&
+ sudo ip link set dev quagga-tap down &&
+ sudo ip link set dev ovs-quagga-tap down &&
+ sudo ip link del quagga-tap type veth peer name ovs-quagga-tap &&
+ sudo ip link set br-quagga down &&
+ sudo brctl delbr br-quagga
+ """
+ compute_node.run_cmd(cmd.format(bridge=bridge))
def cleanup_neutron(conn, neutron_client, floatingip_ids, bgpvpn_ids,
@@ -790,6 +801,15 @@ def is_fail_mode_secure():
if not openstack_node.is_active():
continue
+ installer_type = str(os.environ['INSTALLER_TYPE'].lower())
+ if installer_type in ['fuel']:
+ if (
+ 'controller' in openstack_node.roles or
+ 'opendaylight' in openstack_node.roles or
+ 'installer' in openstack_node.roles
+ ):
+ continue
+
ovs_int_list = (openstack_node.run_cmd(get_ovs_int_cmd).
strip().split('\n'))
if 'br-int' in ovs_int_list:
@@ -909,25 +929,42 @@ def get_ovs_flows(compute_node_list, ovs_br_list, of_protocol="OpenFlow13"):
return cmd_out_lines
-def get_odl_bgp_entity_owner(controllers):
+def get_node_ip_and_netmask(node, iface):
+ cmd = "ip a | grep {iface} | grep inet | awk '{{print $2}}'"\
+ .format(iface=iface)
+ mgmt_net_cidr = node.run_cmd(cmd).strip().split('\n')
+ mgmt_ip = mgmt_net_cidr[0].split('/')[0]
+ mgmt_netmask = mgmt_net_cidr[0].split('/')[1]
+
+ return mgmt_ip, mgmt_netmask
+
+
+def get_odl_bgp_entity_owner(odl_nodes):
""" Finds the ODL owner of the BGP entity in the cluster.
When ODL runs in clustering mode we need to execute the BGP speaker
related commands to that ODL which is the owner of the BGP entity.
- :param controllers: list of OS controllers
- :return controller: OS controller in which ODL BGP entity owner runs
+ :param odl_nodes: list of Opendaylight nodes
+ :return odl_node: Opendaylight node in which ODL BGP entity owner runs
"""
- if len(controllers) == 1:
- return controllers[0]
+ if len(odl_nodes) == 1:
+ return odl_nodes[0]
else:
- url = ('http://admin:admin@{ip}:8081/restconf/'
+ url = ('http://{user}:{password}@{ip}:{port}/restconf/'
'operational/entity-owners:entity-owners/entity-type/bgp'
- .format(ip=controllers[0].ip))
+ .format(user=ODL_USER, password=ODL_PASSWORD, ip=ODL_IP,
+ port=ODL_PORT))
+
+ installer_type = str(os.environ['INSTALLER_TYPE'].lower())
+ if installer_type in ['apex']:
+ node_user = 'heat-admin'
+ elif installer_type in ['fuel']:
+ node_user = 'ubuntu'
remote_odl_akka_conf = ('/opt/opendaylight/configuration/'
'initial/akka.conf')
- remote_odl_home_akka_conf = '/home/heat-admin/akka.conf'
+ remote_odl_home_akka_conf = '/home/{0}/akka.conf'.format(node_user)
local_tmp_akka_conf = '/tmp/akka.conf'
try:
json_output = requests.get(url).json()
@@ -937,33 +974,43 @@ def get_odl_bgp_entity_owner(controllers):
return None
odl_bgp_owner = json_output['entity-type'][0]['entity'][0]['owner']
- for controller in controllers:
-
- controller.run_cmd('sudo cp {0} /home/heat-admin/'
- .format(remote_odl_akka_conf))
- controller.run_cmd('sudo chmod 777 {0}'
- .format(remote_odl_home_akka_conf))
- controller.get_file(remote_odl_home_akka_conf, local_tmp_akka_conf)
+ for odl_node in odl_nodes:
+ if installer_type in ['apex']:
+ get_odl_id_cmd = 'sudo docker ps -qf name=opendaylight_api'
+ odl_id = odl_node.run_cmd(get_odl_id_cmd)
+ odl_node.run_cmd('sudo docker cp '
+ '{container_id}:{odl_akka_conf} '
+ '/home/{user}/'
+ .format(container_id=odl_id,
+ odl_akka_conf=remote_odl_akka_conf,
+ user=node_user))
+ elif installer_type in ['fuel']:
+ odl_node.run_cmd('sudo cp {0} /home/{1}/'
+ .format(remote_odl_akka_conf, node_user))
+ odl_node.run_cmd('sudo chmod 777 {0}'
+ .format(remote_odl_home_akka_conf))
+ odl_node.get_file(remote_odl_home_akka_conf, local_tmp_akka_conf)
for line in open(local_tmp_akka_conf):
if re.search(odl_bgp_owner, line):
- return controller
+ return odl_node
return None
-def add_quagga_external_gre_end_point(controllers, remote_tep_ip):
+def add_quagga_external_gre_end_point(odl_nodes, remote_tep_ip):
json_body = {'input':
{'destination-ip': remote_tep_ip,
'tunnel-type': "odl-interface:tunnel-type-mpls-over-gre"}
}
- url = ('http://{ip}:8081/restconf/operations/'
- 'itm-rpc:add-external-tunnel-endpoint'.format(ip=controllers[0].ip))
+ url = ('http://{ip}:{port}/restconf/operations/'
+ 'itm-rpc:add-external-tunnel-endpoint'.format(ip=ODL_IP,
+ port=ODL_PORT))
headers = {'Content-type': 'application/yang.data+json',
'Accept': 'application/yang.data+json'}
try:
requests.post(url, data=json.dumps(json_body),
headers=headers,
- auth=HTTPBasicAuth('admin', 'admin'))
+ auth=HTTPBasicAuth(ODL_USER, ODL_PASSWORD))
except Exception as e:
logger.error("Failed to create external tunnel endpoint on"
" ODL for external tep ip %s with error %s"
@@ -971,9 +1018,11 @@ def add_quagga_external_gre_end_point(controllers, remote_tep_ip):
return None
-def is_fib_entry_present_on_odl(controllers, ip_prefix, vrf_id):
- url = ('http://admin:admin@{ip}:8081/restconf/config/odl-fib:fibEntries/'
- 'vrfTables/{vrf}/'.format(ip=controllers[0].ip, vrf=vrf_id))
+def is_fib_entry_present_on_odl(odl_nodes, ip_prefix, vrf_id):
+ url = ('http://{user}:{password}@{ip}:{port}/restconf/config/'
+ 'odl-fib:fibEntries/vrfTables/{vrf}/'
+ .format(user=ODL_USER, password=ODL_PASSWORD, ip=ODL_IP,
+ port=ODL_PORT, vrf=vrf_id))
logger.error("url is %s" % url)
try:
vrf_table = requests.get(url).json()
@@ -987,3 +1036,135 @@ def is_fib_entry_present_on_odl(controllers, ip_prefix, vrf_id):
logger.error('Failed to find ip prefix %s with error %s'
% (ip_prefix, e))
return False
+
+
+def wait_stack_for_status(conn, stack_id, stack_status, limit=12):
+ """ Waits to reach specified stack status. To be used with
+ CREATE_COMPLETE and UPDATE_COMPLETE.
+ Will try a specific number of attempts at 10sec intervals
+ (default 2min)
+
+ :param stack_id: the stack id returned by create_stack api call
+ :param stack_status: the stack status waiting for
+ :param limit: the maximum number of attempts
+ """
+ logger.debug("Stack '%s' create started" % stack_id)
+
+ stack_create_complete = False
+ attempts = 0
+ while attempts < limit:
+ try:
+ stack_st = conn.orchestration.get_stack(stack_id).status
+ except NotFoundException:
+ logger.error("Stack create failed")
+ raise SystemError("Stack create failed")
+ return False
+ if stack_st == stack_status:
+ stack_create_complete = True
+ break
+ attempts += 1
+ time.sleep(10)
+
+ logger.debug("Stack status check: %s times" % attempts)
+ if stack_create_complete is False:
+ logger.error("Stack create failed")
+ raise SystemError("Stack create failed")
+ return False
+
+ return True
+
+
+def delete_stack_and_wait(conn, stack_id, limit=12):
+ """ Starts and waits for completion of delete stack
+
+ Will try a specific number of attempts at 10sec intervals
+ (default 2min)
+
+ :param stack_id: the id of the stack to be deleted
+ :param limit: the maximum number of attempts
+ """
+ delete_started = False
+ if stack_id is not None:
+ delete_started = os_utils.delete_stack(conn, stack_id)
+
+ if delete_started is True:
+ logger.debug("Stack delete succesfully started")
+ else:
+ logger.error("Stack delete start failed")
+
+ stack_delete_complete = False
+ attempts = 0
+ while attempts < limit:
+ try:
+ stack_st = conn.orchestration.get_stack(stack_id).status
+ if stack_st == 'DELETE_COMPLETE':
+ stack_delete_complete = True
+ break
+ attempts += 1
+ time.sleep(10)
+ except NotFoundException:
+ stack_delete_complete = True
+ break
+
+ logger.debug("Stack status check: %s times" % attempts)
+ if not stack_delete_complete:
+ logger.error("Stack delete failed")
+ raise SystemError("Stack delete failed")
+ return False
+
+ return True
+
+
+def get_heat_environment(testcase, common_config):
+ """ Reads the heat parameters of a testcase into a yaml object
+
+    Each testcase where a Heat Orchestration Template (HOT) is introduced
+    has an associated parameters section.
+    Reads the testcase.heat_parameters section, adds COMMON_CONFIG.flavor
+    and places the result under the parameters tree.
+
+    :param testcase: the testcase for which the HOT file is fetched
+ :param common_config: the common config section
+ :return environment: a yaml object to be used as environment
+ """
+ fl = common_config.default_flavor
+ param_dict = testcase.heat_parameters
+ param_dict['flavor'] = fl
+ env_dict = {'parameters': param_dict}
+ return env_dict
+
+
+def get_vms_from_stack_outputs(conn, stack_id, vm_stack_output_keys):
+ """ Converts a vm name from a heat stack output to a nova vm object
+
+ :param stack_id: the id of the stack to fetch the vms from
+ :param vm_stack_output_keys: a list of stack outputs with the vm names
+ :return vms: a list of vm objects corresponding to the outputs
+ """
+ vms = []
+ for vmk in vm_stack_output_keys:
+ vm_output = os_utils.get_output(conn, stack_id, vmk)
+ if vm_output is not None:
+ vm_name = vm_output['output_value']
+ logger.debug("vm '%s' read from heat output" % vm_name)
+ vm = os_utils.get_instance_by_name(conn, vm_name)
+ if vm is not None:
+ vms.append(vm)
+ return vms
+
+
+def merge_yaml(y1, y2):
+ """ Merge two yaml HOT into one
+
+ The parameters, resources and outputs sections are merged.
+
+ :param y1: the first yaml
+ :param y2: the second yaml
+ :return y: merged yaml
+ """
+ d1 = yaml.load(y1)
+ d2 = yaml.load(y2)
+ for key in ('parameters', 'resources', 'outputs'):
+ if key in d2:
+ d1[key].update(d2[key])
+ return yaml.dump(d1, default_flow_style=False)
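A condensed sketch of how the new *bis testcases are expected to drive these
helpers end to end; the stand-in config objects below are placeholders for
sdnvpn.lib.config and the config.yaml entries that follow, and a real run
must supply every template parameter:
::

    import collections

    import openstack
    import yaml

    from sdnvpn.lib import openstack_utils as os_utils
    from sdnvpn.lib import utils as test_utils

    # Stand-ins for the real config objects (assumption for this sketch).
    Testcase = collections.namedtuple('Testcase', ['heat_parameters'])
    Common = collections.namedtuple('Common', ['default_flavor'])
    testcase = Testcase(heat_parameters={'instance_1_name': 'sdnvpn-1-1'})
    common_config = Common(default_flavor='m1.small')

    conn = openstack.connect()
    env = test_utils.get_heat_environment(testcase, common_config)

    with open('sdnvpn/artifacts/testcase_1bis.yaml') as f:
        template = yaml.safe_load(f.read())

    stack_id = os_utils.create_stack(conn, name='stack-1bis',
                                     template=template,
                                     parameters=env['parameters'])
    test_utils.wait_stack_for_status(conn, stack_id, 'CREATE_COMPLETE')
    vms = test_utils.get_vms_from_stack_outputs(
        conn, stack_id, ['vm1_o', 'vm2_o', 'vm3_o', 'vm4_o', 'vm5_o'])
    # ... connectivity checks on `vms` happen here ...
    test_utils.delete_stack_and_wait(conn, stack_id)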
diff --git a/sdnvpn/sh_utils/fetch-log-script.sh b/sdnvpn/sh_utils/fetch-log-script.sh
index c3c037d..9b0dc74 100755
--- a/sdnvpn/sh_utils/fetch-log-script.sh
+++ b/sdnvpn/sh_utils/fetch-log-script.sh
@@ -107,7 +107,11 @@ node(){
fi
done
# not all messages only tail the last 10k lines
- tail -n 10000 /var/log/messages > messages
+ if [ -f /var/log/messages ]; then
+ tail -n 10000 /var/log/messages > messages
+ elif [ -f /var/log/syslog ]; then
+ tail -n 10000 /var/log/syslog > messages
+ fi
}
_curl_data_store(){
@@ -137,7 +141,11 @@ datastore()
dump=$tmp_folder/dump-$HOSTNAME.txt
operational=$tmp_folder/Operational-Inventory-$HOSTNAME.txt
karaf_output=$tmp_folder/Karaf_out-$HOSTNAME.txt
- odl_ip_port=$(grep ^url= /etc/neutron/plugins/ml2/ml2_conf.ini |cut -d '/' -f3)
+ if [ -f /etc/neutron/plugins/ml2/ml2_conf.ini ]; then
+ odl_ip_port=$(grep ^url= /etc/neutron/plugins/ml2/ml2_conf.ini |cut -d '/' -f3)
+ else
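+ # no ml2_conf.ini present: derive the ODL ip:port from its usual listen ports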
+ odl_ip_port=$(netstat -tln | grep '8080\|8081\|8181\|8282' | awk 'NR==1 {print $4}')
+ fi
config_urls=( restconf/config/neutron:neutron/networks/ restconf/config/neutron:neutron/subnets/ restconf/config/neutron:neutron/ports/ restconf/config/neutron:neutron/routers/ restconf/config/itm:transport-zones/ restconf/config/itm-state:tunnels_state/ restconf/config/itm-state:external-tunnel-list/ restconf/config/itm-state:dpn-endpoints/ restconf/config/itm-config:vtep-config-schemas/ restconf/config/itm-config:tunnel-monitor-enabled/ restconf/config/itm-config:tunnel-monitor-interval/ restconf/config/interface-service-bindings:service-bindings/ restconf/config/l3vpn:vpn-instances/ restconf/config/ietf-interfaces:interfaces/ restconf/config/l3vpn:vpn-interfaces/ restconf/config/odl-fib:fibEntries restconf/config/neutronvpn:networkMaps restconf/config/neutronvpn:subnetmaps restconf/config/neutronvpn:vpnMaps restconf/config/neutronvpn:neutron-port-data restconf/config/id-manager:id-pools/ restconf/config/elan:elan-instances/ restconf/config/elan:elan-interfaces/ restconf/config/elan:elan-state/ restconf/config/elan:elan-forwarding-tables/ restconf/config/elan:elan-interface-forwarding-entries/ restconf/config/elan:elan-dpn-interfaces/ restconf/config/elan:elan-tag-name-map/ restconf/config/odl-nat:external-networks/ restconf/config/odl-nat:ext-routers/ restconf/config/odl-nat:intext-ip-port-map/ restconf/config/odl-nat:snatint-ip-port-map/ restconf/config/odl-l3vpn:vpn-instance-to-vpn-id/ restconf/config/neutronvpn:neutron-router-dpns/ restconf/operational/itm-config:tunnel-monitor-interval/ restconf/config/itm-config:tunnel-monitor-interval/ restconf/operational/itm-config:tunnel-monitor-params/ restconf/config/itm-config:tunnel-monitor-params/ restconf/config/vpnservice-dhcp:designated-switches-for-external-tunnels/ restconf/config/neutron:neutron/security-groups/ restconf/config/neutron:neutron/security-rules/ restconf/config/network-topology:network-topology/topology/hwvtep:1 restconf/config/network-topology:network-topology/topology/ovsdb:1 )
diff --git a/sdnvpn/test/functest/config.yaml b/sdnvpn/test/functest/config.yaml
index 4042455..3d2fd8b 100644
--- a/sdnvpn/test/functest/config.yaml
+++ b/sdnvpn/test/functest/config.yaml
@@ -27,6 +27,31 @@ testcases:
targets2: '55:55'
route_distinguishers: '11:11'
+ sdnvpn.test.functest.testcase_1bis:
+ enabled: true
+ order: 14
+ description: Test bed for HOT introduction - same tests as case 1
+ image_name: sdnvpn-image
+ stack_name: stack-1bis
+ hot_file_name: artifacts/testcase_1bis.yaml
+ heat_parameters:
+ instance_1_name: sdnvpn-1-1
+ instance_2_name: sdnvpn-1-2
+ instance_3_name: sdnvpn-1-3
+ instance_4_name: sdnvpn-1-4
+ instance_5_name: sdnvpn-1-5
+ net_1_name: sdnvpn-1-1-net
+ subnet_1_name: sdnvpn-1-1-subnet
+ subnet_1_cidr: 10.10.10.0/24
+ net_2_name: sdnvpn-1-2-net
+ subnet_2_name: sdnvpn-1-2-subnet
+ subnet_2_cidr: 10.10.11.0/24
+ secgroup_name: sdnvpn-sg
+ secgroup_descr: Security group for SDNVPN test cases
+ targets1: '88:88'
+ targets2: '55:55'
+ route_distinguishers: '11:11'
+
sdnvpn.test.functest.testcase_2:
enabled: true
order: 2
@@ -61,6 +86,43 @@ testcases:
route_distinguishers1: '111:111'
route_distinguishers2: '222:222'
+ sdnvpn.test.functest.testcase_2bis:
+ enabled: true
+ order: 15
+ description: Tenant separation - same as test case 2
+ image_name: sdnvpn-image
+ stack_name: stack-2bis
+ hot_file_name: artifacts/testcase_2bis.yaml
+ heat_parameters:
+ instance_1_name: sdnvpn-2-1
+ instance_2_name: sdnvpn-2-2
+ instance_3_name: sdnvpn-2-3
+ instance_4_name: sdnvpn-2-4
+ instance_5_name: sdnvpn-2-5
+ instance_1_ip: 10.10.10.11
+ instance_2_ip: 10.10.10.12
+ instance_3_ip: 10.10.11.13
+ instance_4_ip: 10.10.10.12
+ instance_5_ip: 10.10.11.13
+ net_1_name: sdnvpn-2-1-net
+ subnet_1a_name: sdnvpn-2-1a-subnet
+ subnet_1a_cidr: 10.10.10.0/24
+ subnet_1b_name: sdnvpn-2-1b-subnet
+ subnet_1b_cidr: 10.10.11.0/24
+ router_1_name: sdnvpn-2-1-router
+ net_2_name: sdnvpn-2-2-net
+ subnet_2a_name: sdnvpn-2-2a-subnet
+ subnet_2a_cidr: 10.10.11.0/24
+ subnet_2b_name: sdnvpn-2-2b-subnet
+ subnet_2b_cidr: 10.10.10.0/24
+ router_2_name: sdnvpn-2-2-router
+ secgroup_name: sdnvpn-sg
+ secgroup_descr: Security group for SDNVPN test cases
+ targets1: '88:88'
+ targets2: '55:55'
+ route_distinguishers1: '111:111'
+ route_distinguishers2: '222:222'
+
sdnvpn.test.functest.testcase_3:
enabled: true
order: 3
@@ -114,6 +176,32 @@ testcases:
targets2: '55:55'
route_distinguishers: '12:12'
+ sdnvpn.test.functest.testcase_4bis:
+ enabled: true
+ order: 17
+ description: Test bed for HOT introduction - same tests as case 4
+ image_name: sdnvpn-image
+ stack_name: stack-4bis
+ hot_file_name: artifacts/testcase_4bis.yaml
+ heat_parameters:
+ instance_1_name: sdnvpn-4-1
+ instance_2_name: sdnvpn-4-2
+ instance_3_name: sdnvpn-4-3
+ instance_4_name: sdnvpn-4-4
+ instance_5_name: sdnvpn-4-5
+ net_1_name: sdnvpn-4-1-net
+ subnet_1_name: sdnvpn-4-1-subnet
+ subnet_1_cidr: 10.10.10.0/24
+ router_1_name: sdnvpn-4-1-router
+ net_2_name: sdnvpn-4-2-net
+ subnet_2_name: sdnvpn-4-2-subnet
+ subnet_2_cidr: 10.10.11.0/24
+ secgroup_name: sdnvpn-sg
+ secgroup_descr: Security group for SDNVPN test cases
+ targets1: '88:88'
+ targets2: '55:55'
+ route_distinguishers: '12:12'
+
sdnvpn.test.functest.testcase_7:
enabled: false
order: 7
@@ -154,6 +242,30 @@ testcases:
targets: '88:88'
route_distinguishers: '18:18'
+ sdnvpn.test.functest.testcase_8bis:
+ enabled: true
+ order: 21
+ description: "Test floating IP and router assoc coexistence \
+ same as test case 8"
+ image_name: sdnvpn-image
+ stack_name: stack-8bis
+ hot_file_name: artifacts/testcase_8bis.yaml
+ hot_update_file_name: artifacts/testcase_8bis_upd.yaml
+ heat_parameters:
+ instance_1_name: sdnvpn-8-1
+ instance_2_name: sdnvpn-8-2
+ net_1_name: sdnvpn-8-1
+ subnet_1_name: sdnvpn-8-1-subnet
+ subnet_1_cidr: 10.10.10.0/24
+ router_1_name: sdnvpn-8-1-router
+ net_2_name: sdnvpn-8-2
+ subnet_2_name: sdnvpn-8-2-subnet
+ subnet_2_cidr: 10.10.20.0/24
+ secgroup_name: sdnvpn-sg
+ secgroup_descr: Security group for SDNVPN test cases
+ targets: '88:88'
+ route_distinguishers: '18:18'
+
sdnvpn.test.functest.testcase_9:
enabled: true
order: 9
diff --git a/sdnvpn/test/functest/testcase_1bis.py b/sdnvpn/test/functest/testcase_1bis.py
new file mode 100644
index 0000000..30a0abf
--- /dev/null
+++ b/sdnvpn/test/functest/testcase_1bis.py
@@ -0,0 +1,209 @@
+#!/usr/bin/env python
+#
+# Copyright (c) 2018 All rights reserved
+# This program and the accompanying materials
+# are made available under the terms of the Apache License, Version 2.0
+# which accompanies this distribution, and is available at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+
+import logging
+import sys
+import pkg_resources
+
+from random import randint
+from sdnvpn.lib import config as sdnvpn_config
+from sdnvpn.lib import openstack_utils as os_utils
+from sdnvpn.lib import utils as test_utils
+from sdnvpn.lib.results import Results
+
+logger = logging.getLogger(__name__)
+
+COMMON_CONFIG = sdnvpn_config.CommonConfig()
+TESTCASE_CONFIG = sdnvpn_config.TestcaseConfig(
+ 'sdnvpn.test.functest.testcase_1bis')
+
+
+def main():
+ conn = os_utils.get_os_connection()
+ results = Results(COMMON_CONFIG.line_length, conn)
+
+ results.add_to_summary(0, "=")
+ results.add_to_summary(2, "STATUS", "SUBTEST")
+ results.add_to_summary(0, "=")
+
+ # neutron client is needed as long as bgpvpn heat module
+ # is not yet installed by default in apex (APEX-618)
+ neutron_client = os_utils.get_neutron_client()
+
+ image_ids = []
+ bgpvpn_ids = []
+
+ try:
+ # image created outside HOT (OS::Glance::Image deprecated since ocata)
+ image_id = os_utils.create_glance_image(
+ conn, TESTCASE_CONFIG.image_name,
+ COMMON_CONFIG.image_path, disk=COMMON_CONFIG.image_format,
+ container="bare", public='public')
+ image_ids = [image_id]
+
+ compute_nodes = test_utils.assert_and_get_compute_nodes(conn)
+ az_1 = "nova:" + compute_nodes[0]
+ az_2 = "nova:" + compute_nodes[1]
+
+ file_path = pkg_resources.resource_filename(
+ 'sdnvpn', TESTCASE_CONFIG.hot_file_name)
+ templ = open(file_path, 'r').read()
+ logger.debug("Template is read: '%s'" % templ)
+ env = test_utils.get_heat_environment(TESTCASE_CONFIG, COMMON_CONFIG)
+ logger.debug("Environment is read: '%s'" % env)
+
+ env['name'] = TESTCASE_CONFIG.stack_name
+ env['template'] = templ
+ env['parameters']['image_n'] = TESTCASE_CONFIG.image_name
+ env['parameters']['av_zone_1'] = az_1
+ env['parameters']['av_zone_2'] = az_2
+
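+ # env doubles as the kwargs of create_stack below: it carries
+ # the stack name, the template body and the heat parameters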
+ stack_id = os_utils.create_stack(conn, **env)
+ if stack_id is None:
+ logger.error("Stack create start failed")
+ raise SystemError("Stack create start failed")
+
+ test_utils.wait_stack_for_status(conn, stack_id, 'CREATE_COMPLETE')
+
+ net_1_output = os_utils.get_output(conn, stack_id, 'net_1_o')
+ network_1_id = net_1_output['output_value']
+ net_2_output = os_utils.get_output(conn, stack_id, 'net_2_o')
+ network_2_id = net_2_output['output_value']
+
+ vm_stack_output_keys = ['vm1_o', 'vm2_o', 'vm3_o', 'vm4_o', 'vm5_o']
+ vms = test_utils.get_vms_from_stack_outputs(conn,
+ stack_id,
+ vm_stack_output_keys)
+
+ logger.debug("Entering base test case with stack '%s'" % stack_id)
+
+ msg = ("Create VPN with eRT<>iRT")
+ results.record_action(msg)
+ vpn_name = "sdnvpn-" + str(randint(100000, 999999))
+ kwargs = {
+ "import_targets": TESTCASE_CONFIG.targets1,
+ "export_targets": TESTCASE_CONFIG.targets2,
+ "route_distinguishers": TESTCASE_CONFIG.route_distinguishers,
+ "name": vpn_name
+ }
+ bgpvpn = test_utils.create_bgpvpn(neutron_client, **kwargs)
+ bgpvpn_id = bgpvpn['bgpvpn']['id']
+ logger.debug("VPN created details: %s" % bgpvpn)
+ bgpvpn_ids.append(bgpvpn_id)
+
+ msg = ("Associate network '%s' to the VPN." %
+ TESTCASE_CONFIG.heat_parameters['net_1_name'])
+ results.record_action(msg)
+ results.add_to_summary(0, "-")
+
+ test_utils.create_network_association(
+ neutron_client, bgpvpn_id, network_1_id)
+
+ # Remember: vms[X] is former vm_X+1
+
+ results.get_ping_status(vms[0], vms[1], expected="PASS", timeout=200)
+ results.get_ping_status(vms[0], vms[2], expected="PASS", timeout=30)
+ results.get_ping_status(vms[0], vms[3], expected="FAIL", timeout=30)
+
+ msg = ("Associate network '%s' to the VPN." %
+ TESTCASE_CONFIG.heat_parameters['net_2_name'])
+ results.add_to_summary(0, "-")
+ results.record_action(msg)
+ results.add_to_summary(0, "-")
+
+ test_utils.create_network_association(
+ neutron_client, bgpvpn_id, network_2_id)
+
+ test_utils.wait_for_bgp_net_assocs(neutron_client,
+ bgpvpn_id,
+ network_1_id,
+ network_2_id)
+
+ logger.info("Waiting for the VMs to connect to each other using the"
+ " updated network configuration")
+ test_utils.wait_before_subtest()
+
+ results.get_ping_status(vms[3], vms[4], expected="PASS", timeout=30)
+ # TODO enable again when isolation in VPN with iRT != eRT works
+ # results.get_ping_status(vms[0], vms[3], expected="FAIL", timeout=30)
+ # results.get_ping_status(vms[0], vms[4], expected="FAIL", timeout=30)
+
+ msg = ("Update VPN with eRT=iRT ...")
+ results.add_to_summary(0, "-")
+ results.record_action(msg)
+ results.add_to_summary(0, "-")
+
+ # use bgpvpn-create instead of update till NETVIRT-1067 bug is fixed
+ # kwargs = {"import_targets": TESTCASE_CONFIG.targets1,
+ # "export_targets": TESTCASE_CONFIG.targets1,
+ # "name": vpn_name}
+ # bgpvpn = test_utils.update_bgpvpn(neutron_client,
+ # bgpvpn_id, **kwargs)
+
+ test_utils.delete_bgpvpn(neutron_client, bgpvpn_id)
+ bgpvpn_ids.remove(bgpvpn_id)
+ kwargs = {
+ "import_targets": TESTCASE_CONFIG.targets1,
+ "export_targets": TESTCASE_CONFIG.targets1,
+ "route_distinguishers": TESTCASE_CONFIG.route_distinguishers,
+ "name": vpn_name
+ }
+
+ test_utils.wait_before_subtest()
+
+ bgpvpn = test_utils.create_bgpvpn(neutron_client, **kwargs)
+ bgpvpn_id = bgpvpn['bgpvpn']['id']
+ logger.debug("VPN re-created details: %s" % bgpvpn)
+ bgpvpn_ids.append(bgpvpn_id)
+
+ msg = ("Associate network '%s' to the VPN." %
+ TESTCASE_CONFIG.heat_parameters['net_1_name'])
+ results.record_action(msg)
+ results.add_to_summary(0, "-")
+
+ test_utils.create_network_association(
+ neutron_client, bgpvpn_id, network_1_id)
+
+ test_utils.create_network_association(
+ neutron_client, bgpvpn_id, network_2_id)
+
+ test_utils.wait_for_bgp_net_assocs(neutron_client,
+ bgpvpn_id,
+ network_1_id,
+ network_2_id)
+ # The above code has to be removed after re-enabling bgpvpn-update
+
+ logger.info("Waiting for the VMs to connect to each other using the"
+ " updated network configuration")
+ test_utils.wait_before_subtest()
+
+ results.get_ping_status(vms[0], vms[3], expected="PASS", timeout=30)
+ results.get_ping_status(vms[0], vms[4], expected="PASS", timeout=30)
+
+ except Exception as e:
+ logger.error("exception occurred while executing testcase_1bis: %s", e)
+ raise
+ finally:
+ test_utils.cleanup_glance(conn, image_ids)
+ test_utils.cleanup_neutron(conn, neutron_client, [], bgpvpn_ids,
+ [], [], [], [])
+
+ try:
+ test_utils.delete_stack_and_wait(conn, stack_id)
+ except Exception as e:
+ logger.error(
+ "exception occurred while executing testcase_1bis: %s", e)
+
+ return results.compile_summary()
+
+
+if __name__ == '__main__':
+ sys.exit(main())
diff --git a/sdnvpn/test/functest/testcase_2bis.py b/sdnvpn/test/functest/testcase_2bis.py
new file mode 100644
index 0000000..3736c0c
--- /dev/null
+++ b/sdnvpn/test/functest/testcase_2bis.py
@@ -0,0 +1,188 @@
+#!/usr/bin/env python
+#
+# Copyright (c) 2018 All rights reserved
+# This program and the accompanying materials
+# are made available under the terms of the Apache License, Version 2.0
+# which accompanies this distribution, and is available at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+
+import base64
+import logging
+import sys
+import pkg_resources
+
+from random import randint
+from sdnvpn.lib import config as sdnvpn_config
+from sdnvpn.lib import openstack_utils as os_utils
+from sdnvpn.lib import utils as test_utils
+from sdnvpn.lib.results import Results
+
+logger = logging.getLogger(__name__)
+
+COMMON_CONFIG = sdnvpn_config.CommonConfig()
+TESTCASE_CONFIG = sdnvpn_config.TestcaseConfig(
+ 'sdnvpn.test.functest.testcase_2bis')
+
+
+def main():
+ conn = os_utils.get_os_connection()
+ results = Results(COMMON_CONFIG.line_length, conn)
+
+ results.add_to_summary(0, '=')
+ results.add_to_summary(2, 'STATUS', 'SUBTEST')
+ results.add_to_summary(0, '=')
+
+ # neutron client is needed as long as bgpvpn heat module
+ # is not yet installed by default in apex (APEX-618)
+ neutron_client = os_utils.get_neutron_client()
+
+ image_ids = []
+ bgpvpn_ids = []
+
+ try:
+ logger.debug("Using private key %s injected to the VMs."
+ % COMMON_CONFIG.keyfile_path)
+ keyfile = open(COMMON_CONFIG.keyfile_path, 'r')
+ key_buf = keyfile.read()
+ keyfile.close()
+ key = base64.b64encode(key_buf)
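+ # the key is handed over base64-encoded via the id_rsa_key heat
+ # parameter so that the VMs can ssh each other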
+
+ # image created outside HOT (OS::Glance::Image deprecated since ocata)
+ image_id = os_utils.create_glance_image(
+ conn, TESTCASE_CONFIG.image_name,
+ COMMON_CONFIG.image_path, disk=COMMON_CONFIG.image_format,
+ container='bare', public='public')
+ image_ids = [image_id]
+
+ compute_nodes = test_utils.assert_and_get_compute_nodes(conn)
+
+ az_1 = 'nova:' + compute_nodes[0]
+ # av_zone_2 = "nova:" + compute_nodes[1]
+
+ file_path = pkg_resources.resource_filename(
+ 'sdnvpn', TESTCASE_CONFIG.hot_file_name)
+ templ = open(file_path, 'r').read()
+ logger.debug("Template is read: '%s'" % templ)
+ env = test_utils.get_heat_environment(TESTCASE_CONFIG, COMMON_CONFIG)
+ logger.debug("Environment is read: '%s'" % env)
+
+ env['name'] = TESTCASE_CONFIG.stack_name
+ env['template'] = templ
+ env['parameters']['image_n'] = TESTCASE_CONFIG.image_name
+ env['parameters']['av_zone_1'] = az_1
+ env['parameters']['id_rsa_key'] = key
+
+ stack_id = os_utils.create_stack(conn, **env)
+ if stack_id is None:
+ logger.error('Stack create start failed')
+ raise SystemError('Stack create start failed')
+
+ test_utils.wait_stack_for_status(conn, stack_id, 'CREATE_COMPLETE')
+
+ net_1_output = os_utils.get_output(conn, stack_id, 'net_1_o')
+ network_1_id = net_1_output['output_value']
+ net_2_output = os_utils.get_output(conn, stack_id, 'net_2_o')
+ network_2_id = net_2_output['output_value']
+
+ vm_stack_output_keys = ['vm1_o', 'vm2_o', 'vm3_o', 'vm4_o', 'vm5_o']
+ vms = test_utils.get_vms_from_stack_outputs(conn,
+ stack_id,
+ vm_stack_output_keys)
+
+ logger.debug("Entering base test case with stack '%s'" % stack_id)
+
+ msg = ('Create VPN1 with eRT=iRT')
+ results.record_action(msg)
+ vpn1_name = 'sdnvpn-1-' + str(randint(100000, 999999))
+ kwargs = {
+ 'import_targets': TESTCASE_CONFIG.targets2,
+ 'export_targets': TESTCASE_CONFIG.targets2,
+ 'route_targets': TESTCASE_CONFIG.targets2,
+ 'route_distinguishers': TESTCASE_CONFIG.route_distinguishers1,
+ 'name': vpn1_name
+ }
+ bgpvpn1 = test_utils.create_bgpvpn(neutron_client, **kwargs)
+ bgpvpn1_id = bgpvpn1['bgpvpn']['id']
+ logger.debug("VPN1 created details: %s" % bgpvpn1)
+ bgpvpn_ids.append(bgpvpn1_id)
+
+ msg = ("Associate network '%s' to the VPN." %
+ TESTCASE_CONFIG.heat_parameters['net_1_name'])
+ results.record_action(msg)
+ results.add_to_summary(0, '-')
+
+ test_utils.create_network_association(
+ neutron_client, bgpvpn1_id, network_1_id)
+
+ logger.info('Waiting for the VMs to connect to each other using the'
+ ' updated network configuration for VPN1')
+ test_utils.wait_before_subtest()
+
+ # Remember: vms[X] has instance_X+1_name
+
+ # 10.10.10.12 should return sdnvpn-2 to sdnvpn-1
+ results.check_ssh_output(
+ vms[0], vms[1],
+ expected=TESTCASE_CONFIG.heat_parameters['instance_2_name'],
+ timeout=200)
+
+ results.add_to_summary(0, '-')
+ msg = ('Create VPN2 with eRT=iRT')
+ results.record_action(msg)
+ vpn2_name = 'sdnvpn-2-' + str(randint(100000, 999999))
+ kwargs = {
+ 'import_targets': TESTCASE_CONFIG.targets1,
+ 'export_targets': TESTCASE_CONFIG.targets1,
+ 'route_targets': TESTCASE_CONFIG.targets1,
+ 'route_distinguishers': TESTCASE_CONFIG.route_distinguishers2,
+ 'name': vpn2_name
+ }
+ bgpvpn2 = test_utils.create_bgpvpn(neutron_client, **kwargs)
+ bgpvpn2_id = bgpvpn2['bgpvpn']['id']
+ logger.debug("VPN created details: %s" % bgpvpn2)
+ bgpvpn_ids.append(bgpvpn2_id)
+
+ msg = ("Associate network '%s' to the VPN2." %
+ TESTCASE_CONFIG.heat_parameters['net_2_name'])
+ results.record_action(msg)
+ results.add_to_summary(0, '-')
+
+ test_utils.create_network_association(
+ neutron_client, bgpvpn2_id, network_2_id)
+
+ test_utils.wait_for_bgp_net_assoc(neutron_client,
+ bgpvpn1_id, network_1_id)
+ test_utils.wait_for_bgp_net_assoc(neutron_client,
+ bgpvpn2_id, network_2_id)
+
+ logger.info('Waiting for the VMs to connect to each other using the'
+ ' updated network configuration for VPN2')
+ test_utils.wait_before_subtest()
+
+ # 10.10.10.11 should return 'not reachable' to sdnvpn-4
+ results.check_ssh_output(vms[3], vms[0],
+ expected='not reachable',
+ timeout=30)
+
+ except Exception as e:
+ logger.error("exception occurred while executing testcase_2bis: %s", e)
+ raise
+ finally:
+ test_utils.cleanup_glance(conn, image_ids)
+ test_utils.cleanup_neutron(conn, neutron_client, [], bgpvpn_ids,
+ [], [], [], [])
+
+ try:
+ test_utils.delete_stack_and_wait(conn, stack_id)
+ except Exception as e:
+ logger.error(
+ "exception occurred while executing testcase_2bis: %s", e)
+
+ return results.compile_summary()
+
+
+if __name__ == '__main__':
+ sys.exit(main())
diff --git a/sdnvpn/test/functest/testcase_3.py b/sdnvpn/test/functest/testcase_3.py
index 2a3a530..48024cb 100644
--- a/sdnvpn/test/functest/testcase_3.py
+++ b/sdnvpn/test/functest/testcase_3.py
@@ -51,45 +51,47 @@ def main():
health_cmd = "sudo docker ps -f name=opendaylight_api -f " \
"health=healthy -q"
if installer_type in ["fuel"]:
- controllers = [node for node in openstack_nodes
- if "running" in node.run_cmd(fuel_cmd)]
+ odl_nodes = [node for node in openstack_nodes
+ if "running" in node.run_cmd(fuel_cmd)]
elif installer_type in ["apex"]:
- controllers = [node for node in openstack_nodes
- if node.run_cmd(health_cmd)
- if "Running" in node.run_cmd(apex_cmd)]
+ odl_nodes = [node for node in openstack_nodes
+ if node.run_cmd(health_cmd)
+ if "Running" in node.run_cmd(apex_cmd)]
+ else:
+ logger.error("Incompatible installer type")
+ odl_nodes = []
computes = [node for node in openstack_nodes if node.is_compute()]
msg = ("Verify that OpenDaylight can start/communicate with zrpcd/Quagga")
results.record_action(msg)
results.add_to_summary(0, "-")
- if not controllers:
- msg = ("Controller (ODL) list is empty. Skipping rest of tests.")
+ if not odl_nodes:
+ msg = ("ODL node list is empty. Skipping rest of tests.")
logger.info(msg)
results.add_failure(msg)
return results.compile_summary()
else:
- msg = ("Controller (ODL) list is ready")
+ msg = ("ODL node list is ready")
logger.info(msg)
results.add_success(msg)
logger.info("Checking if zrpcd is "
- "running on the controller nodes")
+ "running on the opendaylight nodes")
- for controller in controllers:
- output_zrpcd = controller.run_cmd("ps --no-headers -C "
- "zrpcd -o state")
+ for odl_node in odl_nodes:
+ output_zrpcd = odl_node.run_cmd("ps --no-headers -C "
+ "zrpcd -o state")
states = output_zrpcd.split()
running = any([s != 'Z' for s in states])
- msg = ("zrpcd is running in {name}".format(name=controller.name))
+ msg = ("zrpcd is running in {name}".format(name=odl_node.name))
if not running:
- logger.info("zrpcd is not running on the controller node {name}"
- .format(name=controller.name))
+ logger.info("zrpcd is not running on the opendaylight node {name}"
+ .format(name=odl_node.name))
results.add_failure(msg)
else:
- logger.info("zrpcd is running on the controller node {name}"
- .format(name=controller.name))
+ logger.info("zrpcd is running on the opendaylight node {name}"
+ .format(name=odl_node.name))
results.add_success(msg)
results.add_to_summary(0, "-")
@@ -97,51 +99,55 @@ def main():
# Find the BGP entity owner in ODL because of this bug:
# https://jira.opendaylight.org/browse/NETVIRT-1308
msg = ("Found BGP entity owner")
- controller = test_utils.get_odl_bgp_entity_owner(controllers)
- if controller is None:
+ odl_node = test_utils.get_odl_bgp_entity_owner(odl_nodes)
+ if odl_node is None:
logger.error("Failed to find the BGP entity owner")
results.add_failure(msg)
else:
logger.info('BGP entity owner is {name}'
- .format(name=controller.name))
+ .format(name=odl_node.name))
results.add_success(msg)
results.add_to_summary(0, "-")
- get_ext_ip_cmd = "sudo ip a | grep br-ex | grep inet | awk '{print $2}'"
- ext_net_cidr = controller.run_cmd(get_ext_ip_cmd).strip().split('\n')
- ext_net_mask = ext_net_cidr[0].split('/')[1]
- controller_ext_ip = ext_net_cidr[0].split('/')[0]
+ installer_type = str(os.environ['INSTALLER_TYPE'].lower())
+ if installer_type in ['apex']:
+ odl_interface = 'br-ex'
+ elif installer_type in ['fuel']:
+ odl_interface = 'br-ext'
+ else:
+ logger.error("Incompatible installer type")
+ raise SystemError("Incompatible installer type")
+ odl_ip, odl_netmask = test_utils.get_node_ip_and_netmask(
+ odl_node, odl_interface)
- logger.info("Starting bgp speaker of controller at IP %s "
- % controller_ext_ip)
+ logger.info("Starting bgp speaker of opendaylight node at IP %s "
+ % odl_ip)
# Ensure that ZRPCD ip & port are well configured within ODL
add_client_conn_to_bgp = "bgp-connect -p 7644 -h 127.0.0.1 add"
- test_utils.run_odl_cmd(controller, add_client_conn_to_bgp)
+ test_utils.run_odl_cmd(odl_node, add_client_conn_to_bgp)
# Start bgp daemon
start_quagga = "odl:configure-bgp -op start-bgp-server " \
- "--as-num 100 --router-id {0}".format(controller_ext_ip)
- test_utils.run_odl_cmd(controller, start_quagga)
+ "--as-num 100 --router-id {0}".format(odl_ip)
+ test_utils.run_odl_cmd(odl_node, start_quagga)
# we need to wait a bit until the bgpd is up
time.sleep(5)
- logger.info("Checking if bgpd is running"
- " on the controller node")
+ logger.info("Checking if bgpd is running on the opendaylight node")
# Check if there is a non-zombie bgpd process
- output_bgpd = controller.run_cmd("ps --no-headers -C "
- "bgpd -o state")
+ output_bgpd = odl_node.run_cmd("ps --no-headers -C "
+ "bgpd -o state")
states = output_bgpd.split()
running = any([s != 'Z' for s in states])
msg = ("bgpd is running")
if not running:
- logger.info("bgpd is not running on the controller node")
+ logger.info("bgpd is not running on the opendaylight node")
results.add_failure(msg)
else:
- logger.info("bgpd is running on the controller node")
+ logger.info("bgpd is running on the opendaylight node")
results.add_success(msg)
results.add_to_summary(0, "-")
@@ -150,22 +156,22 @@ def main():
# but the test is disabled because of buggy upstream
# https://github.com/6WIND/zrpcd/issues/15
# stop_quagga = 'odl:configure-bgp -op stop-bgp-server'
- # test_utils.run_odl_cmd(controller, stop_quagga)
+ # test_utils.run_odl_cmd(odl_node, stop_quagga)
# logger.info("Checking if bgpd is still running"
- # " on the controller node")
+ # " on the opendaylight node")
- # output_bgpd = controller.run_cmd("ps --no-headers -C " \
- # "bgpd -o state")
+ # output_bgpd = odl_node.run_cmd("ps --no-headers -C " \
+ # "bgpd -o state")
# states = output_bgpd.split()
# running = any([s != 'Z' for s in states])
# msg = ("bgpd is stopped")
# if not running:
- # logger.info("bgpd is not running on the controller node")
+ # logger.info("bgpd is not running on the opendaylight node")
# results.add_success(msg)
# else:
- # logger.info("bgpd is still running on the controller node")
+ # logger.info("bgpd is still running on the opendaylight node")
# results.add_failure(msg)
# Taken from the sfc tests
@@ -268,9 +274,9 @@ def main():
compute = comp
break
quagga_bootstrap_script = quagga.gen_quagga_setup_script(
- controller_ext_ip,
+ odl_ip,
fake_fip['fip_addr'],
- ext_net_mask,
+ odl_netmask,
TESTCASE_CONFIG.external_network_ip_prefix,
TESTCASE_CONFIG.route_distinguishers,
TESTCASE_CONFIG.import_targets,
@@ -317,16 +323,16 @@ def main():
results.add_to_summary(0, '-')
neighbor = quagga.odl_add_neighbor(fake_fip['fip_addr'],
- controller_ext_ip,
- controller)
- peer = quagga.check_for_peering(controller)
+ odl_ip,
+ odl_node)
+ peer = quagga.check_for_peering(odl_node)
if neighbor and peer:
results.add_success("Peering with quagga")
else:
results.add_failure("Peering with quagga")
- test_utils.add_quagga_external_gre_end_point(controllers,
+ test_utils.add_quagga_external_gre_end_point(odl_nodes,
fake_fip['fip_addr'])
test_utils.wait_before_subtest()
@@ -370,7 +376,7 @@ def main():
instance_ids.append(vm_bgpvpn.id)
# wait for VM to get IP
- instance_up = test_utils.wait_for_instances_up(vm_bgpvpn)
+ instance_up = test_utils.wait_for_instances_get_dhcp(vm_bgpvpn)
if not instance_up:
logger.error("One or more instances are down")
@@ -382,7 +388,7 @@ def main():
msg = ("External IP prefix %s is exchanged with ODL"
% TESTCASE_CONFIG.external_network_ip_prefix)
fib_added = test_utils.is_fib_entry_present_on_odl(
- controllers,
+ odl_nodes,
TESTCASE_CONFIG.external_network_ip_prefix,
TESTCASE_CONFIG.route_distinguishers)
if fib_added:
@@ -417,12 +423,12 @@ def main():
if fake_fip is not None:
bgp_nbr_disconnect_cmd = ("bgp-nbr -i %s -a 200 del"
% fake_fip['fip_addr'])
- test_utils.run_odl_cmd(controller, bgp_nbr_disconnect_cmd)
+ test_utils.run_odl_cmd(odl_node, bgp_nbr_disconnect_cmd)
bgp_server_stop_cmd = ("bgp-rtr -r %s -a 100 del"
- % controller_ext_ip)
+ % odl_ip)
odl_zrpc_disconnect_cmd = "bgp-connect -p 7644 -h 127.0.0.1 del"
- test_utils.run_odl_cmd(controller, bgp_server_stop_cmd)
- test_utils.run_odl_cmd(controller, odl_zrpc_disconnect_cmd)
+ test_utils.run_odl_cmd(odl_node, bgp_server_stop_cmd)
+ test_utils.run_odl_cmd(odl_node, odl_zrpc_disconnect_cmd)
return results.compile_summary()
diff --git a/sdnvpn/test/functest/testcase_4bis.py b/sdnvpn/test/functest/testcase_4bis.py
new file mode 100644
index 0000000..6245f7c
--- /dev/null
+++ b/sdnvpn/test/functest/testcase_4bis.py
@@ -0,0 +1,215 @@
+#!/usr/bin/env python
+#
+# Copyright (c) 2018 All rights reserved
+# This program and the accompanying materials
+# are made available under the terms of the Apache License, Version 2.0
+# which accompanies this distribution, and is available at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+
+import logging
+import sys
+import pkg_resources
+
+from random import randint
+from sdnvpn.lib import config as sdnvpn_config
+from sdnvpn.lib import openstack_utils as os_utils
+from sdnvpn.lib import utils as test_utils
+from sdnvpn.lib.results import Results
+
+logger = logging.getLogger(__name__)
+
+COMMON_CONFIG = sdnvpn_config.CommonConfig()
+TESTCASE_CONFIG = sdnvpn_config.TestcaseConfig(
+ 'sdnvpn.test.functest.testcase_4bis')
+
+
+def main():
+ conn = os_utils.get_os_connection()
+ results = Results(COMMON_CONFIG.line_length, conn)
+
+ results.add_to_summary(0, '=')
+ results.add_to_summary(2, 'STATUS', 'SUBTEST')
+ results.add_to_summary(0, '=')
+
+ # neutron client is needed as long as bgpvpn heat module
+ # is not yet installed by default in apex (APEX-618)
+ neutron_client = os_utils.get_neutron_client()
+
+ image_ids = []
+ bgpvpn_ids = []
+
+ try:
+ image_id = os_utils.create_glance_image(
+ conn, TESTCASE_CONFIG.image_name,
+ COMMON_CONFIG.image_path, disk=COMMON_CONFIG.image_format,
+ container='bare', public='public')
+ image_ids = [image_id]
+
+ compute_nodes = test_utils.assert_and_get_compute_nodes(conn)
+ az_1 = 'nova:' + compute_nodes[0]
+ az_2 = 'nova:' + compute_nodes[1]
+
+ file_path = pkg_resources.resource_filename(
+ 'sdnvpn', TESTCASE_CONFIG.hot_file_name)
+ templ = open(file_path, 'r').read()
+ logger.debug("Template is read: '%s'" % templ)
+ env = test_utils.get_heat_environment(TESTCASE_CONFIG, COMMON_CONFIG)
+ logger.debug("Environment is read: '%s'" % env)
+
+ env['name'] = TESTCASE_CONFIG.stack_name
+ env['template'] = templ
+ env['parameters']['image_n'] = TESTCASE_CONFIG.image_name
+ env['parameters']['av_zone_1'] = az_1
+ env['parameters']['av_zone_2'] = az_2
+
+ stack_id = os_utils.create_stack(conn, **env)
+ if stack_id is None:
+ logger.error('Stack create start failed')
+ raise SystemError('Stack create start failed')
+
+ test_utils.wait_stack_for_status(conn, stack_id, 'CREATE_COMPLETE')
+
+ router_1_output = os_utils.get_output(conn, stack_id, 'router_1_o')
+ router_1_id = router_1_output['output_value']
+ net_2_output = os_utils.get_output(conn, stack_id, 'net_2_o')
+ network_2_id = net_2_output['output_value']
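+ # router 1 and net 2 are the resources associated to the bgpvpn below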
+
+ vm_stack_output_keys = ['vm1_o', 'vm2_o', 'vm3_o', 'vm4_o', 'vm5_o']
+ vms = test_utils.get_vms_from_stack_outputs(conn,
+ stack_id,
+ vm_stack_output_keys)
+
+ logger.debug("Entering base test case with stack '%s'" % stack_id)
+
+ msg = ('Create VPN with eRT<>iRT')
+ results.record_action(msg)
+ vpn_name = 'sdnvpn-' + str(randint(100000, 999999))
+ kwargs = {
+ 'import_targets': TESTCASE_CONFIG.targets1,
+ 'export_targets': TESTCASE_CONFIG.targets2,
+ 'route_distinguishers': TESTCASE_CONFIG.route_distinguishers,
+ 'name': vpn_name
+ }
+ bgpvpn = test_utils.create_bgpvpn(neutron_client, **kwargs)
+ bgpvpn_id = bgpvpn['bgpvpn']['id']
+ logger.debug("VPN created details: %s" % bgpvpn)
+ bgpvpn_ids.append(bgpvpn_id)
+
+ msg = ("Associate router '%s' to the VPN." %
+ TESTCASE_CONFIG.heat_parameters['router_1_name'])
+ results.record_action(msg)
+ results.add_to_summary(0, '-')
+
+ test_utils.create_router_association(
+ neutron_client, bgpvpn_id, router_1_id)
+
+ # Remember: vms[X] is former vm_X+1
+
+ results.get_ping_status(vms[0], vms[1], expected='PASS', timeout=200)
+ results.get_ping_status(vms[0], vms[2], expected='PASS', timeout=30)
+ results.get_ping_status(vms[0], vms[3], expected='FAIL', timeout=30)
+
+ msg = ("Associate network '%s' to the VPN." %
+ TESTCASE_CONFIG.heat_parameters['net_2_name'])
+ results.add_to_summary(0, '-')
+ results.record_action(msg)
+ results.add_to_summary(0, '-')
+
+ test_utils.create_network_association(
+ neutron_client, bgpvpn_id, network_2_id)
+
+ test_utils.wait_for_bgp_router_assoc(
+ neutron_client, bgpvpn_id, router_1_id)
+ test_utils.wait_for_bgp_net_assocs(
+ neutron_client, bgpvpn_id, network_2_id)
+
+ logger.info('Waiting for the VMs to connect to each other using the'
+ ' updated network configuration')
+ test_utils.wait_before_subtest()
+
+ results.get_ping_status(vms[3], vms[4], expected='PASS', timeout=30)
+ # TODO enable again when isolation in VPN with iRT != eRT works
+ # results.get_ping_status(vms[0], vms[3], expected="FAIL", timeout=30)
+ # results.get_ping_status(vms[0], vms[4], expected="FAIL", timeout=30)
+
+ msg = ('Update VPN with eRT=iRT ...')
+ results.add_to_summary(0, "-")
+ results.record_action(msg)
+ results.add_to_summary(0, "-")
+
+ # use bgpvpn-create instead of update till NETVIRT-1067 bug is fixed
+ # kwargs = {"import_targets": TESTCASE_CONFIG.targets1,
+ # "export_targets": TESTCASE_CONFIG.targets1,
+ # "name": vpn_name}
+ # bgpvpn = test_utils.update_bgpvpn(neutron_client,
+ # bgpvpn_id, **kwargs)
+
+ test_utils.delete_bgpvpn(neutron_client, bgpvpn_id)
+ bgpvpn_ids.remove(bgpvpn_id)
+ kwargs = {
+ 'import_targets': TESTCASE_CONFIG.targets1,
+ 'export_targets': TESTCASE_CONFIG.targets1,
+ 'route_distinguishers': TESTCASE_CONFIG.route_distinguishers,
+ 'name': vpn_name
+ }
+
+ test_utils.wait_before_subtest()
+
+ bgpvpn = test_utils.create_bgpvpn(neutron_client, **kwargs)
+ bgpvpn_id = bgpvpn['bgpvpn']['id']
+ logger.debug("VPN re-created details: %s" % bgpvpn)
+ bgpvpn_ids.append(bgpvpn_id)
+
+ msg = ("Associate again network '%s' and router '%s 'to the VPN."
+ % (TESTCASE_CONFIG.heat_parameters['net_2_name'],
+ TESTCASE_CONFIG.heat_parameters['router_1_name']))
+ results.add_to_summary(0, '-')
+ results.record_action(msg)
+ results.add_to_summary(0, '-')
+
+ test_utils.create_router_association(
+ neutron_client, bgpvpn_id, router_1_id)
+
+ test_utils.create_network_association(
+ neutron_client, bgpvpn_id, network_2_id)
+
+ test_utils.wait_for_bgp_router_assoc(
+ neutron_client, bgpvpn_id, router_1_id)
+ test_utils.wait_for_bgp_net_assoc(
+ neutron_client, bgpvpn_id, network_2_id)
+ # The above code has to be removed after re-enabling bgpvpn-update
+
+ logger.info('Waiting for the VMs to connect to each other using the'
+ ' updated network configuration')
+ test_utils.wait_before_subtest()
+
+ # TODO: uncomment the following once ODL netvirt fixes the following
+ # bug: https://jira.opendaylight.org/browse/NETVIRT-932
+ # results.get_ping_status(vms[0], vms[3], expected="PASS", timeout=30)
+ # results.get_ping_status(vms[0], vms[4], expected="PASS", timeout=30)
+
+ results.add_to_summary(0, '=')
+ logger.info("\n%s" % results.summary)
+
+ except Exception as e:
+ logger.error("exception occurred while executing testcase_4bis: %s", e)
+ raise
+ finally:
+ test_utils.cleanup_glance(conn, image_ids)
+ test_utils.cleanup_neutron(conn, neutron_client, [], bgpvpn_ids,
+ [], [], [], [])
+
+ try:
+ test_utils.delete_stack_and_wait(conn, stack_id)
+ except Exception as e:
+ logger.error(
+ "exception occurred while executing testcase_4bis: %s", e)
+
+ return results.compile_summary()
+
+
+if __name__ == '__main__':
+ sys.exit(main())
diff --git a/sdnvpn/test/functest/testcase_8bis.py b/sdnvpn/test/functest/testcase_8bis.py
new file mode 100644
index 0000000..d850020
--- /dev/null
+++ b/sdnvpn/test/functest/testcase_8bis.py
@@ -0,0 +1,176 @@
+#!/usr/bin/env python
+#
+# Copyright (c) 2017 All rights reserved
+# This program and the accompanying materials
+# are made available under the terms of the Apache License, Version 2.0
+# which accompanies this distribution, and is available at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Test whether router assoc can coexist with floating IP
+# - Create VM1 in net1 with a subnet which is connected to a router
+# which is connected with the gateway
+# - Create VM2 in net2 with a subnet without a router attached.
+# - Create bgpvpn with iRT=eRT
+# - Assoc the router of net1 with bgpvpn and assoc net 2 with the bgpvpn
+# - Try to ping from one VM to the other
+# - Assign a floating IP to the VM in the router assoc network
+ # - Ping the floating IP
+
+import logging
+import sys
+import pkg_resources
+
+from sdnvpn.lib import config as sdnvpn_config
+from sdnvpn.lib import openstack_utils as os_utils
+from sdnvpn.lib import utils as test_utils
+from sdnvpn.lib.results import Results
+
+
+logger = logging.getLogger(__name__)
+
+COMMON_CONFIG = sdnvpn_config.CommonConfig()
+TESTCASE_CONFIG = sdnvpn_config.TestcaseConfig(
+ 'sdnvpn.test.functest.testcase_8bis')
+
+
+def main():
+ conn = os_utils.get_os_connection()
+ results = Results(COMMON_CONFIG.line_length, conn)
+
+ results.add_to_summary(0, "=")
+ results.add_to_summary(2, "STATUS", "SUBTEST")
+ results.add_to_summary(0, "=")
+
+ # neutron client is needed as long as bgpvpn heat module
+ # is not yet installed by default in apex (APEX-618)
+ neutron_client = os_utils.get_neutron_client()
+
+ image_ids = []
+ bgpvpn_ids = []
+
+ try:
+ image_id = os_utils.create_glance_image(
+ conn, TESTCASE_CONFIG.image_name,
+ COMMON_CONFIG.image_path, disk=COMMON_CONFIG.image_format,
+ container='bare', public='public')
+ image_ids = [image_id]
+
+ compute_nodes = test_utils.assert_and_get_compute_nodes(conn)
+ az_1 = "nova:" + compute_nodes[0]
+ # spawning the VMs on the same compute because fib flow (21) entries
+ # are not created properly if vm1 and vm2 are attached to two
+ # different computes
+
+ file_path = pkg_resources.resource_filename(
+ 'sdnvpn', TESTCASE_CONFIG.hot_file_name)
+ templ = open(file_path, 'r').read()
+ logger.debug("Template is read: '%s'" % templ)
+ env = test_utils.get_heat_environment(TESTCASE_CONFIG, COMMON_CONFIG)
+ logger.debug("Environment is read: '%s'" % env)
+
+ env['name'] = TESTCASE_CONFIG.stack_name
+ env['parameters']['external_nw'] = os_utils.get_external_net(conn)
+ env['template'] = templ
+ env['parameters']['image_n'] = TESTCASE_CONFIG.image_name
+ env['parameters']['av_zone_1'] = az_1
+
+ stack_id = os_utils.create_stack(conn, **env)
+ if stack_id is None:
+ logger.error('Stack create start failed')
+ raise SystemError('Stack create start failed')
+
+ test_utils.wait_stack_for_status(conn, stack_id, 'CREATE_COMPLETE')
+
+ router_1_output = os_utils.get_output(conn, stack_id, 'router_1_o')
+ router_1_id = router_1_output['output_value']
+ net_2_output = os_utils.get_output(conn, stack_id, 'net_2_o')
+ network_2_id = net_2_output['output_value']
+
+ vm_stack_output_keys = ['vm1_o', 'vm2_o']
+ vms = test_utils.get_vms_from_stack_outputs(conn,
+ stack_id,
+ vm_stack_output_keys)
+
+ logger.debug("Entering base test case with stack '%s'" % stack_id)
+
+ # TODO: check if ODL fixed bug
+ # https://jira.opendaylight.org/browse/NETVIRT-932
+ results.record_action('Create VPN with eRT==iRT')
+ vpn_name = 'sdnvpn-8'
+ kwargs = {
+ 'import_targets': TESTCASE_CONFIG.targets,
+ 'export_targets': TESTCASE_CONFIG.targets,
+ 'route_distinguishers': TESTCASE_CONFIG.route_distinguishers,
+ 'name': vpn_name
+ }
+ bgpvpn = test_utils.create_bgpvpn(neutron_client, **kwargs)
+ bgpvpn_id = bgpvpn['bgpvpn']['id']
+ logger.debug("VPN created details: %s" % bgpvpn)
+ bgpvpn_ids.append(bgpvpn_id)
+
+ msg = ("Associate router '%s' and net '%s' to the VPN."
+ % (TESTCASE_CONFIG.heat_parameters['router_1_name'],
+ TESTCASE_CONFIG.heat_parameters['net_2_name']))
+ results.record_action(msg)
+ results.add_to_summary(0, "-")
+
+ test_utils.create_router_association(
+ neutron_client, bgpvpn_id, router_1_id)
+ test_utils.create_network_association(
+ neutron_client, bgpvpn_id, network_2_id)
+
+ test_utils.wait_for_bgp_router_assoc(
+ neutron_client, bgpvpn_id, router_1_id)
+ test_utils.wait_for_bgp_net_assoc(
+ neutron_client, bgpvpn_id, network_2_id)
+
+ results.get_ping_status(vms[0], vms[1], expected="PASS", timeout=200)
+ results.add_to_summary(0, "=")
+
+ msg = "Assign a Floating IP to %s - using stack update" % vms[0].name
+ results.record_action(msg)
+
+ file_path = pkg_resources.resource_filename(
+ 'sdnvpn', TESTCASE_CONFIG.hot_update_file_name)
+ templ_update = open(file_path, 'r').read()
+ logger.debug("Update template is read: '%s'" % templ_update)
+ templ = test_utils.merge_yaml(templ, templ_update)
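+ # the update template is overlaid on the original HOT, so the
+ # stack update only adds the floating ip related resources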
+
+ env['name'] = TESTCASE_CONFIG.stack_name
+ env['parameters']['external_nw'] = os_utils.get_external_net(conn)
+ env['template'] = templ
+ env['parameters']['image_n'] = TESTCASE_CONFIG.image_name
+ env['parameters']['av_zone_1'] = az_1
+
+ os_utils.update_stack(conn, stack_id, **env)
+
+ test_utils.wait_stack_for_status(conn, stack_id, 'UPDATE_COMPLETE')
+
+ fip_1_output = os_utils.get_output(conn, stack_id, 'fip_1_o')
+ fip = fip_1_output['output_value']
+
+ results.add_to_summary(0, "=")
+ results.record_action("Ping %s via Floating IP" % vms[0].name)
+ results.add_to_summary(0, "-")
+ results.ping_ip_test(fip)
+
+ except Exception as e:
+ logger.error("exception occurred while executing testcase_8bis: %s", e)
+ raise
+ finally:
+ test_utils.cleanup_glance(conn, image_ids)
+ test_utils.cleanup_neutron(conn, neutron_client, [], bgpvpn_ids,
+ [], [], [], [])
+
+ try:
+ test_utils.delete_stack_and_wait(conn, stack_id)
+ except Exception as e:
+ logger.error(
+ "exception occurred while executing testcase_8bis: %s", e)
+
+ return results.compile_summary()
+
+
+if __name__ == '__main__':
+ sys.exit(main())