Diffstat (limited to 'docs/testing/user')
-rw-r--r--   docs/testing/user/userguide/13-nsb-installation.rst   |  85
-rw-r--r--   docs/testing/user/userguide/code/pod_ixia.yaml        |  31
-rw-r--r--   docs/testing/user/userguide/opnfv_yardstick_tc088.rst | 129
-rw-r--r--   docs/testing/user/userguide/opnfv_yardstick_tc089.rst | 129
4 files changed, 299 insertions, 75 deletions
diff --git a/docs/testing/user/userguide/13-nsb-installation.rst b/docs/testing/user/userguide/13-nsb-installation.rst
index 1f6c79b0b..3e0ed0bfb 100644
--- a/docs/testing/user/userguide/13-nsb-installation.rst
+++ b/docs/testing/user/userguide/13-nsb-installation.rst
@@ -1058,40 +1058,8 @@ IxLoad
 
 Config ``pod_ixia.yaml``
 
-  .. code-block:: yaml
-
-    nodes:
-    -
-      name: trafficgen_1
-      role: IxNet
-      ip: 1.2.1.1 #ixia machine ip
-      user: user
-      password: r00t
-      key_filename: /root/.ssh/id_rsa
-      tg_config:
-        ixchassis: "1.2.1.7" #ixia chassis ip
-        tcl_port: "8009" # tcl server port
-        lib_path: "/opt/ixia/ixos-api/8.01.0.2/lib/ixTcl1.0"
-        root_dir: "/opt/ixia/ixos-api/8.01.0.2/"
-        py_bin_path: "/opt/ixia/ixload/8.01.106.3/bin/"
-        py_lib_path: "/opt/ixia/ixnetwork/8.01.1029.14/lib/PythonApi"
-        dut_result_dir: "/mnt/ixia"
-        version: 8.1
-      interfaces:
-        xe0: # logical name from topology.yaml and vnfd.yaml
-          vpci: "2:5" # Card:port
-          driver: "none"
-          dpdk_port_num: 0
-          local_ip: "152.16.100.20"
-          netmask: "255.255.0.0"
-          local_mac: "00:98:10:64:14:00"
-        xe1: # logical name from topology.yaml and vnfd.yaml
-          vpci: "2:6" # [(Card, port)]
-          driver: "none"
-          dpdk_port_num: 1
-          local_ip: "152.40.40.20"
-          netmask: "255.255.0.0"
-          local_mac: "00:98:28:28:14:00"
+  .. literalinclude:: code/pod_ixia.yaml
+     :language: console
 
 for sriov/ovs_dpdk pod files, please refer to above Standalone Virtualization for ovs-dpdk/sriov configuration
 
@@ -1113,10 +1081,10 @@ IxLoad
 IxNetwork
 ---------
 
-1. Software needed: ``IxNetworkAPI<ixnetwork verson>Linux64.bin.tgz``
-   (Download from ixia support site)
-   Install - ``IxNetworkAPI<ixnetwork verson>Linux64.bin.tgz``
-2. Update pod_ixia.yaml file with ixia details.
+IxNetwork testcases use the IxNetwork API Python Bindings module, which is
+installed as part of the project's requirements.
+
+1. Update the ``pod_ixia.yaml`` file with ixia details.
 
   .. code-block:: console
 
@@ -1124,44 +1092,12 @@ IxNetwork
 
   Config pod_ixia.yaml
 
-  .. code-block:: yaml
-
-    nodes:
-    -
-      name: trafficgen_1
-      role: IxNet
-      ip: 1.2.1.1 #ixia machine ip
-      user: user
-      password: r00t
-      key_filename: /root/.ssh/id_rsa
-      tg_config:
-        ixchassis: "1.2.1.7" #ixia chassis ip
-        tcl_port: "8009" # tcl server port
-        lib_path: "/opt/ixia/ixos-api/8.01.0.2/lib/ixTcl1.0"
-        root_dir: "/opt/ixia/ixos-api/8.01.0.2/"
-        py_bin_path: "/opt/ixia/ixload/8.01.106.3/bin/"
-        py_lib_path: "/opt/ixia/ixnetwork/8.01.1029.14/lib/PythonApi"
-        dut_result_dir: "/mnt/ixia"
-        version: 8.1
-      interfaces:
-        xe0: # logical name from topology.yaml and vnfd.yaml
-          vpci: "2:5" # Card:port
-          driver: "none"
-          dpdk_port_num: 0
-          local_ip: "152.16.100.20"
-          netmask: "255.255.0.0"
-          local_mac: "00:98:10:64:14:00"
-        xe1: # logical name from topology.yaml and vnfd.yaml
-          vpci: "2:6" # [(Card, port)]
-          driver: "none"
-          dpdk_port_num: 1
-          local_ip: "152.40.40.20"
-          netmask: "255.255.0.0"
-          local_mac: "00:98:28:28:14:00"
+  .. literalinclude:: code/pod_ixia.yaml
+     :language: console
 
 for sriov/ovs_dpdk pod files, please refer to above Standalone Virtualization for ovs-dpdk/sriov configuration
 
-3. Start IxNetwork TCL Server
+2. Start IxNetwork TCL Server
 
    You will also need to configure the IxNetwork machine to start the IXIA
    IxNetworkTclServer. This can be started like so:
 
@@ -1170,6 +1106,5 @@ IxNetwork
    ``Start->Programs->Ixia->IxNetwork->IxNetwork 7.21.893.14 GA->IxNetworkTclServer``
    (or ``IxNetworkApiServer``)
 
-4. Execute testcase in samplevnf folder e.g.
+3. Execute testcase in samplevnf folder e.g.
    ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml``
-
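The testcase wires the IXIA traffic generator into the scenario through the
node name defined in the pod file. A minimal sketch of that mapping, assuming
the usual NSB ``NSPerf`` scenario layout with a ``Node`` context named
``yardstick``; the topology filename, the VNF node name and the pod file path
are illustrative, not taken from the testcase above:

.. code-block:: yaml

   scenarios:
   - type: NSPerf
     topology: vfw-tg-topology.yaml    # illustrative filename
     nodes:
       tg__0: trafficgen_1.yardstick   # "trafficgen_1" matches "name" in pod_ixia.yaml
       vnf__0: vnf__0.yardstick        # illustrative VNF node name
   context:
     type: Node
     name: yardstick
     file: /etc/yardstick/nodes/pod_ixia.yaml   # assumed location of the pod file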
diff --git a/docs/testing/user/userguide/code/pod_ixia.yaml b/docs/testing/user/userguide/code/pod_ixia.yaml
new file mode 100644
index 000000000..4ab56fe4e
--- /dev/null
+++ b/docs/testing/user/userguide/code/pod_ixia.yaml
@@ -0,0 +1,31 @@
+nodes:
+-
+  name: trafficgen_1
+  role: IxNet
+  ip: 1.2.1.1 #ixia machine ip
+  user: user
+  password: r00t
+  key_filename: /root/.ssh/id_rsa
+  tg_config:
+    ixchassis: "1.2.1.7" #ixia chassis ip
+    tcl_port: "8009" # tcl server port
+    lib_path: "/opt/ixia/ixos-api/8.01.0.2/lib/ixTcl1.0"
+    root_dir: "/opt/ixia/ixos-api/8.01.0.2/"
+    py_bin_path: "/opt/ixia/ixload/8.01.106.3/bin/"
+    dut_result_dir: "/mnt/ixia"
+    version: 8.1
+  interfaces:
+    xe0: # logical name from topology.yaml and vnfd.yaml
+      vpci: "2:5" # Card:port
+      driver: "none"
+      dpdk_port_num: 0
+      local_ip: "152.16.100.20"
+      netmask: "255.255.0.0"
+      local_mac: "00:98:10:64:14:00"
+    xe1: # logical name from topology.yaml and vnfd.yaml
+      vpci: "2:6" # [(Card, port)]
+      driver: "none"
+      dpdk_port_num: 1
+      local_ip: "152.40.40.20"
+      netmask: "255.255.0.0"
+      local_mac: "00:98:28:28:14:00"
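For reference, ``tg_config`` is the block that points NSB at the IXIA software
installed on the traffic generator machine. An annotated copy follows; the
comments are interpretations based on the paths and inline comments in the
file above, and the version strings will differ per installation:

.. code-block:: yaml

   tg_config:
     ixchassis: "1.2.1.7"    # IP of the IXIA chassis holding the test ports
     tcl_port: "8009"        # port of the Tcl server started on the IXIA machine
     lib_path: "/opt/ixia/ixos-api/8.01.0.2/lib/ixTcl1.0"  # IxOS Tcl API library
     root_dir: "/opt/ixia/ixos-api/8.01.0.2/"              # IxOS install root
     py_bin_path: "/opt/ixia/ixload/8.01.106.3/bin/"       # IxLoad Python binaries
     dut_result_dir: "/mnt/ixia"  # where IXIA result files are collected
     version: 8.1                 # installed IXIA software version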
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc088.rst b/docs/testing/user/userguide/opnfv_yardstick_tc088.rst
new file mode 100644
index 000000000..2423a6b31
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc088.rst
@@ -0,0 +1,129 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Yin Kanglin and others.
+.. 14_ykl@tongji.edu.cn
+
+*************************************
+Yardstick Test Case Description TC088
+*************************************
+
++-----------------------------------------------------------------------------+
+|Control Node Openstack Service High Availability - Nova Scheduler            |
+|                                                                             |
++--------------+--------------------------------------------------------------+
+|test case id  | OPNFV_YARDSTICK_TC088: Control node Openstack service down - |
+|              | nova scheduler                                               |
++--------------+--------------------------------------------------------------+
+|test purpose  | This test case will verify the high availability of the      |
+|              | compute scheduler service provided by OpenStack              |
+|              | (nova-scheduler) on a control node.                          |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test method   | This test case kills the processes of the nova-scheduler     |
+|              | service on a selected control node, then checks whether the  |
+|              | request of the related OpenStack command is OK and the       |
+|              | killed processes are recovered.                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|attackers     | In this test case, an attacker called "kill-process" is      |
+|              | needed. This attacker includes three parameters:             |
+|              | 1) fault_type: which is used for finding the attacker's      |
+|              | scripts. It should always be set to "kill-process" in this   |
+|              | test case.                                                   |
+|              | 2) process_name: which is the process name of the specified  |
+|              | OpenStack service. If multiple processes use the same name   |
+|              | on the host, all of them will be killed by this attacker.    |
+|              | In this case, this parameter should always be set to         |
+|              | "nova-scheduler".                                            |
+|              | 3) host: which is the name of a control node being attacked. |
+|              |                                                              |
+|              | e.g.                                                         |
+|              | -fault_type: "kill-process"                                  |
+|              | -process_name: "nova-scheduler"                              |
+|              | -host: node1                                                 |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|monitors      | In this test case, one kind of monitor is needed:            |
+|              | 1. the "process" monitor checks whether a process is running |
+|              | on a specific node, and needs three parameters:              |
+|              | 1) monitor_type: which is used for finding the monitor class |
+|              | and related scripts. It should always be set to "process"    |
+|              | for this monitor.                                            |
+|              | 2) process_name: which is the process name to monitor        |
+|              | 3) host: which is the name of the node running the process   |
+|              |                                                              |
+|              | e.g.                                                         |
+|              | monitor:                                                     |
+|              | -monitor_type: "process"                                     |
+|              | -process_name: "nova-scheduler"                              |
+|              | -host: node1                                                 |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|operations    | In this test case, the following operation is needed:        |
+|              | 1. "nova-create-instance": create a VM instance to check     |
+|              | whether the nova-scheduler works normally.                   |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|metrics       | In this test case, there is one metric:                      |
+|              | 1) process_recover_time: which indicates the maximum time    |
+|              | (seconds) from the process being killed until it is          |
+|              | recovered                                                    |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test tool     | Developed by the project. Please see folder:                 |
+|              | "yardstick/benchmark/scenarios/availability/ha_tools"        |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|references    | ETSI NFV REL001                                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|configuration | This test case needs two configuration files:                |
+|              | 1) test case file: opnfv_yardstick_tc088.yaml                |
+|              | -Attackers: see above "attackers" description                |
+|              | -waiting_time: which is the time (seconds) from the process  |
+|              | being killed until the monitors are stopped                  |
+|              | -Monitors: see above "monitors" description                  |
+|              | -SLA: see above "metrics" description                        |
+|              |                                                              |
+|              | 2) POD file: pod.yaml                                        |
+|              | The POD configuration should be recorded in pod.yaml first;  |
+|              | the "host" item in this test case will use the node name in  |
+|              | the pod.yaml.                                                |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 1        | do attacker: connect the host through SSH, and then execute  |
+|              | the kill process script with the param value specified by    |
+|              | "process_name"                                               |
+|              |                                                              |
+|              | Result: Process will be killed.                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 2        | start monitors:                                              |
+|              | each monitor will run in an independent process              |
+|              |                                                              |
+|              | Result: The monitor info will be collected.                  |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 3        | create a new instance to check whether the nova scheduler    |
+|              | works normally                                               |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 4        | stop the monitors after a period of time specified by        |
+|              | "waiting_time"                                               |
+|              |                                                              |
+|              | Result: The monitor info will be aggregated.                 |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|post-action   | It is the action taken when the test cases exit. It will     |
+|              | check the status of the specified process on the host, and   |
+|              | restart the process if it is not running, for the next test  |
+|              | cases.                                                       |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test verdict  | Fails only if SLA is not passed, or if there is a test case  |
+|              | execution problem.                                           |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
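Put together, the attacker, monitor and operation described in TC088 would
land in ``opnfv_yardstick_tc088.yaml`` roughly as below. This is a hedged
sketch only: the ``GeneralHA`` scenario type, the surrounding keys and the SLA
value are assumptions modelled on other Yardstick HA testcases, not taken from
the file itself:

.. code-block:: yaml

   scenarios:
   - type: GeneralHA              # assumed scenario type
     options:
       attackers:
       - fault_type: "kill-process"
         process_name: "nova-scheduler"
         host: node1
       monitors:
       - monitor_type: "process"
         process_name: "nova-scheduler"
         host: node1
         sla:
           max_recover_time: 30   # illustrative value, in seconds
       operations:
       - operation_type: "general-operation"
         key: "nova-create-instance"   # assumed key name

TC089 below is identical in structure; only the attacked process changes to
``nova-conductor``.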
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc089.rst b/docs/testing/user/userguide/opnfv_yardstick_tc089.rst
new file mode 100644
index 000000000..0a8b2570b
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc089.rst
@@ -0,0 +1,129 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Yin Kanglin and others.
+.. 14_ykl@tongji.edu.cn
+
+*************************************
+Yardstick Test Case Description TC089
+*************************************
+
++-----------------------------------------------------------------------------+
+|Control Node Openstack Service High Availability - Nova Conductor            |
+|                                                                             |
++--------------+--------------------------------------------------------------+
+|test case id  | OPNFV_YARDSTICK_TC089: Control node Openstack service down - |
+|              | nova conductor                                               |
++--------------+--------------------------------------------------------------+
+|test purpose  | This test case will verify the high availability of the      |
+|              | compute database proxy service provided by OpenStack         |
+|              | (nova-conductor) on a control node.                          |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test method   | This test case kills the processes of the nova-conductor     |
+|              | service on a selected control node, then checks whether the  |
+|              | request of the related OpenStack command is OK and the       |
+|              | killed processes are recovered.                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|attackers     | In this test case, an attacker called "kill-process" is      |
+|              | needed. This attacker includes three parameters:             |
+|              | 1) fault_type: which is used for finding the attacker's      |
+|              | scripts. It should always be set to "kill-process" in this   |
+|              | test case.                                                   |
+|              | 2) process_name: which is the process name of the specified  |
+|              | OpenStack service. If multiple processes use the same name   |
+|              | on the host, all of them will be killed by this attacker.    |
+|              | In this case, this parameter should always be set to         |
+|              | "nova-conductor".                                            |
+|              | 3) host: which is the name of a control node being attacked. |
+|              |                                                              |
+|              | e.g.                                                         |
+|              | -fault_type: "kill-process"                                  |
+|              | -process_name: "nova-conductor"                              |
+|              | -host: node1                                                 |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|monitors      | In this test case, one kind of monitor is needed:            |
+|              | 1. the "process" monitor checks whether a process is running |
+|              | on a specific node, and needs three parameters:              |
+|              | 1) monitor_type: which is used for finding the monitor class |
+|              | and related scripts. It should always be set to "process"    |
+|              | for this monitor.                                            |
+|              | 2) process_name: which is the process name to monitor        |
+|              | 3) host: which is the name of the node running the process   |
+|              |                                                              |
+|              | e.g.                                                         |
+|              | monitor:                                                     |
+|              | -monitor_type: "process"                                     |
+|              | -process_name: "nova-conductor"                              |
+|              | -host: node1                                                 |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|operations    | In this test case, the following operation is needed:        |
+|              | 1. "nova-create-instance": create a VM instance to check     |
+|              | whether the nova-conductor works normally.                   |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|metrics       | In this test case, there is one metric:                      |
+|              | 1) process_recover_time: which indicates the maximum time    |
+|              | (seconds) from the process being killed until it is          |
+|              | recovered                                                    |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test tool     | Developed by the project. Please see folder:                 |
+|              | "yardstick/benchmark/scenarios/availability/ha_tools"        |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|references    | ETSI NFV REL001                                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|configuration | This test case needs two configuration files:                |
+|              | 1) test case file: opnfv_yardstick_tc089.yaml                |
+|              | -Attackers: see above "attackers" description                |
+|              | -waiting_time: which is the time (seconds) from the process  |
+|              | being killed until the monitors are stopped                  |
+|              | -Monitors: see above "monitors" description                  |
+|              | -SLA: see above "metrics" description                        |
+|              |                                                              |
+|              | 2) POD file: pod.yaml                                        |
+|              | The POD configuration should be recorded in pod.yaml first;  |
+|              | the "host" item in this test case will use the node name in  |
+|              | the pod.yaml.                                                |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 1        | do attacker: connect the host through SSH, and then execute  |
+|              | the kill process script with the param value specified by    |
+|              | "process_name"                                               |
+|              |                                                              |
+|              | Result: Process will be killed.                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 2        | start monitors:                                              |
+|              | each monitor will run in an independent process              |
+|              |                                                              |
+|              | Result: The monitor info will be collected.                  |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 3        | create a new instance to check whether the nova conductor    |
+|              | works normally                                               |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 4        | stop the monitors after a period of time specified by        |
+|              | "waiting_time"                                               |
+|              |                                                              |
+|              | Result: The monitor info will be aggregated.                 |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|post-action   | It is the action taken when the test cases exit. It will     |
+|              | check the status of the specified process on the host, and   |
+|              | restart the process if it is not running, for the next test  |
+|              | cases.                                                       |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test verdict  | Fails only if SLA is not passed, or if there is a test case  |
+|              | execution problem.                                           |
+|              |                                                              |
++--------------+--------------------------------------------------------------+