# path: ci/common/net-config-multinode-os-net-config.yaml
heat_template_version: ocata

description: >
  Software Config to drive os-net-config for a simple bridge configured
  with a static IP address for the ctlplane network.

parameters:
  ControlPlaneIp:
    default: ''
    description: IP address/subnet on the ctlplane network
    type: string
  ExternalIpSubnet:
    default: ''
    description: IP address/subnet on the external network
    type: string
  InternalApiIpSubnet:
    default: ''
    description: IP address/subnet on the internal API network
    type: string
  StorageIpSubnet:
    default: ''
    description: IP address/subnet on the storage network
    type: string
  StorageMgmtIpSubnet:
    default: ''
    description: IP address/subnet on the storage mgmt network
    type: string
  TenantIpSubnet:
    default: ''
    description: IP address/subnet on the tenant network
    type: string
  ManagementIpSubnet:
    default: ''
    description: IP address/subnet on the management network
    type: string
  ControlPlaneSubnetCidr: # Override this via parameter_defaults
    default: '24'
    description: The subnet CIDR of the control plane network.
    type: string
  OvSBridgeMtu:
    default: 1300
    description: The MTU of the OvS bridge
    type: number

resources:

  OsNetConfigImpl:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config:
        list_join:
        - ''
        - - |
            #!/bin/bash
            function network_config_hook {
              primary_private_ip=$(cat /etc/nodepool/primary_node_private)
              sed -i "s/primary_private_ip/$primary_private_ip/" /etc/os-net-config/config.json
              subnode_private_ip=$(cat /etc/nodepool/node_private)
              sed -i "s/subnode_private_ip/$subnode_private_ip/" /etc/os-net-config/config.json
              # We start with an arbitrarily high vni key so that we don't
              # overlap with Neutron created values. These will also match the
              # values that we've been using previously from the devstack-gate
              # code.
              vni=1000002
              subnode_index=$(grep -n $(cat /etc/nodepool/node_private) /etc/nodepool/sub_nodes_private | cut -d: -f1)
              let vni+=$subnode_index
              sed -i "s/vni/$vni/" /etc/os-net-config/config.json
              export interface_name="br-ex_$primary_private_ip"
              # Until we are fully migrated to os-net-config we need to clean
              # up the old bridge first created by devstack-gate
              ovs-vsctl del-br br-ex
            }

          -
            str_replace:
              template:
                get_file: ../../network/scripts/run-os-net-config.sh
              params:
                $network_config:
                  network_config:
                    - type: ovs_bridge
                      name: bridge_name
                      mtu:
                        get_param: OvSBridgeMtu
                      use_dhcp: false
                      addresses:
                        - ip_netmask:
                            list_join:
                              - "/"
                              - - get_param: ControlPlaneIp
                                - get_param: ControlPlaneSubnetCidr
                      members:
                        - type: ovs_tunnel
                          name: interface_name
                          tunnel_type: vxlan
                          ovs_options:
                            - list_join:
                              - "="
                              - - key
                                - vni
                            - list_join:
                              - "="
                              - - remote_ip
                                - primary_private_ip
                            - list_join:
                              - "="
                              - - local_ip
                                - subnode_private_ip

outputs:
  OS::stack_id:
    description: The OsNetConfigImpl resource.
    value: {get_resource: OsNetConfigImpl}
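
The VNI derivation in ``network_config_hook`` above can be sketched stand-alone. This is a minimal illustration of the same logic; the temp directory and IP addresses are made-up stand-ins for the real ``/etc/nodepool`` files:

```shell
#!/bin/sh
# Stand-ins for /etc/nodepool/node_private and sub_nodes_private
# (the IPs here are invented for illustration).
workdir=$(mktemp -d)
printf '192.168.0.3\n' > "$workdir/node_private"
printf '192.168.0.2\n192.168.0.3\n192.168.0.4\n' > "$workdir/sub_nodes_private"

# Start from an arbitrarily high base so the tunnel keys cannot collide
# with VNIs allocated by Neutron, then offset by this subnode's 1-based
# position in the subnode list -- exactly as the hook does.
vni=1000002
subnode_index=$(grep -n "$(cat "$workdir/node_private")" \
  "$workdir/sub_nodes_private" | cut -d: -f1)
vni=$((vni + subnode_index))
echo "$vni"   # second subnode: 1000002 + 2 = 1000004
rm -rf "$workdir"
```

Each subnode thus gets a unique, stable VXLAN key without any coordination beyond the shared ``sub_nodes_private`` list.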
…the foundation for networks of the future.

* OpenContrail: An open source SDN controller designed for cloud and NFV use
  cases. It has an analytics engine and well defined northbound REST APIs to
  configure it and to gather ops/analytics data.
* OVN: A virtual networking solution developed by the same team that created
  OVS. OVN stands for Open Virtual Networking and differs from the above
  projects in that it focuses only on overlay networks.

----------
Data Plane
----------

OPNFV extends Linux virtual networking capabilities by using virtual switching
and routing components. The OPNFV community proactively engages with the
following open source communities to address the performance, scale and
resiliency needs apparent in carrier networks.

* OVS (Open vSwitch): a production quality, multilayer virtual switch designed
  to enable massive network automation through programmatic extension, while
  still supporting standard management interfaces and protocols.
* FD.io (Fast data - Input/Output): a high performance alternative to Open
  vSwitch. The core engine of FD.io is the Vector Packet Processing (VPP)
  engine, which processes a number of packets in parallel instead of one at a
  time, significantly improving packet throughput.
* DPDK: a set of libraries that bypass the kernel and provide polling
  mechanisms, instead of interrupt based operations, to speed up packet
  processing. DPDK works with both OVS and FD.io.

----
MANO
----

OPNFV integrates open source MANO projects for NFV orchestration and VNF
management. New MANO projects are constantly being added; currently OPNFV
integrates:

* OpenBaton: Open Baton is an ETSI NFV compliant Management and Orchestration
  (MANO) framework. It enables virtual network service deployments on top of
  heterogeneous NFV infrastructures. OpenBaton is also used to deploy vIMS
  (Clearwater and OpenIMS).
Deployment Architecture
=======================

A typical OPNFV deployment starts with three controller nodes running in a
high availability configuration, including control plane components from
OpenStack, SDN controllers, etc., and a minimum of two compute nodes for
deployment of workloads (VNFs). A detailed description of the hardware
requirements needed to support this 5 node configuration can be found in the
Pharos specification: `Pharos Project <https://www.opnfv.org/developers/pharos>`_

In addition to deployment on a highly available physical infrastructure, OPNFV
can be deployed for development and lab purposes in a virtual environment. In
this case each of the hosts is provided by a virtual machine, which allows
control and workload placement using nested virtualization.

The initial deployment is done using a staging server, referred to as the
"jumphost". This server, either physical or virtual, is first installed with
the installation program, which then installs OpenStack and other components
on the controller nodes and compute nodes. See the
:ref:`OPNFV User Guide & Configuration Guide <opnfv-user-config>` for more
details.

The OPNFV Testing Ecosystem
===========================

The OPNFV community has set out to address the needs of virtualization in the
carrier network, and as such platform validation and measurements are a
cornerstone of its iterative releases and objectives. To simplify the complex
task of feature, component and platform validation and characterization, the
testing community has established a fully automated method for addressing all
key areas of platform validation. This required the integration of a variety
of testing frameworks in our CI systems, real time and automated analysis of
results, and storage and publication of key facts for each run, as shown in
the following diagram.
.. image:: ../images/OPNFV_testing_working_group.png
   :alt: Overview infographic of the OPNFV testing Ecosystem

Release Verification
====================

The OPNFV community relies on its testing community to establish release
criteria for each OPNFV release. With each release cycle the testing criteria
become more stringent and more representative of our feature and resiliency
requirements. Each release establishes a set of deployment scenarios to
validate; the testing infrastructure and test suites need to accommodate these
features and capabilities.

The release criteria established by the testing teams include passing a set of
test cases derived from the functional testing project ‘functest’, a set of
test cases derived from our platform system and performance test project
‘yardstick’, and a selection of test cases for feature capabilities derived
from other test projects such as bottlenecks, vsperf, cperf and storperf. A
scenario needs to be deployable, pass these tests, and be removable from the
infrastructure iteratively in order to fulfill the release criteria.

--------
Functest
--------

Functest provides a functional testing framework incorporating a number of
test suites and test cases that test and verify OPNFV platform functionality.
The scope of Functest and the relevant test cases can be found in the
:ref:`Functest User Guide <functest-userguide>`.

Functest provides both feature project and component test suite integration,
leveraging OpenStack and SDN controller testing frameworks to verify that the
key components of the OPNFV platform are running successfully.

---------
Yardstick
---------

Yardstick is a testing project for verifying infrastructure compliance when
running VNF applications. Yardstick benchmarks a number of characteristics and
performance vectors of the infrastructure, making it a valuable pre-deployment
NFVI testing tool. Yardstick also provides a flexible testing framework for
launching other OPNFV testing projects.
There are two types of test cases in Yardstick:

* Yardstick generic test cases and OPNFV feature test cases, including basic
  characteristics benchmarking in the compute/storage/network areas.
* OPNFV feature test cases, which include basic telecom feature testing from
  OPNFV projects; for example nfv-kvm, sfc, ipv6, Parser, Availability and
  SDN VPN.

System Evaluation and compliance testing
========================================

The OPNFV community is developing a set of test suites intended to evaluate a
set of reference behaviors and capabilities of NFV systems developed
externally from the OPNFV ecosystem, in order to evaluate and measure their
ability to provide the features and capabilities developed in the OPNFV
ecosystem.

The Dovetail project will provide a test framework and methodology usable on
any NFV platform, including an agreed set of test cases establishing
evaluation criteria for exercising an OPNFV compatible system. The Dovetail
project has begun establishing the test framework and will provide a
preliminary methodology for the Danube release. Work will continue to develop
these test cases into a stand alone compliance evaluation solution in future
releases.

Additional Testing
==================

Besides the test suites and cases for release verification, additional testing
is performed to validate specific features or characteristics of the OPNFV
platform. These testing frameworks and test cases may address specific needs,
such as extended measurements, additional testing stimuli, or tests simulating
environmental disturbances or failures. These additional testing activities
provide a more complete evaluation of the OPNFV platform. Some of the projects
focused on these testing areas include:

-----------
Bottlenecks
-----------

Bottlenecks provides a framework to find system limitations and bottlenecks,
providing root cause isolation capabilities to facilitate system evaluation.
--------
NFVbench
--------

NFVbench is a lightweight end-to-end dataplane benchmarking framework. It
includes traffic generator(s) and measures a number of packet performance
related metrics.

----
QTIP
----

QTIP boils down NFVI compute and storage performance into one single metric
for easy comparison. QTIP crunches these numbers based on five different
categories of compute metrics and relies on Storperf for storage metrics.

--------
Storperf
--------

Storperf measures the performance of external block storage. The goal of this
project is to provide a report based on SNIA’s (Storage Networking Industry
Association) Performance Test Specification.

------
VSPERF
------

VSPERF provides an automated test framework and comprehensive test suite for
measuring the data-plane performance of the NFVI, including switching
technology and physical and virtual network interfaces. The provided test
cases and network topologies can be customized, while also allowing individual
versions of the operating system, vSwitch and hypervisor to be specified.

.. _`OPNFV Configuration Guide`: `OPNFV User Guide & Configuration Guide`
.. _`OPNFV User Guide`: `OPNFV User Guide & Configuration Guide`
.. _`Dovetail project`: https://wiki.opnfv.org/display/dovetail