.. This work is licensed under a Creative Commons Attribution 4.0 International
.. License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) Open Platform for NFV Project, Inc. and its contributors

========
Abstract
========

This document describes how to install the Fraser release of OPNFV when
using Fuel as a deployment tool, covering its usage, limitations,
dependencies and required system resources. This is a unified document
for both x86_64 and aarch64 architectures. All information is common for
both architectures except when explicitly stated.

============
Introduction
============

This document provides guidelines on how to install and configure the
Fraser release of OPNFV when using Fuel as a deployment tool, including
required software and hardware configurations.

Although the available installation options provide a high degree of
freedom in how the system is set up, including architecture, services
and features, etc., said permutations may not provide an OPNFV compliant
reference architecture. This document provides a step-by-step guide that
results in an OPNFV Fraser compliant deployment.

The audience of this document is assumed to have good knowledge of
networking and Unix/Linux administration.

=======
Preface
=======

Before starting the installation of the Fraser release of OPNFV, using
Fuel as a deployment tool, some planning must be done.

Preparations
============

Prior to installation, a number of deployment specific parameters must
be collected, those are:

#. Provider sub-net and gateway information
#. Provider VLAN information
#. Provider DNS addresses
#. Provider NTP addresses
#. Network overlay you plan to deploy (VLAN, VXLAN, FLAT)
#. How many nodes and what roles you want to deploy (Controllers,
   Storage, Computes)
#. Monitoring options you want to deploy (Ceilometer, Syslog, etc.)
#. Other options not covered in the document are available in the links
   above

This information will be needed for the configuration procedures
provided in this document.
=========================================
Hardware Requirements for Virtual Deploys
=========================================

The following minimum hardware requirements must be met for the virtual
installation of Fraser using Fuel:

+----------------------------+--------------------------------------------------------+
| **HW Aspect**              | **Requirement**                                        |
+============================+========================================================+
| **1 Jumpserver**           | A physical node (also called Foundation Node) that     |
|                            | will host a Salt Master VM and each of the VM nodes in |
|                            | the virtual deploy                                     |
+----------------------------+--------------------------------------------------------+
| **CPU**                    | Minimum 1 socket with Virtualization support           |
+----------------------------+--------------------------------------------------------+
| **RAM**                    | Minimum 32GB/server (Depending on VNF work load)       |
+----------------------------+--------------------------------------------------------+
| **Disk**                   | Minimum 100GB (SSD or SCSI (15krpm) highly recommended)|
+----------------------------+--------------------------------------------------------+

===========================================
Hardware Requirements for Baremetal Deploys
===========================================

The following minimum hardware requirements must be met for the
baremetal installation of Fraser using Fuel:

+-------------------------+------------------------------------------------------+
| **HW Aspect**           | **Requirement**                                      |
+=========================+======================================================+
| **# of nodes**          | Minimum 5                                            |
|                         |                                                      |
|                         | - 3 KVM servers which will run all the controller    |
|                         |   services                                           |
|                         | - 2 Compute nodes                                    |
+-------------------------+------------------------------------------------------+
| **CPU**                 | Minimum 1 socket with Virtualization support         |
+-------------------------+------------------------------------------------------+
| **RAM**                 | Minimum 16GB/server (Depending on VNF work load)     |
+-------------------------+------------------------------------------------------+
| **Disk**                | Minimum 256GB 10kRPM spinning disks                  |
+-------------------------+------------------------------------------------------+
| **Networks**            | 4 VLANs (PUBLIC, MGMT, STORAGE, PRIVATE) - can be    |
|                         | a mix of tagged/native                               |
|                         |                                                      |
|                         | 1 Un-Tagged VLAN for PXE Boot - ADMIN Network        |
|                         |                                                      |
|                         | Note: These can be allocated to a single NIC -       |
|                         | or spread out over multiple NICs                     |
+-------------------------+------------------------------------------------------+
| **1 Jumpserver**        | A physical node (also called Foundation Node) that   |
|                         | hosts the Salt Master and MaaS VMs                   |
+-------------------------+------------------------------------------------------+
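Before committing to either deploy type, it is worth confirming that the
Jumpserver CPU actually exposes the virtualization support required
above. The following is a minimal sketch for an x86_64 host (the flags
are ``vmx`` for Intel and ``svm`` for AMD; aarch64 hosts report
virtualization support differently), not an official part of the
requirements:

.. code-block:: bash

    # Count logical CPUs advertising hardware virtualization support.
    # A result of 0 means this server cannot host the KVM guests
    # (Salt Master, MaaS, cluster VMs) used by the deployment.
    grep -E -c '(vmx|svm)' /proc/cpuinfo

    # Once the kvm kernel modules are loaded, the KVM device should exist.
    ls -l /dev/kvm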
.. This work is licensed under a Creative Commons Attribution 4.0 International
.. License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, Huawei Technologies Co.,Ltd and others.

*************************************
Yardstick Test Case Description TC075
*************************************


+-----------------------------------------------------------------------------+
|Network Capacity and Scale Testing                                           |
|                                                                             |
+--------------+--------------------------------------------------------------+
|test case id  | OPNFV_YARDSTICK_TC075_Network_Capacity_and_Scale_testing     |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|metric        | Number of connections, Number of frames sent/received        |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|test purpose  | To evaluate the network capacity and scale with regards to   |
|              | connections and frames.                                      |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|configuration | file: opnfv_yardstick_tc075.yaml                             |
|              |                                                              |
|              | There is no additional configuration to be set for this TC.  |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|test tool     | netstat                                                      |
|              |                                                              |
|              | Netstat is normally part of any Linux distribution, hence it |
|              | doesn't need to be installed.                                |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|references    | Netstat man page                                             |
|              |                                                              |
|              | ETSI-NFV-TST001                                              |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|applicability | This test case is mainly for evaluating network performance. |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|pre_test      | Each pod node must have netstat installed.                   |
|conditions    |                                                              |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|test sequence | description and expected result                              |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|step 1        | The pod is available.                                        |
|              | Netstat is invoked and logs are produced and stored.         |
|              |                                                              |
|              | Result: Logs are stored.                                     |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|test verdict  | None. Number of connections and frames are fetched and       |
|              | stored.                                                      |
|              |                                                              |
+--------------+--------------------------------------------------------------+
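Although the exact invocation is driven by the test case configuration,
a minimal sketch of the kind of netstat commands this test relies on is
shown below; the option choices are illustrative, not the literal
Yardstick calls:

.. code-block:: bash

    # Per-interface frame counters; the RX-OK/TX-OK columns correspond
    # to frames received/sent on each interface.
    netstat -i

    # Count currently established TCP connections (numeric, TCP only).
    netstat -tn | grep -c ESTABLISHED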
Once the deployment is complete, the SaltStack Deployment Documentation
is available at ``http://<proxy public VIP>:8090``.

**NOTE**: The deployment uses the OPNFV Pharos project as input (PDF and
IDF files) for hardware and network configuration of all current OPNFV
PODs. When deploying a new POD, one can pass the `-b` flag to the deploy
script to override the path for the labconfig directory structure
containing the PDF and IDF.

.. code-block:: bash

    $ ci/deploy.sh -b file://<absolute_path_to_labconfig> \
                   -l <lab_name> \
                   -p <pod_name> \
                   -s <scenario> \
                   -D \
                   -S <storage_directory> |& tee deploy.log

- <absolute_path_to_labconfig> is the absolute path to a local directory,
  populated similar to Pharos, i.e. PDF/IDF reside in
  <absolute_path_to_labconfig>/labs/<lab_name>
- <lab_name> is the same as the directory in the path above
- <pod_name> is the name used for the PDF (<pod_name>.yaml) and IDF
  (idf-<pod_name>.yaml) files

Pod and Installer Descriptor Files
==================================

Descriptor files provide the installer with an abstraction of the target
pod with all its hardware characteristics and required parameters. This
information is split into two different files: the Pod Descriptor File
(PDF) and the Installer Descriptor File (IDF).

The Pod Descriptor File is a hardware description of the pod
infrastructure. The information is modeled under a yaml structure.
A reference file with the expected yaml structure is available at
*mcp/config/labs/local/pod1.yaml*.

The hardware description is arranged into a main "jumphost" node and a
"nodes" set for all target boards. For each node the following
characteristics are defined:

- Node parameters including CPU features and total memory.
- A list of available disks.
- Remote management parameters.
- Network interfaces list including mac address, speed, advanced
  features and name.

**Note**: The fixed IPs are ignored by the MCP installer script; it will
instead assign addresses based on the network ranges defined in the IDF.

The Installer Descriptor File extends the PDF with pod related parameters
required by the installer. This information may differ per installer type
and it is not considered part of the pod infrastructure. The IDF file
must be named after the PDF with the prefix "idf-". A reference file with
the expected structure is available at
*mcp/config/labs/local/idf-pod1.yaml*.

The file follows a yaml structure and two sections, "net_config" and
"fuel", are expected.

The "net_config" section describes all the internal and provider networks
assigned to the pod. Each used network is expected to have a vlan tag,
IP subnet and attached interface on the boards. Untagged vlans shall be
defined as "native".

The "fuel" section defines several sub-sections required by the Fuel
installer (a hypothetical fragment illustrating both sections follows
this list):

- jumphost: List of bridge names for each network on the Jumpserver.
- network: List of device name and bus address info of all the target
  nodes. The order must be aligned with the order defined in the PDF
  file. The Fuel installer relies on the IDF model to set up all node
  NICs by defining the expected device name and bus address.
- maas: Defines the target nodes commission timeout and deploy timeout
  (optional).
- reclass: Defines compute parameter tuning, including huge pages, cpu
  pinning and other DPDK settings (optional).
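For illustration, a hypothetical, hand-written IDF fragment covering both
expected sections is sketched below. Every value in it (network
addresses, vlan tags, bridge and device names, bus addresses, timeouts)
is made up for the example; the authoritative structure is given by the
*idf-pod1.yaml* reference file and the yaml schemas listed at the end of
this section:

.. code-block:: yaml

    ---
    # Hypothetical idf-<pod_name>.yaml fragment; all values are examples only.
    net_config:
      admin:                      # untagged PXE/admin network
        interface: 0              # index into the PDF interface list
        vlan: native
        network: 192.168.11.0
        mask: 24
      mgmt:
        interface: 1
        vlan: 300                 # tagged management network
        network: 10.167.4.0
        mask: 24
    fuel:
      jumphost:
        bridges:                  # Jumpserver bridge name per network
          admin: pxebr
          mgmt: br-ctl
      network:
        node:                     # order must match the PDF "nodes" order
          - interfaces:
              - enp6s0
              - enp7s0
            busaddr:
              - "0000:06:00.0"
              - "0000:07:00.0"
      maas:                       # optional commission/deploy timeouts
        timeout_comissioning: 10
        timeout_deploying: 15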
The following parameters can be defined in the IDF files under "reclass".
These values will override the default configuration values in the Fuel
repository:

- nova_cpu_pinning: List of CPU cores nova will be pinned to. Currently
  disabled.
- compute_hugepages_size: Size of each persistent huge page. Usual values
  are '2M' and '1G'.
- compute_hugepages_count: Total number of persistent huge pages.
- compute_hugepages_mount: Mount point to use for huge pages.
- compute_kernel_isolcpu: List of certain CPU cores that are isolated
  from the Linux scheduler.
- compute_dpdk_driver: Kernel module to provide userspace I/O support.
- compute_ovs_pmd_cpu_mask: Hexadecimal mask of CPUs to run DPDK
  Poll-mode drivers.
- compute_ovs_dpdk_socket_mem: Amount of huge page memory in MB to be
  used by the OVS-DPDK daemon on each NUMA node. The set size is equal
  to the NUMA node count; elements are separated by commas.
- compute_ovs_dpdk_lcore_mask: Hexadecimal mask of the DPDK lcore
  parameter used to run DPDK processes.
- compute_ovs_memory_channels: Number of memory channels to be used.
- dpdk0_driver: NIC driver to use for the physical network interface.
- dpdk0_n_rxq: Number of RX queues.

The full description of the PDF and IDF file structure is available as
yaml schemas. The schemas are defined as a git submodule in the Fuel
repository. Input files provided to the installer will be validated
against the schemas:

- *mcp/scripts/pharos/config/pdf/pod1.schema.yaml*
- *mcp/scripts/pharos/config/pdf/idf-pod1.schema.yaml*

=============
Release Notes
=============

Please refer to the :ref:`Release Notes ` article.

==========
References
==========

OPNFV

1) `OPNFV Home Page <https://www.opnfv.org>`_
2) `OPNFV documentation <https://docs.opnfv.org>`_
3) `Software downloads <https://www.opnfv.org/software/downloads>`_

OpenStack

4) `OpenStack Pike Release Artifacts <https://www.openstack.org/software/pike>`_
5) `OpenStack Documentation <https://docs.openstack.org>`_

OpenDaylight

6) `OpenDaylight Artifacts <https://www.opendaylight.org/software>`_

Fuel

7) `Mirantis Cloud Platform Documentation <https://docs.mirantis.com/mcp/latest>`_

Salt

8) `Saltstack Documentation <https://docs.saltstack.com/en/latest/topics>`_
9) `Saltstack Formulas <https://salt-formulas.readthedocs.io/en/latest>`_

Reclass

10) `Reclass model <https://reclass.pantsfullofunix.net>`_