Diffstat (limited to 'docs/release/installation/installation.instruction.rst')
-rw-r--r-- | docs/release/installation/installation.instruction.rst | 766 |
1 file changed, 173 insertions, 593 deletions
diff --git a/docs/release/installation/installation.instruction.rst b/docs/release/installation/installation.instruction.rst index 8b25aa5a..fbf22d16 100644 --- a/docs/release/installation/installation.instruction.rst +++ b/docs/release/installation/installation.instruction.rst @@ -6,83 +6,42 @@ Abstract ======== -This document describes how to install the Danube release of -OPNFV when using Fuel as a deployment tool, with an AArch64 (only) -target node pool. It covers its usage, limitations, dependencies -and required system resources. +This document describes how to install the Euphrates release of +OPNFV when using Fuel as a deployment tool, covering its usage, +limitations, dependencies and required system resources. +This is a unified document for both x86_64 and aarch64 +architectures. All information is common for both architectures +except when explicitly stated. ============ Introduction ============ This document provides guidelines on how to install and -configure the Danube release of OPNFV when using Fuel as a -deployment tool, with an AArch64 (only) target node pool, -including required software and hardware configurations. +configure the Euphrates release of OPNFV when using Fuel as a +deployment tool, including required software and hardware configurations. -Although the available installation options give a high degree of -freedom in how the system is set-up, including architecture, services +Although the available installation options provide a high degree of +freedom in how the system is set up, including architecture, services and features, etc., said permutations may not provide an OPNFV -compliant reference architecture. This instruction provides a -step-by-step guide that results in an OPNFV Danube compliant +compliant reference architecture. This document provides a +step-by-step guide that results in an OPNFV Euphrates compliant deployment.
-The audience of this document is assumed to have good knowledge in +The audience of this document is assumed to have good knowledge of networking and Unix/Linux administration. ======= Preface ======= -Before starting the installation of the AArch64 Danube release -of OPNFV, using Fuel as a deployment tool, some planning must be +Before starting the installation of the Euphrates release of +OPNFV, using Fuel as a deployment tool, some planning must be done. -Retrieving the ISO image -======================== - -First of all, the Fuel deployment ISO image needs to be retrieved, the -ArmbandFuel .iso image of the AArch64 Danube release can be found at `OPNFV Downloads <https://www.opnfv.org/software/download>`_. - -Building the ISO image -====================== - -Alternatively, you may build the Armband Fuel .iso from source by cloning -the opnfv/armband git repository. To retrieve the repository for the AArch64 -Danube release use the following command: - -.. code-block:: bash - - $ git clone https://gerrit.opnfv.org/gerrit/armband - -Check-out the Danube release tag to set the HEAD to the -baseline required to replicate the Danube release: - -.. code-block:: bash - - $ git checkout danube.3.0 - -Go to the armband directory and build the .iso: - -.. 
code-block:: bash - - $ cd armband; make all - -For more information on how to build, please see :ref:`Build instruction for Fuel\@OPNFV <armband-development-overview-build-label>` - -Other preparations +Preparations ================== -Next, familiarize yourself with Fuel by reading the following documents: - -- `Fuel Installation Guide <http://docs.openstack.org/developer/fuel-docs/userdocs/fuel-install-guide.html>`_ - -- `Fuel User Guide <http://docs.openstack.org/developer/fuel-docs/userdocs/fuel-user-guide.html>`_ - -- `Fuel Developer Guide <http://docs.openstack.org/developer/fuel-docs/devdocs/develop.html>`_ - -- `Fuel Plugin Developers Guide <http://docs.openstack.org/developer/fuel-docs/plugindocs/fuel-plugin-sdk-guide.html>`_ - Prior to installation, a number of deployment specific parameters must be collected, those are: #. Provider sub-net and gateway information @@ -105,48 +64,74 @@ Prior to installation, a number of deployment specific parameters must be collec This information will be needed for the configuration procedures provided in this document. 
-===================== -Hardware requirements -===================== - -The following minimum hardware requirements must be met for the -installation of AArch64 Danube using Fuel: - -+----------------------------+------------------------------------------------------+ -| **HW Aspect** | **Requirement** | -| | | -+============================+======================================================+ -| **# of AArch64 nodes** | Minimum 5 (3 for non redundant deployment): | -| | | -| | - 1 Fuel deployment master (may be virtualized) | -| | | -| | - 3(1) Controllers (1 colocated mongo/ceilometer | -| | role, 2 Ceph-OSD roles) | -| | | -| | - 1 Compute (1 co-located Ceph-OSD role) | -| | | -+----------------------------+------------------------------------------------------+ -| **CPU** | Minimum 1 socket AArch64 (ARMv8) with Virtualization | -| | support | -+----------------------------+------------------------------------------------------+ -| **RAM** | Minimum 16GB/server (Depending on VNF work load) | -| | | -+----------------------------+------------------------------------------------------+ -| **Firmware** | UEFI compatible (e.g. EDK2) with PXE support | -+----------------------------+------------------------------------------------------+ -| **Disk** | Minimum 256GB 10kRPM spinning disks | -| | | -+----------------------------+------------------------------------------------------+ -| **Networks** | 4 Tagged VLANs (PUBLIC, MGMT, STORAGE, PRIVATE) | -| | | -| | 1 Un-Tagged VLAN for PXE Boot - ADMIN Network | -| | | -| | Note: These can be allocated to a single NIC - | -| | or spread out over multiple NICs as your hardware | -| | supports. 
| -+----------------------------+------------------------------------------------------+ -| **1 x86_64 node** | - 1 Fuel deployment master, x86 (may be virtualized) | -+----------------------------+------------------------------------------------------+ +========================================= +Hardware requirements for virtual deploys +========================================= + +The following minimum hardware requirements must be met for the virtual +installation of Euphrates using Fuel: + ++----------------------------+--------------------------------------------------------+ +| **HW Aspect** | **Requirement** | +| | | ++============================+========================================================+ +| **1 Jumpserver** | A physical node (also called Foundation Node) that | +| | hosts a Salt Master VM and each of the VM nodes in | +| | the virtual deploy | ++----------------------------+--------------------------------------------------------+ +| **CPU** | Minimum 1 socket with Virtualization support | ++----------------------------+--------------------------------------------------------+ +| **RAM** | Minimum 32GB/server (Depending on VNF work load) | ++----------------------------+--------------------------------------------------------+ +| **Disk** | Minimum 100GB (SSD or SCSI 15krpm highly recommended) | ++----------------------------+--------------------------------------------------------+ + + +=========================================== +Hardware requirements for baremetal deploys +=========================================== + +The following minimum hardware requirements must be met for the baremetal +installation of Euphrates using Fuel: + ++-------------------------+------------------------------------------------------+ +| **HW Aspect** | **Requirement** | +| | | ++=========================+======================================================+ +| **# of nodes** | Minimum 5 | +| | | +| | - 3 KVM servers which will run all the controller | +|
| services | +| | | +| | - 2 Compute nodes | +| | | ++-------------------------+------------------------------------------------------+ +| **CPU** | Minimum 1 socket with Virtualization support | ++-------------------------+------------------------------------------------------+ +| **RAM** | Minimum 16GB/server (Depending on VNF work load) | ++-------------------------+------------------------------------------------------+ +| **Disk** | Minimum 256GB 10kRPM spinning disks | ++-------------------------+------------------------------------------------------+ +| **Networks** | 4 VLANs (PUBLIC, MGMT, STORAGE, PRIVATE) - can be | +| | a mix of tagged/native | +| | | +| | 1 Un-Tagged VLAN for PXE Boot - ADMIN Network | +| | | +| | Note: These can be allocated to a single NIC - | +| | or spread out over multiple NICs | ++-------------------------+------------------------------------------------------+ +| **1 Jumpserver** | A physical node (also called Foundation Node) that | +| | hosts the Salt Master and MaaS VMs | ++-------------------------+------------------------------------------------------+ +| **Power management** | All targets need to have power management tools that | +| | allow rebooting the hardware and setting the boot | +| | order (e.g. IPMI) | ++-------------------------+------------------------------------------------------+ + +**NOTE:** All nodes, including the Jumpserver, must have the same architecture (either x86_64 or aarch64). + +**NOTE:** For aarch64 deployments a UEFI-compatible firmware with PXE support is needed (e.g. EDK2). + =============================== Help with Hardware Requirements @@ -159,7 +144,7 @@ For information on compatible hardware types available for use, please see `Fuel When choosing the hardware on which you will deploy your OpenStack environment, you should think about: -- CPU -- Consider the number of virtual machines that you plan to deploy in your cloud environment and the CPU per virtual machine.
+- CPU -- Consider the number of virtual machines that you plan to deploy in your cloud environment and the CPUs per virtual machine. - Memory -- Depends on the amount of RAM assigned per virtual machine and the controller node. @@ -188,7 +173,7 @@ the Fuel OPNFV reference platform. All the networks involved in the OPNFV infrastructure as well as the provider networks and the private tenant VLANs needs to be manually configured. -Manual configuration of the Danube hardware platform should +Manual configuration of the Euphrates hardware platform should be carried out according to the `OPNFV Pharos Specification <https://wiki.opnfv.org/display/pharos/Pharos+Specification>`_. @@ -196,525 +181,125 @@ be carried out according to the `OPNFV Pharos Specification OPNFV Software installation and deployment ========================================== -This section describes the installation of the OPNFV installation -server (Fuel master) as well as the deployment of the full OPNFV -reference platform stack across a server cluster. - -Install Fuel master -=================== - -#. Mount the Danube Armband Fuel ISO file/media as a boot device to the jump host server. - -#. Reboot the jump host to establish the Fuel server. - - - The system now boots from the ISO image. - - - Select "Fuel Install (Static IP)" (See figure below) - - - Press [Enter]. - - .. figure:: img/grub-1.png - -#. Wait until the Fuel setup screen is shown (Note: This can take up to 30 minutes). - -#. In the "Fuel User" section - Confirm/change the default password (See figure below) - - - Enter "admin" in the Fuel password input - - - Enter "admin" in the Confirm password input - - - Select "Check" and press [Enter] - - .. figure:: img/fuelmenu1.png - -#. In the "Network Setup" section - Configure DHCP/Static IP information for your FUEL node - For example, ETH0 is 10.20.0.2/24 for FUEL booting and ETH1 is DHCP in your corporate/lab network (see figure below). 
- - - Configure eth1 or other network interfaces here as well (if you have them present on your FUEL server). - - .. figure:: img/fuelmenu2.png - - .. figure:: img/fuelmenu2a.png - -#. In the "PXE Setup" section (see figure below) - Change the following fields to appropriate values (example below): - - - DHCP Pool Start 10.20.0.4 - - - DHCP Pool End 10.20.0.254 - - - DHCP Pool Gateway 10.20.0.2 (IP address of Fuel node) - - .. figure:: img/fuelmenu3.png - -#. In the "DNS & Hostname" section (see figure below) - Change the following fields to appropriate values: - - - Hostname - - - Domain - - - Search Domain - - - External DNS - - - Hostname to test DNS - - - Select <Check> and press [Enter] - - .. figure:: img/fuelmenu4.png - - -#. OPTION TO ENABLE PROXY SUPPORT - In the "Bootstrap Image" section (see figure below), edit the following fields to define a proxy. (**NOTE:** cannot be used in tandem with local repository support) - - - Navigate to "HTTP proxy" and enter your http proxy address - - - Select <Check> and press [Enter] - - .. figure:: img/fuelmenu5.png - -#. In the "Time Sync" section (see figure below) - Change the following fields to appropriate values: - - - NTP Server 1 <Customer NTP server 1> - - - NTP Server 2 <Customer NTP server 2> - - - NTP Server 3 <Customer NTP server 3> - - .. figure:: img/fuelmenu6.png - -#. In the "Feature groups" section - Enable "Experimental features" if you plan on using Ceilometer and/or MongoDB. - - **NOTE**: Ceilometer and MongoDB are experimental features starting with Danube.1.0. - -#. Start the installation. - - **NOTE**: Saving each section and hitting <F8> does not apply all settings! - - - Select Quit Setup and press Save and Quit. - - - The installation will now start, wait until the login screen is shown. 
- -Boot the Node Servers -===================== - -After the Fuel Master node has rebooted from the above steps and is at -the login prompt, you should boot the Node Servers (Your -Compute/Control/Storage blades, nested or real) with a PXE booting -scheme so that the FUEL Master can pick them up for control. - -**NOTE**: AArch64 target nodes are expected to support PXE booting an -EFI binary, i.e. an EFI-stubbed GRUB2 bootloader. - -**NOTE**: UEFI (EDK2) firmware is **highly** recommended, becoming -the **de facto** standard for ARMv8 nodes. - -#. Enable PXE booting - - - For every controller and compute server: enable PXE Booting as the first boot device in the UEFI (EDK2) boot order menu, and hard disk as the second boot device in the same menu. - -#. Reboot all the control and compute blades. - -#. Wait for the availability of nodes showing up in the Fuel GUI. - - - Connect to the FUEL UI via the URL provided in the Console (default: https://10.20.0.2:8443) - - - Wait until all nodes are displayed in top right corner of the Fuel GUI: Total nodes and Unallocated nodes (see figure below). - - .. figure:: img/nodes.png - -Install additional Plugins/Features on the FUEL node -==================================================== - -#. SSH to your FUEL node (e.g. root@10.20.0.2 pwd: r00tme) - -#. Select wanted plugins/features from the /opt/opnfv/ directory. - -#. Install the wanted plugin with the command - - .. code-block:: bash - - $ fuel plugins --install /opt/opnfv/<plugin-name>-<version>.<arch>.rpm - - Expected output (see figure below): - - .. code-block:: bash - - Plugin ....... was successfully installed. - - .. figure:: img/plugin_install.png - - **NOTE**: AArch64 Danube 3.0 ships only with ODL, OVS, BGPVPN, SFC and Tacker - plugins, see *Reference 15*. - -Create an OpenStack Environment -=============================== - -#. Connect to Fuel WEB UI with a browser (default: https://10.20.0.2:8443) (login: admin/admin) - -#. 
Create and name a new OpenStack environment, to be installed. - .. figure:: img/newenv.png -#. Select "<Newton on Ubuntu 16.04 (aarch64)>" and press <Next> -#. Select "compute virtulization method". - - Select "QEMU-KVM as hypervisor" and press <Next> -#. Select "network mode". - - Select "Neutron with ML2 plugin" - - Select "Neutron with tunneling segmentation" (Required when using the ODL plugin) - - Press <Next> -#. Select "Storage Back-ends". - - Select "Ceph for block storage" and press <Next> -#. Select "additional services" you wish to install. - - Check option "Install Ceilometer and Aodh" and press <Next> -#. Create the new environment. - - Click <Create> Button -Configure the network environment ================================= -#. Open the environment you previously created. +This section describes the process of installing all the components needed to +deploy the full OPNFV reference platform stack across a server cluster. -#. Open the networks tab and select the "default" Node Networks group to on the left pane (see figure below). +The installation is done with Mirantis Cloud Platform (MCP), which is based on +a reclass model. This model provides the formula inputs to Salt, to make the deploy +automatic based on the deployment scenario. +The reclass model covers: - .. figure:: img/network.png + - Infrastructure node definition: Salt Master node (cfg01) and MaaS node (mas01) + - Openstack node definition: Controller nodes (ctl01, ctl02, ctl03) and Compute nodes (cmp001, cmp002) + - Infrastructure components to install (software packages, services etc.) + - Openstack components and services (rabbitmq, galera etc.), as well as all configuration for them -#.
Update the Public network configuration and change the following fields to appropriate values: - - CIDR to <CIDR for Public IP Addresses> +Automatic Installation of a Virtual POD +======================================= - - IP Range Start to <Public IP Address start> +For virtual deploys all the targets are VMs on the Jumpserver. The deploy script will: - - IP Range End to <Public IP Address end> + - Create a Salt Master VM on the Jumpserver which will drive the installation + - Create the bridges for networking with virsh (only if a real bridge does not already exist for a given network) + - Install Openstack on the targets + - Leverage Salt to install & configure Openstack services - - Gateway to <Gateway for Public IP Addresses> - - Check <VLAN tagging>. +Automatic Installation of a Baremetal POD +========================================= - - Set appropriate VLAN id. +The baremetal installation process can be done by editing the information about +hardware and environment in the reclass files, or by using a Pod Descriptor File (PDF). +This file contains all the information about the hardware and network of the deployment +that will be fed to the reclass model during deployment. -#.
Update the Storage Network Configuration +The installation is done automatically with the deploy script, which will: - - Set CIDR to appropriate value (default 192.168.1.0/24) + - Create a Salt Master VM on the Jumpserver which will drive the installation + - Create a MaaS Node VM on the Jumpserver which will provision the targets + - Install Openstack on the targets + - Leverage MaaS to provision baremetal nodes with the operating system + - Leverage Salt to configure the operating system on the baremetal nodes + - Leverage Salt to install & configure Openstack services - - Set IP Range Start to appropriate value (default 192.168.1.1) - - Set IP Range End to appropriate value (default 192.168.1.254) +Steps to start the automatic deploy +=================================== - - Set vlan to appropriate value (default 102) +These steps are common to both virtual and baremetal deploys. -#. Update the Management network configuration. +#. Clone the Fuel code from gerrit - - Set CIDR to appropriate value (default 192.168.0.0/24) + For x86_64 - - Set IP Range Start to appropriate value (default 192.168.0.1) + .. code-block:: bash - - Set IP Range End to appropriate value (default 192.168.0.254) + $ git clone https://git.opnfv.org/fuel + $ cd fuel - - Check <VLAN tagging>. + For aarch64 - - Set appropriate VLAN id. (default 101) + .. code-block:: bash -#. Update the Private Network Information + $ git clone https://git.opnfv.org/armband + $ cd armband - - Set CIDR to appropriate value (default 192.168.2.0/24 +#. Check out the Euphrates release - - Set IP Range Start to appropriate value (default 192.168.2.1) + .. code-block:: bash - - Set IP Range End to appropriate value (default 192.168.2.254) + $ git checkout 5.0.0 - - Check <VLAN tagging>. +#. Start the deploy script - - Set appropriate VLAN tag (default 103) + .. code-block:: bash -#. Select the "Neutron L3" Node Networks group on the left pane.
+ $ ci/deploy.sh -l <lab_name> \ + -p <pod_name> \ + -b <URI to the PDF file> \ + -s <scenario> \ + -B <list of admin, public and management bridges> - .. figure:: img/neutronl3.png +Examples +-------- +#. Virtual deploy -#. Update the Floating Network configuration. + .. code-block:: bash - - Set the Floating IP range start (default 172.16.0.130) + $ ci/deploy.sh -b file:///home/jenkins/tmpdir/securedlab \ + -l ericsson \ + -p virtual_kvm \ + -s os-nosdn-nofeature-noha - - Set the Floating IP range end (default 172.16.0.254) +#. Baremetal deploy An x86 deploy on pod1 from Ericsson lab - - Set the Floating network name (default admin_floating_net) + .. code-block:: bash -#. Update the Internal Network configuration. + $ ci/deploy.sh -b file:///home/jenkins/tmpdir/securedlab \ - - Set Internal network CIDR to an appropriate value (default 192.168.111.0/24) + -l ericsson \ + -p pod1 \ + -s os-nosdn-nofeature-ha \ + -B pxebr - - Set Internal network gateway to an appropriate value An aarch64 deploy on pod5 from Arm lab - - Set the Internal network name (default admin_internal_net) + .. code-block:: bash -#. Update the Guest OS DNS servers. - - Set Guest OS DNS Server values appropriately -#. Save Settings. -#. Select the "Other" Node Networks group on the left pane (see figure below). - .. figure:: img/other.png -#. Update the Public network assignment. - - Check the box for "Assign public network to all nodes" (Required by OpenDaylight) -#. Update Host OS DNS Servers. - - Provide the DNS server settings -#. Update Host OS NTP Servers. - - Provide the NTP server settings -Select Hypervisor type -====================== -#. In the FUEL UI of your Environment, click the "Settings" Tab -#. Select "Compute" on the left side pane (see figure below) - - Check the KVM box and press "Save settings" - .. figure:: img/compute.png -Enable Plugins -============== -#. In the FUEL UI of your Environment, click the "Settings" Tab -#.
Select Other on the left side pane (see figure below) - - - Enable and configure the plugins of your choice - - .. figure:: img/plugins_aarch64.png - -Allocate nodes to environment and assign functional roles -========================================================= - -#. Click on the "Nodes" Tab in the FUEL WEB UI (see figure below). - - .. figure:: img/addnodes.png - -#. Assign roles (see figure below). - - - Click on the <+Add Nodes> button - - - Check <Controller>, <Telemetry - MongoDB> and optionally an SDN Controller role (OpenDaylight controller) in the "Assign Roles" Section. - - - Check one node which you want to act as a Controller from the bottom half of the screen - - - Click <Apply Changes>. - - - Click on the <+Add Nodes> button - - - Check the <Controller> and <Storage - Ceph OSD> roles. - - - Check the two next nodes you want to act as Controllers from the bottom half of the screen - - - Click <Apply Changes> - - - Click on <+Add Nodes> button - - - Check the <Compute> and <Storage - Ceph OSD> roles. - - - Check the Nodes you want to act as Computes from the bottom half of the screen - - - Click <Apply Changes>. - - .. figure:: img/computelist.png - -#. Configure interfaces (see figure below). - - - Check Select <All> to select all allocated nodes - - - Click <Configure Interfaces> - - - Assign interfaces (bonded) for mgmt-, admin-, private-, public- and storage networks - - - Click <Apply> - - .. figure:: img/interfaceconf.png - -OPTIONAL - Set Local Mirror Repos -================================= - -**NOTE**: Support for local mirrors is incomplete in Danube 3.0. -You may opt in for it to fetch less packages from internet during deployment, -but an internet connection is still required. - -The following steps must be executed if you are in an environment with -no connection to the Internet. The Fuel server delivers a local repo -that can be used for installation / deployment of openstack. - -#. 
In the Fuel UI of your Environment, click the Settings Tab and select General from the left pane. - - - Replace the URI values for the "Name" values outlined below: - - - "ubuntu" URI="deb http://<ip-of-fuel-server>:8080/mirrors/ubuntu/ xenial main" - - - "mos" URI="deb http://<ip-of-fuel-server>::8080/newton-10.0/ubuntu/x86_64 mos10.0 main restricted" - - - "Auxiliary" URI="deb http://<ip-of-fuel-server>:8080/newton-10.0/ubuntu/auxiliary auxiliary main restricted" - - - Click <Save Settings> at the bottom to Save your changes - -Target specific configuration -============================= - -#. [AArch64 specific] Configure MySQL WSREP SST provider - - **NOTE**: This option is only available for ArmbandFuel@OPNFV, since it - currently only affects AArch64 targets (see *Reference 15*). - - When using some AArch64 platforms as controller nodes, WSREP SST - synchronisation using default backend provider (xtrabackup-v2) used to fail, - so a mechanism that allows selecting a different WSREP SST provider - has been introduced. - - In the FUEL UI of your Environment, click the <Settings> tab, click - <OpenStack Services> on the left side pane (see figure below), then - select one of the following options: - - - xtrabackup-v2 (default provider, AArch64 stability issues); - - - rsync (AArch64 validated, better or comparable speed to xtrabackup, - takes the donor node offline during state transfer); - - - mysqldump (untested); - - .. figure:: img/fuelwsrepsst.png - -#. [AArch64 specific] Using a different kernel - - **NOTE**: By default, a 4.8 based kernel is used, for enabling experimental - GICv3 features (e.g. live migration) and SFC support (required by OVS-NSH). 
- - To use Ubuntu Xenial LTS generic kernel (also available in offline mirror), - in the FUEL UI of your Environment, click the <Settings> tab, click - <General> on the left side pane, then at the bottom of the page, in the - <Provision> subsection, amend the package list: - - - add <linux-headers-generic-lts-xenial>; - - - add <linux-image-generic-lts-xenial>; - - - add <linux-image-extra-lts-xenial> (optional); - - - remove <linux-image-4.8.0-9944-generic>; - - - remove <linux-headers-4.8.0-9944-generic>; - - - remove <linux-image-extra-4.8.0-9944-generic>; - -#. Set up targets for provisioning with non-default "Offloading Modes" - - Some target nodes may require additional configuration after they are - PXE booted (bootstrapped); the most frequent changes are in defaults - for ethernet devices' "Offloading Modes" settings (e.g. some targets' - ethernet drivers may strip VLAN traffic by default). - - If your target ethernet drivers have wrong "Offloading Modes" defaults, - in "Configure interfaces" page (described above), expand affected - interface's "Offloading Modes" and [un]check the relevant settings - (see figure below): - - .. figure:: img/offloadingmodes.png - -#. Set up targets for "Verify Networks" with non-default "Offloading Modes" - - **NOTE**: Check *Reference 15* for an updated and comprehensive list of - known issues and/or limitations, including "Offloading Modes" not being - applied during "Verify Networks" step. - - Setting custom "Offloading Modes" in Fuel GUI will only apply those settings - during provisiong and **not** during "Verify Networks", so if your targets - need this change, you have to apply "Offloading Modes" settings by hand - to bootstrapped nodes. - - **E.g.**: Our driver has "rx-vlan-filter" default "on" (expected "off") on - the Openstack interface(s) "eth1", preventing VLAN traffic from passing - during "Verify Networks". - - - From Fuel master console identify target nodes admin IPs (see figure below): - - .. 
code-block:: bash - - $ fuel nodes - - .. figure:: img/fuelconsole1.png - - - SSH into each of the target nodes and disable "rx-vlan-filter" on the - affected physical interface(s) allocated for OpenStack traffic (eth1): - - .. code-block:: bash - - $ ssh root@10.20.0.6 ethtool -K eth1 rx-vlan-filter off - - - Repeat the step above for all affected nodes/interfaces in the POD. - -Verify Networks -=============== - -It is important that the Verify Networks action is performed as it will verify -that communicate works for the networks you have setup, as well as check that -packages needed for a successful deployment can be fetched. - -#. From the FUEL UI in your Environment, Select the Networks Tab and select "Connectivity check" on the left pane (see figure below) - - - Select <Verify Networks> - - - Continue to fix your topology (physical switch, etc) until the "Verification Succeeded" and "Your network is configured correctly" message is shown - - .. figure:: img/verifynet.png - -Deploy Your Environment -======================= - -#. Deploy the environment. - - - In the Fuel GUI, click on the "Dashboard" Tab. - - - Click on <Deploy Changes> in the "Ready to Deploy?" section - - - Examine any information notice that pops up and click <Deploy> - - Wait for your deployment to complete, you can view the "Dashboard" - Tab to see the progress and status of your deployment. - -========================= -Installation health-check -========================= - -#. Perform system health-check (see figure below) - - - Click the "Health Check" tab inside your Environment in the FUEL Web UI - - - Check <Select All> and Click <Run Tests> - - - Allow tests to run and investigate results where appropriate - - - Check *Reference 15* for known issues / limitations on AArch64 - - .. 
figure:: img/health.png + $ ci/deploy.sh -b file:///home/jenkins/tmpdir/securedlab \ + -l arm \ + -p pod5 \ + -s os-nosdn-nofeature-ha \ + -B pxebr ============= Release Notes ============= -Please refer to the :ref:`Release Notes <armband-releasenotes>` article. +Please refer to the :ref:`Release Notes <fuel-release-notes-label>` article. ========== References ========== @@ -723,32 +308,27 @@ References OPNFV 1) `OPNFV Home Page <http://www.opnfv.org>`_ -2) `OPNFV documentation- and software downloads <https://www.opnfv.org/software/download>`_ +2) `OPNFV documentation <http://docs.opnfv.org>`_ +3) `Software downloads <https://www.opnfv.org/software/download>`_ OpenStack -3) `OpenStack Newton Release Artifacts <http://www.openstack.org/software/newton>`_ -4) `OpenStack Documentation <http://docs.openstack.org>`_ +4) `OpenStack Ocata Release Artifacts <http://www.openstack.org/software/ocata>`_ +5) `OpenStack Documentation <http://docs.openstack.org>`_ OpenDaylight -5) `OpenDaylight Artifacts <http://www.opendaylight.org/software/downloads>`_ +6) `OpenDaylight Artifacts <http://www.opendaylight.org/software/downloads>`_ Fuel -6) `The Fuel OpenStack Project <https://wiki.openstack.org/wiki/Fuel>`_ -7) `Fuel Documentation Overview <http://docs.openstack.org/developer/fuel-docs>`_ -8) `Fuel Installation Guide <http://docs.openstack.org/developer/fuel-docs/userdocs/fuel-install-guide.html>`_ -9) `Fuel User Guide <http://docs.openstack.org/developer/fuel-docs/userdocs/fuel-user-guide.html>`_ -10) `Fuel Developer Guide <http://docs.openstack.org/developer/fuel-docs/devdocs/develop.html>`_ -11) `Fuel Plugin Developers Guide <http://docs.openstack.org/developer/fuel-docs/plugindocs/fuel-plugin-sdk-guide.html>`_ -12) `(N/A on AArch64) Fuel OpenStack Hardware Compatibility List <https://www.mirantis.com/software/hardware-compatibility/>`_ - -Armband Fuel in OPNFV +7) `Mirantis Cloud Platform Documentation <https://docs.mirantis.com/mcp/latest>`_ -13) `OPNFV Installation instruction
for the AArch64 Danube release of OPNFV when using Fuel as a deployment tool <http://artifacts.opnfv.org/armband/docs/release_installation/index.html>`_ +Salt -14) `OPNFV Build instruction for the AArch64 Danube release of OPNFV when using Fuel as a deployment tool <http://artifacts.opnfv.org/armband/docs/development_overview_build/index.html>`_ +8) `Saltstack Documentation <https://docs.saltstack.com/en/latest/topics>`_ +9) `Saltstack Formulas <http://salt-formulas.readthedocs.io/en/latest/develop/overview-reclass.html>`_ -15) `OPNFV Release Note for the AArch64 Danube release of OPNFV when using Fuel as a deployment tool <http://artifacts.opnfv.org/armband/docs/release_release-notes/index.html>`_ +Reclass +10) `Reclass model <http://reclass.pantsfullofunix.net>`_
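The deploy invocation added in the "Steps to start the automatic deploy" section above takes several flags whose values differ per lab and pod. As a sanity check before a real run, the full command line can be assembled from its parts and reviewed first. This is a sketch: `build_deploy_cmd` is a hypothetical helper, not part of the Fuel repository, and the example values (the `ericsson` lab, `pod1`, the PDF URI, scenario and bridge) are simply the ones used in the baremetal example in this guide.

```shell
# Hypothetical helper (not part of the Fuel repo): assembles the ci/deploy.sh
# command line from its parts so it can be reviewed before a real deploy.
# Flag meanings follow this guide: -l lab, -p pod, -b PDF URI, -s scenario,
# -B bridge list.
build_deploy_cmd() {
    lab="$1"; pod="$2"; pdf="$3"; scenario="$4"; bridges="$5"
    printf 'ci/deploy.sh -l %s -p %s -b %s -s %s -B %s\n' \
        "$lab" "$pod" "$pdf" "$scenario" "$bridges"
}

# Example: the baremetal x86_64 deploy on Ericsson pod1 shown in this guide.
build_deploy_cmd ericsson pod1 file:///home/jenkins/tmpdir/securedlab \
    os-nosdn-nofeature-ha pxebr
```

Printing the command first makes it easy to confirm the PDF URI and bridge list against the Jumpserver's actual configuration before committing to a multi-hour deploy.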