5 files changed, 33 insertions(+), 695 deletions(-)
diff --git a/docs/installationprocedure/template-os-nosdn-nofeature-ha/deployment.rst b/docs/installationprocedure/template-os-nosdn-nofeature-ha/deployment.rst
index 4889d8380..4a8852ee6 100644
--- a/docs/installationprocedure/template-os-nosdn-nofeature-ha/deployment.rst
+++ b/docs/installationprocedure/template-os-nosdn-nofeature-ha/deployment.rst
@@ -10,7 +10,7 @@
 It is important that the Verify Networks action is performed as it will verify
 that communication works for the networks you have setup, as well as check that
 packages needed for a successful deployment can be fetched.

-#. From the FUEL UI in your Environment, Select the Networks Tab and select "Connectivity check" on the left pane
+From the FUEL UI in your Environment, Select the Networks Tab and select "Connectivity check" on the left pane

   - Select <Verify Networks>
@@ -19,7 +19,7 @@ packages needed for a successful deployment can be fetched.
 Deploy Your Environment
 -----------------------

-38. Deploy the environment.
+Deploy the environment.

   - In the Fuel GUI, click on the "Dashboard" Tab.
diff --git a/docs/installationprocedure/template-os-nosdn-nofeature-ha/hardware.requirements.rst b/docs/installationprocedure/template-os-nosdn-nofeature-ha/hardware.requirements.rst
index 082d4e96b..328f92199 100644
--- a/docs/installationprocedure/template-os-nosdn-nofeature-ha/hardware.requirements.rst
+++ b/docs/installationprocedure/template-os-nosdn-nofeature-ha/hardware.requirements.rst
@@ -5,81 +5,23 @@
 Hardware requirements
 =====================

-The following minimum hardware requirements must be met for the
-installation of <template> scenario:
+The Pharos Lab
+--------------

-+--------------------+------------------------------------------------------+
-| **HW Aspect**      | **Requirement**                                      |
-|                    |                                                      |
-+====================+======================================================+
-| **# of nodes**     | Minimum 5 (3 for non redundant deployment):          |
-|                    |                                                      |
-|                    | - 1 Fuel deployment master (may be virtualized)      |
-|                    |                                                      |
-|                    | - 3(1) Controllers (1 colocated mongo/ceilometer     |
-|                    |   role, 2 Ceph-OSD roles)                            |
-|                    |                                                      |
-|                    | - 1 Compute (1 co-located Ceph-OSD role)             |
-|                    |                                                      |
-+--------------------+------------------------------------------------------+
-| **CPU**            | Minimum 1 socket x86_AMD64 with Virtualization       |
-|                    | support                                              |
-+--------------------+------------------------------------------------------+
-| **RAM**            | Minimum 16GB/server (Depending on VNF work load)     |
-|                    |                                                      |
-+--------------------+------------------------------------------------------+
-| **Disk**           | Minimum 256GB 10kRPM spinning disks                  |
-|                    |                                                      |
-+--------------------+------------------------------------------------------+
-| **Networks**       | 4 Tagged VLANs (PUBLIC, MGMT, STORAGE, PRIVATE)      |
-|                    |                                                      |
-|                    | 1 Un-Tagged VLAN for PXE Boot - ADMIN Network        |
-|                    |                                                      |
-|                    | Note: These can be allocated to a single NIC -       |
-|                    | or spread out over multiple NICs as your hardware    |
-|                    | supports.                                            |
-+--------------------+------------------------------------------------------+
+Hardware requirements for OPNFV infrastructures are specified by the Pharos project.
+The Pharos project provides an OPNFV hardware specification for configuring your hardware
+at: http://artifacts.opnfv.org/pharos/docs/pharos-spec.html .

-Help with Hardware Requirements
-===============================
+Virtual deployment hardware requirements
+----------------------------------------

-Calculate hardware requirements:
+To perform a virtual deployment of an OPNFV scenario, different hardware requirements must be met.
+The server requirements for this type of deployment are outlined in the <missing spec>. -For information on compatible hardware types available for use, please see *Reference: 11*. - -When choosing the hardware on which you will deploy your OpenStack -environment, you should think about: - -- CPU -- Consider the number of virtual machines that you plan to deploy in your cloud environment and the CPU per virtual machine. - -- Memory -- Depends on the amount of RAM assigned per virtual machine and the controller node. - -- Storage -- Depends on the local drive space per virtual machine, remote volumes that can be attached to a virtual machine, and object storage. - -- Networking -- Depends on the Choose Network Topology, the network bandwidth per virtual machine, and network storage. - - -Top of the rack (TOR) Configuration requirements -================================================ - -The switching infrastructure provides connectivity for the OPNFV -infrastructure operations, tenant networks (East/West) and provider -connectivity (North/South); it also provides needed connectivity for -the Storage Area Network (SAN). -To avoid traffic congestion, it is strongly suggested that three -physically separated networks are used, that is: 1 physical network -for administration and control, one physical network for tenant private -and public networks, and one physical network for SAN. -The switching connectivity can (but does not need to) be fully redundant, -in such case it comprises a redundant 10GE switch pair for each of the -three physically separated networks. - -The physical TOR switches are **not** automatically configured from -the Fuel OPNFV reference platform. All the networks involved in the OPNFV -infrastructure as well as the provider networks and the private tenant -VLANs needs to be manually configured. - -Manual configuration of the Brahmaputra hardware platform should -be carried out according to the OPNFV Pharos specification: -<https://wiki.opnfv.org/pharos/pharos_specification> +.. Additional Hardware requirements +.. -------------------------------- +.. +.. Your scenario may require specific capabilities that are not explicitly stated in +.. the Pharos spec. If this is the case add your specific hardware requirements to this +.. section of the document under sub-headings. diff --git a/docs/installationprocedure/template-os-nosdn-nofeature-ha/installation-instruction.rst b/docs/installationprocedure/template-os-nosdn-nofeature-ha/installation-instruction.rst deleted file mode 100644 index b5383327b..000000000 --- a/docs/installationprocedure/template-os-nosdn-nofeature-ha/installation-instruction.rst +++ /dev/null @@ -1,608 +0,0 @@ -======================================================================================================== -OPNFV Installation instruction for the Brahmaputra release of OPNFV when using Fuel as a deployment tool -======================================================================================================== - -License -======= - -This work is licensed under a Creative Commons Attribution 4.0 International -License. .. http://creativecommons.org/licenses/by/4.0 .. -(c) Jonas Bjurel (Ericsson AB) and others - -Abstract -======== - -This document describes how to install the Brahmaputra release of -OPNFV when using Fuel as a deployment tool, covering it's usage, -limitations, dependencies and required system resources. 
- -Introduction -============ - -This document provides guidelines on how to install and -configure the Brahmaputra release of OPNFV when using Fuel as a -deployment tool, including required software and hardware configurations. - -Although the available installation options give a high degree of -freedom in how the system is set-up, including architecture, services -and features, etc., said permutations may not provide an OPNFV -compliant reference architecture. This instruction provides a -step-by-step guide that results in an OPNFV Brahmaputra compliant -deployment. - -The audience of this document is assumed to have good knowledge in -networking and Unix/Linux administration. - -Hardware requirements -===================== - -The following minimum hardware requirements must be met for the -installation of Brahmaputra using Fuel: - -+--------------------+------------------------------------------------------+ -| **HW Aspect** | **Requirement** | -| | | -+====================+======================================================+ -| **# of nodes** | Minimum 5 (3 for non redundant deployment): | -| | | -| | - 1 Fuel deployment master (may be virtualized) | -| | | -| | - 3(1) Controllers (1 colocated mongo/ceilometer | -| | role, 2 Ceph-OSD roles) | -| | | -| | - 1 Compute (1 co-located Ceph-OSD role) | -| | | -+--------------------+------------------------------------------------------+ -| **CPU** | Minimum 1 socket x86_AMD64 with Virtualization | -| | support | -+--------------------+------------------------------------------------------+ -| **RAM** | Minimum 16GB/server (Depending on VNF work load) | -| | | -+--------------------+------------------------------------------------------+ -| **Disk** | Minimum 256GB 10kRPM spinning disks | -| | | -+--------------------+------------------------------------------------------+ -| **Networks** | 4 Tagged VLANs (PUBLIC, MGMT, STORAGE, PRIVATE) | -| | | -| | 1 Un-Tagged VLAN for PXE Boot - ADMIN Network | -| | | -| | Note: These can be allocated to a single NIC - | -| | or spread out over multiple NICs as your hardware | -| | supports. | -+--------------------+------------------------------------------------------+ - -Help with Hardware Requirements -=============================== - -Calculate hardware requirements: - -For information on compatible hardware types available for use, please see *Reference: 11*. - -When choosing the hardware on which you will deploy your OpenStack -environment, you should think about: - -- CPU -- Consider the number of virtual machines that you plan to deploy in your cloud environment and the CPU per virtual machine. - -- Memory -- Depends on the amount of RAM assigned per virtual machine and the controller node. - -- Storage -- Depends on the local drive space per virtual machine, remote volumes that can be attached to a virtual machine, and object storage. - -- Networking -- Depends on the Choose Network Topology, the network bandwidth per virtual machine, and network storage. - - -Top of the rack (TOR) Configuration requirements -================================================ - -The switching infrastructure provides connectivity for the OPNFV -infrastructure operations, tenant networks (East/West) and provider -connectivity (North/South); it also provides needed connectivity for -the Storage Area Network (SAN). 
-To avoid traffic congestion, it is strongly suggested that three -physically separated networks are used, that is: 1 physical network -for administration and control, one physical network for tenant private -and public networks, and one physical network for SAN. -The switching connectivity can (but does not need to) be fully redundant, -in such case it comprises a redundant 10GE switch pair for each of the -three physically separated networks. - -The physical TOR switches are **not** automatically configured from -the Fuel OPNFV reference platform. All the networks involved in the OPNFV -infrastructure as well as the provider networks and the private tenant -VLANs needs to be manually configured. - -Manual configuration of the Brahmaputra hardware platform should -be carried out according to the OPNFV Pharos specification: -<https://wiki.opnfv.org/pharos/pharos_specification> - -OPNFV Software installation and deployment -========================================== - -This section describes the installation of the OPNFV installation -server (Fuel master) as well as the deployment of the full OPNFV -reference platform stack across a server cluster. - -Install Fuel master -------------------- -#. Mount the Brahmaputra Fuel ISO file/media as a boot device to the jump host server. - -#. Reboot the jump host to establish the Fuel server. - - - The system now boots from the ISO image. - - - Select "Fuel Install (Static IP)" (See figure below) - - - Press [Enter]. - - .. figure:: img/grub-1.png - -#. Wait until screen Fuel setup is shown (Note: This can take up to 30 minutes). - -#. In the "Fuel User" section - Confirm/change the default password (See figure below) - - - Enter "admin" in the Fuel password input - - - Enter "admin" in the Confirm password input - - - Select "Check" and press [Enter] - - .. figure:: img/fuelmenu1.png - -#. In the "Network Setup" section - Configure DHCP/Static IP information for your FUEL node - For example, ETH0 is 10.20.0.2/24 for FUEL booting and ETH1 is DHCP in your corporate/lab network (see figure below). - - - Configure eth1 or other network interfaces here as well (if you have them present on your FUEL server). - - .. figure:: img/fuelmenu2.png - -#. In the "PXE Setup" section (see figure below) - Change the following fields to appropriate values (example below): - - - DHCP Pool Start 10.20.0.3 - - - DHCP Pool End 10.20.0.254 - - - DHCP Pool Gateway 10.20.0.2 (IP address of Fuel node) - - .. figure:: img/fuelmenu3.png - -#. In the "DNS & Hostname" section (see figure below) - Change the following fields to appropriate values: - - - Hostname - - - Domain - - - Search Domain - - - External DNS - - - Hostname to test DNS - - - Select <Check> and press [Enter] - - .. figure:: img/fuelmenu4.png - - -#. OPTION TO ENABLE PROXY SUPPORT - In the "Bootstrap Image" section (see figure below), edit the following fields to define a proxy. (**NOTE:** cannot be used in tandem with local repository support) - - - Navigate to "HTTP proxy" and enter your http proxy address - - - Select <Check> and press [Enter] - - .. figure:: img/fuelmenu5.png - -#. In the "Time Sync" section (see figure below) - Change the following fields to appropriate values: - - - NTP Server 1 <Customer NTP server 1> - - - NTP Server 2 <Customer NTP server 2> - - - NTP Server 3 <Customer NTP server 3> - - .. figure:: img/fuelmenu6.png - -#. Start the installation. - - - Select Quit Setup and press Save and Quit. - - - Installation starts, wait until the login screen is shown. 
- - -Boot the Node Servers ---------------------- - -After the Fuel Master node has rebooted from the above steps and is at -the login prompt, you should boot the Node Servers (Your -Compute/Control/Storage blades (nested or real) with a PXE booting -scheme so that the FUEL Master can pick them up for control. - -#. Enable PXE booting - - - For every controller and compute server: enable PXE Booting as the first boot device in the BIOS boot order menu and hard disk as the second boot device in the same menu. - -#. Reboot all the control and compute blades. - -#. Wait for the availability of nodes showing up in the Fuel GUI. - - - Connect to the FUEL UI via the URL provided in the Console (default: https://10.20.0.2:8443) - - - Wait until all nodes are displayed in top right corner of the Fuel GUI: Total nodes and Unallocated nodes (see figure below). - - .. figure:: img/nodes.png - - -Install additional Plugins/Features on the FUEL node ----------------------------------------------------- - -#. SSH to your FUEL node (e.g. root@10.20.0.2 pwd: r00tme) - -#. Select wanted plugins/features from the /opt/opnfv/ directory. - -#. Install the wanted plugin with the command "fuel plugins --install /opt/opnfv/<plugin-name>-<version>.<arch>.rpm" - Expected output: "Plugin ....... was successfully installed." (see figure below) - - .. figure:: img/plugin_install.png - -Create an OpenStack Environment -------------------------------- - -#. Connect to Fuel WEB UI with a browser (default: https://10.20.0.2:8443) (login admin/admin) - -#. Create and name a new OpenStack environment, to be installed. - - .. figure:: img/newenv.png - -#. Select "<Liberty on Ubuntu 14.04>" and press <Next> - -#. Select "compute virtulization method". - - - Select "QEMU-KVM as hypervisor" and press <Next> - -#. Select "network mode". - - - Select "Neutron with ML2 plugin" - - - Select "Neutron with tunneling segmentation" (Required when using the ODL or ONOS plugins) - - - Press <Next> - -#. Select "Storage Back-ends". - - - Select "Ceph for block storage" and press <Next> - -#. Select "additional services" you wish to install. - - - Check option "Install Ceilometer (OpenStack Telemetry)" and press <Next> - -#. Create the new environment. - - - Click <Create> Button - -Configure the network environment ---------------------------------- - -#. Open the environment you previously created. - -#. Open the networks tab and select the "default Node Networks group to" on the left pane (see figure below). - - .. figure:: img/network.png - -#. Update the Public network configuration and change the following fields to appropriate values: - - - CIDR to <CIDR for Public IP Addresses> - - - IP Range Start to <Public IP Address start> - - - IP Range End to <Public IP Address end> - - - Gateway to <Gateway for Public IP Addresses> - - - Check <VLAN tagging>. - - - Set appropriate VLAN id. - -#. Update the Storage Network Configuration - - - Set CIDR to appropriate value (default 192.168.1.0/24) - - - Set IP Range Start to appropriate value (default 192.168.1.1) - - - Set IP Range End to appropriate value (default 192.168.1.254) - - - Set vlan to appropriate value (default 102) - -#. Update the Management network configuration. - - - Set CIDR to appropriate value (default 192.168.0.0/24) - - - Set IP Range Start to appropriate value (default 192.168.0.1) - - - Set IP Range End to appropriate value (default 192.168.0.254) - - - Check <VLAN tagging>. - - - Set appropriate VLAN id. (default 101) - -#. 
Update the Private Network Information - - - Set CIDR to appropriate value (default 192.168.2.0/24 - - - Set IP Range Start to appropriate value (default 192.168.2.1) - - - Set IP Range End to appropriate value (default 192.168.2.254) - - - Check <VLAN tagging>. - - - Set appropriate VLAN tag (default 103) - -#. Select the "Neutron L3 Node Networks group" on the left pane. - - .. figure:: img/neutronl3.png - -#. Update the Floating Network configuration. - - - Set the Floating IP range start (default 172.16.0.130) - - - Set the Floating IP range end (default 172.16.0.254) - - - Set the Floating network name (default admin_floating_net) - -#. Update the Internal Network configuration. - - - Set Internal network CIDR to an appropriate value (default 192.168.111.0/24) - - - Set Internal network gateway to an appropriate value - - - Set the Internal network name (default admin_internal_net) - -#. Update the Guest OS DNS servers. - - - Set Guest OS DNS Server values appropriately - -#. Save Settings. - -#. Select the "Other Node Networks group" on the left pane(see figure below). - - .. figure:: img/other.png - -#. Update the Public network assignment. - - - Check the box for "Assign public network to all nodes" (Required by OpenDaylight) - -#. Update Host OS DNS Servers. - - - Provide the DNS server settings - -#. Update Host OS NTP Servers. - - - Provide the NTP server settings - -Select Hypervisor type ----------------------- - -#. In the FUEL UI of your Environment, click the "Settings" Tab - -#. Select Compute on the left side pane (see figure below) - - - Check the KVM box and press "Save settings" - - .. figure:: img/compute.png - -Enable Plugins --------------- - -#. In the FUEL UI of your Environment, click the "Settings" Tab - -#. Select Other on the left side pane (see figure below) - - - Enable and configure the plugins of your choice - - .. figure:: img/plugins.png - -Allocate nodes to environment and assign functional roles ---------------------------------------------------------- - -#. Click on the "Nodes" Tab in the FUEL WEB UI (see figure below). - - .. figure:: img/addnodes.png - -#. Assign roles (see figure below). - - - Click on the <+Add Nodes> button - - - Check <Controller>, <Telemetry - MongoDB> and optionally an SDN Controller role (OpenDaylight controller/ONOS) in the Assign Roles Section. - - - Check one node which you want to act as a Controller from the bottom half of the screen - - - Click <Apply Changes>. - - - Click on the <+Add Nodes> button - - - Check the <Controller> and <Storage - Ceph OSD> roles. - - - Check the two next nodes you want to act as Controllers from the bottom half of the screen - - - Click <Apply Changes> - - - Click on <+Add Nodes> button - - - Check the <Compute> and <Storage - Ceph OSD> roles. - - - Check the Nodes you want to act as Computes from the bottom half of the screen - - - Click <Apply Changes>. - - .. figure:: img/computelist.png - -#. Configure interfaces (see figure below). - - - Check Select <All> to select all allocated nodes - - - Click <Configure Interfaces> - - - Assign interfaces (bonded) for mgmt-, admin-, private-, public- - and storage networks - - - Click <Apply> - - .. figure:: img/interfaceconf.png - - -OPTIONAL - Set Local Mirror Repos ---------------------------------- - -The following steps can be executed if you are in an environment with -no connection to the Internet. The Fuel server delivers a local repo -that can be used for installation / deployment of openstack. - -#. 
In the Fuel UI of your Environment, click the Settings Tab and select General from the left pane. - - - Replace the URI values for the "Name" values outlined below: - - - "ubuntu" URI="deb http://<ip-of-fuel-server>:8080/mirrors/ubuntu/ trusty main" - - - "ubuntu-security" URI="deb http://<ip-of-fuel-server>:8080/mirrors/ubuntu/ trusty-security main" - - - "ubuntu-updates" URI="deb http://<ip-of-fuel-server>:8080/mirrors/ubuntu/ trusty-updates main" - - - "mos" URI="deb http://<ip-of-fuel-server>::8080/liberty-8.0/ubuntu/x86_64 mos8.0 main restricted" - - - "Auxiliary" URI="deb http://<ip-of-fuel-server>:8080/liberty-8.0/ubuntu/auxiliary auxiliary main restricted" - - - Click <Save Settings> at the bottom to Save your changes - -Target specific configuration ------------------------------ - -#. Set up targets for provisioning with non-default "Offloading Modes" - - Some target nodes may require additional configuration after they are - PXE booted (bootstrapped); the most frequent changes are in defaults - for ethernet devices' "Offloading Modes" settings (e.g. some targets' - ethernet drivers may strip VLAN traffic by default). - - If your target ethernet drivers have wrong "Offloading Modes" defaults, - in "Configure interfaces" page (described above), expand affected - interface's "Offloading Modes" and [un]check the relevant settings - (see figure below): - - .. figure:: img/offloadingmodes.png - -#. Set up targets for "Verify Networks" with non-default "Offloading Modes" - - **NOTE**: Check *Reference 15* for an updated and comprehensive list of - known issues and/or limitations, including "Offloading Modes" not being - applied during "Verify Networks" step. - - Setting custom "Offloading Modes" in Fuel GUI will only apply those settings - during provisiong and **not** during "Verify Networks", so if your targets - need this change, you have to apply "Offloading Modes" settings by hand - to bootstrapped nodes. - - **E.g.**: Our driver has "rx-vlan-filter" default "on" (expected "off") on - the Openstack interface(s) "eth1", preventing VLAN traffic from passing - during "Verify Networks". - - - From Fuel master console identify target nodes admin IPs (see figure below): - - .. code-block:: bash - - $ fuel nodes - - .. figure:: img/fuelconsole1.png - - - SSH into each of the target nodes and disable "rx-vlan-filter" on the - affected physical interface(s) allocated for OpenStack traffic (eth1): - - .. code-block:: bash - - $ ssh root@10.20.0.6 ethtool -K eth1 rx-vlan-filter off - - - Repeat the step above for all affected nodes/interfaces in the POD. - -Verify Networks ---------------- - -It is important that the Verify Networks action is performed as it will verify -that communicate works for the networks you have setup, as well as check that -packages needed for a successful deployment can be fetched. - -#. From the FUEL UI in your Environment, Select the Networks Tab and select "Connectivity check" on the left pane (see figure below) - - - Select <Verify Networks> - - - Continue to fix your topology (physical switch, etc) until the "Verification Succeeded" and "Your network is configured correctly" message is shown - - .. figure:: img/verifynet.png - - -Deploy Your Environment ------------------------ - -38. Deploy the environment. - - - In the Fuel GUI, click on the "Dashboard" Tab. - - - Click on <Deploy Changes> in the "Ready to Deploy?" 
section - - - Examine any information notice that pops up and click <Deploy> - - Wait for your deployment to complete, you can view the "Dashboard" - Tab to see the progress and status of your deployment. - -Installation health-check -========================= - -#. Perform system health-check (see figure below) - - - Click the "Health Check" tab inside your Environment in the FUEL Web UI - - - Check <Select All> and Click <Run Tests> - - - Allow tests to run and investigate results where appropriate - - .. figure:: img/health.png - -References -========== - -OPNFV ------ - -1) `OPNFV Home Page <http://www.opnfv.org>`_ - -2) `OPNFV documentation- and software downloads <https://www.opnfv.org/software/download>`_ - -OpenStack ---------- - -3) `OpenStack Liberty Release artifacts <http://www.openstack.org/software/liberty>`_ - -4) `OpenStack documentation <http://docs.openstack.org>`_ - -OpenDaylight ------------- - -5) `OpenDaylight artifacts <http://www.opendaylight.org/software/downloads>`_ - -Fuel ----- -6) `The Fuel OpenStack project <https://wiki.openstack.org/wiki/Fuel>`_ - -7) `Fuel documentation overview <https://docs.fuel-infra.org/openstack/fuel/fuel-8.0/>`_ - -8) `Fuel planning guide <https://docs.fuel-infra.org/openstack/fuel/fuel-8.0/mos-planning-guide.html>`_ - -9) `Fuel quick start guide <https://docs.mirantis.com/openstack/fuel/fuel-8.0/quickstart-guide.html>`_ - -10) `Fuel operations guide <https://docs.mirantis.com/openstack/fuel/fuel-8.0/operations.html>`_ - -11) `Fuel Plugin Developers Guide <https://wiki.openstack.org/wiki/Fuel/Plugins>`_ - -12) `Fuel OpenStack Hardware Compatibility List <https://www.mirantis.com/products/openstack-drivers-and-plugins/hardware-compatibility-list>`_ - -Fuel in OPNFV -------------- - -13) `OPNFV Installation instruction for the Brahmaputra release of OPNFV when using Fuel as a deployment tool <http://artifacts.opnfv.org/fuel/brahmaputra/docs/installation-instruction.html>`_ - -14) `OPNFV Build instruction for the Brahmaputra release of OPNFV when using Fuel as a deployment tool <http://artifacts.opnfv.org/fuel/brahmaputra/docs/build-instruction.html>`_ - -15) `OPNFV Release Note for the Brahmaputra release of OPNFV when using Fuel as a deployment tool <http://artifacts.opnfv.org/fuel/brahmaputra/docs/release-notes.html>`_ diff --git a/docs/installationprocedure/template-os-nosdn-nofeature-ha/installation.rst b/docs/installationprocedure/template-os-nosdn-nofeature-ha/installation.rst index 3387eacc3..bcfb6d5de 100644 --- a/docs/installationprocedure/template-os-nosdn-nofeature-ha/installation.rst +++ b/docs/installationprocedure/template-os-nosdn-nofeature-ha/installation.rst @@ -2,11 +2,15 @@ .. License. .. http://creativecommons.org/licenses/by/4.0 .. .. (c) Christopher Price (Ericsson AB) and others -<template> software installation and deployment +<scenario> software installation and deployment =============================================== +.. Let's figure out how to structure this to highlight both virtual and +.. bare metal deployments. I need some help from the scenrio owners to get +.. that right. + This section describes the installation of the OPNFV installation -server (jumphost) as well as the deployment of the <template> OPNFV +server (jumphost) as well as the deployment of the <scenario> OPNFV reference platform stack across a server cluster. 
 Install jumphost
 ================
@@ -15,7 +19,7 @@
 If you have not already done so, prepare your jumphost according to the instructions in
 _#ref_Preparation; this can be done using an ISO image with the following commands

-#. Mount the <template> ISO file/media as a boot device to the jump host server.
+#. Mount the <scenario> ISO file/media as a boot device to the jump host server.

 #. Reboot the jump host to establish the jumphost server.
diff --git a/docs/installationprocedure/template-os-nosdn-nofeature-ha/preparation.rst b/docs/installationprocedure/template-os-nosdn-nofeature-ha/preparation.rst
index c546a35f3..450c8bf5e 100644
--- a/docs/installationprocedure/template-os-nosdn-nofeature-ha/preparation.rst
+++ b/docs/installationprocedure/template-os-nosdn-nofeature-ha/preparation.rst
@@ -7,14 +7,14 @@ Preparation
 .. Not all of these options are relevant for all scenarios. I advise following the
 .. instructions applicable to the deploy tool used in the scenario.

-Before starting the installation of the <template> scenario some preparation must
-be done. You may choose to install the <template> scenario using an ISO image, or
+Before starting the installation of the <scenario> scenario, some preparation must
+be done. You may choose to install the <scenario> scenario using an ISO image, or
 executing the installation from a prepared jumphost.

 Preparing your jumphost to install by script
 --------------------------------------------

-To deploy the <template> scenario from a script you will need to prepare the jumphost
+To deploy the <scenario> scenario from a script you will need to prepare the jumphost
 with a compatible operating system. Prepare your jumphost running CentOS 7 with libvirt
 running on it. You may then install the RDO Release RPM:

@@ -27,13 +27,13 @@ the OpenVSwitch RPM from the RDO Project repositories and install it with the op
 Preparing your jumphost using an ISO image
 ------------------------------------------

-An alternative to preparing your own jumphost id to use a <template> ISO image as a boot image.
+An alternative to preparing your own jumphost is to use a <scenario> ISO image as a boot image.
 Download or build the ISO image according to the following instructions.

 Retrieving the ISO image
 ^^^^^^^^^^^^^^^^^^^^^^^^

-If you choose to install the <template> scenario from an ISO image you must first
+If you choose to install the <scenario> scenario from an ISO image you must first
 retrieve the <template-containing>.iso image of the Colorado release. This can be
 found at <hyperlink required>.

@@ -41,12 +41,12 @@
 Building the ISO image
 ^^^^^^^^^^^^^^^^^^^^^^

 Alternatively, you may choose to build the Fuel .iso from source by cloning the
-opnfv/fuel git repository. To retrieve the repository for the Brahmaputra release use the following command:
+opnfv/fuel git repository. To retrieve the repository for the Colorado release use the following command:

 $ git clone https://gerrit.opnfv.org/gerrit/fuel

-Check-out the Brahmaputra release tag to set the HEAD to the
-baseline required to replicate the Brahmaputra release:
+Check-out the Colorado release tag to set the HEAD to the
+baseline required to replicate the Colorado release:

 $ git checkout colorado.1.0

@@ -59,7 +59,7 @@ For more information on how to build, please see *Reference: 14*
 Booting from the ISO image
 ^^^^^^^^^^^^^^^^^^^^^^^^^^

-Mount the <template> ISO file/media as a boot device on the jump host server. If all your hardware
+Mount the <scenario> ISO file/media as a boot device on the jump host server. If all your hardware
 preparation is complete at this time you should reboot the jumphost to establish the deployment
 server.
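
Taken together, the jumphost preparation options touched by preparation.rst above can be collected into a single shell session. The sketch below is a minimal illustration, assuming a CentOS 7 jumphost: the package names and the RDO release RPM URL are assumptions rather than values stated in the change, while the git commands are taken directly from the instructions.

.. code-block:: bash

    # Script-based preparation (assumed CentOS 7 jumphost with libvirt).
    # The rdo-release RPM URL is an assumption; use the RPM published by
    # the RDO project for your target release.
    $ sudo yum install -y libvirt qemu-kvm git
    $ sudo systemctl enable libvirtd && sudo systemctl start libvirtd
    $ sudo yum install -y https://www.rdoproject.org/repos/rdo-release.rpm
    $ sudo yum install -y openvswitch    # OpenVSwitch from the RDO repositories

    # ISO-based preparation: clone the opnfv/fuel repository and pin it to the
    # Colorado baseline before building the ISO (see *Reference: 14* for the
    # build procedure itself).
    $ git clone https://gerrit.opnfv.org/gerrit/fuel
    $ cd fuel
    $ git checkout colorado.1.0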