Diffstat (limited to 'docs/release/configguide/featureconfig.rst')
-rw-r--r--  docs/release/configguide/featureconfig.rst | 584
1 file changed, 11 insertions(+), 573 deletions(-)
diff --git a/docs/release/configguide/featureconfig.rst b/docs/release/configguide/featureconfig.rst
index b364580..3e3d99b 100644
--- a/docs/release/configguide/featureconfig.rst
+++ b/docs/release/configguide/featureconfig.rst
@@ -6,18 +6,17 @@
IPv6 Configuration - Setting Up a Service VM as an IPv6 vRouter
===============================================================
-This section provides instructions to set up a service VM as an IPv6 vRouter using OPNFV Danube Release
-installers. The environment may be pure OpenStack option or Open Daylight L2-only option.
-The deployment model may be HA or non-HA. The infrastructure may be bare metal or virtual environment.
+This section provides instructions to set up a service VM as an IPv6 vRouter using OPNFV Euphrates Release
+installers. Because Open Daylight no longer supports the L2-only option, and IPv6 support in the
+Open Daylight L3 option is limited, a service VM can be set up as an IPv6 vRouter only in a
+pure/native OpenStack environment. The deployment model may be HA or non-HA. The infrastructure may be
+bare metal or a virtual environment.
****************************
Pre-configuration Activities
****************************
-The configuration will work in 2 environments:
-
-1. OpenStack-only environment
-2. OpenStack with Open Daylight L2-only environment
+The configuration will work only in an OpenStack-only environment.
Depending on which installer is used to deploy OPNFV, the environment may be deployed
on bare metal or virtualized infrastructure. Each deployment may be HA or non-HA.
@@ -29,7 +28,7 @@ Setup Manual in OpenStack-Only Environment
******************************************
If you intend to set up a service VM as an IPv6 vRouter in an OpenStack-only environment of
-OPNFV Danube Release, please **NOTE** that:
+OPNFV Euphrates Release, please **NOTE** that:
* Because the anti-spoofing rules of the Security Group feature in OpenStack prevent
  a VM from forwarding packets, we need to disable the Security Group feature in the
@@ -43,7 +42,7 @@ OPNFV Danube Release, please **NOTE** that:
Install OPNFV and Preparation
-----------------------------
-**OPNFV-NATIVE-INSTALL-1**: To install an OpenStack-only environment of OPNFV Danube Release:
+**OPNFV-NATIVE-INSTALL-1**: To install an OpenStack-only environment of OPNFV Euphrates Release:
**Apex Installer**:
@@ -116,7 +115,7 @@ Install OPNFV and Preparation
# 4.1 Example of <lab-name>: -l devel-pipeline
# 4.2 Example of <pod-name>: -p elx
# 5. <iso-uri> could be local or remote ISO image of Fuel Installer
- # 5.1 Example: http://artifacts.opnfv.org/fuel/danube/opnfv-danube.1.0.iso
+ # 5.1 Example: http://artifacts.opnfv.org/fuel/euphrates/opnfv-euphrates.1.0.iso
#
# Please refer to Fuel Installer's documentation for further information and any update
@@ -450,577 +449,16 @@ would be as shown as follows:
ssh -i ~/vRouterKey cirros@<floating-ip-of-VM1>
ssh -i ~/vRouterKey cirros@<floating-ip-of-VM2>
-****************************************************************
-Setup Manual in OpenStack with Open Daylight L2-Only Environment
-****************************************************************
-
-If you intend to set up a service VM as an IPv6 vRouter in an OpenStack with Open Daylight
-L2-only environment of OPNFV Danube Release, please **NOTE** that:
-
-* We **SHOULD** use the ``odl-ovsdb-openstack`` version of Open Daylight Boron
- in OPNFV Danube Release. Please refer to our
- `Gap Analysis <http://artifacts.opnfv.org/ipv6/docs/release_userguide/index.html#ipv6-gap-analysis-with-open-daylight-boron>`_
- for more information.
-* The hostnames, IP addresses, and usernames in these instructions are examples.
- Please change them as needed to fit your environment.
-* The instructions apply to both the single controller node deployment model and the
- HA (High Availability) deployment model where multiple controller nodes are used.
-* However, in case of HA, when ``ipv6-router`` is created in step **SETUP-SVM-11**,
- it could be created on any of the controller nodes. Thus you need to identify on which
- controller node ``ipv6-router`` is created in order to manually spawn the ``radvd`` daemon
- inside the ``ipv6-router`` namespace in steps **SETUP-SVM-24** through **SETUP-SVM-30**
- (see the command below).
-
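-For reference, the following Neutron command (used again ahead of **SETUP-SVM-24**) displays
-the controller node hosting ``ipv6-router``:
-
-.. code-block:: bash
-
- neutron l3-agent-list-hosting-router ipv6-router
-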
------------------------------
-Install OPNFV and Preparation
------------------------------
-
-**OPNFV-INSTALL-1**: To install an OpenStack with Open Daylight L2-only environment
-of OPNFV Danube Release:
-
-**Apex Installer**:
-
-Please **NOTE** that:
-
-* In OPNFV Danube Release, Apex Installer no longer supports Open Daylight
- L2-only environment or ``odl-ovsdb-openstack``. Instead, it supports Open
- Daylight L3 deployment with ``odl-netvirt-openstack``.
-* IPv6 features are not fully supported in Open Daylight L3 with
- ``odl-netvirt-openstack`` yet. It is still a work in progress.
-* Thus we cannot realize a service VM as an IPv6 vRouter using the Apex Installer
- under an OpenStack + Open Daylight L3 with ``odl-netvirt-openstack`` environment.
-
-**Compass** Installer:
-
-.. code-block:: bash
-
- # HA deployment in OpenStack with Open Daylight L2-only environment
- export ISO_URL=file://$BUILD_DIRECTORY/compass.iso
- export OS_VERSION=${{COMPASS_OS_VERSION}}
- export OPENSTACK_VERSION=${{COMPASS_OPENSTACK_VERSION}}
- export CONFDIR=$WORKSPACE/deploy/conf/vm_environment
- ./deploy.sh --dha $CONFDIR/os-odl_l2-nofeature-ha.yml \
- --network $CONFDIR/$NODE_NAME/network.yml
-
- # Non-HA deployment in OpenStack with Open Daylight L2-only environment
- # Non-HA deployment is currently not supported by Compass installer
-
-**Fuel** Installer:
-
-.. code-block:: bash
-
- # HA deployment in OpenStack with Open Daylight L2-only environment
- # Scenario Name: os-odl_l2-nofeature-ha
- # Scenario Configuration File: ha_odl-l2_heat_ceilometer_scenario.yaml
- # You can use either Scenario Name or Scenario Configuration File Name in "-s" parameter
- sudo ./deploy.sh -b <stack-config-uri> -l <lab-name> -p <pod-name> \
- -s os-odl_l2-nofeature-ha -i <iso-uri>
-
- # Non-HA deployment in OpenStack with Open Daylight L2-only environment
- # Scenario Name: os-odl_l2-nofeature-noha
- # Scenario Configuration File: no-ha_odl-l2_heat_ceilometer_scenario.yaml
- # You can use either Scenario Name or Scenario Configuration File Name in "-s" parameter
- sudo ./deploy.sh -b <stack-config-uri> -l <lab-name> -p <pod-name> \
- -s os-odl_l2-nofeature-noha -i <iso-uri>
-
- # Note:
- #
- # 1. Refer to http://git.opnfv.org/cgit/fuel/tree/deploy/scenario/scenario.yaml for scenarios
- # 2. Refer to http://git.opnfv.org/cgit/fuel/tree/ci/README for description of
- # stack configuration directory structure
- # 3. <stack-config-uri> is the base URI of stack configuration directory structure
- # 3.1 Example: http://git.opnfv.org/cgit/fuel/tree/deploy/config
- # 4. <lab-name> and <pod-name> must match the directory structure in stack configuration
- # 4.1 Example of <lab-name>: -l devel-pipeline
- # 4.2 Example of <pod-name>: -p elx
- # 5. <iso-uri> could be local or remote ISO image of Fuel Installer
- # 5.1 Example: http://artifacts.opnfv.org/fuel/danube/opnfv-danube.1.0.iso
- #
- # Please refer to Fuel Installer's documentation for further information and any update
-
-**Joid** Installer:
-
-.. code-block:: bash
-
- # HA deployment in OpenStack with Open Daylight L2-only environment
- ./deploy.sh -o mitaka -s odl -t ha -l default -f ipv6
-
- # Non-HA deployment in OpenStack with Open Daylight L2-only environment
- ./deploy.sh -o mitaka -s odl -t nonha -l default -f ipv6
-
-Please **NOTE** that:
-
-* You need to refer to the **installer's documentation** for other necessary
- parameters applicable to your deployment.
-* You need to refer to the **Release Notes** and the **installer's documentation** if there is
- any issue in installation.
-
-**OPNFV-INSTALL-2**: Clone the following GitHub repository to get the
-configuration and metadata files
-
-.. code-block:: bash
-
- git clone https://github.com/sridhargaddam/opnfv_os_ipv6_poc.git \
- /opt/stack/opnfv_os_ipv6_poc
-
-----------------------------------------------
-Disable Security Groups in OpenStack ML2 Setup
-----------------------------------------------
-
-Please **NOTE** that although the Security Groups feature has been disabled automatically
-through the ``local.conf`` configuration file by some installers such as ``devstack``, it is very likely
-that other installers such as ``Apex``, ``Compass``, ``Fuel`` or ``Joid`` will enable the Security
-Groups feature after installation.
-
-**Please make sure that Security Groups are disabled in the setup**
-
-In order to disable Security Groups globally, please make sure that the settings in
-**OPNFV-SEC-1** and **OPNFV-SEC-2** are applied, if they are not there by default.
-
-**OPNFV-SEC-1**: Change the settings in
-``/etc/neutron/plugins/ml2/ml2_conf.ini`` as follows, if they
-are not there by default.
-
-.. code-block:: bash
-
- # /etc/neutron/plugins/ml2/ml2_conf.ini
- [securitygroup]
- enable_security_group = True
- firewall_driver = neutron.agent.firewall.NoopFirewallDriver
- [ml2]
- extension_drivers = port_security
- [agent]
- prevent_arp_spoofing = False
-
-**OPNFV-SEC-2**: Change the settings in ``/etc/nova/nova.conf`` as follows,
-if they are not there by default.
-
-.. code-block:: bash
-
- # /etc/nova/nova.conf
- [DEFAULT]
- security_group_api = neutron
- firewall_driver = nova.virt.firewall.NoopFirewallDriver
-
-**OPNFV-SEC-3**: After updating the settings, you will have to restart the
-``Neutron`` and ``Nova`` services.
-
-**Please note that the commands to restart** ``Neutron`` **and** ``Nova`` **vary
-depending on the installer. Please refer to the relevant documentation of the specific installer.**
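-
-As a rough illustration only: on a systemd-based controller node the restart might look as
-follows, where the exact service names are assumptions that differ across distributions and installers.
-
-.. code-block:: bash
-
- # Service names are examples only and vary by distribution/installer
- sudo systemctl restart neutron-server
- sudo systemctl restart openstack-nova-api openstack-nova-compute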
-
----------------------------------------------------
-Source the Credentials in OpenStack Controller Node
----------------------------------------------------
-
-**SETUP-SVM-1**: Log in to the OpenStack Controller Node. Start a new terminal,
-and change directory to where OpenStack is installed.
-
-**SETUP-SVM-2**: We have to source the tenant credentials in this step. Please **NOTE**
-that the method of sourcing tenant credentials may vary depending on the installer. For example:
-
-**Apex** installer:
-
-.. code-block:: bash
-
- # On jump host, source the tenant credentials using /bin/opnfv-util provided by Apex installer
- opnfv-util undercloud "source overcloudrc; keystone service-list"
-
- # Alternatively, you can copy the file /home/stack/overcloudrc from the installer VM called "undercloud"
- # to a location in controller node, for example, in the directory /opt, and do:
- # source /opt/overcloudrc
-
-**Compass** installer:
-
-.. code-block:: bash
-
- # source the tenant credentials using Compass installer of OPNFV
- source /opt/admin-openrc.sh
-
-**Fuel** installer:
-
-.. code-block:: bash
-
- # source the tenant credentials using Fuel installer of OPNFV
- source /root/openrc
-
-**Joid** installer:
-
-.. code-block:: bash
-
- # source the tenant credentials using Joid installer of OPNFV
- source $HOME/joid_config/admin-openrc
-
-**devstack**:
-
-.. code-block:: bash
-
- # source the tenant credentials in devstack
- source openrc admin demo
-
-**Please refer to relevant documentation of installers if you encounter any issue**.
-
-------------------------------------------------------------------------------------
-Informational Note: Move Public Network from Physical Network Interface to ``br-ex``
-------------------------------------------------------------------------------------
-
-**SETUP-SVM-3**: Move the physical interface (i.e. the public network interface) to ``br-ex``
-
-**SETUP-SVM-4**: Verify setup of ``br-ex``
-
-**Those 2 steps are informational. The OPNFV installer has taken care of them during deployment.
-You may refer to them only if there is any issue, or if you are using other installers**.
-
-We have to move the physical interface (i.e. the public network interface) to ``br-ex``, including moving
-the public IP address and setting up the default route. Please refer to ``SETUP-SVM-3`` and
-``SETUP-SVM-4`` in our `more complete instruction <http://artifacts.opnfv.org/ipv6/docs/setupservicevm/4-ipv6-configguide-servicevm.html#add-external-connectivity-to-br-ex>`_.
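-
-A minimal sketch of these two steps follows, assuming the public interface is ``eth2`` with
-address ``198.59.156.113/24`` and gateway ``198.59.156.1`` (example values only); the complete
-instruction linked above remains authoritative.
-
-.. code-block:: bash
-
- # SETUP-SVM-3 (sketch): move the public IP address and the default route to br-ex
- sudo ip addr del 198.59.156.113/24 dev eth2
- sudo ovs-vsctl add-port br-ex eth2
- sudo ip addr add 198.59.156.113/24 dev br-ex
- sudo ip route add default via 198.59.156.1 dev br-ex
-
- # SETUP-SVM-4 (sketch): verify that br-ex now carries the public IP address
- ip addr show br-ex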
-
---------------------------------------------------------
-Create IPv4 Subnet and Router with External Connectivity
---------------------------------------------------------
-
-**SETUP-SVM-5**: Create a Neutron router ``ipv4-router`` which needs to provide external connectivity.
-
-.. code-block:: bash
-
- neutron router-create ipv4-router
-
-**SETUP-SVM-6**: Create an external network/subnet ``ext-net`` using the appropriate values based on the
-data-center physical network setup.
-
-Please **NOTE** that you may only need to create the subnet of ``ext-net`` because OPNFV installers
-should have created an external network during installation. You must use the same name as the
-external network that the installer creates when you create the subnet. For example:
-
-* **Apex** installer: ``external``
-* **Compass** installer: ``ext-net``
-* **Fuel** installer: ``admin_floating_net``
-* **Joid** installer: ``ext-net``
-
-**Please refer to the documentation of installers if there is any issue**
-
-.. code-block:: bash
-
- # This is needed only if the installer does not create an external network
- # Otherwise, skip this command "net-create"
- neutron net-create --router:external ext-net
-
- # Note that the name "ext-net" may work for some installers such as Compass and Joid
- # Change the name "ext-net" to match the name of external network that an installer creates
- neutron subnet-create --disable-dhcp --allocation-pool start=198.59.156.251,\
- end=198.59.156.254 --gateway 198.59.156.1 ext-net 198.59.156.0/24
-
-Please note that the IP addresses in the command above are examples. **Please replace them with
-the IP addresses of your actual network**.
-
-**SETUP-SVM-7**: Associate the ``ext-net`` to the Neutron router ``ipv4-router``.
-
-.. code-block:: bash
-
- # Note that the name "ext-net" may work for some installers such as Compass and Joid
- # Change the name "ext-net" to match the name of external network that an installer creates
- neutron router-gateway-set ipv4-router ext-net
-
-**SETUP-SVM-8**: Create an internal/tenant IPv4 network ``ipv4-int-network1``
-
-.. code-block:: bash
-
- neutron net-create ipv4-int-network1
-
-**SETUP-SVM-9**: Create an IPv4 subnet ``ipv4-int-subnet1`` in the internal network ``ipv4-int-network1``
-
-.. code-block:: bash
-
- neutron subnet-create --name ipv4-int-subnet1 --dns-nameserver 8.8.8.8 \
- ipv4-int-network1 20.0.0.0/24
-
-**SETUP-SVM-10**: Associate the IPv4 internal subnet ``ipv4-int-subnet1`` to the Neutron router ``ipv4-router``.
-
-.. code-block:: bash
-
- neutron router-interface-add ipv4-router ipv4-int-subnet1
-
---------------------------------------------------------
-Create IPv6 Subnet and Router with External Connectivity
---------------------------------------------------------
-
-Now, let us create a second neutron router where we can "manually" spawn a ``radvd`` daemon to simulate an external
-IPv6 router.
-
-**SETUP-SVM-11**: Create a second Neutron router ``ipv6-router`` which needs to provide external connectivity
-
-.. code-block:: bash
-
- neutron router-create ipv6-router
-
-**SETUP-SVM-12**: Associate the ``ext-net`` to the Neutron router ``ipv6-router``
-
-.. code-block:: bash
-
- # Note that the name "ext-net" may work for some installers such as Compass and Joid
- # Change the name "ext-net" to match the name of external network that an installer creates
- neutron router-gateway-set ipv6-router ext-net
-
-**SETUP-SVM-13**: Create a second internal/tenant IPv4 network ``ipv4-int-network2``
-
-.. code-block:: bash
-
- neutron net-create ipv4-int-network2
-
-**SETUP-SVM-14**: Create an IPv4 subnet ``ipv4-int-subnet2`` for the ``ipv6-router`` internal network
-``ipv4-int-network2``
-
-.. code-block:: bash
-
- neutron subnet-create --name ipv4-int-subnet2 --dns-nameserver 8.8.8.8 \
- ipv4-int-network2 10.0.0.0/24
-
-**SETUP-SVM-15**: Associate the IPv4 internal subnet ``ipv4-int-subnet2`` to the Neutron router ``ipv6-router``.
-
-.. code-block:: bash
-
- neutron router-interface-add ipv6-router ipv4-int-subnet2
-
---------------------------------------------------
-Prepare Image, Metadata and Keypair for Service VM
---------------------------------------------------
-
-**SETUP-SVM-16**: Download the ``fedora22`` image, which will be used as the ``vRouter``
-
-.. code-block:: bash
-
- wget https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/\
- Images/Fedora-Cloud-Base-22-20150521.x86_64.qcow2
-
- glance image-create --name 'Fedora22' --disk-format qcow2 --container-format bare \
- --file ./Fedora-Cloud-Base-22-20150521.x86_64.qcow2
-
-**SETUP-SVM-17**: Create a keypair
-
-.. code-block:: bash
-
- nova keypair-add vRouterKey > ~/vRouterKey
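-
-Note that ``ssh`` refuses private key files with permissive modes, so restrict the key file's
-permissions (a follow-up step the original guide leaves implicit):
-
-.. code-block:: bash
-
- # Make the private key readable only by its owner, as required by "ssh -i" later on
- chmod 600 ~/vRouterKey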
-
-**SETUP-SVM-18**: Create ports for ``vRouter`` and both the VMs with some specific MAC addresses.
-
-.. code-block:: bash
-
- neutron port-create --name eth0-vRouter --mac-address fa:16:3e:11:11:11 ipv4-int-network2
- neutron port-create --name eth1-vRouter --mac-address fa:16:3e:22:22:22 ipv4-int-network1
- neutron port-create --name eth0-VM1 --mac-address fa:16:3e:33:33:33 ipv4-int-network1
- neutron port-create --name eth0-VM2 --mac-address fa:16:3e:44:44:44 ipv4-int-network1
-
-----------------------------------------------------------------------------------------------------------
-Boot Service VM (``vRouter``) with ``eth0`` on ``ipv4-int-network2`` and ``eth1`` on ``ipv4-int-network1``
-----------------------------------------------------------------------------------------------------------
-
-Let us boot the service VM (``vRouter``) with ``eth0`` interface on ``ipv4-int-network2`` connecting to ``ipv6-router``,
-and ``eth1`` interface on ``ipv4-int-network1`` connecting to ``ipv4-router``.
-
-**SETUP-SVM-19**: Boot the ``vRouter`` using the ``Fedora22`` image on the OpenStack Compute Node with hostname
-``opnfv-os-compute``
-
-.. code-block:: bash
-
- nova boot --image Fedora22 --flavor m1.small \
- --user-data /opt/stack/opnfv_os_ipv6_poc/metadata.txt \
- --availability-zone nova:opnfv-os-compute \
- --nic port-id=$(neutron port-list | grep -w eth0-vRouter | awk '{print $2}') \
- --nic port-id=$(neutron port-list | grep -w eth1-vRouter | awk '{print $2}') \
- --key-name vRouterKey vRouter
-
-Please **note** that ``/opt/stack/opnfv_os_ipv6_poc/metadata.txt`` is used to enable the ``vRouter`` to
-automatically spawn a ``radvd`` daemon (an illustrative configuration follows this list), which will:
-
-* Act as an IPv6 vRouter which advertises the RA (Router Advertisements) with prefix
- ``2001:db8:0:2::/64`` on its internal interface (``eth1``).
-* Forward IPv6 traffic from the internal interface (``eth1``).
-
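-For illustration, the ``radvd`` configuration spawned inside the ``vRouter`` might look like the
-sketch below; the actual content is driven by ``metadata.txt`` in the cloned repository, so treat
-this rendering as an assumption:
-
-.. code-block:: bash
-
- # /etc/radvd.conf (illustrative sketch; real content comes from metadata.txt)
- interface eth1
- {
-     AdvSendAdvert on;           # emit Router Advertisements on eth1
-     prefix 2001:db8:0:2::/64    # prefix advertised to VMs on the internal network
-     {
-         AdvOnLink on;
-         AdvAutonomous on;       # allow SLAAC address configuration
-     };
- };
-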
-**SETUP-SVM-20**: Verify that the ``Fedora22`` image boots up successfully and the vRouter has ``ssh`` keys properly injected
-
-.. code-block:: bash
-
- nova list
- nova console-log vRouter
-
-Please note that **it may take a few minutes** for the necessary packages to get installed and ``ssh`` keys
-to be injected.
-
-.. code-block:: bash
-
- # Sample Output
- [ 762.884523] cloud-init[871]: ec2: #############################################################
- [ 762.909634] cloud-init[871]: ec2: -----BEGIN SSH HOST KEY FINGERPRINTS-----
- [ 762.931626] cloud-init[871]: ec2: 2048 e3:dc:3d:4a:bc:b6:b0:77:75:a1:70:a3:d0:2a:47:a9 (RSA)
- [ 762.957380] cloud-init[871]: ec2: -----END SSH HOST KEY FINGERPRINTS-----
- [ 762.979554] cloud-init[871]: ec2: #############################################################
-
--------------------------------------------
-Boot Two Other VMs in ``ipv4-int-network1``
--------------------------------------------
-
-In order to verify that the setup is working, let us create two cirros VMs, each with an ``eth0``
-interface on ``ipv4-int-network1``, i.e., connecting to the ``vRouter`` ``eth1`` interface on the internal network.
-
-We will have to configure an appropriate ``mtu`` on the VMs' interfaces, taking into account the tunneling
-overhead and any physical switch requirements. If needed, push the ``mtu`` to the VM either using ``dhcp``
-options or via ``meta-data``.
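-
-For example, the ``set_mtu.sh`` user-data script passed to the VMs below might simply lower the
-interface MTU as sketched here; the value ``1400`` is an assumption, not necessarily the
-repository's actual content:
-
-.. code-block:: bash
-
- #!/bin/sh
- # Illustrative sketch: reduce the MTU to absorb tunneling overhead
- ip link set dev eth0 mtu 1400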
-
-**SETUP-SVM-21**: Create VM1 on OpenStack Controller Node with hostname ``opnfv-os-controller``
-
-.. code-block:: bash
-
- nova boot --image cirros-0.3.4-x86_64-uec --flavor m1.tiny \
- --user-data /opt/stack/opnfv_os_ipv6_poc/set_mtu.sh \
- --availability-zone nova:opnfv-os-controller \
- --nic port-id=$(neutron port-list | grep -w eth0-VM1 | awk '{print $2}') \
- --key-name vRouterKey VM1
-
-**SETUP-SVM-22**: Create VM2 on OpenStack Compute Node with hostname ``opnfv-os-compute``
-
-.. code-block:: bash
-
- nova boot --image cirros-0.3.4-x86_64-uec --flavor m1.tiny \
- --user-data /opt/stack/opnfv_os_ipv6_poc/set_mtu.sh \
- --availability-zone nova:opnfv-os-compute \
- --nic port-id=$(neutron port-list | grep -w eth0-VM2 | awk '{print $2}') \
- --key-name vRouterKey VM2
-
-**SETUP-SVM-23**: Confirm that both the VMs are successfully booted.
-
-.. code-block:: bash
-
- nova list
- nova console-log VM1
- nova console-log VM2
-
-----------------------------------
-Spawn ``RADVD`` in ``ipv6-router``
-----------------------------------
-
-Let us manually spawn a ``radvd`` daemon inside ``ipv6-router`` namespace to simulate an external router.
-First of all, we will have to identify the ``ipv6-router`` namespace and move to the namespace.
-
-Please **NOTE** that in case of the HA (High Availability) deployment model where multiple controller
-nodes are used, the ``ipv6-router`` created in step **SETUP-SVM-11** could be on any of the controller
-nodes. Thus you need to identify on which controller node ``ipv6-router`` is created in order to manually
-spawn the ``radvd`` daemon inside the ``ipv6-router`` namespace in steps **SETUP-SVM-24** through
-**SETUP-SVM-30**. The following Neutron command displays the controller node on which the
-``ipv6-router`` is spawned.
-
-.. code-block:: bash
-
- neutron l3-agent-list-hosting-router ipv6-router
-
-Then log in to that controller node and execute steps **SETUP-SVM-24**
-through **SETUP-SVM-30**.
-
-**SETUP-SVM-24**: Identify the ``ipv6-router`` namespace and move to the namespace
-
-.. code-block:: bash
-
- sudo ip netns exec qrouter-$(neutron router-list | grep -w ipv6-router | \
- awk '{print $2}') bash
-
-**SETUP-SVM-25**: Upon successful execution of the above command, you will be in the router namespace.
-Now let us configure the IPv6 address on the ``qr-xxx`` interface.
-
-.. code-block:: bash
-
- export router_interface=$(ip a s | grep -w "global qr-*" | awk '{print $7}')
- ip -6 addr add 2001:db8:0:1::1 dev $router_interface
-
-**SETUP-SVM-26**: Update the sample file ``/opt/stack/opnfv_os_ipv6_poc/scenario2/radvd.conf``
-with ``$router_interface``.
-
-.. code-block:: bash
-
- cp /opt/stack/opnfv_os_ipv6_poc/scenario2/radvd.conf /tmp/radvd.$router_interface.conf
- sed -i 's/$router_interface/'$router_interface'/g' /tmp/radvd.$router_interface.conf
-
-**SETUP-SVM-27**: Spawn a ``radvd`` daemon to simulate an external router. This ``radvd`` daemon advertises an IPv6
-subnet prefix of ``2001:db8:0:1::/64`` using RA (Router Advertisement) on its ``$router_interface`` so that the ``eth0``
-interface of ``vRouter`` automatically configures an IPv6 SLAAC address.
-
-.. code-block:: bash
-
- radvd -C /tmp/radvd.$router_interface.conf -p /tmp/br-ex.pid.radvd -m syslog
-
-**SETUP-SVM-28**: Add an IPv6 downstream route pointing to the ``eth0`` interface of vRouter.
-
-.. code-block:: bash
-
- ip -6 route add 2001:db8:0:2::/64 via 2001:db8:0:1:f816:3eff:fe11:1111
-
-**SETUP-SVM-29**: The routing table should now look similar to the one shown below.
-
-.. code-block:: bash
-
- ip -6 route show
- 2001:db8:0:1::1 dev qr-42968b9e-62 proto kernel metric 256
- 2001:db8:0:1::/64 dev qr-42968b9e-62 proto kernel metric 256 expires 86384sec
- 2001:db8:0:2::/64 via 2001:db8:0:1:f816:3eff:fe11:1111 dev qr-42968b9e-62 proto ra metric 1024 expires 29sec
- fe80::/64 dev qg-3736e0c7-7c proto kernel metric 256
- fe80::/64 dev qr-42968b9e-62 proto kernel metric 256
-
-**SETUP-SVM-30**: If all goes well, the IPv6 addresses assigned to the VMs would be as follows:
-
-.. code-block:: bash
-
- # vRouter eth0 interface would have the following IPv6 address:
- # 2001:db8:0:1:f816:3eff:fe11:1111/64
- # vRouter eth1 interface would have the following IPv6 address:
- # 2001:db8:0:2::1/64
- # VM1 would have the following IPv6 address:
- # 2001:db8:0:2:f816:3eff:fe33:3333/64
- # VM2 would have the following IPv6 address:
- # 2001:db8:0:2:f816:3eff:fe44:4444/64
-
---------------------------------
-Testing to Verify Setup Complete
---------------------------------
-
-Now, let us ``SSH`` into those VMs, e.g. VM1 and / or VM2 and / or vRouter, to confirm that
-they have successfully configured their IPv6 addresses using ``SLAAC`` with prefix
-``2001:db8:0:2::/64`` from ``vRouter``.
-
-We use the ``floatingip`` mechanism to enable ``SSH`` access.
-
-**SETUP-SVM-31**: Now we can ``SSH`` into the VMs by executing the following commands.
-
-.. code-block:: bash
-
- # 1. Create a floating IP and associate it with VM1, VM2 and vRouter (via the port ID that is passed).
- # Note that the name "ext-net" may work for some installers such as Compass and Joid
- # Change the name "ext-net" to match the name of external network that an installer creates
- neutron floatingip-create --port-id $(neutron port-list | grep -w eth0-VM1 | \
- awk '{print $2}') ext-net
- neutron floatingip-create --port-id $(neutron port-list | grep -w eth0-VM2 | \
- awk '{print $2}') ext-net
- neutron floatingip-create --port-id $(neutron port-list | grep -w eth1-vRouter | \
- awk '{print $2}') ext-net
-
- # 2. Display the floating IPs associated with VM1, VM2 and vRouter.
- neutron floatingip-list -F floating_ip_address -F port_id | grep $(neutron port-list | \
- grep -w eth0-VM1 | awk '{print $2}') | awk '{print $2}'
- neutron floatingip-list -F floating_ip_address -F port_id | grep $(neutron port-list | \
- grep -w eth0-VM2 | awk '{print $2}') | awk '{print $2}'
- neutron floatingip-list -F floating_ip_address -F port_id | grep $(neutron port-list | \
- grep -w eth1-vRouter | awk '{print $2}') | awk '{print $2}'
-
- # 3. To ssh to the vRouter, VM1 and VM2, execute the following commands.
- ssh -i ~/vRouterKey fedora@<floating-ip-of-vRouter>
- ssh -i ~/vRouterKey cirros@<floating-ip-of-VM1>
- ssh -i ~/vRouterKey cirros@<floating-ip-of-VM2>
-
If everything goes well, ``ssh`` will be successful and you will be logged into those VMs.
Run some commands to verify that IPv6 addresses are configured on ``eth0`` interface.
-**SETUP-SVM-32**: Show an IPv6 address with a prefix of ``2001:db8:0:2::/64``
+**OPNFV-NATIVE-SETUP-19**: Show an IPv6 address with a prefix of ``2001:db8:0:2::/64``
.. code-block:: bash
ip address show
-**SETUP-SVM-33**: Ping an external IPv6 address, e.g. that of ``ipv6-router``
+**OPNFV-NATIVE-SETUP-20**: Ping an external IPv6 address, e.g. that of ``ipv6-router``
.. code-block:: bash
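# Example: ping the ipv6-router gateway address used throughout this guide
# (2001:db8:0:1::1 is an assumption carried over from the exemplary addressing)
ping6 2001:db8:0:1::1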