 docs/setupservicevm/1-ipv6-configguide-odl-setup.rst     |  85
 docs/setupservicevm/2-ipv6-configguide-os-controller.rst |  54
 docs/setupservicevm/3-ipv6-configguide-os-compute.rst    |  55
 docs/setupservicevm/4-ipv6-configguide-servicevm.rst     | 222
 4 files changed, 257 insertions(+), 159 deletions(-)
diff --git a/docs/setupservicevm/1-ipv6-configguide-odl-setup.rst b/docs/setupservicevm/1-ipv6-configguide-odl-setup.rst
index 6c142c6..ce7823e 100644
--- a/docs/setupservicevm/1-ipv6-configguide-odl-setup.rst
+++ b/docs/setupservicevm/1-ipv6-configguide-odl-setup.rst
@@ -14,15 +14,21 @@ For exemplary purpose, we assume:
**ODL-2**: Download the ODL Lithium distribution from
``http://www.opendaylight.org/software/downloads``
- ``wget https://nexus.opendaylight.org/content/groups/public/org/opendaylight/integration/distribution-karaf/0.3.3-Lithium-SR3/distribution-karaf-0.3.3-Lithium-SR3.tar.gz``
+.. code-block:: bash
+
+ wget https://nexus.opendaylight.org/content/groups/public/org/opendaylight/integration/distribution-karaf/0.3.3-Lithium-SR3/distribution-karaf-0.3.3-Lithium-SR3.tar.gz
**ODL-3**: Extract the tar file
- ``tar -zxvf distribution-karaf-0.3.3-Lithium-SR3.tar.gz``
+.. code-block:: bash
+
+ tar -zxvf distribution-karaf-0.3.3-Lithium-SR3.tar.gz
**ODL-4**: Install Java7
- ``sudo yum install -y java-1.7.0-openjdk.x86_64``
+.. code-block:: bash
+
+ sudo yum install -y java-1.7.0-openjdk.x86_64
**ODL-5 (OPTIONAL)**: We are using ``iptables`` instead of
``firewalld`` but this is optional for the OpenDaylight Controller
@@ -30,75 +36,80 @@ Node. The objective is to allow all connections on the internal
private network (ens160). The same objective can be achieved using
firewalld as well. **If you intend to use firewalld, please skip this step and go directly to the next step**:
- ``sudo systemctl stop firewalld.service``
-
- ``sudo yum remove -y firewalld``
-
- ``sudo yum install -y iptables-services``
-
- ``sudo touch /etc/sysconfig/iptables``
-
- ``sudo systemctl enable iptables.service``
-
- ``sudo systemctl start iptables.service``
+.. code-block:: bash
+
- ``sudo iptables -I INPUT 1 -i ens160 -j ACCEPT``
-
- ``# For ODL DLUX UI``
-
- ``sudo iptables -I INPUT -m state --state NEW -p tcp --dport 8181 -j ACCEPT``
-
- ``sudo iptables-save > /etc/sysconfig/iptables``
+ sudo systemctl stop firewalld.service
+ sudo yum remove -y firewalld
+ sudo yum install -y iptables-services
+ sudo touch /etc/sysconfig/iptables
+ sudo systemctl enable iptables.service
+ sudo systemctl start iptables.service
+ sudo iptables -I INPUT 1 -i ens160 -j ACCEPT
+ sudo iptables -I INPUT -m state --state NEW -p tcp --dport 8181 -j ACCEPT # For ODL DLUX UI
+ sudo iptables-save > /etc/sysconfig/iptables
**ODL-6**: Open a screen session.
- ``screen -S ODL_Controller``
+.. code-block:: bash
+
+ screen -S ODL_Controller
**ODL-7**: In the new screen session, change directory to where Open
Daylight is installed. Here we use ``odl`` directory name and
``Lithium SR3`` installation as an example.
- ``cd ~/odl/distribution-karaf-0.3.3-Lithium-SR3/bin``
+.. code-block:: bash
+
+ cd ~/odl/distribution-karaf-0.3.3-Lithium-SR3/bin
**ODL-8**: Set the JAVA environment variables.
- ``export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.85-2.6.1.2.el7_1.x86_64/jre``
+.. code-block:: bash
+
- ``export PATH=$PATH:/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.85-2.6.1.2.el7_1.x86_64/jre/bin``
+ export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.85-2.6.1.2.el7_1.x86_64/jre
+ export PATH=$PATH:/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.85-2.6.1.2.el7_1.x86_64/jre/bin
**ODL-9**: Run the ``karaf`` shell.
- ``./karaf``
+.. code-block:: bash
+
+ ./karaf
**ODL-10**: You are now in the Karaf shell of Open Daylight. To explore the list of available features you can execute
``feature:list``. In order to enable Open Daylight with OpenStack, you have to load the ``odl-ovsdb-openstack``
feature.
- ``opendaylight-user@opnfv>feature:install odl-ovsdb-openstack``
+.. code-block:: bash
+
+ opendaylight-user@opnfv>feature:install odl-ovsdb-openstack
**ODL-11**: Verify that OVSDB feature is installed successfully.
- ``opendaylight-user@opnfv>feature:list -i | grep ovsdb``
+.. code-block:: bash
+
-| odl-ovsdb-openstack | 1.1.1-Lithium-SR1 | x | ovsdb-1.1.1-Lithium-SR1 | OpenDaylight :: OVSDB :: OpenStack Network Virtual
-| odl-ovsdb-southbound-api | 1.1.1-Lithium-SR1 | x | odl-ovsdb-southbound-1.1.1-Lithium-SR1 | OpenDaylight :: southbound :: api
-| odl-ovsdb-southbound-impl | 1.1.1-Lithium-SR1 | x | odl-ovsdb-southbound-1.1.1-Lithium-SR1 | OpenDaylight :: southbound :: impl
-| odl-ovsdb-southbound-impl-rest|1.1.1-Lithium-SR1 | x | odl-ovsdb-southbound-1.1.1-Lithium-SR1| OpenDaylight :: southbound :: impl :: REST
-| odl-ovsdb-southbound-impl-ui | 1.1.1-Lithium-SR1| x | odl-ovsdb-southbound-1.1.1-Lithium-SR1| OpenDaylight :: southbound :: impl :: UI
-| ``opendaylight-user@opnfv>``
+ opendaylight-user@opnfv>feature:list -i | grep ovsdb
+ odl-ovsdb-openstack | 1.1.1-Lithium-SR1 | x | ovsdb-1.1.1-Lithium-SR1 | OpenDaylight :: OVSDB :: OpenStack Network Virtual
+ odl-ovsdb-southbound-api | 1.1.1-Lithium-SR1 | x | odl-ovsdb-southbound-1.1.1-Lithium-SR1 | OpenDaylight :: southbound :: api
+ odl-ovsdb-southbound-impl | 1.1.1-Lithium-SR1 | x | odl-ovsdb-southbound-1.1.1-Lithium-SR1 | OpenDaylight :: southbound :: impl
+ odl-ovsdb-southbound-impl-rest|1.1.1-Lithium-SR1 | x | odl-ovsdb-southbound-1.1.1-Lithium-SR1| OpenDaylight :: southbound :: impl :: REST
+ odl-ovsdb-southbound-impl-ui | 1.1.1-Lithium-SR1| x | odl-ovsdb-southbound-1.1.1-Lithium-SR1| OpenDaylight :: southbound :: impl :: UI
+ opendaylight-user@opnfv>
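The check in **ODL-11** can also be scripted. A minimal sketch that parses a captured copy of the ``feature:list -i | grep ovsdb`` output (the sample line mirrors the guide; nothing here talks to a live controller) and confirms the installed marker ``x`` for ``odl-ovsdb-openstack``:

```shell
# Parse a captured "feature:list -i | grep ovsdb" line and confirm the
# odl-ovsdb-openstack feature carries the installed marker ('x').
feature_list='odl-ovsdb-openstack | 1.1.1-Lithium-SR1 | x | ovsdb-1.1.1-Lithium-SR1 | OpenDaylight :: OVSDB :: OpenStack Network Virtual'

installed=$(printf '%s\n' "$feature_list" \
  | grep 'odl-ovsdb-openstack' \
  | awk -F'|' '{gsub(/ /, "", $3); print $3}')

echo "$installed"   # prints "x" when the feature is installed
```

On a live controller you would capture the real output of ``feature:list -i`` instead of the canned sample.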
**ODL-12**: To view the logs, you can use the following commands (or alternatively the file ``data/log/karaf.log``).
- ``opendaylight-user@opnfv>log:display``
+.. code-block:: bash
+
- ``opendaylight-user@opnfv>log:tail``
+ opendaylight-user@opnfv>log:display
+ opendaylight-user@opnfv>log:tail
**ODL-13**: To enable ODL DLUX UI, install the following features.
Then you can navigate to
``http://<opnfv-odl-controller IP address>:8181/index.html`` for DLUX
UI. The default user-name and password is ``admin/admin``.
- ``opendaylight-user@opnfv>feature:install odl-restconf odl-l2switch-switch odl-mdsal-apidocs odl-dlux-core``
+.. code-block:: bash
+
+ opendaylight-user@opnfv>feature:install odl-restconf odl-l2switch-switch odl-mdsal-apidocs odl-dlux-core
**ODL-14**: To exit out of the screen session, please use the command ``CTRL+a`` followed by ``d``
diff --git a/docs/setupservicevm/2-ipv6-configguide-os-controller.rst b/docs/setupservicevm/2-ipv6-configguide-os-controller.rst
index f748f0b..b0dd63b 100644
--- a/docs/setupservicevm/2-ipv6-configguide-os-controller.rst
+++ b/docs/setupservicevm/2-ipv6-configguide-os-controller.rst
@@ -20,15 +20,21 @@ For exemplary purpose, we assume:
**OS-N-3**: Download devstack and switch to stable/kilo branch
- ``git clone https://github.com/openstack-dev/devstack.git -b stable/kilo``
+.. code-block:: bash
+
+ git clone https://github.com/openstack-dev/devstack.git -b stable/kilo
**OS-N-4**: Start a new terminal, and change directory to where OpenStack is installed.
- ``cd ~/devstack``
+.. code-block:: bash
+
+ cd ~/devstack
**OS-N-5**: Create a ``local.conf`` file with the contents from the following URL.
- ``http://fpaste.org/276949/39476214/``
+.. code-block:: bash
+
+ http://fpaste.org/276949/39476214/
Please note:
@@ -39,17 +45,21 @@ Please note:
**OS-N-6**: Initiate the OpenStack setup by invoking ``stack.sh``
- ``./stack.sh``
+.. code-block:: bash
+
+ ./stack.sh
**OS-N-7**: If the setup is successful you would see the following logs on the console. Please note
that the IP addresses are all for the purpose of example. Your IP addresses will match the ones
of your actual network interfaces.
- ``This is your host ip: <opnfv-os-controller IP address>``
-| ``Horizon is now available at http://<opnfv-os-controller IP address>/``
-| ``Keystone is serving at http://<opnfv-os-controller IP address>:5000/``
-| ``The default users are: admin and demo``
-| ``The password: password``
+.. code-block:: bash
+
+ This is your host ip: <opnfv-os-controller IP address>
+ Horizon is now available at http://<opnfv-os-controller IP address>/
+ Keystone is serving at http://<opnfv-os-controller IP address>:5000/
+ The default users are: admin and demo
+ The password: password
**OS-N-8**: Assuming that all goes well, you can set ``OFFLINE=True`` and ``RECLONE=no`` in ``local.conf``
to lock the codebase. Devstack uses these configuration parameters to determine if it has to run with
@@ -57,19 +67,23 @@ the existing codebase or update to the latest copy.
**OS-N-9**: Source the credentials.
- ``opnfv@opnfv-os-controller:~/devstack$ source openrc admin demo``
+.. code-block:: bash
+
+ opnfv@opnfv-os-controller:~/devstack$ source openrc admin demo
**OS-N-10**: Verify some commands to check if setup is working fine.
- ``opnfv@opnfv-os-controller:~/devstack$ nova flavor-list``
-| +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
-| | ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
-| +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
-| | 1 | m1.tiny | 512 | 1 | 0 | | 1 | 1.0 | True |
-| | 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True |
-| | 3 | m1.medium | 4096 | 40 | 0 | | 2 | 1.0 | True |
-| | 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True |
-| | 5 | m1.xlarge | 16384 | 160 | 0 | | 8 | 1.0 | True |
-| +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
+.. code-block:: bash
+
+ opnfv@opnfv-os-controller:~/devstack$ nova flavor-list
+ +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
+ | ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+ +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
+ | 1 | m1.tiny | 512 | 1 | 0 | | 1 | 1.0 | True |
+ | 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True |
+ | 3 | m1.medium | 4096 | 40 | 0 | | 2 | 1.0 | True |
+ | 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True |
+ | 5 | m1.xlarge | 16384 | 160 | 0 | | 8 | 1.0 | True |
+ +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
Now you can start the Compute node setup.
diff --git a/docs/setupservicevm/3-ipv6-configguide-os-compute.rst b/docs/setupservicevm/3-ipv6-configguide-os-compute.rst
index d4208bc..6bc0dd9 100644
--- a/docs/setupservicevm/3-ipv6-configguide-os-compute.rst
+++ b/docs/setupservicevm/3-ipv6-configguide-os-compute.rst
@@ -13,32 +13,43 @@ For exemplary purpose, we assume:
**OS-M-2**: Update the packages and install git
- ``sudo apt-get update -y``
+.. code-block:: bash
+
- ``sudo apt-get install -y git``
+ sudo apt-get update -y
+ sudo apt-get install -y git
**OS-M-3**: Download devstack and switch to stable/kilo branch
- ``git clone https://github.com/openstack-dev/devstack.git -b stable/kilo``
+.. code-block:: bash
+
+ git clone https://github.com/openstack-dev/devstack.git -b stable/kilo
**OS-M-4**: Start a new terminal, and change directory to where OpenStack is installed.
- ``cd ~/devstack``
+.. code-block:: bash
+
+ cd ~/devstack
**OS-M-5**: Create a ``local.conf`` file with the contents from the following URL.
- ``http://fpaste.org/276958/44395955/``
+.. code-block:: bash
+
+ http://fpaste.org/276958/44395955/
+
+Please note:
*Note 1: you need to change the IP address of ``SERVICE_HOST`` to point to your actual IP address
-of OpenStack Controller.
+ of OpenStack Controller.
*Note 2: you need to change the IP address of ``ODL_MGR_IP`` to point to your actual IP address
-of Open Daylight Controller.
+ of Open Daylight Controller.
*Note 3: You may have to change the value of ``ODL_PROVIDER_MAPPINGS`` and ``PUBLIC_INTERFACE``
-to match your actual network interface.
+ to match your actual network interface.
**OS-M-6**: Initiate the OpenStack setup by invoking ``stack.sh``
- ``./stack.sh``
+.. code-block:: bash
+
+ ./stack.sh
**OS-M-7**: Assuming that all goes well, you can set ``OFFLINE=True`` and ``RECLONE=no`` in ``local.conf``
to lock the codebase. Devstack uses these configuration parameters to determine if it has to run with
@@ -46,19 +57,23 @@ the existing codebase or update to the latest copy.
**OS-M-8**: Source the credentials.
- ``opnfv@opnfv-os-controller:~/devstack$ source openrc admin demo``
+.. code-block:: bash
+
+ opnfv@opnfv-os-controller:~/devstack$ source openrc admin demo
**OS-M-9**: Verify some commands to check if setup is working fine.
- ``opnfv@opnfv-os-controller:~/devstack$ nova flavor-list``
-| +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
-| | ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
-| +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
-| | 1 | m1.tiny | 512 | 1 | 0 | | 1 | 1.0 | True |
-| | 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True |
-| | 3 | m1.medium | 4096 | 40 | 0 | | 2 | 1.0 | True |
-| | 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True |
-| | 5 | m1.xlarge | 16384 | 160 | 0 | | 8 | 1.0 | True |
-| +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
+.. code-block:: bash
+
+ opnfv@opnfv-os-controller:~/devstack$ nova flavor-list
+ +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
+ | ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+ +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
+ | 1 | m1.tiny | 512 | 1 | 0 | | 1 | 1.0 | True |
+ | 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True |
+ | 3 | m1.medium | 4096 | 40 | 0 | | 2 | 1.0 | True |
+ | 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True |
+ | 5 | m1.xlarge | 16384 | 160 | 0 | | 8 | 1.0 | True |
+ +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
Now you can start to set up the service VM as an IPv6 vRouter in the environment of OpenStack and Open Daylight.
diff --git a/docs/setupservicevm/4-ipv6-configguide-servicevm.rst b/docs/setupservicevm/4-ipv6-configguide-servicevm.rst
index 1ffbc53..8e83022 100644
--- a/docs/setupservicevm/4-ipv6-configguide-servicevm.rst
+++ b/docs/setupservicevm/4-ipv6-configguide-servicevm.rst
@@ -18,11 +18,15 @@ Source the Credentials in OpenStack Controller Node
**SETUP-SVM-1**: Login with username ``opnfv`` in OpenStack Controller Node ``opnfv-os-controller``.
Start a new terminal, and change directory to where OpenStack is installed.
- ``cd ~/devstack``
+.. code-block:: bash
+
+ cd ~/devstack
**SETUP-SVM-2**: Source the credentials.
- ``opnfv@opnfv-os-controller:~/devstack$ source openrc admin demo``
+.. code-block:: bash
+
+ opnfv@opnfv-os-controller:~/devstack$ source openrc admin demo
**************************************
Add External Connectivity to ``br-ex``
@@ -38,30 +42,33 @@ from ``eth1`` to ``br-ex``.
**SETUP-SVM-3**: Add ``eth1`` to ``br-ex`` and move the IP address and the default route from ``eth1`` to ``br-ex``
- ``sudo ip addr del <External IP address of opnfv-os-controller> dev eth1 && sudo ovs-vsctl add-port br-ex eth1 &&
+.. code-block:: bash
+
+   sudo ip addr del <External IP address of opnfv-os-controller> dev eth1 && sudo ovs-vsctl add-port br-ex eth1 &&
-sudo ifconfig eth1 up && sudo ip addr add <External IP address of opnfv-os-controller> dev br-ex && sudo ifconfig
-br-ex up && sudo ip route add default via <Default gateway IP address of opnfv-os-controller> dev br-ex``
+   sudo ifconfig eth1 up && sudo ip addr add <External IP address of opnfv-os-controller> dev br-ex &&
+   sudo ifconfig br-ex up && sudo ip route add default via <Default gateway IP address of opnfv-os-controller> dev br-ex
-* Note: This can be automated in /etc/network/interfaces.
+Please note that **this can be automated in /etc/network/interfaces**.
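To make the note concrete, one possible ``/etc/network/interfaces`` sketch for an Ubuntu-style system follows; the addresses are placeholders, the exact stanzas depend on your distribution, and an OVS bridge may additionally need the hooks shipped with the openvswitch package:

```
# Hypothetical persistent configuration; addresses are placeholders.
auto eth1
iface eth1 inet manual

auto br-ex
iface br-ex inet static
    address 198.59.156.113
    netmask 255.255.255.0
    gateway 198.59.156.1
```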
**SETUP-SVM-4**: Verify that ``br-ex`` now has the original external IP address, and that the default route is on
``br-ex``
- ``opnfv@opnfv-os-controller:~/devstack$ ip a s br-ex``
-| 38: br-ex: <BROADCAST,UP,LOWER_UP> mtu 1430 qdisc noqueue state UNKNOWN group default
-| link/ether 00:50:56:82:42:d1 brd ff:ff:ff:ff:ff:ff
-| inet 198.59.156.113/24 brd 198.59.156.255 scope global br-ex
-| valid_lft forever preferred_lft forever
-| inet6 fe80::543e:28ff:fe70:4426/64 scope link
-| valid_lft forever preferred_lft forever
+.. code-block:: bash
+
- ``opnfv@opnfv-os-controller:~/devstack$ ip route``
-| default via 198.59.156.1 dev br-ex
-| 10.134.156.0/24 dev eth0 proto kernel scope link src 10.134.156.113
-| 192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1
-| 198.59.156.0/24 dev br-ex proto kernel scope link src 198.59.156.113
+ opnfv@opnfv-os-controller:~/devstack$ ip a s br-ex
+ 38: br-ex: <BROADCAST,UP,LOWER_UP> mtu 1430 qdisc noqueue state UNKNOWN group default
+ link/ether 00:50:56:82:42:d1 brd ff:ff:ff:ff:ff:ff
+ inet 198.59.156.113/24 brd 198.59.156.255 scope global br-ex
+ valid_lft forever preferred_lft forever
+ inet6 fe80::543e:28ff:fe70:4426/64 scope link
+ valid_lft forever preferred_lft forever
+ opnfv@opnfv-os-controller:~/devstack$ ip route
+ default via 198.59.156.1 dev br-ex
+ 10.134.156.0/24 dev eth0 proto kernel scope link src 10.134.156.113
+ 192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1
+ 198.59.156.0/24 dev br-ex proto kernel scope link src 198.59.156.113
-* Note: The IP addresses above are exemplary purpose
+Please note that the IP addresses above are for exemplary purposes only.
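The verification in **SETUP-SVM-4** is easy to automate. A small sketch that parses a captured ``ip route`` listing (the sample mirrors the exemplary output above) and extracts the device carrying the default route:

```shell
# Extract the device of the default route from captured "ip route"
# output; it should be br-ex after SETUP-SVM-3.
routes='default via 198.59.156.1 dev br-ex
10.134.156.0/24 dev eth0 proto kernel scope link src 10.134.156.113
198.59.156.0/24 dev br-ex proto kernel scope link src 198.59.156.113'

default_dev=$(printf '%s\n' "$routes" \
  | awk '/^default/ { for (i = 1; i < NF; i++) if ($i == "dev") print $(i + 1) }')

echo "$default_dev"   # prints "br-ex"
```

On a live node you would pipe the real ``ip route`` output instead of the canned sample.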
********************************************************
Create IPv4 Subnet and Router with External Connectivity
@@ -69,37 +76,48 @@ Create IPv4 Subnet and Router with External Connectivity
**SETUP-SVM-5**: Create a Neutron router ``ipv4-router`` which needs to provide external connectivity.
- ``neutron router-create ipv4-router``
+.. code-block:: bash
+
+ neutron router-create ipv4-router
**SETUP-SVM-6**: Create an external network/subnet ``ext-net`` using the appropriate values based on the
data-center physical network setup.
- ``neutron net-create --router:external ext-net``
+.. code-block:: bash
+
- ``neutron subnet-create --disable-dhcp --allocation-pool start=198.59.156.251,end=198.59.156.254 --gateway
-198.59.156.1 ext-net 198.59.156.0/24``
+   neutron net-create --router:external ext-net
+   neutron subnet-create --disable-dhcp --allocation-pool start=198.59.156.251,end=198.59.156.254 --gateway \
+   198.59.156.1 ext-net 198.59.156.0/24
-* Note: The IP addresses in the command above are for exemplary purpose. **Please replace the IP addresses of
+Please note that the IP addresses in the command above are for exemplary purposes. **Please replace them with the IP addresses of
your actual network**.
**SETUP-SVM-7**: Associate the ``ext-net`` to the Neutron router ``ipv4-router``.
- ``neutron router-gateway-set ipv4-router ext-net``
+.. code-block:: bash
+
+ neutron router-gateway-set ipv4-router ext-net
**SETUP-SVM-8**: Create an internal/tenant IPv4 network ``ipv4-int-network1``
- ``neutron net-create ipv4-int-network1``
+.. code-block:: bash
+
+ neutron net-create ipv4-int-network1
**SETUP-SVM-9**: Create an IPv4 subnet ``ipv4-int-subnet1`` in the internal network ``ipv4-int-network1``
- ``neutron subnet-create --name ipv4-int-subnet1 --dns-nameserver 8.8.8.8 ipv4-int-network1 20.0.0.0/24``
+.. code-block:: bash
+
+ neutron subnet-create --name ipv4-int-subnet1 --dns-nameserver 8.8.8.8 ipv4-int-network1 20.0.0.0/24
-* Note: The IP addresses in the command above are for exemplary purpose. **Please replace the IP addresses of your
+Please note that the IP addresses in the command above are for exemplary purposes. **Please replace them with the IP addresses of your
actual network**
**SETUP-SVM-10**: Associate the IPv4 internal subnet ``ipv4-int-subnet1`` to the Neutron router ``ipv4-router``.
- ``neutron router-interface-add ipv4-router ipv4-int-subnet1``
+.. code-block:: bash
+
+ neutron router-interface-add ipv4-router ipv4-int-subnet1
********************************************************
Create IPv6 Subnet and Router with External Connectivity
@@ -110,27 +128,37 @@ IPv6 router.
**SETUP-SVM-11**: Create a second Neutron router ``ipv6-router`` which needs to provide external connectivity
- ``neutron router-create ipv6-router``
+.. code-block:: bash
+
+ neutron router-create ipv6-router
**SETUP-SVM-12**: Associate the ``ext-net`` to the Neutron router ``ipv6-router``
- ``neutron router-gateway-set ipv6-router ext-net``
+.. code-block:: bash
+
+ neutron router-gateway-set ipv6-router ext-net
**SETUP-SVM-13**: Create a second internal/tenant IPv4 network ``ipv4-int-network2``
- ``neutron net-create ipv4-int-network2``
+.. code-block:: bash
+
+ neutron net-create ipv4-int-network2
**SETUP-SVM-14**: Create an IPv4 subnet ``ipv4-int-subnet2`` for the ``ipv6-router`` internal network
``ipv4-int-network2``
- ``neutron subnet-create --name ipv4-int-subnet2 --dns-nameserver 8.8.8.8 ipv4-int-network2 10.0.0.0/24``
+.. code-block:: bash
+
-* Note: The IP addresses in the command above are for exemplary purpose. **Please replace the IP addresses of
+ neutron subnet-create --name ipv4-int-subnet2 --dns-nameserver 8.8.8.8 ipv4-int-network2 10.0.0.0/24
+
+Please note that the IP addresses in the command above are for exemplary purposes. **Please replace them with the IP addresses of
your actual network**
**SETUP-SVM-15**: Associate the IPv4 internal subnet ``ipv4-int-subnet2`` to the Neutron router ``ipv6-router``.
- ``neutron router-interface-add ipv6-router ipv4-int-subnet2``
+.. code-block:: bash
+
+ neutron router-interface-add ipv6-router ipv4-int-subnet2
**************************************************
Prepare Image, Metadata and Keypair for Service VM
@@ -138,18 +166,24 @@ Prepare Image, Metadata and Keypair for Service VM
**SETUP-SVM-16**: Download ``fedora20`` image which would be used as ``vRouter``
- ``glance image-create --name 'Fedora20' --disk-format qcow2 --container-format bare --is-public true --copy-from http://cloud.fedoraproject.org/fedora-20.x86_64.qcow2``
+.. code-block:: bash
+
+ glance image-create --name 'Fedora20' --disk-format qcow2 --container-format bare --is-public true --copy-from http://cloud.fedoraproject.org/fedora-20.x86_64.qcow2
**SETUP-SVM-17**: Create a keypair
- ``nova keypair-add vRouterKey > ~/vRouterKey``
+.. code-block:: bash
+
+ nova keypair-add vRouterKey > ~/vRouterKey
**SETUP-SVM-18**: Copy the contents from the following URL to ``metadata.txt``, i.e. preparing metadata which enables
IPv6 router functionality inside ``vRouter``
- ``http://fpaste.org/303942/50781923/``
+.. code-block:: bash
+
+ http://fpaste.org/303942/50781923/
-* Note: this ``metadata.txt`` will enable the ``vRouter`` to automatically spawn a ``radvd`` daemon, which advertises
+Please note that this ``metadata.txt`` will enable the ``vRouter`` to automatically spawn a ``radvd`` daemon, which advertises
its IPv6 subnet prefix ``2001:db8:0:2::/64`` in RA (Router Advertisement) message through its ``eth1`` interface to
other VMs on ``ipv4-int-network1``. The ``radvd`` daemon also advertises the routing information, which routes to
``2001:db8:0:2::/64`` subnet, in RA (Router Advertisement) message through its ``eth0`` interface to ``eth1``
@@ -165,24 +199,29 @@ and ``eth1`` interface on ``ipv4-int-network1`` connecting to ``ipv4-router``.
**SETUP-SVM-19**: Boot the ``vRouter`` using ``Fedora20`` image on the OpenStack Compute Node with hostname
``opnfv-os-compute``
- ``nova boot --image Fedora20 --flavor m1.small --user-data ./metadata.txt --availability-zone nova:opnfv-os-compute
+.. code-block:: bash
+
+   nova boot --image Fedora20 --flavor m1.small --user-data ./metadata.txt --availability-zone nova:opnfv-os-compute \
---nic net-id=$(neutron net-list | grep -w ipv4-int-network2 | awk '{print $2}')
---nic net-id=$(neutron net-list | grep -w ipv4-int-network1 | awk '{print $2}') --key-name vRouterKey vRouter``
+   --nic net-id=$(neutron net-list | grep -w ipv4-int-network2 | awk '{print $2}') \
+   --nic net-id=$(neutron net-list | grep -w ipv4-int-network1 | awk '{print $2}') --key-name vRouterKey vRouter
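The ``$(neutron net-list | grep -w ... | awk '{print $2}')`` substitutions above simply pick the UUID column out of the table that ``neutron net-list`` prints. A sketch against a canned table (the UUID is a made-up placeholder) shows what the pipeline yields:

```shell
# Pick the UUID column from a sample "neutron net-list" table; this is
# the same grep/awk pipeline the nova boot command embeds.
# The UUID below is a made-up placeholder.
net_list='+--------------------------------------+-------------------+
| id                                   | name              |
+--------------------------------------+-------------------+
| 1b7babba-1111-2222-3333-444455556666 | ipv4-int-network1 |
+--------------------------------------+-------------------+'

net_id=$(printf '%s\n' "$net_list" | grep -w ipv4-int-network1 | awk '{print $2}')
echo "$net_id"   # prints the UUID of ipv4-int-network1
```

``grep -w`` matters here: it keeps ``ipv4-int-network1`` from also matching a longer name such as ``ipv4-int-network11``.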
**SETUP-SVM-20**: Verify that ``Fedora20`` image boots up successfully and the ssh keys are properly injected
- ``nova list``
+.. code-block:: bash
+
+ nova list
+ nova console-log vRouter
- ``nova console-log vRouter``
+Please note that **it may take a few minutes** for the necessary packages to get installed and ``ssh`` keys to be injected.
-* Note: It may take few minutes for the necessary packages to get installed and ssh keys to be injected.
+.. code-block:: bash
+
- ``# Sample Output``
-| [ 762.884523] cloud-init[871]: ec2: #############################################################
-| [ 762.909634] cloud-init[871]: ec2: -----BEGIN SSH HOST KEY FINGERPRINTS-----
-| [ 762.931626] cloud-init[871]: ec2: 2048 e3:dc:3d:4a:bc:b6:b0:77:75:a1:70:a3:d0:2a:47:a9 (RSA)
-| [ 762.957380] cloud-init[871]: ec2: -----END SSH HOST KEY FINGERPRINTS-----
-| [ 762.979554] cloud-init[871]: ec2: #############################################################
+ # Sample Output
+ [ 762.884523] cloud-init[871]: ec2: #############################################################
+ [ 762.909634] cloud-init[871]: ec2: -----BEGIN SSH HOST KEY FINGERPRINTS-----
+ [ 762.931626] cloud-init[871]: ec2: 2048 e3:dc:3d:4a:bc:b6:b0:77:75:a1:70:a3:d0:2a:47:a9 (RSA)
+ [ 762.957380] cloud-init[871]: ec2: -----END SSH HOST KEY FINGERPRINTS-----
+ [ 762.979554] cloud-init[871]: ec2: #############################################################
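Scanning the console log for the injected host key can be automated; a sketch over the captured sample lines above (a live run would pipe ``nova console-log vRouter`` instead):

```shell
# Extract the 16-octet SSH host key fingerprint from captured
# cloud-init console output (sample lines mirror the guide).
console_log='[  762.909634] cloud-init[871]: ec2: -----BEGIN SSH HOST KEY FINGERPRINTS-----
[  762.931626] cloud-init[871]: ec2: 2048 e3:dc:3d:4a:bc:b6:b0:77:75:a1:70:a3:d0:2a:47:a9 (RSA)
[  762.957380] cloud-init[871]: ec2: -----END SSH HOST KEY FINGERPRINTS-----'

fingerprint=$(printf '%s\n' "$console_log" \
  | grep -o '\([0-9a-f]\{2\}:\)\{15\}[0-9a-f]\{2\}')

echo "$fingerprint"
```

An empty result means cloud-init has not finished injecting the keys yet; retry after a few minutes as noted above.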
*******************************************
Boot Two Other VMs in ``ipv4-int-network1``
@@ -197,23 +236,27 @@ options or via ``meta-data``.
**SETUP-SVM-21**: Create VM1 on OpenStack Controller Node with hostname ``opnfv-os-controller``
- ``nova boot --image cirros-0.3.4-x86_64-uec --flavor m1.tiny --nic net-id=$(neutron net-list |
+.. code-block:: bash
+
+   nova boot --image cirros-0.3.4-x86_64-uec --flavor m1.tiny --nic net-id=$(neutron net-list | \
-grep -w ipv4-int-network1 | awk '{print $2}')
---availability-zone nova:opnfv-os-controller --key-name vRouterKey VM1``
+   grep -w ipv4-int-network1 | awk '{print $2}') \
+   --availability-zone nova:opnfv-os-controller --key-name vRouterKey VM1
**SETUP-SVM-22**: Create VM2 on OpenStack Compute Node with hostname ``opnfv-os-compute``
- ``nova boot --image cirros-0.3.4-x86_64-uec --flavor m1.tiny --nic net-id=$(neutron net-list |
+.. code-block:: bash
+
+   nova boot --image cirros-0.3.4-x86_64-uec --flavor m1.tiny --nic net-id=$(neutron net-list | \
-grep -w ipv4-int-network1 | awk '{print $2}')
---availability-zone nova:opnfv-os-compute --key-name vRouterKey VM2``
+   grep -w ipv4-int-network1 | awk '{print $2}') \
+   --availability-zone nova:opnfv-os-compute --key-name vRouterKey VM2
**SETUP-SVM-23**: Confirm that both the VMs are successfully booted.
- ``nova list``
-
- ``nova console-log VM1``
+.. code-block:: bash
+
- ``nova console-log VM2``
+ nova list
+ nova console-log VM1
+ nova console-log VM2
**********************************
Spawn ``RADVD`` in ``ipv6-router``
@@ -224,52 +267,59 @@ First of all, we will have to identify the ``ipv6-router`` namespace and move to
**SETUP-SVM-24**: identify the ``ipv6-router`` namespace and move to the namespace
- ``sudo ip netns exec qrouter-$(neutron router-list | grep -w ipv6-router | awk '{print $2}') bash``
+.. code-block:: bash
+
+ sudo ip netns exec qrouter-$(neutron router-list | grep -w ipv6-router | awk '{print $2}') bash
**SETUP-SVM-25**: Upon successful execution of the above command, you will be in the router namespace.
Now let us configure the IPv6 address on the <qr-xxx> interface.
- ``router_interface=$(ip a s | grep -w "global qr-*" | awk '{print $7}')``
+.. code-block:: bash
+
- ``ip -6 addr add 2001:db8:0:1::1 dev $router_interface``
+ router_interface=$(ip a s | grep -w "global qr-*" | awk '{print $7}')
+ ip -6 addr add 2001:db8:0:1::1 dev $router_interface
**SETUP-SVM-26**: Copy the following contents to some file, e.g. ``/tmp/br-ex.radvd.conf``
-.. code-block::
+.. code-block:: bash
+
- interface $router_interface
- {
- AdvSendAdvert on;
- MinRtrAdvInterval 3;
- MaxRtrAdvInterval 10;
- prefix 2001:db8:0:1::/64
- {
- AdvOnLink on;
- AdvAutonomous on;
- };
- };
+ interface $router_interface
+ {
+ AdvSendAdvert on;
+ MinRtrAdvInterval 3;
+ MaxRtrAdvInterval 10;
+ prefix 2001:db8:0:1::/64
+ {
+ AdvOnLink on;
+ AdvAutonomous on;
+ };
+ };
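Note that ``$router_interface`` in the file above is a shell variable, which ``radvd`` itself will not expand. One way (a sketch) is to let the shell substitute it while writing the file, e.g. with a here-document; the interface name below is a placeholder for the value found in **SETUP-SVM-25**:

```shell
# Write the radvd config with $router_interface already expanded;
# qr-0123abcd is a placeholder interface name.
router_interface=qr-0123abcd
conf=$(cat <<EOF
interface $router_interface
{
    AdvSendAdvert on;
    MinRtrAdvInterval 3;
    MaxRtrAdvInterval 10;
    prefix 2001:db8:0:1::/64
    {
        AdvOnLink on;
        AdvAutonomous on;
    };
};
EOF
)
printf '%s\n' "$conf"   # redirect to /tmp/br-ex.radvd.conf on a live system
```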
**SETUP-SVM-27**: Spawn a ``radvd`` daemon to simulate an external router. This ``radvd`` daemon advertises its
IPv6 subnet prefix ``2001:db8:0:1::/64`` in RA (Router Advertisement) message through its ``eth1`` interface to
``eth0`` interface of ``vRouter`` on ``ipv4-int-network2``.
- ``$radvd -C /tmp/br-ex.radvd.conf -p /tmp/br-ex.pid.radvd -m syslog``
+.. code-block:: bash
+
+ $radvd -C /tmp/br-ex.radvd.conf -p /tmp/br-ex.pid.radvd -m syslog
**SETUP-SVM-28**: Configure the ``$router_interface`` process entries to process the RA (Router Advertisement)
message from ``vRouter``, and automatically add a downstream route pointing to the LLA (Link Local Address) of
``eth0`` interface of the ``vRouter``.
-.. code-block::
+.. code-block:: bash
+
- sysctl -w net.ipv6.conf.$router_interface.accept_ra=2
- sysctl -w net.ipv6.conf.$router_interface.accept_ra_rt_info_max_plen=64
+ sysctl -w net.ipv6.conf.$router_interface.accept_ra=2
+ sysctl -w net.ipv6.conf.$router_interface.accept_ra_rt_info_max_plen=64
**SETUP-SVM-29**: Please note that after the vRouter successfully initializes and starts sending RA (Router
Advertisement) message (**SETUP-SVM-20**), you would see an IPv6 route to the ``2001:db8:0:2::/64`` prefix
(subnet) reachable via LLA (Link Local Address) of ``eth0`` interface of the ``vRouter``. You can execute the
following command to list the IPv6 routes.
- ``ip -6 route show``
+.. code-block:: bash
+
+ ip -6 route show
********************************
Testing to Verify Setup Complete
@@ -278,29 +328,37 @@ Testing to Verify Setup Complete
Now, let us ``ssh`` to one of the VMs, e.g. VM1, to confirm that it has successfully configured the IPv6 address
using ``SLAAC`` with prefix ``2001:db8:0:2::/64`` from ``vRouter``.
- * Note: You need to get the IPv4 address associated to VM1. This can be inferred from ``nova list`` command.
+Please note that you need to get the IPv4 address associated with VM1. This can be inferred from the ``nova list`` command.
**SETUP-SVM-30**: ``ssh`` VM1
- ``ssh -i /home/odl/vRouterKey cirros@<VM1-IPv4-address>``
+.. code-block:: bash
+
+ ssh -i /home/odl/vRouterKey cirros@<VM1-IPv4-address>
If everything goes well, ``ssh`` will be successful and you will be logged into VM1. Run some commands to verify
that IPv6 addresses are configured on ``eth0`` interface.
**SETUP-SVM-31**: Show an IPv6 address with a prefix of ``2001:db8:0:2::/64``
- ``ip address show``
+.. code-block:: bash
+
+ ip address show
**SETUP-SVM-32**: ping some external IPv6 address, e.g. ``ipv6-router``
- ``ping6 2001:db8:0:1::1``
+.. code-block:: bash
+
+ ping6 2001:db8:0:1::1
If the above ping6 command succeeds, it implies that ``vRouter`` was able to successfully forward the IPv6 traffic
to reach external ``ipv6-router``.
**SETUP-SVM-33**: When all tests show that the setup works as expected, you can now exit the ``ipv6-router`` namespace.
- ``exit``
+.. code-block:: bash
+
+ exit
**********
Next Steps