Diffstat (limited to 'docs')
-rw-r--r--  docs/setupservicevm/0-ipv6-configguide-prep-infra.rst           |  14
-rw-r--r--  docs/setupservicevm/1-ipv6-configguide-odl-setup.rst            |   2
-rw-r--r--  docs/setupservicevm/2-ipv6-configguide-os-controller.rst        |  51
-rw-r--r--  docs/setupservicevm/3-ipv6-configguide-os-compute.rst           |  71
-rw-r--r--  docs/setupservicevm/4-ipv6-configguide-servicevm.rst            | 101
-rw-r--r--  docs/setupservicevm/5-ipv6-configguide-scenario-1-native-os.rst | 202
-rw-r--r--  docs/setupservicevm/images/ipv6-topology-scenario-1.png         | bin 0 -> 49082 bytes
-rw-r--r--  docs/setupservicevm/images/ipv6-topology-scenario-2.png         | bin 0 -> 67153 bytes
-rw-r--r--  docs/setupservicevm/index.rst                                   |  30
-rw-r--r--  docs/setupservicevm/scenario-2.rst                              |   4
-rw-r--r--  docs/setupservicevm/scenario-3.rst                              |   8
11 files changed, 299 insertions, 184 deletions
diff --git a/docs/setupservicevm/0-ipv6-configguide-prep-infra.rst b/docs/setupservicevm/0-ipv6-configguide-prep-infra.rst
index 4c2287a..5b3b584 100644
--- a/docs/setupservicevm/0-ipv6-configguide-prep-infra.rst
+++ b/docs/setupservicevm/0-ipv6-configguide-prep-infra.rst
@@ -7,10 +7,10 @@ Architectural Design
********************
The architectural design of using a service VM as an IPv6 vRouter is
-shown as follows in :numref:`figure1`:
+shown as follows in :numref:`s2-figure1`:
.. figure:: images/ipv6-architecture.png
- :name: figure1
+ :name: s2-figure1
:width: 100%
Architectural Design of Using a VM as an IPv6 vRouter
@@ -27,15 +27,15 @@ OpenStack Compute Node.
For exemplary purpose, we give them hostnames ``opnfv-odl-controller``,
``opnfv-os-controller``, and ``opnfv-os-compute`` respectively.
-The underlay network topology of those 3 hosts are shown as follows in :numref:`figure2`:
+The underlay network topology of those 3 hosts are shown as follows in :numref:`s2-figure2`:
-.. figure:: images/ipv6-topology.png
- :name: figure2
+.. figure:: images/ipv6-topology-scenario-2.png
+ :name: s2-figure2
:width: 100%
- Underlay Network Topology
+ Underlay Network Topology - Scenario 2
-**Please note that the IP address shown in** :numref:`figure2`
+**Please note that the IP address shown in** :numref:`s2-figure2`
**are for exemplary purpose. You need to configure your public IP
address connecting to Internet according to your actual network
infrastructure. And you need to make sure the private IP address are
diff --git a/docs/setupservicevm/1-ipv6-configguide-odl-setup.rst b/docs/setupservicevm/1-ipv6-configguide-odl-setup.rst
index ce7823e..95d6be4 100644
--- a/docs/setupservicevm/1-ipv6-configguide-odl-setup.rst
+++ b/docs/setupservicevm/1-ipv6-configguide-odl-setup.rst
@@ -109,7 +109,7 @@ UI. The default user-name and password is ``admin/admin``.
.. code-block:: bash
- opendaylight-user@opnfv>feature:install odl-restconf odl-l2switch-switch odl-mdsal-apidocs odl-dlux-core
+ opendaylight-user@opnfv>feature:install odl-dlux-core
**ODL-14**: To exit out of screen session, please use the command ``CTRL+a`` followed by ``d``
diff --git a/docs/setupservicevm/2-ipv6-configguide-os-controller.rst b/docs/setupservicevm/2-ipv6-configguide-os-controller.rst
index b0dd63b..3efa370 100644
--- a/docs/setupservicevm/2-ipv6-configguide-os-controller.rst
+++ b/docs/setupservicevm/2-ipv6-configguide-os-controller.rst
@@ -2,22 +2,42 @@
Setting Up OpenStack Controller
===============================
+Please **note** that the instructions shown here use the ``devstack`` installer. If you are an experienced
+user and install OpenStack in a different way, you can skip this step and follow the instructions of the
+method you use to install OpenStack.
+
For exemplary purpose, we assume:
-* The hostname of OpenStack Controller Node is ``opnfv-os-controller``
-* Ubuntu 14.04 is installed
+* The hostname of OpenStack Controller Node is ``opnfv-os-controller``, and the host IP address is ``192.168.0.10``
+* Ubuntu 14.04 or Fedora 21 is installed
* We use ``opnfv`` as username to login.
-* We use ``devstack`` to install OpenStack Kilo
+* We use ``devstack`` to install OpenStack Kilo. Please note that although the instructions are based on
+  OpenStack Kilo, they can be applied to Liberty in the same way.
+
+**OS-N-0**: Login to OpenStack Controller Node with username ``opnfv``
-**OS-N-1**: Login to OpenStack Controller Node with username ``opnfv``
+**OS-N-1**: Update the packages and install git
-**OS-N-2**: Update the packages and install git
+For **Ubuntu**:
.. code-block:: bash
sudo apt-get update -y
sudo apt-get install -y git
+For **Fedora**:
+
+.. code-block:: bash
+
+ sudo yum update -y
+ sudo yum install -y git
+
+**OS-N-2**: Clone the following GitHub repository to get the configuration and metadata files
+
+.. code-block:: bash
+
+ git clone https://github.com/sridhargaddam/opnfv_os_ipv6_poc.git /opt/stack/opnfv_os_ipv6_poc
+
**OS-N-3**: Download devstack and switch to stable/kilo branch
.. code-block:: bash
@@ -30,18 +50,14 @@ For exemplary purpose, we assume:
cd ~/devstack
-**OS-N-5**: Create a ``local.conf`` file with the contents from the following URL.
+**OS-N-5**: Create a ``local.conf`` file from the GitHub repo we cloned at **OS-N-2**.
.. code-block:: bash
- http://fpaste.org/276949/39476214/
-
-Please note:
+ cp /opt/stack/opnfv_os_ipv6_poc/scenario2/local.conf.odl.controller ~/devstack/local.conf
-* Note 1: you need to change the IP address of ``ODL_MGR_IP`` to point to your actual IP address
- of Open Daylight Controller.
-* Note 2: You may have to change the value of ``ODL_PROVIDER_MAPPINGS`` and ``PUBLIC_INTERFACE``
- to match your actual network interfaces.
+Please **note** that you need to change the IP address of ``ODL_MGR_IP`` to point to the actual IP address
+of your Open Daylight Controller.
**OS-N-6**: Initiate Openstack setup by invoking ``stack.sh``
@@ -55,12 +71,15 @@ of your actual network interfaces.
.. code-block:: bash
- This is your host ip: <opnfv-os-controller IP address>
- Horizon is now available at http://<opnfv-os-controller IP address>/
- Keystone is serving at http://<opnfv-os-controller IP address>:5000/
+ This is your host IP address: 192.168.0.10
+ This is your host IPv6 address: ::1
+ Horizon is now available at http://192.168.0.10/
+ Keystone is serving at http://192.168.0.10:5000/
The default users are: admin and demo
The password: password
+Please **note** that the IP addresses above are for exemplary purpose. The output will show the actual IP address of your host.
+
**OS-N-8**: Assuming that all goes well, you can set ``OFFLINE=True`` and ``RECLONE=no`` in ``local.conf``
to lock the codebase. Devstack uses these configuration parameters to determine if it has to run with
the existing codebase or update to the latest copy.
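The **OS-N-8** step above can be sketched as a small shell helper. This is a hypothetical convenience function, not part of the guide; devstack only requires that the two keys appear in ``local.conf``, and the file contents below are a stand-in for a real configuration.

```shell
# Hypothetical helper: set OFFLINE=True and RECLONE=no in local.conf
# idempotently, whether or not the key already exists in the file.
set_conf() {
  file=$1; key=$2; value=$3
  if grep -q "^${key}=" "$file"; then
    # Key already present: rewrite its value in place (GNU sed).
    sed -i "s/^${key}=.*/${key}=${value}/" "$file"
  else
    # Key missing: append it.
    echo "${key}=${value}" >> "$file"
  fi
}

# Demo on a throwaway stand-in for ~/devstack/local.conf
conf=$(mktemp)
printf '[[local|localrc]]\nRECLONE=yes\n' > "$conf"
set_conf "$conf" OFFLINE True
set_conf "$conf" RECLONE no
cat "$conf"
```

Running the demo rewrites the existing ``RECLONE`` line and appends the missing ``OFFLINE`` line.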
diff --git a/docs/setupservicevm/3-ipv6-configguide-os-compute.rst b/docs/setupservicevm/3-ipv6-configguide-os-compute.rst
index 1652131..3873646 100644
--- a/docs/setupservicevm/3-ipv6-configguide-os-compute.rst
+++ b/docs/setupservicevm/3-ipv6-configguide-os-compute.rst
@@ -2,22 +2,42 @@
Setting Up OpenStack Compute Node
=================================
+Please **note** that the instructions shown here use the ``devstack`` installer. If you are an experienced user
+and install OpenStack in a different way, you can skip this step and follow the instructions of the method you
+use to install OpenStack.
+
For exemplary purpose, we assume:
-* The hostname of OpenStack Compute Node is ``opnfv-os-compute``
-* Ubuntu 14.04 is installed
+* The hostname of OpenStack Compute Node is ``opnfv-os-compute``, and the host IP address is ``192.168.0.20``
+* Ubuntu 14.04 or Fedora 21 is installed
* We use ``opnfv`` as username to login.
-* We use ``devstack`` to install OpenStack Kilo
+* We use ``devstack`` to install OpenStack Kilo. Please note that although the instructions are based on
+  OpenStack Kilo, they can be applied to Liberty in the same way.
+
+**OS-M-0**: Login to OpenStack Compute Node with username ``opnfv``
-**OS-M-1**: Login to OpenStack Compute Node with username ``opnfv``
+**OS-M-1**: Update the packages and install git
-**OS-M-2**: Update the packages and install git
+For **Ubuntu**:
.. code-block:: bash
sudo apt-get update -y
sudo apt-get install -y git
+For **Fedora**:
+
+.. code-block:: bash
+
+ sudo yum update -y
+ sudo yum install -y git
+
+**OS-M-2**: Clone the following GitHub repository to get the configuration and metadata files
+
+.. code-block:: bash
+
+ git clone https://github.com/sridhargaddam/opnfv_os_ipv6_poc.git /opt/stack/opnfv_os_ipv6_poc
+
**OS-M-3**: Download devstack and switch to stable/kilo branch
.. code-block:: bash
@@ -30,11 +50,11 @@ For exemplary purpose, we assume:
cd ~/devstack
-**OS-M-5**: Create a ``local.conf`` file with the contents from the following URL.
+**OS-M-5**: Create a ``local.conf`` file from the GitHub repo we cloned at **OS-M-2**.
.. code-block:: bash
- http://fpaste.org/276958/44395955/
+ cp /opt/stack/opnfv_os_ipv6_poc/scenario2/local.conf.odl.compute ~/devstack/local.conf
Please Note:
@@ -42,8 +62,6 @@ Please Note:
of OpenStack Controller.
* Note 2: you need to change the IP address of ``ODL_MGR_IP`` to point to your actual IP address
of Open Daylight Controller.
-* Note 3: You may have to change the value of ``ODL_PROVIDER_MAPPINGS`` and ``PUBLIC_INTERFACE``
- to match your actual network interface.
**OS-M-6**: Initiate Openstack setup by invoking ``stack.sh``
@@ -51,29 +69,34 @@ Please Note:
./stack.sh
-**OS-M-7**: Assuming that all goes well, you can set ``OFFLINE=True`` and ``RECLONE=no`` in ``local.conf``
-to lock the codebase. Devstack uses these configuration parameters to determine if it has to run with
-the existing codebase or update to the latest copy.
+**OS-M-7**: Assuming that all goes well, you should see the following output.
+
+.. code-block:: bash
+
+ This is your host IP address: 192.168.0.20
+ This is your host IPv6 address: ::1
+
+Please **note** that the IP addresses above are for exemplary purpose. The output will show the actual IP address of your host.
+
+You can set ``OFFLINE=True`` and ``RECLONE=no`` in ``local.conf`` to lock the codebase. Devstack uses these
+configuration parameters to determine if it has to run with the existing codebase or update to the latest copy.
**OS-M-8**: Source the credentials.
.. code-block:: bash
- opnfv@opnfv-os-controller:~/devstack$ source openrc admin demo
+ opnfv@opnfv-os-compute:~/devstack$ source openrc admin demo
-**OS-M-9**: Verify some commands to check if setup is working fine.
+**OS-M-9**: You can verify that OpenStack is set up correctly by showing the hypervisor list
.. code-block:: bash
- opnfv@opnfv-os-controller:~/devstack$ nova flavor-list
- +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
- | ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
- +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
- | 1 | m1.tiny | 512 | 1 | 0 | | 1 | 1.0 | True |
- | 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True |
- | 3 | m1.medium | 4096 | 40 | 0 | | 2 | 1.0 | True |
- | 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True |
- | 5 | m1.xlarge | 16384 | 160 | 0 | | 8 | 1.0 | True |
- +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
+ opnfv@opnfv-os-compute:~/devstack$ nova hypervisor-list
+ +----+------------------------------------+---------+------------+
+ | ID | Hypervisor hostname | State | Status |
+ +----+------------------------------------+---------+------------+
+ | 1 | opnfv-os-controller | up | enabled |
+ | 2 | opnfv-os-compute | up | enabled |
+ +----+------------------------------------+---------+------------+
Now you can start to set up the service VM as an IPv6 vRouter in the environment of OpenStack and Open Daylight.
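As an illustrative sanity check (an assumption for this sketch, not part of the guide), the tabular ``nova hypervisor-list`` output shown above can be parsed from a shell to confirm both nodes are registered and enabled:

```shell
# Count enabled hypervisors in `nova hypervisor-list` style output.
# The sample below mirrors the table above; on a live setup you would
# pipe the real command output instead.
sample_output='+----+------------------------------------+---------+------------+
| ID | Hypervisor hostname                | State   | Status     |
+----+------------------------------------+---------+------------+
| 1  | opnfv-os-controller                | up      | enabled    |
| 2  | opnfv-os-compute                   | up      | enabled    |
+----+------------------------------------+---------+------------+'

count_enabled() {
  # Keep only data rows (numeric ID column) whose Status is "enabled".
  echo "$1" | awk -F'|' '$2 ~ /[0-9]/ && $5 ~ /enabled/ {n++} END {print n+0}'
}

count_enabled "$sample_output"   # → 2
```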
diff --git a/docs/setupservicevm/4-ipv6-configguide-servicevm.rst b/docs/setupservicevm/4-ipv6-configguide-servicevm.rst
index 295aab3..884f931 100644
--- a/docs/setupservicevm/4-ipv6-configguide-servicevm.rst
+++ b/docs/setupservicevm/4-ipv6-configguide-servicevm.rst
@@ -37,8 +37,8 @@ Because we need to manually create networks/subnets to achieve the IPv6 vRouter,
``devstack`` does not create any networks/subnets during the setup phase.
In OpenStack Controller Node ``opnfv-os-controller``, ``eth1`` is configured to provide external/public connectivity
-for both IPv4 and IPv6. So let us add this interface to ``br-ex`` and move the IP address, including the default route
-from ``eth1`` to ``br-ex``.
+for both IPv4 and IPv6 (optional). So let us add this interface to ``br-ex`` and move the IP address, including the
+default route from ``eth1`` to ``br-ex``.
**SETUP-SVM-3**: Add ``eth1`` to ``br-ex`` and move the IP address and the default route from ``eth1`` to ``br-ex``
@@ -112,9 +112,6 @@ your actual network**.
neutron subnet-create --name ipv4-int-subnet1 --dns-nameserver 8.8.8.8 ipv4-int-network1 20.0.0.0/24
-Please note that the IP addresses in the command above are for exemplary purpose. **Please replace the
-IP addresses of your actual network**
-
**SETUP-SVM-10**: Associate the IPv4 internal subnet ``ipv4-int-subnet1`` to the Neutron router ``ipv4-router``.
.. code-block:: bash
@@ -153,9 +150,6 @@ IPv6 router.
neutron subnet-create --name ipv4-int-subnet2 --dns-nameserver 8.8.8.8 ipv4-int-network2 10.0.0.0/24
-Please note that the IP addresses in the command above are for exemplary purpose. **Please replace the IP addresses of
-your actual network**
-
**SETUP-SVM-15**: Associate the IPv4 internal subnet ``ipv4-int-subnet2`` to the Neutron router ``ipv6-router``.
.. code-block:: bash
@@ -166,11 +160,11 @@ your actual network**
Prepare Image, Metadata and Keypair for Service VM
**************************************************
-**SETUP-SVM-16**: Download ``fedora20`` image which would be used as ``vRouter``
+**SETUP-SVM-16**: Download ``fedora22`` image which would be used as ``vRouter``
.. code-block:: bash
- glance image-create --name 'Fedora20' --disk-format qcow2 --container-format bare --is-public true --copy-from http://cloud.fedoraproject.org/fedora-20.x86_64.qcow2
+ glance image-create --name 'Fedora22' --disk-format qcow2 --container-format bare --is-public true --copy-from https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Base-22-20150521.x86_64.qcow2
**SETUP-SVM-17**: Create a keypair
@@ -178,18 +172,14 @@ Prepare Image, Metadata and Keypair for Service VM
nova keypair-add vRouterKey > ~/vRouterKey
-**SETUP-SVM-18**: Copy the contents from the following url to ``metadata.txt``, i.e. preparing metadata which enables
-IPv6 router functionality inside ``vRouter``
+**SETUP-SVM-18**: Create ports for ``vRouter`` and both the VMs with specific MAC addresses.
.. code-block:: bash
- http://fpaste.org/303942/50781923/
-
-Please note that this ``metadata.txt`` will enable the ``vRouter`` to automatically spawn a ``radvd`` daemon,
-which advertises its IPv6 subnet prefix ``2001:db8:0:2::/64`` in RA (Router Advertisement) message through
-its ``eth1`` interface to other VMs on ``ipv4-int-network1``. The ``radvd`` daemon also advertises the routing
-information, which routes to ``2001:db8:0:2::/64`` subnet, in RA (Router Advertisement) message through its
-``eth0`` interface to ``eth1`` interface of ``ipv6-router`` on ``ipv4-int-network2``.
+ neutron port-create --name eth0-vRouter --mac-address fa:16:3e:11:11:11 ipv4-int-network2
+ neutron port-create --name eth1-vRouter --mac-address fa:16:3e:22:22:22 ipv4-int-network1
+ neutron port-create --name eth0-VM1 --mac-address fa:16:3e:33:33:33 ipv4-int-network1
+ neutron port-create --name eth0-VM2 --mac-address fa:16:3e:44:44:44 ipv4-int-network1
**********************************************************************************************************
Boot Service VM (``vRouter``) with ``eth0`` on ``ipv4-int-network2`` and ``eth1`` on ``ipv4-int-network1``
@@ -198,14 +188,22 @@ Boot Service VM (``vRouter``) with ``eth0`` on ``ipv4-int-network2`` and ``eth1`
Let us boot the service VM (``vRouter``) with ``eth0`` interface on ``ipv4-int-network2`` connecting to ``ipv6-router``,
and ``eth1`` interface on ``ipv4-int-network1`` connecting to ``ipv4-router``.
-**SETUP-SVM-19**: Boot the ``vRouter`` using ``Fedora20`` image on the OpenStack Compute Node with hostname
+**SETUP-SVM-19**: Boot the ``vRouter`` using ``Fedora22`` image on the OpenStack Compute Node with hostname
``opnfv-os-compute``
.. code-block:: bash
- nova boot --image Fedora20 --flavor m1.small --user-data ./metadata.txt --availability-zone nova:opnfv-os-compute --nic net-id=$(neutron net-list | grep -w ipv4-int-network2 | awk '{print $2}') --nic net-id=$(neutron net-list | grep -w ipv4-int-network1 | awk '{print $2}') --key-name vRouterKey vRouter
+ nova boot --image Fedora22 --flavor m1.small --user-data /opt/stack/opnfv_os_ipv6_poc/metadata.txt --availability-zone nova:opnfv-os-compute --nic port-id=$(neutron port-list | grep -w eth0-vRouter | awk '{print $2}') --nic port-id=$(neutron port-list | grep -w eth1-vRouter | awk '{print $2}') --key-name vRouterKey vRouter
+
+Please **note** that ``/opt/stack/opnfv_os_ipv6_poc/metadata.txt`` is used to enable the ``vRouter`` to automatically
+spawn a ``radvd`` daemon, which will:
+
+* Act as an IPv6 vRouter that advertises RA (Router Advertisement) messages with prefix ``2001:db8:0:2::/64`` on its
+  internal interface (``eth1``).
+* Advertise RA (Router Advertisement) messages with just route information on its ``eth0`` interface so that ``ipv6-router`` can
+  automatically add a downstream route to subnet ``2001:db8:0:2::/64`` whose next hop would be the ``eth0`` interface
+  of ``vRouter``.
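The port-id lookup embedded in the ``nova boot`` command above works by filtering ``neutron port-list`` output. A small sketch of that pipeline — the sample table and UUIDs below are invented for illustration:

```shell
# Extract a port UUID from `neutron port-list` style output by port
# name, exactly as the $(neutron port-list | grep -w ... | awk ...)
# sub-command does above. Sample data is made up.
port_list='+--------------------------------------+--------------+-------------------+
| id                                   | name         | mac_address       |
+--------------------------------------+--------------+-------------------+
| 1111aaaa-1111-aaaa-1111-aaaa1111aaaa | eth0-vRouter | fa:16:3e:11:11:11 |
| 2222bbbb-2222-bbbb-2222-bbbb2222bbbb | eth1-vRouter | fa:16:3e:22:22:22 |
+--------------------------------------+--------------+-------------------+'

port_id_by_name() {
  # grep -w matches the whole port name; awk column 2 is the id field.
  echo "$port_list" | grep -w "$1" | awk '{print $2}'
}

port_id_by_name eth0-vRouter   # → 1111aaaa-1111-aaaa-1111-aaaa1111aaaa
```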
-**SETUP-SVM-20**: Verify that ``Fedora20`` image boots up successfully and the ``ssh`` keys are properly injected
+**SETUP-SVM-20**: Verify that the ``Fedora22`` image boots up successfully and the ``vRouter`` has ``ssh`` keys properly injected
.. code-block:: bash
@@ -239,13 +237,13 @@ options or via ``meta-data``.
.. code-block:: bash
- nova boot --image cirros-0.3.4-x86_64-uec --flavor m1.tiny --nic net-id=$(neutron net-list | grep -w ipv4-int-network1 | awk '{print $2}') --availability-zone nova:opnfv-os-controller --key-name vRouterKey VM1
+ nova boot --image cirros-0.3.4-x86_64-uec --flavor m1.tiny --nic port-id=$(neutron port-list | grep -w eth0-VM1 | awk '{print $2}') --availability-zone nova:opnfv-os-controller --key-name vRouterKey --user-data /opt/stack/opnfv_os_ipv6_poc/set_mtu.sh VM1
**SETUP-SVM-22**: Create VM2 on OpenStack Compute Node with hostname ``opnfv-os-compute``
.. code-block:: bash
- nova boot --image cirros-0.3.4-x86_64-uec --flavor m1.tiny --nic net-id=$(neutron net-list | grep -w ipv4-int-network1 | awk '{print $2}') --availability-zone nova:opnfv-os-compute --key-name vRouterKey VM2
+ nova boot --image cirros-0.3.4-x86_64-uec --flavor m1.tiny --nic port-id=$(neutron port-list | grep -w eth0-VM2 | awk '{print $2}') --availability-zone nova:opnfv-os-compute --key-name vRouterKey --user-data /opt/stack/opnfv_os_ipv6_poc/set_mtu.sh VM2
**SETUP-SVM-23**: Confirm that both the VMs are successfully booted.
@@ -276,29 +274,16 @@ Now let us configure the IPv6 address on the <qr-xxx> interface.
router_interface=$(ip a s | grep -w "global qr-*" | awk '{print $7}')
ip -6 addr add 2001:db8:0:1::1 dev $router_interface
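The ``router_interface`` extraction above keeps the ``ip a s`` line for the ``qr-xxx`` interface and takes its seventh whitespace-separated field. A sketch against a canned line of output (addresses and interface name invented):

```shell
# One line of `ip a s` output as it appears for the router's qr-xxx
# interface; in the namespace you would run the real command instead.
sample_line='    inet 20.0.0.1/24 brd 20.0.0.255 scope global qr-42968b9e-62'

# Field 7 on the "scope global" line is the interface name.
router_interface=$(echo "$sample_line" | grep -w "global qr-*" | awk '{print $7}')
echo "$router_interface"   # → qr-42968b9e-62
```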
-**SETUP-SVM-26**: Copy the following contents to some file, e.g. ``/tmp/br-ex.radvd.conf``
-
-.. code-block:: bash
-
- interface $router_interface
- {
- AdvSendAdvert on;
- MinRtrAdvInterval 3;
- MaxRtrAdvInterval 10;
- prefix 2001:db8:0:1::/64
- {
- AdvOnLink on;
- AdvAutonomous on;
- };
- };
+**SETUP-SVM-26**: Update the file ``/opt/stack/opnfv_os_ipv6_poc/scenario2/radvd.conf``,
+i.e. replace the ``$router_interface`` placeholder with the actual ``qr-xxx`` interface name.
-**SETUP-SVM-27**: Spawn a ``radvd`` daemon to simulate an external router. This ``radvd`` daemon advertises its
-IPv6 subnet prefix ``2001:db8:0:1::/64`` in RA (Router Advertisement) message through its ``eth1`` interface to
-``eth0`` interface of ``vRouter`` on ``ipv4-int-network2``.
+**SETUP-SVM-27**: Spawn a ``radvd`` daemon to simulate an external router. This ``radvd`` daemon advertises an IPv6
+subnet prefix of ``2001:db8:0:1::/64`` using RA (Router Advertisement) messages on its ``$router_interface`` so that the
+``eth0`` interface of ``vRouter`` automatically configures an IPv6 SLAAC address.
.. code-block:: bash
- $radvd -C /tmp/br-ex.radvd.conf -p /tmp/br-ex.pid.radvd -m syslog
+ $radvd -C /opt/stack/opnfv_os_ipv6_poc/scenario2/radvd.conf -p /tmp/br-ex.pid.radvd -m syslog
**SETUP-SVM-28**: Configure the ``$router_interface`` process entries to process the RA (Router Advertisement)
message from ``vRouter``, and automatically add a downstream route pointing to the LLA (Link Local Address) of
@@ -311,12 +296,26 @@ message from ``vRouter``, and automatically add a downstream route pointing to t
**SETUP-SVM-29**: Please note that after the vRouter successfully initializes and starts sending RA (Router
Advertisement) message (**SETUP-SVM-20**), you would see an IPv6 route to the ``2001:db8:0:2::/64`` prefix
-(subnet) reachable via LLA (Link Local Address) of ``eth0`` interface of the ``vRouter``. You can execute the
-following command to list the IPv6 routes.
+(subnet) reachable via LLA (Link Local Address) ``fe80::f816:3eff:fe11:1111`` of ``eth0`` interface of the
+``vRouter``. You can execute the following command to list the IPv6 routes.
.. code-block:: bash
ip -6 route show
+ 2001:db8:0:1::1 dev qr-42968b9e-62 proto kernel metric 256
+ 2001:db8:0:1::/64 dev qr-42968b9e-62 proto kernel metric 256 expires 86384sec
+ 2001:db8:0:2::/64 via fe80::f816:3eff:fe11:1111 dev qr-42968b9e-62 proto ra metric 1024 expires 29sec
+ fe80::/64 dev qg-3736e0c7-7c proto kernel metric 256
+ fe80::/64 dev qr-42968b9e-62 proto kernel metric 256
+
+**SETUP-SVM-30**: If all goes well, the IPv6 addresses assigned to the VMs would be as follows:
+
+.. code-block:: bash
+
+ vRouter eth0 interface would have the following IPv6 address: 2001:db8:0:1:f816:3eff:fe11:1111/64
+ vRouter eth1 interface would have the following IPv6 address: 2001:db8:0:2::1/64
+ VM1 would have the following IPv6 address: 2001:db8:0:2:f816:3eff:fe33:3333/64
+ VM2 would have the following IPv6 address: 2001:db8:0:2:f816:3eff:fe44:4444/64
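For context (not part of the guide's steps): with SLAAC, the interface identifier is the EUI-64 derived from the port's MAC address — flip the universal/local bit of the first octet and insert ``ff:fe`` in the middle — which is why the addresses above end in suffixes derived from the MACs assigned in **SETUP-SVM-18**. A minimal sketch:

```shell
# Derive the EUI-64 interface identifier from a MAC address, e.g.
# fa:16:3e:33:33:33 -> f816:3eff:fe33:3333 (matches VM1 above).
eui64_from_mac() {
  # $1 is a MAC like fa:16:3e:33:33:33; split it on ':'.
  local IFS=:
  set -- $1
  # Flip bit 0x02 (universal/local) of the first octet.
  first=$(printf '%02x' $(( 0x$1 ^ 0x02 )))
  printf '%s%s:%sff:fe%s:%s%s\n' "$first" "$2" "$3" "$4" "$5" "$6"
}

eui64_from_mac fa:16:3e:33:33:33   # → f816:3eff:fe33:3333
```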
********************************
Testing to Verify Setup Complete
@@ -327,7 +326,7 @@ using ``SLAAC`` with prefix ``2001:db8:0:2::/64`` from ``vRouter``.
Please note that you need to get the IPv4 address associated to VM1. This can be inferred from ``nova list`` command.
-**SETUP-SVM-30**: ``ssh`` VM1
+**SETUP-SVM-31**: ``ssh`` VM1
.. code-block:: bash
@@ -336,13 +335,13 @@ Please note that you need to get the IPv4 address associated to VM1. This can be
If everything goes well, ``ssh`` will be successful and you will be logged into VM1. Run some commands to verify
that IPv6 addresses are configured on ``eth0`` interface.
-**SETUP-SVM-31**: Show an IPv6 address with a prefix of ``2001:db8:0:2::/64``
+**SETUP-SVM-32**: Show an IPv6 address with a prefix of ``2001:db8:0:2::/64``
.. code-block:: bash
ip address show
-**SETUP-SVM-32**: ping some external IPv6 address, e.g. ``ipv6-router``
+**SETUP-SVM-33**: ping some external IPv6 address, e.g. ``ipv6-router``
.. code-block:: bash
@@ -351,7 +350,7 @@ that IPv6 addresses are configured on ``eth0`` interface.
If the above ping6 command succeeds, it implies that ``vRouter`` was able to successfully forward the IPv6 traffic
to reach external ``ipv6-router``.
-**SETUP-SVM-33**: When all tests show that the setup works as expected, You can now exit the ``ipv6-router`` namespace.
+**SETUP-SVM-34**: When all tests show that the setup works as expected, you can now exit the ``ipv6-router`` namespace.
.. code-block:: bash
@@ -369,10 +368,10 @@ this IPv6 vRouter.
Sample Network Topology of this Setup through Horizon UI
********************************************************
-The sample network topology of above setup is shown in Horizon UI as follows :numref:`figure3`:
+The sample network topology of the above setup is shown in Horizon UI as follows in :numref:`s2-figure3`:
.. figure:: images/ipv6-sample-in-horizon.png
- :name: figure3
+ :name: s2-figure3
:width: 100%
Sample Network Topology in Horizon UI
diff --git a/docs/setupservicevm/5-ipv6-configguide-scenario-1-native-os.rst b/docs/setupservicevm/5-ipv6-configguide-scenario-1-native-os.rst
index 266acb1..b6c92fc 100644
--- a/docs/setupservicevm/5-ipv6-configguide-scenario-1-native-os.rst
+++ b/docs/setupservicevm/5-ipv6-configguide-scenario-1-native-os.rst
@@ -2,24 +2,115 @@
Scenario 1 - Native OpenStack Environment
=========================================
-Scenario 1 is the native OpenStack environment. Because the anti-spoofing rule of Security Group feature in OpenStack
-prevents a VM from forwarding packets, we need to work around Security Group feature in the native OpenStack
-environment.
+Scenario 1 is the native OpenStack environment. Although the instructions are based on Liberty, they can be
+applied to Kilo in the same way. Because the anti-spoofing rules of the Security Group feature in OpenStack prevent
+a VM from forwarding packets, we need to disable the Security Group feature in the native OpenStack environment.
For exemplary purpose, we assume:
-* A two-node setup of OpenStack environment is used
-* The hostname of OpenStack Controller+Network+Compute Node is ``opnfv-os-controller``
-* The hostname of OpenStack Compute Node is ``opnfv-os-compute``
+* A two-node setup of OpenStack environment is used as shown in :numref:`s1-figure1`
+* The hostname of OpenStack Controller+Network+Compute Node is ``opnfv-os-controller``, and the host IP address
+  is ``192.168.0.10``
+* The hostname of OpenStack Compute Node is ``opnfv-os-compute``, and the host IP address is ``192.168.0.20``
* Ubuntu 14.04 is installed
* We use ``opnfv`` as username to login.
-* We use ``devstack`` to install OpenStack Kilo
+* We use ``devstack`` to install OpenStack Liberty. Please note that OpenStack Kilo can be used as well.
-***********************************
-Verify OpenStack is Setup Correctly
-***********************************
+.. figure:: images/ipv6-topology-scenario-1.png
+ :name: s1-figure1
+ :width: 100%
-**OS-NATIVE-1**: Show hypervisor list
+ Underlay Network Topology - Scenario 1
+
+**Please note that the IP addresses shown in** :numref:`s1-figure1`
+**are for exemplary purpose. You need to configure your public IP
+address connecting to the Internet according to your actual network
+infrastructure, and you need to make sure the private IP addresses
+are not conflicting with other subnets**.
+
+************
+Prerequisite
+************
+
+**OS-NATIVE-0**: Clone the following GitHub repository to get the configuration and metadata files
+
+.. code-block:: bash
+
+ git clone https://github.com/sridhargaddam/opnfv_os_ipv6_poc.git /opt/stack/opnfv_os_ipv6_poc
+
+********************************
+Set up OpenStack Controller Node
+********************************
+
+We assume the hostname is ``opnfv-os-controller``, and the host IP address is ``192.168.0.10``.
+
+**OS-NATIVE-N-1**: Clone ``stable/liberty`` ``devstack`` code base.
+
+.. code-block:: bash
+
+ git clone https://github.com/openstack-dev/devstack.git -b stable/liberty
+
+**OS-NATIVE-N-2**: Copy ``local.conf.controller`` to ``devstack`` as ``local.conf``
+
+.. code-block:: bash
+
+ cp /opt/stack/local.conf.controller ~/devstack/local.conf
+
+**OS-NATIVE-N-3**: If you want to modify any ``devstack`` configuration, update ``local.conf`` now.
+
+**OS-NATIVE-N-4**: Start the ``devstack`` installation.
+
+.. code-block:: bash
+
+ cd ~/devstack
+ ./stack.sh
+
+**OS-NATIVE-N-5**: If all goes well, you should see the following output.
+
+.. code-block:: bash
+
+ This is your host IP address: 192.168.0.10
+ This is your host IPv6 address: ::1
+ Horizon is now available at http://192.168.0.10/
+ Keystone is serving at http://192.168.0.10:5000/
+ The default users are: admin and demo
+ The password: password
+
+*****************************
+Set up OpenStack Compute Node
+*****************************
+
+We assume the hostname is ``opnfv-os-compute``, and the host IP address is ``192.168.0.20``.
+
+**OS-NATIVE-M-1**: Clone ``stable/liberty`` ``devstack`` code base.
+
+.. code-block:: bash
+
+ git clone https://github.com/openstack-dev/devstack.git -b stable/liberty
+
+**OS-NATIVE-M-2**: Copy ``local.conf.compute`` to ``devstack`` as ``local.conf``
+
+.. code-block:: bash
+
+ cp /opt/stack/local.conf.compute ~/devstack/local.conf
+
+**OS-NATIVE-M-3**: If you want to modify any ``devstack`` configuration, update ``local.conf`` now.
+
+**OS-NATIVE-M-4**: Start the ``devstack`` installation.
+
+.. code-block:: bash
+
+ cd ~/devstack
+ ./stack.sh
+
+**OS-NATIVE-M-5**: If all goes well, you should see the following output.
+
+.. code-block:: bash
+
+ This is your host IP address: 192.168.0.20
+ This is your host IPv6 address: ::1
+
+**OS-NATIVE-M-6 (OPTIONAL)**: You can verify that OpenStack is set up correctly by showing the hypervisor list
.. code-block:: bash
@@ -31,11 +122,17 @@ Verify OpenStack is Setup Correctly
| 2 | opnfv-os-compute | up | enabled |
+----+------------------------------------+---------+------------+
-**********************************************
-Disable Security Groups in OpenStack ML2 Setup
-**********************************************
+********************************************************
+**Note**: Disable Security Groups in OpenStack ML2 Setup
+********************************************************
+
+Please note that the Security Groups feature has been disabled automatically through the ``local.conf`` configuration file
+during the setup procedure of OpenStack on both the Controller Node and the Compute Node.
-**OS-NATIVE-2**: Change the settings in ``/etc/neutron/plugins/ml2/ml2_conf.ini`` as follows
+If you are an experienced user who installs OpenStack in a different way, please use the following
+instructions to verify that Security Groups are disabled and that the configuration matches the settings below.
+
+**OS-NATIVE-SEC-1**: Change the settings in ``/etc/neutron/plugins/ml2/ml2_conf.ini`` as follows
.. code-block:: bash
@@ -44,7 +141,7 @@ Disable Security Groups in OpenStack ML2 Setup
enable_security_group = False
firewall_driver = neutron.agent.firewall.NoopFirewallDriver
-**OS-NATIVE-3**: Change the settings in ``/etc/nova/nova.conf`` as follows
+**OS-NATIVE-SEC-2**: Change the settings in ``/etc/nova/nova.conf`` as follows
.. code-block:: bash
@@ -53,27 +150,11 @@ Disable Security Groups in OpenStack ML2 Setup
security_group_api = nova
firewall_driver = nova.virt.firewall.NoopFirewallDriver
-***********************************************************************
-Prepare Fedora22 Image, Configuration and Metadata Files for Service VM
-***********************************************************************
-
-**OS-NATIVE-4**: Clone the following GitHub repository to get the configuration and metadata files
-
-.. code-block:: bash
-
- git clone https://github.com/sridhargaddam/opnfv_os_ipv6_poc.git /opt/stack/opnfv_os_ipv6_poc
-
-**OS-NATIVE-5**: Download ``fedora22`` image which would be used for ``vRouter``
-
-.. code-block:: bash
-
- wget https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Base-22-20150521.x86_64.qcow2
-
*********************************
Set Up Service VM as Ipv6 vRouter
*********************************
-**OS-NATIVE-5**: Now we assume that OpenStack multi-node setup is up and running. The following
+**OS-NATIVE-SETUP-1**: Now we assume that OpenStack multi-node setup is up and running. The following
commands should be executed:
.. code-block:: bash
@@ -81,19 +162,19 @@ commands should be executed:
cd ~/devstack
source openrc admin demo
-**OS-NATIVE-6**: Download ``fedora22`` image which would be used for ``vRouter``
+**OS-NATIVE-SETUP-2**: Download ``fedora22`` image which would be used for ``vRouter``
.. code-block:: bash
wget https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Base-22-20150521.x86_64.qcow2
-**OS-NATIVE-7**: Import Fedora22 image to ``glance``
+**OS-NATIVE-SETUP-3**: Import the Fedora22 image into ``glance``
.. code-block:: bash
- glance image-create --name 'Fedora20' --disk-format qcow2 --container-format bare --file ./Fedora-Cloud-Base-22-20150521.x86_64.qcow2
+ glance image-create --name 'Fedora22' --disk-format qcow2 --container-format bare --file ./Fedora-Cloud-Base-22-20150521.x86_64.qcow2
-**OS-NATIVE-8**: Create Neutron routers ``ipv4-router`` and ``ipv6-router`` which need to provide external
+**OS-NATIVE-SETUP-4**: Create Neutron routers ``ipv4-router`` and ``ipv6-router``, which will provide external
connectivity.
.. code-block:: bash
@@ -101,14 +182,14 @@ connectivity.
neutron router-create ipv4-router
neutron router-create ipv6-router
-**OS-NATIVE-9**: Create an external network/subnet ``ext-net`` using the appropriate values based on the
+**OS-NATIVE-SETUP-5**: Create an external network/subnet ``ext-net`` using the appropriate values based on the
data-center physical network setup.
.. code-block:: bash
neutron net-create --router:external ext-net
-**OS-NATIVE-10**: If your ``opnfv-os-controller`` node has two interfaces ``eth0`` and ``eth1``,
+**OS-NATIVE-SETUP-6**: If your ``opnfv-os-controller`` node has two interfaces ``eth0`` and ``eth1``,
and ``eth1`` is used for external connectivity, move the IP address of ``eth1`` to ``br-ex``.
Please note that the IP address ``198.59.156.113`` and the related subnet and gateway addresses in the command
@@ -124,7 +205,7 @@ below are for exemplary purpose. **Please replace them with the IP addresses of
sudo ip route add default via 198.59.156.1 dev br-ex
neutron subnet-create --disable-dhcp --allocation-pool start=198.59.156.251,end=198.59.156.254 --gateway 198.59.156.1 ext-net 198.59.156.0/24
-**OS-NATIVE-11**: Verify that ``br-ex`` now has the original external IP address, and that the default route is on
+**OS-NATIVE-SETUP-7**: Verify that ``br-ex`` now has the original external IP address, and that the default route is on
``br-ex``
.. code-block:: bash
@@ -143,7 +224,7 @@ below are for exemplary purpose. **Please replace them with the IP addresses of
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1
198.59.156.0/24 dev br-ex proto kernel scope link src 198.59.156.113
-**OS-NATIVE-12**: Create Neutron networks ``ipv4-int-network1`` and ``ipv6-int-network2``
+**OS-NATIVE-SETUP-8**: Create Neutron networks ``ipv4-int-network1`` and ``ipv6-int-network2``
with port_security disabled
.. code-block:: bash
@@ -151,7 +232,7 @@ with port_security disabled
neutron net-create --port_security_enabled=False ipv4-int-network1
neutron net-create --port_security_enabled=False ipv6-int-network2
-**OS-NATIVE-13**: Create IPv4 subnet ``ipv4-int-subnet1`` in the internal network ``ipv4-int-network1``,
+**OS-NATIVE-SETUP-9**: Create IPv4 subnet ``ipv4-int-subnet1`` in the internal network ``ipv4-int-network1``,
and associate it to ``ipv4-router``.
.. code-block:: bash
@@ -159,15 +240,15 @@ and associate it to ``ipv4-router``.
neutron subnet-create --name ipv4-int-subnet1 --dns-nameserver 8.8.8.8 ipv4-int-network1 20.0.0.0/24
neutron router-interface-add ipv4-router ipv4-int-subnet1
-**OS-NATIVE-14**: Associate the ``ext-net`` to the Neutron routers ``ipv4-router`` and ``ipv6-router``.
+**OS-NATIVE-SETUP-10**: Associate the ``ext-net`` to the Neutron routers ``ipv4-router`` and ``ipv6-router``.
.. code-block:: bash
neutron router-gateway-set ipv4-router ext-net
neutron router-gateway-set ipv6-router ext-net
-**OS-NATIVE-15**: Create IPv4 subnet ``ipv4-int-subnet2`` and IPv6 subnet ``ipv6-int-subnet2`` in
-the internal network ``ipv6-int-network2``, and associate them to ``ipv6-router``
+**OS-NATIVE-SETUP-11**: Create two subnets, one IPv4 subnet ``ipv4-int-subnet2`` and one IPv6 subnet
+``ipv6-int-subnet2`` in ``ipv6-int-network2``, and associate both subnets to ``ipv6-router``
.. code-block:: bash
@@ -176,13 +257,13 @@ the internal network ``ipv6-int-network2``, and associate them to ``ipv6-router`
neutron router-interface-add ipv6-router ipv4-int-subnet2
neutron router-interface-add ipv6-router ipv6-int-subnet2
-**OS-NATIVE-16**: Create a keypair
+**OS-NATIVE-SETUP-12**: Create a keypair
.. code-block:: bash
nova keypair-add vRouterKey > ~/vRouterKey
-**OS-NATIVE-17**: Create ports for vRouter (with some specific MAC address - basically for automation -
+**OS-NATIVE-SETUP-13**: Create ports for vRouter (with specific MAC addresses, basically for automation,
so that we know in advance the IPv6 addresses that will be assigned to the ports).
.. code-block:: bash
@@ -190,31 +271,31 @@ to know the IPv6 addresses that would be assigned to the port).
neutron port-create --name eth0-vRouter --mac-address fa:16:3e:11:11:11 ipv6-int-network2
neutron port-create --name eth1-vRouter --mac-address fa:16:3e:22:22:22 ipv4-int-network1
-**OS-NATIVE-18**: Create ports for VM1 and VM2.
+**OS-NATIVE-SETUP-14**: Create ports for VM1 and VM2.
.. code-block:: bash
neutron port-create --name eth0-VM1 --mac-address fa:16:3e:33:33:33 ipv4-int-network1
neutron port-create --name eth0-VM2 --mac-address fa:16:3e:44:44:44 ipv4-int-network1
-**OS-NATIVE-19**: Update ``ipv6-router`` with routing information to subnet ``2001:db8:0:2::/64``
+**OS-NATIVE-SETUP-15**: Update ``ipv6-router`` with routing information to subnet ``2001:db8:0:2::/64``
.. code-block:: bash
neutron router-update ipv6-router --routes type=dict list=true destination=2001:db8:0:2::/64,nexthop=2001:db8:0:1:f816:3eff:fe11:1111
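The nexthop address above is not arbitrary: with the fixed MAC ``fa:16:3e:11:11:11`` on a subnet with prefix ``2001:db8:0:1::/64``, SLAAC derives the interface identifier via Modified EUI-64 (flip the universal/local bit of the first octet and insert ``ff:fe`` between the third and fourth octets). This small sketch of that derivation, our own helper rather than part of the original guide, lets the nexthop be computed instead of hard-coded:

```shell
#!/usr/bin/env bash
# eui64 PREFIX MAC
# Prints PREFIX followed by the Modified EUI-64 interface identifier
# derived from MAC. PREFIX is the subnet prefix ending in ':'.
eui64() {
    local prefix=$1 mac=$2 a b c d e f flipped
    IFS=: read -r a b c d e f <<< "$mac"
    # Flip the universal/local bit (0x02) of the first octet...
    flipped=$(printf '%02x' $(( 0x$a ^ 0x02 )))
    # ...and insert ff:fe between the third and fourth octets.
    printf '%s%s%s:%sff:fe%s:%s%s\n' "$prefix" "$flipped" "$b" "$c" "$d" "$e" "$f"
}

eui64 "2001:db8:0:1:" "fa:16:3e:11:11:11"   # prints 2001:db8:0:1:f816:3eff:fe11:1111
```

The same derivation explains the VM addresses listed later in this guide (e.g. MAC ``fa:16:3e:33:33:33`` on the ``2001:db8:0:2::/64`` subnet yields ``2001:db8:0:2:f816:3eff:fe33:3333``).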
-**OS-NATIVE-20**: Boot Service VM (``vRouter``), VM1 and VM2
+**OS-NATIVE-SETUP-16**: Boot Service VM (``vRouter``), VM1 and VM2
.. code-block:: bash
- nova boot --image Fedora20 --flavor m1.small --user-data /opt/stack/opnfv_os_ipv6_poc/metadata.txt --availability-zone nova:opnfv-os-compute --nic port-id=$(neutron port-list | grep -w eth0-vRouter | awk '{print $2}') --nic port-id=$(neutron port-list | grep -w eth1-vRouter | awk '{print $2}') --key-name vRouterKey vRouter
+ nova boot --image Fedora22 --flavor m1.small --user-data /opt/stack/opnfv_os_ipv6_poc/metadata.txt --availability-zone nova:opnfv-os-compute --nic port-id=$(neutron port-list | grep -w eth0-vRouter | awk '{print $2}') --nic port-id=$(neutron port-list | grep -w eth1-vRouter | awk '{print $2}') --key-name vRouterKey vRouter
nova list
nova console-log vRouter # Please wait about 10 to 15 minutes until the necessary packages (like radvd) are installed and vRouter is up.
nova boot --image cirros-0.3.4-x86_64-uec --flavor m1.tiny --nic port-id=$(neutron port-list | grep -w eth0-VM1 | awk '{print $2}') --availability-zone nova:opnfv-os-controller --key-name vRouterKey --user-data /opt/stack/opnfv_os_ipv6_poc/set_mtu.sh VM1
nova boot --image cirros-0.3.4-x86_64-uec --flavor m1.tiny --nic port-id=$(neutron port-list | grep -w eth0-VM2 | awk '{print $2}') --availability-zone nova:opnfv-os-compute --key-name vRouterKey --user-data /opt/stack/opnfv_os_ipv6_poc/set_mtu.sh VM2
nova list # Verify that all the VMs are in ACTIVE state.
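Since booting vRouter can take 10 to 15 minutes before radvd is installed and running, scripts that drive this setup need to poll rather than sleep for a fixed time. A generic retry helper along these lines (our own sketch, not part of the original instructions) could wrap a check such as ``nova show vRouter | grep -q ACTIVE``:

```shell
#!/usr/bin/env bash
# wait_for MAX_TRIES INTERVAL CMD [ARGS...]
# Runs CMD repeatedly until it succeeds; sleeps INTERVAL seconds between
# attempts and gives up (returns 1) after MAX_TRIES failures.
wait_for() {
    local tries=$1 interval=$2 i=0
    shift 2
    until "$@"; do
        i=$((i + 1))
        [ "$i" -ge "$tries" ] && return 1
        sleep "$interval"
    done
}

# Hypothetical usage against a live cloud (assumes the nova CLI is configured):
# wait_for 90 10 sh -c "nova show vRouter | grep -q ACTIVE"
```

Polling the server status this way still does not guarantee that cloud-init inside the guest has finished; checking ``nova console-log`` for a radvd message is the more reliable signal.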
-**OS-NATIVE-21**: If all goes well, the IPv6 addresses assigned to the VMs would be as shown as follows:
+**OS-NATIVE-SETUP-17**: If all goes well, the IPv6 addresses assigned to the VMs will be as follows:
.. code-block:: bash
@@ -223,24 +304,9 @@ to know the IPv6 addresses that would be assigned to the port).
VM1 would have the following IPv6 address: 2001:db8:0:2:f816:3eff:fe33:3333/64
VM2 would have the following IPv6 address: 2001:db8:0:2:f816:3eff:fe44:4444/64
-**OS-NATIVE-22**: To ``SSH`` to vRouter, you can execute the following command.
+**OS-NATIVE-SETUP-18**: To ``SSH`` into vRouter, you can execute the following command.
.. code-block:: bash
sudo ip netns exec qrouter-$(neutron router-list | grep -w ipv6-router | awk '{print $2}') ssh -i ~/vRouterKey fedora@2001:db8:0:1:f816:3eff:fe11:1111
-*******************
-Miscellaneour Notes
-*******************
-
-We are adding some static routes to the ``ipv6-router``. For whatever reason, if we want to delete the router
-or dissociate the ``ipv6-router`` from ``ipv6-int-subnet2``, ``Neutron`` will not allow this operation because
-the static route requires the ``ipv6-int-subnet2`` subnet.
-
-In order to work around this issue, and to clear the static routes associated to the ``ipv6-router``,
-you may execute the following:
-
-.. code-block:: bash
-
- neutron router-update ipv6-router --routes action=clear
-
diff --git a/docs/setupservicevm/images/ipv6-topology-scenario-1.png b/docs/setupservicevm/images/ipv6-topology-scenario-1.png
new file mode 100644
index 0000000..118ff4c
--- /dev/null
+++ b/docs/setupservicevm/images/ipv6-topology-scenario-1.png
Binary files differ
diff --git a/docs/setupservicevm/images/ipv6-topology-scenario-2.png b/docs/setupservicevm/images/ipv6-topology-scenario-2.png
new file mode 100644
index 0000000..15f39c5
--- /dev/null
+++ b/docs/setupservicevm/images/ipv6-topology-scenario-2.png
Binary files differ
diff --git a/docs/setupservicevm/index.rst b/docs/setupservicevm/index.rst
index d116c5f..bc72f31 100644
--- a/docs/setupservicevm/index.rst
+++ b/docs/setupservicevm/index.rst
@@ -3,19 +3,25 @@ Setting Up a Service VM as an IPv6 vRouter
==========================================
:Project: IPv6, http://wiki.opnfv.org/ipv6_opnfv_project
-:Editors: Bin Hu (AT&T)
-:Authors: Bin Hu (AT&T), Sridhar Gaddam (RedHat)
+:Editors: Bin Hu (AT&T), Sridhar Gaddam (RedHat)
+:Authors: Sridhar Gaddam (RedHat), Bin Hu (AT&T)
-:Abstract: This document provides the users with installation guidelines to create a Service VM as
- an IPv6 vRouter in OPNFV environment, i.e. integrated OpenStack with Open Daylight
- environment. There are three scenarios. Scenario 1 is pre-OPNFV environment, i.e. a native
- OpenStack environment without Open Daylight Controller. Scenario 2 is an OPNFV environment
- where OpenStack is integrated with Open Daylight Official Lithium Release which not only
- does not support IPv6 L3 Routing but also has a bug in net-virt provider implementation
- that throws Java exception. Scenario 3 is similar to Scenario 2. However, Open Daylight
- Lithium is patched with a fix of Java exception. The complete set of instructions walk
- you through every step of preparing the infrastructure, setting up Open Daylight and
- OpenStack, creating service VM and IPv6 subnet, and testing and validating the setup.
+:Abstract:
+
+   This document provides users with installation guidelines to create a Service VM as
+   an IPv6 vRouter in an OPNFV environment, i.e. OpenStack integrated with an Open
+   Daylight environment. There are three scenarios.
+
+   * Scenario 1 is a pre-OPNFV environment, i.e. a native OpenStack environment
+     without an Open Daylight Controller.
+   * Scenario 2 is an OPNFV environment where OpenStack is integrated with the
+     Open Daylight Official Lithium Release. In this setup we use ODL for "Layer 2
+     connectivity" and the Neutron L3 agent for "Layer 3 routing". Because of a bug,
+     which was fixed recently and is not part of ODL SR3, we have to manually execute
+     certain commands to simulate an external IPv6 router in this setup.
+   * Scenario 3 is similar to Scenario 2. However, we use an Open Daylight Lithium
+     controller built from the latest stable/Lithium branch, which includes the fix.
+     In this scenario, we can fully automate the setup similar to Scenario 1.
.. toctree::
:numbered:
diff --git a/docs/setupservicevm/scenario-2.rst b/docs/setupservicevm/scenario-2.rst
index ff30973..3ffe43c 100644
--- a/docs/setupservicevm/scenario-2.rst
+++ b/docs/setupservicevm/scenario-2.rst
@@ -5,7 +5,9 @@ Scenario 2 - OpenStack + Open Daylight Lithium Official Release
Scenario 2 is the environment of OpenStack + Open Daylight Lithium Official Release. Because Lithium Official
Release does not support IPv6 L3 Routing, we need to enable Neutron L3 Agent instead of Open Daylight L3
function, while we still use Open Daylight for L2 switching. Because there is a bug in net-virt provider
-implementation, we need to use manual configuration to work around this bug / Java exception.
+implementation, we need to use manual configuration to simulate an external IPv6 router in our setup.
+
+Please note that although the instructions are based on OpenStack Kilo, they can be applied to Liberty in the same way.
.. toctree::
:numbered:
diff --git a/docs/setupservicevm/scenario-3.rst b/docs/setupservicevm/scenario-3.rst
index 0c5c432..db590fe 100644
--- a/docs/setupservicevm/scenario-3.rst
+++ b/docs/setupservicevm/scenario-3.rst
@@ -1,10 +1,10 @@
-===============================================================
+======================================================================
-Scenario 3 - OpenStack + Open Daylight Lithium Official Release
+Scenario 3 - OpenStack + Open Daylight Lithium with a Bug-Fix Patch
-===============================================================
+======================================================================
-Scenario 3 is the environment of OpenStack + Open Daylight Lithium Official Release. Because Lithium Official
-Release does not support IPv6 L3 Routing, we need to enable Neutron L3 Agent instead of Open Daylight L3
-function, while we still use Open Daylight for L2 switching.
+Scenario 3 is the environment of OpenStack + Open Daylight Lithium, where Lithium is patched with a fix for the
+net-virt provider implementation bug that throws a Java exception. Because Lithium still does not support IPv6 L3
+Routing, we need to enable the Neutron L3 Agent instead of the Open Daylight L3 function, while we still use Open
+Daylight for L2 switching.
.. toctree::
:numbered: