author     Bin Hu <bh526r@att.com>  2016-01-11 20:30:14 -0800
committer  Bin Hu <bh526r@att.com>  2016-01-12 04:35:28 +0000
commit     71f81460aec49af9937fd235e60ae9a4e696a856 (patch)
tree       90e9bdd55b0e6dabafd239805a2ee1257649cff0 /docs
parent     84fc5062e00343446d16075f3a03f3b3cc310b7f (diff)
JIRA: IPVSIX-29

Change-Id: If94577d45de4902224c3c292c2e1d7960605de14
Signed-off-by: Bin Hu <bh526r@att.com>
(cherry picked from commit 97bbcc7580ef411103960d3b7617077a15390e07)
Diffstat (limited to 'docs')
-rw-r--r--    docs/configguide/ipv6-configguide.rst    267
-rw-r--r--    docs/configguide/option-odl-l2.rst        398
-rw-r--r--    docs/configguide/option-pure-os.rst       222
3 files changed, 632 insertions, 255 deletions
diff --git a/docs/configguide/ipv6-configguide.rst b/docs/configguide/ipv6-configguide.rst
index e460c7d..d0a02d3 100644
--- a/docs/configguide/ipv6-configguide.rst
+++ b/docs/configguide/ipv6-configguide.rst
@@ -2,262 +2,19 @@
Setting Up a Service VM as an IPv6 vRouter
==========================================
-After OPNFV Brahmaputra Release base platform has been successfully installed through previous chapters, there are 11
-steps to set up a service VM as an IPv6 vRouter:
+In order to use the feature of setting up a service VM as an IPv6 vRouter, you need to install
+the OPNFV Brahmaputra Release base platform with either the pure OpenStack option
+or the Open Daylight L2-only option. Please see the instructions in the first two sections below.
-- `Step 1: Disable odl-l3 and Enable neutron-l3-agent`_
+For complete instructions and documentation, please see the third section, or refer to:
-- `Step 2: Start Open Daylight`_
+* IPv6 Configuration Guide: http://artifacts.opnfv.org/ipv6/docs/setupservicevm/index.html
+* IPv6 User Guide: http://artifacts.opnfv.org/ipv6/docs/gapanalysis/index.html
-- `Step 3: Start Open Stack on Controller Node`_
-
-- `Step 4: Start Open Stack on Compute Node`_
-
-- `Step 5: Create External Network Connectivity ext-net`_
-
-- `Step 6: Create IPv4 Subnet and Router with External Connectivity`_
-
-- `Step 7: Create IPv6 Subnet and Router with External Connectivity`_
-
-- `Step 8: Prepare Image, Metadata and Keypair for Service VM`_
-
-- `Step 9: Boot Service VM (vRouter) and other VMs in IPv6 Subnet`_
-
-- `Step 10: Spawn RADVD in vRouter`_
-
-- `Step 11: Testing to Verify Setup Complete`_
-
-Once the setup is complete, you can go to `Next Steps`_.
-
-*****************************************************
-_`Step 1: Disable odl-l3 and Enable neutron-l3-agent`
-*****************************************************
-
-This step is optional, and only needed if you didn't choose to enable neutron-l3-agent during previous installation of
-OPNFV Brahmaputra Release.
-
-If you have chosen to enable neutron-l3-agent during installation, please skip this step and directly go to
-`Step 2: Start Open Daylight`_.
-
-# Place holder for instructions of how to disable odl-l3 and enable neutron-l3-agent
-
-******************************
-_`Step 2: Start Open Daylight`
-******************************
-
-**Note: we assume that you have installed Open Daylight through OPNFV Installer in prior chapters. However, if Open Daylight is not installed, please go to** ``http://www.opendaylight.org/downloads`` **to download and install Open Daylight**
-
-ODL-1: Login to Open Daylight Controller Node. For the purpose of example, we use ``opnfv`` as username of login, and
-``opnfv-odl-controller`` as hostname of the Open Daylight Controller Node.
-
-ODL-2: Start a new terminal session, and change directory to where Open Daylight is installed. Here we use ``odl``
-directory name and ``Lithium SR2`` installation as an example.
-
- ``cd ~/odl/distribution-karaf-0.3.2-Lithium-SR2/bin``
-
-ODL-3: Run the ``karaf`` shell. Please note that it is recommended to run the command in a ``screen`` session.
-
-| ``screen -S ODL_Controller``
-| ``./karaf``
-
-ODL-4: You are now in the Karaf shell of Open Daylight. To explore the list of available features you can execute
-``feature:list``. In order to enable Open Daylight with Open Stack, you have to load the ``odl-ovsdb-openstack``
-feature.
-
- ``opendaylight-user@opnfv>feature:install odl-ovsdb-openstack``
-
-ODL-5: Verify that OVSDB feature is installed successfully.
-
-| ``opendaylight-user@opnfv>feature:list -i | grep ovsdb``
-| odl-ovsdb-openstack | 1.1.1-Lithium-SR1 | x | ovsdb-1.1.1-Lithium-SR1 | OpenDaylight :: OVSDB :: OpenStack Network Virtual
-| odl-ovsdb-southbound-api | 1.1.1-Lithium-SR1 | x | odl-ovsdb-southbound-1.1.1-Lithium-SR1 | OpenDaylight :: southbound :: api
-| odl-ovsdb-southbound-impl | 1.1.1-Lithium-SR1 | x | odl-ovsdb-southbound-1.1.1-Lithium-SR1 | OpenDaylight :: southbound :: impl
-| odl-ovsdb-southbound-impl-rest|1.1.1-Lithium-SR1 | x | odl-ovsdb-southbound-1.1.1-Lithium-SR1| OpenDaylight :: southbound :: impl :: REST
-| odl-ovsdb-southbound-impl-ui | 1.1.1-Lithium-SR1| x | odl-ovsdb-southbound-1.1.1-Lithium-SR1| OpenDaylight :: southbound :: impl :: UI
-| ``opendaylight-user@opnfv>``
-
-ODL-6: To view the logs, you can use the following commands (or alternately the file data/log/karaf.log).
-
-| ``opendaylight-user@opnfv>log:display``
-| ``opendaylight-user@opnfv>log:tail``
-
-ODL-7: To enable ODL DLUX UI, install the following features. Then you can navigate to
-``http://<opnfv-odl-controller IP address>:8181/index.html`` for DLUX UI.
-The default user-name and password is admin/admin.
-
- ``opendaylight-user@opnfv>feature:install odl-restconf odl-l2switch-switch odl-mdsal-apidocs odl-dlux-core``
-
-ODL-8: To exit out of screen session, please use the command ``CTRL+a`` followed by ``d``
-
-**Note: Do not kill the screen session, it will terminate the ODL controller.**
-
-At this moment, Open Daylight has been started successfully.
-
-**********************************************
-_`Step 3: Start Open Stack on Controller Node`
-**********************************************
-
-OS-N-1: Login to Open Stack Controller Node. For the purpose of example, we use ``opnfv`` as username of login, and
-``opnfv-os-controller`` as hostname of the Open Stack Controller Node.
-
-OS-N-2: Start a new terminal, and change directory to where Open Stack is installed. Here we use ``devstack`` directory
-name as an example.
-
- ``cd ~/devstack``
-
-OS-N-3: Create a ``local.conf`` file with the contents from the following URL.
-
- ``http://fpaste.org/276949/39476214/``
-
-Note 1: You need to change the value of ``BRANCH``, and all appearance of ``stable/kilo`` and related URL to point to
-the actual branch of your upstream repository.
-
-Note 2: you need to change the IP address of ``ODL_MGR_IP`` to point to your actual IP address of Open Daylight
-Controller.
-
-Note 3: You may have to change the value of ``ODL_PROVIDER_MAPPINGS`` and ``PUBLIC_INTERFACE`` to match your actual
-network interfaces.
-
-OS-N-4: Initiate Openstack setup by invoking ``stack.sh``
-
- ``./stack.sh``
-
-OS-N-5: If the setup is successful you would see the following logs on the console. Please note that the IP addresses
-are all for the purpose of example. Your IP addresses will match the ones assigned during the installation of OPNFV B
-Release base platform in prior chapters.
-
-| ``This is your host ip: <opnfv-os-controller IP address>``
-| ``Horizon is now available at http://<opnfv-os-controller IP address>/``
-| ``Keystone is serving at <opnfv-os-controller IP address>/``
-| ``The default users are: admin and demo``
-| ``The password: password``
-
-OS-N-6: Assuming that all goes well, you can set ``OFFLINE=True`` and ``RECLONE=no`` in ``local.conf`` to lock the
-codebase. Devstack uses these configuration parameters to determine if it has to run with the existing codebase or
-update to the latest copy.
-
-OS-N-7: Source the credentials.
-
- ``opnfv@opnfv-os-controller:~/devstack$ source openrc admin demo``
-
-OS-N-8: Verify some commands to check if setup is working fine.
-
-| ``opnfv@opnfv-os-controller:~/devstack$ nova flavor-list``
-| +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
-| | ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
-| +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
-| | 1 | m1.tiny | 512 | 1 | 0 | | 1 | 1.0 | True |
-| | 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True |
-| | 3 | m1.medium | 4096 | 40 | 0 | | 2 | 1.0 | True |
-| | 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True |
-| | 5 | m1.xlarge | 16384 | 160 | 0 | | 8 | 1.0 | True |
-| +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
-
-Now you can start the Compute node setup.
-
-*******************************************
-_`Step 4: Start Open Stack on Compute Node`
-*******************************************
-
-OS-M-1: Login to Open Stack Compute Node. For the purpose of example, we use ``opnfv`` as username of login, and
-``opnfv-os-compute`` as hostname of the Open Stack Compute Node.
-
-OS-M-2: Start a new terminal, and change directory to where Open Stack is installed. Here we use ``devstack``
-directory name as an example.
-
- ``cd ~/devstack``
-
-OS-M-3: Create a ``local.conf`` file with the contents from the following URL.
-
- ``http://fpaste.org/276958/44395955/``
-
-Note 1: You need to change the value of ``BRANCH``, and all appearance of ``stable/kilo`` and related URL to point to
-the actual branch of your upstream repository.
-
-Note 2: you need to change the IP address of ``SERVICE_HOST`` to point to your actual IP address of Open Stack
-Controller.
-
-Note 3: you need to change the IP address of ``ODL_MGR_IP`` to point to your actual IP address of Open Daylight
-Controller.
-
-Note 4: You may have to change the value of ``ODL_PROVIDER_MAPPINGS`` and ``PUBLIC_INTERFACE`` to match your actual
-network interface.
-
-OS-M-4: Initiate Openstack setup by invoking ``stack.sh``
-
- ``./stack.sh``
-
-OS-M-5: Assuming that all goes well, you can set ``OFFLINE=True`` and ``RECLONE=no`` in ``local.conf`` to lock the
-codebase. Devstack uses these configuration parameters to determine if it has to run with the existing codebase or
-update to the latest copy.
-
-OS-M-6: Source the credentials.
-
- ``opnfv@opnfv-os-compute:~/devstack$ source openrc admin demo``
-
-OS-M-7:Verify some commands to check if setup is working fine.
-
-| ``opnfv@opnfv-os-compute:~/devstack$ nova flavor-list``
-| +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
-| | ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
-| +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
-| | 1 | m1.tiny | 512 | 1 | 0 | | 1 | 1.0 | True |
-| | 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True |
-| | 3 | m1.medium | 4096 | 40 | 0 | | 2 | 1.0 | True |
-| | 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True |
-| | 5 | m1.xlarge | 16384 | 160 | 0 | | 8 | 1.0 | True |
-| +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
-
-Now you can start to set up the service VM as an Ipv6 vRouter in the environment of Open Stack and Open Daylight.
-
-*******************************************************
-_`Step 5: Create External Network Connectivity ext-net`
-*******************************************************
-
-# Place holder for instructions of how to create ext-net
-
-*******************************************************************
-_`Step 6: Create IPv4 Subnet and Router with External Connectivity`
-*******************************************************************
-
-# Place holder for instructions of how to create IPv4 subnet and router associated with ext-net
-
-*******************************************************************
-_`Step 7: Create IPv6 Subnet and Router with External Connectivity`
-*******************************************************************
-
-# Place holder for instructions of how to create IPv6 subnet and router associated with ext-net
-
-*************************************************************
-_`Step 8: Prepare Image, Metadata and Keypair for Service VM`
-*************************************************************
-
-# Place holder for instructions of how to get the image and prepare the metadata for service VM, and how to add keypairs
-
-*****************************************************************
-_`Step 9: Boot Service VM (vRouter) and other VMs in IPv6 Subnet`
-*****************************************************************
-
-# Place holder for instructions of how to boot the service VM named vRouter, and a couple of others in the same Ipv6
-subnet for testing purpose
-
-**********************************
-_`Step 10: Spawn RADVD in vRouter`
-**********************************
-
-# Place holder for instructions of how to spawn the RADVD daemon in vRouter
-
-********************************************
-_`Step 11: Testing to Verify Setup Complete`
-********************************************
-
-# Place holder for instructions of how to test and verify that the setup is complete
-
-*************
-_`Next Steps`
-*************
-
-Congratulations, you have completed the setup of using a service VM to act as an IPv6 vRouter. This setup allows further
-open innovation by any 3rd-party. Please refer to relevant sections in User's Guide for further value-added services on
-this IPv6 vRouter.
+.. toctree::
+   :numbered:
+   :maxdepth: 4
+
+   option-pure-os.rst
+   option-odl-l2.rst
+   ../setupservicevm/index.rst
diff --git a/docs/configguide/option-odl-l2.rst b/docs/configguide/option-odl-l2.rst
new file mode 100644
index 0000000..42fb527
--- /dev/null
+++ b/docs/configguide/option-odl-l2.rst
@@ -0,0 +1,398 @@
+========================================================
+Setup in OpenStack and Open Daylight L2-Only Environment
+========================================================
+
+If you intend to set up a service VM as an IPv6 vRouter in an OpenStack and Open Daylight
+L2-only environment of the OPNFV Brahmaputra Release base platform, the instructions
+are as follows.
+
+Please **NOTE** that:
+
+* The hostnames, IP addresses and usernames in these instructions are examples only.
+  Please change them as needed to fit your environment.
+* The instructions apply both to the single controller node deployment model and to the
+  HA (High Availability) deployment model where multiple controller nodes are used.
+* However, in case of HA, when ``ipv6-router`` is created in step **SETUP-SVM-11**,
+  it could be scheduled on any of the controller nodes. Thus you need to identify on which
+  controller node ``ipv6-router`` is created (the ``neutron l3-agent-list-hosting-router``
+  command shown before **SETUP-SVM-24** does this) in order to manually spawn the ``radvd``
+  daemon inside the ``ipv6-router`` namespace in steps **SETUP-SVM-24** through **SETUP-SVM-30**.
+
+*****************************
+Install OPNFV and Preparation
+*****************************
+
+**OPNFV-INSTALL-1**: To install the OpenStack and Open Daylight L2-only option of OPNFV Brahmaputra Release:
+
+.. code-block:: bash
+
+ deploy --scenario os_odl-l2_ha
+
+**OPNFV-INSTALL-2**: Clone the following GitHub repository to get the
+configuration and metadata files
+
+.. code-block:: bash
+
+ git clone https://github.com/sridhargaddam/opnfv_os_ipv6_poc.git /opt/stack/opnfv_os_ipv6_poc
+
+***************************************************
+Source the Credentials in OpenStack Controller Node
+***************************************************
+
+**SETUP-SVM-1**: Log in to the OpenStack Controller Node. Start a new terminal,
+and change directory to where OpenStack is installed (see the sketch below).
+
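+For example, assuming a DevStack-based installation under a ``devstack`` directory
+(an assumed path for illustration; adjust it to your actual installation):
+
+.. code-block:: bash
+
+    cd ~/devstack
+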
+**SETUP-SVM-2**: Source the credentials.
+
+.. code-block:: bash
+
+ source openrc admin demo
+
+**************************************
+Add External Connectivity to ``br-ex``
+**************************************
+
+In the OpenStack Controller Node, ``eth1`` is configured to provide external/public connectivity
+for both IPv4 and IPv6 (optional). Let us add this interface to ``br-ex`` and move the IP address,
+including the default route, from ``eth1`` to ``br-ex``.
+
+**SETUP-SVM-3**: Add ``eth1`` to ``br-ex`` and move the IP address and the default route from ``eth1`` to ``br-ex``
+
+.. code-block:: bash
+
+ sudo ip addr del 198.59.156.113/24 dev eth1
+ sudo ovs-vsctl add-port br-ex eth1
+ sudo ifconfig eth1 up
+ sudo ip addr add 198.59.156.113/24 dev br-ex
+ sudo ifconfig br-ex up
+ sudo ip route add default via 198.59.156.1 dev br-ex
+
+Please note that:
+
+* The IP address ``198.59.156.113`` and the related subnet and gateway addresses in the commands
+  above are examples only. **Please replace them with the IP addresses of your actual network**.
+* **This can be automated in /etc/network/interfaces** (a sketch follows this list).
+
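+A minimal sketch of such automation in ``/etc/network/interfaces`` (Debian/Ubuntu ``ifupdown``
+syntax) is shown below. The addresses are the examples used above; adapt them and the interface
+names to your environment. The sketch assumes that the ``br-ex`` bridge and its port persist in
+the OVS database across reboots:
+
+.. code-block:: bash
+
+    # Sketch only; adapt addresses, gateway and interface names to your network
+    auto eth1
+    iface eth1 inet manual
+        up ip link set eth1 up
+
+    auto br-ex
+    iface br-ex inet static
+        address 198.59.156.113
+        netmask 255.255.255.0
+        gateway 198.59.156.1
+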
+**SETUP-SVM-4**: Verify that ``br-ex`` now has the original external IP address, and that the default route is on
+``br-ex``
+
+.. code-block:: bash
+
+ $ ip a s br-ex
+ 38: br-ex: <BROADCAST,UP,LOWER_UP> mtu 1430 qdisc noqueue state UNKNOWN group default
+ link/ether 00:50:56:82:42:d1 brd ff:ff:ff:ff:ff:ff
+ inet 198.59.156.113/24 brd 198.59.156.255 scope global br-ex
+ valid_lft forever preferred_lft forever
+ inet6 fe80::543e:28ff:fe70:4426/64 scope link
+ valid_lft forever preferred_lft forever
+ $
+ $ ip route
+ default via 198.59.156.1 dev br-ex
+ 192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.10
+ 192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1
+ 198.59.156.0/24 dev br-ex proto kernel scope link src 198.59.156.113
+
+Please note that the IP addresses above are examples only.
+
+********************************************************
+Create IPv4 Subnet and Router with External Connectivity
+********************************************************
+
+**SETUP-SVM-5**: Create a Neutron router ``ipv4-router`` which needs to provide external connectivity.
+
+.. code-block:: bash
+
+ neutron router-create ipv4-router
+
+**SETUP-SVM-6**: Create an external network/subnet ``ext-net`` using the appropriate values based on the
+data-center physical network setup.
+
+.. code-block:: bash
+
+ neutron net-create --router:external ext-net
+ neutron subnet-create --disable-dhcp --allocation-pool start=198.59.156.251,end=198.59.156.254 --gateway 198.59.156.1 ext-net 198.59.156.0/24
+
+Please note that the IP addresses in the command above are examples only.
+**Please replace them with the IP addresses of your actual network**.
+
+**SETUP-SVM-7**: Associate the ``ext-net`` to the Neutron router ``ipv4-router``.
+
+.. code-block:: bash
+
+ neutron router-gateway-set ipv4-router ext-net
+
+**SETUP-SVM-8**: Create an internal/tenant IPv4 network ``ipv4-int-network1``
+
+.. code-block:: bash
+
+ neutron net-create ipv4-int-network1
+
+**SETUP-SVM-9**: Create an IPv4 subnet ``ipv4-int-subnet1`` in the internal network ``ipv4-int-network1``
+
+.. code-block:: bash
+
+ neutron subnet-create --name ipv4-int-subnet1 --dns-nameserver 8.8.8.8 ipv4-int-network1 20.0.0.0/24
+
+**SETUP-SVM-10**: Associate the IPv4 internal subnet ``ipv4-int-subnet1`` to the Neutron router ``ipv4-router``.
+
+.. code-block:: bash
+
+ neutron router-interface-add ipv4-router ipv4-int-subnet1
+
+********************************************************
+Create IPv6 Subnet and Router with External Connectivity
+********************************************************
+
+Now, let us create a second neutron router where we can "manually" spawn a ``radvd`` daemon to simulate an external
+IPv6 router.
+
+**SETUP-SVM-11**: Create a second Neutron router ``ipv6-router`` which needs to provide external connectivity
+
+.. code-block:: bash
+
+ neutron router-create ipv6-router
+
+**SETUP-SVM-12**: Associate the ``ext-net`` to the Neutron router ``ipv6-router``
+
+.. code-block:: bash
+
+ neutron router-gateway-set ipv6-router ext-net
+
+**SETUP-SVM-13**: Create a second internal/tenant IPv4 network ``ipv4-int-network2``
+
+.. code-block:: bash
+
+ neutron net-create ipv4-int-network2
+
+**SETUP-SVM-14**: Create an IPv4 subnet ``ipv4-int-subnet2`` for the ``ipv6-router`` internal network
+``ipv4-int-network2``
+
+.. code-block:: bash
+
+ neutron subnet-create --name ipv4-int-subnet2 --dns-nameserver 8.8.8.8 ipv4-int-network2 10.0.0.0/24
+
+**SETUP-SVM-15**: Associate the IPv4 internal subnet ``ipv4-int-subnet2`` to the Neutron router ``ipv6-router``.
+
+.. code-block:: bash
+
+ neutron router-interface-add ipv6-router ipv4-int-subnet2
+
+**************************************************
+Prepare Image, Metadata and Keypair for Service VM
+**************************************************
+
+**SETUP-SVM-16**: Download the ``fedora22`` image, which will be used as ``vRouter``
+
+.. code-block:: bash
+
+ glance image-create --name 'Fedora22' --disk-format qcow2 --container-format bare --is-public true --copy-from https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Base-22-20150521.x86_64.qcow2
+
+**SETUP-SVM-17**: Create a keypair
+
+.. code-block:: bash
+
+ nova keypair-add vRouterKey > ~/vRouterKey
+
+**SETUP-SVM-18**: Create ports for ``vRouter`` and both the VMs with some specific MAC addresses.
+
+.. code-block:: bash
+
+ neutron port-create --name eth0-vRouter --mac-address fa:16:3e:11:11:11 ipv4-int-network2
+ neutron port-create --name eth1-vRouter --mac-address fa:16:3e:22:22:22 ipv4-int-network1
+ neutron port-create --name eth0-VM1 --mac-address fa:16:3e:33:33:33 ipv4-int-network1
+ neutron port-create --name eth0-VM2 --mac-address fa:16:3e:44:44:44 ipv4-int-network1
+
+**********************************************************************************************************
+Boot Service VM (``vRouter``) with ``eth0`` on ``ipv4-int-network2`` and ``eth1`` on ``ipv4-int-network1``
+**********************************************************************************************************
+
+Let us boot the service VM (``vRouter``) with ``eth0`` interface on ``ipv4-int-network2`` connecting to ``ipv6-router``,
+and ``eth1`` interface on ``ipv4-int-network1`` connecting to ``ipv4-router``.
+
+**SETUP-SVM-19**: Boot the ``vRouter`` using ``Fedora22`` image on the OpenStack Compute Node with hostname
+``opnfv-os-compute``
+
+.. code-block:: bash
+
+ nova boot --image Fedora22 --flavor m1.small --user-data /opt/stack/opnfv_os_ipv6_poc/metadata.txt --availability-zone nova:opnfv-os-compute --nic port-id=$(neutron port-list | grep -w eth0-vRouter | awk '{print $2}') --nic port-id=$(neutron port-list | grep -w eth1-vRouter | awk '{print $2}') --key-name vRouterKey vRouter
+
+Please **note** that ``/opt/stack/opnfv_os_ipv6_poc/metadata.txt`` is used to enable the ``vRouter`` to automatically
+spawn a ``radvd`` daemon (a hypothetical sketch of such user-data follows this list) that will:
+
+* Act as an IPv6 vRouter which advertises RAs (Router Advertisements) with prefix
+  ``2001:db8:0:2::/64`` on its internal interface (``eth1``).
+* Forward IPv6 traffic from the internal interface (``eth1``).
+
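+The exact contents of ``metadata.txt`` are maintained in the cloned ``opnfv_os_ipv6_poc``
+repository; purely as an illustration, user-data achieving the two points above might look
+roughly like the following hypothetical sketch:
+
+.. code-block:: bash
+
+    #!/bin/bash
+    # Hypothetical sketch, not the actual metadata.txt
+    dnf install -y radvd                        # install the RA daemon (Fedora 22)
+    sysctl -w net.ipv6.conf.all.forwarding=1    # forward IPv6 traffic between eth1 and eth0
+    # write /etc/radvd.conf advertising prefix 2001:db8:0:2::/64 on eth1, then:
+    systemctl enable radvd
+    systemctl start radvd
+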
+**SETUP-SVM-20**: Verify that ``Fedora22`` image boots up successfully and vRouter has ``ssh`` keys properly injected
+
+.. code-block:: bash
+
+ nova list
+ nova console-log vRouter
+
+Please note that **it may take a few minutes** for the necessary packages to get installed and ``ssh`` keys
+to be injected.
+
+.. code-block:: bash
+
+ # Sample Output
+ [ 762.884523] cloud-init[871]: ec2: #############################################################
+ [ 762.909634] cloud-init[871]: ec2: -----BEGIN SSH HOST KEY FINGERPRINTS-----
+ [ 762.931626] cloud-init[871]: ec2: 2048 e3:dc:3d:4a:bc:b6:b0:77:75:a1:70:a3:d0:2a:47:a9 (RSA)
+ [ 762.957380] cloud-init[871]: ec2: -----END SSH HOST KEY FINGERPRINTS-----
+ [ 762.979554] cloud-init[871]: ec2: #############################################################
+
+*******************************************
+Boot Two Other VMs in ``ipv4-int-network1``
+*******************************************
+
+In order to verify that the setup is working, let us create two cirros VMs whose ``eth0`` interfaces
+are on ``ipv4-int-network1``, i.e. connected to the ``eth1`` (internal) interface of ``vRouter``.
+
+We will have to configure an appropriate ``mtu`` on the VMs' interfaces, taking into account the tunneling
+overhead and any physical switch requirements. If needed, push the ``mtu`` to the VM either using ``dhcp``
+options or via ``meta-data`` (a sketch of such a script follows).
+
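+The actual ``set_mtu.sh`` script lives in the cloned ``opnfv_os_ipv6_poc`` repository; a
+hypothetical sketch of what such a user-data script does might be:
+
+.. code-block:: bash
+
+    #!/bin/sh
+    # Hypothetical sketch, not the actual set_mtu.sh; 1400 is an assumed value
+    # that leaves room for tunneling overhead; adjust it to your network.
+    ip link set dev eth0 mtu 1400
+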
+**SETUP-SVM-21**: Create VM1 on OpenStack Controller Node with hostname ``opnfv-os-controller``
+
+.. code-block:: bash
+
+ nova boot --image cirros-0.3.4-x86_64-uec --flavor m1.tiny --nic port-id=$(neutron port-list | grep -w eth0-VM1 | awk '{print $2}') --availability-zone nova:opnfv-os-controller --key-name vRouterKey --user-data /opt/stack/opnfv_os_ipv6_poc/set_mtu.sh VM1
+
+**SETUP-SVM-22**: Create VM2 on OpenStack Compute Node with hostname ``opnfv-os-compute``
+
+.. code-block:: bash
+
+ nova boot --image cirros-0.3.4-x86_64-uec --flavor m1.tiny --nic port-id=$(neutron port-list | grep -w eth0-VM2 | awk '{print $2}') --availability-zone nova:opnfv-os-compute --key-name vRouterKey --user-data /opt/stack/opnfv_os_ipv6_poc/set_mtu.sh VM2
+
+**SETUP-SVM-23**: Confirm that both the VMs are successfully booted.
+
+.. code-block:: bash
+
+ nova list
+ nova console-log VM1
+ nova console-log VM2
+
+**********************************
+Spawn ``RADVD`` in ``ipv6-router``
+**********************************
+
+Let us manually spawn a ``radvd`` daemon inside the ``ipv6-router`` namespace to simulate an external router.
+First of all, we will have to identify the ``ipv6-router`` namespace and move into that namespace.
+
+Please **NOTE** that in case of the HA (High Availability) deployment model where multiple controller
+nodes are used, the ``ipv6-router`` created in step **SETUP-SVM-11** could be on any of the controller
+nodes. Thus you need to identify on which controller node ``ipv6-router`` is created in order to manually
+spawn the ``radvd`` daemon inside the ``ipv6-router`` namespace in steps **SETUP-SVM-24** through
+**SETUP-SVM-30**. The following Neutron command displays the controller on which the
+``ipv6-router`` is spawned.
+
+.. code-block:: bash
+
+ neutron l3-agent-list-hosting-router ipv6-router
+
+Then log in to that controller and execute steps **SETUP-SVM-24**
+through **SETUP-SVM-30**.
+
+**SETUP-SVM-24**: Identify the ``ipv6-router`` namespace and move into that namespace
+
+.. code-block:: bash
+
+ sudo ip netns exec qrouter-$(neutron router-list | grep -w ipv6-router | awk '{print $2}') bash
+
+**SETUP-SVM-25**: Upon successful execution of the above command, you will be in the router namespace.
+Now let us configure the IPv6 address on the <qr-xxx> interface.
+
+.. code-block:: bash
+
+ export router_interface=$(ip a s | grep -w "global qr-*" | awk '{print $7}')
+ ip -6 addr add 2001:db8:0:1::1 dev $router_interface
+
+**SETUP-SVM-26**: Update the sample file ``/opt/stack/opnfv_os_ipv6_poc/scenario2/radvd.conf``
+with ``$router_interface``.
+
+.. code-block:: bash
+
+ cp /opt/stack/opnfv_os_ipv6_poc/scenario2/radvd.conf /tmp/radvd.$router_interface.conf
+ sed -i 's/$router_interface/'$router_interface'/g' /tmp/radvd.$router_interface.conf
+
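+For reference, a ``radvd`` configuration serving this purpose looks roughly like the sketch
+below, where the literal string ``$router_interface`` is the placeholder that the ``sed``
+command above replaces (a sketch, not the authoritative contents of the sample file):
+
+.. code-block:: bash
+
+    interface $router_interface
+    {
+        AdvSendAdvert on;
+        prefix 2001:db8:0:1::/64
+        {
+            AdvOnLink on;
+            AdvAutonomous on;
+        };
+    };
+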
+**SETUP-SVM-27**: Spawn a ``radvd`` daemon to simulate an external router. This ``radvd`` daemon advertises an IPv6
+subnet prefix of ``2001:db8:0:1::/64`` using RA (Router Advertisement) on its ``$router_interface`` so that the ``eth0``
+interface of ``vRouter`` automatically configures an IPv6 SLAAC address.
+
+.. code-block:: bash
+
+ radvd -C /tmp/radvd.$router_interface.conf -p /tmp/br-ex.pid.radvd -m syslog
+
+**SETUP-SVM-28**: Add an IPv6 downstream route pointing to the ``eth0`` interface of vRouter.
+
+.. code-block:: bash
+
+ ip -6 route add 2001:db8:0:2::/64 via 2001:db8:0:1:f816:3eff:fe11:1111
+
+**SETUP-SVM-29**: The routing table should now look similar to the one shown below.
+
+.. code-block:: bash
+
+ ip -6 route show
+ 2001:db8:0:1::1 dev qr-42968b9e-62 proto kernel metric 256
+ 2001:db8:0:1::/64 dev qr-42968b9e-62 proto kernel metric 256 expires 86384sec
+ 2001:db8:0:2::/64 via 2001:db8:0:1:f816:3eff:fe11:1111 dev qr-42968b9e-62 proto ra metric 1024 expires 29sec
+ fe80::/64 dev qg-3736e0c7-7c proto kernel metric 256
+ fe80::/64 dev qr-42968b9e-62 proto kernel metric 256
+
+**SETUP-SVM-30**: If all goes well, the IPv6 addresses assigned to the VMs will be as follows:
+
+.. code-block:: bash
+
+ vRouter eth0 interface would have the following IPv6 address: 2001:db8:0:1:f816:3eff:fe11:1111/64
+ vRouter eth1 interface would have the following IPv6 address: 2001:db8:0:2::1/64
+ VM1 would have the following IPv6 address: 2001:db8:0:2:f816:3eff:fe33:3333/64
+ VM2 would have the following IPv6 address: 2001:db8:0:2:f816:3eff:fe44:4444/64
+
+********************************
+Testing to Verify Setup Complete
+********************************
+
+Now, let us ``ssh`` into one of the VMs, e.g. VM1, to confirm that it has successfully configured an IPv6 address
+using ``SLAAC`` with prefix ``2001:db8:0:2::/64`` from ``vRouter``.
+
+Please note that you need to get the IPv4 address associated with VM1. This can be inferred from the ``nova list``
+command (see the sketch below).
+
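+For example (a sketch; the exact output depends on your deployment):
+
+.. code-block:: bash
+
+    nova list | grep -w VM1    # the IPv4 address appears in the Networks column
+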
+**SETUP-SVM-31**: ``ssh`` VM1
+
+.. code-block:: bash
+
+ ssh -i /home/odl/vRouterKey cirros@<VM1-IPv4-address>
+
+If everything goes well, ``ssh`` will be successful and you will be logged into VM1. Run some commands to verify
+that IPv6 addresses are configured on ``eth0`` interface.
+
+**SETUP-SVM-32**: Verify that ``eth0`` has an IPv6 address with the prefix ``2001:db8:0:2::/64``
+
+.. code-block:: bash
+
+ ip address show
+
+**SETUP-SVM-33**: Ping an external IPv6 address, e.g. that of ``ipv6-router``
+
+.. code-block:: bash
+
+ ping6 2001:db8:0:1::1
+
+If the above ``ping6`` command succeeds, it implies that ``vRouter`` was able to successfully forward the IPv6 traffic
+to reach the external ``ipv6-router``.
+
+**SETUP-SVM-34**: When all tests show that the setup works as expected, you can exit the ``ipv6-router`` namespace.
+
+.. code-block:: bash
+
+ exit
+
+**********
+Next Steps
+**********
+
+Congratulations, you have completed the setup of using a service VM to act as an IPv6 vRouter. This setup allows further
+open innovation by any third party. Please refer to the relevant sections of the User's Guide for further value-added
+services on this IPv6 vRouter.
+
diff --git a/docs/configguide/option-pure-os.rst b/docs/configguide/option-pure-os.rst
new file mode 100644
index 0000000..609b1f8
--- /dev/null
+++ b/docs/configguide/option-pure-os.rst
@@ -0,0 +1,222 @@
+======================================================================
+Set Up a Service VM as an IPv6 vRouter in Native OpenStack Environment
+======================================================================
+
+If you intend to set up a service VM as an IPv6 vRouter in a native OpenStack environment of the
+OPNFV Brahmaputra Release base platform, the instructions are as follows.
+
+Please **NOTE** that:
+
+* Because the anti-spoofing rules of the Security Group feature in OpenStack prevent
+  a VM from forwarding packets, we need to disable the Security Group feature in the
+  native OpenStack environment.
+* The hostnames, IP addresses and usernames in these instructions are examples only.
+  Please change them as needed to fit your environment.
+* The instructions apply both to the single controller node deployment model and to the
+  HA (High Availability) deployment model where multiple controller nodes are used.
+
+*****************************
+Install OPNFV and Preparation
+*****************************
+
+**OPNFV-NATIVE-INSTALL-1**: To install the pure OpenStack option of OPNFV Brahmaputra Release:
+
+.. code-block:: bash
+
+ deploy --scenario os_ha
+
+**OPNFV-NATIVE-INSTALL-2**: Clone the following GitHub repository to get the
+configuration and metadata files
+
+.. code-block:: bash
+
+ git clone https://github.com/sridhargaddam/opnfv_os_ipv6_poc.git /opt/stack/opnfv_os_ipv6_poc
+
+**********************************************
+Disable Security Groups in OpenStack ML2 Setup
+**********************************************
+
+**OPNFV-NATIVE-SEC-1**: Change the settings in
+``/etc/neutron/plugins/ml2/ml2_conf.ini`` as follows
+
+.. code-block:: bash
+
+ # /etc/neutron/plugins/ml2/ml2_conf.ini
+ [securitygroup]
+ enable_security_group = False
+ firewall_driver = neutron.agent.firewall.NoopFirewallDriver
+
+**OPNFV-NATIVE-SEC-2**: Change the settings in ``/etc/nova/nova.conf`` as follows
+
+.. code-block:: bash
+
+ # /etc/nova/nova.conf
+ [DEFAULT]
+ security_group_api = nova
+ firewall_driver = nova.virt.firewall.NoopFirewallDriver
+
+*********************************
+Set Up Service VM as IPv6 vRouter
+*********************************
+
+**OPNFV-NATIVE-SETUP-1**: Now we assume that the OpenStack multi-node setup is up and running. The following
+command should be executed:
+
+.. code-block:: bash
+
+ source openrc admin demo
+
+**OPNFV-NATIVE-SETUP-2**: Download the ``fedora22`` image, which will be used for ``vRouter``
+
+.. code-block:: bash
+
+ wget https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Base-22-20150521.x86_64.qcow2
+
+**OPNFV-NATIVE-SETUP-3**: Import Fedora22 image to ``glance``
+
+.. code-block:: bash
+
+ glance image-create --name 'Fedora22' --disk-format qcow2 --container-format bare --file ./Fedora-Cloud-Base-22-20150521.x86_64.qcow2
+
+**OPNFV-NATIVE-SETUP-4**: Create Neutron routers ``ipv4-router`` and ``ipv6-router``
+which need to provide external connectivity.
+
+.. code-block:: bash
+
+ neutron router-create ipv4-router
+ neutron router-create ipv6-router
+
+**OPNFV-NATIVE-SETUP-5**: Create an external network/subnet ``ext-net`` using
+the appropriate values based on the data-center physical network setup.
+
+.. code-block:: bash
+
+ neutron net-create --router:external ext-net
+
+**OPNFV-NATIVE-SETUP-6**: If your ``opnfv-os-controller`` node has two interfaces ``eth0`` and
+``eth1``, and ``eth1`` is used for external connectivity, move the IP address of ``eth1`` to ``br-ex``.
+
+Please note that the IP address ``198.59.156.113`` and the related subnet and gateway addresses in the commands
+below are examples only. **Please replace them with the IP addresses of your actual network**.
+
+.. code-block:: bash
+
+ sudo ip addr del 198.59.156.113/24 dev eth1
+ sudo ovs-vsctl add-port br-ex eth1
+ sudo ifconfig eth1 up
+ sudo ip addr add 198.59.156.113/24 dev br-ex
+ sudo ifconfig br-ex up
+ sudo ip route add default via 198.59.156.1 dev br-ex
+ neutron subnet-create --disable-dhcp --allocation-pool start=198.59.156.251,end=198.59.156.254 --gateway 198.59.156.1 ext-net 198.59.156.0/24
+
+**OPNFV-NATIVE-SETUP-7**: Verify that ``br-ex`` now has the original external IP address,
+and that the default route is on ``br-ex``
+
+.. code-block:: bash
+
+ $ ip a s br-ex
+ 38: br-ex: <BROADCAST,UP,LOWER_UP> mtu 1430 qdisc noqueue state UNKNOWN group default
+ link/ether 00:50:56:82:42:d1 brd ff:ff:ff:ff:ff:ff
+ inet 198.59.156.113/24 brd 198.59.156.255 scope global br-ex
+ valid_lft forever preferred_lft forever
+ inet6 fe80::543e:28ff:fe70:4426/64 scope link
+ valid_lft forever preferred_lft forever
+ $
+ $ ip route
+ default via 198.59.156.1 dev br-ex
+ 192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.10
+ 192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1
+ 198.59.156.0/24 dev br-ex proto kernel scope link src 198.59.156.113
+
+Please note that the IP addresses above are examples only.
+
+**OPNFV-NATIVE-SETUP-8**: Create Neutron networks ``ipv4-int-network1`` and
+``ipv6-int-network2`` with port_security disabled
+
+.. code-block:: bash
+
+ neutron net-create --port_security_enabled=False ipv4-int-network1
+ neutron net-create --port_security_enabled=False ipv6-int-network2
+
+**OPNFV-NATIVE-SETUP-9**: Create IPv4 subnet ``ipv4-int-subnet1`` in the internal network
+``ipv4-int-network1``, and associate it to ``ipv4-router``.
+
+.. code-block:: bash
+
+ neutron subnet-create --name ipv4-int-subnet1 --dns-nameserver 8.8.8.8 ipv4-int-network1 20.0.0.0/24
+ neutron router-interface-add ipv4-router ipv4-int-subnet1
+
+**OPNFV-NATIVE-SETUP-10**: Associate the ``ext-net`` to the Neutron routers ``ipv4-router``
+and ``ipv6-router``.
+
+.. code-block:: bash
+
+ neutron router-gateway-set ipv4-router ext-net
+ neutron router-gateway-set ipv6-router ext-net
+
+**OPNFV-NATIVE-SETUP-11**: Create two subnets, one IPv4 subnet ``ipv4-int-subnet2`` and
+one IPv6 subnet ``ipv6-int-subnet2`` in ``ipv6-int-network2``, and associate both subnets to
+``ipv6-router``
+
+.. code-block:: bash
+
+ neutron subnet-create --name ipv4-int-subnet2 --dns-nameserver 8.8.8.8 ipv6-int-network2 10.0.0.0/24
+ neutron subnet-create --name ipv6-int-subnet2 --ip-version 6 --ipv6-ra-mode slaac --ipv6-address-mode slaac ipv6-int-network2 2001:db8:0:1::/64
+ neutron router-interface-add ipv6-router ipv4-int-subnet2
+ neutron router-interface-add ipv6-router ipv6-int-subnet2
+
+**OPNFV-NATIVE-SETUP-12**: Create a keypair
+
+.. code-block:: bash
+
+ nova keypair-add vRouterKey > ~/vRouterKey
+
+**OPNFV-NATIVE-SETUP-13**: Create ports for vRouter with specific MAC addresses so that, for automation purposes,
+the IPv6 addresses that will be assigned to the ports are known in advance. With SLAAC, the interface identifier is
+derived from the MAC address via EUI-64; for example, ``fa:16:3e:11:11:11`` yields ``f816:3eff:fe11:1111``.
+
+.. code-block:: bash
+
+ neutron port-create --name eth0-vRouter --mac-address fa:16:3e:11:11:11 ipv6-int-network2
+ neutron port-create --name eth1-vRouter --mac-address fa:16:3e:22:22:22 ipv4-int-network1
+
+**OPNFV-NATIVE-SETUP-14**: Create ports for VM1 and VM2.
+
+.. code-block:: bash
+
+ neutron port-create --name eth0-VM1 --mac-address fa:16:3e:33:33:33 ipv4-int-network1
+ neutron port-create --name eth0-VM2 --mac-address fa:16:3e:44:44:44 ipv4-int-network1
+
+**OPNFV-NATIVE-SETUP-15**: Update ``ipv6-router`` with routing information to subnet
+``2001:db8:0:2::/64``
+
+.. code-block:: bash
+
+ neutron router-update ipv6-router --routes type=dict list=true destination=2001:db8:0:2::/64,nexthop=2001:db8:0:1:f816:3eff:fe11:1111
+
+**OPNFV-NATIVE-SETUP-16**: Boot Service VM (``vRouter``), VM1 and VM2
+
+.. code-block:: bash
+
+ nova boot --image Fedora22 --flavor m1.small --user-data /opt/stack/opnfv_os_ipv6_poc/metadata.txt --availability-zone nova:opnfv-os-compute --nic port-id=$(neutron port-list | grep -w eth0-vRouter | awk '{print $2}') --nic port-id=$(neutron port-list | grep -w eth1-vRouter | awk '{print $2}') --key-name vRouterKey vRouter
+ nova list
+ nova console-log vRouter #Please wait for some 10 to 15 minutes so that necessary packages (like radvd) are installed and vRouter is up.
+ nova boot --image cirros-0.3.4-x86_64-uec --flavor m1.tiny --nic port-id=$(neutron port-list | grep -w eth0-VM1 | awk '{print $2}') --availability-zone nova:opnfv-os-controller --key-name vRouterKey --user-data /opt/stack/opnfv_os_ipv6_poc/set_mtu.sh VM1
+ nova boot --image cirros-0.3.4-x86_64-uec --flavor m1.tiny --nic port-id=$(neutron port-list | grep -w eth0-VM2 | awk '{print $2}') --availability-zone nova:opnfv-os-compute --key-name vRouterKey --user-data /opt/stack/opnfv_os_ipv6_poc/set_mtu.sh VM2
+ nova list # Verify that all the VMs are in ACTIVE state.
+
+**OPNFV-NATIVE-SETUP-17**: If all goes well, the IPv6 addresses assigned to the VMs
+will be as follows:
+
+.. code-block:: bash
+
+ vRouter eth0 interface would have the following IPv6 address: 2001:db8:0:1:f816:3eff:fe11:1111/64
+ vRouter eth1 interface would have the following IPv6 address: 2001:db8:0:2::1/64
+ VM1 would have the following IPv6 address: 2001:db8:0:2:f816:3eff:fe33:3333/64
+ VM2 would have the following IPv6 address: 2001:db8:0:2:f816:3eff:fe44:4444/64
+
+**OPNFV-NATIVE-SETUP-18**: To ``SSH`` to vRouter, you can execute the following command.
+
+.. code-block:: bash
+
+ sudo ip netns exec qrouter-$(neutron router-list | grep -w ipv6-router | awk '{print $2}') ssh -i ~/vRouterKey fedora@2001:db8:0:1:f816:3eff:fe11:1111
+