path: root/docs
Diffstat (limited to 'docs')
-rw-r--r--  docs/configguide/ipv6-configguide.rst  274
-rw-r--r--  docs/configguide/option-odl-l2.rst  398
-rw-r--r--  docs/configguide/option-pure-os.rst  236
-rw-r--r--  docs/gapanalysis/gap-analysis-odl-lithium.rst  7
-rw-r--r--  docs/gapanalysis/gap-analysis-openstack-kilo.rst  8
-rw-r--r--  docs/reldoc/index.rst  25
-rw-r--r--  docs/reldoc/option-odl-l2.rst  398
-rw-r--r--  docs/reldoc/option-pure-os.rst  236
-rw-r--r--  docs/setupservicevm/0-ipv6-configguide-prep-infra.rst  10
-rw-r--r--  docs/setupservicevm/4-ipv6-configguide-servicevm.rst  14
-rw-r--r--  docs/setupservicevm/5-ipv6-configguide-scenario-1-native-os.rst  14
-rw-r--r--  docs/setupservicevm/architecture-design.rst  3
-rw-r--r--  docs/setupservicevm/images/ipv6-topology-scenario-3.png  bin 0 -> 68490 bytes
-rw-r--r--  docs/setupservicevm/index.rst  11
-rw-r--r--  docs/setupservicevm/scenario-3-0-ipv6-configguide-prep-infra.rst  18
-rw-r--r--  docs/setupservicevm/scenario-3-1-ipv6-configguide-odl-setup.rst  24
-rw-r--r--  docs/setupservicevm/scenario-3-2-ipv6-configguide-os-controller.rst  8
-rw-r--r--  docs/setupservicevm/scenario-3-3-ipv6-configguide-os-compute.rst  6
18 files changed, 1394 insertions, 296 deletions
diff --git a/docs/configguide/ipv6-configguide.rst b/docs/configguide/ipv6-configguide.rst
index e460c7d..84afde6 100644
--- a/docs/configguide/ipv6-configguide.rst
+++ b/docs/configguide/ipv6-configguide.rst
@@ -1,263 +1,25 @@
-==========================================
-Setting Up a Service VM as an IPv6 vRouter
-==========================================
+===============================================================
+Setting Up a Service VM as an IPv6 vRouter with OPNFV B Release
+===============================================================
-After OPNFV Brahmaputra Release base platform has been successfully installed through previous chapters, there are 11
-steps to set up a service VM as an IPv6 vRouter:
+This section provides instructions to set up a service VM as an IPv6 vRouter using OPNFV Brahmaputra Release
+installers, with either the pure OpenStack option or the Open Daylight L2-only option.
-- `Step 1: Disable odl-l3 and Enable neutron-l3-agent`_
+For complete instructions and documentation on setting up a service VM as an IPv6 vRouter using ANY method,
+please refer to:
-- `Step 2: Start Open Daylight`_
+1. IPv6 Configuration Guide (HTML): http://artifacts.opnfv.org/ipv6/docs/setupservicevm/index.html
+2. IPv6 Configuration Guide (PDF): http://artifacts.opnfv.org/ipv6/docs/setupservicevm/setupservicevm.pdf
+3. IPv6 User Guide (HTML): http://artifacts.opnfv.org/ipv6/docs/gapanalysis/index.html
+4. IPv6 User Guide (PDF): http://artifacts.opnfv.org/ipv6/docs/gapanalysis/gapanalysis.pdf
-- `Step 3: Start Open Stack on Controller Node`_
+Please see the instructions in the following two sections for setup using OPNFV B Release installers,
+or go to http://artifacts.opnfv.org/ipv6/docs/reldoc/reldoc.pdf to download a PDF version.
-- `Step 4: Start Open Stack on Compute Node`_
+.. toctree::
+ :numbered:
+ :maxdepth: 4
-- `Step 5: Create External Network Connectivity ext-net`_
-
-- `Step 6: Create IPv4 Subnet and Router with External Connectivity`_
-
-- `Step 7: Create IPv6 Subnet and Router with External Connectivity`_
-
-- `Step 8: Prepare Image, Metadata and Keypair for Service VM`_
-
-- `Step 9: Boot Service VM (vRouter) and other VMs in IPv6 Subnet`_
-
-- `Step 10: Spawn RADVD in vRouter`_
-
-- `Step 11: Testing to Verify Setup Complete`_
-
-Once the setup is complete, you can go to `Next Steps`_.
-
-*****************************************************
-_`Step 1: Disable odl-l3 and Enable neutron-l3-agent`
-*****************************************************
-
-This step is optional, and only needed if you didn't choose to enable neutron-l3-agent during previous installation of
-OPNFV Brahmaputra Release.
-
-If you have chosen to enable neutron-l3-agent during installation, please skip this step and directly go to
-`Step 2: Start Open Daylight`_.
-
-# Place holder for instructions of how to disable odl-l3 and enable neutron-l3-agent
-
-******************************
-_`Step 2: Start Open Daylight`
-******************************
-
-**Note: we assume that you have installed Open Daylight through OPNFV Installer in prior chapters. However, if Open Daylight is not installed, please go to** ``http://www.opendaylight.org/downloads`` **to download and install Open Daylight**
-
-ODL-1: Login to Open Daylight Controller Node. For the purpose of example, we use ``opnfv`` as username of login, and
-``opnfv-odl-controller`` as hostname of the Open Daylight Controller Node.
-
-ODL-2: Start a new terminal session, and change directory to where Open Daylight is installed. Here we use ``odl``
-directory name and ``Lithium SR2`` installation as an example.
-
- ``cd ~/odl/distribution-karaf-0.3.2-Lithium-SR2/bin``
-
-ODL-3: Run the ``karaf`` shell. Please note that it is recommended to run the command in a ``screen`` session.
-
-| ``screen -S ODL_Controller``
-| ``./karaf``
-
-ODL-4: You are now in the Karaf shell of Open Daylight. To explore the list of available features you can execute
-``feature:list``. In order to enable Open Daylight with Open Stack, you have to load the ``odl-ovsdb-openstack``
-feature.
-
- ``opendaylight-user@opnfv>feature:install odl-ovsdb-openstack``
-
-ODL-5: Verify that OVSDB feature is installed successfully.
-
-| ``opendaylight-user@opnfv>feature:list -i | grep ovsdb``
-| odl-ovsdb-openstack | 1.1.1-Lithium-SR1 | x | ovsdb-1.1.1-Lithium-SR1 | OpenDaylight :: OVSDB :: OpenStack Network Virtual
-| odl-ovsdb-southbound-api | 1.1.1-Lithium-SR1 | x | odl-ovsdb-southbound-1.1.1-Lithium-SR1 | OpenDaylight :: southbound :: api
-| odl-ovsdb-southbound-impl | 1.1.1-Lithium-SR1 | x | odl-ovsdb-southbound-1.1.1-Lithium-SR1 | OpenDaylight :: southbound :: impl
-| odl-ovsdb-southbound-impl-rest|1.1.1-Lithium-SR1 | x | odl-ovsdb-southbound-1.1.1-Lithium-SR1| OpenDaylight :: southbound :: impl :: REST
-| odl-ovsdb-southbound-impl-ui | 1.1.1-Lithium-SR1| x | odl-ovsdb-southbound-1.1.1-Lithium-SR1| OpenDaylight :: southbound :: impl :: UI
-| ``opendaylight-user@opnfv>``
-
-ODL-6: To view the logs, you can use the following commands (or alternately the file data/log/karaf.log).
-
-| ``opendaylight-user@opnfv>log:display``
-| ``opendaylight-user@opnfv>log:tail``
-
-ODL-7: To enable ODL DLUX UI, install the following features. Then you can navigate to
-``http://<opnfv-odl-controller IP address>:8181/index.html`` for DLUX UI.
-The default user-name and password is admin/admin.
-
- ``opendaylight-user@opnfv>feature:install odl-restconf odl-l2switch-switch odl-mdsal-apidocs odl-dlux-core``
-
-ODL-8: To exit out of screen session, please use the command ``CTRL+a`` followed by ``d``
-
-**Note: Do not kill the screen session, it will terminate the ODL controller.**
-
-At this moment, Open Daylight has been started successfully.
-
-**********************************************
-_`Step 3: Start Open Stack on Controller Node`
-**********************************************
-
-OS-N-1: Login to Open Stack Controller Node. For the purpose of example, we use ``opnfv`` as username of login, and
-``opnfv-os-controller`` as hostname of the Open Stack Controller Node.
-
-OS-N-2: Start a new terminal, and change directory to where Open Stack is installed. Here we use ``devstack`` directory
-name as an example.
-
- ``cd ~/devstack``
-
-OS-N-3: Create a ``local.conf`` file with the contents from the following URL.
-
- ``http://fpaste.org/276949/39476214/``
-
-Note 1: You need to change the value of ``BRANCH``, and all appearance of ``stable/kilo`` and related URL to point to
-the actual branch of your upstream repository.
-
-Note 2: you need to change the IP address of ``ODL_MGR_IP`` to point to your actual IP address of Open Daylight
-Controller.
-
-Note 3: You may have to change the value of ``ODL_PROVIDER_MAPPINGS`` and ``PUBLIC_INTERFACE`` to match your actual
-network interfaces.
-
-OS-N-4: Initiate Openstack setup by invoking ``stack.sh``
-
- ``./stack.sh``
-
-OS-N-5: If the setup is successful you would see the following logs on the console. Please note that the IP addresses
-are all for the purpose of example. Your IP addresses will match the ones assigned during the installation of OPNFV B
-Release base platform in prior chapters.
-
-| ``This is your host ip: <opnfv-os-controller IP address>``
-| ``Horizon is now available at http://<opnfv-os-controller IP address>/``
-| ``Keystone is serving at <opnfv-os-controller IP address>/``
-| ``The default users are: admin and demo``
-| ``The password: password``
-
-OS-N-6: Assuming that all goes well, you can set ``OFFLINE=True`` and ``RECLONE=no`` in ``local.conf`` to lock the
-codebase. Devstack uses these configuration parameters to determine if it has to run with the existing codebase or
-update to the latest copy.
-
-OS-N-7: Source the credentials.
-
- ``opnfv@opnfv-os-controller:~/devstack$ source openrc admin demo``
-
-OS-N-8: Verify some commands to check if setup is working fine.
-
-| ``opnfv@opnfv-os-controller:~/devstack$ nova flavor-list``
-| +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
-| | ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
-| +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
-| | 1 | m1.tiny | 512 | 1 | 0 | | 1 | 1.0 | True |
-| | 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True |
-| | 3 | m1.medium | 4096 | 40 | 0 | | 2 | 1.0 | True |
-| | 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True |
-| | 5 | m1.xlarge | 16384 | 160 | 0 | | 8 | 1.0 | True |
-| +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
-
-Now you can start the Compute node setup.
-
-*******************************************
-_`Step 4: Start Open Stack on Compute Node`
-*******************************************
-
-OS-M-1: Login to Open Stack Compute Node. For the purpose of example, we use ``opnfv`` as username of login, and
-``opnfv-os-compute`` as hostname of the Open Stack Compute Node.
-
-OS-M-2: Start a new terminal, and change directory to where Open Stack is installed. Here we use ``devstack``
-directory name as an example.
-
- ``cd ~/devstack``
-
-OS-M-3: Create a ``local.conf`` file with the contents from the following URL.
-
- ``http://fpaste.org/276958/44395955/``
-
-Note 1: You need to change the value of ``BRANCH``, and all appearance of ``stable/kilo`` and related URL to point to
-the actual branch of your upstream repository.
-
-Note 2: you need to change the IP address of ``SERVICE_HOST`` to point to your actual IP address of Open Stack
-Controller.
-
-Note 3: you need to change the IP address of ``ODL_MGR_IP`` to point to your actual IP address of Open Daylight
-Controller.
-
-Note 4: You may have to change the value of ``ODL_PROVIDER_MAPPINGS`` and ``PUBLIC_INTERFACE`` to match your actual
-network interface.
-
-OS-M-4: Initiate Openstack setup by invoking ``stack.sh``
-
- ``./stack.sh``
-
-OS-M-5: Assuming that all goes well, you can set ``OFFLINE=True`` and ``RECLONE=no`` in ``local.conf`` to lock the
-codebase. Devstack uses these configuration parameters to determine if it has to run with the existing codebase or
-update to the latest copy.
-
-OS-M-6: Source the credentials.
-
- ``opnfv@opnfv-os-compute:~/devstack$ source openrc admin demo``
-
-OS-M-7:Verify some commands to check if setup is working fine.
-
-| ``opnfv@opnfv-os-compute:~/devstack$ nova flavor-list``
-| +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
-| | ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
-| +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
-| | 1 | m1.tiny | 512 | 1 | 0 | | 1 | 1.0 | True |
-| | 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True |
-| | 3 | m1.medium | 4096 | 40 | 0 | | 2 | 1.0 | True |
-| | 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True |
-| | 5 | m1.xlarge | 16384 | 160 | 0 | | 8 | 1.0 | True |
-| +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
-
-Now you can start to set up the service VM as an Ipv6 vRouter in the environment of Open Stack and Open Daylight.
-
-*******************************************************
-_`Step 5: Create External Network Connectivity ext-net`
-*******************************************************
-
-# Place holder for instructions of how to create ext-net
-
-*******************************************************************
-_`Step 6: Create IPv4 Subnet and Router with External Connectivity`
-*******************************************************************
-
-# Place holder for instructions of how to create IPv4 subnet and router associated with ext-net
-
-*******************************************************************
-_`Step 7: Create IPv6 Subnet and Router with External Connectivity`
-*******************************************************************
-
-# Place holder for instructions of how to create IPv6 subnet and router associated with ext-net
-
-*************************************************************
-_`Step 8: Prepare Image, Metadata and Keypair for Service VM`
-*************************************************************
-
-# Place holder for instructions of how to get the image and prepare the metadata for service VM, and how to add keypairs
-
-*****************************************************************
-_`Step 9: Boot Service VM (vRouter) and other VMs in IPv6 Subnet`
-*****************************************************************
-
-# Place holder for instructions of how to boot the service VM named vRouter, and a couple of others in the same Ipv6
-subnet for testing purpose
-
-**********************************
-_`Step 10: Spawn RADVD in vRouter`
-**********************************
-
-# Place holder for instructions of how to spawn the RADVD daemon in vRouter
-
-********************************************
-_`Step 11: Testing to Verify Setup Complete`
-********************************************
-
-# Place holder for instructions of how to test and verify that the setup is complete
-
-*************
-_`Next Steps`
-*************
-
-Congratulations, you have completed the setup of using a service VM to act as an IPv6 vRouter. This setup allows further
-open innovation by any 3rd-party. Please refer to relevant sections in User's Guide for further value-added services on
-this IPv6 vRouter.
+ option-pure-os.rst
+ option-odl-l2.rst
diff --git a/docs/configguide/option-odl-l2.rst b/docs/configguide/option-odl-l2.rst
new file mode 100644
index 0000000..42fb527
--- /dev/null
+++ b/docs/configguide/option-odl-l2.rst
@@ -0,0 +1,398 @@
+========================================================
+Setup in OpenStack and Open Daylight L2-Only Environment
+========================================================
+
+If you intend to set up a service VM as an IPv6 vRouter in an OpenStack and Open Daylight L2-only
+environment of the OPNFV Brahmaputra Release base platform, the instructions are as follows.
+
+Please **NOTE** that:
+
+* The hostnames, IP addresses, and usernames in these instructions are examples.
+  Please change them as needed to fit your environment.
+* The instructions apply to both the single-controller-node deployment model and the
+  HA (High Availability) deployment model, in which multiple controller nodes are used.
+* However, in case of HA, when ``ipv6-router`` is created in step **SETUP-SVM-11**,
+  it may be created on any of the controller nodes. Thus you need to identify on which
+  controller node ``ipv6-router`` is created in order to manually spawn the ``radvd`` daemon
+  inside the ``ipv6-router`` namespace in steps **SETUP-SVM-24** through **SETUP-SVM-30**.
+
+*****************************
+Install OPNFV and Preparation
+*****************************
+
+**OPNFV-INSTALL-1**: To install the Open Daylight L2-only option of OPNFV Brahmaputra Release:
+
+.. code-block:: bash
+
+ deploy --scenario os_odl-l2_ha
+
+**OPNFV-INSTALL-2**: Clone the following GitHub repository to get the
+configuration and metadata files
+
+.. code-block:: bash
+
+ git clone https://github.com/sridhargaddam/opnfv_os_ipv6_poc.git /opt/stack/opnfv_os_ipv6_poc
+
+***************************************************
+Source the Credentials in OpenStack Controller Node
+***************************************************
+
+**SETUP-SVM-1**: Log in to the OpenStack Controller Node. Start a new terminal,
+and change directory to where OpenStack is installed.
+
+**SETUP-SVM-2**: Source the credentials.
+
+.. code-block:: bash
+
+ source openrc admin demo
+
+**************************************
+Add External Connectivity to ``br-ex``
+**************************************
+
+On the OpenStack Controller Node, ``eth1`` is configured to provide external/public connectivity
+for both IPv4 and IPv6 (optional). So let us add this interface to ``br-ex`` and move the IP address,
+including the default route, from ``eth1`` to ``br-ex``.
+
+**SETUP-SVM-3**: Add ``eth1`` to ``br-ex`` and move the IP address and the default route from ``eth1`` to ``br-ex``
+
+.. code-block:: bash
+
+ sudo ip addr del 198.59.156.113/24 dev eth1
+ sudo ovs-vsctl add-port br-ex eth1
+ sudo ifconfig eth1 up
+ sudo ip addr add 198.59.156.113/24 dev br-ex
+ sudo ifconfig br-ex up
+ sudo ip route add default via 198.59.156.1 dev br-ex
+
+Please note that:
+
+* The IP address ``198.59.156.113`` and the related subnet and gateway addresses in the commands
+  above are examples. **Please replace them with the IP addresses of your actual network**.
+* **This can be automated in /etc/network/interfaces**, as shown in the sketch below.
+
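+The following is only a rough sketch of how such a persistent configuration could look in
+``/etc/network/interfaces`` on a Debian/Ubuntu-style system. The interface names, addresses and gateway
+are examples and must be adapted to your actual network and distribution.
+
+.. code-block:: bash
+
+ # Hypothetical /etc/network/interfaces snippet; values are examples only.
+ auto eth1
+ iface eth1 inet manual
+     up ip link set eth1 up
+
+ auto br-ex
+ iface br-ex inet static
+     address 198.59.156.113
+     netmask 255.255.255.0
+     gateway 198.59.156.1
+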
+**SETUP-SVM-4**: Verify that ``br-ex`` now has the original external IP address, and that the default route is on
+``br-ex``
+
+.. code-block:: bash
+
+ $ ip a s br-ex
+ 38: br-ex: <BROADCAST,UP,LOWER_UP> mtu 1430 qdisc noqueue state UNKNOWN group default
+ link/ether 00:50:56:82:42:d1 brd ff:ff:ff:ff:ff:ff
+ inet 198.59.156.113/24 brd 198.59.156.255 scope global br-ex
+ valid_lft forever preferred_lft forever
+ inet6 fe80::543e:28ff:fe70:4426/64 scope link
+ valid_lft forever preferred_lft forever
+ $
+ $ ip route
+ default via 198.59.156.1 dev br-ex
+ 192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.10
+ 192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1
+ 198.59.156.0/24 dev br-ex proto kernel scope link src 198.59.156.113
+
+Please note that the IP addresses above are examples.
+
+********************************************************
+Create IPv4 Subnet and Router with External Connectivity
+********************************************************
+
+**SETUP-SVM-5**: Create a Neutron router ``ipv4-router`` which needs to provide external connectivity.
+
+.. code-block:: bash
+
+ neutron router-create ipv4-router
+
+**SETUP-SVM-6**: Create an external network/subnet ``ext-net`` using the appropriate values based on the
+data-center physical network setup.
+
+.. code-block:: bash
+
+ neutron net-create --router:external ext-net
+ neutron subnet-create --disable-dhcp --allocation-pool start=198.59.156.251,end=198.59.156.254 --gateway 198.59.156.1 ext-net 198.59.156.0/24
+
+Please note that the IP addresses in the command above are examples. **Please replace them with the IP addresses
+of your actual network**.
+
+**SETUP-SVM-7**: Associate the ``ext-net`` to the Neutron router ``ipv4-router``.
+
+.. code-block:: bash
+
+ neutron router-gateway-set ipv4-router ext-net
+
+**SETUP-SVM-8**: Create an internal/tenant IPv4 network ``ipv4-int-network1``
+
+.. code-block:: bash
+
+ neutron net-create ipv4-int-network1
+
+**SETUP-SVM-9**: Create an IPv4 subnet ``ipv4-int-subnet1`` in the internal network ``ipv4-int-network1``
+
+.. code-block:: bash
+
+ neutron subnet-create --name ipv4-int-subnet1 --dns-nameserver 8.8.8.8 ipv4-int-network1 20.0.0.0/24
+
+**SETUP-SVM-10**: Associate the IPv4 internal subnet ``ipv4-int-subnet1`` to the Neutron router ``ipv4-router``.
+
+.. code-block:: bash
+
+ neutron router-interface-add ipv4-router ipv4-int-subnet1
+
+********************************************************
+Create IPv6 Subnet and Router with External Connectivity
+********************************************************
+
+Now, let us create a second neutron router where we can "manually" spawn a ``radvd`` daemon to simulate an external
+IPv6 router.
+
+**SETUP-SVM-11**: Create a second Neutron router ``ipv6-router`` which needs to provide external connectivity
+
+.. code-block:: bash
+
+ neutron router-create ipv6-router
+
+**SETUP-SVM-12**: Associate the ``ext-net`` to the Neutron router ``ipv6-router``
+
+.. code-block:: bash
+
+ neutron router-gateway-set ipv6-router ext-net
+
+**SETUP-SVM-13**: Create a second internal/tenant IPv4 network ``ipv4-int-network2``
+
+.. code-block:: bash
+
+ neutron net-create ipv4-int-network2
+
+**SETUP-SVM-14**: Create an IPv4 subnet ``ipv4-int-subnet2`` for the ``ipv6-router`` internal network
+``ipv4-int-network2``
+
+.. code-block:: bash
+
+ neutron subnet-create --name ipv4-int-subnet2 --dns-nameserver 8.8.8.8 ipv4-int-network2 10.0.0.0/24
+
+**SETUP-SVM-15**: Associate the IPv4 internal subnet ``ipv4-int-subnet2`` to the Neutron router ``ipv6-router``.
+
+.. code-block:: bash
+
+ neutron router-interface-add ipv6-router ipv4-int-subnet2
+
+**************************************************
+Prepare Image, Metadata and Keypair for Service VM
+**************************************************
+
+**SETUP-SVM-16**: Download the ``Fedora22`` image, which will be used for the ``vRouter``
+
+.. code-block:: bash
+
+ glance image-create --name 'Fedora22' --disk-format qcow2 --container-format bare --is-public true --copy-from https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Base-22-20150521.x86_64.qcow2
+
+**SETUP-SVM-17**: Create a keypair
+
+.. code-block:: bash
+
+ nova keypair-add vRouterKey > ~/vRouterKey
+
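+Please note that ``ssh`` requires the private key file to have restrictive permissions, so you may need to
+tighten them before using the key later:
+
+.. code-block:: bash
+
+ chmod 600 ~/vRouterKey
+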
+**SETUP-SVM-18**: Create ports for ``vRouter`` and both the VMs with some specific MAC addresses.
+
+.. code-block:: bash
+
+ neutron port-create --name eth0-vRouter --mac-address fa:16:3e:11:11:11 ipv4-int-network2
+ neutron port-create --name eth1-vRouter --mac-address fa:16:3e:22:22:22 ipv4-int-network1
+ neutron port-create --name eth0-VM1 --mac-address fa:16:3e:33:33:33 ipv4-int-network1
+ neutron port-create --name eth0-VM2 --mac-address fa:16:3e:44:44:44 ipv4-int-network1
+
+**********************************************************************************************************
+Boot Service VM (``vRouter``) with ``eth0`` on ``ipv4-int-network2`` and ``eth1`` on ``ipv4-int-network1``
+**********************************************************************************************************
+
+Let us boot the service VM (``vRouter``) with ``eth0`` interface on ``ipv4-int-network2`` connecting to ``ipv6-router``,
+and ``eth1`` interface on ``ipv4-int-network1`` connecting to ``ipv4-router``.
+
+**SETUP-SVM-19**: Boot the ``vRouter`` using ``Fedora22`` image on the OpenStack Compute Node with hostname
+``opnfv-os-compute``
+
+.. code-block:: bash
+
+ nova boot --image Fedora22 --flavor m1.small --user-data /opt/stack/opnfv_os_ipv6_poc/metadata.txt --availability-zone nova:opnfv-os-compute --nic port-id=$(neutron port-list | grep -w eth0-vRouter | awk '{print $2}') --nic port-id=$(neutron port-list | grep -w eth1-vRouter | awk '{print $2}') --key-name vRouterKey vRouter
+
+Please **note** that ``/opt/stack/opnfv_os_ipv6_poc/metadata.txt`` is used to enable the ``vRouter`` to
+automatically spawn a ``radvd`` daemon and to:
+
+* Act as an IPv6 vRouter which advertises RA (Router Advertisements) with the prefix
+  ``2001:db8:0:2::/64`` on its internal interface (``eth1``).
+* Forward IPv6 traffic from the internal interface (``eth1``).
+
+**SETUP-SVM-20**: Verify that ``Fedora22`` image boots up successfully and vRouter has ``ssh`` keys properly injected
+
+.. code-block:: bash
+
+ nova list
+ nova console-log vRouter
+
+Please note that **it may take a few minutes** for the necessary packages to get installed and ``ssh`` keys
+to be injected.
+
+.. code-block:: bash
+
+ # Sample Output
+ [ 762.884523] cloud-init[871]: ec2: #############################################################
+ [ 762.909634] cloud-init[871]: ec2: -----BEGIN SSH HOST KEY FINGERPRINTS-----
+ [ 762.931626] cloud-init[871]: ec2: 2048 e3:dc:3d:4a:bc:b6:b0:77:75:a1:70:a3:d0:2a:47:a9 (RSA)
+ [ 762.957380] cloud-init[871]: ec2: -----END SSH HOST KEY FINGERPRINTS-----
+ [ 762.979554] cloud-init[871]: ec2: #############################################################
+
+*******************************************
+Boot Two Other VMs in ``ipv4-int-network1``
+*******************************************
+
+In order to verify that the setup is working, let us create two cirros VMs on ``ipv4-int-network1``,
+i.e., connected to the ``vRouter`` ``eth1`` interface on the internal network.
+
+We will have to configure an appropriate ``mtu`` on the VMs' interfaces, taking into account the tunneling
+overhead and any physical switch requirements. If needed, push the ``mtu`` to the VM either using ``dhcp``
+options or via ``meta-data``, as in the sketch below.
+
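+The ``set_mtu.sh`` script from the cloned repository is passed as ``--user-data`` below for this purpose.
+As an illustration only (the actual script in the repository may differ), such a user-data script could be
+a minimal sketch like the following, where the interface name and the MTU value ``1400`` are assumptions:
+
+.. code-block:: bash
+
+ #!/bin/sh
+ # Hypothetical sketch: lower the MTU on the VM's primary interface to leave
+ # room for tunneling overhead; interface name and value are examples only.
+ ip link set dev eth0 mtu 1400
+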
+**SETUP-SVM-21**: Create VM1 on OpenStack Controller Node with hostname ``opnfv-os-controller``
+
+.. code-block:: bash
+
+ nova boot --image cirros-0.3.4-x86_64-uec --flavor m1.tiny --nic port-id=$(neutron port-list | grep -w eth0-VM1 | awk '{print $2}') --availability-zone nova:opnfv-os-controller --key-name vRouterKey --user-data /opt/stack/opnfv_os_ipv6_poc/set_mtu.sh VM1
+
+**SETUP-SVM-22**: Create VM2 on OpenStack Compute Node with hostname ``opnfv-os-compute``
+
+.. code-block:: bash
+
+ nova boot --image cirros-0.3.4-x86_64-uec --flavor m1.tiny --nic port-id=$(neutron port-list | grep -w eth0-VM2 | awk '{print $2}') --availability-zone nova:opnfv-os-compute --key-name vRouterKey --user-data /opt/stack/opnfv_os_ipv6_poc/set_mtu.sh VM2
+
+**SETUP-SVM-23**: Confirm that both the VMs are successfully booted.
+
+.. code-block:: bash
+
+ nova list
+ nova console-log VM1
+ nova console-log VM2
+
+**********************************
+Spawn ``RADVD`` in ``ipv6-router``
+**********************************
+
+Let us manually spawn a ``radvd`` daemon inside the ``ipv6-router`` namespace to simulate an external router.
+First of all, we will have to identify the ``ipv6-router`` namespace and enter it.
+
+Please **NOTE** that in case of the HA (High Availability) deployment model, where multiple controller
+nodes are used, the ``ipv6-router`` created in step **SETUP-SVM-11** could be hosted on any of the
+controller nodes. Thus you need to identify on which controller node ``ipv6-router`` is created in order
+to manually spawn the ``radvd`` daemon inside the ``ipv6-router`` namespace in steps **SETUP-SVM-24**
+through **SETUP-SVM-30**. The following Neutron command will display the controller on which
+``ipv6-router`` is hosted.
+
+.. code-block:: bash
+
+ neutron l3-agent-list-hosting-router ipv6-router
+
+Then log in to that controller and execute steps **SETUP-SVM-24**
+through **SETUP-SVM-30**.
+
+**SETUP-SVM-24**: Identify the ``ipv6-router`` namespace and enter it
+
+.. code-block:: bash
+
+ sudo ip netns exec qrouter-$(neutron router-list | grep -w ipv6-router | awk '{print $2}') bash
+
+**SETUP-SVM-25**: Upon successful execution of the above command, you will be in the router namespace.
+Now let us configure the IPv6 address on the <qr-xxx> interface.
+
+.. code-block:: bash
+
+ export router_interface=$(ip a s | grep -w "global qr-*" | awk '{print $7}')
+ ip -6 addr add 2001:db8:0:1::1 dev $router_interface
+
+**SETUP-SVM-26**: Update the sample file ``/opt/stack/opnfv_os_ipv6_poc/scenario2/radvd.conf``
+with ``$router_interface``.
+
+.. code-block:: bash
+
+ cp /opt/stack/opnfv_os_ipv6_poc/scenario2/radvd.conf /tmp/radvd.$router_interface.conf
+ sed -i 's/$router_interface/'$router_interface'/g' /tmp/radvd.$router_interface.conf
+
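+For reference, a minimal ``radvd.conf`` advertising the ``2001:db8:0:1::/64`` prefix could look like the
+following sketch. This is an illustration based on standard ``radvd`` configuration syntax; the actual
+sample file shipped in the repository may differ.
+
+.. code-block:: bash
+
+ # Hypothetical radvd.conf sketch; $router_interface is replaced by the sed
+ # command above with the actual qr-xxx interface name.
+ interface $router_interface
+ {
+     AdvSendAdvert on;
+     MinRtrAdvInterval 3;
+     MaxRtrAdvInterval 10;
+     prefix 2001:db8:0:1::/64
+     {
+         AdvOnLink on;
+         AdvAutonomous on;
+     };
+ };
+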
+**SETUP-SVM-27**: Spawn a ``radvd`` daemon to simulate an external router. This ``radvd`` daemon advertises an IPv6
+subnet prefix of ``2001:db8:0:1::/64`` using RA (Router Advertisement) on its ``$router_interface`` so that the
+``eth0`` interface of ``vRouter`` automatically configures an IPv6 SLAAC address.
+
+.. code-block:: bash
+
+ radvd -C /tmp/radvd.$router_interface.conf -p /tmp/br-ex.pid.radvd -m syslog
+
+**SETUP-SVM-28**: Add an IPv6 downstream route pointing to the ``eth0`` interface of vRouter.
+
+.. code-block:: bash
+
+ ip -6 route add 2001:db8:0:2::/64 via 2001:db8:0:1:f816:3eff:fe11:1111
+
+**SETUP-SVM-29**: The routing table should now look similar to the one shown below.
+
+.. code-block:: bash
+
+ ip -6 route show
+ 2001:db8:0:1::1 dev qr-42968b9e-62 proto kernel metric 256
+ 2001:db8:0:1::/64 dev qr-42968b9e-62 proto kernel metric 256 expires 86384sec
+ 2001:db8:0:2::/64 via 2001:db8:0:1:f816:3eff:fe11:1111 dev qr-42968b9e-62 proto ra metric 1024 expires 29sec
+ fe80::/64 dev qg-3736e0c7-7c proto kernel metric 256
+ fe80::/64 dev qr-42968b9e-62 proto kernel metric 256
+
+**SETUP-SVM-30**: If all goes well, the IPv6 addresses assigned to the VMs would be as follows:
+
+.. code-block:: bash
+
+ vRouter eth0 interface would have the following IPv6 address: 2001:db8:0:1:f816:3eff:fe11:1111/64
+ vRouter eth1 interface would have the following IPv6 address: 2001:db8:0:2::1/64
+ VM1 would have the following IPv6 address: 2001:db8:0:2:f816:3eff:fe33:3333/64
+ VM2 would have the following IPv6 address: 2001:db8:0:2:f816:3eff:fe44:4444/64
+
+********************************
+Testing to Verify Setup Complete
+********************************
+
+Now, let us ``ssh`` to one of the VMs, e.g. VM1, to confirm that it has successfully configured the IPv6 address
+using ``SLAAC`` with prefix ``2001:db8:0:2::/64`` from ``vRouter``.
+
+Please note that you need to get the IPv4 address associated with VM1. This can be obtained from the ``nova list`` command.
+
+**SETUP-SVM-31**: ``ssh`` VM1
+
+.. code-block:: bash
+
+ ssh -i /home/odl/vRouterKey cirros@<VM1-IPv4-address>
+
+If everything goes well, ``ssh`` will be successful and you will be logged into VM1. Run some commands to verify
+that IPv6 addresses are configured on ``eth0`` interface.
+
+**SETUP-SVM-32**: Show an IPv6 address with a prefix of ``2001:db8:0:2::/64``
+
+.. code-block:: bash
+
+ ip address show
+
+**SETUP-SVM-33**: Ping an external IPv6 address, e.g. the address of ``ipv6-router``
+
+.. code-block:: bash
+
+ ping6 2001:db8:0:1::1
+
+If the above ping6 command succeeds, it implies that ``vRouter`` was able to successfully forward the IPv6 traffic
+to reach external ``ipv6-router``.
+
+**SETUP-SVM-34**: When all tests show that the setup works as expected, you can exit the ``ipv6-router`` namespace.
+
+.. code-block:: bash
+
+ exit
+
+**********
+Next Steps
+**********
+
+Congratulations, you have completed the setup of using a service VM as an IPv6 vRouter. This setup allows further
+open innovation by any third party. Please refer to the relevant sections in the User's Guide for further
+value-added services on this IPv6 vRouter.
+
diff --git a/docs/configguide/option-pure-os.rst b/docs/configguide/option-pure-os.rst
new file mode 100644
index 0000000..46dcb6b
--- /dev/null
+++ b/docs/configguide/option-pure-os.rst
@@ -0,0 +1,236 @@
+======================================================================
+Set Up a Service VM as an IPv6 vRouter in Native OpenStack Environment
+======================================================================
+
+If you intend to set up a service VM as an IPv6 vRouter in the native OpenStack environment of
+the OPNFV Brahmaputra Release base platform, the instructions are as follows.
+
+Please **NOTE** that:
+
+* Because the anti-spoofing rules of the Security Group feature in OpenStack prevent
+  a VM from forwarding packets, we need to disable the Security Group feature in the
+  native OpenStack environment.
+* The hostnames, IP addresses, and usernames in these instructions are examples.
+  Please change them as needed to fit your environment.
+* The instructions apply to both the single-controller-node deployment model and the
+  HA (High Availability) deployment model, in which multiple controller nodes are used.
+
+*****************************
+Install OPNFV and Preparation
+*****************************
+
+**OPNFV-NATIVE-INSTALL-1**: To install pure OpenStack option of OPNFV Brahmaputra Release:
+
+.. code-block:: bash
+
+ deploy --scenario os_ha
+
+**OPNFV-NATIVE-INSTALL-2**: Clone the following GitHub repository to get the
+configuration and metadata files
+
+.. code-block:: bash
+
+ git clone https://github.com/sridhargaddam/opnfv_os_ipv6_poc.git /opt/stack/opnfv_os_ipv6_poc
+
+**********************************************
+Disable Security Groups in OpenStack ML2 Setup
+**********************************************
+
+**OPNFV-NATIVE-SEC-1**: Change the settings in
+``/etc/neutron/plugins/ml2/ml2_conf.ini`` as follows
+
+.. code-block:: bash
+
+ # /etc/neutron/plugins/ml2/ml2_conf.ini
+ [securitygroup]
+ enable_security_group = False
+ firewall_driver = neutron.agent.firewall.NoopFirewallDriver
+
+**OPNFV-NATIVE-SEC-2**: Change the settings in ``/etc/nova/nova.conf`` as follows
+
+.. code-block:: bash
+
+ # /etc/nova/nova.conf
+ [DEFAULT]
+ security_group_api = nova
+ firewall_driver = nova.virt.firewall.NoopFirewallDriver
+
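+After changing these settings, the Neutron and Nova services have to be restarted for the changes to take
+effect. The exact service names depend on your distribution and installer; the following is only an assumed
+example for a systemd-based node:
+
+.. code-block:: bash
+
+ # Hypothetical example; actual service names vary across distributions and
+ # installers, and only the services present on each node need a restart.
+ sudo systemctl restart neutron-server
+ sudo systemctl restart nova-api nova-compute
+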
+*********************************
+Set Up Service VM as IPv6 vRouter
+*********************************
+
+**OPNFV-NATIVE-SETUP-1**: Now we assume that OpenStack multi-node setup is up and running. The following
+commands should be executed:
+
+.. code-block:: bash
+
+ source openrc admin demo
+
+**OPNFV-NATIVE-SETUP-2**: Download the ``Fedora22`` image, which will be used for the ``vRouter``
+
+.. code-block:: bash
+
+ wget https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Base-22-20150521.x86_64.qcow2
+
+**OPNFV-NATIVE-SETUP-3**: Import Fedora22 image to ``glance``
+
+.. code-block:: bash
+
+ glance image-create --name 'Fedora22' --disk-format qcow2 --container-format bare --file ./Fedora-Cloud-Base-22-20150521.x86_64.qcow2
+
+**OPNFV-NATIVE-SETUP-4**: Create Neutron routers ``ipv4-router`` and ``ipv6-router``
+which need to provide external connectivity.
+
+.. code-block:: bash
+
+ neutron router-create ipv4-router
+ neutron router-create ipv6-router
+
+**OPNFV-NATIVE-SETUP-5**: Create an external network/subnet ``ext-net`` using
+the appropriate values based on the data-center physical network setup.
+
+.. code-block:: bash
+
+ neutron net-create --router:external ext-net
+
+**OPNFV-NATIVE-SETUP-6**: If your ``opnfv-os-controller`` node has two interfaces ``eth0`` and
+``eth1``, and ``eth1`` is used for external connectivity, move the IP address of ``eth1`` to ``br-ex``.
+
+Please note that the IP address ``198.59.156.113`` and the related subnet and gateway addresses in the commands
+below are examples. **Please replace them with the IP addresses of your actual network**.
+
+.. code-block:: bash
+
+ sudo ip addr del 198.59.156.113/24 dev eth1
+ sudo ovs-vsctl add-port br-ex eth1
+ sudo ifconfig eth1 up
+ sudo ip addr add 198.59.156.113/24 dev br-ex
+ sudo ifconfig br-ex up
+ sudo ip route add default via 198.59.156.1 dev br-ex
+ neutron subnet-create --disable-dhcp --allocation-pool start=198.59.156.251,end=198.59.156.254 --gateway 198.59.156.1 ext-net 198.59.156.0/24
+
+**OPNFV-NATIVE-SETUP-7**: Verify that ``br-ex`` now has the original external IP address,
+and that the default route is on ``br-ex``
+
+.. code-block:: bash
+
+ $ ip a s br-ex
+ 38: br-ex: <BROADCAST,UP,LOWER_UP> mtu 1430 qdisc noqueue state UNKNOWN group default
+ link/ether 00:50:56:82:42:d1 brd ff:ff:ff:ff:ff:ff
+ inet 198.59.156.113/24 brd 198.59.156.255 scope global br-ex
+ valid_lft forever preferred_lft forever
+ inet6 fe80::543e:28ff:fe70:4426/64 scope link
+ valid_lft forever preferred_lft forever
+ $
+ $ ip route
+ default via 198.59.156.1 dev br-ex
+ 192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.10
+ 192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1
+ 198.59.156.0/24 dev br-ex proto kernel scope link src 198.59.156.113
+
+Please note that the IP addresses above are examples.
+
+**OPNFV-NATIVE-SETUP-8**: Create Neutron networks ``ipv4-int-network1`` and
+``ipv6-int-network2`` with port_security disabled
+
+.. code-block:: bash
+
+ neutron net-create --port_security_enabled=False ipv4-int-network1
+ neutron net-create --port_security_enabled=False ipv6-int-network2
+
+**OPNFV-NATIVE-SETUP-9**: Create IPv4 subnet ``ipv4-int-subnet1`` in the internal network
+``ipv4-int-network1``, and associate it to ``ipv4-router``.
+
+.. code-block:: bash
+
+ neutron subnet-create --name ipv4-int-subnet1 --dns-nameserver 8.8.8.8 ipv4-int-network1 20.0.0.0/24
+ neutron router-interface-add ipv4-router ipv4-int-subnet1
+
+**OPNFV-NATIVE-SETUP-10**: Associate the ``ext-net`` to the Neutron routers ``ipv4-router``
+and ``ipv6-router``.
+
+.. code-block:: bash
+
+ neutron router-gateway-set ipv4-router ext-net
+ neutron router-gateway-set ipv6-router ext-net
+
+**OPNFV-NATIVE-SETUP-11**: Create two subnets, one IPv4 subnet ``ipv4-int-subnet2`` and
+one IPv6 subnet ``ipv6-int-subnet2`` in ``ipv6-int-network2``, and associate both subnets to
+``ipv6-router``
+
+.. code-block:: bash
+
+ neutron subnet-create --name ipv4-int-subnet2 --dns-nameserver 8.8.8.8 ipv6-int-network2 10.0.0.0/24
+ neutron subnet-create --name ipv6-int-subnet2 --ip-version 6 --ipv6-ra-mode slaac --ipv6-address-mode slaac ipv6-int-network2 2001:db8:0:1::/64
+ neutron router-interface-add ipv6-router ipv4-int-subnet2
+ neutron router-interface-add ipv6-router ipv6-int-subnet2
+
+**OPNFV-NATIVE-SETUP-12**: Create a keypair
+
+.. code-block:: bash
+
+ nova keypair-add vRouterKey > ~/vRouterKey
+
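+Please note that ``ssh`` requires the private key file to have restrictive permissions, so you may need to
+tighten them before using the key later:
+
+.. code-block:: bash
+
+ chmod 600 ~/vRouterKey
+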
+**OPNFV-NATIVE-SETUP-13**: Create ports for vRouter with specific MAC addresses (mainly for automation,
+so that the IPv6 addresses that will be assigned to the ports are known in advance).
+
+.. code-block:: bash
+
+ neutron port-create --name eth0-vRouter --mac-address fa:16:3e:11:11:11 ipv6-int-network2
+ neutron port-create --name eth1-vRouter --mac-address fa:16:3e:22:22:22 ipv4-int-network1
+
+**OPNFV-NATIVE-SETUP-14**: Create ports for VM1 and VM2.
+
+.. code-block:: bash
+
+ neutron port-create --name eth0-VM1 --mac-address fa:16:3e:33:33:33 ipv4-int-network1
+ neutron port-create --name eth0-VM2 --mac-address fa:16:3e:44:44:44 ipv4-int-network1
+
+**OPNFV-NATIVE-SETUP-15**: Update ``ipv6-router`` with routing information to subnet
+``2001:db8:0:2::/64``
+
+.. code-block:: bash
+
+ neutron router-update ipv6-router --routes type=dict list=true destination=2001:db8:0:2::/64,nexthop=2001:db8:0:1:f816:3eff:fe11:1111
+
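+If desired, you can verify that the extra route was added by inspecting the router; the exact output format
+depends on your Neutron release, but the ``routes`` field should list the destination ``2001:db8:0:2::/64``.
+
+.. code-block:: bash
+
+ neutron router-show ipv6-router
+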
+**OPNFV-NATIVE-SETUP-16**: Boot Service VM (``vRouter``), VM1 and VM2
+
+.. code-block:: bash
+
+ nova boot --image Fedora22 --flavor m1.small --user-data /opt/stack/opnfv_os_ipv6_poc/metadata.txt --availability-zone nova:opnfv-os-compute --nic port-id=$(neutron port-list | grep -w eth0-vRouter | awk '{print $2}') --nic port-id=$(neutron port-list | grep -w eth1-vRouter | awk '{print $2}') --key-name vRouterKey vRouter
+ nova list
+ nova console-log vRouter    # Please wait 10 to 15 minutes for the necessary packages (like radvd) to be installed and for vRouter to come up.
+ nova boot --image cirros-0.3.4-x86_64-uec --flavor m1.tiny --nic port-id=$(neutron port-list | grep -w eth0-VM1 | awk '{print $2}') --availability-zone nova:opnfv-os-controller --key-name vRouterKey --user-data /opt/stack/opnfv_os_ipv6_poc/set_mtu.sh VM1
+ nova boot --image cirros-0.3.4-x86_64-uec --flavor m1.tiny --nic port-id=$(neutron port-list | grep -w eth0-VM2 | awk '{print $2}') --availability-zone nova:opnfv-os-compute --key-name vRouterKey --user-data /opt/stack/opnfv_os_ipv6_poc/set_mtu.sh VM2
+ nova list # Verify that all the VMs are in ACTIVE state.
+
+**OPNFV-NATIVE-SETUP-17**: If all goes well, the IPv6 addresses assigned to the VMs
+would be as follows:
+
+.. code-block:: bash
+
+ vRouter eth0 interface would have the following IPv6 address: 2001:db8:0:1:f816:3eff:fe11:1111/64
+ vRouter eth1 interface would have the following IPv6 address: 2001:db8:0:2::1/64
+ VM1 would have the following IPv6 address: 2001:db8:0:2:f816:3eff:fe33:3333/64
+ VM2 would have the following IPv6 address: 2001:db8:0:2:f816:3eff:fe44:4444/64
+
+**OPNFV-NATIVE-SETUP-18**: Now we can ``SSH`` to ``vRouter``.
+
+Please **NOTE** that in case of the HA (High Availability) deployment model, where multiple controller
+nodes are used, the ``ipv6-router`` created in step **OPNFV-NATIVE-SETUP-4** could be hosted on any of the
+controller nodes. Thus you need to identify on which controller node ``ipv6-router`` is created in order to
+enter the ``ipv6-router`` namespace. The following Neutron command will display the
+controller on which ``ipv6-router`` is hosted.
+
+.. code-block:: bash
+
+ neutron l3-agent-list-hosting-router ipv6-router
+
+Then log in to that controller.
+
+To ``SSH`` to ``vRouter``, you can execute the following command.
+
+.. code-block:: bash
+
+ sudo ip netns exec qrouter-$(neutron router-list | grep -w ipv6-router | awk '{print $2}') ssh -i ~/vRouterKey fedora@2001:db8:0:1:f816:3eff:fe11:1111
+
diff --git a/docs/gapanalysis/gap-analysis-odl-lithium.rst b/docs/gapanalysis/gap-analysis-odl-lithium.rst
index 5178d51..56e86cf 100644
--- a/docs/gapanalysis/gap-analysis-odl-lithium.rst
+++ b/docs/gapanalysis/gap-analysis-odl-lithium.rst
@@ -20,9 +20,9 @@ requirements of VIM-agnostic IPv6 functionality, including infrastructure layer
| | |and would be available only in the later releases of ODL). |
+-------------------------------------------------------------+------------------------+--------------------------------------------------------------------------------+
|IPv6 Router support in ODL |**No** |ODL net-virt provider in Lithium release only supports IPv4 Router. |
-| | |Support for IPv6 Router is planned using ``Routing Manager`` as part of |
-|1. Communication between VMs on same compute node | |Beryllium Release. In the meantime, if IPv6 Routing is necessary, we can |
-|2. Communication between VMs on different compute nodes | |use ODL for L2 connectivity and Neutron L3 agent for IPv4/v6 routing. |
+| | |Support for IPv6 Router is planned in later releases using ``Routing Manager``. |
+|1. Communication between VMs on same compute node | |In the meantime, if IPv6 Routing is necessary, we can use ODL for L2 |
+|2. Communication between VMs on different compute nodes | |connectivity and Neutron L3 agent for IPv4/v6 routing. |
| (east-west) | | |
|3. External routing (north-south) | |**Note**: In Lithium SR3 release, we have the following `issue |
| | |<http://lists.opendaylight.org/pipermail/ovsdb-dev/2015-November/002288.html>`_,|
@@ -35,7 +35,6 @@ requirements of VIM-agnostic IPv6 functionality, including infrastructure layer
|1. SLAAC | |Advertisements based on the IPv6 addressing mode. Router Advertisement |
|2. DHCPv6 Stateless | |is also necessary for VMs to configure the default route. |
|3. DHCPv6 Stateful | | |
-| | |This could be part of ``Routing Manager`` in ``Beryllium`` release. |
+-------------------------------------------------------------+------------------------+--------------------------------------------------------------------------------+
|When using ODL for L2 forwarding/tunneling, is it compatible |Yes | |
|with IPv6. | | |
diff --git a/docs/gapanalysis/gap-analysis-openstack-kilo.rst b/docs/gapanalysis/gap-analysis-openstack-kilo.rst
index 745841d..bad7060 100644
--- a/docs/gapanalysis/gap-analysis-openstack-kilo.rst
+++ b/docs/gapanalysis/gap-analysis-openstack-kilo.rst
@@ -58,10 +58,10 @@ requirements of VIM-agnostic IPv6 functionality, including infrastructure layer
|CLI) as well as via Horizon, including combination of | |https://review.openstack.org/#/c/139731/ for discussion. |
|IPv6/IPv4 and IPv4/IPv6 floating IPs if implemented. | | |
+-----------------------------------------------------------+-------------------------+--------------------------------------------------------------------+
-|Provide IPv6/IPv4 feature parity in support for |**Roadmap** |The L3 configuration should be transparent for the SR-IOV |
+|Provide IPv6/IPv4 feature parity in support for |**To-Do** |The L3 configuration should be transparent for the SR-IOV |
|pass-through capabilities (e.g., SR-IOV). | |implementation. SR-IOV networking support introduced in Juno based |
| | |on the ``sriovnicswitch`` ML2 driver is expected to work with IPv4 |
-| | |and IPv6 enabled VMs. |
+| | |and IPv6 enabled VMs. We need to verify if it works or not |
+-----------------------------------------------------------+-------------------------+--------------------------------------------------------------------+
|Additional IPv6 extensions, for example: IPSEC, IPv6 |**No** |It does not appear to be considered yet (lack of clear requirements)|
|Anycast, Multicast | | |
@@ -100,11 +100,11 @@ requirements of VIM-agnostic IPv6 functionality, including infrastructure layer
|Ability for a VM to support a mix of multiple IPv4 and IPv6|Yes | |
|networks, including multiples of the same type. | | |
+-----------------------------------------------------------+-------------------------+--------------------------------------------------------------------+
-|Support for IPv6 Prefix Delegation. |**Roadmap** |Planned for Liberty |
+|Support for IPv6 Prefix Delegation. |**Roadmap** |Some partial support is available in Liberty release |
+-----------------------------------------------------------+-------------------------+--------------------------------------------------------------------+
|Distributed Virtual Routing (DVR) support for IPv6 |**No** |Blueprint proposed upstream, pending discussion. |
+-----------------------------------------------------------+-------------------------+--------------------------------------------------------------------+
-|IPv6 First-Hop Security, IPv6 ND spoofing. |**Roadmap** |Blueprint proposed upstream. Some patches are under review. |
+|IPv6 First-Hop Security, IPv6 ND spoofing. |**Roadmap** |Supported in Liberty release |
+-----------------------------------------------------------+-------------------------+--------------------------------------------------------------------+
|IPv6 support in Neutron Layer3 High Availability |Yes | |
|(keepalived+VRRP). | | |
diff --git a/docs/reldoc/index.rst b/docs/reldoc/index.rst
new file mode 100644
index 0000000..84afde6
--- /dev/null
+++ b/docs/reldoc/index.rst
@@ -0,0 +1,25 @@
+===============================================================
+Setting Up a Service VM as an IPv6 vRouter with OPNFV B Release
+===============================================================
+
+This section provides instructions to set up a service VM as an IPv6 vRouter using OPNFV Brahmaputra Release
+installers, with either the pure OpenStack option or the Open Daylight L2-only option.
+
+For complete instructions and documentation on setting up a service VM as an IPv6 vRouter using ANY method,
+please refer to:
+
+1. IPv6 Configuration Guide (HTML): http://artifacts.opnfv.org/ipv6/docs/setupservicevm/index.html
+2. IPv6 Configuration Guide (PDF): http://artifacts.opnfv.org/ipv6/docs/setupservicevm/setupservicevm.pdf
+3. IPv6 User Guide (HTML): http://artifacts.opnfv.org/ipv6/docs/gapanalysis/index.html
+4. IPv6 User Guide (PDF): http://artifacts.opnfv.org/ipv6/docs/gapanalysis/gapanalysis.pdf
+
+Please see the instructions in the following two sections for setup using OPNFV B Release installers,
+or go to http://artifacts.opnfv.org/ipv6/docs/reldoc/reldoc.pdf to download a PDF version.
+
+.. toctree::
+ :numbered:
+ :maxdepth: 4
+
+ option-pure-os.rst
+ option-odl-l2.rst
+
diff --git a/docs/reldoc/option-odl-l2.rst b/docs/reldoc/option-odl-l2.rst
new file mode 100644
index 0000000..42fb527
--- /dev/null
+++ b/docs/reldoc/option-odl-l2.rst
@@ -0,0 +1,398 @@
+========================================================
+Setup in OpenStack and Open Daylight L2-Only Environment
+========================================================
+
+If you intend to set up a service VM as an IPv6 vRouter in an OpenStack and Open Daylight L2-only
+environment of the OPNFV Brahmaputra Release base platform, the instructions are as follows.
+
+Please **NOTE** that:
+
+* The hostnames, IP addresses, and usernames in these instructions are examples.
+  Please change them as needed to fit your environment.
+* The instructions apply to both the single-controller-node deployment model and the
+  HA (High Availability) deployment model, in which multiple controller nodes are used.
+* However, in case of HA, when ``ipv6-router`` is created in step **SETUP-SVM-11**,
+  it may be created on any of the controller nodes. Thus you need to identify on which
+  controller node ``ipv6-router`` is created in order to manually spawn the ``radvd`` daemon
+  inside the ``ipv6-router`` namespace in steps **SETUP-SVM-24** through **SETUP-SVM-30**.
+
+*****************************
+Install OPNFV and Preparation
+*****************************
+
+**OPNFV-INSTALL-1**: To install the Open Daylight L2-only option of OPNFV Brahmaputra Release:
+
+.. code-block:: bash
+
+ deploy --scenario os_odl-l2_ha
+
+**OPNFV-INSTALL-2**: Clone the following GitHub repository to get the
+configuration and metadata files
+
+.. code-block:: bash
+
+ git clone https://github.com/sridhargaddam/opnfv_os_ipv6_poc.git /opt/stack/opnfv_os_ipv6_poc
+
+***************************************************
+Source the Credentials in OpenStack Controller Node
+***************************************************
+
+**SETUP-SVM-1**: Log in to the OpenStack Controller Node. Start a new terminal,
+and change directory to where OpenStack is installed.
+
+**SETUP-SVM-2**: Source the credentials.
+
+.. code-block:: bash
+
+ source openrc admin demo
+
+**************************************
+Add External Connectivity to ``br-ex``
+**************************************
+
+On the OpenStack Controller Node, ``eth1`` is configured to provide external/public connectivity
+for both IPv4 and IPv6 (optional). So let us add this interface to ``br-ex`` and move the IP address,
+including the default route, from ``eth1`` to ``br-ex``.
+
+**SETUP-SVM-3**: Add ``eth1`` to ``br-ex`` and move the IP address and the default route from ``eth1`` to ``br-ex``
+
+.. code-block:: bash
+
+ sudo ip addr del 198.59.156.113/24 dev eth1
+ sudo ovs-vsctl add-port br-ex eth1
+ sudo ifconfig eth1 up
+ sudo ip addr add 198.59.156.113/24 dev br-ex
+ sudo ifconfig br-ex up
+ sudo ip route add default via 198.59.156.1 dev br-ex
+
+Please note that:
+
+* The IP address ``198.59.156.113`` and the related subnet and gateway addresses in the commands
+  above are examples. **Please replace them with the IP addresses of your actual network**.
+* **This can be automated in /etc/network/interfaces**, as shown in the sketch below.
+
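+The following is only a rough sketch of how such a persistent configuration could look in
+``/etc/network/interfaces`` on a Debian/Ubuntu-style system. The interface names, addresses and gateway
+are examples and must be adapted to your actual network and distribution.
+
+.. code-block:: bash
+
+ # Hypothetical /etc/network/interfaces snippet; values are examples only.
+ auto eth1
+ iface eth1 inet manual
+     up ip link set eth1 up
+
+ auto br-ex
+ iface br-ex inet static
+     address 198.59.156.113
+     netmask 255.255.255.0
+     gateway 198.59.156.1
+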
+**SETUP-SVM-4**: Verify that ``br-ex`` now has the original external IP address, and that the default route is on
+``br-ex``
+
+.. code-block:: bash
+
+ $ ip a s br-ex
+ 38: br-ex: <BROADCAST,UP,LOWER_UP> mtu 1430 qdisc noqueue state UNKNOWN group default
+ link/ether 00:50:56:82:42:d1 brd ff:ff:ff:ff:ff:ff
+ inet 198.59.156.113/24 brd 198.59.156.255 scope global br-ex
+ valid_lft forever preferred_lft forever
+ inet6 fe80::543e:28ff:fe70:4426/64 scope link
+ valid_lft forever preferred_lft forever
+ $
+ $ ip route
+ default via 198.59.156.1 dev br-ex
+ 192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.10
+ 192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1
+ 198.59.156.0/24 dev br-ex proto kernel scope link src 198.59.156.113
+
+Please note that the IP addresses above are examples.
+
+********************************************************
+Create IPv4 Subnet and Router with External Connectivity
+********************************************************
+
+**SETUP-SVM-5**: Create a Neutron router ``ipv4-router`` which needs to provide external connectivity.
+
+.. code-block:: bash
+
+ neutron router-create ipv4-router
+
+**SETUP-SVM-6**: Create an external network/subnet ``ext-net`` using the appropriate values based on the
+data-center physical network setup.
+
+.. code-block:: bash
+
+ neutron net-create --router:external ext-net
+ neutron subnet-create --disable-dhcp --allocation-pool start=198.59.156.251,end=198.59.156.254 --gateway 198.59.156.1 ext-net 198.59.156.0/24
+
+Please note that the IP addresses in the command above are examples. **Please replace them with the IP addresses
+of your actual network**.
+
+**SETUP-SVM-7**: Associate the ``ext-net`` to the Neutron router ``ipv4-router``.
+
+.. code-block:: bash
+
+ neutron router-gateway-set ipv4-router ext-net
+
+**SETUP-SVM-8**: Create an internal/tenant IPv4 network ``ipv4-int-network1``
+
+.. code-block:: bash
+
+ neutron net-create ipv4-int-network1
+
+**SETUP-SVM-9**: Create an IPv4 subnet ``ipv4-int-subnet1`` in the internal network ``ipv4-int-network1``
+
+.. code-block:: bash
+
+ neutron subnet-create --name ipv4-int-subnet1 --dns-nameserver 8.8.8.8 ipv4-int-network1 20.0.0.0/24
+
+**SETUP-SVM-10**: Associate the IPv4 internal subnet ``ipv4-int-subnet1`` to the Neutron router ``ipv4-router``.
+
+.. code-block:: bash
+
+ neutron router-interface-add ipv4-router ipv4-int-subnet1
+
+********************************************************
+Create IPv6 Subnet and Router with External Connectivity
+********************************************************
+
+Now, let us create a second neutron router where we can "manually" spawn a ``radvd`` daemon to simulate an external
+IPv6 router.
+
+**SETUP-SVM-11**: Create a second Neutron router ``ipv6-router`` which needs to provide external connectivity
+
+.. code-block:: bash
+
+ neutron router-create ipv6-router
+
+**SETUP-SVM-12**: Associate the ``ext-net`` to the Neutron router ``ipv6-router``
+
+.. code-block:: bash
+
+ neutron router-gateway-set ipv6-router ext-net
+
+**SETUP-SVM-13**: Create a second internal/tenant IPv4 network ``ipv4-int-network2``
+
+.. code-block:: bash
+
+ neutron net-create ipv4-int-network2
+
+**SETUP-SVM-14**: Create an IPv4 subnet ``ipv4-int-subnet2`` for the ``ipv6-router`` internal network
+``ipv4-int-network2``
+
+.. code-block:: bash
+
+ neutron subnet-create --name ipv4-int-subnet2 --dns-nameserver 8.8.8.8 ipv4-int-network2 10.0.0.0/24
+
+**SETUP-SVM-15**: Associate the IPv4 internal subnet ``ipv4-int-subnet2`` to the Neutron router ``ipv6-router``.
+
+.. code-block:: bash
+
+ neutron router-interface-add ipv6-router ipv4-int-subnet2
+
+**************************************************
+Prepare Image, Metadata and Keypair for Service VM
+**************************************************
+
+**SETUP-SVM-16**: Download the ``Fedora22`` image, which will be used for the ``vRouter``
+
+.. code-block:: bash
+
+ glance image-create --name 'Fedora22' --disk-format qcow2 --container-format bare --is-public true --copy-from https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Base-22-20150521.x86_64.qcow2
+
+**SETUP-SVM-17**: Create a keypair
+
+.. code-block:: bash
+
+ nova keypair-add vRouterKey > ~/vRouterKey
+
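+Please note that ``ssh`` requires the private key file to have restrictive permissions, so you may need to
+tighten them before using the key later:
+
+.. code-block:: bash
+
+ chmod 600 ~/vRouterKey
+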
+**SETUP-SVM-18**: Create ports for ``vRouter`` and both the VMs with some specific MAC addresses.
+
+.. code-block:: bash
+
+ neutron port-create --name eth0-vRouter --mac-address fa:16:3e:11:11:11 ipv4-int-network2
+ neutron port-create --name eth1-vRouter --mac-address fa:16:3e:22:22:22 ipv4-int-network1
+ neutron port-create --name eth0-VM1 --mac-address fa:16:3e:33:33:33 ipv4-int-network1
+ neutron port-create --name eth0-VM2 --mac-address fa:16:3e:44:44:44 ipv4-int-network1
+
+**********************************************************************************************************
+Boot Service VM (``vRouter``) with ``eth0`` on ``ipv4-int-network2`` and ``eth1`` on ``ipv4-int-network1``
+**********************************************************************************************************
+
+Let us boot the service VM (``vRouter``) with ``eth0`` interface on ``ipv4-int-network2`` connecting to ``ipv6-router``,
+and ``eth1`` interface on ``ipv4-int-network1`` connecting to ``ipv4-router``.
+
+**SETUP-SVM-19**: Boot the ``vRouter`` using ``Fedora22`` image on the OpenStack Compute Node with hostname
+``opnfv-os-compute``
+
+.. code-block:: bash
+
+ nova boot --image Fedora22 --flavor m1.small --user-data /opt/stack/opnfv_os_ipv6_poc/metadata.txt --availability-zone nova:opnfv-os-compute --nic port-id=$(neutron port-list | grep -w eth0-vRouter | awk '{print $2}') --nic port-id=$(neutron port-list | grep -w eth1-vRouter | awk '{print $2}') --key-name vRouterKey vRouter
+
+Please **note** that ``/opt/stack/opnfv_os_ipv6_poc/metadata.txt`` is used to enable the ``vRouter`` to automatically
+spawn a ``radvd`` daemon (an illustrative sketch of such user-data is shown after this list), so that it can:
+
+* Act as an IPv6 vRouter which advertises RA (Router Advertisements) with prefix
+  ``2001:db8:0:2::/64`` on its internal interface (``eth1``).
+* Forward IPv6 traffic received on its internal interface (``eth1``).
+
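+As a rough illustration only, user-data of the following shape could achieve this. This is a hedged sketch and
+**not** the actual contents of ``metadata.txt`` shipped in the ``opnfv_os_ipv6_poc`` repository; the package
+manager, service names and interface names are assumptions about the ``Fedora22`` guest.
+
+.. code-block:: bash
+
+   #!/bin/bash
+   # Sketch only: install radvd, enable IPv6 forwarding, assign the internal
+   # address and advertise 2001:db8:0:2::/64 on eth1.
+   dnf -y install radvd || yum -y install radvd
+   sysctl -w net.ipv6.conf.all.forwarding=1
+   ip -6 addr add 2001:db8:0:2::1/64 dev eth1
+   cat > /etc/radvd.conf << 'EOF'
+   interface eth1
+   {
+       AdvSendAdvert on;
+       prefix 2001:db8:0:2::/64
+       {
+           AdvOnLink on;
+           AdvAutonomous on;
+       };
+   };
+   EOF
+   systemctl enable radvd
+   systemctl start radvd
+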
+**SETUP-SVM-20**: Verify that ``Fedora22`` image boots up successfully and vRouter has ``ssh`` keys properly injected
+
+.. code-block:: bash
+
+ nova list
+ nova console-log vRouter
+
+Please note that **it may take a few minutes** for the necessary packages to get installed and ``ssh`` keys
+to be injected.
+
+.. code-block:: bash
+
+ # Sample Output
+ [ 762.884523] cloud-init[871]: ec2: #############################################################
+ [ 762.909634] cloud-init[871]: ec2: -----BEGIN SSH HOST KEY FINGERPRINTS-----
+ [ 762.931626] cloud-init[871]: ec2: 2048 e3:dc:3d:4a:bc:b6:b0:77:75:a1:70:a3:d0:2a:47:a9 (RSA)
+ [ 762.957380] cloud-init[871]: ec2: -----END SSH HOST KEY FINGERPRINTS-----
+ [ 762.979554] cloud-init[871]: ec2: #############################################################
+
+*******************************************
+Boot Two Other VMs in ``ipv4-int-network1``
+*******************************************
+
+In order to verify that the setup is working, let us create two cirros VMs, each with its ``eth0`` interface on
+``ipv4-int-network1``, i.e. connecting to the ``eth1`` interface of ``vRouter`` on the internal network.
+
+We will have to configure an appropriate ``mtu`` on the VMs' interfaces, taking into account the tunneling
+overhead and any physical switch requirements. If needed, push the ``mtu`` to the VMs either using ``dhcp``
+options or via ``meta-data``; a sketch of the ``dhcp`` approach is shown below.
+
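+One common way to push the ``mtu`` via ``dhcp`` is to have the Neutron DHCP agent hand out the interface-MTU
+DHCP option (option 26) through a custom ``dnsmasq`` configuration. The following is a sketch only; the file
+paths and the value ``1400`` are assumptions that depend on your installer and tunneling overhead.
+
+.. code-block:: bash
+
+   # /etc/neutron/dnsmasq-neutron.conf (example path)
+   dhcp-option-force=26,1400
+
+   # /etc/neutron/dhcp_agent.ini
+   [DEFAULT]
+   dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf
+
+   # Restart the Neutron DHCP agent afterwards so the option takes effect.
+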
+**SETUP-SVM-21**: Create VM1 on OpenStack Controller Node with hostname ``opnfv-os-controller``
+
+.. code-block:: bash
+
+ nova boot --image cirros-0.3.4-x86_64-uec --flavor m1.tiny --nic port-id=$(neutron port-list | grep -w eth0-VM1 | awk '{print $2}') --availability-zone nova:opnfv-os-controller --key-name vRouterKey --user-data /opt/stack/opnfv_os_ipv6_poc/set_mtu.sh VM1
+
+**SETUP-SVM-22**: Create VM2 on OpenStack Compute Node with hostname ``opnfv-os-compute``
+
+.. code-block:: bash
+
+ nova boot --image cirros-0.3.4-x86_64-uec --flavor m1.tiny --nic port-id=$(neutron port-list | grep -w eth0-VM2 | awk '{print $2}') --availability-zone nova:opnfv-os-compute --key-name vRouterKey --user-data /opt/stack/opnfv_os_ipv6_poc/set_mtu.sh VM2
+
+**SETUP-SVM-23**: Confirm that both the VMs are successfully booted.
+
+.. code-block:: bash
+
+ nova list
+ nova console-log VM1
+ nova console-log VM2
+
+**********************************
+Spawn ``RADVD`` in ``ipv6-router``
+**********************************
+
+Let us manually spawn a ``radvd`` daemon inside ``ipv6-router`` namespace to simulate an external router.
+First of all, we will have to identify the ``ipv6-router`` namespace and move to the namespace.
+
+Please **NOTE** that in case of the HA (High Availability) deployment model where multiple controller
+nodes are used, the ``ipv6-router`` created in step **SETUP-SVM-11** could be located on any of the controller
+nodes. Thus you need to identify the controller node on which ``ipv6-router`` is hosted in order to manually
+spawn the ``radvd`` daemon inside the ``ipv6-router`` namespace in steps **SETUP-SVM-24** through
+**SETUP-SVM-30**. The following Neutron command will display the controller hosting
+``ipv6-router``.
+
+.. code-block:: bash
+
+ neutron l3-agent-list-hosting-router ipv6-router
+
+Then log in to that controller and execute steps **SETUP-SVM-24**
+through **SETUP-SVM-30**.
+
+**SETUP-SVM-24**: Identify the ``ipv6-router`` namespace and move to the namespace
+
+.. code-block:: bash
+
+ sudo ip netns exec qrouter-$(neutron router-list | grep -w ipv6-router | awk '{print $2}') bash
+
+**SETUP-SVM-25**: Upon successful execution of the above command, you will be in the router namespace.
+Now let us configure the IPv6 address on the ``qr-xxx`` interface.
+
+.. code-block:: bash
+
+ export router_interface=$(ip a s | grep -w "global qr-*" | awk '{print $7}')
+ ip -6 addr add 2001:db8:0:1::1 dev $router_interface
+
+**SETUP-SVM-26**: Update the sample file ``/opt/stack/opnfv_os_ipv6_poc/scenario2/radvd.conf``
+with ``$router_interface``.
+
+.. code-block:: bash
+
+ cp /opt/stack/opnfv_os_ipv6_poc/scenario2/radvd.conf /tmp/radvd.$router_interface.conf
+ sed -i 's/$router_interface/'$router_interface'/g' /tmp/radvd.$router_interface.conf
+
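+For reference, after the substitution the file would look roughly like the following. This is a sketch only;
+the exact contents of the sample file in the ``opnfv_os_ipv6_poc`` repository may differ, and the ``qr-xxx``
+interface name is specific to your environment.
+
+.. code-block:: bash
+
+   # /tmp/radvd.<qr-xxx>.conf -- illustrative result of the substitution
+   interface qr-42968b9e-62
+   {
+       AdvSendAdvert on;
+       MinRtrAdvInterval 3;
+       MaxRtrAdvInterval 10;
+       prefix 2001:db8:0:1::/64
+       {
+           AdvOnLink on;
+           AdvAutonomous on;
+       };
+   };
+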
+**SETUP-SVM-27**: Spawn a ``radvd`` daemon to simulate an external router. This ``radvd`` daemon advertises an IPv6
+subnet prefix of ``2001:db8:0:1::/64`` using RA (Router Advertisement) on its ``$router_interface`` so that the ``eth0``
+interface of ``vRouter`` automatically configures an IPv6 SLAAC address.
+
+.. code-block:: bash
+
+   radvd -C /tmp/radvd.$router_interface.conf -p /tmp/br-ex.pid.radvd -m syslog
+
+**SETUP-SVM-28**: Add an IPv6 downstream route pointing to the ``eth0`` interface of vRouter.
+
+.. code-block:: bash
+
+ ip -6 route add 2001:db8:0:2::/64 via 2001:db8:0:1:f816:3eff:fe11:1111
+
+**SETUP-SVM-29**: The routing table should now look similar to the one shown below.
+
+.. code-block:: bash
+
+ ip -6 route show
+ 2001:db8:0:1::1 dev qr-42968b9e-62 proto kernel metric 256
+ 2001:db8:0:1::/64 dev qr-42968b9e-62 proto kernel metric 256 expires 86384sec
+ 2001:db8:0:2::/64 via 2001:db8:0:1:f816:3eff:fe11:1111 dev qr-42968b9e-62 proto ra metric 1024 expires 29sec
+ fe80::/64 dev qg-3736e0c7-7c proto kernel metric 256
+ fe80::/64 dev qr-42968b9e-62 proto kernel metric 256
+
+**SETUP-SVM-30**: If all goes well, the IPv6 addresses assigned to the VMs would be as follows:
+
+.. code-block:: bash
+
+ vRouter eth0 interface would have the following IPv6 address: 2001:db8:0:1:f816:3eff:fe11:1111/64
+ vRouter eth1 interface would have the following IPv6 address: 2001:db8:0:2::1/64
+ VM1 would have the following IPv6 address: 2001:db8:0:2:f816:3eff:fe33:3333/64
+ VM2 would have the following IPv6 address: 2001:db8:0:2:f816:3eff:fe44:4444/64
+
+********************************
+Testing to Verify Setup Complete
+********************************
+
+Now, let us ``ssh`` to one of the VMs, e.g. VM1, to confirm that it has successfully configured the IPv6 address
+using ``SLAAC`` with prefix ``2001:db8:0:2::/64`` from ``vRouter``.
+
+Please note that you need to get the IPv4 address associated with VM1, which can be obtained from the ``nova list`` command as shown below.
+
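+For example (one possible way; the exact layout depends on your client version), the address can be read from
+the ``Networks`` column:
+
+.. code-block:: bash
+
+   nova list | grep -w VM1
+   # or, for a single VM:
+   nova show VM1 | grep network
+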
+**SETUP-SVM-31**: ``ssh`` VM1
+
+.. code-block:: bash
+
+ ssh -i /home/odl/vRouterKey cirros@<VM1-IPv4-address>
+
+If everything goes well, ``ssh`` will be successful and you will be logged into VM1. Run some commands to verify
+that IPv6 addresses are configured on ``eth0`` interface.
+
+**SETUP-SVM-32**: Verify that ``eth0`` has an IPv6 address with the prefix ``2001:db8:0:2::/64``
+
+.. code-block:: bash
+
+ ip address show
+
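+If ``SLAAC`` worked, the output should contain an entry on ``eth0`` similar to the following illustrative
+excerpt; the interface index, ``mtu`` and lifetimes will differ in your environment.
+
+.. code-block:: bash
+
+   # Sample excerpt
+   2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc pfifo_fast qlen 1000
+       inet6 2001:db8:0:2:f816:3eff:fe33:3333/64 scope global dynamic
+          valid_lft 86395sec preferred_lft 14395sec
+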
+**SETUP-SVM-33**: Ping an external IPv6 address, e.g. that of ``ipv6-router``
+
+.. code-block:: bash
+
+ ping6 2001:db8:0:1::1
+
+If the above ping6 command succeeds, it implies that ``vRouter`` was able to successfully forward the IPv6 traffic
+to reach the external ``ipv6-router``.
+
+**SETUP-SVM-34**: When all tests show that the setup works as expected, you can exit the ``ipv6-router`` namespace.
+
+.. code-block:: bash
+
+ exit
+
+**********
+Next Steps
+**********
+
+Congratulations, you have completed the setup of using a service VM to act as an IPv6 vRouter. This setup enables
+further open innovation by any third party. Please refer to the relevant sections in the User's Guide for further
+value-added services on this IPv6 vRouter.
+
diff --git a/docs/reldoc/option-pure-os.rst b/docs/reldoc/option-pure-os.rst
new file mode 100644
index 0000000..46dcb6b
--- /dev/null
+++ b/docs/reldoc/option-pure-os.rst
@@ -0,0 +1,236 @@
+======================================================================
+Set Up a Service VM as an IPv6 vRouter in Native OpenStack Environment
+======================================================================
+
+If you intend to set up a service VM as an IPv6 vRouter in the native OpenStack environment of the
+OPNFV Brahmaputra Release base platform, the instructions are as follows.
+
+Please **NOTE** that:
+
+* Because the anti-spoofing rules of the Security Group feature in OpenStack prevent
+  a VM from forwarding packets, we need to disable the Security Group feature in the
+  native OpenStack environment.
+* The hostnames, IP addresses, and usernames in the instructions are for exemplary purpose only.
+  Please change them as needed to fit your environment.
+* The instructions apply to both the deployment model of a single controller node and
+  the HA (High Availability) deployment model where multiple controller nodes are used.
+
+*****************************
+Install OPNFV and Preparation
+*****************************
+
+**OPNFV-NATIVE-INSTALL-1**: To install the pure OpenStack option of OPNFV Brahmaputra Release:
+
+.. code-block:: bash
+
+ deploy --scenario os_ha
+
+**OPNFV-NATIVE-INSTALL-2**: Clone the following GitHub repository to get the
+configuration and metadata files
+
+.. code-block:: bash
+
+ git clone https://github.com/sridhargaddam/opnfv_os_ipv6_poc.git /opt/stack/opnfv_os_ipv6_poc
+
+**********************************************
+Disable Security Groups in OpenStack ML2 Setup
+**********************************************
+
+**OPNFV-NATIVE-SEC-1**: Change the settings in
+``/etc/neutron/plugins/ml2/ml2_conf.ini`` as follows
+
+.. code-block:: bash
+
+ # /etc/neutron/plugins/ml2/ml2_conf.ini
+ [securitygroup]
+ enable_security_group = False
+ firewall_driver = neutron.agent.firewall.NoopFirewallDriver
+
+**OPNFV-NATIVE-SEC-2**: Change the settings in ``/etc/nova/nova.conf`` as follows
+
+.. code-block:: bash
+
+ # /etc/nova/nova.conf
+ [DEFAULT]
+ security_group_api = nova
+ firewall_driver = nova.virt.firewall.NoopFirewallDriver
+
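+Please also note that the two changes above only take effect after the Neutron server and Nova compute services
+have been restarted. The exact service names depend on your distribution and installer; the commands below are
+illustrative only.
+
+.. code-block:: bash
+
+   # Service names are examples and vary between distributions/installers
+   sudo systemctl restart neutron-server
+   sudo systemctl restart openstack-nova-compute
+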
+*********************************
+Set Up Service VM as IPv6 vRouter
+*********************************
+
+**OPNFV-NATIVE-SETUP-1**: Now we assume that the OpenStack multi-node setup is up and running. The following
+commands should be executed:
+
+.. code-block:: bash
+
+ source openrc admin demo
+
+**OPNFV-NATIVE-SETUP-2**: Download the ``Fedora22`` image, which will be used for ``vRouter``
+
+.. code-block:: bash
+
+ wget https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Base-22-20150521.x86_64.qcow2
+
+**OPNFV-NATIVE-SETUP-3**: Import Fedora22 image to ``glance``
+
+.. code-block:: bash
+
+ glance image-create --name 'Fedora22' --disk-format qcow2 --container-format bare --file ./Fedora-Cloud-Base-22-20150521.x86_64.qcow2
+
+**OPNFV-NATIVE-SETUP-4**: Create Neutron routers ``ipv4-router`` and ``ipv6-router``
+which need to provide external connectivity.
+
+.. code-block:: bash
+
+ neutron router-create ipv4-router
+ neutron router-create ipv6-router
+
+**OPNFV-NATIVE-SETUP-5**: Create an external network/subnet ``ext-net`` using
+the appropriate values based on the data-center physical network setup.
+
+.. code-block:: bash
+
+ neutron net-create --router:external ext-net
+
+**OPNFV-NATIVE-SETUP-6**: If your ``opnfv-os-controller`` node has two interfaces ``eth0`` and
+``eth1``, and ``eth1`` is used for external connectivity, move the IP address of ``eth1`` to ``br-ex``.
+
+Please note that the IP address ``198.59.156.113`` and the related subnet and gateway addresses in the commands
+below are for exemplary purpose only. **Please replace them with the IP addresses of your actual network**.
+
+.. code-block:: bash
+
+ sudo ip addr del 198.59.156.113/24 dev eth1
+ sudo ovs-vsctl add-port br-ex eth1
+ sudo ifconfig eth1 up
+ sudo ip addr add 198.59.156.113/24 dev br-ex
+ sudo ifconfig br-ex up
+ sudo ip route add default via 198.59.156.1 dev br-ex
+ neutron subnet-create --disable-dhcp --allocation-pool start=198.59.156.251,end=198.59.156.254 --gateway 198.59.156.1 ext-net 198.59.156.0/24
+
+**OPNFV-NATIVE-SETUP-7**: Verify that ``br-ex`` now has the original external IP address,
+and that the default route is on ``br-ex``
+
+.. code-block:: bash
+
+ $ ip a s br-ex
+ 38: br-ex: <BROADCAST,UP,LOWER_UP> mtu 1430 qdisc noqueue state UNKNOWN group default
+ link/ether 00:50:56:82:42:d1 brd ff:ff:ff:ff:ff:ff
+ inet 198.59.156.113/24 brd 198.59.156.255 scope global br-ex
+ valid_lft forever preferred_lft forever
+ inet6 fe80::543e:28ff:fe70:4426/64 scope link
+ valid_lft forever preferred_lft forever
+ $
+ $ ip route
+ default via 198.59.156.1 dev br-ex
+ 192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.10
+ 192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1
+ 198.59.156.0/24 dev br-ex proto kernel scope link src 198.59.156.113
+
+Please note that the IP addresses above are for exemplary purpose only.
+
+**OPNFV-NATIVE-SETUP-8**: Create Neutron networks ``ipv4-int-network1`` and
+``ipv6-int-network2`` with port_security disabled
+
+.. code-block:: bash
+
+ neutron net-create --port_security_enabled=False ipv4-int-network1
+ neutron net-create --port_security_enabled=False ipv6-int-network2
+
+**OPNFV-NATIVE-SETUP-9**: Create IPv4 subnet ``ipv4-int-subnet1`` in the internal network
+``ipv4-int-network1``, and associate it to ``ipv4-router``.
+
+.. code-block:: bash
+
+ neutron subnet-create --name ipv4-int-subnet1 --dns-nameserver 8.8.8.8 ipv4-int-network1 20.0.0.0/24
+ neutron router-interface-add ipv4-router ipv4-int-subnet1
+
+**OPNFV-NATIVE-SETUP-10**: Associate the ``ext-net`` to the Neutron routers ``ipv4-router``
+and ``ipv6-router``.
+
+.. code-block:: bash
+
+ neutron router-gateway-set ipv4-router ext-net
+ neutron router-gateway-set ipv6-router ext-net
+
+**OPNFV-NATIVE-SETUP-11**: Create two subnets, one IPv4 subnet ``ipv4-int-subnet2`` and
+one IPv6 subnet ``ipv6-int-subnet2`` in ``ipv6-int-network2``, and associate both subnets to
+``ipv6-router``
+
+.. code-block:: bash
+
+ neutron subnet-create --name ipv4-int-subnet2 --dns-nameserver 8.8.8.8 ipv6-int-network2 10.0.0.0/24
+ neutron subnet-create --name ipv6-int-subnet2 --ip-version 6 --ipv6-ra-mode slaac --ipv6-address-mode slaac ipv6-int-network2 2001:db8:0:1::/64
+ neutron router-interface-add ipv6-router ipv4-int-subnet2
+ neutron router-interface-add ipv6-router ipv6-int-subnet2
+
+**OPNFV-NATIVE-SETUP-12**: Create a keypair
+
+.. code-block:: bash
+
+ nova keypair-add vRouterKey > ~/vRouterKey
+
+**OPNFV-NATIVE-SETUP-13**: Create ports for ``vRouter`` with specific MAC addresses, essentially for automation,
+so that the IPv6 addresses that will be assigned to the ports are known in advance (see the note below).
+
+.. code-block:: bash
+
+ neutron port-create --name eth0-vRouter --mac-address fa:16:3e:11:11:11 ipv6-int-network2
+ neutron port-create --name eth1-vRouter --mac-address fa:16:3e:22:22:22 ipv4-int-network1
+
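+With SLAAC, the IPv6 interface identifier is derived from the port's MAC address using modified EUI-64
+(flip the universal/local bit of the first octet and insert ``ff:fe`` in the middle), which is why fixed
+MAC addresses make the resulting IPv6 addresses predictable. For example:
+
+.. code-block:: bash
+
+   # MAC fa:16:3e:11:11:11 -> interface ID f816:3eff:fe11:1111
+   # Combined with the advertised prefix 2001:db8:0:1::/64 this gives
+   # 2001:db8:0:1:f816:3eff:fe11:1111, the vRouter eth0 address used below.
+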
+**OPNFV-NATIVE-SETUP-14**: Create ports for VM1 and VM2.
+
+.. code-block:: bash
+
+ neutron port-create --name eth0-VM1 --mac-address fa:16:3e:33:33:33 ipv4-int-network1
+ neutron port-create --name eth0-VM2 --mac-address fa:16:3e:44:44:44 ipv4-int-network1
+
+**OPNFV-NATIVE-SETUP-15**: Update ``ipv6-router`` with routing information to subnet
+``2001:db8:0:2::/64``
+
+.. code-block:: bash
+
+ neutron router-update ipv6-router --routes type=dict list=true destination=2001:db8:0:2::/64,nexthop=2001:db8:0:1:f816:3eff:fe11:1111
+
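+You may optionally confirm that the static route was accepted; the ``routes`` field of the router should list it.
+
+.. code-block:: bash
+
+   neutron router-show ipv6-router
+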
+**OPNFV-NATIVE-SETUP-16**: Boot Service VM (``vRouter``), VM1 and VM2
+
+.. code-block:: bash
+
+ nova boot --image Fedora22 --flavor m1.small --user-data /opt/stack/opnfv_os_ipv6_poc/metadata.txt --availability-zone nova:opnfv-os-compute --nic port-id=$(neutron port-list | grep -w eth0-vRouter | awk '{print $2}') --nic port-id=$(neutron port-list | grep -w eth1-vRouter | awk '{print $2}') --key-name vRouterKey vRouter
+ nova list
+   nova console-log vRouter # Please wait 10 to 15 minutes for the necessary packages (like radvd) to be installed and for vRouter to come up.
+ nova boot --image cirros-0.3.4-x86_64-uec --flavor m1.tiny --nic port-id=$(neutron port-list | grep -w eth0-VM1 | awk '{print $2}') --availability-zone nova:opnfv-os-controller --key-name vRouterKey --user-data /opt/stack/opnfv_os_ipv6_poc/set_mtu.sh VM1
+ nova boot --image cirros-0.3.4-x86_64-uec --flavor m1.tiny --nic port-id=$(neutron port-list | grep -w eth0-VM2 | awk '{print $2}') --availability-zone nova:opnfv-os-compute --key-name vRouterKey --user-data /opt/stack/opnfv_os_ipv6_poc/set_mtu.sh VM2
+ nova list # Verify that all the VMs are in ACTIVE state.
+
+**OPNFV-NATIVE-SETUP-17**: If all goes well, the IPv6 addresses assigned to the VMs
+would be as follows:
+
+.. code-block:: bash
+
+ vRouter eth0 interface would have the following IPv6 address: 2001:db8:0:1:f816:3eff:fe11:1111/64
+ vRouter eth1 interface would have the following IPv6 address: 2001:db8:0:2::1/64
+ VM1 would have the following IPv6 address: 2001:db8:0:2:f816:3eff:fe33:3333/64
+ VM2 would have the following IPv6 address: 2001:db8:0:2:f816:3eff:fe44:4444/64
+
+**OPNFV-NATIVE-SETUP-18**: Now we can ``SSH`` to ``vRouter``.
+
+Please **NOTE** that in case of the HA (High Availability) deployment model where multiple controller
+nodes are used, the ``ipv6-router`` created in step **OPNFV-NATIVE-SETUP-4** could be located on any of the
+controller nodes. Thus you need to identify the controller node on which ``ipv6-router`` is hosted in order to
+enter the ``ipv6-router`` namespace. The following Neutron command will display the controller hosting
+``ipv6-router``.
+
+.. code-block:: bash
+
+ neutron l3-agent-list-hosting-router ipv6-router
+
+Then log in to that controller.
+
+To ``SSH`` to ``vRouter``, you can execute the following command.
+
+.. code-block:: bash
+
+ sudo ip netns exec qrouter-$(neutron router-list | grep -w ipv6-router | awk '{print $2}') ssh -i ~/vRouterKey fedora@2001:db8:0:1:f816:3eff:fe11:1111
+
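+Once logged in, you can optionally verify that the ``vRouter`` is functioning. The checks below are
+illustrative and assume the image has finished its first boot.
+
+.. code-block:: bash
+
+   # Inside the vRouter
+   ps aux | grep [r]advd                 # the radvd daemon should be running
+   sysctl net.ipv6.conf.all.forwarding   # should report 1
+   ip -6 addr show dev eth1              # expect 2001:db8:0:2::1/64
+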
diff --git a/docs/setupservicevm/0-ipv6-configguide-prep-infra.rst b/docs/setupservicevm/0-ipv6-configguide-prep-infra.rst
index 9703991..997e12e 100644
--- a/docs/setupservicevm/0-ipv6-configguide-prep-infra.rst
+++ b/docs/setupservicevm/0-ipv6-configguide-prep-infra.rst
@@ -2,11 +2,17 @@
Infrastructure Setup
====================
-In order to set up the service VM as an IPv6 vRouter, we need to
-prepare 3 hosts, each of which has minimum 8GB RAM and 40GB storage. One host is used as OpenStack Controller
+In order to set up the service VM as an IPv6 vRouter, we need to prepare 3 hosts,
+each of which has a minimum of 8GB RAM and 40GB storage. One host is used as the OpenStack Controller
Node. The second host is used as Open Daylight Controller Node. And the third one is used as
OpenStack Compute Node.
+Please **NOTE** that in case of the HA (High Availability) deployment model where multiple controller
+nodes are used, the setup procedure is the same. When ``ipv6-router`` is created in step **SETUP-SVM-11**,
+it could be located on any of the controller nodes. Thus you need to identify the controller node on which
+``ipv6-router`` is hosted in order to manually spawn the ``radvd`` daemon inside the ``ipv6-router`` namespace
+in steps **SETUP-SVM-24** through **SETUP-SVM-30**.
+
For exemplary purpose, we assume:
* The hostname of OpenStack Controller+Network+Compute Node is ``opnfv-os-controller``, and the host IP address
diff --git a/docs/setupservicevm/4-ipv6-configguide-servicevm.rst b/docs/setupservicevm/4-ipv6-configguide-servicevm.rst
index fb9f450..61412e1 100644
--- a/docs/setupservicevm/4-ipv6-configguide-servicevm.rst
+++ b/docs/setupservicevm/4-ipv6-configguide-servicevm.rst
@@ -266,6 +266,20 @@ Spawn ``RADVD`` in ``ipv6-router``
Let us manually spawn a ``radvd`` daemon inside ``ipv6-router`` namespace to simulate an external router.
First of all, we will have to identify the ``ipv6-router`` namespace and move to the namespace.
+Please **NOTE** that in case of the HA (High Availability) deployment model where multiple controller
+nodes are used, the ``ipv6-router`` created in step **SETUP-SVM-11** could be located on any of the controller
+nodes. Thus you need to identify the controller node on which ``ipv6-router`` is hosted in order to manually
+spawn the ``radvd`` daemon inside the ``ipv6-router`` namespace in steps **SETUP-SVM-24** through
+**SETUP-SVM-30**. The following Neutron command will display the controller hosting
+``ipv6-router``.
+
+.. code-block:: bash
+
+ neutron l3-agent-list-hosting-router ipv6-router
+
+Then log in to that controller and execute steps **SETUP-SVM-24**
+through **SETUP-SVM-30**.
+
**SETUP-SVM-24**: identify the ``ipv6-router`` namespace and move to the namespace
.. code-block:: bash
diff --git a/docs/setupservicevm/5-ipv6-configguide-scenario-1-native-os.rst b/docs/setupservicevm/5-ipv6-configguide-scenario-1-native-os.rst
index 1a303cc..24899da 100644
--- a/docs/setupservicevm/5-ipv6-configguide-scenario-1-native-os.rst
+++ b/docs/setupservicevm/5-ipv6-configguide-scenario-1-native-os.rst
@@ -22,11 +22,15 @@ For exemplary purpose, we assume:
Underlay Network Topology - Scenario 1
-**Please note that the IP address shown in** :numref:`s1-figure1`
-**are for exemplary purpose. You need to configure your public IP
-address connecting to Internet according to your actual network
-infrastructure. And you need to make sure the private IP address are
-not conflicting with other subnets**.
+**Please NOTE that:**
+
+* **The IP addresses shown in** :numref:`s1-figure1` **are for exemplary purpose only.
+  You need to configure your public IP address connecting to the Internet according
+  to your actual network infrastructure, and you need to make sure the private IP addresses
+  do not conflict with other subnets**.
+* **Although the deployment model of a single controller node is assumed, in case of the HA (High Availability)
+  deployment model where multiple controller nodes are used, there is no impact and the setup procedure
+  is the same.**
************
Prerequisite
diff --git a/docs/setupservicevm/architecture-design.rst b/docs/setupservicevm/architecture-design.rst
index 61b9ad5..e2546ca 100644
--- a/docs/setupservicevm/architecture-design.rst
+++ b/docs/setupservicevm/architecture-design.rst
@@ -11,3 +11,6 @@ shown as follows in :numref:`arch-figure1`:
Architectural Design of Using a VM as an IPv6 vRouter
+This design applies to the deployment model of a single controller node as well as the HA (High Availability)
+deployment model with multiple controller nodes.
+
diff --git a/docs/setupservicevm/images/ipv6-topology-scenario-3.png b/docs/setupservicevm/images/ipv6-topology-scenario-3.png
new file mode 100644
index 0000000..5cc16bd
--- /dev/null
+++ b/docs/setupservicevm/images/ipv6-topology-scenario-3.png
Binary files differ
diff --git a/docs/setupservicevm/index.rst b/docs/setupservicevm/index.rst
index 292cffe..d60b172 100644
--- a/docs/setupservicevm/index.rst
+++ b/docs/setupservicevm/index.rst
@@ -23,6 +23,17 @@ environment. There are three scenarios.
controller which is built from the latest stable/Lithium branch which includes the fix.
In this scenario, we can fully automate the setup similar to Scenario 1.
+Please **NOTE** that the instructions in this document assume the deployment model of a single
+controller node. In case of the HA (High Availability) deployment model where multiple controller
+nodes are used, the setup procedure is the same. In particular:
+
+* There is **No Impact** on Scenario 1 and Scenario 3.
+* For Scenario 2, when ``ipv6-router`` is created in step **SETUP-SVM-11**, it could be
+  located on any of the controller nodes. Thus you need to identify the controller node on which
+  ``ipv6-router`` is hosted in order to manually spawn the ``radvd`` daemon inside the
+  ``ipv6-router`` namespace in steps **SETUP-SVM-24** through **SETUP-SVM-30**.
+
+
.. toctree::
:numbered:
:maxdepth: 4
diff --git a/docs/setupservicevm/scenario-3-0-ipv6-configguide-prep-infra.rst b/docs/setupservicevm/scenario-3-0-ipv6-configguide-prep-infra.rst
index 9703991..5db060c 100644
--- a/docs/setupservicevm/scenario-3-0-ipv6-configguide-prep-infra.rst
+++ b/docs/setupservicevm/scenario-3-0-ipv6-configguide-prep-infra.rst
@@ -2,11 +2,15 @@
Infrastructure Setup
====================
-In order to set up the service VM as an IPv6 vRouter, we need to
-prepare 3 hosts, each of which has minimum 8GB RAM and 40GB storage. One host is used as OpenStack Controller
+In order to set up the service VM as an IPv6 vRouter, we need to prepare 3 hosts,
+each of which has a minimum of 8GB RAM and 40GB storage. One host is used as the OpenStack Controller
Node. The second host is used as Open Daylight Controller Node. And the third one is used as
OpenStack Compute Node.
+**Please NOTE that although the deployment model of a single controller node is assumed, in case of the HA
+(High Availability) deployment model where multiple controller nodes are used, there is no impact and the
+setup procedure is the same.**
+
For exemplary purpose, we assume:
* The hostname of OpenStack Controller+Network+Compute Node is ``opnfv-os-controller``, and the host IP address
@@ -17,15 +21,15 @@ For exemplary purpose, we assume:
* We use ``opnfv`` as username to login.
* We use ``devstack`` to install OpenStack Kilo. Please note that OpenStack Liberty can be used as well.
-The underlay network topology of those 3 hosts are shown as follows in :numref:`s2-figure1`:
+The underlay network topology of those 3 hosts is shown in :numref:`s3-figure1`:
-.. figure:: images/ipv6-topology-scenario-2.png
- :name: s2-figure1
+.. figure:: images/ipv6-topology-scenario-3.png
+ :name: s3-figure1
:width: 100%
- Underlay Network Topology - Scenario 2
+ Underlay Network Topology - Scenario 3
-**Please note that the IP address shown in** :numref:`s2-figure1`
+**Please note that the IP addresses shown in** :numref:`s3-figure1`
**are for exemplary purpose. You need to configure your public IP
address connecting to Internet according to your actual network
infrastructure. And you need to make sure the private IP address are
diff --git a/docs/setupservicevm/scenario-3-1-ipv6-configguide-odl-setup.rst b/docs/setupservicevm/scenario-3-1-ipv6-configguide-odl-setup.rst
index 89152d5..3a068e0 100644
--- a/docs/setupservicevm/scenario-3-1-ipv6-configguide-odl-setup.rst
+++ b/docs/setupservicevm/scenario-3-1-ipv6-configguide-odl-setup.rst
@@ -10,26 +10,22 @@ For exemplary purpose, we assume:
* We use ``opnfv`` as username to login.
* Java 7 is installed in directory ``/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.85-2.6.1.2.el7_1.x86_64/``
-**ODL-1**: Login to Open Daylight Controller Node with username ``opnfv``.
+Please **NOTE** that Scenario 3 uses an Open Daylight Lithium controller which is built
+from the latest ``stable/Lithium`` branch that includes the bug fix. Therefore, there is a **prerequisite**
+that you are able to build this Open Daylight Lithium Controller from the latest ``stable/Lithium``
+branch. Please refer to the relevant documentation from Open Daylight.
-**ODL-2**: Download the ODL Lithium distribution from
-``http://www.opendaylight.org/software/downloads``
+**ODL-1**: **Prerequisite** - build the Open Daylight Lithium Controller from the latest
+``stable/Lithium`` branch, and make it available for step **ODL-3**.
-.. code-block:: bash
-
- wget https://nexus.opendaylight.org/content/groups/public/org/opendaylight/integration/distribution-karaf/0.3.3-Lithium-SR3/distribution-karaf-0.3.3-Lithium-SR3.tar.gz
+**ODL-2**: Login to Open Daylight Controller Node with username ``opnfv``.
-**Note**: This **ODL-2** is a placeholder, and to be replaced with actual URL of a build from the latest
-``stable/Lithium`` branch which includes the fix.
-
-**ODL-3**: Extract the tar file
+**ODL-3**: Extract the tar file of your custom build of Open Daylight Lithium Controller
+from step **ODL-1**.
.. code-block:: bash
- tar -zxvf distribution-karaf-0.3.3-Lithium-SR3.tar.gz
-
-**Note**: This **ODL-3** is a placeholder, and to be replaced with actual tarball of a build from the latest
-``stable/Lithium`` branch which includes the fix.
+ tar -zxvf <filename-of-your-custom-build>.tar.gz
**ODL-4**: Install Java7
diff --git a/docs/setupservicevm/scenario-3-2-ipv6-configguide-os-controller.rst b/docs/setupservicevm/scenario-3-2-ipv6-configguide-os-controller.rst
index 050be79..d1af395 100644
--- a/docs/setupservicevm/scenario-3-2-ipv6-configguide-os-controller.rst
+++ b/docs/setupservicevm/scenario-3-2-ipv6-configguide-os-controller.rst
@@ -56,8 +56,12 @@ For **Fedora**:
cp /opt/stack/opnfv_os_ipv6_poc/scenario2/local.conf.odl.controller ~/devstack/local.conf
-Please **note** that you need to change the IP address of ``ODL_MGR_IP`` to point to your actual IP address
-of Open Daylight Controller.
+Please **note** that:
+
+* Note 1: Because Scenario 3 and Scenario 2 are essentially the same, their only difference
+  being the use of a different build of Open Daylight, they share the same OpenStack ``local.conf`` file.
+* Note 2: You need to change ``ODL_MGR_IP`` to point to the actual IP address
+  of your Open Daylight Controller.
**OS-N-6**: Initiate Openstack setup by invoking ``stack.sh``
diff --git a/docs/setupservicevm/scenario-3-3-ipv6-configguide-os-compute.rst b/docs/setupservicevm/scenario-3-3-ipv6-configguide-os-compute.rst
index a27ae4c..34af6b2 100644
--- a/docs/setupservicevm/scenario-3-3-ipv6-configguide-os-compute.rst
+++ b/docs/setupservicevm/scenario-3-3-ipv6-configguide-os-compute.rst
@@ -58,9 +58,11 @@ For **Fedora**:
Please Note:
-* Note 1: you need to change the IP address of ``SERVICE_HOST`` to point to your actual IP address
+* Note 1: Because Scenario 3 and Scenario 2 are essentially the same, their only difference
+  being the use of a different build of Open Daylight, they share the same OpenStack ``local.conf`` file.
+* Note 2: you need to change ``SERVICE_HOST`` to point to the actual IP address
of OpenStack Controller.
-* Note 2: you need to change the IP address of ``ODL_MGR_IP`` to point to your actual IP address
+* Note 3: you need to change ``ODL_MGR_IP`` to point to the actual IP address
of Open Daylight Controller.
**OS-M-6**: Initiate Openstack setup by invoking ``stack.sh``