Diffstat (limited to 'docs/release')
 docs/release/installation/_static/.gitkeep           |   0
 docs/release/installation/abstract.rst               |  10
 docs/release/installation/appendices.rst             |  82
 docs/release/installation/index.rst                  |  13
 docs/release/installation/installation_baremetal.rst | 583
 docs/release/installation/installation_virtual.rst   |  36
 docs/release/installation/installationprocedure.rst  | 309
 docs/release/installation/introduction.rst           |  45
 docs/release/installation/postinstall.rst            | 352
 docs/release/installation/requirements.rst           |  48
10 files changed, 1165 insertions, 313 deletions
diff --git a/docs/release/installation/_static/.gitkeep b/docs/release/installation/_static/.gitkeep
new file mode 100644
index 00000000..e69de29b
--- /dev/null
+++ b/docs/release/installation/_static/.gitkeep
diff --git a/docs/release/installation/abstract.rst b/docs/release/installation/abstract.rst
new file mode 100644
index 00000000..4303ec6c
--- /dev/null
+++ b/docs/release/installation/abstract.rst
@@ -0,0 +1,10 @@
+Abstract
+========
+
+This document will explain how to install the Euphrates release of OPNFV with
+JOID including installing JOID, configuring JOID for your environment, and
+deploying OPNFV with different SDN solutions in HA, or non-HA mode.
+
+.. License
+.. =======
+.. TODO: Add license
diff --git a/docs/release/installation/appendices.rst b/docs/release/installation/appendices.rst
new file mode 100644
index 00000000..0746cf8c
--- /dev/null
+++ b/docs/release/installation/appendices.rst
@@ -0,0 +1,82 @@
+.. highlight:: bash
+
+
+Appendices
+==========
+
+
+Appendix A: Single Node Deployment
+----------------------------------
+By default, running the script ``./03-maasdeploy.sh`` will automatically
+create the KVM VMs on a single machine and configure everything for you.
+
+::
+
+ if [ ! -e ./labconfig.yaml ]; then
+ virtinstall=1
+ labname="default"
+ cp ../labconfig/default/labconfig.yaml ./
+ cp ../labconfig/default/deployconfig.yaml ./
+
+Please change joid/ci/labconfig/default/labconfig.yaml accordingly. The MAAS
+deployment script will do the following:
+
+1. Create the bootstrap VM.
+2. Install MAAS on the jumphost.
+3. Configure MAAS to enlist and commission a VM for the Juju bootstrap node.
+
+Later, the 03-maasdeploy.sh script will create three additional VMs and
+register them in the MAAS server:
+
+::
+
+ if [ "$virtinstall" -eq 1 ]; then
+ sudo virt-install --connect qemu:///system --name $NODE_NAME --ram 8192 --cpu host --vcpus 4 \
+ --disk size=120,format=qcow2,bus=virtio,io=native,pool=default \
+ $netw $netw --boot network,hd,menu=off --noautoconsole --vnc --print-xml | tee $NODE_NAME
+
+ nodemac=`grep "mac address" $NODE_NAME | head -1 | cut -d '"' -f 2`
+ sudo virsh -c qemu:///system define --file $NODE_NAME
+ rm -f $NODE_NAME
+ maas $PROFILE machines create autodetect_nodegroup='yes' name=$NODE_NAME \
+ tags='control compute' hostname=$NODE_NAME power_type='virsh' mac_addresses=$nodemac \
+ power_parameters_power_address='qemu+ssh://'$USER'@'$MAAS_IP'/system' \
+ architecture='amd64/generic' power_parameters_power_id=$NODE_NAME
+ nodeid=$(maas $PROFILE machines read | jq -r '.[] | select(.hostname == '\"$NODE_NAME\"').system_id')
+ maas $PROFILE tag update-nodes control add=$nodeid || true
+ maas $PROFILE tag update-nodes compute add=$nodeid || true
+
+ fi
+
+
+Appendix B: Automatic Device Discovery
+--------------------------------------
+If your bare metal servers support IPMI, they can be discovered and enlisted automatically
+by the MAAS server. You need to configure bare metal servers to PXE boot on the network
+interface where they can reach the MAAS server. With nodes set to boot from a PXE image,
+they will start, look for a DHCP server, receive the PXE boot details, boot the image,
+contact the MAAS server and shut down.
+
+During this process, the MAAS server will be passed information about the node, including
+the architecture, MAC address and other details which will be stored in the database of
+nodes. You can accept and commission the nodes via the web interface. When the nodes have
+been accepted the selected series of Ubuntu will be installed.
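+
+If you prefer the CLI, the enlisted nodes can also be accepted and
+commissioned with the MAAS command line client. The commands below are a
+sketch and assume a logged-in MAAS CLI profile in ``$PROFILE``, as used by
+the JOID scripts:
+
+::
+
+    # accept all newly enlisted machines at once
+    maas $PROFILE machines accept-all
+    # or commission a single machine by its system_id
+    maas $PROFILE machine commission $SYSTEM_ID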
+
+
+Appendix C: Machine Constraints
+-------------------------------
+Juju and MAAS together allow you to assign different roles to servers, so that
+hardware and software can be configured according to their roles. We have
+briefly mentioned and used this feature in our example. Please visit Juju
+Machine Constraints https://jujucharms.com/docs/stable/charms-constraints and
+MAAS tags https://maas.ubuntu.com/docs/tags.html for more information.
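+
+For example, with the MAAS tags assigned in ``labconfig.yaml`` (``control``,
+``compute``), a charm can be pinned to matching machines. This is a sketch,
+not part of the JOID scripts:
+
+::
+
+    # deploy the nova-compute charm only to MAAS machines tagged 'compute'
+    juju deploy nova-compute --constraints "tags=compute"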
+
+
+Appendix D: Offline Deployment
+------------------------------
+When you have a limited access policy in your environment, for example when
+only the Jump Host has Internet access but not the rest of the servers, JOID
+provides tools to support offline installation.
+
+The following package set is provided to those wishing to experiment with a
+'disconnected from the internet' setup when deploying JOID utilizing MAAS.
+These instructions provide basic guidance as to how to accomplish the task,
+but it should be noted that due to the current reliance of MAAS on DNS, the
+behavior and success of deployment may vary depending on infrastructure
+setup. An official guided setup is in the roadmap for the next release:
+
+1. Get the packages from here: https://launchpad.net/~thomnico/+archive/ubuntu/ubuntu-cloud-mirrors
+
+ .. note::
+      The mirror is quite large, 700 GB in size, and does not mirror the SDN
+      repo/PPA.
+
+2. Additionally, instructions for making Juju use a private repository of
+   charms instead of an external location, and for configuring
+   environments.yaml to use cloudimg-base-url, are provided via the following
+   link: https://github.com/juju/docs/issues/757
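+
+Once the mirror is populated, the servers behind the Jump Host can be pointed
+at it through their APT configuration. The mirror address below is only an
+example placeholder for a host reachable on the admin network:
+
+::
+
+    # /etc/apt/sources.list on the disconnected nodes
+    deb http://10.5.1.1/ubuntu xenial main restricted universe multiverse
+    deb http://10.5.1.1/ubuntu xenial-updates main restricted universe multiverse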
diff --git a/docs/release/installation/index.rst b/docs/release/installation/index.rst
index aaf9f06d..bab9bf2b 100644
--- a/docs/release/installation/index.rst
+++ b/docs/release/installation/index.rst
@@ -9,8 +9,13 @@ JOID installation instruction
=============================
.. toctree::
- :numbered:
- :maxdepth: 2
-
- ./installationprocedure.rst
+   :numbered:
+   :maxdepth: 2
+
+   abstract.rst
+ introduction.rst
+ requirements.rst
+ installation_baremetal.rst
+ installation_virtual.rst
+ postinstall.rst
+ appendices.rst
diff --git a/docs/release/installation/installation_baremetal.rst b/docs/release/installation/installation_baremetal.rst
new file mode 100644
index 00000000..ff4e6e53
--- /dev/null
+++ b/docs/release/installation/installation_baremetal.rst
@@ -0,0 +1,583 @@
+.. highlight:: bash
+
+
+Bare Metal Installation
+=======================
+Before proceeding, make sure that your hardware infrastructure satisfies the
+:ref:`setup-requirements`.
+
+
+Networking
+----------
+Make sure you have at least two networks configured:
+
+1. *Admin* (management) network with gateway to access the Internet (for
+ downloading installation resources).
+2. A *public/floating* network to consume by tenants for floating IPs.
+
+You may configure other networks, e.g. for data or storage, based on your
+network options for Openstack.
+
+
+.. _jumphost-install-os:
+
+Jumphost installation and configuration
+---------------------------------------
+
+1. Install Ubuntu 16.04 (Xenial) LTS server on Jumphost (one of the physical
+ nodes).
+
+ .. tip::
+      Use ``ubuntu`` as both username and password, as this matches the MAAS
+      credentials installed later.
+
+ During the OS installation, install the OpenSSH server package to
+ allow SSH connections to the Jumphost.
+
+ If the data size of the image is too big or slow (e.g. when mounted
+ through a slow virtual console), you can also use the Ubuntu mini ISO.
+ Install packages: standard system utilities, basic Ubuntu server,
+ OpenSSH server, Virtual Machine host.
+
+ If you have issues with blank console after booting, see
+ `this SO answer <https://askubuntu.com/a/38782>`_ and set
+ ``nomodeset``, (removing ``quiet splash`` can also be useful to see log
+ during booting) either through console in recovery mode or via SSH (if
+ installed).
+
+2. Install git and bridge-utils packages
+
+ ::
+
+ sudo apt install git bridge-utils
+
+3. Configure bridges for each network to be used.
+
+ Example ``/etc/network/interfaces`` file:
+
+ ::
+
+ source /etc/network/interfaces.d/*
+
+ # The loopback network interface (set by Ubuntu)
+ auto lo
+ iface lo inet loopback
+
+ # Admin network interface
+ iface eth0 inet manual
+ auto brAdmin
+ iface brAdmin inet static
+ bridge_ports eth0
+ address 10.5.1.1
+ netmask 255.255.255.0
+
+ # Ext. network for floating IPs
+ iface eth1 inet manual
+ auto brExt
+ iface brExt inet static
+ bridge_ports eth1
+ address 10.5.15.1
+ netmask 255.255.255.0
+
+ ..
+
+ .. note::
+      If you choose to use separate networks for management, public, data and
+      storage, then you need to create a bridge for each interface. In case
+      of VLAN tagging, use the appropriate network on the Jumphost depending
+      on the VLAN ID on the interface.
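+
+For instance, a tagged storage network could be bridged as follows (VLAN ID
+303 and the addresses are example values; the ``vlan`` package must be
+installed):
+
+::
+
+    auto eth0.303
+    iface eth0.303 inet manual
+        vlan-raw-device eth0
+
+    auto brStor
+    iface brStor inet static
+        bridge_ports eth0.303
+        address 10.5.3.1
+        netmask 255.255.255.0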
+
+
+Configure JOID for your lab
+---------------------------
+
+All configuration for the JOID deployment is specified in a ``labconfig.yaml``
+file. Here you describe all your physical nodes, their roles in OpenStack,
+their network interfaces, IPMI parameters etc. It's also where you configure
+your OPNFV deployment and MAAS networks/spaces.
+You can find example configuration files from already existing nodes in the
+`repository <https://gerrit.opnfv.org/gerrit/gitweb?p=joid.git;a=tree;f=labconfig>`_.
+
+First of all, download JOID to your Jumphost. We recommend doing this in your
+home directory.
+
+::
+
+ git clone https://gerrit.opnfv.org/gerrit/p/joid.git
+
+.. tip::
+ You can select the stable version of your choice by specifying the git
+ branch, for example:
+
+ ::
+
+ git clone -b stable/danube https://gerrit.opnfv.org/gerrit/p/joid.git
+
+Create a directory in ``joid/labconfig/<company_name>/<pod_number>/`` and
+create or copy a ``labconfig.yaml`` configuration file to that directory.
+For example:
+
+::
+
+ # All JOID actions are done from the joid/ci directory
+ cd joid/ci
+ mkdir -p ../labconfig/your_company/pod1
+ cp ../labconfig/nokia/pod1/labconfig.yaml ../labconfig/your_company/pod1/
+
+Example ``labconfig.yaml`` configuration file:
+
+::
+
+ lab:
+ location: your_company
+ racks:
+ - rack: pod1
+ nodes:
+ - name: rack-1-m1
+ architecture: x86_64
+ roles: [network,control]
+ nics:
+ - ifname: eth0
+ spaces: [admin]
+ mac: ["12:34:56:78:9a:bc"]
+ - ifname: eth1
+ spaces: [floating]
+ mac: ["12:34:56:78:9a:bd"]
+ power:
+ type: ipmi
+ address: 192.168.10.101
+ user: admin
+ pass: admin
+ - name: rack-1-m2
+ architecture: x86_64
+ roles: [compute,control,storage]
+ nics:
+ - ifname: eth0
+ spaces: [admin]
+ mac: ["23:45:67:89:ab:cd"]
+ - ifname: eth1
+ spaces: [floating]
+ mac: ["23:45:67:89:ab:ce"]
+ power:
+ type: ipmi
+ address: 192.168.10.102
+ user: admin
+ pass: admin
+ - name: rack-1-m3
+ architecture: x86_64
+ roles: [compute,control,storage]
+ nics:
+ - ifname: eth0
+ spaces: [admin]
+ mac: ["34:56:78:9a:bc:de"]
+ - ifname: eth1
+ spaces: [floating]
+ mac: ["34:56:78:9a:bc:df"]
+ power:
+ type: ipmi
+ address: 192.168.10.103
+ user: admin
+ pass: admin
+ - name: rack-1-m4
+ architecture: x86_64
+ roles: [compute,storage]
+ nics:
+ - ifname: eth0
+ spaces: [admin]
+ mac: ["45:67:89:ab:cd:ef"]
+ - ifname: eth1
+ spaces: [floating]
+ mac: ["45:67:89:ab:ce:f0"]
+ power:
+ type: ipmi
+ address: 192.168.10.104
+ user: admin
+ pass: admin
+ - name: rack-1-m5
+ architecture: x86_64
+ roles: [compute,storage]
+ nics:
+ - ifname: eth0
+ spaces: [admin]
+ mac: ["56:78:9a:bc:de:f0"]
+ - ifname: eth1
+ spaces: [floating]
+ mac: ["56:78:9a:bc:df:f1"]
+ power:
+ type: ipmi
+ address: 192.168.10.105
+ user: admin
+ pass: admin
+ floating-ip-range: 10.5.15.6,10.5.15.250,10.5.15.254,10.5.15.0/24
+ ext-port: "eth1"
+ dns: 8.8.8.8
+ opnfv:
+ release: d
+ distro: xenial
+ type: noha
+ openstack: ocata
+ sdncontroller:
+ - type: nosdn
+ storage:
+ - type: ceph
+ disk: /dev/sdb
+ feature: odl_l2
+ spaces:
+ - type: admin
+ bridge: brAdmin
+ cidr: 10.5.1.0/24
+ gateway:
+ vlan:
+ - type: floating
+ bridge: brExt
+ cidr: 10.5.15.0/24
+ gateway: 10.5.15.1
+ vlan:
+
+.. TODO: Details about the labconfig.yaml file
+
+Once you have prepared the configuration file, you may begin with the automatic
+MAAS deployment.
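+
+Before starting the deployment, you can quickly check that the file at least
+contains the expected top-level sections (an illustrative check, not part of
+the JOID tooling):
+
+::
+
+    grep -E "^(lab|opnfv):" ../labconfig/your_company/pod1/labconfig.yaml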
+
+MAAS Install
+------------
+
+This section will guide you through the MAAS deployment. This is the first of
+two JOID deployment steps.
+
+.. note::
+ For all the commands in this document, please do not use a ``root`` user
+ account to run but instead use a non-root user account. We recommend using
+   For all the commands in this document, please do not use a ``root`` user
+   account but instead a non-root user account. We recommend using the
+   ``ubuntu`` user as described above.
+
+   If you have already enabled and installed MAAS for your environment, there
+   is no need to enable or install it again. If you have patches from a
+   previous MAAS install, you can apply them here.
+
+   A MAAS pre-installed without the ``03-maasdeploy.sh`` script is not
+   supported. We strongly suggest using the ``03-maasdeploy.sh`` script to
+   deploy the MAAS and Juju environment.
+
+With the ``labconfig.yaml`` configuration file ready, you can start the MAAS
+deployment. In the joid/ci directory, run the following command:
+
+::
+
+ # in joid/ci directory
+ ./03-maasdeploy.sh custom <absolute path of config>/labconfig.yaml
+
+..
+
+If you prefer, you can also host your ``labconfig.yaml`` file remotely and JOID
+will download it from there. Just run
+
+::
+
+ # in joid/ci directory
+ ./03-maasdeploy.sh custom http://<web_site_location>/labconfig.yaml
+
+..
+
+This step will take approximately 30 minutes to a couple of hours depending on
+your environment.
+This script will do the following:
+
+* If this is your first time running this script, it will download all the
+ required packages.
+* Install MAAS on the Jumphost.
+* Configure MAAS to enlist and commission a VM for Juju bootstrap node.
+* Configure MAAS to enlist and commission bare metal servers.
+* Download and load Ubuntu server images to be used by MAAS.
+
+During deployment, once MAAS is installed, configured and launched, you can
+visit the MAAS web UI and observe the progress of the deployment.
+Simply open the IP of your jumphost in a web browser and navigate to the
+``/MAAS`` directory (e.g. ``http://10.5.1.1/MAAS`` in our example). You can
+login with username ``ubuntu`` and password ``ubuntu``. In the *Nodes* page,
+you can see the bootstrap node and the bare metal servers and their status.
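+
+The same information is available from the MAAS CLI. Assuming the logged-in
+CLI profile set up by the deployment script (the profile name may differ in
+your environment), for example:
+
+::
+
+    # list all machines with their current status
+    maas $PROFILE machines read | jq -r '.[] | "\(.hostname) \(.status_name)"'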
+
+.. hint::
+ If you need to re-run this step, first undo the performed actions by
+ running
+
+ ::
+
+ # in joid/ci
+ ./cleanvm.sh
+ ./cleanmaas.sh
+ # now you can run the ./03-maasdeploy.sh script again
+
+ ..
+
+
+Juju Install
+------------
+
+This section will guide you through the Juju and OPNFV deployment. This is the
+second of two JOID deployment steps.
+
+JOID allows you to deploy different combinations of OpenStack and SDN solutions
+in HA or no-HA mode. For OpenStack, it supports Newton and Ocata. For SDN, it
+supports Open vSwitch, OpenContrail, OpenDaylight and ONOS (Open Network
+Operating System). In addition to HA or no-HA mode, it also supports deploying
+the latest from the development tree (tip).
+
+To deploy OPNFV on the previously deployed MAAS system, use the ``deploy.sh``
+script. For example:
+
+::
+
+ # in joid/ci directory
+ ./deploy.sh -d xenial -m openstack -o ocata -s nosdn -f none -t noha -l custom
+
+The above command starts an OPNFV deployment with Ubuntu Xenial (16.04) distro,
+OpenStack model, Ocata version of OpenStack, Open vSwitch (and no other SDN),
+no special features, no-HA OpenStack mode and with custom labconfig. That is,
+this corresponds to the ``os-nosdn-nofeature-noha`` OPNFV deployment scenario.
+
+.. note::
+ You can see the usage info of the script by running
+
+ ::
+
+ ./deploy.sh --help
+
+ Possible script arguments are as follows.
+
+ **Ubuntu distro to deploy**
+ ::
+
+ [-d <trusty|xenial>]
+
+   - ``trusty``: Ubuntu 14.04.
+   - ``xenial``: Ubuntu 16.04.
+
+ **Model to deploy**
+ ::
+
+ [-m <openstack|kubernetes>]
+
+   JOID provides two different models to deploy.
+
+ - ``openstack``: Openstack, which will be used for KVM/LXD
+ container-based workloads.
+ - ``kubernetes``: Kubernetes model will be used for docker-based
+ workloads.
+
+ **Version of Openstack deployed**
+ ::
+
+      [-o <newton|ocata>]
+
+ - ``newton``: Newton version of OpenStack.
+ - ``ocata``: Ocata version of OpenStack.
+
+ **SDN controller**
+ ::
+
+ [-s <nosdn|odl|opencontrail|onos>]
+
+ - ``nosdn``: Open vSwitch only and no other SDN.
+ - ``odl``: OpenDayLight Boron version.
+ - ``opencontrail``: OpenContrail SDN.
+ - ``onos``: ONOS framework as SDN.
+
+ **Feature to deploy** (comma separated list)
+ ::
+
+      [-f <lxd|dvr|sfc|dpdk|ipv6|lb|none>]
+
+ - ``none``: No special feature will be enabled.
+   - ``ipv6``: IPv6 will be enabled for tenants in OpenStack.
+ - ``lxd``: With this feature hypervisor will be LXD rather than KVM.
+ - ``dvr``: Will enable distributed virtual routing.
+ - ``dpdk``: Will enable DPDK feature.
+   - ``sfc``: Will enable the SFC feature; only supported with ONOS
+     deployment.
+ - ``lb``: Load balancing in case of Kubernetes will be enabled.
+
+ **Mode of Openstack deployed**
+ ::
+
+ [-t <noha|ha|tip>]
+
+ - ``noha``: No High Availability.
+ - ``ha``: High Availability.
+ - ``tip``: The latest from the development tree.
+
+ **Where to deploy**
+ ::
+
+ [-l <custom|default|...>]
+
+   - ``custom``: For bare metal deployment, where ``labconfig.yaml`` is
+     provided externally and not part of the JOID package.
+   - ``default``: For virtual deployment, where installation will be done on
+     KVM VMs created using ``03-maasdeploy.sh``.
+
+ **Architecture**
+ ::
+
+ [-a <amd64|ppc64el|aarch64>]
+
+   - ``amd64``: Only the x86 architecture will be used. Future versions will
+     support arm64 as well.
+
+This step may take up to a couple of hours, depending on your configuration,
+internet connectivity etc. You can check the status of the deployment by
+running this command in another terminal:
+
+::
+
+ watch juju status --format tabular
+
+
+.. hint::
+   If you need to re-run this step, first undo the performed actions by
+   running
+
+   ::
+
+ # in joid/ci
+ ./clean.sh
+ # now you can run the ./deploy.sh script again
+
+ ..
+
+
+OPNFV Scenarios in JOID
+-----------------------
+The following OPNFV scenarios can be deployed using JOID. A separate YAML
+bundle will be created to deploy each individual scenario.
+
+======================= ======= ===============================================
+Scenario Owner Known Issues
+======================= ======= ===============================================
+os-nosdn-nofeature-ha Joid
+os-nosdn-nofeature-noha Joid
+os-odl_l2-nofeature-ha Joid Floating ips are not working on this deployment.
+os-nosdn-lxd-ha Joid Yardstick team is working to support.
+os-nosdn-lxd-noha Joid Yardstick team is working to support.
+os-onos-nofeature-ha ONOSFW
+os-onos-sfc-ha ONOSFW
+k8-nosdn-nofeature-noha Joid No support from Functest and Yardstick
+k8-nosdn-lb-noha Joid No support from Functest and Yardstick
+======================= ======= ===============================================
+
+
+.. _troubleshooting:
+
+Troubleshoot
+------------
+By default, debug is enabled in the scripts and error messages will be
+printed on the SSH terminal where you are running the scripts.
+
+Logs are indispensable when it comes time to troubleshoot. If you want to see
+all the service unit deployment logs, you can run ``juju debug-log`` in another
+terminal. The debug-log command shows the consolidated logs of all Juju agents
+(machine and unit logs) running in the environment.
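+
+The output can be narrowed down to the unit you are interested in, for
+example (the unit name here is illustrative):
+
+::
+
+    juju debug-log --include unit-nova-compute-0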
+
+To view a single service unit deployment log, use ``juju ssh`` to access the
+deployed unit. For example, log into the ``nova-compute`` unit and look at
+``/var/log/juju/unit-nova-compute-0.log`` for more info:
+
+::
+
+ ubuntu@R4N4B1:~$ juju ssh nova-compute/0
+ Warning: Permanently added '172.16.50.60' (ECDSA) to the list of known hosts.
+ Warning: Permanently added '3-r4n3b1-compute.maas' (ECDSA) to the list of known hosts.
+ Welcome to Ubuntu 16.04.1 LTS (GNU/Linux 3.13.0-77-generic x86_64)
+
+ * Documentation: https://help.ubuntu.com/
+ <skipped>
+ Last login: Tue Feb 2 21:23:56 2016 from bootstrap.maas
+ ubuntu@3-R4N3B1-compute:~$ sudo -i
+ root@3-R4N3B1-compute:~# cd /var/log/juju/
+ root@3-R4N3B1-compute:/var/log/juju# ls
+ machine-2.log unit-ceilometer-agent-0.log unit-ceph-osd-0.log unit-neutron-contrail-0.log unit-nodes-compute-0.log unit-nova-compute-0.log unit-ntp-0.log
+ root@3-R4N3B1-compute:/var/log/juju#
+
+.. note::
+ By default Juju will add the Ubuntu user keys for authentication into the
+ deployed server and only ssh access will be available.
+
+Once you resolve the error, go back to the jump host to rerun the charm hook
+with
+
+::
+
+ $ juju resolved --retry <unit>
+
+If you would like to start over, run
+``juju destroy-environment <environment name>`` to release the resources, then
+you can run ``deploy.sh`` again.
+
+To access any of the nodes or containers, use
+
+::
+
+ juju ssh <service name>/<instance id>
+
+For example:
+
+::
+
+ juju ssh openstack-dashboard/0
+ juju ssh nova-compute/0
+ juju ssh neutron-gateway/0
+
+You can see the available nodes and containers by running
+
+::
+
+ juju status
+
+All charm log files are available under ``/var/log/juju``.
+
+-----
+
+If you have questions, you can join the JOID channel ``#opnfv-joid`` on
+`Freenode <https://webchat.freenode.net/>`_.
+
+
+Common Issues
+-------------
+
+The following are the common issues we have collected from the community:
+
+- The right variables are not passed as part of the deployment procedure,
+  e.g.:
+
+ ::
+
+ ./deploy.sh -o newton -s nosdn -t ha -l custom -f none
+
+- If you have not set up MAAS with ``03-maasdeploy.sh``, then the
+  ``./clean.sh`` command could hang and the ``juju status`` command may hang,
+  because the correct MAAS API keys are not mentioned in the cloud listing
+  for MAAS.
+
+  *Solution*: Make sure you have a MAAS cloud listed using ``juju clouds``
+  and that the correct MAAS API key has been added.
+- Deployment times out: use the command ``juju status`` and make sure all
+ service containers receive an IP address and they are executing code.
+ Ensure there is no service in the error state.
+- In case the cleanup process hangs, run the ``juju destroy-model`` command
+  manually.
+
+**Direct console access** via the OpenStack GUI can be quite helpful if you
+need to login to a VM but cannot get to it over the network.
+It can be enabled by setting the ``console-access-protocol`` in the
+``nova-cloud-controller`` to ``vnc``. One option is to directly edit the
+``juju-deployer`` bundle and set it there prior to deploying OpenStack.
+
+::
+
+ nova-cloud-controller:
+ options:
+ console-access-protocol: vnc
+
+To access the console, just click on the instance in the OpenStack GUI and
+select the Console tab.
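+
+Alternatively, on an already-deployed model, the same option can usually be
+changed at runtime with the Juju CLI:
+
+::
+
+    juju config nova-cloud-controller console-access-protocol=vnc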
+
+
+
+.. Links:
+.. _`Ubuntu download`: https://www.ubuntu.com/download/server
diff --git a/docs/release/installation/installation_virtual.rst b/docs/release/installation/installation_virtual.rst
new file mode 100644
index 00000000..58340809
--- /dev/null
+++ b/docs/release/installation/installation_virtual.rst
@@ -0,0 +1,36 @@
+.. highlight:: bash
+
+
+Virtual Installation
+=======================
+
+The virtual deployment of JOID is very simple and does not require any special
+configuration. To deploy a virtual JOID environment follow these few simple
+steps:
+
+1. Install a clean Ubuntu 16.04 (Xenial) server on the machine. You can use
+ the tips noted in the first step of the :ref:`jumphost-install-os` for
+ bare metal deployment. However, no specialized configuration is needed,
+ just make sure you have Internet connectivity.
+
+2. Run the MAAS deployment for virtual deployment without customized labconfig
+ file:
+
+ ::
+
+ # in joid/ci directory
+ ./03-maasdeploy.sh
+
+3. Run the Juju/OPNFV deployment with your desired configuration parameters,
+ but with ``-l default -i 1`` for virtual deployment. For example to deploy
+ the Kubernetes model:
+
+ ::
+
+ # in joid/ci directory
+ ./deploy.sh -d xenial -s nosdn -t noha -f none -m kubernetes -l default -i 1
+
+ ..
+
+Now you should have a working JOID deployment with three virtual nodes. In case
+of any issues, refer to the :ref:`troubleshooting` section.
diff --git a/docs/release/installation/installationprocedure.rst b/docs/release/installation/installationprocedure.rst
deleted file mode 100644
index a61df2ca..00000000
--- a/docs/release/installation/installationprocedure.rst
+++ /dev/null
@@ -1,309 +0,0 @@
-Bare Metal Installations:
-=========================
-
-Requirements as per Pharos:
-===========================
-
-Networking:
-===========
-
-**Minimum 2 networks**
-
-| ``1. First for Admin/Management network with gateway to access external network``
-| ``2. Second for floating ip network to consume by tenants for floating ips``
-
-**NOTE: JOID support multiple isolated networks for API, data as well as storage.
-Based on your network options for Openstack.**
-
-**Minimum 6 physical servers**
-
-1. Jump host server:
-
-| ``  Minimum H/W Spec needed``
-| ``  CPU cores: 16``
-| ``  Memory: 32 GB``
-| ``  Hard Disk: 1(250 GB)``
-| ``  NIC: if0(Admin, Management), if1 (external network)``
-
-2. Node servers (minimum 5):
-
-| ``  Minimum H/W Spec``
-| ``  CPU cores: 16``
-| ``  Memory: 32 GB``
-| ``  Hard Disk: 2(1 TB preferred SSD) this includes the space for ceph as well``
-| ``  NIC: if0 (Admin, Management), if1 (external network)``
-
-
-**NOTE: Above configuration is minimum and for better performance and usage of
-the Openstack please consider higher spec for each nodes.**
-
-Make sure all servers are connected to top of rack switch and configured accordingly. No DHCP server should be up and configured. Only gateway at eth0 and eth1 network should be configure to access the network outside your lab.
-
-------------------------
-Jump node configuration:
-------------------------
-
-1. Install Ubuntu 16.04.1 LTS server version of OS on the first server.
-2. Install the git and bridge-utils packages on the server and configure minimum two bridges on jump host:
-
-brAdm and brExt cat /etc/network/interfaces
-
-| ``   # The loopback network interface``
-| ``   auto lo``
-| ``   iface lo inet loopback``
-| ``   iface if0 inet manual``
-| ``   auto brAdm ``
-| ``   iface brAdm inet static``
-| ``       address 10.5.1.1``
-| ``       netmask 255.255.255.0``
-| ``       bridge_ports if0``
-| ``   iface if1 inet manual``
-| ``   auto brExt``
-| ``   iface brExt inet static``
-| ``       address 10.5.15.1``
-| ``       netmask 255.255.255.0``
-| ``       bridge_ports if1``
-
-**NOTE: If you choose to use the separate network for management, pulic , data and
-storage then you need to create bridge for each interface. In case of VLAN tags
-use the appropriate network on jump-host depend upon VLAN ID on the interface.**
-
-
-Configure JOID for your lab
-===========================
-
-**Get the joid code from gerritt**
-
-*git clone https://gerrit.opnfv.org/gerrit/joid.git*
-
-**Enable MAAS (labconfig.yaml is must and base for MAAS installation and scenario deployment)**
-
-If you have already enabled maas for your environment and installed it then there is no need to enabled it again or install it. If you have patches from previous MAAS enablement then you can apply it here.
-
-NOTE: If MAAS is pre installed without 03-maasdeploy.sh not supported. We strongly suggest to use 03-maaseploy.sh to deploy the MAAS and JuJu environment.
-
-If enabling first time then follow it further.
-- Create a directory in joid/labconfig/<company name>/<pod number>/ for example
-
-*mkdir joid/labconfig/intel/pod7/*
-
-- copy labconfig.yaml from pod6 to pod7
-*cp joid/labconfig/intel/pod5/\* joid/labconfig/intel/pod7/*
-
-labconfig.yaml file
-===================
-
--------------
-Prerequisite:
--------------
-
-1. Make sure Jump host node has been configured with bridges on each interface,
-so that appropriate MAAS and JUJU bootstrap VM can be created. For example if
-you have three network admin, data and floating ip then I would suggest to give names
-like brAdm, brData and brExt etc.
-2. You have information about the node MAC address and power management details (IPMI IP, username, password) of the nodes used for deployment.
-
----------------------
-modify labconfig.yaml
----------------------
-
-This file has been used to configure your maas and bootstrap node in a
-VM. Comments in the file are self explanatory and we expect fill up the
-information according to match lab infrastructure information. Sample
-labconfig.yaml can be found at
-https://gerrit.opnfv.org/gerrit/gitweb?p=joid.git;a=blob;f=labconfig/intel/pod6/labconfig.yaml
-
-*lab:
- location: intel
- racks:
- - rack: pod6
- nodes:
- - name: rack-6-m1
- architecture: x86_64
- roles: [network,control]
- nics:
- - ifname: eth1
- spaces: [public]
- mac: ["xx:xx:xx:xx:xx:xx"]
- power:
- type: ipmi
- address: xx.xx.xx.xx
- user: xxxx
- pass: xxxx
- - name: rack-6-m1
- architecture: x86_64
- roles: [network,control]
- nics:
- - ifname: eth1
- spaces: [public]
- mac: ["xx:xx:xx:xx:xx:xx"]
- power:
- type: ipmi
- address: xx.xx.xx.xx
- user: xxxx
- pass: xxxx
- - name: rack-6-m1
- architecture: x86_64
- roles: [network,control]
- nics:
- - ifname: eth1
- spaces: [public]
- mac: ["xx:xx:xx:xx:xx:xx"]
- power:
- type: ipmi
- address: xx.xx.xx.xx
- user: xxxx
- pass: xxxx
- - name: rack-6-m1
- architecture: x86_64
- roles: [network,control]
- nics:
- - ifname: eth1
- spaces: [public]
- mac: ["xx:xx:xx:xx:xx:xx"]
- power:
- type: ipmi
- address: xx.xx.xx.xx
- user: xxxx
- pass: xxxx
- - name: rack-6-m1
- architecture: x86_64
- roles: [network,control]
- nics:
- - ifname: eth1
- spaces: [public]
- mac: ["xx:xx:xx:xx:xx:xx"]
- power:
- type: ipmi
- address: xx.xx.xx.xx
- user: xxxx
- pass: xxxx
- floating-ip-range: 10.5.15.6,10.5.15.250,10.5.15.254,10.5.15.0/24
- ext-port: "eth1"
- dns: 8.8.8.8
-opnfv:
- release: d
- distro: xenial
- type: noha
- openstack: newton
- sdncontroller:
- - type: nosdn
- storage:
- - type: ceph
- disk: /dev/sdb
- feature: odl_l2
- spaces:
- - type: floating
- bridge: brEx
- cidr: 10.5.15.0/24
- gateway: 10.5.15.254
- vlan:
- - type: admin
- bridge: brAdm
- cidr: 10.5.1.0/24
- gateway:
- vlan:*
-
-Deployment of OPNFV using JOID:
-===============================
-
-Once you have done the change in above section then run the following commands to do the automatic deployments.
-
-------------
-MAAS Install
-------------
-
-After integrating the changes as mentioned above run the MAAS install.
-then run the below commands to start the MAAS deployment.
-
-``   ./03-maasdeploy.sh custom <absolute path of config>/labconfig.yaml ``
-or
-``   ./03-maasdeploy.sh custom http://<web site location>/labconfig.yaml ``
-
-For deployment of Danube release on KVM please use the following command.
-
-``   ./03-maasdeploy.sh default ``
-
--------------
-OPNFV Install
--------------
-
-| ``   ./deploy.sh -o newton -s nosdn -t noha -l custom -f none -d xenial -m openstack``
-| ``   ``
-
-./deploy.sh -o newton -s nosdn -t noha -l custom -f none -d xenial -m openstack
-
-NOTE: Possible options are as follows:
-
-choose which sdn controller to use.
- [-s <nosdn|odl|opencontrail|onos>]
- nosdn: openvswitch only and no other SDN.
- odl: OpenDayLight Boron version.
- opencontrail: OpenContrail SDN.
- onos: ONOS framework as SDN.
-
-Mode of Openstack deployed.
- [-t <noha|ha|tip>]
- noha: NO HA mode of Openstack
- ha: HA mode of openstack.
-
-Wihch version of Openstack deployed.
- [-o <Newton|Mitaka>]
- Newton: Newton version of openstack.
- Mitaka: Mitaka version of openstack.
-
-Where to deploy
- [-l <custom | default>] etc...
- custom: For bare metal deployment where labconfig.yaml provided externally and not part of JOID.
- default: For virtual deployment where installation will be done on KVM created using 03-maasdeploy.sh
-
-Which features to deploy. Comma-separated list.
- [-f <lxd|dvr|sfc|dpdk|ipv6|none>]
- none: no special feature will be enabled.
- ipv6: IPv6 will be enabled for tenants in OpenStack.
- lxd: the hypervisor will be LXD rather than KVM.
- dvr: will enable distributed virtual routing.
- dpdk: will enable the DPDK feature.
- sfc: will enable the SFC feature; only supported with ONOS deployment.
-
-Which Ubuntu distro to use.
- [ -d <trusty|xenial> ]
-
-Which model to deploy.
-JOID introduces various models to deploy, apart from OpenStack, for Docker-based container workloads.
-[-m <openstack|kubernetes>]
- openstack: OpenStack, which will be used for KVM/LXD container-based workloads.
- kubernetes: the Kubernetes model will be used for Docker-based workloads.
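Putting the flags together, a deploy invocation can be composed from the option values described above. The sketch below builds one illustrative combination; pick the values matching your own scenario:

```shell
# Compose a deploy.sh command line from the options described above.
# All values are illustrative; substitute the ones for your scenario.
os_release="newton"   # -o: OpenStack release
sdn="nosdn"           # -s: SDN controller
mode="noha"           # -t: HA mode
lab="custom"          # -l: deployment target
feature="none"        # -f: extra features (comma-separated)
distro="xenial"       # -d: Ubuntu release
model="openstack"     # -m: model to deploy
cmd="./deploy.sh -o $os_release -s $sdn -t $mode -l $lab -f $feature -d $distro -m $model"
echo "$cmd"
```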
-
------------------------
-OPNFV Scenarios in JOID
------------------------
-
-The following OPNFV scenarios can be deployed using JOID. A separate YAML
-bundle will be created to deploy each individual scenario.
-
-======================== ======= ================================================
-Scenario                 Owner   Known Issues
-======================== ======= ================================================
-os-nosdn-nofeature-ha    Joid
-os-nosdn-nofeature-noha  Joid
-os-odl_l2-nofeature-ha   Joid    Floating IPs are not working on this deployment.
-os-nosdn-lxd-ha          Joid    Yardstick team is working to support.
-os-nosdn-lxd-noha        Joid    Yardstick team is working to support.
-os-onos-nofeature-ha     ONOSFW
-os-onos-sfc-ha           ONOSFW
-k8-nosdn-nofeature-noha  Joid    No support from Functest and Yardstick.
-k8-nosdn-lb-noha         Joid    No support from Functest and Yardstick.
-======================== ======= ================================================
-
-------------
-Troubleshoot
-------------
-
-By default, debug is enabled in the scripts and error messages will be
-printed on the SSH terminal where you are running the scripts.
-
-To access any control or compute node, use::
-
-    juju ssh <service name>/<instance id>
-
-For example, to log in to the openstack-dashboard container::
-
-    juju ssh openstack-dashboard/0
-    juju ssh nova-compute/0
-    juju ssh neutron-gateway/0
-
-All charm log files are available under ``/var/log/juju``.
-
-By default, Juju adds the current user's keys for authentication into the
-deployed servers, and only SSH access will be available.
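Since charm logs live under /var/log/juju, a unit name maps mechanically to its log file name. This sketch only builds the command string; the unit name is an example:

```shell
# Build a "tail the charm log" command for a given unit. Juju names the log
# /var/log/juju/unit-<application>-<number>.log, i.e. the unit name with '/'
# replaced by '-'. The unit below is illustrative.
unit="nova-compute/0"
logfile="/var/log/juju/unit-$(echo "$unit" | tr '/' '-').log"
echo "juju ssh $unit -- sudo tail -n 50 $logfile"
```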
-
diff --git a/docs/release/installation/introduction.rst b/docs/release/installation/introduction.rst
new file mode 100644
index 00000000..52e04f26
--- /dev/null
+++ b/docs/release/installation/introduction.rst
@@ -0,0 +1,45 @@
+Introduction
+============
+
+JOID in brief
+-------------
+JOID as *Juju OPNFV Infrastructure Deployer* allows you to deploy different
+combinations of OpenStack release and SDN solution in HA or non-HA mode. For
+OpenStack, JOID currently supports Newton and Ocata. For SDN, it supports
+Openvswitch, OpenContrail, OpenDayLight, and ONOS. In addition to HA or non-HA
+mode, it also supports deploying from the latest development tree.
+
+JOID heavily utilizes the technology developed in Juju and MAAS.
+
+Juju_ is a state-of-the-art, open source modelling tool for operating software
+in the cloud. Juju allows you to deploy, configure, manage, maintain, and scale
+cloud applications quickly and efficiently on public clouds, as well as on
+physical servers, OpenStack, and containers. You can use Juju from the command
+line or through its beautiful `GUI <JUJU GUI_>`_.
+(source: `Juju Docs <https://jujucharms.com/docs/2.2/about-juju>`_)
+
+MAAS_ is *Metal As A Service*. It lets you treat physical servers like virtual
+machines (instances) in the cloud. Rather than having to manage each server
+individually, MAAS turns your bare metal into an elastic cloud-like resource.
+Machines can be quickly provisioned and then destroyed again as easily as you
+can with instances in a public cloud. ... In particular, it is designed to work
+especially well with Juju, the service and model management service. It's a
+perfect arrangement: MAAS manages the machines and Juju manages the services
+running on those machines.
+(source: `MAAS Docs <https://docs.ubuntu.com/maas/2.1/en/index>`_)
+
+Typical JOID Architecture
+-------------------------
+The MAAS server is installed and configured on Jumphost with Ubuntu 16.04 LTS
+server with access to the Internet. Another VM is created to be managed by
+MAAS as a bootstrap node for Juju. The rest of the resources, bare metal or
+virtual, will be registered and provisioned in MAAS. And finally the MAAS
+environment details are passed to Juju for use.
+
+.. TODO: setup diagram
+
+
+.. Links:
+.. _Juju: https://jujucharms.com/
+.. _`JUJU GUI`: https://jujucharms.com/docs/stable/controllers-gui
+.. _MAAS: https://maas.io/
diff --git a/docs/release/installation/postinstall.rst b/docs/release/installation/postinstall.rst
new file mode 100644
index 00000000..750ea4ad
--- /dev/null
+++ b/docs/release/installation/postinstall.rst
@@ -0,0 +1,352 @@
+.. highlight:: bash
+
+Post Installation
+=================
+
+Testing Your Deployment
+-----------------------
+Once the Juju deployment is complete, use ``juju status`` to verify that all
+deployed units are in the *Ready* state.
+
+Find the OpenStack dashboard IP address from the ``juju status`` output, and
+see if you can log in via a web browser. The domain, username and password are
+``admin_domain``, ``admin`` and ``openstack``.
+
+Optionally, see if you can log in to the Juju GUI. Run ``juju gui`` to see the
+login details.
+
+If you deploy OpenDaylight, OpenContrail or ONOS, find the IP address of the
+web UI and login. Please refer to each SDN bundle.yaml for the login
+username/password.
+
+.. note::
+ If the deployment worked correctly, you can get easier access to the web
+ dashboards with the ``setupproxy.sh`` script described in the next section.
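A quick way to verify readiness is to scan the status output for units whose workload is not yet active. The sketch below runs the check against a canned status snippet (illustrative data) so it works without a live model; with a real deployment you would pipe ``juju status`` in instead:

```shell
# Check a juju-status-like listing for units that are not yet active.
# status_sample stands in for real `juju status` output (illustrative data).
status_sample='nova-compute/0   workload:active
neutron-gateway/0   workload:active'
if echo "$status_sample" | grep -Eq 'workload:(error|blocked|waiting|maintenance)'; then
    result="some units are not ready"
else
    result="all units ready"
fi
echo "$result"
```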
+
+
+Create proxies to the dashboards
+--------------------------------
+MAAS, Juju and OpenStack/Kubernetes all come with their own web-based
+dashboards. However, they might be on private networks and require SSH
+tunnelling to see them. To simplify access to them, you can use the following
+script to configure the Apache server on Jumphost to work as a proxy to Juju
+and OpenStack/Kubernetes dashboards. Furthermore, this script also creates a
+JOID deployment homepage with links to these dashboards, also listing their
+access credentials.
+
+Simply run the following command after JOID has been deployed.
+
+::
+
+ # run in joid/ci directory
+ # for OpenStack model:
+ ./setupproxy.sh openstack
+ # for Kubernetes model:
+ ./setupproxy.sh kubernetes
+
+You can also use the ``-v`` argument for more verbose output with xtrace.
+
+After the script has finished, it will print out the addresses and credentials
+to the dashboards. You can also find the JOID deployment homepage if you
+open the Jumphost's IP address in your web browser.
+
+
+Configuring OpenStack
+---------------------
+
+At the end of the deployment, the ``admin-openrc`` with OpenStack login
+credentials will be created for you. You can source the file and start
+configuring OpenStack via CLI.
+
+::
+
+ . ~/joid_config/admin-openrc
+
+The script ``openstack.sh`` under ``joid/ci`` can be used to configure
+OpenStack after deployment.
+
+::
+
+ ./openstack.sh <nosdn> custom xenial newton
+
+The following command is used to set up the domain in Heat.
+
+::
+
+ juju run-action heat/0 domain-setup
+
+The following scripts upload the cloud images and create a sample network for testing.
+
+::
+
+ joid/juju/get-cloud-images
+ joid/juju/joid-configure-openstack
+
+
+Configuring Kubernetes
+----------------------
+
+The script ``k8.sh`` under ``joid/ci`` can be used to show the Kubernetes
+workload and create sample pods.
+
+::
+
+ ./k8.sh
+
+
+Configuring OpenStack
+---------------------
+At the end of the deployment, the ``admin-openrc`` file with the OpenStack
+login credentials is created for you. Its contents will look like the
+following:
+
+::
+
+ cat ~/joid_config/admin-openrc
+ export OS_USERNAME=admin
+ export OS_PASSWORD=openstack
+ export OS_TENANT_NAME=admin
+ export OS_AUTH_URL=http://172.16.50.114:5000/v2.0
+ export OS_REGION_NAME=RegionOne
+
+We have prepared some scripts to help you configure the OpenStack cloud that
+you just deployed. In each SDN directory, for example ``joid/ci/opencontrail``,
+there is a ``scripts`` folder where you can find them. These scripts are
+created to help you configure a basic OpenStack cloud and verify it. For
+more information on OpenStack Cloud configuration, please refer to the
+OpenStack Cloud Administrator Guide:
+http://docs.openstack.org/user-guide-admin/.
+Similarly, for complete SDN configuration, please refer to the respective SDN
+administrator guide.
+
+Each SDN solution requires slightly different setup. Please refer to the README
+in each SDN folder. Most likely you will need to modify the ``openstack.sh``
+and ``cloud-setup.sh`` scripts for the floating IP range, private IP network,
+and SSH keys. Please go through ``openstack.sh``, ``glance.sh`` and
+``cloud-setup.sh`` and make changes as you see fit.
+
+Let’s take a look at the scripts for Open vSwitch and briefly go through each
+script so you know what you need to change for your own environment.
+
+::
+
+ $ ls ~/joid/juju
+ configure-juju-on-openstack get-cloud-images joid-configure-openstack
+
+openstack.sh
+------------
+Let’s first look at ``openstack.sh``. Three functions are defined first:
+``configOpenrc()``, ``unitAddress()``, and ``unitMachine()``.
+
+::
+
+ configOpenrc() {
+ cat <<-EOF
+ export SERVICE_ENDPOINT=$4
+ unset SERVICE_TOKEN
+ unset SERVICE_ENDPOINT
+ export OS_USERNAME=$1
+ export OS_PASSWORD=$2
+ export OS_TENANT_NAME=$3
+ export OS_AUTH_URL=$4
+ export OS_REGION_NAME=$5
+ EOF
+ }
+
+ unitAddress() {
+ if [[ "$jujuver" < "2" ]]; then
+ juju status --format yaml | python -c "import yaml; import sys; print yaml.load(sys.stdin)[\"services\"][\"$1\"][\"units\"][\"$1/$2\"][\"public-address\"]" 2> /dev/null
+ else
+ juju status --format yaml | python -c "import yaml; import sys; print yaml.load(sys.stdin)[\"applications\"][\"$1\"][\"units\"][\"$1/$2\"][\"public-address\"]" 2> /dev/null
+ fi
+ }
+
+ unitMachine() {
+ if [[ "$jujuver" < "2" ]]; then
+ juju status --format yaml | python -c "import yaml; import sys; print yaml.load(sys.stdin)[\"services\"][\"$1\"][\"units\"][\"$1/$2\"][\"machine\"]" 2> /dev/null
+ else
+ juju status --format yaml | python -c "import yaml; import sys; print yaml.load(sys.stdin)[\"applications\"][\"$1\"][\"units\"][\"$1/$2\"][\"machine\"]" 2> /dev/null
+ fi
+ }
+
+The function ``configOpenrc()`` creates the OpenStack login credentials, the
+function ``unitAddress()`` finds the IP address of a unit, and the function
+``unitMachine()`` finds the machine info of a unit.
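The YAML lookup that unitAddress() performs can be illustrated on a canned ``juju status`` fragment; the sketch below uses a lightweight awk version of the same extraction on sample data (not a live model, where the functions above query ``juju status --format yaml`` directly):

```shell
# Demonstrate the lookup unitAddress() does, on a canned status fragment.
# The address below is sample data, not from a real deployment.
sample='applications:
  keystone:
    units:
      keystone/0:
        public-address: 172.16.50.114'
addr=$(echo "$sample" | awk '/public-address:/ {print $2}')
echo "$addr"
```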
+
+::
+
+ create_openrc() {
+ keystoneIp=$(keystoneIp)
+ if [[ "$jujuver" < "2" ]]; then
+ adminPasswd=$(juju get keystone | grep admin-password -A 5 | grep value | awk '{print $2}' 2> /dev/null)
+ else
+ adminPasswd=$(juju config keystone | grep admin-password -A 5 | grep value | awk '{print $2}' 2> /dev/null)
+ fi
+
+ configOpenrc admin $adminPasswd admin http://$keystoneIp:5000/v2.0 RegionOne > ~/joid_config/admin-openrc
+ chmod 0600 ~/joid_config/admin-openrc
+ }
+
+This finds the IP address of the keystone unit 0, writes the OpenStack admin
+credentials to a new file named ``admin-openrc`` in the ``~/joid_config/``
+folder, and changes the permissions of the file. It is important to change the
+credentials here if you use a different password in the deployment Juju charm
+bundle.yaml.
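The shape of the generated file can be sketched with placeholder values; the password and endpoint below are illustrative, not taken from a real deployment, and the function is trimmed to the ``OS_*`` exports:

```shell
# Minimal sketch of the openrc content configOpenrc() emits, trimmed to the
# OS_* exports and fed placeholder values.
configOpenrc() {
    cat <<EOF
export OS_USERNAME=$1
export OS_PASSWORD=$2
export OS_TENANT_NAME=$3
export OS_AUTH_URL=$4
export OS_REGION_NAME=$5
EOF
}
rc=$(configOpenrc admin openstack admin http://10.0.0.10:5000/v2.0 RegionOne)
echo "$rc"
```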
+
+::
+
+ neutron net-show ext-net > /dev/null 2>&1 || neutron net-create ext-net \
+ --router:external=True \
+ --provider:network_type flat \
+ --provider:physical_network physnet1
+
+::
+
+ neutron subnet-show ext-subnet > /dev/null 2>&1 || neutron subnet-create ext-net \
+ --name ext-subnet --allocation-pool start=$EXTNET_FIP,end=$EXTNET_LIP \
+ --disable-dhcp --gateway $EXTNET_GW $EXTNET_NET
+
+This section creates the ext-net and ext-subnet used for floating IPs.
+
+::
+
+ openstack congress datasource create nova "nova" \
+ --config username=$OS_USERNAME \
+ --config tenant_name=$OS_TENANT_NAME \
+ --config password=$OS_PASSWORD \
+ --config auth_url=http://$keystoneIp:5000/v2.0
+
+This section creates the Congress datasource for various services.
+Each service datasource will have an entry in the file.
+
+get-cloud-images
+----------------
+
+::
+
+ folder=/srv/data/
+ sudo mkdir $folder || true
+
+ if grep -q 'virt-type: lxd' bundles.yaml; then
+ URLS=" \
+ http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-lxc.tar.gz \
+ http://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-root.tar.gz "
+
+ else
+ URLS=" \
+ http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img \
+ http://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img \
+ http://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img \
+ http://mirror.catn.com/pub/catn/images/qcow2/centos6.4-x86_64-gold-master.img \
+ http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2 \
+ http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img "
+ fi
+
+ for URL in $URLS
+ do
+ FILENAME=${URL##*/}
+ if [ -f $folder/$FILENAME ];
+ then
+ echo "$FILENAME already downloaded."
+ else
+ wget -O $folder/$FILENAME $URL
+ fi
+ done
+
+This section of the file downloads the images to the Jumphost, if not already
+present, to be used with the OpenStack VIM.
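The ``${URL##*/}`` expansion used in the loop strips everything up to the last slash, leaving the bare file name:

```shell
# ${URL##*/} removes the longest prefix matching */ (greedy), so only the
# final path component - the image file name - remains.
URL="http://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img"
FILENAME=${URL##*/}
echo "$FILENAME"
```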
+
+.. note::
+ The image downloading and uploading might take too long and time out. In
+ this case, use juju ssh glance/0 to log in to the glance unit 0 and run the
+ script again, or manually run the glance commands.
+
+joid-configure-openstack
+------------------------
+
+::
+
+ source ~/joid_config/admin-openrc
+
+First, source the ``admin-openrc`` file.
+
+::
+
+ #Upload images to glance
+ glance image-create --name="Xenial LXC x86_64" --visibility=public --container-format=bare --disk-format=root-tar --property architecture="x86_64" < /srv/data/xenial-server-cloudimg-amd64-root.tar.gz
+ glance image-create --name="Cirros LXC 0.3" --visibility=public --container-format=bare --disk-format=root-tar --property architecture="x86_64" < /srv/data/cirros-0.3.4-x86_64-lxc.tar.gz
+ glance image-create --name="Trusty x86_64" --visibility=public --container-format=ovf --disk-format=qcow2 < /srv/data/trusty-server-cloudimg-amd64-disk1.img
+ glance image-create --name="Xenial x86_64" --visibility=public --container-format=ovf --disk-format=qcow2 < /srv/data/xenial-server-cloudimg-amd64-disk1.img
+ glance image-create --name="CentOS 6.4" --visibility=public --container-format=bare --disk-format=qcow2 < /srv/data/centos6.4-x86_64-gold-master.img
+ glance image-create --name="Cirros 0.3" --visibility=public --container-format=bare --disk-format=qcow2 < /srv/data/cirros-0.3.4-x86_64-disk.img
+
+Upload the images into Glance to be used for creating the VM.
+
+::
+
+ # adjust tiny image
+ nova flavor-delete m1.tiny
+ nova flavor-create m1.tiny 1 512 8 1
+
+Adjust the ``m1.tiny`` flavor, as the default tiny flavor is too small to run Ubuntu.
+
+::
+
+ # configure security groups
+ neutron security-group-rule-create --direction ingress --ethertype IPv4 --protocol icmp --remote-ip-prefix 0.0.0.0/0 default
+ neutron security-group-rule-create --direction ingress --ethertype IPv4 --protocol tcp --port-range-min 22 --port-range-max 22 --remote-ip-prefix 0.0.0.0/0 default
+
+Open up the ICMP and SSH access in the default security group.
+
+::
+
+ # import key pair
+ keystone tenant-create --name demo --description "Demo Tenant"
+ keystone user-create --name demo --tenant demo --pass demo --email demo@demo.demo
+
+ nova keypair-add --pub-key id_rsa.pub ubuntu-keypair
+
+Create a project called ‘demo’ and create a user called ‘demo’ in this project. Import the key pair.
+
+::
+
+ # configure external network
+ neutron net-create ext-net --router:external --provider:physical_network external --provider:network_type flat --shared
+ neutron subnet-create ext-net --name ext-subnet --allocation-pool start=10.5.8.5,end=10.5.8.254 --disable-dhcp --gateway 10.5.8.1 10.5.8.0/24
+
+This section configures an external network ‘ext-net’ with a subnet called ‘ext-subnet’.
+In this subnet, the IP pool starts at 10.5.8.5 and ends at 10.5.8.254. DHCP is disabled.
+The gateway is at 10.5.8.1, and the subnet is 10.5.8.0/24. These are the public IPs
+that will be requested and associated with the instances. Please change the network
+configuration according to your environment.
+
+::
+
+ # create vm network
+ neutron net-create demo-net
+ neutron subnet-create --name demo-subnet --gateway 10.20.5.1 demo-net 10.20.5.0/24
+
+This section creates a private network for the instances. Please change accordingly.
+
+::
+
+ neutron router-create demo-router
+
+ neutron router-interface-add demo-router demo-subnet
+
+ neutron router-gateway-set demo-router ext-net
+
+This section creates a router and connects this router to the two networks we just created.
+
+::
+
+ # create pool of floating ips
+ i=0
+ while [ $i -ne 10 ]; do
+ neutron floatingip-create ext-net
+ i=$((i + 1))
+ done
+
+Finally, the script will request 10 floating IPs.
+
+configure-juju-on-openstack
+---------------------------
+
+This script can be used to do a Juju bootstrap on OpenStack so that Juju can be
+used as the modelling tool to deploy services and VNFs on top of OpenStack using
+JOID.
+
+
diff --git a/docs/release/installation/requirements.rst b/docs/release/installation/requirements.rst
new file mode 100644
index 00000000..a16b7a58
--- /dev/null
+++ b/docs/release/installation/requirements.rst
@@ -0,0 +1,48 @@
+.. _setup-requirements:
+
+Setup Requirements
+==================
+
+Network Requirements
+--------------------
+
+Minimum 2 Networks:
+
+- One for the administrative network with gateway to access the Internet
+- One for the OpenStack public network to access OpenStack instances via
+ floating IPs
+
+JOID supports multiple isolated networks for data as well as storage, based on
+your network requirements for OpenStack.
+
+No DHCP server should be up and configured. Configure gateways only on eth0 and
+eth1 networks to access the network outside your lab.
+
+
+Jumphost Requirements
+---------------------
+
+The Jumphost requirements are outlined below:
+
+- OS: Ubuntu 16.04 LTS Server
+- Root access.
+- CPU cores: 16
+- Memory: 32GB
+- Hard Disk: 1× (min. 250 GB)
+- NIC: eth0 (admin, management), eth1 (external connectivity)
+
+Physical nodes requirements (bare metal deployment)
+---------------------------------------------------
+
+Besides the Jumphost, a minimum of 5 physical servers is required for a bare
+metal environment.
+
+- CPU cores: 16
+- Memory: 32GB
+- Hard Disk: 2× (min. 500 GB), SSD preferred
+- NIC: eth0 (Admin, Management), eth1 (external network)
+
+**NOTE**: The above configuration is the minimum. For better performance and
+usage of OpenStack, please consider higher specs for all nodes.
+
+Make sure all servers are connected to top of rack switch and configured
+accordingly.