Diffstat (limited to 'docs')
-rw-r--r--  docs/images/Dell_Overview.jpg                     bin  0 -> 62317 bytes
-rw-r--r--  docs/images/Dell_POD1.jpg                         bin  0 -> 185357 bytes
-rw-r--r--  docs/images/Dell_POD2.jpg                         bin  0 -> 195042 bytes
-rw-r--r--  docs/images/bridge1.png                           bin  0 -> 78415 bytes
-rw-r--r--  docs/images/bridge2.png                           bin  0 -> 77276 bytes
-rw-r--r--  docs/images/opnfv-example-lab-diagram.png         bin  0 -> 70804 bytes
-rw-r--r--  docs/images/opnfv-pharos-diagram-v01.jpg          bin  0 -> 63461 bytes
-rw-r--r--  docs/images/pharos-archi1.jpg                     bin  0 -> 104922 bytes
-rw-r--r--  docs/labs/Dell.rst                                356
-rw-r--r--  docs/labs/images/spirent_vptc-public-drawing.png  bin  0 -> 80627 bytes
-rw-r--r--  docs/labs/spirent.rst                             41
-rw-r--r--  docs/labs/templates/lab_details_template.rst      71
-rw-r--r--  docs/main.rst                                     103
-rw-r--r--  docs/pharos.rst                                   104
-rw-r--r--  docs/spec.rst                                     234
15 files changed, 806 insertions, 103 deletions
diff --git a/docs/images/Dell_Overview.jpg b/docs/images/Dell_Overview.jpg
new file mode 100644
index 00000000..5a288a81
--- /dev/null
+++ b/docs/images/Dell_Overview.jpg
Binary files differ
diff --git a/docs/images/Dell_POD1.jpg b/docs/images/Dell_POD1.jpg
new file mode 100644
index 00000000..42e4b49e
--- /dev/null
+++ b/docs/images/Dell_POD1.jpg
Binary files differ
diff --git a/docs/images/Dell_POD2.jpg b/docs/images/Dell_POD2.jpg
new file mode 100644
index 00000000..4d4b92cf
--- /dev/null
+++ b/docs/images/Dell_POD2.jpg
Binary files differ
diff --git a/docs/images/bridge1.png b/docs/images/bridge1.png
new file mode 100644
index 00000000..9a557aef
--- /dev/null
+++ b/docs/images/bridge1.png
Binary files differ
diff --git a/docs/images/bridge2.png b/docs/images/bridge2.png
new file mode 100644
index 00000000..7d8c3831
--- /dev/null
+++ b/docs/images/bridge2.png
Binary files differ
diff --git a/docs/images/opnfv-example-lab-diagram.png b/docs/images/opnfv-example-lab-diagram.png
new file mode 100644
index 00000000..5a3901cd
--- /dev/null
+++ b/docs/images/opnfv-example-lab-diagram.png
Binary files differ
diff --git a/docs/images/opnfv-pharos-diagram-v01.jpg b/docs/images/opnfv-pharos-diagram-v01.jpg
new file mode 100644
index 00000000..b6886554
--- /dev/null
+++ b/docs/images/opnfv-pharos-diagram-v01.jpg
Binary files differ
diff --git a/docs/images/pharos-archi1.jpg b/docs/images/pharos-archi1.jpg
new file mode 100644
index 00000000..3c4478a8
--- /dev/null
+++ b/docs/images/pharos-archi1.jpg
Binary files differ
diff --git a/docs/labs/Dell.rst b/docs/labs/Dell.rst
new file mode 100644
index 00000000..44d453a9
--- /dev/null
+++ b/docs/labs/Dell.rst
@@ -0,0 +1,356 @@
+Dell OPNFV Testlab
+==================================================
+
+Overview
+------------------
+
+Dell is hosting an OPNFV testlab at its Santa Clara facility. The testlab hosts bare-metal servers for use by the OPNFV community as part of the OPNFV Pharos project.
+
+The Dell Testlab consists of 2 PODs
+ * POD1 for Fuel
+ * POD2 for Foreman
+
+.. image:: images/Dell_Overview.jpg
+ :alt: Dell OPNFV Testlab Overview
+
+Each of the 2 PODs consists of 6 servers:
+ * 1 Jump Server
+ * 3 Servers for Control Nodes
+ * 2 Servers for Compute Nodes
+
+
+Hardware details
+-----------------
+
+
+**POD1-Fuel**
+
+The specifications for the servers within POD1 can be found below:
+
++---------------------+---------------------+----------------+--------------+---------------------+------------+
+| Hostname | Model | Memory | Storage | Processor | Socket |
++---------------------+---------------------+----------------+--------------+---------------------+------------+
+|Fuel Jump Server | Dell PowerEdge M620| 64 GB |600GB HDD |Intel Xeon E5-2640 | 2 |
++---------------------+---------------------+----------------+--------------+---------------------+------------+
+|Node2 | Dell PowerEdge M620| 64 GB |600GB HDD |Intel Xeon E5-2640 | 2 |
++---------------------+---------------------+----------------+--------------+---------------------+------------+
+|Node3 | Dell PowerEdge M620| 64 GB |600GB HDD |Intel Xeon E5-2640 | 2 |
++---------------------+---------------------+----------------+--------------+---------------------+------------+
+|Node4 | Dell PowerEdge M620| 64 GB |600GB HDD |Intel Xeon E5-2640 | 2 |
++---------------------+---------------------+----------------+--------------+---------------------+------------+
+|Node5 | Dell PowerEdge M620| 64 GB |600GB HDD |Intel Xeon E5-2640 | 2 |
++---------------------+---------------------+----------------+--------------+---------------------+------------+
+|Node6 | Dell PowerEdge M620| 64 GB |600GB HDD |Intel Xeon E5-2640 | 2 |
++---------------------+---------------------+----------------+--------------+---------------------+------------+
+
+The specifications for the Network Interfaces of servers within POD1 can be seen below:
+
++---------------------+----------------------------------------------+------+-------------------+-------+----------------------------------+
+| Hostname | NIC Model | Ports| MAC | BW | Roles |
++---------------------+----------------------------------------------+------+-------------------+-------+----------------------------------+
+|Fuel Jump | 1, Broadcom NetXtreme II BCM57810 | em1 | A4:1F:72:11:B4:81 | 10G | Unused |
+| | +------+-------------------+-------+----------------------------------+
+| | | em2 | A4:1F:72:11:B4:84 | 10G | Unused |
+| +----------------------------------------------+------+-------------------+-------+----------------------------------+
+| | 2, Intel 82599 | p3p1 | A4:1F:72:11:B4:85 | 10G | Public |
+| | +------+-------------------+-------+----------------------------------+
+| | | p3p2 | A4:1F:72:11:B4:87 | 10G | Fuel Admin/mgmt/pvt/ storage |
+| +----------------------------------------------+------+-------------------+-------+----------------------------------+
+| | 3, Intel 82599 | p1p1 | A4:1F:72:11:B4:89 | 10G | Unused |
+| | +------+-------------------+-------+----------------------------------+
+| | | p1p2 | A4:1F:72:11:B4:8B | 10G | Unused |
++---------------------+----------------------------------------------+------+-------------------+-------+----------------------------------+
+|Node2 | 1, Broadcom NetXtreme II BCM57810 | em1 | A4:1F:72:11:B4:8E | 10G | Unused |
+| | +------+-------------------+-------+----------------------------------+
+| | | em2 | A4:1F:72:11:B4:91 | 10G | Unused |
+| +----------------------------------------------+------+-------------------+-------+----------------------------------+
+| | 2, Intel 82599 | p3p1 | A4:1F:72:11:B4:92 | 10G | Public |
+| | +------+-------------------+-------+----------------------------------+
+| | | p3p2 | A4:1F:72:11:B4:94 | 10G | Fuel Admin/mgmt/pvt/ storage |
+| +----------------------------------------------+------+-------------------+-------+----------------------------------+
+| | 3, Intel 82599 | p1p1 | A4:1F:72:11:B4:96 | 10G | Unused |
+| | +------+-------------------+-------+----------------------------------+
+| | | p1p2 | A4:1F:72:11:B4:98 | 10G | Unused |
++---------------------+----------------------------------------------+------+-------------------+-------+----------------------------------+
+|Node3 | 1, Broadcom NetXtreme II BCM57810 | em1 | A4:1F:72:11:B4:9B | 10G | Unused |
+| | +------+-------------------+-------+----------------------------------+
+| | | em2 | A4:1F:72:11:B4:9E | 10G | Unused |
+| +----------------------------------------------+------+-------------------+-------+----------------------------------+
+| | 2, Intel 82599 | p3p1 | A4:1F:72:11:B4:9F | 10G | Public |
+| | +------+-------------------+-------+----------------------------------+
+| | | p3p2 | A4:1F:72:11:B4:A1 | 10G | Fuel Admin/mgmt/pvt/ storage |
+| +----------------------------------------------+------+-------------------+-------+----------------------------------+
+| | 3, Intel 82599 | p1p1 | A4:1F:72:11:B4:A3 | 10G | Unused |
+| | +------+-------------------+-------+----------------------------------+
+| | | p1p2 | A4:1F:72:11:B4:A5 | 10G | Unused |
++---------------------+----------------------------------------------+------+-------------------+-------+----------------------------------+
+|Node4 | 1, Broadcom NetXtreme II BCM57810 | em1 | A4:1F:72:11:B4:A8 | 10G | Unused |
+| | +------+-------------------+-------+----------------------------------+
+| | | em2 | A4:1F:72:11:B4:AB | 10G | Unused |
+| +----------------------------------------------+------+-------------------+-------+----------------------------------+
+| | 2, Intel 82599 | p3p1 | A4:1F:72:11:B4:AC | 10G | Public |
+| | +------+-------------------+-------+----------------------------------+
+| | | p3p2 | A4:1F:72:11:B4:AE | 10G | Fuel Admin/mgmt/pvt/ storage |
+| +----------------------------------------------+------+-------------------+-------+----------------------------------+
+| | 3, Intel 82599 | p1p1 | A4:1F:72:11:B4:B0 | 10G | Unused |
+| | +------+-------------------+-------+----------------------------------+
+| | | p1p2 | A4:1F:72:11:B4:B1 | 10G | Unused |
++---------------------+----------------------------------------------+------+-------------------+-------+----------------------------------+
+|Node5 | 1, Broadcom NetXtreme II BCM57810 | em1 | A4:1F:72:11:B4:B5 | 10G | Unused |
+| | +------+-------------------+-------+----------------------------------+
+| | | em2 | A4:1F:72:11:B4:B8 | 10G | Unused |
+| +----------------------------------------------+------+-------------------+-------+----------------------------------+
+| | 2, Intel 82599 | p3p1 | A4:1F:72:11:B4:B9 | 10G | Public |
+| | +------+-------------------+-------+----------------------------------+
+| | | p3p2 | A4:1F:72:11:B4:BB | 10G | Fuel Admin/mgmt/pvt/ storage |
+| +----------------------------------------------+------+-------------------+-------+----------------------------------+
+| | 3, Broadcom NetXtreme II BCM57810 | p1p1 | A4:1F:72:11:B4:BD | 10G | Unused |
+| | +------+-------------------+-------+----------------------------------+
+| | | p1p2 | A4:1F:72:11:B4:C0 | 10G | Unused |
++---------------------+----------------------------------------------+------+-------------------+-------+----------------------------------+
+|Node6 | 1, Broadcom NetXtreme II BCM57810 | em1 | A4:1F:72:11:B4:C2 | 10G | Unused |
+| | +------+-------------------+-------+----------------------------------+
+| | | em2 | A4:1F:72:11:B4:C5 | 10G | Unused |
+| +----------------------------------------------+------+-------------------+-------+----------------------------------+
+| | 2, Intel 82599 | p3p1 | A4:1F:72:11:B4:C6 | 10G | Public |
+| | +------+-------------------+-------+----------------------------------+
+| | | p3p2 | A4:1F:72:11:B4:C8 | 10G | Fuel Admin/mgmt/pvt/ storage |
+| +----------------------------------------------+------+-------------------+-------+----------------------------------+
+| | 3, Broadcom NetXtreme II BCM57810 | p1p1 | A4:1F:72:11:B4:CA | 10G | Unused |
+| | +------+-------------------+-------+----------------------------------+
+| | | p1p2 | A4:1F:72:11:B4:CD | 10G | Unused |
++---------------------+----------------------------------------------+------+-------------------+-------+----------------------------------+
+
+
+
+**POD2-Foreman**
+
+The specifications for the servers within POD2 can be found below:
+
++---------------------+---------------------+----------------+--------------+---------------------+------------+
+| Hostname | Model | Memory | Storage | Processor | Socket |
++---------------------+---------------------+----------------+--------------+---------------------+------------+
+|Foreman Jump Server | Dell PowerEdge M620| 64 GB |600GB HDD |Intel Xeon E5-2640 | 2 |
++---------------------+---------------------+----------------+--------------+---------------------+------------+
+|Node7 | Dell PowerEdge M620| 64 GB |600GB HDD |Intel Xeon E5-2640 | 2 |
++---------------------+---------------------+----------------+--------------+---------------------+------------+
+|Node8 | Dell PowerEdge M620| 64 GB |600GB HDD |Intel Xeon E5-2640 | 2 |
++---------------------+---------------------+----------------+--------------+---------------------+------------+
+|Node9 | Dell PowerEdge M620| 64 GB |600GB HDD |Intel Xeon E5-2640 | 2 |
++---------------------+---------------------+----------------+--------------+---------------------+------------+
+|Node11 | Dell PowerEdge M620| 64 GB |600GB HDD |Intel Xeon E5-2640 | 2 |
++---------------------+---------------------+----------------+--------------+---------------------+------------+
+|Node12 | Dell PowerEdge M620| 64 GB |600GB HDD |Intel Xeon E5-2640 | 2 |
++---------------------+---------------------+----------------+--------------+---------------------+------------+
+
+
+The specifications for the Network Interfaces of the servers within POD2 can be seen below:
+
++---------------------+----------------------------------------------+------+-----------------+-------+----------------------------------+
+| Hostname | NIC Model | Ports| MAC | BW | Roles |
++---------------------+----------------------------------------------+------+-----------------+-------+----------------------------------+
+|Foreman Jump | 1, Broadcom NetXtreme II BCM57810 | em1 |A4:1F:72:11:B5:1D| 10G | Foreman Admin |
+| | +------+-----------------+-------+----------------------------------+
+| | | em2 |A4:1F:72:11:B5:20| 10G | Foreman Private/ Storage |
+| +----------------------------------------------+------+-----------------+-------+----------------------------------+
+| | 2, Intel 82599 | p3p1 |A4:1F:72:11:B5:21| 10G | Public |
+| | +------+-----------------+-------+----------------------------------+
+| | | p3p2 |A4:1F:72:11:B5:23| 10G | Unused |
+| +----------------------------------------------+------+-----------------+-------+----------------------------------+
+| | 3, TBD | p1p1 |A4:1F:72:11:B4:89| 10G | Unused |
+| | +------+-----------------+-------+----------------------------------+
+| | | p1p2 |A4:1F:72:11:B4:8B| 10G | Unused |
++---------------------+----------------------------------------------+------+-----------------+-------+----------------------------------+
+|Node7 | 1, Broadcom NetXtreme II BCM57810 | em1 |A4:1F:72:11:B4:CF| 10G | Foreman Admin |
+| | +------+-----------------+-------+----------------------------------+
+| | | em2 |A4:1F:72:11:B4:D2| 10G | Foreman Private/ Storage |
+| +----------------------------------------------+------+-----------------+-------+----------------------------------+
+| | 2, Intel 82599 | p3p1 |A4:1F:72:11:B4:D3| 10G | Public |
+| | +------+-----------------+-------+----------------------------------+
+| | | p3p2 |A4:1F:72:11:B4:D5| 10G | Unused |
+| +----------------------------------------------+------+-----------------+-------+----------------------------------+
+| | 3, Broadcom NetXtreme II BCM57810 | p1p1 |A4:1F:72:11:B4:D7| 10G | Unused |
+| | +------+-----------------+-------+----------------------------------+
+| | | p1p2 |A4:1F:72:11:B4:DA| 10G | Unused |
++---------------------+----------------------------------------------+------+-----------------+-------+----------------------------------+
+|Node8 | 1, Broadcom NetXtreme II BCM57810 | em1 |A4:1F:72:11:B4:DC| 10G | Foreman Admin |
+| | +------+-----------------+-------+----------------------------------+
+| | | em2 |A4:1F:72:11:B4:DF| 10G | Foreman Private/ Storage |
+| +----------------------------------------------+------+-----------------+-------+----------------------------------+
+| | 2, Intel 82599 | p3p1 |A4:1F:72:11:B4:E0| 10G | Public |
+| | +------+-----------------+-------+----------------------------------+
+| | | p3p2 |A4:1F:72:11:B4:E2| 10G | Unused |
+| +----------------------------------------------+------+-----------------+-------+----------------------------------+
+| | 3, Broadcom NetXtreme II BCM57810 | p1p1 |A4:1F:72:11:B4:E4| 10G | Unused |
+| | +------+-----------------+-------+----------------------------------+
+| | | p1p2 |A4:1F:72:11:B4:E7| 10G | Unused |
++---------------------+----------------------------------------------+------+-----------------+-------+----------------------------------+
+|Node9 | 1, Broadcom NetXtreme II BCM57810 | em1 |A4:1F:72:11:B4:E9| 10G | Foreman Admin |
+| | +------+-----------------+-------+----------------------------------+
+| | | em2 |A4:1F:72:11:B4:EC| 10G | Foreman Private/ Storage |
+| +----------------------------------------------+------+-----------------+-------+----------------------------------+
+| | 2, Intel 82599 | p3p1 |A4:1F:72:11:B4:ED| 10G | Public |
+| | +------+-----------------+-------+----------------------------------+
+| | | p3p2 |A4:1F:72:11:B4:EF| 10G | Unused |
+| +----------------------------------------------+------+-----------------+-------+----------------------------------+
+| | 3, Intel 82599 | p1p1 |A4:1F:72:11:B4:F1| 10G | Unused |
+| | +------+-----------------+-------+----------------------------------+
+| | | p1p2 |A4:1F:72:11:B4:F3| 10G | Unused |
++---------------------+----------------------------------------------+------+-----------------+-------+----------------------------------+
+|Node11 | 1, Broadcom NetXtreme II BCM57810 | em1 |A4:1F:72:11:B5:03| 10G | Foreman Admin |
+| | +------+-----------------+-------+----------------------------------+
+| | | em2 |A4:1F:72:11:B5:06| 10G | Foreman Private/ Storage |
+| +----------------------------------------------+------+-----------------+-------+----------------------------------+
+| | 2, Intel 82599 | p3p1 |A4:1F:72:11:B5:07| 10G | Public |
+| | +------+-----------------+-------+----------------------------------+
+| | | p3p2 |A4:1F:72:11:B5:09| 10G | Unused |
+| +----------------------------------------------+------+-----------------+-------+----------------------------------+
+| | 3, Intel 82599 | p1p1 |A4:1F:72:11:B5:0B| 10G | Unused |
+| | +------+-----------------+-------+----------------------------------+
+| | | p1p2 |A4:1F:72:11:B5:0D| 10G | Unused |
++---------------------+----------------------------------------------+------+-----------------+-------+----------------------------------+
+|Node12 | 1, Broadcom NetXtreme II BCM57810 | em1 |A4:1F:72:11:B5:10| 10G | Foreman Admin |
+| | +------+-----------------+-------+----------------------------------+
+| | | em2 |A4:1F:72:11:B5:13| 10G | Foreman Private/ Storage |
+| +----------------------------------------------+------+-----------------+-------+----------------------------------+
+| | 2, Intel 82599 | p3p1 |A4:1F:72:11:B5:14| 10G | Public |
+| | +------+-----------------+-------+----------------------------------+
+| | | p3p2 |A4:1F:72:11:B5:16| 10G | Unused |
+| +----------------------------------------------+------+-----------------+-------+----------------------------------+
+| | 3, TBD | p1p1 |A4:1F:72:11:B4:89| 10G | Unused |
+| | +------+-----------------+-------+----------------------------------+
+| | | p1p2 |A4:1F:72:11:B4:8B| 10G | Unused |
++---------------------+----------------------------------------------+------+-----------------+-------+----------------------------------+
+
+
+
+
+
+
+Software
+---------
+
+The Jump servers in the Testlab are pre-provisioned with the following software:
+
+ * Fuel-Jump Server:
+ 1. OS: Ubuntu 14.04
+ 2. Pre-provisioned software: KVM, VNC server
+
+
+ * Foreman-Jump Server:
+ 1. OS: CentOS 7
+
+
+
+
+
+Networks
+----------
+
+
+**POD1-Fuel Diagram**
+
+.. image:: images/Dell_POD1.jpg
+ :alt: Dell POD1 Networking
+
+
+
+
+**POD2-Foreman Diagram**
+
+.. image:: images/Dell_POD2.jpg
+ :alt: Dell POD2 Networking
+
+
+
+
+**Subnet allocations**
+
++-------------------+----------------+-------------------+---------------+----------+
+| Network name | Address | Mask | Gateway | VLAN id |
++-------------------+----------------+-------------------+---------------+----------+
+| Foreman Admin | 10.4.14.0 | 255.255.255.0 | 10.4.14.100 | Untagged |
++-------------------+----------------+-------------------+---------------+----------+
+| Foreman Private | 10.4.5.0 | 255.255.255.0 | 10.4.5.1 | Untagged |
++-------------------+----------------+-------------------+---------------+----------+
+| Public | 172.18.0.0 | 255.255.255.0 | 172.18.0.1 | Untagged |
++-------------------+----------------+-------------------+---------------+----------+
+|Fuel Admin |10.20.0.0 | 255.255.0.0 | 10.20.0.1 | Untagged |
++-------------------+----------------+-------------------+---------------+----------+
+|Fuel Management|192.168.0.0 | 255.255.255.0 |192.168.0.1 | 101 |
++-------------------+----------------+-------------------+---------------+----------+
+|Fuel Storage |192.168.1.0 | 255.255.255.0 |192.168.1.1 | 102 |
++-------------------+----------------+-------------------+---------------+----------+
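The subnet allocations above can be sanity-checked with Python's standard ``ipaddress`` module. The sketch below uses the values straight from the table and only verifies that each gateway actually lies inside its own subnet:

```python
import ipaddress

# Subnets from the table above: name -> (network in CIDR form, gateway)
subnets = {
    "Foreman Admin":   ("10.4.14.0/24",   "10.4.14.100"),
    "Foreman Private": ("10.4.5.0/24",    "10.4.5.1"),
    "Public":          ("172.18.0.0/24",  "172.18.0.1"),
    "Fuel Admin":      ("10.20.0.0/16",   "10.20.0.1"),
    "Fuel Management": ("192.168.0.0/24", "192.168.0.1"),
    "Fuel Storage":    ("192.168.1.0/24", "192.168.1.1"),
}

def check(name):
    """Return True if the named subnet's gateway falls inside its network."""
    net, gw = subnets[name]
    return ipaddress.ip_address(gw) in ipaddress.ip_network(net)

for name in subnets:
    assert check(name), f"{name}: gateway outside subnet"
```

Note that the 255.255.255.0 masks map to /24 prefixes and the Fuel Admin mask of 255.255.0.0 maps to /16.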
+
+
+**Lights out Network**
+
+**POD1**
+
++----------------+-------------------------------+------------------+---------------------+---------------------+
+| Hostname | Lights-out address | MAC |Username | Password |
++----------------+-------------------------------+------------------+---------------------+---------------------+
+|Fuel-Jump | 172.18.1.101 |A4:1F:72:11:B4:80 | root | calvin |
++----------------+-------------------------------+------------------+---------------------+---------------------+
+|Node2 | 172.18.1.102 |A4:1F:72:11:B4:8D | root | calvin |
++----------------+-------------------------------+------------------+---------------------+---------------------+
+|Node3 | 172.18.1.103 |A4:1F:72:11:B4:9A | root | calvin |
++----------------+-------------------------------+------------------+---------------------+---------------------+
+|Node4 | 172.18.1.104 |A4:1F:72:11:B4:A7 | root | calvin |
++----------------+-------------------------------+------------------+---------------------+---------------------+
+|Node5 | 172.18.1.105 |A4:1F:72:11:B4:B4 | root | calvin |
++----------------+-------------------------------+------------------+---------------------+---------------------+
+|Node6 | 172.18.1.106 |A4:1F:72:11:B4:C1 | root | calvin |
++----------------+-------------------------------+------------------+---------------------+---------------------+
+
+**POD2**
+
++----------------+-------------------------------+------------------+---------------------+---------------------+
+| Hostname | Lights-out address | MAC |Username | Password |
++----------------+-------------------------------+------------------+---------------------+---------------------+
+|Foreman-Jump | 172.18.1.113 |A4:1F:72:11:B5:1C | root | calvin |
++----------------+-------------------------------+------------------+---------------------+---------------------+
+|Node7 | 172.18.1.107 |A4:1F:72:11:B4:CE | root | calvin |
++----------------+-------------------------------+------------------+---------------------+---------------------+
+|Node8 | 172.18.1.108 |A4:1F:72:11:B4:DB | root | calvin |
++----------------+-------------------------------+------------------+---------------------+---------------------+
+|Node9 | 172.18.1.109 |A4:1F:72:11:B4:E8 | root | calvin |
++----------------+-------------------------------+------------------+---------------------+---------------------+
+|Node11 | 172.18.1.111 |A4:1F:72:11:B5:02 | root | calvin |
++----------------+-------------------------------+------------------+---------------------+---------------------+
+|Node12 | 172.18.1.112 |A4:1F:72:11:B5:0F | root | calvin |
++----------------+-------------------------------+------------------+---------------------+---------------------+
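The lights-out addresses above are typically driven with a BMC client such as ``ipmitool``. The sketch below only assembles the command lines for the POD2 entries from the table (it contacts no BMC, and assumes ``ipmitool`` with its standard ``lanplus`` interface would be used):

```python
# POD2 lights-out addresses from the table above.
nodes = {
    "Foreman-Jump": "172.18.1.113",
    "Node7": "172.18.1.107",
    "Node8": "172.18.1.108",
    "Node9": "172.18.1.109",
    "Node11": "172.18.1.111",
    "Node12": "172.18.1.112",
}

def power_status_cmd(host, user="root", password="calvin"):
    """Build an ipmitool power-status invocation over the lanplus interface."""
    return ["ipmitool", "-I", "lanplus", "-H", host,
            "-U", user, "-P", password, "chassis", "power", "status"]

# One ready-to-run argv per node (sketch only; nothing is executed here).
commands = {name: power_status_cmd(ip) for name, ip in nodes.items()}
```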
+
+
+
+
+
+
+
+
+
+Remote access infrastructure
+-----------------------------
+
+The Dell OPNFV testlab is free to use for the OPNFV community.
+
+A VPN is used to provide access to the Dell Testlab. Details can be found in the *Dell OPNFV-lab Access* document (Attach link).
+
+To access the Testlab, please contact Waqas_Riaz@DELL.com with the following details:
+ * Name
+ * Organization
+ * Purpose of using the lab
+
+Processing the request can take 2-3 business days.
+
+**Accessing the Jump Servers**
+
+The credentials for accessing the Jump servers are:
+
+*Fuel-Jump*
+ * User: opnfv
+ * Password: d3ll1234
+
+*Foreman-Jump*
+ * User: root
+ * Password: d3ll1234
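Once on the VPN, the jump servers are reached over SSH with the credentials above. The helper below merely assembles the command; the host addresses are placeholders, since the lab-side IPs are distributed with the access details rather than published here:

```python
# Jump-server login names from the credentials above; the host values
# are placeholders, not real lab addresses.
JUMP_SERVERS = {
    "fuel-jump":    {"user": "opnfv", "host": "<fuel-jump-ip>"},
    "foreman-jump": {"user": "root",  "host": "<foreman-jump-ip>"},
}

def ssh_command(name):
    """Return the ssh argv for the named jump server."""
    entry = JUMP_SERVERS[name]
    return ["ssh", f"{entry['user']}@{entry['host']}"]
```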
diff --git a/docs/labs/images/spirent_vptc-public-drawing.png b/docs/labs/images/spirent_vptc-public-drawing.png
new file mode 100644
index 00000000..174ac454
--- /dev/null
+++ b/docs/labs/images/spirent_vptc-public-drawing.png
Binary files differ
diff --git a/docs/labs/spirent.rst b/docs/labs/spirent.rst
new file mode 100644
index 00000000..4ec987bd
--- /dev/null
+++ b/docs/labs/spirent.rst
@@ -0,0 +1,41 @@
+Spirent Virtual Cloud Test Lab
+===============================
+
+A community-provided bare-metal resource hosted at Nephoscale, leveraged for public SDN/NFV testing and for OpenDaylight, OpenStack, and OPNFV projects.
+
+**Spirent VCT Lab** is currently running four different **OpenStack** environments, each deployed on a different hardware configuration:
+
+ * **OpenStack Juno - 2014.2.2 release** (CentOS 7, 20 Cores, 64 GB RAM, 1 TB SATA, 40 Gbps)
+ * **OpenStack Juno - 2014.2.2 release** (Ubuntu 14.04, 8 cores, 32 GB RAM, 500 GB SATA, 10 Gbps)
+ * **OpenStack Icehouse - 2014.1.3 release**
+ * **OpenStack Icehouse - 2014.1.3 release**
+
+----
+
+There are a number of different networks referenced in the VPTC Design Blueprint.
+
+ * Public Internet – 1 Gbps
+ * Private Management – 1 Gbps
+ * Mission Clients – 10 Gbps
+ * Mission Servers – 10 Gbps
+
+These can be added or removed as specified by the test methodology.
+There are 8 x 10 GbE SFP+ ports available on a typical C100MP used for Avalanche Layer 4-7 testing.
+The N4U offers 2 x 40 GbE QSFP+ ports with the MX-2 module for Spirent TestCenter Layer 2-3 testing.
+There are 2 x Cumulus switches, each with 32 x 40 GbE QSFP+ ports, for a total capacity of 256 x 10 GbE ports; QSFP+ to SFP+ breakout cables convert a single 40 GbE port into 4 x 10 GbE ports.
+Together these offer a flexible solution that allows up to 8 simultaneous tests with physical traffic generators. Assuming a 10-to-1 oversubscription ratio, the current environment could handle 80 community users.
+For example:
+
+ * An 80 Gbps test would need 4 port pairs of 10 GbE each and require 8 mission networks.
+ * Multiple clients sharing common test hardware might have dedicated management networks for their DUTs yet communicate with the APIs and Management services via a shared DMZ network protected by a firewall.
+ * SSL and IPSec VPN will typically be leveraged to connect networks across the untrusted Internet or other third party networks.
+ * Stand-alone DUT servers using STCv and AVv traffic generators could easily scale to hundreds of servers as needed.
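The capacity figures quoted above follow from simple arithmetic, sketched here with the numbers from the text:

```python
# Port capacity of the Spirent VCT Lab switch fabric, per the text above.
switches = 2
qsfp_ports_per_switch = 32   # 40 GbE QSFP+ ports per Cumulus switch
sfp_per_qsfp = 4             # each QSFP+ breaks out into 4 x 10 GbE SFP+

total_10g_ports = switches * qsfp_ports_per_switch * sfp_per_qsfp  # 256

simultaneous_tests = 8
oversubscription = 10        # assumed 10-to-1 user oversubscription
community_users = simultaneous_tests * oversubscription            # 80
```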
+
+.. image:: images/spirent_vptc-public-drawing.png
+
+**Documentation tracking**
+
+Revision: _sha1_
+
+Build date: _date_
+
diff --git a/docs/labs/templates/lab_details_template.rst b/docs/labs/templates/lab_details_template.rst
new file mode 100644
index 00000000..bcc909ad
--- /dev/null
+++ b/docs/labs/templates/lab_details_template.rst
@@ -0,0 +1,71 @@
+Pharos Lab Details Template
+============================
+
+Hardware details
+-----------------
+
+**General details**
+
++----------+-----------+-----------+------------------+------------------+-----------+-------------+----------------------------------------------+---------+
+| Hostname | Node type | HW model  | Storage          | CPU model        | # Sockets | Memory [GB] | # NIC [MAC / IP / VLAN id]                   | BW      |
++----------+-----------+-----------+------------------+------------------+-----------+-------------+----------------------------------------------+---------+
+| galileo  | network   | Dell R730 | 3 x 1TB (raid 5) | Intel E5-2690 v2 | 2         | 64          | 1 # 00:ae:ff:cc:dd:12 / 192.168.22.10 / 1003 | 10 Gbps |
+|          |           |           |                  |                  |           |             | 2 # 00:11:22:33:44:55 / 10.7.8.10 / 1        | 1 Gbps  |
++----------+-----------+-----------+------------------+------------------+-----------+-------------+----------------------------------------------+---------+
+
+**Lights-out**
+
++----------------+-------------------------------+------------------+---------------------+
+| Hostname | Lights-out address | Username | Password |
++----------------+-------------------------------+------------------+---------------------+
+| | | | |
++----------------+-------------------------------+------------------+---------------------+
+
+
+Software
+---------
+
+**OS**
+
+
+**Pre-provisioned software**
+
+
+Network
+--------
+
+**Subnet allocations**
+
++--------------+-------------------+-------------------+---------------+---------+
+| Net name | Address | Mask | Gateway | VLAN id |
++--------------+-------------------+-------------------+---------------+---------+
+| | | | | |
++--------------+-------------------+-------------------+---------------+---------+
+
+
+**Firewall rules**
+
+
+**Diagrams**
+
+.. image:: images/<lab-name>_<diagram-name>.png|.jpg
+ :alt: Name of the diagram
+
+Remote access infrastructure
+-----------------------------
+
+**Explanations**
+
+**Diagrams**
+
+.. image:: images/<lab-name>_<diagram-name>.png|.jpg
+ :alt: Name of the diagram
+
+
+Documentation tracking
+-----------------------
+
+Revision: _sha1_
+
+Build date: _date_
+
diff --git a/docs/main.rst b/docs/main.rst
deleted file mode 100644
index 37615c96..00000000
--- a/docs/main.rst
+++ /dev/null
@@ -1,103 +0,0 @@
-Project: Testbed infrastructure (Pharos)
-#########################################
-
-
-The Pharos project deals with the creation of a distributed and federated NFV test capability that will be hosted by a number of companies in the OPNFV community. The goals consist in managing the list of community platforms, describing the different community platforms, offering timeslots and tools to perform tests, sharing the results and the best practices, supporting any test campaigns of the projects of the community (e.g. [[opnfv_functional_testing | functional testing project]], [[platform_performance_benchmarking|Qtip]], [[get_started|BGS]], [[oscar/project_proposal|oscar]],...). Pharos shall provide the infrastructure and the tooling needed by the different projects.
-
-
-.. image:: images/opnfv-test.jpg
-
-Community Test Labs
---------------------
-
-A summary of all Community Hosted OPNFV test labs (existing and planned) is also kept on the `wiki home page <https://wiki.opnfv.org/start#opnfv_community_labs>`. This section here contains additional details and project relationship mappings. //NOTE: Please follow `these instructions <https://wiki.opnfv.org/lab_update_guide>` when updating this list.//
-
-+---------------+----------------------+------------------------------------------------------+-------------------------------------------------+-------------------------------------+----------------------------+
-| Map Position | Hosting Organization | Home page | Contact person | Comments | Location |
-+---------------+----------------------+------------------------------------------------------+-------------------------------------------------+-------------------------------------+----------------------------+
-| 1 | Spirent | https://wiki.opnfv.org/pharos/spirentvctlab | Iben Rodriguez | OpenDaylight, NFV, SDN, & | Nephoscale |
-| | | | iben.rodriguez@spirent.com | OpenStack testing in progress | San Jose, CA |
-+---------------+----------------------+------------------------------------------------------+-------------------------------------------------+-------------------------------------+----------------------------+
-| 2 | China Mobile | | Fu Qiao | PODs dedicated for BGS and | Beijing, China |
-| | | | fuqiao@chinamobile.com | Functest | |
-+---------------+----------------------+------------------------------------------------------+-------------------------------------------------+-------------------------------------+----------------------------+
-| 3 | Ericsson | https://wiki.opnfv.org/get_started/ericsson_hosting | Jonas Bjurel | | Montreal, Canada |
-| | | | jonas.bjurel@ericsson.com | | |
-+---------------+----------------------+------------------------------------------------------+-------------------------------------------------+-------------------------------------+----------------------------+
-| 4 | Huawei | | Radoaca Vasile | TBD | Xi an, China |
-| | | | radoaca.vasile@huawei.com | | |
-+---------------+----------------------+------------------------------------------------------+-------------------------------------------------+-------------------------------------+----------------------------+
-| 5 | Intel | https://wiki.opnfv.org/get_started/intel_hosting | Trevor Cooper | Operational with PODs dedicated to | Intel Labs; Hillsboro, |
-| | | | trevor.cooper@intel.com | BGS and vSwitch projects | Oregon |
-+---------------+----------------------+------------------------------------------------------+-------------------------------------------------+-------------------------------------+----------------------------+
-| 6 | Orange | | Morgan Richomme | Available Q1 2015 | Orange Labs; |
-| | | | morgan.richomme@orange.com | | Lannion, France |
-+---------------+----------------------+------------------------------------------------------+-------------------------------------------------+-------------------------------------+----------------------------+
-| 7 | Cable Labs | | | TBD | |
-| | | | | | |
-+---------------+----------------------+------------------------------------------------------+-------------------------------------------------+-------------------------------------+----------------------------+
-| 8 | Dell | | Wenjing Chu | TBD | Santa Clara, CA |
-| | | | Wenjing_Chu@DELL.com | | |
-+---------------+----------------------+------------------------------------------------------+-------------------------------------------------+-------------------------------------+----------------------------+
-| 9 | Huawei | | Sean Chen | TBD | Santa Clara, CA |
-| | | | | | |
-+---------------+----------------------+------------------------------------------------------+-------------------------------------------------+-------------------------------------+----------------------------+
-
-
-
-Pharos management
-------------------
-
-- `Project proposal <https://wiki.opnfv.org/opnfv_testbed_infrastructure>`
-- A "Pharos compliant" environment is the `standard configuration of a deployed system <https://wiki.opnfv.org/pharos/pharos_specification>` for test purposes
-- `Testing <https://wiki.opnfv.org/pharos_testing>` on "Pharos compliant" environment
-- `Project draft release <https://wiki.opnfv.org/pharos_draft_release>`
-- `Task follow-up <https://wiki.opnfv.org/pharos_tasks>`
-- `FAQ <https://wiki.opnfv.org/pharos_faq>`
-- `meeting & minutes page] <https://wiki.opnfv.org/wiki/test_and_performance_meetings>` <- this page needs to be moved and renamed
-
-Pharos project - Key facts
----------------------------
-
-- Project Creation Date: January 8, 2015
-- Project Category: Integration & Testing
-- Lifecycle State: Incubation
-- Primary Contact: Trevor <trevor.cooper@intel.com>
-- Project Lead: Trevor <trevor.cooper@intel.com>
-- Jira Project Name: Testbed infrastructure Project
-- Jira Project Prefix: PHAROS
-- Committers:
-
- - Trevor Cooper<trevor.cooper@intel.com>
- - Fu Qiao <fuqiao@chinamobile.com>
- - Sheng-ann Yu <sheng-ann.yu@ericsson.com>
- - Wenjing Chu <Wenjing_Chu@DELL.com>
- - Chris Donley <C.Donley@cablelabs.com>
- - Morgan Richomme <morgan.richomme@orange.com>
- - Erica Johnson <erica.johnson@iol.unh.edu>
- - Hui Deng <denghui@chinamobile.com>
- - Prabu Kuppuswamy <prabu.kuppuswamy@spirent.com>
- - Sean Chen <s.chen@huawei.com>
- - Saikrishna M Kotha <saikrishna.kotha@xilinx.com>
- - Eugene Yu <yuyijun@huawei.com>
-
-- Contributors:
-
- - Iben Rodriguez <iben.rodriguez@spirent.com>
-
-
-- IRC : freenode.net #opnfv-pharos `http://webchat.freenode.net/?channels=opnfv-pharos <http://webchat.freenode.net/?channels=opnfv-pharos>`
-- Mailing List : no dedicated mailing list - use opnfv-tech-discuss and tag your emails with [Pharos] in the subject for easier filtering
-- Meetings :
-
- - `meetings <https://wiki.opnfv.org/wiki/test_and_performance_meetings>`
-
-- Repository: pharos
-
-
-Project calendar
------------------
-
-Not defined..
-
-
diff --git a/docs/pharos.rst b/docs/pharos.rst
new file mode 100644
index 00000000..59bea397
--- /dev/null
+++ b/docs/pharos.rst
@@ -0,0 +1,104 @@
+Project: Testbed infrastructure (Pharos)
+#########################################
+
+
+The Pharos project creates a distributed and federated NFV test capability hosted by a number of companies in the OPNFV community. Its goals are to manage and describe the list of community platforms, offer timeslots and tools to perform tests, share results and best practices, and support the test campaigns of other community projects (e.g. the functional testing project (Functest), platform performance benchmarking (Qtip), BGS, and OSCAR). Pharos shall provide the infrastructure and tooling needed by these projects.
+
+
+.. image:: images/opnfv-test.jpg
+
+Community Test Labs
+--------------------
+
+A summary of all community-hosted OPNFV test labs (existing and planned) is also kept on the `wiki home page <https://wiki.opnfv.org/start#opnfv_community_labs>`_. This section contains additional details and project relationship mappings. **NOTE:** Please follow `these instructions <https://wiki.opnfv.org/lab_update_guide>`_ when updating this list.
+
++-----------+---------------+------------------------------------------------------+---------------------------------------------+-------------------------------------+----------------------+
+| Map | Hosting | Home page | Contact person | Comments | Location |
+| Position | Organization | | | | |
++-----------+---------------+------------------------------------------------------+---------------------------------------------+-------------------------------------+----------------------+
+| 1 | Spirent | https://wiki.opnfv.org/pharos/spirentvctlab | Iben Rodriguez | OpenDaylight, NFV, SDN, & | Nephoscale |
+| | | | iben.rodriguez@spirent.com | OpenStack testing in progress | San Jose, CA |
++-----------+---------------+------------------------------------------------------+---------------------------------------------+-------------------------------------+----------------------+
+| 2 | China Mobile | | Fu Qiao | PODs dedicated for BGS and | Beijing, China |
+| | | | fuqiao@chinamobile.com | Functest | |
++-----------+---------------+------------------------------------------------------+---------------------------------------------+-------------------------------------+----------------------+
+| 3 | Ericsson | https://wiki.opnfv.org/get_started/ericsson_hosting | Jonas Bjurel | | Montreal, Canada |
+| | | | jonas.bjurel@ericsson.com | | |
++-----------+---------------+------------------------------------------------------+---------------------------------------------+-------------------------------------+----------------------+
+| 4         | Huawei        |                                                      | Radoaca Vasile                              | TBD                                 | Xi'an, China         |
+| | | | radoaca.vasile@huawei.com | | |
++-----------+---------------+------------------------------------------------------+---------------------------------------------+-------------------------------------+----------------------+
+| 5         | Intel         | https://wiki.opnfv.org/get_started/intel_hosting     | Trevor Cooper                               | Operational with PODs dedicated to  | Intel Labs;          |
+|           |               |                                                      | trevor.cooper@intel.com                     | BGS and vSwitch projects            | Hillsboro, Oregon    |
++-----------+---------------+------------------------------------------------------+---------------------------------------------+-------------------------------------+----------------------+
+| 6 | Orange | | Morgan Richomme | Available Q1 2015 | Orange Labs; |
+| | | | morgan.richomme@orange.com | | Lannion, France |
++-----------+---------------+------------------------------------------------------+---------------------------------------------+-------------------------------------+----------------------+
+| 7 | Cable Labs | | | TBD | |
+| | | | | | |
++-----------+---------------+------------------------------------------------------+---------------------------------------------+-------------------------------------+----------------------+
+| 8 | Dell | | Wenjing Chu | TBD | Santa Clara, CA |
+| | | | Wenjing_Chu@DELL.com | | |
++-----------+---------------+------------------------------------------------------+---------------------------------------------+-------------------------------------+----------------------+
+| 9 | Huawei | | Sean Chen | TBD | Santa Clara, CA |
+| | | | | | |
++-----------+---------------+------------------------------------------------------+---------------------------------------------+-------------------------------------+----------------------+
+
+
+
+Pharos management
+------------------
+
+- `Project proposal <https://wiki.opnfv.org/opnfv_testbed_infrastructure>`_
+- A "Pharos compliant" environment is the `standard configuration of a deployed system <https://wiki.opnfv.org/pharos/pharos_specification>`_ for test purposes
+- `Testing <https://wiki.opnfv.org/pharos_testing>`_ on "Pharos compliant" environment
+- `Project draft release <https://wiki.opnfv.org/pharos_draft_release>`_
+- `Task follow-up <https://wiki.opnfv.org/pharos_tasks>`_
+- `FAQ <https://wiki.opnfv.org/pharos_faq>`_
+- `Meeting & minutes page <https://wiki.opnfv.org/wiki/test_and_performance_meetings>`_ (this page needs to be moved and renamed)
+
+Pharos project - Key facts
+---------------------------
+
+- Project Creation Date: January 8, 2015
+- Project Category: Integration & Testing
+- Lifecycle State: Incubation
+- Primary Contact: Trevor Cooper <trevor.cooper@intel.com>
+- Project Lead: Trevor Cooper <trevor.cooper@intel.com>
+- Jira Project Name: Testbed infrastructure Project
+- Jira Project Prefix: PHAROS
+- Committers:
+
+ - Trevor Cooper <trevor.cooper@intel.com>
+ - Fu Qiao <fuqiao@chinamobile.com>
+ - Sheng-ann Yu <sheng-ann.yu@ericsson.com>
+ - Wenjing Chu <Wenjing_Chu@DELL.com>
+ - Chris Donley <C.Donley@cablelabs.com>
+ - Morgan Richomme <morgan.richomme@orange.com>
+ - Erica Johnson <erica.johnson@iol.unh.edu>
+ - Hui Deng <denghui@chinamobile.com>
+ - Prabu Kuppuswamy <prabu.kuppuswamy@spirent.com>
+ - Sean Chen <s.chen@huawei.com>
+ - Saikrishna M Kotha <saikrishna.kotha@xilinx.com>
+ - Eugene Yu <yuyijun@huawei.com>
+
+- Contributors:
+
+ - Iben Rodriguez <iben.rodriguez@spirent.com>
+
+
+- IRC: freenode.net #opnfv-pharos `http://webchat.freenode.net/?channels=opnfv-pharos <http://webchat.freenode.net/?channels=opnfv-pharos>`_
+- Mailing List: no dedicated mailing list; use opnfv-tech-discuss and tag your emails with [Pharos] in the subject for easier filtering
+- Meetings:
+
+ - `meetings <https://wiki.opnfv.org/wiki/test_and_performance_meetings>`_
+
+- Repository: pharos
+
+**Documentation tracking**
+
+Revision: _sha1_
+
+Build date: _date_
+
+
diff --git a/docs/spec.rst b/docs/spec.rst
new file mode 100644
index 00000000..67911158
--- /dev/null
+++ b/docs/spec.rst
@@ -0,0 +1,234 @@
+Pharos Specification
+=====================
+
+Objectives / Scope
+-------------------
+
+The Pharos specification defines the OPNFV test environment in which the OPNFV platform can be deployed and tested. The environment:
+
+- Provides a secure, scalable, standard and HA environment
+- Supports full deployment lifecycle (this requires a bare metal environment)
+- Supports functional and performance testing
+- Provides common tooling and test scenarios (including test cases and workloads) available to the community
+- Provides mechanisms and procedures for secure remote access to the test environment
+
+Virtualized environments will be useful but do not provide a fully featured deployment/test capability
+
+The high-level architecture may be summarized as follows:
+
+.. image:: images/pharos-archi1.jpg
+
+Constraints of a Pharos compliant OPNFV test-bed environment
+-------------------------------------------------------------
+
+- One jump server (provisioning server), on which the installer runs in a VM
+- 2 - 5 compute / controller nodes
+- Jump server provisioned with CentOS7
+- The installer (e.g. Foreman) runs its own DHCP server, therefore the management network must not have another DHCP server
+- Remote access
+- A lights-out network is required for remote management and bare metal provisioning capability
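+
+The constraints above can be expressed as a small validation sketch (a hypothetical helper, not part of any Pharos tooling; the field names are illustrative):
+
+.. code-block:: python
+
+    def validate_pod(pod):
+        """Return a list of Pharos constraint violations (empty if compliant)."""
+        problems = []
+        if pod.get("jump_server_os") != "CentOS7":
+            problems.append("jump server must be provisioned with CentOS7")
+        if not 2 <= pod.get("compute_controller_nodes", 0) <= 5:
+            problems.append("expected 2 - 5 compute/controller nodes")
+        if pod.get("mgmt_dhcp", False):
+            problems.append("management network must not run a DHCP server "
+                            "(the installer provides its own)")
+        if not pod.get("lights_out_network", False):
+            problems.append("a lights-out network is required")
+        return problems
+
+    pod = {"jump_server_os": "CentOS7", "compute_controller_nodes": 3,
+           "mgmt_dhcp": False, "lights_out_network": True}
+    print(validate_pod(pod))  # prints [] for a compliant POD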
+
+Target Systems State
+---------------------
+
+- Target system state includes default software components, network configuration, and storage requirements: `https://wiki.opnfv.org/get_started/get_started_system_state <https://wiki.opnfv.org/get_started/get_started_system_state>`_
+
+
+The Release 1 specification is modeled on Arno:
+
+* First draft of environment for BGS https://wiki.opnfv.org/get_started/get_started_work_environment
+* Fuel environment https://wiki.opnfv.org/get_started/networkingblueprint
+* Foreman environment https://wiki.opnfv.org/get_started_experiment1#topology
+
+Hardware
+---------
+
+**Servers**
+
+CPU:
+
+* Intel Xeon E5-2600 (Ivy Bridge or newer, or similar)
+
+Local Storage:
+
+* Disks: 4 x 500 GB - 2 TB + 1 x 300 GB SSD (leave some room for experiments)
+* The first 2 disks should be combined to form a ~1 TB virtual store for the OS/software
+* The remaining disks should be combined to form a virtual disk for Ceph storage
+* The 5th disk (SSD) is used for the distributed storage (Ceph) journal
+* Performance testing requires a mix of compute nodes with Ceph storage (Swift + Cinder) and without
+* Virtual ISO boot capabilities or a separate PXE boot server (DHCP/tftp or Cobbler)
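+
+As a worked example of the storage split above (disk sizes are placeholders within the stated ranges; this helper is illustrative, not part of any Pharos tooling):
+
+.. code-block:: python
+
+    def plan_storage(disks_gb, ssd_gb):
+        """Split the disks as described above: first two for the OS,
+        the rest for Ceph data, the SSD for the Ceph journal."""
+        if len(disks_gb) < 4:
+            raise ValueError("the spec calls for at least 4 disks plus an SSD")
+        return {
+            "os_volume_gb": sum(disks_gb[:2]),   # ~1 TB OS/software store
+            "ceph_data_gb": sum(disks_gb[2:]),   # virtual disk for Ceph
+            "ceph_journal_gb": ssd_gb,           # SSD journal
+        }
+
+    layout = plan_storage([500, 500, 1000, 1000], ssd_gb=300)
+    print(layout)
+    # {'os_volume_gb': 1000, 'ceph_data_gb': 2000, 'ceph_journal_gb': 300}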
+
+Memory:
+
+* 32G RAM Minimum
+
+Power Supply:
+
+* Single power supply acceptable (redundant power not required/nice to have)
+
+**Provisioning**
+
+Pre-provisioning Jump Server
+
+* OS - CentOS7
+* KVM / Qemu
+* Installer (Foreman, Fuel, ...) in a VM
+* Collaboration Tools
+
+Test Tools
+
+Jumphost - `functest <http://artifacts.opnfv.org/functest/docs/functest.html>`_
+
+Controller nodes - bare metal
+
+Compute nodes - bare metal
+
+**Security**
+
+- Servers
+
+ - Default permissions
+ - Server Logins
+ - **Pharos team needs to provide consistent usernames for infrastructure**
+
+Remote management
+------------------
+
+**Remote access**
+
+- Remote access is required for:
+
+ 1. Developers to access deploy/test environments (credentials to be issued per POD / user)
+ 2. Connection of each environment to Jenkins master hosted by Linux Foundation for automated deployment and test
+
+- VPN is optional and dependent on company security rules (out of Pharos scope)
+- POD access rules / restrictions:
+
+ - Refer to individual test-bed as each company may have different access rules and procedures
+
+- Basic requirement is for SSH sessions to be established (initially on jump server)
+- The majority of packages installed on a system (tools or applications) will be pulled from external storage, so this should be solved in a general way for all projects
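+
+Since access starts with an SSH session to the jump server, node access is typically tunneled through it; a sketch of generating an OpenSSH client config for that pattern (the hostnames and user name are placeholders):
+
+.. code-block:: python
+
+    def ssh_config(jump_host, nodes, user="opnfv"):
+        """Build an OpenSSH client config reaching POD nodes via the jump server."""
+        lines = ["Host jump",
+                 "    HostName %s" % jump_host,
+                 "    User %s" % user,
+                 ""]
+        for name, addr in sorted(nodes.items()):
+            lines += ["Host %s" % name,
+                      "    HostName %s" % addr,
+                      "    User %s" % user,
+                      "    ProxyJump jump",  # OpenSSH 7.3+ jump-host option
+                      ""]
+        return "\n".join(lines)
+
+    print(ssh_config("pod1-jump.example.org", {"controller0": "192.0.2.10"}))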
+
+Firewall rules
+
+- SSH sessions
+- Jenkins sessions
+
+Lights-out Management:
+
+- Out-of-band management for power on/off/reset and bare-metal provisioning
+- Access to servers is through a lights-out management tool and/or a serial console
+- Intel lights-out ⇒ RMM http://www.intel.com/content/www/us/en/server-management/intel-remote-management-module.html
+- HP lights-out ⇒ ILO http://www8.hp.com/us/en/products/servers/ilo/index.html
+- CISCO lights-out ⇒ UCS https://developer.cisco.com/site/ucs-dev-center/index.gsp
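+
+For IPMI-capable servers, the power on/off/reset actions above are commonly driven with ``ipmitool`` over the lights-out network. A sketch that composes (but does not execute) such a command, with placeholder host and user; a real invocation would also supply the password (e.g. ``-P`` or an environment variable):
+
+.. code-block:: python
+
+    def ipmi_power_cmd(bmc_host, user, action="status"):
+        """Compose an ipmitool chassis-power command (not executed here)."""
+        if action not in ("status", "on", "off", "reset"):
+            raise ValueError("unsupported power action: %s" % action)
+        return ["ipmitool", "-I", "lanplus",   # IPMI-over-LAN interface
+                "-H", bmc_host, "-U", user,
+                "chassis", "power", action]
+
+    cmd = ipmi_power_cmd("node1-lom.example.org", "admin", "reset")
+    print(" ".join(cmd))
+    # subprocess.run(cmd) would perform the actual reset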
+
+The Linux Foundation provides a VPN service for accessing the Lights-Out Management (LOM) infrastructure of the UCS-M hardware.
+
+- People with admin access to LF infrastructure:
+
+1. amaged@cisco.com
+2. cogibbs@cisco.com
+3. daniel.smith@ericsson.com
+4. dradez@redhat.com
+5. fatih.degirmenci@ericsson.com
+6. fbrockne@cisco.com
+7. jonas.bjurel@ericsson.com
+8. jose.lausuch@ericsson.com
+9. joseph.gasparakis@intel.com
+10. morgan.richomme@orange.com
+11. pbandzi@cisco.com
+12. phladky@cisco.com
+13. stefan.k.berg@ericsson.com
+14. szilard.cserey@ericsson.com
+15. trozet@redhat.com
+
+- The people who require VPN access must have a valid PGP key bearing a valid signature from one of the people listed above. When issuing OpenVPN credentials, the Linux Foundation will send TLS certificates and two-factor authentication tokens, encrypted to each recipient's PGP key.
+
+Networking
+-----------
+
+Test-bed network
+
+* 24 or 48 Port TOR Switch
+* NICS - 1GE, 10GE - per server can be on-board or PCI-e
+* Connectivity for each data/control network is through a separate NIC. This simplifies switch management but requires more NICs on the server and more switch ports
+* Lights-out network can share with Admin/Management
+
+Network Interfaces
+
+* Option I: 4 x 1G Control, 2 x 40G Data, 48 Port Switch
+
+ * 1 x 1G for IPMI (lights-out management)
+ * 1 x 1G for Admin/PXE boot
+ * 1 x 1G for control Plane connectivity
+ * 1 x 1G for storage
+ * 2 x 40G (or 10G) for data network (redundancy, NIC bonding, High bandwidth testing)
+
+* Option II: 1 x 1G Control, 2 x 40G (or 10G) Data, 24 Port Switch
+
+ * Connectivity to networks is through VLANs on the Control NIC. Data NIC used for VNF traffic and storage traffic segmented through VLANs
+
+* Option III: 2 x 1G Control, 2 x 10G Data, 2 x 40G Storage, 24 Port Switch
+
+ * Data NIC used for VNF traffic, storage NIC used for control plane and Storage segmented through VLANs (separate host traffic from VNF)
+ * 1 x 1G for IPMI
+ * 1 x 1G for Admin/PXE boot
+ * 2 x 10G for control plane connectivity/Storage
+ * 2 x 40G (or 10G) for data network
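+
+The three options can be summarized in a small model that counts the switch ports each server consumes (an illustrative sketch; the counts are taken from the lists above, with Option III's IPMI and admin/PXE ports grouped as 1G control and its 10G/40G links grouped as high-speed):
+
+.. code-block:: python
+
+    OPTIONS = {
+        "I":   {"ports_1g": 4, "ports_hs": 2, "switch_ports": 48},
+        "II":  {"ports_1g": 1, "ports_hs": 2, "switch_ports": 24},
+        "III": {"ports_1g": 2, "ports_hs": 4, "switch_ports": 24},
+    }
+
+    def ports_per_server(option):
+        """Total switch ports one server consumes under the given option."""
+        o = OPTIONS[option]
+        return o["ports_1g"] + o["ports_hs"]
+
+    for name in ("I", "II", "III"):
+        print("Option %s: %d ports per server" % (name, ports_per_server(name)))
+    # Option I: 6, Option II: 3, Option III: 6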
+
+Storage Network
+----------------
+
+- Needs specification
+
+**Topology**
+
+- Subnet, VLANs (want to standardize but may be constrained by existing lab setups or rules)
+- IPs
+- Types of NW - lights-out, public, private, admin, storage
+- May be special NW requirements for performance related projects
+- Default gateways
+
+.. image:: images/bridge1.png
+
+Controller node bridge topology overview
+
+
+.. image:: images/bridge2.png
+
+Compute node bridge topology overview
+
+Architecture
+-------------
+
+**Network Diagram**
+
+The Pharos architecture may be described as follows:
+
+.. image:: images/opnfv-pharos-diagram-v01.jpg
+
+Figure 1: Standard Deployment Environment
+
+
+Tools
+------
+
+- Jenkins
+- Tempest / Rally
+- Robot
+- Git repository
+- Jira
+- FAQ channel
+
+Sample Network Drawings
+-----------------------
+
+Files for documenting lab network layout were contributed in Visio VSDX format, compressed as a ZIP file. Here is a sample of what the Visio diagram looks like:
+
+Download the Visio zip file here: `opnfv-example-lab-diagram.vsdx.zip <https://wiki.opnfv.org/_media/opnfv-example-lab-diagram.vsdx.zip>`_
+
+.. image:: images/opnfv-example-lab-diagram.png
+
+FYI: `Here <http://www.opendaylight.org/community/community-labs>`_ is what the OpenDaylight lab wiki pages look like.
+
+