Diffstat (limited to 'docs')
-rw-r--r--  docs/configguide/configguide.rst          5
-rw-r--r--  docs/images/ZTE-POD.jpg                   bin 0 -> 369826 bytes
-rw-r--r--  docs/labs/ZTE.rst                         241
-rw-r--r--  docs/labs/images/ZTE_Overview.jpg         bin 0 -> 1800970 bytes
-rw-r--r--  docs/pharos.rst                           3
-rw-r--r--  docs/specification/hardwarespec.rst       44
-rw-r--r--  docs/specification/index.rst              19
-rw-r--r--  docs/specification/jumpserverinstall.rst  83
-rw-r--r--  docs/specification/networkconfig.rst      74
-rw-r--r--  docs/specification/objectives.rst         20
-rw-r--r--  docs/specification/remoteaccess.rst       58
11 files changed, 547 insertions, 0 deletions
diff --git a/docs/configguide/configguide.rst b/docs/configguide/configguide.rst
new file mode 100644
index 00000000..92d23b72
--- /dev/null
+++ b/docs/configguide/configguide.rst
@@ -0,0 +1,5 @@
+Pharos Lab Configuration requirements
+=====================================
+
+Add an overview for the Pharos configuration guide here. It should describe the tasks
+and expectations involved in configuring a lab to be Pharos compliant, and refer to the relevant Pharos documents published by the project.
diff --git a/docs/images/ZTE-POD.jpg b/docs/images/ZTE-POD.jpg
new file mode 100644
index 00000000..23907f9d
--- /dev/null
+++ b/docs/images/ZTE-POD.jpg
Binary files differ
diff --git a/docs/labs/ZTE.rst b/docs/labs/ZTE.rst
new file mode 100644
index 00000000..809baf27
--- /dev/null
+++ b/docs/labs/ZTE.rst
@@ -0,0 +1,241 @@
+ZTE OPNFV Testlab
+==================================================
+
+Overview
+------------------
+
+ZTE hosts an OPNFV testlab at its Nanjing facility. The testlab provides bare-metal servers for use by the OPNFV community as part of the OPNFV Pharos Project.
+
+The ZTE Testlab consists of one POD:
+
+ * POD for Fuel
+
+.. image:: images/ZTE_Overview.jpg
+ :alt: ZTE OPNFV Testlab Overview
+
+The POD consists of 8 servers:
+
+ * 3 servers for Control Nodes
+ * 3 servers for Compute Nodes
+ * 2 spare servers
+
+
+Hardware details
+-----------------
+
+
+**POD-Fuel**
+
+The specifications for the servers within POD can be found below:
+
++------------------+-----------+--------+-----------+--------------------+--------+
+| Hostname         | Model     | Memory | Storage   | Processor          | Socket |
++------------------+-----------+--------+-----------+--------------------+--------+
+| Fuel Jump Server | ZTE R4300 | 32 GB  | 600GB HDD | Intel Xeon E5-2680 | 2      |
++------------------+-----------+--------+-----------+--------------------+--------+
+| Node4            | ZTE E9000 | 128 GB | 600GB HDD | Intel Xeon E5-2680 | 2      |
++------------------+-----------+--------+-----------+--------------------+--------+
+| Node5            | ZTE E9000 | 128 GB | 600GB HDD | Intel Xeon E5-2680 | 2      |
++------------------+-----------+--------+-----------+--------------------+--------+
+| Node6            | ZTE E9000 | 128 GB | 600GB HDD | Intel Xeon E5-2680 | 2      |
++------------------+-----------+--------+-----------+--------------------+--------+
+| Node10           | ZTE E9000 | 128 GB | 600GB HDD | Intel Xeon E5-2680 | 2      |
++------------------+-----------+--------+-----------+--------------------+--------+
+| Node11           | ZTE E9000 | 128 GB | 600GB HDD | Intel Xeon E5-2680 | 2      |
++------------------+-----------+--------+-----------+--------------------+--------+
+| Node12           | ZTE E9000 | 128 GB | 600GB HDD | Intel Xeon E5-2680 | 2      |
++------------------+-----------+--------+-----------+--------------------+--------+
+| Node13           | ZTE E9000 | 128 GB | 600GB HDD | Intel Xeon E5-2680 | 2      |
++------------------+-----------+--------+-----------+--------------------+--------+
+| Node14           | ZTE E9000 | 128 GB | 600GB HDD | Intel Xeon E5-2680 | 2      |
++------------------+-----------+--------+-----------+--------------------+--------+
+
+The specifications for the Network Interfaces of servers within POD can be seen below:
+
++-----------+----------------------+----------+-------------------+-----+------------+
+| Hostname  | NIC Model            | Ports    | MAC               | BW  | Roles      |
++-----------+----------------------+----------+-------------------+-----+------------+
+| Fuel Jump | 1, RTL8111/8168/8411 | enp8s0   | 98:f5:37:e1:b4:1b | 10G | mgmt       |
+|           |                      +----------+-------------------+-----+------------+
+|           |                      | enp9s0   | 98:f5:37:e1:b4:1c | 10G | Public     |
+|           +----------------------+----------+-------------------+-----+------------+
+|           | 2, Intel 82599       | enp3s0f0 | 90:e2:ba:8b:08:64 | 10G | Unused     |
+|           |                      +----------+-------------------+-----+------------+
+|           |                      | enp3s0f1 | 90:e2:ba:8b:08:65 | 10G | Unused     |
++-----------+----------------------+----------+-------------------+-----+------------+
+| Node10    | 1, Intel 82599       | eth0     | 4c:09:b4:b1:de:18 | 10G | Public     |
+|           |                      +----------+-------------------+-----+------------+
+|           |                      | eth1     | 4c:09:b4:b1:de:19 | 10G | Public     |
+|           +----------------------+----------+-------------------+-----+------------+
+|           | 2, Intel 82599       | eth2     | 4c:09:b4:b1:de:1a | 10G | storage    |
+|           |                      +----------+-------------------+-----+------------+
+|           |                      | eth3     | 4c:09:b4:b1:de:1b | 10G | storage    |
+|           +----------------------+----------+-------------------+-----+------------+
+|           | 3, Intel I350        | eth4     | 4c:09:b4:b2:59:d8 | 10G | Admin/mgmt |
+|           |                      +----------+-------------------+-----+------------+
+|           |                      | eth5     | 4c:09:b4:b2:59:d9 | 10G | Unused     |
++-----------+----------------------+----------+-------------------+-----+------------+
+| Node11    | 1, Intel 82599       | eth0     | 4c:09:b4:b1:de:3c | 10G | Public     |
+|           |                      +----------+-------------------+-----+------------+
+|           |                      | eth1     | 4c:09:b4:b1:de:3d | 10G | Public     |
+|           +----------------------+----------+-------------------+-----+------------+
+|           | 2, Intel 82599       | eth2     | 4c:09:b4:b1:de:3e | 10G | Storage    |
+|           |                      +----------+-------------------+-----+------------+
+|           |                      | eth3     | 4c:09:b4:b1:de:3f | 10G | Storage    |
+|           +----------------------+----------+-------------------+-----+------------+
+|           | 3, Intel I350        | eth4     | 4c:09:b4:b2:5a:d4 | 10G | Admin/mgmt |
+|           |                      +----------+-------------------+-----+------------+
+|           |                      | eth5     | 4c:09:b4:b2:5a:d5 | 10G | Unused     |
++-----------+----------------------+----------+-------------------+-----+------------+
+| Node12    | 1, Intel 82599       | eth0     | 4c:09:b4:b1:de:08 | 10G | Public     |
+|           |                      +----------+-------------------+-----+------------+
+|           |                      | eth1     | 4c:09:b4:b1:de:09 | 10G | Public     |
+|           +----------------------+----------+-------------------+-----+------------+
+|           | 2, Intel 82599       | eth2     | 4c:09:b4:b1:de:0a | 10G | storage    |
+|           |                      +----------+-------------------+-----+------------+
+|           |                      | eth3     | 4c:09:b4:b1:de:0b | 10G | storage    |
+|           +----------------------+----------+-------------------+-----+------------+
+|           | 3, Intel I350        | eth4     | 4c:09:b4:b2:59:bd | 10G | Admin/mgmt |
+|           |                      +----------+-------------------+-----+------------+
+|           |                      | eth5     | 4c:09:b4:b2:59:be | 10G | Unused     |
++-----------+----------------------+----------+-------------------+-----+------------+
+| Node4     | 1, Intel 82599       | eth0     | 4c:09:b4:b1:de:1c | 10G | Public     |
+|           |                      +----------+-------------------+-----+------------+
+|           |                      | eth1     | 4c:09:b4:b1:de:1d | 10G | Public     |
+|           +----------------------+----------+-------------------+-----+------------+
+|           | 2, Intel 82599       | eth2     | 4c:09:b4:b1:de:1e | 10G | storage    |
+|           |                      +----------+-------------------+-----+------------+
+|           |                      | eth3     | 4c:09:b4:b1:de:1f | 10G | storage    |
+|           +----------------------+----------+-------------------+-----+------------+
+|           | 3, Intel I350        | eth4     | 4c:09:b4:b2:59:a2 | 10G | Admin/mgmt |
+|           |                      +----------+-------------------+-----+------------+
+|           |                      | eth5     | 4c:09:b4:b2:59:a3 | 10G | Unused     |
++-----------+----------------------+----------+-------------------+-----+------------+
+| Node5     | 1, Intel 82599       | eth0     | 4c:09:b4:b1:de:24 | 10G | Public     |
+|           |                      +----------+-------------------+-----+------------+
+|           |                      | eth1     | 4c:09:b4:b1:de:25 | 10G | Public     |
+|           +----------------------+----------+-------------------+-----+------------+
+|           | 2, Intel 82599       | eth2     | 4c:09:b4:b1:de:26 | 10G | storage    |
+|           |                      +----------+-------------------+-----+------------+
+|           |                      | eth3     | 4c:09:b4:b1:de:27 | 10G | storage    |
+|           +----------------------+----------+-------------------+-----+------------+
+|           | 3, Intel I350        | eth4     | 4c:09:b4:b2:59:ab | 10G | Admin/mgmt |
+|           |                      +----------+-------------------+-----+------------+
+|           |                      | eth5     | 4c:09:b4:b2:59:ac | 10G | Unused     |
++-----------+----------------------+----------+-------------------+-----+------------+
+| Node6     | 1, Intel 82599       | eth0     | 4c:09:b4:b1:de:40 | 10G | Public     |
+|           |                      +----------+-------------------+-----+------------+
+|           |                      | eth1     | 4c:09:b4:b1:de:41 | 10G | Public     |
+|           +----------------------+----------+-------------------+-----+------------+
+|           | 2, Intel 82599       | eth2     | 4c:09:b4:b1:de:42 | 10G | storage    |
+|           |                      +----------+-------------------+-----+------------+
+|           |                      | eth3     | 4c:09:b4:b1:de:43 | 10G | storage    |
+|           +----------------------+----------+-------------------+-----+------------+
+|           | 3, Intel I350        | eth4     | 4c:09:b4:b2:59:fc | 10G | Admin/mgmt |
+|           |                      +----------+-------------------+-----+------------+
+|           |                      | eth5     | 4c:09:b4:b2:59:fd | 10G | Unused     |
++-----------+----------------------+----------+-------------------+-----+------------+
+| Node13    | 1, Intel 82599       | eth0     | 4c:09:b4:b1:de:38 | 10G | Public     |
+|           |                      +----------+-------------------+-----+------------+
+|           |                      | eth1     | 4c:09:b4:b1:de:39 | 10G | Unused     |
+|           +----------------------+----------+-------------------+-----+------------+
+|           | 2, Intel 82599       | eth2     | 4c:09:b4:b1:de:3a | 10G | storage    |
+|           |                      +----------+-------------------+-----+------------+
+|           |                      | eth3     | 4c:09:b4:b1:de:3b | 10G | storage    |
+|           +----------------------+----------+-------------------+-----+------------+
+|           | 3, Intel I350        | eth4     | 4c:09:b4:b2:59:87 | 10G | Admin/mgmt |
+|           |                      +----------+-------------------+-----+------------+
+|           |                      | eth5     | 4c:09:b4:b2:59:88 | 10G | Unused     |
++-----------+----------------------+----------+-------------------+-----+------------+
+| Node14    | 1, Intel 82599       | eth0     | 4c:09:b4:b1:de:48 | 10G | Public     |
+|           |                      +----------+-------------------+-----+------------+
+|           |                      | eth1     | 4c:09:b4:b1:de:49 | 10G | Unused     |
+|           +----------------------+----------+-------------------+-----+------------+
+|           | 2, Intel 82599       | eth2     | 4c:09:b4:b1:de:4a | 10G | storage    |
+|           |                      +----------+-------------------+-----+------------+
+|           |                      | eth3     | 4c:09:b4:b1:de:4b | 10G | storage    |
+|           +----------------------+----------+-------------------+-----+------------+
+|           | 3, Intel I350        | eth4     | 4c:09:b4:b2:59:75 | 10G | Admin/mgmt |
+|           |                      +----------+-------------------+-----+------------+
+|           |                      | eth5     | 4c:09:b4:b2:59:76 | 10G | Unused     |
++-----------+----------------------+----------+-------------------+-----+------------+
+
+
+Software
+---------
+
+The Jump servers in the Testlab are pre-provisioned with the following software:
+
+ * Fuel-Jump Server:
+
+   1. OS: CentOS
+   2. Pre-provisioned software: KVM, VNC server
+
+
+
+Networks
+----------
+
+**POD-Fuel Diagram**
+
+.. image:: ../images/ZTE-POD.jpg
+ :alt: ZTE POD Networking
+
+**Subnet allocations**
+
++-----------------+--------------+---------------+-------------+----------+
+| Network name    | Address      | Mask          | Gateway     | VLAN id  |
++-----------------+--------------+---------------+-------------+----------+
+| Public          | 172.10.0.0   | 255.255.255.0 | 172.10.0.1  | Untagged |
++-----------------+--------------+---------------+-------------+----------+
+| Fuel Admin      | 192.168.0.0  | 255.255.255.0 | 192.168.0.1 | Untagged |
++-----------------+--------------+---------------+-------------+----------+
+| Fuel Management | 192.168.11.0 | 255.255.255.0 |             | 101      |
++-----------------+--------------+---------------+-------------+----------+
+| Fuel Storage    | 192.168.12.0 | 255.255.255.0 |             | 102      |
++-----------------+--------------+---------------+-------------+----------+
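+
+For illustration only (the parent interface name and host address below are
+assumptions, not part of the lab records), a tagged sub-interface for the Fuel
+Management network (VLAN 101) could be defined on a CentOS 7 host as follows::
+
+    # /etc/sysconfig/network-scripts/ifcfg-eth4.101  (hypothetical sketch)
+    DEVICE=eth4.101
+    VLAN=yes
+    ONBOOT=yes
+    BOOTPROTO=static
+    IPADDR=192.168.11.2
+    NETMASK=255.255.255.0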
+
+
+**Lights out Network**
+
+**POD**
+
+All nodes can be logged into from the jump server.
+
++-----------+-------------------------+-------------------+----------+----------+
+| Hostname  | Lights-out address      | MAC               | Username | Password |
++-----------+-------------------------+-------------------+----------+----------+
+| Fuel-Jump | 58.213.14.182:5902(ssh) | 90:e2:ba:8b:08:65 | opnfv    |          |
++-----------+-------------------------+-------------------+----------+----------+
+| Node4     | 192.168.0.7             | 06:9d:69:13:5f:45 |          |          |
++-----------+-------------------------+-------------------+----------+----------+
+| Node5     | 192.168.0.8             | 32:9b:c4:da:10:4c |          |          |
++-----------+-------------------------+-------------------+----------+----------+
+| Node6     | 192.168.0.6             | 46:18:c4:74:cf:40 |          |          |
++-----------+-------------------------+-------------------+----------+----------+
+| Node10    | 192.168.0.4             | be:d0:49:d4:06:42 |          |          |
++-----------+-------------------------+-------------------+----------+----------+
+| Node11    | 192.168.0.3             | a2:d5:c1:bb:2b:49 |          |          |
++-----------+-------------------------+-------------------+----------+----------+
+| Node12    | 192.168.0.2             | 62:08:00:cd:4c:43 |          |          |
++-----------+-------------------------+-------------------+----------+----------+
+| Node13    | 192.168.0.9             | 4c:09:b4:b2:59:87 |          |          |
++-----------+-------------------------+-------------------+----------+----------+
+| Node14    | 192.168.0.10            | 9a:90:8a:db:e1:4c |          |          |
++-----------+-------------------------+-------------------+----------+----------+
+
+
+Remote access infrastructure
+-----------------------------
+
+The ZTE OPNFV testlab is free to use for the OPNFV community.
+
+A VPN is used to provide access to the ZTE Testlab. Details can be found in the *ZTE OPNFV-lab Access* document (link to be attached).
+
+To access the Testlab, please contact Zhihui Wu (wu.zhihui1@zte.com.cn) with the following details:
+ * Name
+ * Organization
+ * Purpose of using the lab
+
+Processing the request can take 2-3 business days.
+
+**Accessing the Jump Server**
+
+For credentials to access the Jump Server, please contact Zhihui Wu (wu.zhihui1@zte.com.cn).
diff --git a/docs/labs/images/ZTE_Overview.jpg b/docs/labs/images/ZTE_Overview.jpg
new file mode 100644
index 00000000..26eee135
--- /dev/null
+++ b/docs/labs/images/ZTE_Overview.jpg
Binary files differ
diff --git a/docs/pharos.rst b/docs/pharos.rst
index 8af3be01..dc0bcc6a 100644
--- a/docs/pharos.rst
+++ b/docs/pharos.rst
@@ -43,6 +43,9 @@ A summary of all Community Hosted OPNFV test labs (existing and planned) is also
 | 9         | Huawei        |                                                      | Sean Chen                                   | TBD                                 | Santa Clara, CA      |
 |           |               |                                                      |                                             |                                     |                      |
 +-----------+---------------+------------------------------------------------------+---------------------------------------------+-------------------------------------+----------------------+
+| 10        | ZTE           |                                                      | Zhihui Wu                                   | BGS Parser Yardstick                | Nanjing, China       |
+|           |               |                                                      | wu.zhihui1@zte.com.cn                       |                                     |                      |
++-----------+---------------+------------------------------------------------------+---------------------------------------------+-------------------------------------+----------------------+
diff --git a/docs/specification/hardwarespec.rst b/docs/specification/hardwarespec.rst
new file mode 100644
index 00000000..a861b7dd
--- /dev/null
+++ b/docs/specification/hardwarespec.rst
@@ -0,0 +1,44 @@
+Pharos compliant environment
+----------------------------
+
+A pharos compliant OPNFV test-bed provides:
+
+- One CentOS 7 jump server on which the virtualized OpenStack/OPNFV installer runs
+- In the Brahmaputra release, a variety of deployment toolchains may be selected to deploy from the jump server
+- 5 compute / controller nodes (`BGS <https://wiki.opnfv.org/get_started/get_started_work_environment>`_ requires 5 nodes)
+- A configured network topology allowing for LOM, Admin, Public, Private, and Storage Networks
+- Remote access as defined by the Jenkins slave configuration guide:
+  http://artifacts.opnfv.org/arno.2015.1.0/docs/opnfv-jenkins-slave-connection.arno.2015.1.0.pdf
+
+Hardware requirements
+---------------------
+
+**Servers**
+
+CPU:
+
+* Intel Xeon E5-2600v2 Series
+  (Ivy Bridge and newer, or similar)
+
+Local Storage Configuration:
+
+Below describes the minimum for the Pharos spec,
+which is designed to provide enough capacity for
+a reasonably functional environment. Additional
+and/or faster disks are nice to have and may
+produce a better result.
+
+* Disks: 2 x 1TB + 1 x 100GB SSD
+* The first 1TB HDD should be used for OS & additional software/tool installation
+* The second 1TB HDD configured for CEPH object storage
+* Finally, the 100GB SSD should be used as the CEPH journal
+* Performance testing requires a mix of compute nodes with Ceph (Swift + Cinder) storage and without Ceph storage
+* Virtual ISO boot capabilities or a separate PXE boot server (DHCP/tftp or Cobbler)
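+
+For illustration only, a separate PXE boot server could be sketched with
+dnsmasq as follows (the interface name, address range, and boot file are
+assumptions, not part of this specification)::
+
+    # /etc/dnsmasq.d/pxe.conf -- hypothetical DHCP/TFTP PXE sketch
+    interface=eth1
+    dhcp-range=192.168.0.100,192.168.0.200,12h
+    enable-tftp
+    tftp-root=/var/lib/tftpboot
+    dhcp-boot=pxelinux.0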
+
+Memory:
+
+* 32G RAM Minimum
+
+Power Supply:
+
+* Single power supply acceptable (redundant power not required/nice to have)
diff --git a/docs/specification/index.rst b/docs/specification/index.rst
new file mode 100644
index 00000000..39764efe
--- /dev/null
+++ b/docs/specification/index.rst
@@ -0,0 +1,19 @@
+Pharos Specification
+====================
+
+.. toctree::
+ :maxdepth: 2
+
+ ./objectives.rst
+ ./hardwarespec.rst
+ ./networkconfig.rst
+ ./jumpserverinstall.rst
+ ./remoteaccess.rst
+
+:Authors: Trevor Cooper (Intel)
+:Version: 1.0
+
+Indices and tables
+==================
+
+* :ref:`search`
diff --git a/docs/specification/jumpserverinstall.rst b/docs/specification/jumpserverinstall.rst
new file mode 100644
index 00000000..a82c3352
--- /dev/null
+++ b/docs/specification/jumpserverinstall.rst
@@ -0,0 +1,83 @@
+**Jump Server Configuration:**
+
+(Rough Placeholder, edit me)
+
+**Fuel**
+
+1. Obtain CentOS 7 Minimal ISO and install
+
+ ``wget http://mirrors.kernel.org/centos/7/isos/x86_64/CentOS-7-x86_64-Minimal-1503-01.iso``
+
+2. Set parameters appropriate for your environment during installation
+
+3. Disable NetworkManager
+
+ ``systemctl disable NetworkManager``
+
+4. Configure your /etc/sysconfig/network-scripts/ifcfg-* files for your network
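+
+   For illustration only (the device name and addresses below are assumptions,
+   not taken from any particular pod), a static configuration might look like::
+
+      # /etc/sysconfig/network-scripts/ifcfg-enp9s0  (hypothetical sketch)
+      DEVICE=enp9s0
+      ONBOOT=yes
+      BOOTPROTO=static
+      IPADDR=10.20.0.2
+      NETMASK=255.255.255.0
+      GATEWAY=10.20.0.1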
+
+5. Restart networking
+
+ ``service network restart``
+
+6. Edit /etc/resolv.conf and add a nameserver
+
+ ``vi /etc/resolv.conf``
+
+7. Install libvirt & kvm
+
+   ``yum -y update``
+
+   ``yum -y install kvm qemu-kvm libvirt``
+
+   ``systemctl enable libvirtd``
+
+8. Reboot:
+
+ ``shutdown -r now``
+
+9. If you wish to avoid an annoying delay when logging in over SSH, disable DNS lookups:
+
+   ``vi /etc/ssh/sshd_config``
+
+   Uncomment the ``UseDNS yes`` line, change ``yes`` to ``no``, and save the file.
+
+10. Restart sshd
+
+ ``systemctl restart sshd``
+
+11. Install virt-install
+
+ ``yum -y install virt-install``
+
+12. Visit artifacts.opnfv.org and download the OPNFV Fuel ISO
+
+13. Create a bridge using the interface on the PXE network, for example: br0
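+
+    For illustration only (the physical device name is an assumption), the
+    bridge and its PXE-network slave interface could be defined as follows::
+
+       # /etc/sysconfig/network-scripts/ifcfg-br0  (hypothetical sketch)
+       DEVICE=br0
+       TYPE=Bridge
+       ONBOOT=yes
+       BOOTPROTO=static
+
+       # /etc/sysconfig/network-scripts/ifcfg-eth4  (hypothetical sketch)
+       DEVICE=eth4
+       ONBOOT=yes
+       BRIDGE=br0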
+
+14. Make a directory owned by qemu:
+
+ ``mkdir /home/qemu; mkdir -p /home/qemu/VMs/fuel-6.0/disk``
+
+ ``chown -R qemu:qemu /home/qemu``
+
+15. Copy the ISO to /home/qemu
+
+ ``cd /home/qemu``
+
+ ``virt-install -n opnfv-2015-05-22_18-34-07-fuel -r 4096 --vcpus=4
+ --cpuset=0-3 -c opnfv-2015-05-22_18-34-07.iso --os-type=linux
+ --os-variant=rhel6 --boot hd,cdrom --disk
+   path=/home/qemu/VMs/fuel-6.0/disk/fuel-vhd0.qcow2,bus=virtio,size=50,format=qcow2
+ -w bridge=br0,model=virtio --graphics vnc,listen=0.0.0.0``
+
+16. Temporarily flush the firewall rules to make things easier:
+
+ ``iptables -F``
+
+17. Connect to the console of the installing VM with your favorite VNC client.
+
+18. Change the IP settings to match the pod; use an IP in the PXE/Admin network for the Fuel Master.
+
+**Foreman**
+
+TBA
diff --git a/docs/specification/networkconfig.rst b/docs/specification/networkconfig.rst
new file mode 100644
index 00000000..510ea802
--- /dev/null
+++ b/docs/specification/networkconfig.rst
@@ -0,0 +1,74 @@
+Networking
+----------
+
+Test-bed network
+
+* 24 or 48 Port TOR Switch
+* NICS - 1GE, 10GE - per server can be on-board or PCI-e
+* Connectivity for each data/control network is through a separate NIC.
+  This simplifies switch management but requires more NICs on the server and more switch ports.
+* Lights-out network can share with Admin/Management
+
+Network Interfaces
+
+* Option I: 4x1G Control, 2x40G Data, 48 Port Switch
+
+  * 1 x 1G for IPMI (lights-out management)
+ * 1 x 1G for Admin/PXE boot
+ * 1 x 1G for control Plane connectivity
+ * 1 x 1G for storage
+ * 2 x 40G (or 10G) for data network (redundancy, NIC bonding, High bandwidth testing)
+
+* Option II: 1x1G Control, 2x 40G (or 10G) Data, 24 Port Switch
+
+ * Connectivity to networks is through VLANs on the Control NIC.
+ * Data NIC used for VNF traffic and storage traffic segmented through VLANs
+
+* Option III: 2x1G Control, 2x10G Data, 2x40G Storage, 24 Port Switch
+
+ * Data NIC used for VNF traffic
+ * Storage NIC used for control plane and Storage segmented through VLANs (separate host traffic from VNF)
+ * 1 x 1G for IPMI
+ * 1 x 1G for Admin/PXE boot
+ * 2 x 10G for control plane connectivity/Storage
+ * 2 x 40G (or 10G) for data network
+
+Documented configuration to include:
+
+- Subnets, VLANs (may be constrained by existing lab setups or rules)
+- IPs
+- Types of network - lights-out, public, private, admin, storage
+- Any special network requirements of performance-related projects
+- Default gateways
+
+Controller node bridge topology overview
+
+.. image:: ../images/bridge1.png
+
+Compute node bridge topology overview
+
+.. image:: ../images/bridge2.png
+
+Architecture
+-------------
+
+**Network Diagram**
+
+The Pharos architecture may be described as follows:
+
+.. image:: ../images/opnfv-pharos-diagram-v01.jpg
+
+Figure 1: Standard Deployment Environment
+
+Sample Network Drawings
+-----------------------
+
+Files for documenting lab network layout.
+These were contributed in Visio VSDX format, compressed as a ZIP
+file. Below is a sample of what the Visio diagram looks like.
+
+Download the visio zip file here:
+`opnfv-example-lab-diagram.vsdx.zip
+<https://wiki.opnfv.org/_media/opnfv-example-lab-diagram.vsdx.zip>`_
+
+.. image:: ../images/opnfv-example-lab-diagram.png
diff --git a/docs/specification/objectives.rst b/docs/specification/objectives.rst
new file mode 100644
index 00000000..53423882
--- /dev/null
+++ b/docs/specification/objectives.rst
@@ -0,0 +1,20 @@
+Objectives / Scope
+-------------------
+
+The Pharos specification defines the OPNFV hardware environment
+upon which the OPNFV Brahmaputra platform release can be deployed
+and tested. This specification defines:
+
+- A secure, scalable, standard and HA environment
+- Support for the full Brahmaputra deployment lifecycle (this requires a bare-metal environment)
+- Support for functional and performance testing of the Brahmaputra release
+- Mechanisms and procedures for secure remote access to the test environment
+
+Deploying Brahmaputra in a virtualized environment is possible
+and useful; however, it does not provide a fully
+featured deployment and test environment for the Brahmaputra
+release of OPNFV.
+
+The high level architecture is outlined in the following diagram:
+
+.. image:: ../images/pharos-archi1.jpg
diff --git a/docs/specification/remoteaccess.rst b/docs/specification/remoteaccess.rst
new file mode 100644
index 00000000..e91d55e1
--- /dev/null
+++ b/docs/specification/remoteaccess.rst
@@ -0,0 +1,58 @@
+Remote management
+------------------
+
+**Remote access**
+
+- Remote access is required for …
+
+ 1. Developers to access deploy/test environments (credentials to be issued per POD / user)
+ 2. Connection of each environment to Jenkins master hosted by Linux Foundation for automated deployment and test
+
+- OpenVPN is generally used for remote access; however, community-hosted labs may vary due to company security rules
+- POD access rules / restrictions …
+
+ - Refer to individual test-bed as each company may have different access rules and acceptable usage policies
+
+- Basic requirement is for SSH sessions to be established (initially on jump server)
+- The majority of packages installed on a system (tools or applications) will be pulled from an external repo
+
+Firewall rules should include:
+
+- SSH sessions
+- Jenkins sessions
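+
+For illustration only, the rules above could be expressed with iptables as
+follows (the Jenkins agent port is a common default and an assumption here)::
+
+    # Allow inbound SSH and Jenkins agent traffic (illustrative sketch)
+    iptables -A INPUT -p tcp --dport 22 -m state --state NEW -j ACCEPT
+    iptables -A INPUT -p tcp --dport 8080 -m state --state NEW -j ACCEPT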
+
+Lights-out Management:
+
+- Out-of-band management for power on/off/reset and bare-metal provisioning
+- Access to server is through lights-out-management tool and/or a serial console
+- Intel lights-out ⇒ RMM http://www.intel.com/content/www/us/en/server-management/intel-remote-management-module.html
+- HP lights-out ⇒ ILO http://www8.hp.com/us/en/products/servers/ilo/index.html
+- CISCO lights-out ⇒ UCS https://developer.cisco.com/site/ucs-dev-center/index.gsp
+
+Linux Foundation - VPN service for accessing Lights-Out
+Management (LOM) infrastructure for the UCS-M hardware
+
+- People with admin access to LF infrastructure:
+
+1. amaged@cisco.com
+2. cogibbs@cisco.com
+3. daniel.smith@ericsson.com
+4. dradez@redhat.com
+5. fatih.degirmenci@ericsson.com
+6. fbrockne@cisco.com
+7. jonas.bjurel@ericsson.com
+8. jose.lausuch@ericsson.com
+9. joseph.gasparakis@intel.com
+10. morgan.richomme@orange.com
+11. pbandzi@cisco.com
+12. phladky@cisco.com
+13. stefan.k.berg@ericsson.com
+14. szilard.cserey@ericsson.com
+15. trozet@redhat.com
+
+- People who require VPN access must have a valid
+  PGP key bearing a valid signature from one of these
+  people. When issuing OpenVPN credentials, LF
+  will send TLS certificates and 2-factor
+  authentication tokens, encrypted to each recipient's PGP key.
+