Diffstat (limited to 'docs/specification')
-rw-r--r--  docs/specification/hardwarespec.rst                                              3
-rw-r--r--  docs/specification/jumpserverinstall.rst                                        83
-rw-r--r--  docs/specification/networkconfig.rst                                            71
-rw-r--r--  docs/specification/pharosspec.rst (renamed from docs/specification/index.rst)   14
-rw-r--r--  docs/specification/remoteaccess.rst                                             78
5 files changed, 70 insertions, 179 deletions
diff --git a/docs/specification/hardwarespec.rst b/docs/specification/hardwarespec.rst
index a214be44..a66e68f3 100644
--- a/docs/specification/hardwarespec.rst
+++ b/docs/specification/hardwarespec.rst
@@ -1,4 +1,4 @@
-. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) 2016 OPNFV.
@@ -13,6 +13,7 @@ A pharos compliant OPNFV test-bed provides:
- 5 compute / controller nodes (`BGS <https://wiki.opnfv.org/get_started/get_started_work_environment>`_ requires 5 nodes)
- A configured network topology allowing for LOM, Admin, Public, Private, and Storage Networks
- Remote access as defined by the Jenkins slave configuration guide
+
http://artifacts.opnfv.org/brahmaputra.1.0/docs/opnfv-jenkins-slave-connection.brahmaputra.1.0.html
**Servers**
diff --git a/docs/specification/jumpserverinstall.rst b/docs/specification/jumpserverinstall.rst
deleted file mode 100644
index 8400c2ab..00000000
--- a/docs/specification/jumpserverinstall.rst
+++ /dev/null
@@ -1,83 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) 2016 OPNFV.
-
-
-Jump Server Configuration
--------------------------
-
-**Fuel**
-
-1. Obtain CentOS 7 Minimal ISO and install
-
- ``wget http://mirrors.kernel.org/centos/7/isos/x86_64/CentOS-7-x86_64-Minimal-1503-01.iso``
-
-2. Set parameters appropriate for your environment during installation
-
-3. Disable NetworkManager
-
- ``systemctl disable NetworkManager``
-
-4. Configure your /etc/sysconfig/network-scripts/ifcfg-* files for your network
-
-5. Restart networking
-
- ``service network restart``
-
-6. Edit /etc/resolv.conf and add a nameserver
-
- ``vi /etc/resolv.conf``
-
-7. Install libvirt & kvm
-
- ``yum -y update``
- ``yum -y install kvm qemu-kvm libvirt``
- ``systemctl enable libvirtd``
-
-8. Reboot:
-
- ``shutdown -r now``
-
-9. If you wish to avoid an annoying delay when using SSH to log in, disable DNS lookups:
-
- ``vi /etc/ssh/sshd_config``
-
- Uncomment "UseDNS yes", change 'yes' to 'no'.
-
- Save
-
-10. Restart sshd
-
- ``systemctl restart sshd``
-
-11. Install virt-install
-
- ``yum -y install virt-install``
-
-12. Visit artifacts.opnfv.org and download the OPNFV Fuel ISO
-
-13. Create a bridge using the interface on the PXE network, for example: br0
-
-14. Make a directory owned by qemu:
-
- ``mkdir /home/qemu; mkdir -p /home/qemu/VMs/fuel-6.0/disk``
-
- ``chown -R qemu:qemu /home/qemu``
-
-15. Copy the ISO to /home/qemu
-
- ``cd /home/qemu``
-
- ``virt-install -n opnfv-2015-05-22_18-34-07-fuel -r 4096 --vcpus=4
- --cpuset=0-3 -c opnfv-2015-05-22_18-34-07.iso --os-type=linux
- --os-variant=rhel6 --boot hd,cdrom --disk
- path=/home/qemu/VMs/mirantis-fuel-6.0/disk/fuel-vhd0.qcow2,bus=virtio,size=50,format=qcow2
- -w bridge=br0,model=virtio --graphics vnc,listen=0.0.0.0``
-
-16. Temporarily flush the firewall rules to make things easier:
-
- ``iptables -F``
-
-17. Connect to the console of the installing VM with your favorite VNC client.
-
-18. Change the IP settings to match the pod, use an IP in the PXE/Admin network for the Fuel Master
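
The removed jump server guide asks for /etc/sysconfig/network-scripts/ifcfg-* edits (step 4) and a bridge on the PXE network (step 13) without showing the files themselves. A minimal sketch for CentOS 7, assuming a PXE-facing interface named em1 and an illustrative address (both are placeholders, not part of the guide)::

    # /etc/sysconfig/network-scripts/ifcfg-em1
    # em1 is a hypothetical PXE-facing NIC; substitute the real interface name
    DEVICE=em1
    TYPE=Ethernet
    ONBOOT=yes
    BOOTPROTO=none
    BRIDGE=br0

    # /etc/sysconfig/network-scripts/ifcfg-br0
    # example address on the PXE/Admin network; adjust to the pod
    DEVICE=br0
    TYPE=Bridge
    ONBOOT=yes
    BOOTPROTO=static
    IPADDR=10.20.0.2
    NETMASK=255.255.255.0
    DELAY=0

    # apply the change (NetworkManager was disabled in step 3)
    service network restart

The bridge name br0 matches the example in step 13 and is the name later passed to virt-install through ``-w bridge=br0``.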
diff --git a/docs/specification/networkconfig.rst b/docs/specification/networkconfig.rst
index af09e564..94477fc7 100644
--- a/docs/specification/networkconfig.rst
+++ b/docs/specification/networkconfig.rst
@@ -6,70 +6,53 @@
Networking
----------
-Test-bed network
+**Network Hardware**
-* 24 or 48 Port TOR Switch
-* NICS - 1GE, 10GE - per server can be on-board or PCI-e
-* Connectivity for each data/control network is through a separate NIC.
- *This simplifies Switch Management howeverrequires more NICs on the server and also more switch ports
-* Lights-out network can share with Admin/Management
+ * 24 or 48 Port TOR Switch
+ * NICs - Combination of 1GE and 10GE based on network topology options (per server can be on-board or use PCI-e)
+ * Connectivity for each data/control network is through a separate NIC. This simplifies switch management, however it requires more NICs on the server and more switch ports
+ * BMC (Baseboard Management Controller) for the lights-out management network using IPMI (Intelligent Platform Management Interface)
-Network Interfaces
+**Network Options**
-* Option I: 4x1G Control, 2x40G Data, 48 Port Switch
+ * Option I: 4x1G Control, 2x10G Data, 48 Port Switch
- * 1 x 1G for ILMI (Lights out Management )
- * 1 x 1G for Admin/PXE boot
- * 1 x 1G for control Plane connectivity
- * 1 x 1G for storage
- * 2 x 40G (or 10G) for data network (redundancy, NIC bonding, High bandwidth testing)
+ * 1 x 1G for lights-out management
+ * 1 x 1G for Admin/PXE boot
+ * 1 x 1G for control-plane connectivity
+ * 1 x 1G for storage
+ * 2 x 10G for data network (redundancy, NIC bonding, high-bandwidth testing)
-* Option II: 1x1G Control, 2x 40G (or 10G) Data, 24 Port Switch
+ * Option II: 1x1G Control, 2x 10G Data, 24 Port Switch
- * Connectivity to networks is through VLANs on the Control NIC.
+ * Connectivity to networks is through VLANs on the Control NIC
* Data NIC used for VNF traffic and storage traffic segmented through VLANs
-* Option III: 2x1G Control, 2x10G Data, 2x40G Storage, 24 Port Switch
+ * Option III: 2x1G Control, 2x10G Data, 2x10G Storage, 24 Port Switch
- * Data NIC used for VNF traffic
+ * Data NIC used for VNF traffic
* Storage NIC used for control plane and Storage segmented through VLANs (separate host traffic from VNF)
- * 1 x 1G for IPMI
- * 1 x 1G for Admin/PXE boot
- * 2 x 10G for control plane connectivity/Storage
- * 2 x 40G (or 10G) for data network
+ * 1 x 1G for lights-out management
+ * 1 x 1G for Admin/PXE boot
+ * 2 x 10G for control-plane connectivity/storage
+ * 2 x 10G for data network
Documented configuration to include:
-- Subnet, VLANs (may be constrained by existing lab setups or rules)
-- IPs
-- Types of NW - lights-out, public, private, admin, storage
-- May be special NW requirements for performance related projects
-- Default gateways
-Controller node bridge topology overview
+ - Subnet, VLANs (may be constrained by existing lab setups or rules)
+ - IPs
+ - Types of NW - lights-out, public, private, admin, storage
+ - May be special NW requirements for performance related projects
+ - Default gateways
-.. image:: ../images/bridge1.png
-
-compute node bridge topology overview
+**Sample Network Drawings**
.. image:: ../images/bridge2.png
-**Network Diagram**
-
-The Pharos architecture may be described as follow:
-Figure 1: Standard Deployment Environment
-
.. image:: ../images/opnfv-pharos-diagram-v01.jpg
-Figure 1: Standard Deployment Environment
-
-**Sample Network Drawings**
-
-Files for documenting lab network layout.
-These were contributed as Visio VSDX format compressed as a ZIP
-file. Here is a sample of what the visio looks like.
+.. image:: ../images/opnfv-example-lab-diagram.png
Download the visio zip file here:
`opnfv-example-lab-diagram.vsdx.zip
<https://wiki.opnfv.org/_media/opnfv-example-lab-diagram.vsdx.zip>`_
-
-.. image:: ../images/opnfv-example-lab-diagram.png
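
Options II and III above carry several networks over one NIC pair by segmenting traffic with VLANs, and the data NICs are bonded for redundancy. As an illustration only (interface names, bond mode, VLAN ID and addresses below are placeholders, not mandated by the specification), a CentOS-style configuration could look like::

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    # two 10G data NICs bonded for redundancy / NIC bonding tests
    DEVICE=bond0
    TYPE=Bond
    BONDING_MASTER=yes
    BONDING_OPTS="mode=802.3ad miimon=100"
    ONBOOT=yes
    BOOTPROTO=none

    # /etc/sysconfig/network-scripts/ifcfg-ens1f0
    # repeat for the second slave interface (e.g. ens1f1)
    DEVICE=ens1f0
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes
    BOOTPROTO=none

    # /etc/sysconfig/network-scripts/ifcfg-bond0.302
    # storage traffic segmented on VLAN 302 (example ID)
    DEVICE=bond0.302
    VLAN=yes
    ONBOOT=yes
    BOOTPROTO=static
    IPADDR=192.168.2.11
    NETMASK=255.255.255.0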
diff --git a/docs/specification/index.rst b/docs/specification/pharosspec.rst
index a583087f..c3eb45a7 100644
--- a/docs/specification/index.rst
+++ b/docs/specification/pharosspec.rst
@@ -2,19 +2,17 @@
.. http://creativecommons.org/licenses/by/4.0
.. (c) 2016 OPNFV.
-====================
+.. Top level of Pharos specification documents.
+
+********************
Pharos Specification
-====================
+********************
+
+The Pharos Specification provides information on Pharos hardware and network requirements.
.. toctree::
- :maxdepth: 2
./objectives.rst
./hardwarespec.rst
./networkconfig.rst
- ./jumpserverinstall.rst
./remoteaccess.rst
-
-Revision: _sha1_
-
-Build date: |today|
diff --git a/docs/specification/remoteaccess.rst b/docs/specification/remoteaccess.rst
index cb0ad8e2..51950da4 100644
--- a/docs/specification/remoteaccess.rst
+++ b/docs/specification/remoteaccess.rst
@@ -3,61 +3,53 @@
.. (c) 2016 OPNFV.
-Remote management
+Remote Management
------------------
-**Remote access**
+Remote access is required for:
-- Remote access is required for …
+ * Developers to access deploy/test environments (credentials to be issued per POD / user)
+ * Connection of each environment to Jenkins master hosted by Linux Foundation for automated deployment and test
- 1. Developers to access deploy/test environments (credentials to be issued per POD / user)
- 2. Connection of each environment to Jenkins master hosted by Linux Foundation for automated deployment and test
+OpenVPN is generally used for remote access, however community-hosted labs may vary due to company security
+rules. For POD access rules / restrictions, refer to individual lab documentation, as each company may have
+different access rules and acceptable usage policies.
-- OpenVPN is generally used for remote however community hosted labs may vary due to company security rules
-- POD access rules / restrictions …
+Basic requirements:
- - Refer to individual test-bed as each company may have different access rules and acceptable usage policies
+ * SSH sessions to be established (initially on the jump server)
+ * Packages to be installed on a system (tools or applications) by pulling from an external repo
-- Basic requirement is for SSH sessions to be established (initially on jump server)
-- Majority of packages installed on a system (tools or applications) will be pulled from an external repo.
+Firewall rules must accommodate:
-Firewall rules should include
+ * SSH sessions
+ * Jenkins sessions
-- SSH sessions
-- Jenkins sessions
+Lights-out management network requirements:
-Lights-out Management:
+ * Out-of-band management for power on/off/reset and bare-metal provisioning
+ * Access to the server is through a lights-out management tool and/or a serial console
+ * Refer to the applicable lights-out management information from the server manufacturer, such as:
-- Out-of-band management for power on/off/reset and bare-metal provisioning
-- Access to server is through lights-out-management tool and/or a serial console
-- Intel lights-out ⇒ RMM http://www.intel.com/content/www/us/en/server-management/intel-remote-management-module.html
-- HP lights-out ⇒ ILO http://www8.hp.com/us/en/products/servers/ilo/index.html
-- CISCO lights-out ⇒ UCS https://developer.cisco.com/site/ucs-dev-center/index.gsp
+ * Intel lights-out `RMM <http://www.intel.com/content/www/us/en/server-management/intel-remote-management-module.html>`_
+ * HP lights-out `ILO <http://www8.hp.com/us/en/products/servers/ilo/index.html>`_
+ * CISCO lights-out `UCS <https://developer.cisco.com/site/ucs-dev-center/index.gsp>`_
-Linux Foundation - VPN service for accessing Lights-Out
-Management (LOM) infrastructure for the UCS-M hardware
+The Linux Foundation Lab is a UCS-M hardware environment with controlled access *as needed*.
-- People with admin access to LF infrastructure:
+ * `Access rules and procedure <https://wiki.opnfv.org/pharos/lf_lab>`_ are maintained on the Wiki
+ * `A list of people <https://wiki.opnfv.org/pharos/lf_lab#opnfv_community_members_with_access_to_opnfv_lf_lab>`_ with access is maintained on the Wiki
+ * Send access requests to infra-steering@lists.opnfv.org with the following information:
-1. amaged@cisco.com
-2. cogibbs@cisco.com
-3. daniel.smith@ericsson.com
-4. dradez@redhat.com
-5. fatih.degirmenci@ericsson.com
-6. fbrockne@cisco.com
-7. jonas.bjurel@ericsson.com
-8. jose.lausuch@ericsson.com
-9. joseph.gasparakis@intel.com
-10. morgan.richomme@orange.com
-11. pbandzi@cisco.com
-12. phladky@cisco.com
-13. stefan.k.berg@ericsson.com
-14. szilard.cserey@ericsson.com
-15. trozet@redhat.com
-
-- The people who require VPN access must have a valid
-PGP key bearing a valid signature from one of these
-three people. When issuing OpenVPN credentials, LF
-will be sending TLS certificates and 2-factor
-authentication tokens, encrypted to each recipient's PGP key.
+ * Name:
+ * Company:
+ * Approved Project:
+ * Project role:
+ * Why is access needed:
+ * How long is access needed (either a specified time period or define "done"):
+ * What specific POD/machines will be accessed:
+ * What support is needed from LF admins and LF community support team:
+ * Once access is approved, please follow the instructions for setting up VPN access: https://wiki.opnfv.org/get_started/lflab_hosting
+ * The people who require VPN access must have a valid PGP key bearing a valid signature from LF
+ * When issuing OpenVPN credentials, LF will be sending TLS certificates and 2-factor authentication tokens, encrypted to each recipient's PGP key
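
The lights-out management requirements above (out-of-band power on/off/reset and bare-metal provisioning over IPMI) are commonly exercised with a standard IPMI client. A minimal sketch using ipmitool, where the BMC address and credentials are placeholders for values issued per lab::

    # query and control server power over the lights-out network
    ipmitool -I lanplus -H <bmc-address> -U <user> -P <password> chassis power status
    ipmitool -I lanplus -H <bmc-address> -U <user> -P <password> chassis power cycle

    # force the next boot from the network for bare-metal (PXE) provisioning
    ipmitool -I lanplus -H <bmc-address> -U <user> -P <password> chassis bootdev pxe

    # attach to the serial console via Serial-over-LAN
    ipmitool -I lanplus -H <bmc-address> -U <user> -P <password> sol activate

The vendor tools referenced above (Intel RMM, HP iLO, Cisco UCS) expose equivalent functionality through their own interfaces.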