From 920646aba6ee52f5cf494e1f2c279230b664ece0 Mon Sep 17 00:00:00 2001
From: ChristopherPrice
Date: Sun, 10 Jan 2016 15:27:07 +0100
Subject: Created a pharos specification document according to the new toolchain.

Created docs/specification and the index.rst file in the new format for the
new toolchain sequence. Will create a specification/index.html link on
artifacts and an associated pdf document. Made small editorial changes to the
original content, but it still needs work. Left the original pharos-spec file
in place as it is still being worked on; it should be removed once the
specification docs are in equivalent shape.

Change-Id: I6edb121766e7e1fdf1f38c70be95347b81b71dcc
Signed-off-by: ChristopherPrice
---
 docs/specification/hardwarespec.rst | 44 +++++++++++++++++++++++++++++++++++++
 1 file changed, 44 insertions(+)
 create mode 100644 docs/specification/hardwarespec.rst

diff --git a/docs/specification/hardwarespec.rst b/docs/specification/hardwarespec.rst
new file mode 100644
index 00000000..a861b7dd
--- /dev/null
+++ b/docs/specification/hardwarespec.rst
@@ -0,0 +1,44 @@
+Pharos compliant environment
+----------------------------
+
+A Pharos compliant OPNFV test-bed provides:
+
+- One CentOS 7 jump server on which the virtualized OpenStack/OPNFV installer runs
+- In the Brahmaputra release you may select from a variety of deployment toolchains to deploy from the jump server.
+- 5 compute / controller nodes (`BGS `_ requires 5 nodes)
+- A configured network topology allowing for LOM, Admin, Public, Private, and Storage networks
+- Remote access as defined by the Jenkins slave configuration guide:
+  http://artifacts.opnfv.org/arno.2015.1.0/docs/opnfv-jenkins-slave-connection.arno.2015.1.0.pdf
+
+Hardware requirements
+---------------------
+
+**Servers**
+
+CPU:
+
+* Intel Xeon E5-2600v2 Series
+  (Ivy Bridge and newer, or similar)
+
+Local Storage Configuration:
+
+The following describes the minimum storage for the
+Pharos spec, which is designed to provide enough capacity
+for a reasonably functional environment. Additional and/or
+faster disks are nice to have and may produce a better
+result.
+
+* Disks: 2 x 1 TB HDD + 1 x 100 GB SSD
+* The first 1 TB HDD should be used for the OS and additional software/tool installation
+* The second 1 TB HDD should be configured for Ceph object storage
+* Finally, the 100 GB SSD should be used as the Ceph journal
+* Performance testing requires a mix of compute nodes with and without Ceph (Swift + Cinder) storage
+* Virtual ISO boot capabilities or a separate PXE boot server (DHCP/TFTP or Cobbler)
+
+Memory:
+
+* 32 GB RAM minimum
+
+Power Supply:
+
+* A single power supply is acceptable (redundant power is not required but is nice to have)
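
As a rough, non-normative illustration of the hardware minimums listed in
hardwarespec.rst above, the following sketch checks a candidate node's RAM and
disk inventory against the 32 GB / 2 x 1 TB HDD / 1 x 100 GB SSD figures. It
assumes a Linux host with lsblk installed and /proc/meminfo available; the
script, its ~10% size tolerance, and its output format are assumptions made for
the example and are not part of the Pharos spec::

    #!/usr/bin/env python
    # Illustrative only: sanity-check a candidate Pharos node against the
    # minimums listed above (32 GB RAM, 2 x 1 TB HDD + 1 x 100 GB SSD).
    # Assumes a Linux host with lsblk installed; not part of the Pharos spec.
    import subprocess

    MIN_RAM_GB = 32                       # "32 GB RAM minimum"
    MIN_HDD_COUNT, MIN_HDD_GB = 2, 1000   # two HDDs of roughly 1 TB each
    MIN_SSD_COUNT, MIN_SSD_GB = 1, 100    # one SSD of roughly 100 GB (Ceph journal)

    def ram_gb():
        """Return total memory in GB, read from MemTotal (kB) in /proc/meminfo."""
        with open("/proc/meminfo") as meminfo:
            for line in meminfo:
                if line.startswith("MemTotal:"):
                    return int(line.split()[1]) / 1024.0 / 1024.0
        return 0.0

    def physical_disks():
        """Return a list of (name, size_gb, is_rotational) for each whole disk."""
        out = subprocess.check_output(
            ["lsblk", "-b", "-d", "-n", "-o", "NAME,SIZE,ROTA"]).decode()
        disks = []
        for line in out.splitlines():
            name, size_bytes, rota = line.split()
            disks.append((name, int(size_bytes) / 1e9, rota == "1"))
        return disks

    if __name__ == "__main__":
        disks = physical_disks()
        # Allow ~10% slack for the usual decimal/binary disk size discrepancies.
        hdds = [d for d in disks if d[2] and d[1] >= MIN_HDD_GB * 0.9]
        ssds = [d for d in disks if not d[2] and d[1] >= MIN_SSD_GB * 0.9]
        print("RAM:  %.1f GB        (spec minimum: %d GB)" % (ram_gb(), MIN_RAM_GB))
        print("HDDs: %d >= ~1 TB    (spec minimum: %d)" % (len(hdds), MIN_HDD_COUNT))
        print("SSDs: %d >= ~100 GB  (spec minimum: %d)" % (len(ssds), MIN_SSD_COUNT))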
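
Similarly, the LOM, Admin, Public, Private, and Storage networks called out in
the topology bullet above can be captured declaratively so that lab tooling has
something concrete to validate against. The sketch below only illustrates that
idea: the network names come from the spec, while the purposes are paraphrased
and every CIDR value is a placeholder chosen for the example (192.0.2.0/24 is
documentation address space), not something Pharos mandates::

    # Illustrative only: a declarative description of the five networks the
    # Pharos spec calls for. The CIDR values are placeholders for the example;
    # Pharos does not mandate them.
    from collections import namedtuple

    Network = namedtuple("Network", ["name", "purpose", "example_cidr"])

    PHAROS_NETWORKS = [
        Network("lom",     "Lights-out management (IPMI/BMC)",    "10.4.0.0/24"),
        Network("admin",   "Provisioning / PXE boot",             "10.4.1.0/24"),
        Network("public",  "External and API access",             "192.0.2.0/24"),
        Network("private", "Tenant / overlay traffic",            "10.4.2.0/24"),
        Network("storage", "Ceph replication and client traffic", "10.4.3.0/24"),
    ]

    if __name__ == "__main__":
        for net in PHAROS_NETWORKS:
            print("%-8s %-38s %s" % (net.name, net.purpose, net.example_cidr))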