From aa88673c38be12368dead5e8241fb915d790c431 Mon Sep 17 00:00:00 2001
From: Julien
Date: Thu, 17 Aug 2017 21:32:10 +0800
Subject: restruct documents according to opnfvdocs

Use only development and release for we don't have test codes for now.

JIRA: PHAROS-311

Change-Id: Iacfcaba81a7a52e09cf999b8603cc9dc2f8f2b97
Signed-off-by: Julien
---
 .../release-notes/specification/hardwarespec.rst  | 52 ++++++++++++++++++++++
 1 file changed, 52 insertions(+)
 create mode 100644 docs/release/release-notes/specification/hardwarespec.rst

diff --git a/docs/release/release-notes/specification/hardwarespec.rst b/docs/release/release-notes/specification/hardwarespec.rst
new file mode 100644
index 00000000..8086aa91
--- /dev/null
+++ b/docs/release/release-notes/specification/hardwarespec.rst
@@ -0,0 +1,52 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) 2016 OPNFV.
+
+
+Hardware
+--------
+
+A Pharos-compliant OPNFV test-bed provides:
+
+- One CentOS 7 jump server on which the virtualized OpenStack/OPNFV installer runs
+- In the Brahmaputra release you may select from a variety of deployment toolchains to deploy
+  from the jump server
+- 5 compute / controller nodes (`BGS
+  `_ requires 5 nodes)
+- A configured network topology allowing for LOM, Admin, Public, Private, and Storage Networks
+- Remote access as defined by the Jenkins slave configuration guide:
+
+http://artifacts.opnfv.org/brahmaputra.1.0/docs/opnfv-jenkins-slave-connection.brahmaputra.1.0.html
+
+**Servers**
+
+**CPU:**
+
+* Intel Xeon E5-2600v2 Series or newer
+* AArch64 (64-bit ARM architecture) compatible (ARMv8 or newer)
+
+**Firmware:**
+
+* BIOS/EFI compatible for x86-family blades
+* EFI compatible for AArch64 blades
+
+**Local Storage:**
+
+The following describes the minimum for the Pharos spec, which is designed to provide enough
+capacity for a reasonably functional environment. Additional and/or faster disks are nice to
+have and may produce better results.
+
+* Disks: 2 x 1 TB HDD + 1 x 100 GB SSD (or greater capacity)
+* The first HDD should be used for the OS and additional software/tool installation
+* The second HDD is configured for CEPH object storage
+* The SSD should be used as the CEPH journal
+* Performance testing requires a mix of compute nodes with CEPH storage (Swift + Cinder) and without CEPH storage
+* Virtual ISO boot capability or a separate PXE boot server (DHCP/TFTP or Cobbler)
+
+**Memory:**
+
+* 32 GB RAM minimum
+
+**Power Supply:**
+
+* A single power supply is acceptable (redundant power is not required but is nice to have)
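
As an illustration only (not part of the patch above), the sketch below shows one way the
minimums stated in the **Local Storage** and **Memory** sections could be encoded as a
pre-deployment sanity check. Everything in it (the ``Node`` and ``Disk`` structures,
``check_node``, and the example inventory) is hypothetical and is not defined by Pharos::

    # A minimal, self-contained sketch of the Pharos hardware minimums above.
    # All names and the example inventory are hypothetical, not part of the spec.
    from dataclasses import dataclass, field
    from typing import List

    MIN_RAM_GB = 32        # "32 GB RAM minimum"
    MIN_HDD_COUNT = 2      # "2 x 1 TB HDD"
    MIN_HDD_SIZE_GB = 1000
    MIN_SSD_COUNT = 1      # "1 x 100 GB SSD" (used as the CEPH journal)
    MIN_SSD_SIZE_GB = 100


    @dataclass
    class Disk:
        kind: str          # "hdd" or "ssd"
        size_gb: int


    @dataclass
    class Node:
        name: str
        ram_gb: int
        disks: List[Disk] = field(default_factory=list)


    def check_node(node: Node) -> List[str]:
        """Return the ways a node falls short of the minimums (empty if compliant)."""
        problems = []
        if node.ram_gb < MIN_RAM_GB:
            problems.append(f"{node.name}: {node.ram_gb} GB RAM is below the "
                            f"{MIN_RAM_GB} GB minimum")
        hdds = [d for d in node.disks if d.kind == "hdd" and d.size_gb >= MIN_HDD_SIZE_GB]
        ssds = [d for d in node.disks if d.kind == "ssd" and d.size_gb >= MIN_SSD_SIZE_GB]
        if len(hdds) < MIN_HDD_COUNT:
            problems.append(f"{node.name}: needs {MIN_HDD_COUNT} x {MIN_HDD_SIZE_GB} GB HDDs "
                            f"(OS disk plus CEPH object storage), found {len(hdds)}")
        if len(ssds) < MIN_SSD_COUNT:
            problems.append(f"{node.name}: needs {MIN_SSD_COUNT} x {MIN_SSD_SIZE_GB} GB SSD "
                            f"for the CEPH journal, found {len(ssds)}")
        return problems


    if __name__ == "__main__":
        # Hypothetical inventory; node2 is deliberately short on RAM.
        nodes = [
            Node("node1", ram_gb=64,
                 disks=[Disk("hdd", 1000), Disk("hdd", 1000), Disk("ssd", 100)]),
            Node("node2", ram_gb=16,
                 disks=[Disk("hdd", 1000), Disk("hdd", 1000), Disk("ssd", 100)]),
        ]
        for node in nodes:
            for problem in check_node(node):
                print(problem)

A real check would of course read the node inventory from the lab's own records rather than
hard-coding it as in this example.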