Diffstat (limited to 'docs/release/release-notes/specification/hardwarespec.rst')
-rw-r--r--  docs/release/release-notes/specification/hardwarespec.rst  |  15 ++++++++-------
1 file changed, 8 insertions(+), 7 deletions(-)
diff --git a/docs/release/release-notes/specification/hardwarespec.rst b/docs/release/release-notes/specification/hardwarespec.rst
index 8086aa91..1eb883e7 100644
--- a/docs/release/release-notes/specification/hardwarespec.rst
+++ b/docs/release/release-notes/specification/hardwarespec.rst
@@ -8,15 +8,16 @@ Hardware
 
 A pharos compliant OPNFV test-bed provides:
 
-- One CentOS 7 jump server on which the virtualized Openstack/OPNFV installer runs
-- In the Brahmaputra release you may select a variety of deployment toolchains to deploy from the
-  jump server.
-- 5 compute / controller nodes (`BGS
-  <https://wiki.opnfv.org/get_started/get_started_work_environment>`_ requires 5 nodes)
+- One CentOS/Ubuntu jump server on which the virtualized OpenStack/OPNFV installer runs. For an ARM
+  POD, the jump server should also be an ARM server.
+- 3 controller nodes
+- 2 compute nodes
 - A configured network topology allowing for LOM, Admin, Public, Private, and Storage Networks
 - Remote access as defined by the Jenkins slave configuration guide
-http://artifacts.opnfv.org/brahmaputra.1.0/docs/opnfv-jenkins-slave-connection.brahmaputra.1.0.html
+  http://artifacts.opnfv.org/octopus/brahmaputra/docs/octopus_docs/opnfv-jenkins-slave-connection.html#jenkins-slaves
 
+In the Euphrates release you may select a variety of deployment toolchains to deploy from the
+jump server.
 
 **Servers**
 
@@ -38,7 +39,7 @@ a better result.
 
 * Disks: 2 x 1TB HDD + 1 x 100GB SSD (or greater capacity)
 * The first HDD should be used for OS & additional software/tool installation
-* The second HDD is configured for CEPH object storage
+* The second HDD is configured as a CEPH OSD (see the sketch below)
 * The SSD should be used as the CEPH journal
 * Performance testing requires a mix of compute nodes with CEPH (Swift+Cinder) and without CEPH storage
 * Virtual ISO boot capabilities or a separate PXE boot server (DHCP/tftp or Cobbler)
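
As a reference for the CEPH bullets above, the second HDD as an OSD with its journal on the SSD
might be prepared with Ceph's ceph-volume tool. This is a minimal sketch, assuming hypothetical
device names /dev/sdb (second HDD) and /dev/sdc1 (an SSD journal partition); exact flags vary by
Ceph release (FileStore journals were current in the Euphrates timeframe):

    # Create a FileStore OSD on the second HDD, with its journal on an SSD
    # partition. Device names are placeholders for this POD's actual disks.
    ceph-volume lvm create --filestore --data /dev/sdb --journal /dev/sdc1

    # Confirm the new OSD has registered with the cluster.
    ceph osd tree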
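
Likewise, the separate PXE boot server in the last bullet can be as simple as a single dnsmasq
instance providing DHCP and TFTP on the admin network. A minimal sketch, with the interface name,
address range, and TFTP root as placeholder assumptions for a concrete POD:

    # /etc/dnsmasq.conf -- combined DHCP/TFTP server for PXE-booting nodes
    interface=eth1                              # NIC on the admin/PXE network (placeholder)
    dhcp-range=192.168.1.100,192.168.1.200,12h  # address pool for booting nodes (placeholder)
    dhcp-boot=pxelinux.0                        # boot file offered to PXE clients
    enable-tftp                                 # serve the boot file over TFTP
    tftp-root=/srv/tftp                         # must contain pxelinux.0 and its configs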