From d8ed4f63a5af8f8836c8951623e6760ddb15710e Mon Sep 17 00:00:00 2001
From: Julien
Date: Mon, 11 Sep 2017 22:15:11 +0800
Subject: Update contents in E release

Correct some information, such as links and descriptions, and
delete unnecessary information.

JIRA: PHAROS-311

Change-Id: I1fceaa13fbff540bcd3f314f4653c7cc8c485091
Signed-off-by: Julien
---
 docs/release/release-notes/specification/hardwarespec.rst | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/docs/release/release-notes/specification/hardwarespec.rst b/docs/release/release-notes/specification/hardwarespec.rst
index 8086aa91..f4d7530a 100644
--- a/docs/release/release-notes/specification/hardwarespec.rst
+++ b/docs/release/release-notes/specification/hardwarespec.rst
@@ -8,15 +8,15 @@ Hardware
 
 A pharos compliant OPNFV test-bed provides:
 
-- One CentOS 7 jump server on which the virtualized Openstack/OPNFV installer runs
-- In the Brahmaputra release you may select a variety of deployment toolchains to deploy from the
-  jump server.
-- 5 compute / controller nodes (`BGS
-  `_ requires 5 nodes)
+- One CentOS/Ubuntu jump server on which the virtualized Openstack/OPNFV installer runs
+- 3 controller nodes
+- 2 compute nodes
 - A configured network topology allowing for LOM, Admin, Public, Private, and Storage Networks
 - Remote access as defined by the Jenkins slave configuration guide
+  http://artifacts.opnfv.org/octopus/brahmaputra/docs/octopus_docs/opnfv-jenkins-slave-connection.html#jenkins-slaves
 
-http://artifacts.opnfv.org/brahmaputra.1.0/docs/opnfv-jenkins-slave-connection.brahmaputra.1.0.html
+In the Euphrates release you may select a variety of deployment toolchains to deploy from the
+jump server.
 
 **Servers**
 
@@ -38,7 +38,7 @@ a better result.
 
 * Disks: 2 x 1TB HDD + 1 x 100GB SSD (or greater capacity)
 * The first HDD should be used for OS & additional software/tool installation
-* The second HDD is configured for CEPH object storage
+* The second HDD is configured for CEPH OSD
 * The SSD should be used as the CEPH journal
 * Performance testing requires a mix of compute nodes with CEPH (Swift+Cinder) and without CEPH storage
 * Virtual ISO boot capabilities or a separate PXE boot server (DHCP/tftp or Cobbler)
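
Not part of the patch itself, but as an aside for reviewers: the minimum test-bed layout this change
documents (one jump server, three controller nodes, two compute nodes, with servers carrying
2 x 1TB HDD + 1 x 100GB SSD, the second HDD as a Ceph OSD and the SSD as the Ceph journal) lends
itself to a quick inventory sanity check. The following is a minimal, hypothetical Python sketch;
the ``Node`` record, the ``check_inventory`` helper, and the sample inventory are illustrative
assumptions and not part of Pharos tooling, and applying the disk checks only to controller and
compute nodes is likewise an assumption::

    # Hypothetical sketch (not Pharos tooling): check a lab inventory against the
    # minimum test-bed layout described in this patch -- one jump server, three
    # controller nodes, two compute nodes, and per-server disks of
    # 2 x 1TB HDD + 1 x 100GB SSD (second HDD for the Ceph OSD, SSD for the journal).
    from collections import Counter
    from dataclasses import dataclass, field


    @dataclass
    class Node:
        name: str
        role: str                                     # "jump", "controller", or "compute"
        hdd_tb: list = field(default_factory=list)    # HDD sizes in TB
        ssd_gb: list = field(default_factory=list)    # SSD sizes in GB


    MIN_ROLES = {"jump": 1, "controller": 3, "compute": 2}


    def check_inventory(nodes):
        """Return a list of human-readable problems; an empty list means compliant."""
        problems = []
        roles = Counter(n.role for n in nodes)
        for role, minimum in MIN_ROLES.items():
            found = roles.get(role, 0)
            if found < minimum:
                problems.append(f"need at least {minimum} {role} node(s), found {found}")
        for n in nodes:
            if n.role == "jump":
                continue  # disk checks applied to controller/compute nodes only (assumption)
            if len([s for s in n.hdd_tb if s >= 1.0]) < 2:
                problems.append(f"{n.name}: expected 2 x 1TB HDD (OS disk + Ceph OSD)")
            if not any(s >= 100 for s in n.ssd_gb):
                problems.append(f"{n.name}: expected a >=100GB SSD for the Ceph journal")
        return problems


    if __name__ == "__main__":
        inventory = [Node("jump01", "jump", [1.0], [])] + \
            [Node(f"ctl{i}", "controller", [1.0, 1.0], [100]) for i in range(3)] + \
            [Node(f"cmp{i}", "compute", [1.0, 1.0], [100]) for i in range(2)]
        for line in check_inventory(inventory) or ["inventory meets the minimum layout"]:
            print(line)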