Diffstat (limited to 'tools/pharos-validator/docs')
-rw-r--r--  tools/pharos-validator/docs/howto/virt-manager/HOWTO              | 50
-rwxr-xr-x  tools/pharos-validator/docs/howto/virt-manager/bridgevm.sh        |  1
-rwxr-xr-x  tools/pharos-validator/docs/howto/virt-manager/genmac.sh          |  3
-rwxr-xr-x  tools/pharos-validator/docs/howto/virt-manager/jump-server.sh     |  1
-rwxr-xr-x  tools/pharos-validator/docs/howto/virt-manager/node-cycle.sh      |  5
-rw-r--r--  tools/pharos-validator/docs/howto/virt-manager/virsh-commands.txt | 14
-rw-r--r--  tools/pharos-validator/docs/initial_proposal.txt                  | 49
7 files changed, 0 insertions, 123 deletions
diff --git a/tools/pharos-validator/docs/howto/virt-manager/HOWTO b/tools/pharos-validator/docs/howto/virt-manager/HOWTO
deleted file mode 100644
index bed105a7..00000000
--- a/tools/pharos-validator/docs/howto/virt-manager/HOWTO
+++ /dev/null
@@ -1,50 +0,0 @@
-Syntax guide:
- 1. [[ denotes commands / code ]]
- 2. <> denotes a bullet; sub-bullets have an extra > appended depending on their sub-level
- 3. ${denotes variables the user is expected to fill out depending on their specific needs}
-
-Tutorials:
- 1. Configure host machine for virtualization
- 2. Make a Virtual Machine with storage
- 3. Make a blank virtual machine awaiting PXE
- 4. Install and save default VM image
- 5. Configure Networking with VMs
-
-1 --
- <> Install Host OS (CentOS7)
-    <> Use the package manager (yum) to install qemu, kvm, virt-install, and virt-manager
-    <> Add the kernel command-line option "kvm-intel.nested=1", or edit /etc/modprobe.d/kvm-intel.conf to contain "options kvm-intel nested=1". This enables nested virtualization so that nested guests are not unusably slow. (A quick way to verify the setting is shown at the end of this section.)
- <>> A command to do this is [[ echo "options kvm-intel nested=1" | sudo tee /etc/modprobe.d/kvm-intel.conf ]]
- <>
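-    <> To confirm the nested setting took effect, a quick check (assuming an Intel host; on AMD the module is kvm_amd) is to reload the module with no VMs running and read the parameter back:
-        [[ sudo modprobe -r kvm_intel && sudo modprobe kvm_intel ]]
-        [[ cat /sys/module/kvm_intel/parameters/nested ]]   (prints Y or 1 when nesting is enabled)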
-
-2 --
-    <> Create a new disk with the command [[ qemu-img create -f raw ${image-name}.img ${size} ]], where image-name is the name of your VM's disk and size is the size of the disk you want (e.g. 2G creates a 2 gigabyte disk, 512M creates 512 megabytes)
- <> Download some installation media (e.g. CentOS7-DVD.iso)
-    <> Install onto that disk with the virt-install tool: [[ virt-install -n name_of_vm --graphics none --vcpus=2 -l /path/to/installation.iso --ram=512 --disk path=/path/to/disk.img,cache=none --extra-args="console=ttyS0" ]]. Omit --extra-args="console=ttyS0" if you would rather have the VM use an X display instead of a serial console. (A filled-in example follows at the end of this section.)
- <>
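-    <> A concrete run-through of the two commands above (the names, paths, and sizes are only examples):
-        [[ qemu-img create -f raw /vm/node1.img 20G ]]
-        [[ qemu-img info /vm/node1.img ]]   (confirms the format and virtual size)
-        [[ virt-install -n node1 --graphics none --vcpus=2 -l /iso/CentOS-7-x86_64-DVD-1511.iso --ram=512 --disk path=/vm/node1.img,cache=none --extra-args="console=ttyS0" ]]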
-
-3 --
- <> TODO
-
-4 --
- <> Either script the install or make a template of the VM
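-    <> One way to reuse a template, sketched here with virt-clone (the domain and path names are only examples), is to copy the template's disk and define a new VM with a fresh MAC address:
-        [[ virt-clone --original jump-host-centos7_0 --name node1 --file /vm/node1.img ]]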
-
-5 --
-    <> [[ virsh attach-interface --domain ${name} --type network --source default --model virtio --mac ${mac-address} --config --live ]] where ${name} is the name of the virtual machine as virsh knows it, and ${mac-address} is any randomly generated MAC address.
-    <> Each node will need at least three repetitions of the above command, giving it three NICs in addition to the one a virtual machine has by default, as the Pharos specification requires. (A scripted example follows below.)
-    <> You can verify the addition of the above NICs with [[ virsh domiflist ${name} ]], where ${name} is the virtual machine whose NICs you would like to see.
-    <> These NICs may be detached with the command [[ virsh detach-interface --domain ${name} --type network --mac ${mac-address} --config ]], where ${name} is the VM you're targeting and ${mac-address} is the NIC's specific MAC address.
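-    <> For example, the three extra NICs could be attached in one short loop (a sketch; the VM name node1 is a placeholder and the MAC prefix mirrors genmac.sh):
-        for n in 1 2 3; do
-            mac=$(printf 'DE:AD:BE:EF:%02X:%02X' $((RANDOM%256)) $((RANDOM%256)))
-            virsh attach-interface --domain node1 --type network --source default --model virtio --mac "$mac" --config --live
-        done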
-
-6 --
-    <> Add a virtual interface to a bridge by editing the XML configuration file that qemu/libvirt keeps for the VM (one way to open it is noted after the example).
- <> Change the line from this:
- <interface type='network'>
- <mac address='00:11:22:33:44:55'/>
- <source network='default'/>
- </interface>
- <> To this:
- <interface type='bridge'>
- <mac address='00:11:22:33:44:55'/>
- <source bridge='br0'/>
- </interface>
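-    <> One way to make this edit, assuming the VM is managed by libvirt, is [[ virsh edit ${name} ]], which opens the domain XML in your $EDITOR and validates it when you save; the change takes effect the next time the VM boots.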
diff --git a/tools/pharos-validator/docs/howto/virt-manager/bridgevm.sh b/tools/pharos-validator/docs/howto/virt-manager/bridgevm.sh
deleted file mode 100755
index 370132b5..00000000
--- a/tools/pharos-validator/docs/howto/virt-manager/bridgevm.sh
+++ /dev/null
@@ -1 +0,0 @@
-sudo /usr/libexec/qemu-kvm -hda /vm/template/jump-host.img -device e1000,netdev=net0,mac=DE:AD:BE:EF:FE:7A -netdev tap,id=net0
diff --git a/tools/pharos-validator/docs/howto/virt-manager/genmac.sh b/tools/pharos-validator/docs/howto/virt-manager/genmac.sh
deleted file mode 100755
index 10b12f92..00000000
--- a/tools/pharos-validator/docs/howto/virt-manager/genmac.sh
+++ /dev/null
@@ -1,3 +0,0 @@
-#!/bin/bash
-# generate a random mac address for the qemu nic
-printf 'DE:AD:BE:EF:%02X:%02X\n' $((RANDOM%256)) $((RANDOM%256))
diff --git a/tools/pharos-validator/docs/howto/virt-manager/jump-server.sh b/tools/pharos-validator/docs/howto/virt-manager/jump-server.sh
deleted file mode 100755
index 465ea132..00000000
--- a/tools/pharos-validator/docs/howto/virt-manager/jump-server.sh
+++ /dev/null
@@ -1 +0,0 @@
-sudo virt-install -n jump-host-centos7_0 --graphics none --vcpus=2 --ram=512 --os-type=linux -l /iso/CentOS-7-x86_64-DVD-1511.iso --disk path=/vm/template/jump-host.img,cache=none --extra-args console=ttyS0
diff --git a/tools/pharos-validator/docs/howto/virt-manager/node-cycle.sh b/tools/pharos-validator/docs/howto/virt-manager/node-cycle.sh
deleted file mode 100755
index 5b945dc7..00000000
--- a/tools/pharos-validator/docs/howto/virt-manager/node-cycle.sh
+++ /dev/null
@@ -1,5 +0,0 @@
-#!/bin/bash
-
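-# Boot five throwaway VMs that PXE-boot from the local tftp tree over user-mode networking (512 MB RAM each)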
-for i in $(seq 1 5); do
- qemu-kvm -m 512M -boot n -enable-kvm -net nic -net user,tftp=/srv/tftp/,bootfile=/pxelinux.0 &
-done
diff --git a/tools/pharos-validator/docs/howto/virt-manager/virsh-commands.txt b/tools/pharos-validator/docs/howto/virt-manager/virsh-commands.txt
deleted file mode 100644
index 45f81856..00000000
--- a/tools/pharos-validator/docs/howto/virt-manager/virsh-commands.txt
+++ /dev/null
@@ -1,14 +0,0 @@
-# Installing an OS on a new VM
-virt-install -n jump-host-centos7 --graphics none --vcpus=2 --ram=512 --os-type=linux -l /iso/CentOS-7-x86_64-DVD-1511.iso --disk path=/vm/template/jump-host.img,cache=none --extra-args="console=ttyS0"
-
-# PXE booting a new vm
-virt-install --name jump-host-centos7 --graphics none --vcpus 2 --ram=512 --os-type=linux --os-variant=centos7 --network=bridge:"network_bridge_name" --pxe
-
-# Unused option for pxe
-#--disk path=/vm/template/jump-host.img,cache=none
-
-# Can't delete a VM? Here are some troubleshooting options
-Remember to log in as root if you need to destroy virtual machines created by root
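-# A typical removal sequence (a sketch; the domain name is only an example):
-virsh destroy jump-host-centos7
-virsh undefine jump-host-centos7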
-
-# Command to add network interfaces to VM guest
-virsh attach-interface jump-host-centos7_0 --type network --source default --model virtio --mac DE:AD:BE:EF:B4:EF --config --live
diff --git a/tools/pharos-validator/docs/initial_proposal.txt b/tools/pharos-validator/docs/initial_proposal.txt
deleted file mode 100644
index c607ccb3..00000000
--- a/tools/pharos-validator/docs/initial_proposal.txt
+++ /dev/null
@@ -1,49 +0,0 @@
-##OPNFV - Pharos Qualification Tool Project Proposal
-
-Todd Gaunt
-
-May 20, 2016
-
-##Summary
-This proposal is for a project to develop a Pharos qualification tool over the course of
-3 months, based on the requirements listed within the OPNFV Wiki (https://wiki.opnfv.org/display/DEV/Intern+Project%3A+Pharos+Qualification+Tool). I believe I am well suited for the job, as developing a tool to probe a machine for data is in line with my skill set. I work on Linux boxes daily and have a good understanding of automating predictable, repeatable tasks such as probing for information and deploying the software that does the job. The tool for testing whether a
-POD meets the requirements of the Pharos specification could be a simple command-line suite of scripts written in a simple, ubiquitous language such as sh, bash, or Python, deployed onto a server in a container such as Docker, or via an even simpler model: a tar.gz package that a tool like GNU stow temporarily "installs" with symbolic links. Additional information on the validation of the POD, per the requirements list, is included in the Proposed
-Design section below.
-
-##Proposed Design
-###Deployment of Test Tools to the POD
-Utilize a container solution such as Docker with a small base Linux image: either Alpine Linux (5 MB base image, 52 MB with python3) or CentOS 7 (the native system according to the Pharos specification, roughly a 197 MB base image and 334.5 MB with python3). This allows easy installation of the tool without pulling in external dependencies, so administrators do not have to worry about dependency resolution or having the proper package versions installed. The base image will be built with a python3 interpreter and its dependencies so it can run the qualification tool. If a Docker image approach is not feasible for some reason, a traditional package format such as rpm can be used. (A sketch of how such an image might be built and run follows.)
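-A rough sketch of the build-and-run workflow (the image name "pharos-validator" and the flags shown are assumptions for illustration, not an agreed design):
-    docker build -t pharos-validator .
-    docker run --rm --privileged --net=host pharos-validator
-The --privileged and --net=host flags are there only because probing hardware and the host's network interfaces from inside a container typically needs host-level access.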
-
-###Qualification of POD Resources
-Load the system configuration/inventory files that the Pharos spec states the machines should have. Each machine should have an easily machine-readable file available with its system configuration information. The tool will find as much hardware information as possible using the Linux filesystem and standard tools, compare it against the inventory file, and fall back to the inventory file where probing is not possible.
-
-####Required Compute
-Poll /proc/cpuinfo for CPU specifications. The target is an Intel Xeon E5-2600v2 series or equivalent in processing power (the Intel brand is not a requirement); a sketch of this check follows the list:
-- 64-bit
-- 4 cores
-- 1.8 GHz (this could be a softer requirement, e.g. 1.5 GHz or less, as different architectures can perform equivalently at different clock speeds)
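-A minimal sketch of that check in shell (the thresholds are the values listed above; the "lm" flag marks a 64-bit capable CPU, and "cpu MHz" reports the current rather than the rated clock):
-    cores=$(grep -c '^processor' /proc/cpuinfo)                       # logical CPUs
-    mhz=$(awk -F: '/^cpu MHz/ {print int($2); exit}' /proc/cpuinfo)   # first CPU's clock, in MHz
-    grep -qw lm /proc/cpuinfo && bits=64 || bits=32
-    if [ "$cores" -ge 4 ] && [ "$mhz" -ge 1800 ] && [ "$bits" -eq 64 ]; then
-        echo "compute: PASS"
-    else
-        echo "compute: FAIL (cores=$cores, MHz=$mhz, ${bits}-bit)"
-    fi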
-
-####Required RAM
-Poll /proc/meminfo for memory specifications (a sketch of the check follows the list):
-- 32 GB RAM minimum; anything less and the test will fail.
-- ECC memory is nice to have but not required.
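-A minimal sketch of the memory check (MemTotal is reported in kB; the threshold leaves roughly 1 GB of slack because the kernel reserves some memory, so MemTotal reads slightly below the installed amount):
-    memkb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
-    [ "$memkb" -ge $((31 * 1024 * 1024)) ] && echo "memory: PASS" || echo "memory: FAIL (${memkb} kB)"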
-
-####Required Software
-The jump host requires CentOS 7 with OPNFV/OpenStack virtualized inside of it, so that an initial connection can be established and further connections can then be made to the nodes within the POD.
-
-####Required Storage
-Poll /sys/block/ for all block devices and their sizes; other properties of the storage can be discovered there as well. E.g. disk size is in /sys/block/sdX/size and disk type is in /sys/block/sdX/queue/rotational. The passing metric is the minimum requirements defined here: http://artifacts.opnfv.org/pharos/docs/specification/hardwarespec.html (a sketch of this probe follows the list).
-- Disks: 2 x 1TB HDD + 1 x 100GB SSD (or greater capacity)
-- The first HDD should be used for OS & additional software/tool installation
-- The second HDD is configured for CEPH object storage
-- The SSD should be used as the CEPH journal
-- Performance testing requires a mix of compute nodes with CEPH (Swift+Cinder) and without CEPH storage
-- Virtual ISO boot capabilities or a separate PXE boot server (DHCP/tftp or Cobbler)
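-A minimal sketch of the /sys/block probe (sizes are in 512-byte sectors; rotational=1 means HDD, 0 means SSD; the sd* glob is only an example and would miss NVMe devices):
-    for dev in /sys/block/sd*; do
-        name=$(basename "$dev")
-        size_gb=$(( $(cat "$dev/size") * 512 / 1000000000 ))
-        rot=$(cat "$dev/queue/rotational")
-        echo "$name: ${size_gb} GB, rotational=$rot"
-    done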
-
-####Required Network Connectivity
-Use a tool such as ifconfig/iproute2 to find the network interfaces and attempt to connect back to the jump host and the other nodes (a sketch follows). For VPNs, check for a valid GPG key certified by someone with administrative access to LF infrastructure. This VPN information is laid out in the inventory file provided by the POD.
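-A minimal sketch of the connectivity probe (the jump host address 192.0.2.10 is only a placeholder):
-    ip -o link show | awk -F': ' '{print $2}'                  # enumerate interface names
-    ping -c 3 -W 2 192.0.2.10 && echo "jump host reachable"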
-
-####Inventory of System(s) via IPMI
-Utilize IPMI interfaces to manage rebooting of the PODs the tool connects to. If available, this may also be a way to find the system information/configuration needed for the report (an ipmitool sketch follows).
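-A rough sketch using ipmitool (the BMC address and credentials are placeholders; FRU output varies by vendor):
-    ipmitool -I lanplus -H 192.0.2.20 -U admin -P secret chassis power cycle   # remote reboot
-    ipmitool -I lanplus -H 192.0.2.20 -U admin -P secret fru print             # hardware inventory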
-
-####Definition of Results (pass/fail evaluation)
-Utilize the Jenkins build automation server to build and deploy the test tool and receive results. If the test fails, describe at which step and why it failed.