Diffstat (limited to 'docs/release')
-rw-r--r--  docs/release/installation/index.rst           15
-rwxr-xr-x  docs/release/installation/installation.rst   129
-rw-r--r--  docs/release/userguide/index.rst               16
-rw-r--r--  docs/release/userguide/introduction.rst       101
-rw-r--r--  docs/release/userguide/test-usage.rst         244
5 files changed, 0 insertions, 505 deletions
diff --git a/docs/release/installation/index.rst b/docs/release/installation/index.rst
deleted file mode 100644
index 10296dd..0000000
--- a/docs/release/installation/index.rst
+++ /dev/null
@@ -1,15 +0,0 @@
-.. _storperf-installation:
-
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Dell EMC and others.
-
-===========================
-StorPerf Installation Guide
-===========================
-
-.. toctree::
- :maxdepth: 2
-
- installation.rst
diff --git a/docs/release/installation/installation.rst b/docs/release/installation/installation.rst
deleted file mode 100755
index ae3b3f8..0000000
--- a/docs/release/installation/installation.rst
+++ /dev/null
@@ -1,129 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Dell EMC and others.
-
-===========================
-StorPerf Installation Guide
-===========================
-
-OpenStack Prerequisites
-===========================
-If you do not have an Ubuntu 16.04 image in Glance, you will need to add one.
-There are scripts in the storperf/ci directory to assist, or you can use the
-following code snippets:
-
-.. code-block:: bash
-
- # Put an Ubuntu Image in glance
- wget -q https://cloud-images.ubuntu.com/releases/16.04/release/ubuntu-16.04-server-cloudimg-amd64-disk1.img
- openstack image create "Ubuntu 16.04 x86_64" --disk-format qcow2 --public \
- --container-format bare --file ubuntu-16.04-server-cloudimg-amd64-disk1.img
-
- # Create StorPerf flavor
- openstack flavor create storperf \
- --id auto \
- --ram 8192 \
- --disk 4 \
- --vcpus 2
-
-
-Planning
-===========================
-
-StorPerf is delivered as a `Docker container
-<https://hub.docker.com/r/opnfv/storperf/tags/>`__. There are two possible
-methods for installation in your environment:
-
- 1. Run the container on the Jump Host
- 2. Run the container in a VM
-
-
-Running StorPerf on Jump Host
-=============================
-
-Requirements:
-
- * Docker must be installed
- * Jump Host must have access to the OpenStack Controller API
- * Jump Host must have internet connectivity for downloading the Docker image
- * Enough floating IPs must be available to match your agent count
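-
-A quick way to confirm the first two requirements (a sketch only; the
-credentials file name and client tooling vary by environment):
-
-.. code-block:: console
-
-   docker --version        # verify Docker is installed
-   source openrc           # load your OpenStack credentials
-   openstack token issue   # verify the Controller API is reachable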
-
-Running StorPerf in a VM
-========================
-
-Requirements:
-
- * VM has Docker installed
- * VM has OpenStack Controller credentials and can communicate with the Controller API
- * VM has internet connectivity for downloading the Docker image
- * Enough floating IPs must be available to match your agent count
-
-VM Creation
-~~~~~~~~~~~
-
-The following procedure will create the VM in your environment:
-
-.. code-block:: console
-
- # Create the StorPerf VM itself. Here we use the network ID generated by OPNFV FUEL.
- ADMIN_NET_ID=`neutron net-list | grep 'admin_internal_net ' | awk '{print $2}'`
-
- nova boot --nic net-id=$ADMIN_NET_ID --flavor m1.small --key-name=StorPerf --image 'Ubuntu 14.04' 'StorPerf Master'
-
-At this point, you may associate a floating IP with the StorPerf master VM.
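-
-For example (a sketch only; the external network name admin_floating_net is an
-assumption and will differ between installers):
-
-.. code-block:: console
-
-   # Allocate a floating IP from the external network and attach it to the VM
-   openstack floating ip create admin_floating_net
-   openstack server add floating ip 'StorPerf Master' <FLOATING_IP>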
-
-VM Docker Installation
-~~~~~~~~~~~~~~~~~~~~~~
-
-The following procedure will install Docker on Ubuntu 14.04.
-
-.. code-block:: console
-
- sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
- cat << EOF | sudo tee /etc/apt/sources.list.d/docker.list
- deb https://apt.dockerproject.org/repo ubuntu-trusty main
- EOF
-
- sudo apt-get update
- sudo apt-get install -y docker-engine
- sudo usermod -aG docker ubuntu
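-
-After logging out and back in so that the group membership change takes effect,
-a quick sanity check might look like:
-
-.. code-block:: console
-
-   docker run --rm hello-world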
-
-Pulling StorPerf Container
-==========================
-
-Danube
-~~~~~~
-
-The tag for the latest stable Danube release is:
-
-.. code-block:: bash
-
- docker pull opnfv/storperf:danube.0.1
-
-Colorado
-~~~~~~~~
-
-The tag for the latest stable Colorado release is:
-
-.. code-block:: bash
-
- docker pull opnfv/storperf:colorado.0.1
-
-Brahmaputra
-~~~~~~~~~~~
-
-The tag for the latest stable Brahmaputra release is:
-
-.. code-block:: bash
-
- docker pull opnfv/storperf:brahmaputra.1.2
-
-Development
-~~~~~~~~~~~
-
-The tag for the latest development version is:
-
-.. code-block:: bash
-
- docker pull opnfv/storperf:master
-
-
diff --git a/docs/release/userguide/index.rst b/docs/release/userguide/index.rst
deleted file mode 100644
index e2f076a..0000000
--- a/docs/release/userguide/index.rst
+++ /dev/null
@@ -1,16 +0,0 @@
-.. _storperf-userguide:
-
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Dell EMC and others.
-
-======================
-StorPerf User Guide
-======================
-
-.. toctree::
- :maxdepth: 2
-
- introduction.rst
- test-usage.rst
diff --git a/docs/release/userguide/introduction.rst b/docs/release/userguide/introduction.rst
deleted file mode 100644
index a40750f..0000000
--- a/docs/release/userguide/introduction.rst
+++ /dev/null
@@ -1,101 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Dell EMC and others.
-
-==================================
-StorPerf Container Execution Guide
-==================================
-
-Planning
-========
-
-There are some ports that the container can expose:
-
- * 22 for SSHD. Username and password are root/storperf. This is used for CLI access only.
- * 5000 for the StorPerf ReST API.
- * 8000 for StorPerf's Graphite Web Server.
-
-OpenStack Credentials
-~~~~~~~~~~~~~~~~~~~~~
-
-You must have your OpenStack Controller environment variables defined and passed to
-the StorPerf container. The easiest way to do this is to put the rc file contents
-into a clean file that looks similar to this for V2 authentication:
-
-.. code-block:: console
-
- OS_AUTH_URL=http://10.13.182.243:5000/v2.0
- OS_TENANT_ID=e8e64985506a4a508957f931d1800aa9
- OS_TENANT_NAME=admin
- OS_PROJECT_NAME=admin
- OS_USERNAME=admin
- OS_PASSWORD=admin
- OS_REGION_NAME=RegionOne
-
-For V3 authentication, use the following:
-
-.. code-block:: console
-
- OS_AUTH_URL=http://10.13.182.243:5000/v3
- OS_PROJECT_ID=32ae78a844bc4f108b359dd7320463e5
- OS_PROJECT_NAME=admin
- OS_USER_DOMAIN_NAME=Default
- OS_USERNAME=admin
- OS_PASSWORD=admin
- OS_REGION_NAME=RegionOne
- OS_INTERFACE=public
- OS_IDENTITY_API_VERSION=3
-
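-If you start from a downloaded openrc file that uses export statements (the
-file name openrc.sh below is an assumption), one way to produce a file usable
-with docker --env-file is:
-
-.. code-block:: console
-
-   # strip the "export " prefixes; docker --env-file expects plain KEY=value lines
-   grep '^export OS_' openrc.sh | sed 's/^export //' > admin-rc
-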
-Additionally, if you want your results published to the common OPNFV Test
-Results DB, add the following:
-
-.. code-block:: console
-
- TEST_DB_URL=http://testresults.opnfv.org/testapi
-
-Running StorPerf Container
-==========================
-
-Because the default size of the Docker container is only 10 GB, you might want
-to use the local disk for storage. This is done with the -v option, mounting a
-local directory under /opt/graphite/storage/whisper:
-
-.. code-block:: console
-
- mkdir -p ~/carbon
- sudo chown 33:33 ~/carbon
-
-The recommended method of running StorPerf is to expose only the ReST and Graphite
-ports. The command line below shows how to run the container with the local
-disk for the Carbon database.
-
-.. code-block:: console
-
- docker run -t --env-file admin-rc -p 5000:5000 -p 8000:8000 -v ~/carbon:/opt/graphite/storage/whisper --name storperf opnfv/storperf
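-
-One way to verify that the container is serving requests (assuming you run this
-on the same host) is to check that the Swagger page responds:
-
-.. code-block:: console
-
-   curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:5000/swagger/index.html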
-
-
-Docker Exec
-~~~~~~~~~~~
-
-Instead of exposing the SSH port externally, you can use docker exec. This
-provides a slightly more secure way of running the StorPerf container without
-having to expose port 22.
-
-If needed, the container can be entered with docker exec. This is not normally required.
-
-.. code-block:: console
-
- docker exec -it storperf bash
-
-Container with SSH
-~~~~~~~~~~~~~~~~~~
-
-This runs the StorPerf container with all ports open and a local disk for
-result storage. This is not recommended, as the SSH port is exposed.
-
-.. code-block:: console
-
- docker run -t --env-file admin-rc -p 5022:22 -p 5000:5000 -p 8000:8000 -v ~/carbon:/opt/graphite/storage/whisper --name storperf opnfv/storperf
-
-This will then permit ssh to localhost port 5022 for CLI access.
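-
-For example (the password is storperf, as noted in the Planning section):
-
-.. code-block:: console
-
-   ssh -p 5022 root@localhost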
-
diff --git a/docs/release/userguide/test-usage.rst b/docs/release/userguide/test-usage.rst
deleted file mode 100644
index 2beae69..0000000
--- a/docs/release/userguide/test-usage.rst
+++ /dev/null
@@ -1,244 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Dell EMC and others.
-
-=============================
-StorPerf Test Execution Guide
-=============================
-
-Prerequisites
-=============
-
-This guide requires StorPerf to be running and have its ReST API accessible. If
-the ReST API is not running on port 5000, adjust the commands provided here as
-needed.
-
-Interacting With StorPerf
-=========================
-
-Once the StorPerf container has been started and the ReST API exposed, you can
-interact directly with it using the ReST API. StorPerf comes with a Swagger
-interface that is accessible through the exposed port at:
-
-.. code-block:: console
-
- http://StorPerf:5000/swagger/index.html
-
-The typical test execution follows this pattern:
-
-#. Configure the environment
-#. Initialize the cinder volumes
-#. Execute one or more performance runs
-#. Delete the environment
-
-Configure The Environment
-=========================
-
-The following pieces of information are required to prepare the environment:
-
-- The number of VMs/Cinder volumes to create
-- The Glance image that holds the VM operating system to use. StorPerf has
- only been tested with Ubuntu 16.04
-- The name of the public network that agents will use
-- The size, in gigabytes, of the Cinder volumes to create
-
-The ReST API is a POST to http://StorPerf:5000/api/v1.0/configurations and
-takes a JSON payload as follows.
-
-.. code-block:: json
-
- {
- "agent_count": int,
- "agent_image": string,
- "public_network": string,
- "volume_size": int
- }
-
-This call will block until the stack is created, at which point it will return
-the OpenStack heat stack id.
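-
-For example, using curl (the values shown are illustrative; the public network
-name in particular depends on your installer):
-
-.. code-block:: bash
-
-   # agent_image and public_network below are example values
-   curl -X POST --header 'Content-Type: application/json' \
-        -d '{"agent_count": 2, "agent_image": "Ubuntu 16.04 x86_64", "public_network": "admin_floating_net", "volume_size": 4}' \
-        http://StorPerf:5000/api/v1.0/configurations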
-
-Initialize the Cinder Volumes
-=============================
-Before executing a test run for the purpose of measuring performance, it is
-necessary to fill the Cinder volume with random data. Failure to execute this
-step can result in meaningless numbers, especially for read performance. Most
-Cinder drivers are smart enough to know what blocks contain data, and which do
-not. Uninitialized blocks return "0" immediately without actually reading from
-the volume.
-
-Initiating the data fill looks the same as a regular performance test, but uses
-the special workload called "_warm_up". StorPerf will never push _warm_up
-data to the OPNFV Test Results DB, nor will it terminate the run on steady state.
-It is guaranteed to run to completion, which fills 100% of the volume with
-random data.
-
-The ReST API is a POST to http://StorPerf:5000/api/v1.0/jobs and
-takes a JSON payload as follows.
-
-.. code-block:: json
-
- {
- "workload": "_warm_up"
- }
-
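-For example, submitted with curl:
-
-.. code-block:: bash
-
-   curl -X POST --header 'Content-Type: application/json' \
-        -d '{"workload": "_warm_up"}' \
-        http://StorPerf:5000/api/v1.0/jobs
-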
-This will return a job ID as follows.
-
-.. code-block:: json
-
- {
- "job_id": "edafa97e-457e-4d3d-9db4-1d6c0fc03f98"
- }
-
-This job ID can be used to query the state to determine when it has completed.
-See the section on querying jobs for more information.
-
-Execute a Performance Run
-=========================
-Performance runs can execute either a single workload, or iterate over a matrix
-of workload types, block sizes and queue depths.
-
-Workload Types
-~~~~~~~~~~~~~~
-rr
- Read, Random. 100% read of random blocks
-rs
- Read, Sequential. 100% read of sequential blocks of data
-rw
- Read / Write Mix, Random. 70% random read, 30% random write
-wr
- Write, Random. 100% write of random blocks
-ws
- Write, Sequential. 100% write of sequential blocks.
-
-Block Sizes
-~~~~~~~~~~~
-A comma delimited list of the different block sizes to use when reading and
-writing data. Note: Some Cinder drivers (such as Ceph) cannot support block
-sizes larger than 16k (16384).
-
-Queue Depths
-~~~~~~~~~~~~
-A comma delimited list of the different queue depths to use when reading and
-writing data. The queue depth parameter causes FIO to keep this many I/O
-requests outstanding at one time. It is used to simulate traffic patterns
-on the system. For example, a queue depth of 4 would simulate 4 processes
-constantly creating I/O requests.
-
-Deadline
-~~~~~~~~
-The deadline is the maximum amount of time in minutes for a workload to run. If
-steady state has not been reached by the deadline, the workload will terminate
-and that particular run will be marked as not having reached steady state. Any
-remaining workloads will continue to execute in order.
-
-The following example payload iterates over two block sizes, two queue depths
-and three workload types, with a 20-minute deadline:
-
-.. code-block:: json
-
-   {
-      "block_sizes": "2048,16384",
-      "deadline": 20,
-      "queue_depths": "2,4",
-      "workload": "wr,rr,rw"
-   }
-
-Metadata
-~~~~~~~~
-A job can have metadata associated with it for tagging. The following metadata
-is required in order to push results to the OPNFV Test Results DB:
-
-.. code-block:: json
-
- "metadata": {
- "disk_type": "HDD or SDD",
- "pod_name": "OPNFV Pod Name",
- "scenario_name": string,
- "storage_node_count": int,
- "version": string,
- "build_tag": string,
- "test_case": "snia_steady_state"
- }
-
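-For example, a complete job submission that includes metadata might look like
-this (all values are illustrative):
-
-.. code-block:: bash
-
-   curl -X POST --header 'Content-Type: application/json' \
-        -d '{"block_sizes": "4096",
-             "deadline": 20,
-             "queue_depths": "4",
-             "workload": "ws",
-             "metadata": {
-                 "disk_type": "SSD",
-                 "pod_name": "my-pod",
-                 "scenario_name": "os-nosdn-nofeature-ha",
-                 "storage_node_count": 3,
-                 "version": "danube.1.0",
-                 "build_tag": "manual",
-                 "test_case": "snia_steady_state"
-             }}' \
-        http://StorPerf:5000/api/v1.0/jobs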
-
-
-Query Jobs Information
-======================
-
-By issuing a GET to the job API http://StorPerf:5000/api/v1.0/jobs?job_id=<ID>,
-you can fetch information about the job as follows:
-
-- &type=status: to report on the status of the job.
-- &type=metrics: to report on the collected metrics.
-- &type=metadata: to report back any metadata sent with the job ReST API.
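-
-For example, to check the status of the job started earlier (using the job ID
-returned by the POST):
-
-.. code-block:: bash
-
-   curl -s 'http://StorPerf:5000/api/v1.0/jobs?job_id=edafa97e-457e-4d3d-9db4-1d6c0fc03f98&type=status'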
-
-Status
-~~~~~~
-The Status field can be:
-
-- Running to indicate the job is still in progress, or
-- Completed to indicate the job is done. This could be either normal completion
-  or manual termination via an HTTP DELETE call.
-
-Workloads can have a value of:
-
-- Pending to indicate the workload has not yet started,
-- Running to indicate this is the active workload, or
-- Completed to indicate this workload has completed.
-
-This is an example of the response to a type=status call.
-
-.. code-block:: json
-
- {
- "Status": "Running",
- "TestResultURL": null,
- "Workloads": {
- "eeb2e587-5274-4d2f-ad95-5c85102d055e.ws.queue-depth.1.block-size.16384": "Pending",
- "eeb2e587-5274-4d2f-ad95-5c85102d055e.ws.queue-depth.1.block-size.4096": "Pending",
- "eeb2e587-5274-4d2f-ad95-5c85102d055e.ws.queue-depth.1.block-size.512": "Pending",
- "eeb2e587-5274-4d2f-ad95-5c85102d055e.ws.queue-depth.4.block-size.16384": "Running",
- "eeb2e587-5274-4d2f-ad95-5c85102d055e.ws.queue-depth.4.block-size.4096": "Pending",
- "eeb2e587-5274-4d2f-ad95-5c85102d055e.ws.queue-depth.4.block-size.512": "Pending",
- "eeb2e587-5274-4d2f-ad95-5c85102d055e.ws.queue-depth.8.block-size.16384": "Completed",
- "eeb2e587-5274-4d2f-ad95-5c85102d055e.ws.queue-depth.8.block-size.4096": "Pending",
- "eeb2e587-5274-4d2f-ad95-5c85102d055e.ws.queue-depth.8.block-size.512": "Pending"
- }
- }
-
-Metrics
-~~~~~~~
-Metrics can be queried at any time during or after the completion of a run.
-Note that the metrics show up only after the first interval has passed, and
-are subject to change until the job completes.
-
-This is a sample response from a type=metrics call.
-
-.. code-block:: json
-
- {
- "rw.queue-depth.1.block-size.512.read.bw": 52.8,
- "rw.queue-depth.1.block-size.512.read.iops": 106.76199999999999,
- "rw.queue-depth.1.block-size.512.read.lat.mean": 93.176,
- "rw.queue-depth.1.block-size.512.write.bw": 22.5,
- "rw.queue-depth.1.block-size.512.write.iops": 45.760000000000005,
- "rw.queue-depth.1.block-size.512.write.lat.mean": 21764.184999999998
- }
-
-Abort a Job
-===========
-Issuing an HTTP DELETE to the jobs API http://StorPerf:5000/api/v1.0/jobs will
-force the termination of the whole job, regardless of how many workloads
-remain to be executed.
-
-.. code-block:: bash
-
- curl -X DELETE --header 'Accept: application/json' http://StorPerf:5000/api/v1.0/jobs
-
-Delete the Environment
-======================
-After you are done testing, you can have StorPerf delete the Heat stack by
-issuing an HTTP DELETE to the configurations API.
-
-.. code-block:: bash
-
- curl -X DELETE --header 'Accept: application/json' http://StorPerf:5000/api/v1.0/configurations
-
-You may also want to delete an environment, and then create a new one with a
-different number of VMs/Cinder volumes to test the impact of the number of VMs
-in your environment.