Diffstat (limited to 'docs/release')
-rw-r--r--  docs/release/configguide/a_b_config_guide.rst | 6
-rw-r--r--  docs/release/configguide/clovisor_config_guide.rst | 179
-rw-r--r--  docs/release/configguide/controller_services_config_guide.rst | 179
-rw-r--r--  docs/release/configguide/imgs/jmeter_overview.png | bin 0 -> 78367 bytes
-rw-r--r--  docs/release/configguide/imgs/sdc_tracing.png | bin 83363 -> 84913 bytes
-rw-r--r--  docs/release/configguide/imgs/spinnaker-bake.png | bin 0 -> 61742 bytes
-rw-r--r--  docs/release/configguide/imgs/spinnaker-deploy.png | bin 0 -> 62381 bytes
-rw-r--r--  docs/release/configguide/imgs/spinnaker-expected-artifacts.png | bin 0 -> 37564 bytes
-rw-r--r--  docs/release/configguide/imgs/spinnaker-produces-artifact.png | bin 0 -> 17589 bytes
-rw-r--r--  docs/release/configguide/imgs/visibility_discovered_active.png | bin 0 -> 33626 bytes
-rw-r--r--  docs/release/configguide/imgs/visibility_distinct_counts.png | bin 0 -> 38887 bytes
-rw-r--r--  docs/release/configguide/imgs/visibility_distinct_http.png | bin 0 -> 27362 bytes
-rw-r--r--  docs/release/configguide/imgs/visibility_monitoring_metrics.png | bin 0 -> 96758 bytes
-rw-r--r--  docs/release/configguide/imgs/visibility_overview.png | bin 0 -> 64705 bytes
-rw-r--r--  docs/release/configguide/imgs/visibility_system_counts_response_times.png | bin 0 -> 40552 bytes
-rw-r--r--  docs/release/configguide/index.rst | 15
-rw-r--r--  docs/release/configguide/jmeter_config_guide.rst | 300
-rw-r--r--  docs/release/configguide/sdc_config_guide.rst | 224
-rw-r--r--  docs/release/configguide/spinnaker_config_guide.rst | 63
-rw-r--r--  docs/release/configguide/visibility_config_guide.rst | 403
-rw-r--r--  docs/release/release-notes/release-notes.rst | 29
-rw-r--r--  docs/release/userguide/index.rst | 8
-rw-r--r--  docs/release/userguide/userguide.rst | 55
23 files changed, 1311 insertions, 150 deletions
diff --git a/docs/release/configguide/a_b_config_guide.rst b/docs/release/configguide/a_b_config_guide.rst
index 17ffcfd..6a0963f 100644
--- a/docs/release/configguide/a_b_config_guide.rst
+++ b/docs/release/configguide/a_b_config_guide.rst
@@ -42,8 +42,8 @@ The following assumptions must be met before executing the sample script:
.. code-block:: bash
- $ curl -L https://github.com/istio/istio/releases/download/0.6.0/istio-0.6.0-linux.tar.gz | tar xz
- $ cd istio-0.6.0
+ $ curl -L https://github.com/istio/istio/releases/download/1.0.0/istio-1.0.0-linux.tar.gz | tar xz
+ $ cd istio-1.0.0
$ export PATH=$PWD/bin:$PATH
Environment setup
@@ -55,7 +55,7 @@ First setup the environment using the Clover source with the following commands:
$ git clone https://gerrit.opnfv.org/gerrit/clover
$ cd clover
- $ git checkout stable/fraser
+ $ git checkout stable/gambia
$ pip install .
$ cd clover
diff --git a/docs/release/configguide/clovisor_config_guide.rst b/docs/release/configguide/clovisor_config_guide.rst
new file mode 100644
index 0000000..e486e3e
--- /dev/null
+++ b/docs/release/configguide/clovisor_config_guide.rst
@@ -0,0 +1,179 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+.. SPDX-License-Identifier: CC-BY-4.0
+.. (c) Authors of Clover
+
+.. _clovisor_config_guide:
+
+============================
+Clovisor Configuration Guide
+============================
+
+Clovisor requires minimal to no configuration to function as a network tracer.
+It expects configuration to be set in a redis server running in the clover-system
+namespace.
+
+No Configuration
+================
+
+If a redis server isn't running as service name **redis** in the namespace
+**clovisor**, or there isn't any Clovisor-related configuration in that
+redis service, then Clovisor will monitor all pods under the **default**
+namespace. Traces will be sent to the **jaeger-collector** service under the
+**clovisor** namespace.
+
+Using redis-cli
+===============
+
+Install ``redis-cli`` on the client machine, and look up the redis service IP address:
+
+.. code-block:: bash
+
+ $ kubectl get services -n clovisor
+
+which should return something like the following:
+
+.. code-block:: bash
+
+ NAME         TYPE        CLUSTER-IP     EXTERNAL-IP     PORT(S)      AGE
+ redis        ClusterIP   10.109.151.40  <none>          6379/TCP     16s
+
+If, as above, the external IP isn't visible, one may be able to get the pod
+IP address directly from the pod itself (for example, this works with Flannel
+as the CNI plugin):
+
+.. code-block:: bash
+
+ $ kubectl get pods -n clover-system -o=wide
+ NAME READY STATUS RESTARTS AGE IP NODE
+ redis 2/2 Running 0 34m 10.244.0.187 clover1804
+
+One can then connect to redis via::
+
+ kubectl exec -n clovisor -it redis redis-cli
+
+Jaeger Collector Configuration
+==============================
+
+Clovisor allows the user to specify the Jaeger service to which Clovisor sends
+its network traces; by default this is the Jaeger service running in the
+**clovisor** namespace. To change it, the user can set the values for the
+keys **clovisor_jaeger_collector** and **clovisor_jaeger_agent**::
+
+ redis> SET clovisor_jaeger_collector "jaeger-collector.istio-system:14268"
+ "OK"
+ redis> SET clovisor_jaeger_agent "jaeger-agent.istio-system:6831"
+ "OK"
+
+Configure Monitoring Namespace and Labels
+=========================================
+
+Configuration Value String Format:
+----------------------------------
+
+ <namespace>[:label-key:label-value]
+
+The user can configure namespace(s) for Clovisor to tap into by adding namespace
+configuration entries to the redis list **clovisor_labels**::
+
+ redis> LPUSH clovisor_labels "my-namespace"
+ (integer) 1
+
+The above command causes Clovisor to **NOT** monitor the pods in the **default**
+namespace, and to only monitor the pods under **my-namespace**.
+
+If the user wants to monitor both 'default' and 'my-namespace', the 'default'
+namespace needs to be explicitly added back to the list::
+
+ redis> LPUSH clovisor_labels "default"
+ (integer) 2
+ redis> LRANGE clovisor_labels 0 -1
+ 1.) "default"
+ 2.) "my-namespace"
+
+Clovisor optionally allows the user to specify a label match on pods to further
+filter which pods to monitor::
+
+ redis> LPUSH clovisor_labels "my-2nd-ns:app:database"
+ (integer) 1
+
+The above configuration results in Clovisor only monitoring pods in the
+my-2nd-ns namespace which match the label "app:database".
+
+The user can specify multiple labels to filter on by adding more configuration
+entries::
+
+ redis> LPUSH clovisor_labels "my-2nd-ns:app:web"
+ (integer) 2
+ redis> LRANGE clovisor_labels 0 -1
+ 1.) "my-2nd-ns:app:web"
+ 2.) "my-2nd-ns:app:database"
+
+The result is that Clovisor monitors pods under the namespace my-2nd-ns which
+match **EITHER** app:database **OR** app:web.
+
+Currently Clovisor does **NOT** support filtering on more than one label per
+filter, i.e., there is no configuration option to specify a case where a pod in
+a namespace needs to match TWO or more labels to be monitored.
+
+Configure Egress Match IP address, Port Number, and Matching Pods
+=================================================================
+
+Configuration Value String Format:
+----------------------------------
+
+ <IP Address>:<TCP Port Number>[:<Pod Name Prefix>]
+
+By default, Clovisor only traces packets that go to a pod via its service
+port, and the response packets, i.e., from the pod back to the client. The user
+can also configure tracing of packets going **OUT** of the pod to the next
+microservice or to an external service via the **clovior_egress_match** list::
+
+ redis> LPUSH clovior_egress_match "10.0.0.1:3456"
+ (integer) 1
+
+The command above causes Clovisor to trace packets going out of ALL pods
+under monitoring that match IP address 10.0.0.1 and destination TCP port 3456
+on the **EGRESS** side --- that is, packets going out of the pod.
+
+The user can also choose to ignore the outbound IP address and only specify the
+port to trace by setting the IP address to zero::
+
+ redis> LPUSH clovior_egress_match "0:3456"
+ (integer) 1
+
+The command above causes Clovisor to trace packets going out of all the pods
+under monitoring that match destination TCP port 3456.
+
+The user can further specify a pod name prefix for such an egress rule to be
+applied to::
+
+ redis> LPUSH clovior_egress_match "0:3456:proxy"
+ (integer) 1
+
+The command above causes Clovisor to trace packets that match destination
+TCP port 3456 going out of pods under monitoring whose names start with the
+string "proxy".
+
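+For illustration only, an egress match entry in the format above could be parsed
+along the lines of the following Go sketch. The ``EgressMatch`` type and
+``parseEgressMatch`` helper are hypothetical and are not part of Clovisor:
+
+.. code-block:: go
+
+    package main
+
+    import (
+        "fmt"
+        "strconv"
+        "strings"
+    )
+
+    // EgressMatch mirrors the documented format:
+    // <IP Address>:<TCP Port Number>[:<Pod Name Prefix>]
+    // An IP address of "0" means any outbound IP address is matched.
+    type EgressMatch struct {
+        IP        string
+        Port      uint16
+        PodPrefix string // optional; empty means all monitored pods
+    }
+
+    // parseEgressMatch splits one clovior_egress_match entry into its parts.
+    func parseEgressMatch(s string) (EgressMatch, error) {
+        parts := strings.SplitN(s, ":", 3)
+        if len(parts) < 2 {
+            return EgressMatch{}, fmt.Errorf("expected <ip>:<port>[:<prefix>], got %q", s)
+        }
+        port, err := strconv.ParseUint(parts[1], 10, 16)
+        if err != nil {
+            return EgressMatch{}, fmt.Errorf("invalid port %q: %v", parts[1], err)
+        }
+        m := EgressMatch{IP: parts[0], Port: uint16(port)}
+        if len(parts) == 3 {
+            m.PodPrefix = parts[2]
+        }
+        return m, nil
+    }
+
+    func main() {
+        // The three entries below are the examples used in this guide.
+        for _, entry := range []string{"10.0.0.1:3456", "0:3456", "0:3456:proxy"} {
+            fmt.Println(parseEgressMatch(entry))
+        }
+    }
+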
+User-Defined Protocol Analyzer Plugins
+======================================
+
+Clovisor in the Hunter release supports running a user-defined protocol
+analyzer as a plugin library --- the corresponding traces are sent to Jaeger
+just like the default Clovisor network traces. The user needs to implement the
+following interface (only golang is supported at this time)::
+
+ type Parser interface {
+ Parse(session_key string, is_req bool,
+ data []byte)([]byte, map[string]string)
+ }
+
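+For illustration, a minimal analyzer satisfying this interface might look like the
+sketch below. The exported variable name, the chosen tag keys and the interpretation
+of the return values are assumptions for this example and may differ from what
+Clovisor actually expects::
+
+    // example_parser.go -- hypothetical plugin source
+    package main
+
+    import "strconv"
+
+    type exampleParser struct{}
+
+    // Parse receives the payload of a request or response for a session. This
+    // sketch assumes the returned byte slice is the (unmodified) payload and the
+    // returned map holds key/value tags to attach to the resulting trace.
+    func (p exampleParser) Parse(session_key string, is_req bool,
+        data []byte) ([]byte, map[string]string) {
+        direction := "response"
+        if is_req {
+            direction = "request"
+        }
+        tags := map[string]string{
+            "session":   session_key,
+            "direction": direction,
+            "bytes":     strconv.Itoa(len(data)),
+        }
+        return data, tags
+    }
+
+    // Parser is the symbol exported by the plugin; the exact symbol name looked
+    // up by Clovisor is an assumption here.
+    var Parser exampleParser
+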
+and compile it with the following command::
+
+ go build --buildmode=plugin -o <something>.so <something>.go
+
+Then, for Hunter, one needs to push the .so file to each Clovisor instance::
+
+ kubectl cp <something>.so clovisor/clovisor-bnh2v:/proto/<something>.so
+
+Do this for each Clovisor pod, and afterward configure via::
+
+ redis> HSET clovisor_proto_cfg <protocol> "/proto/<something>.so"
+ (integer) 1
+ redis> PUBLISH clovisor_proto_plugin_cfg_chan <protocol>
+ (integer) 6
+
diff --git a/docs/release/configguide/controller_services_config_guide.rst b/docs/release/configguide/controller_services_config_guide.rst
new file mode 100644
index 0000000..d9ad891
--- /dev/null
+++ b/docs/release/configguide/controller_services_config_guide.rst
@@ -0,0 +1,179 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+.. SPDX-License-Identifier: CC-BY-4.0
+.. (c) Authors of Clover
+
+.. _controller_services_config_guide:
+
+==============================================
+Clover Controller Services Configuration Guide
+==============================================
+
+This document provides a guide to use the Clover controller services, which are introduced in
+the Clover Gambia release.
+
+Overview
+=========
+
+Clover controller services allow users to control and access information about Clover
+microservices. Two new components are added to Clover to facilitate an ephemeral, cloud native
+workflow. A CLI named **cloverctl** interfaces to the Kubernetes (k8s)
+API and also to **clover-controller**, a microservice deployed within the k8s cluster to
+instrument other Clover k8s services including sample network services, visibility/validation
+services and supporting datastores (redis, cassandra). The **clover-controller** service
+provides message routing, communicating via REST with cloverctl or other API/UI interfaces
+and via gRPC with internal k8s cluster microservices. It acts as an internal agent and
+reduces the need to expose multiple Clover services outside of a k8s cluster.
+
+The **clover-controller** is packaged as a docker container with manifests to deploy
+in a Kubernetes (k8s) cluster. The **cloverctl** CLI is packaged as a binary (Golang) within a
+tarball with associated yaml files that can be used to configure and control other Clover
+microservices within the k8s cluster via **clover-controller**. The **cloverctl** CLI can also
+deploy/delete other Clover services within the k8s cluster for convenience.
+
+The **clover-controller** service provides the following functions:
+
+ * **REST API:** interface allows CI scripts/automation to control sample network services,
+ visibility and validation services. Analyzed visibility data can be consumed by other
+ services with REST messaging.
+
+ * **CLI Endpoint:** acts as an endpoint for many **cloverctl** CLI commands using the
+ **clover-controller** REST API and relays messages to other services via gRPC.
+
+ * **UI Dashboard:** provides a web interface exposing visibility views to interact with
+ Clover visibility services. It presents analyzed visibility data and provides basic controls
+ such as selecting which user services visibility will track.
+
+The **cloverctl** CLI command syntax is similar to k8s kubectl or istio istioctl CLI tools, using
+a <verb> <noun> convention.
+
+Help can be accessed using the ``--help`` option, as shown below::
+
+ $ cloverctl --help
+
+Deploying Clover system services
+================================
+
+Prerequisites
+-------------
+
+The following assumptions must be met before continuing on to deployment:
+
+ * Installation of Docker has already been performed. It's preferable to install Docker CE.
+ * Installation of k8s in a single-node or multi-node cluster.
+
+.. _controller_services_cli:
+
+Download Clover CLI
+-------------------
+
+Download the cloverctl binary from the location below::
+
+ $ curl -L https://github.com/opnfv/clover/raw/stable/gambia/download/cloverctl.tar.gz | tar xz
+ $ cd cloverctl
+ $ export PATH=$PWD:$PATH
+
+To begin deploying Clover services, ensure the correct k8s context is enabled. Validate that
+the CLI can interact with the k8s API with the command::
+
+ $ cloverctl get services
+
+The command above must return a listing of the current k8s services similar to the output of
+'kubectl get svc --all-namespaces'.
+
+.. _controller_services_controller:
+
+Deploying clover-controller
+---------------------------
+
+To deploy the **clover-controller** service, use the command below:
+
+.. code-block:: bash
+
+ $ cloverctl create system controller
+
+The k8s pod listing below must include the **clover-controller** pod in the **clover-system**
+namespace:
+
+.. code-block:: bash
+
+ $ kubectl get pods --all-namespaces | grep clover-controller
+
+ NAMESPACE NAME READY STATUS
+ clover-system clover-controller-74d8596bb5-jczqz 1/1 Running
+
+.. _exposing_clover_controller:
+
+Exposing clover-controller
+==========================
+
+To expose the **clover-controller** deployment outside of the k8s cluster, a k8s NodePort
+or LoadBalancer service must be employed.
+
+Using NodePort
+--------------
+
+To use a NodePort for the **clover-controller** service, use the following command::
+
+ $ cloverctl create system controller nodeport
+
+The NodePort default is to use port 32044. To modify this, edit the yaml relative
+to the **cloverctl** path at ``yaml/controller/service_nodeport.yaml`` before invoking
+the command above. Delete the ``nodePort:`` key in the yaml to let k8s select an
+available port within the range 30000-32767.
+
+Using LoadBalancer
+------------------
+
+For k8s clusters that support a LoadBalancer service, such as GKE, one can be created for
+**clover-controller** with the following command::
+
+ $ cloverctl create system controller lb
+
+Setup with cloverctl CLI
+------------------------
+
+The **cloverctl** CLI will communicate with **clover-controller** on the service exposed above
+and requires the IP address of either the load balancer or a cluster node IP address, if a
+NodePort service is used. For a LoadBalancer service, **cloverctl** will automatically find
+the IP address to use and no further action is required.
+
+However, if a NodePort service is used, an additional step is required to configure the IP
+address for **cloverctl** to target. This may be the CNI (ex. flannel/weave) IP address or the IP
+address of a k8s node interface. The **cloverctl** CLI will automatically determine the
+NodePort port number configured. To configure the IP address, create a file named
+``.cloverctl.yaml`` and add a single line to the yaml file with the following::
+
+ ControllerIP: <IP address>
+
+This file must be located in your ``HOME`` directory or in the same directory as the **cloverctl**
+binary.
+
+Uninstall from Kubernetes environment
+=====================================
+
+Delete with Clover CLI
+-----------------------
+
+When you're finished working with Clover system services, you can uninstall them with the
+following commands:
+
+.. code-block:: bash
+
+ $ cloverctl delete system controller
+ $ cloverctl delete system controller nodeport # for NodePort
+ $ cloverctl delete system controller lb # for LoadBalancer
+
+
+The commands above will remove the clover-controller deployment and service resources
+created from the current k8s context.
+
+Uninstall from Docker environment
+=================================
+
+The OPNFV docker image for the **clover-controller** can be removed with the following command
+from nodes in the k8s cluster.
+
+.. code-block:: bash
+
+ $ docker rmi opnfv/clover-controller
diff --git a/docs/release/configguide/imgs/jmeter_overview.png b/docs/release/configguide/imgs/jmeter_overview.png
new file mode 100644
index 0000000..ee986e6
--- /dev/null
+++ b/docs/release/configguide/imgs/jmeter_overview.png
Binary files differ
diff --git a/docs/release/configguide/imgs/sdc_tracing.png b/docs/release/configguide/imgs/sdc_tracing.png
index 0df7112..bad575c 100644
--- a/docs/release/configguide/imgs/sdc_tracing.png
+++ b/docs/release/configguide/imgs/sdc_tracing.png
Binary files differ
diff --git a/docs/release/configguide/imgs/spinnaker-bake.png b/docs/release/configguide/imgs/spinnaker-bake.png
new file mode 100644
index 0000000..86e853a
--- /dev/null
+++ b/docs/release/configguide/imgs/spinnaker-bake.png
Binary files differ
diff --git a/docs/release/configguide/imgs/spinnaker-deploy.png b/docs/release/configguide/imgs/spinnaker-deploy.png
new file mode 100644
index 0000000..44b4e92
--- /dev/null
+++ b/docs/release/configguide/imgs/spinnaker-deploy.png
Binary files differ
diff --git a/docs/release/configguide/imgs/spinnaker-expected-artifacts.png b/docs/release/configguide/imgs/spinnaker-expected-artifacts.png
new file mode 100644
index 0000000..f8204f7
--- /dev/null
+++ b/docs/release/configguide/imgs/spinnaker-expected-artifacts.png
Binary files differ
diff --git a/docs/release/configguide/imgs/spinnaker-produces-artifact.png b/docs/release/configguide/imgs/spinnaker-produces-artifact.png
new file mode 100644
index 0000000..ba6ab65
--- /dev/null
+++ b/docs/release/configguide/imgs/spinnaker-produces-artifact.png
Binary files differ
diff --git a/docs/release/configguide/imgs/visibility_discovered_active.png b/docs/release/configguide/imgs/visibility_discovered_active.png
new file mode 100644
index 0000000..6c91559
--- /dev/null
+++ b/docs/release/configguide/imgs/visibility_discovered_active.png
Binary files differ
diff --git a/docs/release/configguide/imgs/visibility_distinct_counts.png b/docs/release/configguide/imgs/visibility_distinct_counts.png
new file mode 100644
index 0000000..57ba901
--- /dev/null
+++ b/docs/release/configguide/imgs/visibility_distinct_counts.png
Binary files differ
diff --git a/docs/release/configguide/imgs/visibility_distinct_http.png b/docs/release/configguide/imgs/visibility_distinct_http.png
new file mode 100644
index 0000000..e15333d
--- /dev/null
+++ b/docs/release/configguide/imgs/visibility_distinct_http.png
Binary files differ
diff --git a/docs/release/configguide/imgs/visibility_monitoring_metrics.png b/docs/release/configguide/imgs/visibility_monitoring_metrics.png
new file mode 100644
index 0000000..f5c6ada
--- /dev/null
+++ b/docs/release/configguide/imgs/visibility_monitoring_metrics.png
Binary files differ
diff --git a/docs/release/configguide/imgs/visibility_overview.png b/docs/release/configguide/imgs/visibility_overview.png
new file mode 100644
index 0000000..f986440
--- /dev/null
+++ b/docs/release/configguide/imgs/visibility_overview.png
Binary files differ
diff --git a/docs/release/configguide/imgs/visibility_system_counts_response_times.png b/docs/release/configguide/imgs/visibility_system_counts_response_times.png
new file mode 100644
index 0000000..a456a61
--- /dev/null
+++ b/docs/release/configguide/imgs/visibility_system_counts_response_times.png
Binary files differ
diff --git a/docs/release/configguide/index.rst b/docs/release/configguide/index.rst
index daf8986..d0c446e 100644
--- a/docs/release/configguide/index.rst
+++ b/docs/release/configguide/index.rst
@@ -3,14 +3,19 @@
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, Authors of Clover
-.. _clover_config_guides:
+.. _clover_configguide:
-=================================
-OPNFV Clover Configuration Guides
-=================================
+==========================
+Clover Configuration Guide
+==========================
.. toctree::
:maxdepth: 2
+ controller_services_config_guide.rst
sdc_config_guide.rst
- a_b_config_guide.rst
+ jmeter_config_guide.rst
+ visibility_config_guide.rst
+ modsecurity_config_guide.rst
+ spinnaker_config_guide.rst
+ clovisor_config_guide.rst
diff --git a/docs/release/configguide/jmeter_config_guide.rst b/docs/release/configguide/jmeter_config_guide.rst
new file mode 100644
index 0000000..78858d0
--- /dev/null
+++ b/docs/release/configguide/jmeter_config_guide.rst
@@ -0,0 +1,300 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+.. SPDX-License-Identifier: CC-BY-4.0
+.. (c) Authors of Clover
+
+.. _jmeter_config_guide:
+
+=======================================
+JMeter Validation Configuration Guide
+=======================================
+
+This document provides a guide to use the JMeter validation service, which is introduced in
+the Clover Gambia release.
+
+Overview
+=========
+
+`Apache JMeter`_ is a mature, open source application that supports web client emulation. Its
+functionality has been integrated into the Clover project to allow various CI validations
+and performance tests to be performed. The system under test can either be REST services/APIs
+directly or a set of L7 network services. In the latter scenario, Clover nginx servers may
+be employed as an endpoint to allow traffic to be sent end-to-end across a service chain.
+
+The Clover JMeter integration is packaged as docker containers with manifests to deploy
+in a Kubernetes (k8s) cluster. The Clover CLI (**cloverctl**) can be used to configure and
+control the JMeter service within the k8s cluster via **clover-controller**.
+
+The Clover JMeter integration has the following attributes:
+
+ * **Master/Slave Architecture:** uses the native master/slave implementation of JMeter. The master
+ and slaves have distinct OPNFV docker containers for rapid deployment and usage. Slaves allow
+ the scale of the emulation to be increased linearly for performance testing. However, for
+ functional validations and modest scale, the master may be employed without any slaves.
+
+ * **Test Creation & Control:** JMeter makes use of a rich XML-based test plan. While this offers
+ a plethora of configurable options, it can be daunting for a beginner user to edit directly.
+ Clover provides an abstracted yaml syntax exposing a subset of the available configuration
+ parameters. JMeter test plans are generated on the master and tests can be started from
+ **cloverctl** CLI.
+
+ * **Result Collection:** summary log results and detailed per-request results can be retrieved
+ from the JMeter master during and after tests from the **cloverctl** or from a REST API exposed
+ via **clover-controller**.
+
+.. image:: imgs/jmeter_overview.png
+ :align: center
+ :scale: 100%
+
+Deploying Clover JMeter service
+===============================
+
+Prerequisites
+-------------
+
+The following assumptions must be met before continuing on to deployment:
+
+ * Installation of Docker has already been performed. It's preferable to install Docker CE.
+ * Installation of k8s in a single-node or multi-node cluster.
+ * Clover CLI (**cloverctl**) has been downloaded and setup. Instructions to deploy can be found
+ at :ref:`controller_services_controller`
+ * The **clover-controller** service is deployed in the k8s cluster the validation services will
+ be deployed in. Instructions to deploy can be found at :ref:`controller_services_controller`.
+
+Deploy with Clover CLI
+-----------------------
+
+The easiest way to deploy Clover JMeter validation services into your k8s cluster is to use the
+**cloverctl** CLI using the following command:
+
+.. code-block:: bash
+
+ $ cloverctl create system validation
+
+Container images with the Gambia release tag will be pulled if the tag is unspecified. The release
+tag is **opnfv-7.0.0** for the Gambia release. To deploy the latest containers from master, use
+the command shown below::
+
+ $ cloverctl create system validation -t latest
+
+The Clover CLI will add master/slave pods to the k8s cluster in the default namespace.
+
+The JMeter master/slave docker images will automatically be pulled from the OPNFV public
+Dockerhub registry. Deployments and respective services will be created with three slave
+replica pods added with the **clover-jmeter-slave** prefix. A single master pod will be
+created with the **clover-jmeter-master** prefix.
+
+Deploy from source
+------------------
+
+To deploy from the source code instead, clone the Clover git repository and navigate
+to the directory shown below:
+
+.. code-block:: bash
+
+ $ git clone https://gerrit.opnfv.org/gerrit/clover
+ $ cd clover/clover/tools/jmeter/yaml
+ $ git checkout stable/gambia
+
+To deploy the master, use the following two commands, which render a manifest with
+the Gambia release tag and create the deployment in the k8s cluster::
+
+ $ python render_master.py --image_tag=opnfv-7.0.0 --image_path=opnfv
+ $ kubectl create -f clover-jmeter-master.yaml
+
+JMeter can be injected into an Istio service mesh. To deploy in the default
+namespace within the service mesh, use the following command for manual
+sidecar injection::
+
+ $ istioctl kube-inject -f clover-jmeter-master.yaml | kubectl apply -f -
+
+**Note, when injecting JMeter into the service mesh, only the master will function for
+the Clover integration, as master-slave communication is known not to function with the Java
+RMI API. Ensure 'istioctl' is in your path for the above command.**
+
+To deploy slave replicas, render the manifest yaml and create in k8s adjusting the
+``--replica_count`` value for the number of slave pods desired::
+
+ $ python render_slave.py --image_tag=opnfv-7.0.0 --image_path=opnfv --replica_count=3
+ $ kubectl create -f clover-jmeter-slave.yaml
+
+Verifying the deployment
+------------------------
+
+To verify the validation services are deployed, ensure the following pods are present
+with the command below:
+
+.. code-block:: bash
+
+ $ kubectl get pod --all-namespaces
+
+The listing below must include the following pods assuming deployment in the default
+namespace:
+
+.. code-block:: bash
+
+ NAMESPACE NAME READY STATUS
+ default clover-jmeter-master-688677c96f-8nnnr 1/1 Running
+ default clover-jmeter-slave-7f9695d56-8xh67 1/1 Running
+ default clover-jmeter-slave-7f9695d56-fmpz5 1/1 Running
+ default clover-jmeter-slave-7f9695d56-kg76s 1/1 Running
+ default clover-jmeter-slave-7f9695d56-qfgqj 1/1 Running
+
+Using JMeter Validation
+=======================
+
+Creating a test plan
+--------------------
+
+To employ a test plan that can be used against the :ref:`sdc_config_guide` sample, navigate to
+the cloverctl yaml directory and use the sample named 'jmeter_testplan.yaml', which is shown below.
+
+.. code-block:: yaml
+
+ load_spec:
+ num_threads: 5
+ loops: 2
+ ramp_time: 60
+ duration: 80
+ url_list:
+ - name: url1
+ url: http://proxy-access-control.default:9180
+ method: GET
+ user-agent: chrome
+ - name: url2
+ url: http://proxy-access-control.default:9180
+ method: GET
+ user-agent: safari
+
+The composition of the yaml file breaks down as follows:
+ * ``load_spec`` section of the yaml defines the load profile of the test.
+ * ``num_threads`` parameter defines the maximum number of clients/users the test will emulate.
+ * ``ramp_time`` determines the rate at which threads/users will be setup.
+ * ``loops`` parameter reruns the same test and can be set to 0 to loop forever.
+ * ``duration`` parameter is used to limit the test run time and acts as a hard cutoff when
+ looping forever.
+ * ``url_list`` section of the yaml defines a set of HTTP requests that each user will perform.
+ It includes the request URL that is given a name (used as reference in detailed per-request
+ results) and the HTTP method to use (ex. GET, POST). The ``user-agent`` parameter allows this
+ HTTP header to be specified per request and can be used to emulate browsers and devices.
+
+The ``url`` syntax is <domain or IP>:<port #>. The colon port number may be omitted if port 80
+is intended.
+
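+For readers generating test plans programmatically, the abstracted yaml above can
+be modelled roughly with the Go types in the following sketch. The type names,
+field layout and the yaml library used are illustrative assumptions and not part
+of Clover:
+
+.. code-block:: go
+
+    package main
+
+    import (
+        "fmt"
+
+        yaml "gopkg.in/yaml.v2"
+    )
+
+    // URLSpec describes one HTTP request each emulated user performs.
+    type URLSpec struct {
+        Name      string `yaml:"name"`
+        URL       string `yaml:"url"`
+        Method    string `yaml:"method"`
+        UserAgent string `yaml:"user-agent"`
+    }
+
+    // TestPlan mirrors the abstracted test plan yaml shown above.
+    type TestPlan struct {
+        LoadSpec struct {
+            NumThreads int `yaml:"num_threads"`
+            Loops      int `yaml:"loops"`
+            RampTime   int `yaml:"ramp_time"`
+            Duration   int `yaml:"duration"`
+        } `yaml:"load_spec"`
+        URLList []URLSpec `yaml:"url_list"`
+    }
+
+    func main() {
+        // Reproduce the sample plan from this guide and print it as yaml.
+        plan := TestPlan{URLList: []URLSpec{
+            {Name: "url1", URL: "http://proxy-access-control.default:9180",
+                Method: "GET", UserAgent: "chrome"},
+            {Name: "url2", URL: "http://proxy-access-control.default:9180",
+                Method: "GET", UserAgent: "safari"},
+        }}
+        plan.LoadSpec.NumThreads = 5
+        plan.LoadSpec.Loops = 2
+        plan.LoadSpec.RampTime = 60
+        plan.LoadSpec.Duration = 80
+        out, err := yaml.Marshal(plan)
+        if err != nil {
+            panic(err)
+        }
+        fmt.Print(string(out))
+    }
+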
+The test plan yaml is an abstraction of the JMeter XML syntax (uses .jmx extension) and can be
+pushed to the master using the **cloverctl** CLI with the following command:
+
+.. code-block:: bash
+
+ $ cloverctl create testplan -f jmeter_testplan.yaml
+
+The test plan can now be executed and will automatically be distributed to available JMeter slaves.
+
+Starting the test
+-----------------
+
+Once a test plan has been created on the JMeter master, a test can be started for the test plan
+with the following command:
+
+.. code-block:: bash
+
+ $ cloverctl start testplan
+
+The test will be executed from the **clover-jmeter-master** pod, whereby HTTP requests will
+originate directly from the master. The number of aggregate threads/users and request rates
+can be scaled by increasing the thread count or decreasing the ramp time respectively in the
+test plan yaml. However, the scale of the test can also be controlled by adding slaves to the
+test. When slaves are employed, the master will only be used to control slaves and will not be
+a source of traffic. Each slave pod will execute the test plan in its entirety.
+
+To execute tests using slaves, add the flag '-s' to the start command from the Clover CLI as shown
+below:
+
+.. code-block:: bash
+
+ $ cloverctl start testplan -s <slave count>
+
+The **clover-jmeter-slave** pods must be deployed in advance before executing the above command. If
+the steps outlined in section `Deploy with Clover CLI`_ have been followed, three slaves will
+have already been deployed.
+
+Retrieving Results
+------------------
+
+Results for the test can be obtained by executing the following commands:
+
+.. code-block:: bash
+
+ $ cloverctl get testresult
+ $ cloverctl get testresult log
+
+The bottom of the log will display a summary of the test results, as shown below::
+
+ 3 in 00:00:00 = 111.1/s Avg: 7 Min: 6 Max: 8 Err: 0 (0.00%)
+ 20 in 00:00:48 = 0.4/s Avg: 10 Min: 6 Max: 31 Err: 0 (0.00%)
+
+Each row of the summary table is a snapshot in time with the final numbers in the last row.
+In this example, 20 requests (5 users/threads x 2 URLs x 2 loops) were sent successfully
+with no HTTP responses with invalid/error (4xx/5xx) status codes. Longer tests will produce
+a larger number of snapshot rows. Minimum, maximum and average response times are output per
+snapshot.
+
+To obtain detailed, per-request results use the ``detail`` option, as shown below::
+
+ $ cloverctl get testresult detail
+
+ 1541567388622,14,url1,200,OK,ThreadGroup 1-4,text,true,,843,0,1,1,14,0,0
+ 1541567388637,8,url2,200,OK,ThreadGroup 1-4,text,true,,843,0,1,1,8,0,0
+ 1541567388646,6,url1,200,OK,ThreadGroup 1-4,text,true,,843,0,1,1,6,0,0
+ 1541567388653,7,url2,200,OK,ThreadGroup 1-4,text,true,,843,0,1,1,7,0,0
+ 1541567400622,12,url1,200,OK,ThreadGroup 1-5,text,true,,843,0,1,1,12,0,0
+ 1541567400637,8,url2,200,OK,ThreadGroup 1-5,text,true,,843,0,1,1,8,0,0
+ 1541567400645,7,url1,200,OK,ThreadGroup 1-5,text,true,,843,0,1,1,7,0,0
+ 1541567400653,6,url2,200,OK,ThreadGroup 1-5,text,true,,843,0,1,1,6,0,0
+
+Columns are broken down into the following fields:
+ * timeStamp, elapsed, label, responseCode, responseMessage, threadName, dataType, success
+ * failureMessage, bytes, sentBytes, grpThreads, allThreads, Latency, IdleTime, Connect
+
+``elapsed`` or ``Latency`` values are in milliseconds.
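+
+For post-processing, each detailed result line can be treated as a CSV record
+using the column order above. The following Go sketch maps a few of the fields
+onto a struct; the ``Sample`` type and the selection of fields are illustrative
+assumptions:
+
+.. code-block:: go
+
+    package main
+
+    import (
+        "encoding/csv"
+        "fmt"
+        "strconv"
+        "strings"
+    )
+
+    // Sample holds selected fields of one per-request result line.
+    type Sample struct {
+        TimeStamp    int64
+        Elapsed      int
+        Label        string
+        ResponseCode string
+        Success      bool
+        Latency      int
+    }
+
+    // parseDetail reads per-request result lines, one CSV record per request.
+    func parseDetail(raw string) ([]Sample, error) {
+        r := csv.NewReader(strings.NewReader(raw))
+        r.FieldsPerRecord = -1 // tolerate short or extended records
+        records, err := r.ReadAll()
+        if err != nil {
+            return nil, err
+        }
+        var out []Sample
+        for _, f := range records {
+            if len(f) < 14 {
+                continue // skip malformed lines
+            }
+            ts, _ := strconv.ParseInt(f[0], 10, 64)
+            elapsed, _ := strconv.Atoi(f[1])
+            latency, _ := strconv.Atoi(f[13])
+            out = append(out, Sample{TimeStamp: ts, Elapsed: elapsed,
+                Label: f[2], ResponseCode: f[3], Success: f[7] == "true",
+                Latency: latency})
+        }
+        return out, nil
+    }
+
+    func main() {
+        // One of the detailed result lines shown above.
+        line := "1541567388622,14,url1,200,OK,ThreadGroup 1-4,text,true,,843,0,1,1,14,0,0\n"
+        samples, _ := parseDetail(line)
+        for _, s := range samples {
+            fmt.Printf("%s %s elapsed=%dms latency=%dms\n",
+                s.Label, s.ResponseCode, s.Elapsed, s.Latency)
+        }
+    }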
+
+Uninstall from Kubernetes environment
+=====================================
+
+Delete with Clover CLI
+-----------------------
+
+When you're finished working with JMeter validation services, you can uninstall them with the
+following command:
+
+.. code-block:: bash
+
+ $ cloverctl delete system validation
+
+The command above will remove the clover-jmeter-master and clover-jmeter-slave deployment
+and service resources from the current k8s context.
+
+Delete from source
+------------------
+
+The JMeter validation services can be uninstalled from the source code using the commands below:
+
+.. code-block:: bash
+
+ $ cd clover/samples/scenarios
+ $ kubectl delete -f clover-jmeter-master.yaml
+ $ kubectl delete -f clover-jmeter-slave.yaml
+
+Uninstall from Docker environment
+=================================
+
+The OPNFV docker images can be removed with the following commands from nodes
+in the k8s cluster.
+
+.. code-block:: bash
+
+ $ docker rmi opnfv/clover-jmeter-master
+ $ docker rmi opnfv/clover-jmeter-slave
+ $ docker rmi opnfv/clover-controller
+
+.. _Apache JMeter: https://jmeter.apache.org/
diff --git a/docs/release/configguide/sdc_config_guide.rst b/docs/release/configguide/sdc_config_guide.rst
index b95b6cf..a50795f 100644
--- a/docs/release/configguide/sdc_config_guide.rst
+++ b/docs/release/configguide/sdc_config_guide.rst
@@ -119,7 +119,7 @@ The following assumptions must be met before continuing on to deployment:
* Ubuntu 16.04 was used heavily for development and is advised for greenfield deployments.
* Installation of Docker has already been performed. It's preferable to install Docker CE.
* Installation of Kubernetes has already been performed. The installation in this guide was
- executed in a single-node Kubernetes cluster on a modest virtual machine.
+ executed in a single-node Kubernetes cluster.
* Installation of a pod network that supports the Container Network Interface (CNI). It is
recommended to use flannel, as most development work employed this network add-on. Success
using Weave Net as the CNI plugin has also been reported.
@@ -138,32 +138,32 @@ two commands:
$ docker pull opnfv/clover:<release_tag>
-The <release_tag> is **opnfv-6.0.0** for the Fraser release. However, the latest
-will be pulled if the tag is unspecified. To deploy the Fraser release use these commands:
+The <release_tag> is **opnfv-7.0.0** for the Gambia release. However, the latest
+will be pulled if the tag is unspecified. To deploy the Gambia release use these commands:
.. code-block:: bash
- $ docker pull opnfv/clover:opnfv-6.0.0
+ $ docker pull opnfv/clover:opnfv-7.0.0
$ sudo docker run --rm \
-v ~/.kube/config:/root/.kube/config \
opnfv/clover \
/bin/bash -c '/home/opnfv/repos/clover/samples/scenarios/deploy.sh'
-The deploy script invoked above begins by installing Istio 0.6.0 into your Kubernetes environment.
+The deploy script invoked above begins by installing Istio 1.0.0 into your Kubernetes environment.
It proceeds to deploy the entire SDC manifest. If you've chosen to employ this method of
deployment, you may skip the next section.
Deploy from source
------------------
-Ensure Istio 0.6.0 is installed, as a prerequisite, using the following commands:
+Ensure Istio 1.0.0 is installed, as a prerequisite, using the following commands:
.. code-block:: bash
- $ curl -L https://github.com/istio/istio/releases/download/0.6.0/istio-0.6.0-linux.tar.gz | tar xz
- $ cd istio-0.6.0
+ $ curl -L https://github.com/istio/istio/releases/download/1.0.0/istio-1.0.0-linux.tar.gz | tar xz
+ $ cd istio-1.0.0
$ export PATH=$PWD/bin:$PATH
- $ kubectl apply -f install/kubernetes/istio.yaml
+ $ kubectl apply -f install/kubernetes/istio-demo.yaml
The above sequence of commands installs Istio with manual sidecar injection without mutual TLS
authentication between sidecars.
@@ -175,21 +175,21 @@ within the samples directory as shown below:
$ git clone https://gerrit.opnfv.org/gerrit/clover
$ cd clover/samples/scenarios
- $ git checkout stable/fraser
+ $ git checkout stable/gambia
To deploy the sample in the default Kubernetes namespace, use the following command for Istio
manual sidecar injection:
.. code-block:: bash
- $ kubectl apply -f <(istioctl kube-inject --debug -f service_delivery_controller_opnfv.yaml)
+ $ istioctl kube-inject -f service_delivery_controller_opnfv.yaml | kubectl apply -f -
To deploy in another namespace, use the '-n' option. An example namespace of 'sdc' is shown below:
.. code-block:: bash
$ kubectl create namespace sdc
- $ kubectl apply -n sdc -f <(istioctl kube-inject --debug -f service_delivery_controller_opnfv.yaml)
+ $ istioctl kube-inject -f service_delivery_controller_opnfv.yaml | kubectl apply -n sdc -f -
When using the above SDC manifest, all required docker images will automatically be pulled
from the OPNFV public Dockerhub registry. An example of using a Docker local registry is also
@@ -226,11 +226,20 @@ The result of the Istio deployment must include the following pods:
.. code-block:: bash
- $ NAMESPACE NAME READY STATUS
- istio-system istio-ca-59f6dcb7d9-9frgt 1/1 Running
- istio-system istio-ingress-779649ff5b-mcpgr 1/1 Running
- istio-system istio-mixer-7f4fd7dff-mjpr8 3/3 Running
- istio-system istio-pilot-5f5f76ddc8-cglxs 2/2 Running
+ $ NAMESPACE NAME READY STATUS
+ istio-system grafana-6995b4fbd7-pjgbh 1/1 Running
+ istio-system istio-citadel-54f4678f86-t2dng 1/1 Running
+ istio-system istio-egressgateway-5d7f8fcc7b-hs7t4 1/1 Running
+ istio-system istio-galley-7bd8b5f88f-wtrdv 1/1 Running
+ istio-system istio-ingressgateway-6f58fdc8d7-vqwzj 1/1 Running
+ istio-system istio-pilot-d99689994-b48nz 2/2 Running
+ istio-system istio-policy-766bf4bd6d-l89vx 2/2 Running
+ istio-system istio-sidecar-injector-85ccf84984-xpmxp 1/1 Running
+ istio-system istio-statsd-prom-bridge-55965ff9c8-q25rk 1/1 Running
+ istio-system istio-telemetry-55b6b5bbc7-qrg28 2/2 Running
+ istio-system istio-tracing-77f9f94b98-zljrt 1/1 Running
+ istio-system prometheus-7456f56c96-zjd29 1/1 Running
+ istio-system servicegraph-684c85ffb9-9h6p7 1/1 Running
.. _sdc_ingress_port:
@@ -241,19 +250,66 @@ To determine how incoming http traffic on port 80 will be translated, use the fo
.. code-block:: bash
- $ kubectl get svc -n istio-system
- NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
- istio-ingress LoadBalancer 10.104.208.165 <pending> 80:32410/TCP,443:31045/TCP
+ $ kubectl get svc -n istio-system | grep LoadBalancer
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
+ istio-ingressgateway LoadBalancer 10.111.40.165 <pending> 80:32410/TCP,443:31390/TCP
**Note, the CLUSTER-IP of the service will be unused in this example since load balancing service
types are unsupported in this configuration. It is normal for the EXTERNAL-IP to show status
<pending> indefinitely**
-In this example, traffic arriving on port 32410 will flow to istio-ingress. The
-istio-ingress service will route traffic to the **proxy-access-control** service based on a
-configured ingress rule, which defines a gateway for external traffic to enter
-the Istio service mesh. This makes the traffic management and policy features of Istio available
-for edge services.
+In this example, traffic arriving on port 32410 will flow to istio-ingressgateway. The
+istio-ingressgateway service will route traffic to the **proxy-access-control** service based on
+configured Istio ``Gateway`` and ``VirtualService`` resources, which are shown below. The
+``Gateway`` defines a gateway for external traffic to enter the Istio service mesh based on
+incoming protocol, port and domain (``hosts:`` section currently using wildcard). The
+``VirtualService`` associates to a particular ``Gateway`` (sdc-gateway here) and allows for route
+rules to be setup. In the example below, any URL with prefix '/' will be routed to the service
+**proxy-access-control** on port 9180. Additionally, ingress traffic can be mirrored by
+adding a directive to the ``VirtualService`` definition. Below, all matching traffic will be
+mirrored to the **snort-ids** (duplicating internal mirroring performed by the
+**proxy-access-control** for illustrative purposes).
+
+This makes the traffic management and policy features of Istio available to external services
+and clients.
+
+.. code-block:: yaml
+
+ apiVersion: networking.istio.io/v1alpha3
+ kind: Gateway
+ metadata:
+ name: sdc-gateway
+ spec:
+ selector:
+ istio: ingressgateway # use istio default controller
+ servers:
+ - port:
+ number: 80
+ name: http
+ protocol: HTTP
+ hosts:
+ - "*"
+ ---
+ apiVersion: networking.istio.io/v1alpha3
+ kind: VirtualService
+ metadata:
+ name: sdcsample
+ spec:
+ hosts:
+ - "*"
+ gateways:
+ - sdc-gateway
+ http:
+ - match:
+ - uri:
+ prefix: /
+ route:
+ - destination:
+ host: proxy-access-control
+ port:
+ number: 9180
+ mirror:
+ host: snort-ids
Using the sample
================
@@ -269,6 +325,8 @@ flannel CNI IP address, as shown below:
$ wget http://10.244.0.1:32410/
$ curl http://10.244.0.1:32410/
+An IP address of a node within the Kubernetes cluster may also be employed.
+
An HTTP response will be returned as a result of the wget or curl command, if the SDC sample
is operating correctly. However, the visibility into what services were accessed within
the service mesh remains hidden. The next section `Exposing tracing and monitoring`_ shows how
@@ -279,26 +337,14 @@ to inspect the internals of the Istio service mesh.
Exposing tracing and monitoring
-------------------------------
-To gain insight into the service mesh, the Jaeger tracing and Prometheus monitoring tools
-can also be deployed. These tools can show how the sample functions in the service mesh.
-Using the Clover container, issue the following command to deploy these tools
-into your Kubernetes environment:
-
-.. code-block:: bash
-
- $ sudo docker run --rm \
- -v ~/.kube/config:/root/.kube/config \
- opnfv/clover \
- /bin/bash -c '/home/opnfv/repos/clover/samples/scenarios/view.sh'
-
The Jaeger tracing UI is exposed outside of the Kubernetes cluster via any node IP in the cluster
using the following commands **(above command already executes the two commands below)**:
.. code-block:: bash
- $ kubectl expose -n istio-system deployment jaeger-deployment --port=16686 --type=NodePort
+ $ kubectl expose -n istio-system deployment istio-tracing --port=16686 --type=NodePort
-Likewise, the Promethues monitoring UI is exposed with the following command:
+Likewise, the Prometheus monitoring UI is exposed with the following command:
.. code-block:: bash
@@ -309,9 +355,9 @@ following command:
.. code-block:: bash
- $ kubectl get svc --all-namespaces
+ $ kubectl get svc -n istio-system | grep NodePort
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
- istio-system jaeger-deployment NodePort 10.105.94.85 <none> 16686:32174/TCP
+ istio-system istio-tracing NodePort 10.105.94.85 <none> 16686:32174/TCP
istio-system prometheus NodePort 10.97.74.230 <none> 9090:32708/TCP
In the example above, the Jaeger tracing web-based UI will be available on port 32174 and
@@ -321,13 +367,28 @@ URLs for Jaeger and Prometheus respectively::
http://<node IP>:32174
http://<node IP>:32708
-Where node IP is an IP from one of the Kubernetes cluster node(s).
+Where node IP is an IP address of one of the Kubernetes cluster node(s) or a CNI IP address.
+Alternatively, the tracing and monitoring services can be exposed with a LoadBalancer
+service if supported by your Kubernetes cluster (such as GKE), as shown below for tracing::
+
+ kind: Service
+ apiVersion: v1
+ metadata:
+ name: istio-tracing
+ spec:
+ selector:
+ app: jaeger
+ ports:
+ - name: http
+ protocol: TCP
+ port: 80
+ targetPort: 16686
+ type: LoadBalancer
.. image:: imgs/sdc_tracing.png
:align: center
:scale: 100%
-
The diagram above shows the Jaeger tracing UI after traces have been fetched for the
**proxy-access-control** service. After executing an HTTP request using the simple curl/wget
commands outlined in `Using the sample`_ , a list of SDC services will be displayed
@@ -336,8 +397,9 @@ the drop-down and click the ``Find Traces`` button at the bottom of the left con
The blue box denotes what should be displayed for the services that were involved in
handling the request including:
- * istio-ingress
+ * istio-ingressgateway
* proxy-access-control
+ * snort-ids
* http-lb
* clover-server1 OR clover-server2 OR clover-server3
@@ -347,10 +409,9 @@ Modifying the run-time configuration of services
================================================
The following control-plane actions can be invoked via GRPC messaging from a controlling agent.
-For this example, it is conducted from the host OS of a Kubernetes cluster node.
-
-**Note, the subsequent instructions assume the flannel network CNI plugin is installed. Other
-Kubernetes networking plugins may work but have not been validated.**
+For this example, it is conducted from the host OS of a Kubernetes cluster node using Clover
+system services. This requires **clover-controller** and **cloverctl** CLI be deployed. See
+instructions at :ref:`controller_services_controller`.
.. _sdc_modify_lb:
@@ -359,21 +420,16 @@ Modifying the http-lb server list
By default, both versions of the load balancers send incoming HTTP requests to
**clover-server1/2/3** in round-robin fashion. To have the version 2 load balancer
-(**http-lb-v2**) send its traffic to **clover-server4/5** instead, issue the following command:
+(**http-lb-v2**) send its traffic to **clover-server4/5** instead, issue the following command
+from the **cloverctl** CLI::
-.. code-block:: bash
+ $ cloverctl set lb -f lbv2.yaml
- $ sudo docker run --rm \
- -v ~/.kube/config:/root/.kube/config \
- opnfv/clover \
- /bin/bash -c 'python /home/opnfv/repos/clover/samples/services/nginx/docker/grpc/nginx_client.py \
- --service_type=lbv2 --service_name=http-lb-v2'
+The ``lbv2.yaml`` is available from the yaml directory relative to the **cloverctl** binary.
If the command executes successfully, the return message should appear as below::
- Pod IP: 10.244.0.184
Modified nginx config
- Modification complete
If several more HTTP GET requests are subsequently sent to the ingress, the Jaeger UI should
begin to display requests flowing to **clover-server4/5** from **http-lb-v2**. The **http-lb-v1**
@@ -402,40 +458,35 @@ for the alerts. Drilling down into the trace will show a GPRC message from snort
$ wget -U 'asafaweb.com' http://10.244.0.1:32410/
Or alternatively with curl, issue this command to trigger the alert:
-
+
.. code-block:: bash
$ curl -A 'asafaweb.com' http://10.244.0.1:32410/
The community rule can be copied to local rules in order to ensure an alert is generated
-each time the HTTP GET request is observed by snort using the following command.
-
-.. code-block:: bash
+each time the HTTP GET request is observed by snort using the following commands from
+the **cloverctl** CLI::
- $ sudo docker run --rm \
- -v ~/.kube/config:/root/.kube/config \
- opnfv/clover \
- /bin/bash -c 'python /home/opnfv/repos/clover/samples/services/snort_ids/docker/grpc/snort_client.py \
- --cmd=addscan --service_name=snort-ids'
+ $ cloverctl create idsrules -f idsrule_scan.yaml
+ $ cloverctl stop ids
+ $ cloverctl start ids
-Successful completion of the above command will yield output similar to the following::
+The ``idsrule_scan.yaml`` is available from the yaml directory relative to the **cloverctl**
+binary. Successful completion of the above commands will yield output similar to the following::
- Pod IP: 10.244.0.183
- Stopped Snort on pid: 34, Cleared Snort logs
- Started Snort on pid: 91
Added to local rules
+ Stopped Snort on pid: 48, Cleared Snort logs
+ Started Snort on pid: 155
-To add an ICMP rule to snort service, use the following command:
+To add an ICMP rule to snort service, use the following command::
-.. code-block:: bash
+ $ cloverctl create idsrules -f idsrule_icmp.yaml
+ $ cloverctl stop ids
+ $ cloverctl start ids
- $ sudo docker run --rm \
- -v ~/.kube/config:/root/.kube/config \
- opnfv/clover \
- /bin/bash -c 'python /home/opnfv/repos/clover/samples/services/snort_ids/docker/grpc/snort_client.py \
- --cmd=addicmp --service_name=snort-ids'
+The ``idsrule_icmp.yaml`` is available from the yaml directory relative to the **cloverctl**
+binary.
-Successful execution of the above command will trigger alerts whenever ICMP packets are observed
+Successful execution of the above commands will trigger alerts whenever ICMP packets are observed
by the snort service. An alert can be generated by pinging the snort service using the flannel IP
address assigned to the **snort-ids** pod. The Jaeger UI can again be inspected and should display
the same ``ProcessAlert`` messages flowing from the **snort-ids** to the **proxy-access-control**
@@ -560,12 +611,6 @@ custom rule is ``10000001`` and is output in the above listing.
To exit the Redis CLI, use the command ``exit``.
-A-B Validation
---------------
-
-Please see the configuration guide at :ref:`a_b_config_guide` for details on
-validating A-B route rules using the sample in this guide.
-
Uninstall from Kubernetes environment
=====================================
@@ -622,9 +667,8 @@ was installed from source and use the following command:
.. code-block:: bash
- $ cd istio-0.6.0
- $ kubectl delete -f install/kubernetes/istio.yaml
-
+ $ cd istio-1.0.0
+ $ kubectl delete -f install/kubernetes/istio-demo.yaml
Uninstall from Docker environment
=================================
@@ -640,15 +684,13 @@ The OPNFV docker images can be removed with the following commands:
$ docker rmi opnfv/clover
If deployment was performed with the Clover container, the first four images above will not
-be present. The Redis, Prometheus and Jaeger docker images can be removed with the following
-commands, if deployed from source:
+be present. The Redis docker images can be removed with the following commands, if deployed
+from source:
.. code-block:: bash
$ docker rmi k8s.gcr.io/redis
$ docker rmi kubernetes/redis
- $ docker rmi prom/prometheus
- $ docker rmi jaegertracing/all-in-one
If docker images were built locally, they can be removed with the following commands:
diff --git a/docs/release/configguide/spinnaker_config_guide.rst b/docs/release/configguide/spinnaker_config_guide.rst
index f4a3e12..3c46e82 100644
--- a/docs/release/configguide/spinnaker_config_guide.rst
+++ b/docs/release/configguide/spinnaker_config_guide.rst
@@ -240,3 +240,66 @@ Deleting the kubernetes provider in spinnaker:
.. code-block:: bash
$ cloverctl delete provider kubernetes -n my-kubernetes
+
+Deploy Helm Charts
+==================
+
+Currently, Spinnaker supports deploying applications with a Helm chart. For more information, please
+refer to `Deploy Helm Charts <https://www.spinnaker.io/guides/user/kubernetes-v2/deploy-helm/>`_.
+
+Upload helm charts to artifacts
+-------------------------------
+
+Before doing this, please package the helm chart first. For details on how to package the chart,
+refer to the `helm documentation <https://docs.helm.sh/helm/#helm_package>`_.
+
+.. code-block:: bash
+
+ $ wget https://dl.minio.io/client/mc/release/linux-amd64/mc
+ $ chmod +x mc
+ $ ./mc config host add my_minio http://{minio-service-ip}:9000 dont-use-this for-production S3v4
+ $ ./mc mb my_minio/s3-account
+ $ ./mc cp test-0.1.0.tgz my_minio/s3-account/test-0.1.0.tgz
+
+**NOTE:** The minio-service-ip is 10.233.21.175 in this example.
+
+Configure Pipeline
+------------------
+
+This pipeline includes three stages: configuration, bake and deploy.
+
+Configuration stage
+:::::::::::::::::::
+
+We can configure automated triggers and expected artifacts in this stage.
+Here, we just declare the expected artifacts and trigger the pipeline manually.
+
+.. image:: imgs/spinnaker-expected-artifacts.png
+ :align: center
+ :scale: 100%
+
+**NOTE:** We need to enable "Use Default Artifact" when we trigger the pipeline manually.
+
+Bake Manifest stage
+:::::::::::::::::::
+
+For example, we have a test "Bake (Manifest)" stage, shown below:
+
+.. image:: imgs/spinnaker-bake.png
+ :align: center
+ :scale: 100%
+
+Spinnaker has automatically created an embedded/base64 artifact that is bound when the stage completes, representing the fully baked manifest set to be deployed downstream.
+
+.. image:: imgs/spinnaker-produces-artifact.png
+ :align: center
+ :scale: 100%
+
+Deploy Manifest stage
+:::::::::::::::::::::
+
+After the chart has been baked by Helm, we can configure a "Deploy (Manifest)" stage to deploy
+the manifest produced by the previous stage, as shown below.
+
+.. image:: imgs/spinnaker-deploy.png
+ :align: center
+ :scale: 100%
+
+Once this pipeline runs to completion, you can see every resource in your Helm chart deployed.
diff --git a/docs/release/configguide/visibility_config_guide.rst b/docs/release/configguide/visibility_config_guide.rst
new file mode 100644
index 0000000..77db2f7
--- /dev/null
+++ b/docs/release/configguide/visibility_config_guide.rst
@@ -0,0 +1,403 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+.. SPDX-License-Identifier: CC-BY-4.0
+.. (c) Authors of Clover
+
+.. _visibility_config_guide:
+
+==============================================
+Clover Visibility Services Configuration Guide
+==============================================
+
+This document provides a guide to use Clover visibility services, which are initially delivered in
+the Clover Gambia release. A key assumption of this guide is that Istio 1.0.x has been deployed
+to Kubernetes (k8s), as it is a foundational element for Clover visibility services.
+
+Overview
+=========
+
+Clover visibility services are an integrated set of microservices that allow HTTP/gRPC traffic to
+be observed and analyzed in an Istio service mesh within k8s managed clusters. It leverages
+observability open source projects from the CNCF community such as Jaeger for distributed tracing
+and Prometheus for monitoring. These tools are packaged with Istio and service mesh sidecars have
+extensive hooks built in to interface with them. They gather low-level, per HTTP request driven
+data. Clover visibility services focus on enriching the data, gathering it from various sources
+and analyzing it at the system or aggregate level.
+
+The visibility services are comprised of the following microservices all deployed within the
+**clover-system** namespace in a k8s cluster:
+
+ * **clover-controller** - exposes a REST interface outside the k8s cluster and is
+ used to relay messages to other Clover services via gRPC from external agents including the
+ **cloverctl** CLI, web browsers and other APIs, scripts or CI jobs. It incorporates a web
+ application with dashboard views to consume analyzed visibility data and control other
+ Clover services.
+
+ * **clover-collector** - gathers data from tracing (Jaeger) and monitoring (Prometheus)
+ infrastructure that is integrated with Istio using a pull model.
+
+ * **clover-spark** - a Clover-specific Apache Spark service. It leverages Spark 2.3.x native
+ k8s support and includes visibility services artifacts to execute Spark jobs.
+
+ * **clover-spark-submit** - a simple service that continually submits Spark jobs, interacting
+ with the k8s API to spawn driver and executor pods.
+
+ * **cassandra** - a sink for visibility data from **clover-collector** with specific schemas
+ for monitoring and tracing.
+
+ * **redis** - holds configuration data and analyzed data for visibility services. Used by
+ **clover-controller** web application and REST API to maintain state and exchange data.
+
+The table below shows key details of the visibility service manifests outlined above:
+
++---------------------+----------------------+---------------------------+-----------------------+
+| Service | Kubernetes | Docker Image | Ports |
+| | Deployment App Name | | |
++=====================+======================+===========================+=======================+
+| Controller | clover-controller | opnfv/clover-controller | HTTP: 80 (external) |
+| | | | gRPC: 50052, 50054 |
+| | | | |
++---------------------+----------------------+---------------------------+-----------------------+
+| Collector | clover-collector | opnfv/clover-collector | Jaeger: 16686 |
+| | | | Prometheus: 9090 |
+| | | | gRPC: 50054 |
+| | | | Datastore: 6379, 9042 |
++---------------------+----------------------+---------------------------+-----------------------+
+| Spark | clover-spark | opnfv/clover-spark | Datastore: 6379, 9042 |
+| | clover-spark-submit | opnfv/clover-spark-submit | |
+| | | | |
+| | | | |
+| | | | |
++---------------------+----------------------+---------------------------+-----------------------+
+| Data Stores | cassandra | cassandra:3 | 9042 |
+| | redis | k8s.gcr.io/redis:v1 | 6379 |
+| | | kubernetes/redis:v1 | |
++---------------------+----------------------+---------------------------+-----------------------+
+
+The **redis** and **cassandra** data stores use community container images while the other
+services use Clover-specific Dockerhub OPNFV images.
+
+Additionally, visibility services are operated with the **cloverctl** CLI. Further information on
+setting up **clover-controller** and **cloverctl** can be found at
+:ref:`controller_services_config_guide`.
+
+
+.. image:: imgs/visibility_overview.png
+ :align: center
+ :scale: 100%
+
+The diagram above shows the flow of data through the visibility services, where the blue arrows
+denote the path of data ingestion originating from the observability tools. The
+**clover-collector** reads data from these underlying tools using their REST query interfaces
+and inserts it into schemas within the **cassandra** data store.
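+
+The queries issued by **clover-collector** are conceptually similar to the examples below, shown
+here only to illustrate the underlying REST interfaces. The in-cluster service addresses
+``jaeger-query.istio-system`` and ``prometheus.istio-system`` are assumptions based on a default
+Istio install and may differ in your cluster:
+
+.. code-block:: bash
+
+  # List the services known to Jaeger tracing (query service on port 16686)
+  $ curl "http://jaeger-query.istio-system:16686/api/services"
+
+  # Run an instant query against the Prometheus HTTP API (port 9090)
+  $ curl "http://prometheus.istio-system:9090/api/v1/query?query=envoy_tracing_zipkin_spans_sent"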
+
+Apache Spark jobs are used to analyze data within **cassandra**. Spark is deployed using native
+Kubernetes support added since Spark version 2.3. The **clover-spark-submit**
+container continually submits jobs to the Kubernetes API. The API spawns a Spark driver pod which
+in turn spawns executor pods to run Clover-specific jobs packaged in the **clover-spark**
+service.
+
+Analyzed data from **clover-spark** jobs is written to **redis**, an in-memory data store. The
+**clover-controller** provides a REST API for the analyzed visibility data to be read by other
+services (**cloverctl**, CI jobs, etc.) or viewed using a Clover provided visibility web
+dashboard.
+
+Deploying the visibility engine
+===============================
+
+.. _visibility_prerequisites:
+
+Prerequisites
+-------------
+
+The following assumptions must be met before continuing on to deployment:
+
+ * Installation of Docker has already been performed. It's preferable to install Docker CE.
+ * Installation of k8s in a single-node or multi-node cluster with at least
+ twelve cores and 16GB of memory. Google Kubernetes Engine (GKE) clusters are supported.
+ * Installation of Istio in the k8s cluster. See :ref:`sdc_deploy_container`.
+  * Clover CLI (**cloverctl**) has been downloaded and set up. Instructions to deploy it can be
+    found at :ref:`controller_services_controller`.
+
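+A quick sanity check of these prerequisites can be performed before continuing. The commands
+below are only a sketch and assume **kubectl** is already configured for the target cluster and
+that Istio was installed into the ``istio-system`` namespace:
+
+.. code-block:: bash
+
+  $ docker version                    # Docker is installed
+  $ kubectl get nodes                 # k8s cluster is reachable and nodes are Ready
+  $ kubectl get pods -n istio-system  # Istio control plane pods are Running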
+
+Deploy with Clover CLI
+----------------------
+
+To deploy the visibility services into your k8s cluster, use the **cloverctl** CLI command
+shown below::
+
+ $ cloverctl create system visibility
+
+Container images with the Gambia release tag will be pulled if no tag is specified. The release
+tag is **opnfv-7.0.0** for the Gambia release. To deploy the latest containers from master, use
+the command shown below::
+
+ $ cloverctl create system visibility -t latest
+
+ Using config file: /home/earrage/.cloverctl.yaml
+ Creating visibility services
+ Created clover-system namespace
+ Created statefulset "cassandra".
+ Created service "cassandra"
+ Created pod "redis".
+ Created service "redis"
+ Created deployment "clover-collector".
+ Image: opnfv/clover-collector:latest
+ Created service "clover-collector"
+ Created deployment "clover-controller".
+ Image: opnfv/clover-controller:latest
+ Created service "clover-controller-internal"
+ Created serviceaccount "clover-spark".
+ Created clusterrolebinding "clover-spark-default".
+ Created clusterrolebinding "clover-spark".
+ Created deployment "clover-spark-submit".
+ Image: opnfv/clover-spark-submit:latest
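+
+To pin the release explicitly rather than relying on the default, the same ``-t`` flag can be
+used with a release tag, for example::
+
+  $ cloverctl create system visibility -t opnfv-7.0.0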
+
+Verifying the deployment
+------------------------
+
+To verify the visibility services deployment, ensure the following pods have been deployed
+with the command below::
+
+ $ kubectl get pod --all-namespaces
+
+ NAMESPACE NAME READY STATUS
+ clover-system clover-collector-7dcc5d849f-6jc6m 1/1 Running
+ clover-system clover-controller-74d8596bb5-qrr6b 1/1 Running
+ clover-system cassandra-0 1/1 Running
+ clover-system redis 2/2 Running
+ clover-system clover-spark-submit-6c4d5bcdf8-kc6l9 1/1 Running
+
+Additionally, Spark driver and executor pods will continually be deployed, as displayed below::
+
+ clover-system clover-spark-0fa43841362b3f27b35eaf6112965081-driver
+ clover-system clover-spark-fast-d5135cdbdd8330f6b46431d9a7eb3c20-driver
+ clover-system clover-spark-0fa43841362b3f27b35eaf6112965081-exec-3
+ clover-system clover-spark-0fa43841362b3f27b35eaf6112965081-exec-4
+
+Initializing visibility services
+================================
+
+To set up the visibility services, initialization and start commands must be invoked from the
+**cloverctl** CLI. Sample yaml files are provided in the ``yaml`` directory alongside the
+**cloverctl** binary. Navigate to this directory to execute the next sequence of commands.
+
+Initialize the visibility schemas in cassandra with the following command::
+
+ $ cloverctl init visibility
+
+ Using config file: /home/earrage/.cloverctl.yaml
+ clover-controller address: http://10.145.71.21:32044
+ Added visibility schemas in cassandra
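+
+If desired, the newly created schemas can be inspected directly with **cqlsh** from inside the
+cassandra pod. This is only an optional check; the official ``cassandra:3`` image ships with
+**cqlsh**::
+
+  $ kubectl exec -it cassandra-0 -n clover-system -- cqlsh -e "DESCRIBE KEYSPACES"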
+
+The initial configuration for the visibility services consists of the Jaeger tracing and
+Prometheus connection parameters and the sample interval for **clover-collector**. To start
+visibility, use the sample yaml provided and execute the command::
+
+  $ cloverctl start visibility -f start_visibility.yaml
+
+ Started collector on pid: 44
+
+The ``start_visibility.yaml`` has defaults for the tracing and monitoring modules packaged with
+Istio 1.0.0.
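+
+As an optional sanity check after starting, the **clover-collector** logs can be tailed to
+confirm that data is being pulled; the exact log output may vary::
+
+  $ kubectl logs -n clover-system deploy/clover-collector --tail=20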
+
+Configure and control visibility
+================================
+
+The core requirement for Clover visibility services to function is for your services to be
+added to the Istio service mesh. Istio deployment and usage instructions are in the
+:ref:`sdc_config_guide` and the Service Delivery Controller (SDC) sample can be used to
+evaluate the Clover visibility services initially. A user may inject their own web-based services
+into the service mesh and track them separately.
+
+Connecting to visibility dashboard UI
+-------------------------------------
+
+The **clover-controller** service comes packaged with a web-based UI with a visibility view.
+To access the dashboard, navigate to the **clover-controller** address for either a ``NodePort``
+or ``LoadBalancer`` service:
+
+ * http://<node or CNI IP address>:<``NodePort`` port>/
+ * http://<``LoadBalancer`` IP address>/
+
+See :ref:`exposing_clover_controller` to expose **clover-controller** externally with a k8s
+service.
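+
+If the controller has already been exposed, the address and port in use can be discovered with
+**kubectl**; the exact service name may differ depending on how it was exposed::
+
+  $ kubectl get svc -n clover-system | grep clover-controller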
+
+Set runtime parameters using Clover CLI
+---------------------------------------
+
+The services that visibility will track are determined by the deployment/pod names specified in
+the k8s resources. Using some sample services from the SDC guide, the **proxy-access-control**,
+**clover-server1**, **clover-server2** and **clover-server3** services are specified in the
+``set_visibility.yaml`` sample yaml referenced below.
+
+To modify which services visibility will track, use the **cloverctl** CLI, executing the
+following command::
+
+  $ cloverctl set visibility -f set_visibility.yaml
+
+Use the ``services:`` section of the yaml to configure service names to track.
+
+.. code-block:: yaml
+
+ # set_visibility.yaml
+ services:
+ - name: proxy_access_control
+ - name: clover_server1
+ - name: clover_server2
+ - name: clover_server3
+ metric_prefixes:
+ - prefix: envoy_cluster_outbound_9180__
+ - prefix: envoy_cluster_inbound_9180__
+ metric_suffixes:
+ - suffix: _default_svc_cluster_local_upstream_rq_2xx
+ - suffix: _default_svc_cluster_local_upstream_cx_active
+ custom_metrics:
+ - metric: envoy_tracing_zipkin_spans_sent
+
+Set runtime parameters using dashboard UI
+-----------------------------------------
+
+The services being tracked by visibility can also be configured by selecting from the
+boxes under **Discovered Services** within the dashboard, as shown in the graphic below.
+Services can be multi-selected by holding the ``Ctrl`` or ``command`` (Mac OS) key down while
+selecting or deselecting. The SDC services that were configured from
+the **cloverctl** CLI above are currently active, denoted as the boxes with blue backgrounds.
+
+.. image:: imgs/visibility_discovered_active.png
+ :align: center
+ :scale: 100%
+
+In order for any services to be discovered from Jaeger tracing and displayed within the dashboard,
+some traffic must target the services of interest. Sending HTTP requests to your services with
+curl/wget will cause them to be discovered. Using the Clover JMeter validation services, as
+detailed in :ref:`jmeter_config_guide`, against the SDC sample services will also generate a
+service listing. The **cloverctl** CLI commands below will generate traces through the SDC service
+chain with the JMeter master injected into the service mesh::
+
+  $ cloverctl create testplan -f yaml/jmeter_testplan.yaml # yaml located with cloverctl binary
+ $ cloverctl start testplan
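+
+Alternatively, a handful of ad-hoc requests can be sent with curl to achieve the same effect. The
+example below is only an illustration and assumes the SDC ingress has been exposed at
+``$INGRESS_HOST:$INGRESS_PORT`` as described in the SDC guide::
+
+  $ for i in $(seq 1 20); do curl -s -o /dev/null http://$INGRESS_HOST:$INGRESS_PORT/; done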
+
+Clearing visibility data
+-------------------------
+
+To clear visibility data in cassandra and redis, which truncates **cassandra** tables and
+deletes or zeros out **redis** keys, use the following command::
+
+ $ cloverctl clear visibility
+
+This can be useful when analyzing or observing an issue during a particular time horizon.
+The same function can be performed from the dashboard UI using the ``Clear`` button under
+``Visibility Controls``, as illustrated in the graphic from the previous section.
+
+Viewing visibility data
+========================
+
+The visibility dashboard can be used to view visibility data in real-time. The page will
+automatically refresh every 5 seconds. To disable continuous page refresh and freeze on a
+snapshot of the data, use the slider at the top of the page that defaults to ``On``. Toggling
+it will result in it displaying ``Off``.
+
+The visibility dashboard displays various metrics and graphs of analyzed data described in
+subsequent sections.
+
+System metrics
+--------------
+
+System metrics provide aggregate counts from the cassandra tables, including total traces, spans
+and metrics, as depicted on the left side of the graphic below.
+
+.. image:: imgs/visibility_system_counts_response_times.png
+ :align: center
+ :scale: 100%
+
+The metrics count will increase continually, as it is based on time series data from
+Prometheus. The trace count corresponds to the number of HTTP requests sent to services
+within the Istio service mesh. The span count is tied to the trace count, as spans are child
+objects in the Jaeger tracing data hierarchy and their number depends on the service graph
+(the number of interactions between microservices for a given request). For example, a single
+request traversing a chain of four services yields one trace but several spans, so the span
+count grows more rapidly when service graph depths are larger.
+
+Per service response times
+--------------------------
+
+Per service response times are displayed on the right side of the graphic above and are
+calculated from tracing data when visibility is started. The minimum, maximum and average
+response times are output over the entire analysis period.
+
+Group by span field counts
+--------------------------
+
+This category groups schema fields in various combinations to gain insight into the composition
+of HTTP data and can be used by CI scripts to perform various validations. Metrics include:
+
+ * Per service
+ * Distinct URL
+ * Distinct URL / HTTP status code
+ * Distinct user-agent (HTTP header)
+ * Per service / distinct URL
+
+The dashboard displays bar/pie charts with counts and percentages, as depicted below. Each distinct
+key is displayed when hovering your mouse over a chart value.
+
+.. image:: imgs/visibility_distinct_counts.png
+ :align: center
+ :scale: 100%
+
+Distinct HTTP details
+---------------------
+
+A listing of distinct HTTP user-agents, request URLs and status codes is shown below, divided
+into tabs.
+
+.. image:: imgs/visibility_distinct_http.png
+ :align: center
+ :scale: 100%
+
+
+Monitoring Metrics
+------------------
+
+The Istio sidecars (Envoy) expose an extensive set of metrics through Prometheus. These
+metrics can be analyzed with the visibility service by setting up metrics, as outlined in section
+`Set runtime parameters using Clover CLI`_. Use the ``metric_prefixes`` and ``metric_suffixes``
+sections of the set visibility yaml for the many Envoy metrics whose keys embed the service name
+between a prefix and a suffix. A row in the table and a graph will be displayed for each
+combination of service, prefix and suffix.
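+
+As an illustration, with the sample yaml above the prefix ``envoy_cluster_inbound_9180__``, the
+service ``clover_server1`` and the suffix ``_default_svc_cluster_local_upstream_rq_2xx`` combine
+into a single Prometheus time series. The same series can be queried directly against the
+Prometheus HTTP API; the ``prometheus.istio-system`` address is an assumption based on a default
+Istio install::
+
+  $ curl -G "http://prometheus.istio-system:9090/api/v1/query" \
+    --data-urlencode "query=envoy_cluster_inbound_9180__clover_server1_default_svc_cluster_local_upstream_rq_2xx"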
+
+The metrics are displayed in tabular and scatter plots over time formats from the dashboard, as
+shown in the graphic below:
+
+.. image:: imgs/visibility_monitoring_metrics.png
+ :align: center
+ :scale: 100%
+
+Uninstall from Kubernetes environment
+=====================================
+
+Delete with Clover CLI
+----------------------
+
+When you're finished working with Clover visibility services, you can uninstall them with the
+following command::
+
+ $ cloverctl delete system visibility
+
+The command above will remove the visibility services (**clover-controller**, **clover-collector**,
+**clover-spark**, **cassandra** and **redis**) from your Kubernetes environment.
+
+Uninstall from Docker environment
+=================================
+
+The OPNFV docker images can be removed with the following commands:
+
+.. code-block:: bash
+
+ $ docker rmi opnfv/clover-collector
+ $ docker rmi opnfv/clover-spark
+ $ docker rmi opnfv/clover-spark-submit
+ $ docker rmi opnfv/clover-controller
+ $ docker rmi k8s.gcr.io/redis
+ $ docker rmi kubernetes/redis
+ $ docker rmi cassandra:3
diff --git a/docs/release/release-notes/release-notes.rst b/docs/release/release-notes/release-notes.rst
index f345f61..5f9154d 100644
--- a/docs/release/release-notes/release-notes.rst
+++ b/docs/release/release-notes/release-notes.rst
@@ -4,7 +4,7 @@
.. (c) Authors of Clover
-This document provides Clover project's release notes for the OPNFV Fraser release.
+This document provides the Clover project's release notes for the OPNFV Hunter release.
.. contents::
:depth: 3
@@ -18,24 +18,24 @@ Version history
| **Date** | **Ver.** | **Author** | **Comment** |
| | | | |
+--------------------+--------------------+--------------------+--------------------+
-| 2018-03-14 | Fraser 1.0 | Stephen Wong | First draft |
+| 2019-04-30 | Hunter 1.0 | Stephen Wong | First draft |
| | | | |
+--------------------+--------------------+--------------------+--------------------+
Important notes
===============
-The Clover project for OPNFV Fraser can ONLY be run on Kubernetes version 1.9 or
-later
+The Clover project for OPNFV Hunter is tested on Kubernetes versions 1.9 and
+1.11. It is only tested with Istio 1.0.
Summary
=======
-Clover Fraser release provides tools for installation and validation of various
-upstream cloud native projects including Istio, fluentd, Jaegar, and Prometheus.
-In addition, the Fraser release also includes a sample VNF, its Kubernetes
-manifest, simple tools to validate route rules from Istio, as well as an
-example A-B testing framework.
+The Clover Hunter release further enhances the Gambia release by:
+
+#. Integration with ONAP SDC, running on Istio, to demonstrate Clover's
+ visibility engine
+#. Network Tracing: Clovisor has significant stability and feature enhancements
Release Data
============
@@ -47,13 +47,13 @@ Release Data
| **Repo/commit-ID** | |
| | |
+--------------------------------------+--------------------------------------+
-| **Release designation** | Fraser |
+| **Release designation** | Hunter |
| | |
+--------------------------------------+--------------------------------------+
-| **Release date** | 2018-04-27
+| **Release date** | 2019-05-10 |
| | |
+--------------------------------------+--------------------------------------+
-| **Purpose of the delivery** | OPNFV Fraser release |
+| **Purpose of the delivery** | OPNFV Hunter release |
| | |
+--------------------------------------+--------------------------------------+
@@ -62,18 +62,17 @@ Version change
Module version changes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-OPNFV Fraser marks the first release for Clover
Document version changes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-OPNFV Fraser marks the first release for Clover
+Clover Hunter has updated the config guide and user guide accordingly.
Reason for version
^^^^^^^^^^^^^^^^^^^^
Feature additions
~~~~~~~~~~~~~~~~~~~~~~~
-<None> (no backlog)
+See Summary above
Bug corrections
~~~~~~~~~~~~~~~~~~~~~
diff --git a/docs/release/userguide/index.rst b/docs/release/userguide/index.rst
index 5be100f..d09a9d7 100644
--- a/docs/release/userguide/index.rst
+++ b/docs/release/userguide/index.rst
@@ -3,9 +3,11 @@
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, Authors of Clover
-=================================
-OPNFV Clover User Guide
-=================================
+.. _clover_userguide:
+
+=================
+Clover User Guide
+=================
.. toctree::
:maxdepth: 1
diff --git a/docs/release/userguide/userguide.rst b/docs/release/userguide/userguide.rst
index d99359b..468dee3 100644
--- a/docs/release/userguide/userguide.rst
+++ b/docs/release/userguide/userguide.rst
@@ -5,56 +5,45 @@
================================================================
-Clover User Guide (Fraser Release)
+Clover User Guide (Hunter Release)
================================================================
-This document provides the Clover user guide for the OPNFV Fraser release.
+This document provides the Clover user guide for the OPNFV Hunter release.
Description
===========
-As project Clover's first release, the Fraser release includes installation and simple
-validation of foundational upstream projects including Istio, fluentd, Jaeger, and
-Prometheus. The Clover Fraser release also provides a sample set of web-oriented network
-services, which follow a micro-service design pattern, its Kubernetes manifest, and an
-automated script to demonstrate a sample A-B testing use-case. The A-B sample script
-validates performance criteria using Istio request routing functionality leveraging
-the sample services deployed within Istio and the tracing data available within Jaeger.
+Clover Hunter builds on the previous release to further enhance the toolset for
+cloud native network function operations. The main emphases of the release are:
-What is in Fraser?
+#. ONAP SDC on Istio with Clover providing visibility
+#. Clovisor enhancement and stability
+
+What is in Hunter?
==================
* Sample micro-service composed VNF named Service Delivery Controller (SDC)
- * Logging module: fluentd and elasticsearch Kubernetes manifests,
- and fluentd installation validation
-
- * Tracing module: Jaeger Kubernetes manifest, installation validation,
- Jaegar tracing query tools, and module for trace data output to datastore
+ * Istio 1.0 support
- * Monitoring module: Prometheus Kubernetes manifest, installation
- validation, and sample Prometheous query of Istio related metrics
+ * clover-collector: collects metrics and traces from Prometheus and
+ Jaeger, and provides a single access point for such data
- * Istio route-rules sample yaml and validation tools
+ * Visibility: utilizes an analytic engine to correlate and organize data
+ collected by clover-collector
- * Test scripts
+ * cloverctl: Clover's new CLI
- * Sample code for an A-B testing demo shown during ONS North America 2018
+ * Clovisor: Clover's cloud native, CNI-plugin agnostic network tracing tool
-Usage
-=====
+ * Integration of HTTP Security Modules with Istio 1.0
- * Python modules to validate installation of fluentd logging, Jaeger tracing, and
- Prometheus monitoring. Deployment and validation instructions can be found at:
- :ref:`logging`, :ref:`tracing`, and :ref:`monitoring` respectively.
+ * JMeter: integrates JMeter as a test client
- * Deployment and usage of SDC sample
- - Services designed and implemented with micro-service design pattern
- - Tested and validated via Istio service mesh tools
- Detailed usage instructions for the sample can be found at :ref:`sdc_config_guide`
+ * Clover UI: a sample UI offering a single-pane view and configuration point for the
+ Clover system
- * An example use-case for A-B testing. Detailed usage instructions for this sample A-B
- validation can be found at: :ref:`a_b_config_guide`
+Usage
+=====
- * Sample tool to validate Istio route rules:
- tools/python clover_validate_route_rules.py -s <service name> -t <test id>
+ * Please refer to the configuration guides for usage details on the various modules