Diffstat (limited to 'docs/release/configguide')

 docs/release/configguide/clovisor_config_guide.rst            | 156
 docs/release/configguide/controller_services_config_guide.rst | 181
 docs/release/configguide/index.rst                            |  14
 docs/release/configguide/jmeter_config_guide.rst              | 298

4 files changed, 645 insertions, 4 deletions
diff --git a/docs/release/configguide/clovisor_config_guide.rst b/docs/release/configguide/clovisor_config_guide.rst
new file mode 100644
index 0000000..9b5f4a3
--- /dev/null
+++ b/docs/release/configguide/clovisor_config_guide.rst
@@ -0,0 +1,156 @@

.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
.. SPDX-License-Identifier CC-BY-4.0
.. (c) Authors of Clover

.. _clovisor_config_guide:

============================
Clovisor Configuration Guide
============================

Clovisor requires minimal to no configuration to function as a network tracer.
It expects configuration to be set in a redis server running in the
**clover-system** namespace.

No Configuration
================

If a redis server isn't running under the service name **redis** in the
**clover-system** namespace, or if that redis service holds no configuration
related to Clovisor, then Clovisor monitors all pods under the **default**
namespace. The traces are sent to the **jaeger-collector** service under the
**clover-system** namespace.

Using redis-cli
===============

Install ``redis-cli`` on the client machine, and look up the redis IP address:

.. code-block:: bash

    $ kubectl get services -n clover-system

which may return something like the following:

.. code-block:: bash

    NAME      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
    redis     ClusterIP   10.109.151.40   <none>        6379/TCP   16s

If, as above, no external IP is visible, one may be able to reach redis
directly via its pod IP address (this works with Flannel as the CNI plugin,
for example):

.. code-block:: bash

    $ kubectl get pods -n clover-system -o=wide
    NAME      READY     STATUS    RESTARTS   AGE       IP             NODE
    redis     2/2       Running   0          34m       10.244.0.187   clover1804

and one can connect to redis via::

    redis-cli -h 10.244.0.187 -p 6379

Jaeger Collector Configuration
==============================

Clovisor allows the user to specify the Jaeger service to which Clovisor sends
the network traces. This is configured by setting the values of the keys
**clovisor_jaeger_collector** and **clovisor_jaeger_agent**::

    redis> SET clovisor_jaeger_collector "jaeger-collector.istio-system:14268"
    "OK"
    redis> SET clovisor_jaeger_agent "jaeger-agent.istio-system:6831"
    "OK"

Configure Monitoring Namespace and Labels
=========================================

Configuration Value String Format
---------------------------------

    <namespace>[:label-key:label-value]

The user can configure the namespace(s) for Clovisor to tap into by adding
entries to the redis list **clovisor_labels**::

    redis> LPUSH clovisor_labels "my-namespace"
    (integer) 1

The above command causes Clovisor to **NOT** monitor the pods in the
**default** namespace, and to only monitor the pods under **my-namespace**.

If the user wants to monitor both 'default' and 'my-namespace', the 'default'
namespace needs to be explicitly added back to the list::

    redis> LPUSH clovisor_labels "default"
    (integer) 2
    redis> LRANGE clovisor_labels 0 -1
    1.) "default"
    2.) "my-namespace"
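For scripted or CI-driven setups, the same configuration can be applied
non-interactively by passing commands directly to ``redis-cli``. A minimal
sketch, reusing the pod IP address found earlier (the address will differ per
cluster)::

    REDIS_IP=10.244.0.187   # replace with the redis pod or service IP in your cluster
    redis-cli -h $REDIS_IP -p 6379 SET clovisor_jaeger_collector "jaeger-collector.istio-system:14268"
    redis-cli -h $REDIS_IP -p 6379 SET clovisor_jaeger_agent "jaeger-agent.istio-system:6831"
    redis-cli -h $REDIS_IP -p 6379 LPUSH clovisor_labels "default" "my-namespace"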
"my-namespace" + +Clovisor allows user to optionally specify which label match on pods to further +filter the pods to monitor:: + + redis> LPUSH clovisor_labels "my-2nd-ns:app:database" + (integer) 1 + +the above configuration would result in Clovisor only monitoring pods in +my-2nd-ns namespace which matches the label "app:database" + +User can specify multiple labels to filter via adding more configuration +entries:: + + redis> LPUSH clovisor_labels "my-2nd-ns:app:web" + (integer) 2 + redis> LRANGE clovisor_labels 0 -1 + 1.) "my-2nd-ns:app:web" + 2.) "my-2nd-ns:app:database" + +the result is that Clovisor would monitor pods under namespace my-2nd-ns which +match **EITHER** app:database **OR** app:web + +Currently Clovisor does **NOT** support filtering of more than one label per +filter, i.e., no configuration option to specify a case where a pod in a +namespace needs to be matched with TWO or more labels to be monitored + +Configure Egress Match IP address, Port Number, and Matching Pods +================================================================= + +Configruation Value String Format: +---------------------------------- + + <IP Address>:<TCP Port Number>[:<Pod Name Prefix>] + +By default, Clovisor only traces packets that goes to a pod via its service +port, and the response packets, i.e., from pod back to client. User can +configure tracing packet going **OUT** of the pod to the next microservice, or +an external service also via the **clovior_egress_match** list:: + + redis> LPUSH clovior_egress_match "10.0.0.1:3456" + (integer) 1 + +the command above will cause Clovisor to trace packet going out of ALL pods +under monitoring to match IP address 10.0.0.1 and destination TCP port 3456 on +the **EGRESS** side --- that is, packets going out of the pod. + +User can also choose to ignore the outbound IP address, and only specify the +port to trace via setting IP address to zero:: + + redis> LPUSH clovior_egress_match "0:3456" + (integer) 1 + +the command above will cause Clovisor to trace packets going out of all the pods +under monitoring that match destination TCP port 3456. + +User can further specify a specific pod prefix for such egress rule to be +applied:: + + redis> LPUSH clovior_egress_match "0:3456:proxy" + (integer) 1 + +the command above will cause Clovisor to trace packets going out of pods under +monitoring which have name starting with the string "proxy" that match destination +TCP port 3456 diff --git a/docs/release/configguide/controller_services_config_guide.rst b/docs/release/configguide/controller_services_config_guide.rst new file mode 100644 index 0000000..6671458 --- /dev/null +++ b/docs/release/configguide/controller_services_config_guide.rst @@ -0,0 +1,181 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International License. +.. http://creativecommons.org/licenses/by/4.0 +.. SPDX-License-Identifier CC-BY-4.0 +.. (c) Authors of Clover + +.. _controller_services_config_guide: + +============================================== +Clover Controller Services Configuration Guide +============================================== + +This document provides a guide to use the Clover controller services, which are introduced in +the Clover Gambia release. + +Overview +========= + +Clover controller services allow users to control and access information about Clover +microservices. Two new components are added to Clover to facilitate an ephemeral, cloud native +workflow. 
diff --git a/docs/release/configguide/controller_services_config_guide.rst b/docs/release/configguide/controller_services_config_guide.rst
new file mode 100644
index 0000000..6671458
--- /dev/null
+++ b/docs/release/configguide/controller_services_config_guide.rst
@@ -0,0 +1,181 @@

.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
.. SPDX-License-Identifier CC-BY-4.0
.. (c) Authors of Clover

.. _controller_services_config_guide:

==============================================
Clover Controller Services Configuration Guide
==============================================

This document provides a guide to using the Clover controller services, which are
introduced in the Clover Gambia release.

Overview
========

Clover controller services allow users to control and access information about Clover
microservices. Two new components are added to Clover to facilitate an ephemeral,
cloud native workflow. A CLI interface named **cloverctl** interfaces to the
Kubernetes (k8s) API and also to **clover-controller**, a microservice deployed within
the k8s cluster to instrument other Clover k8s services, including sample network
services, visibility/validation services and supporting datastores (redis, cassandra).
The **clover-controller** service provides message routing, communicating REST with
cloverctl or other API/UI interfaces and gRPC to internal k8s cluster microservices.
It acts as an internal agent and reduces the need to expose multiple Clover services
outside of a k8s cluster.

The **clover-controller** is packaged as a docker container with manifests to deploy
in a Kubernetes (k8s) cluster. The **cloverctl** CLI is packaged as a binary (Golang)
within a tarball, with associated yaml files that can be used to configure and control
other Clover microservices within the k8s cluster via **clover-controller**. The
**cloverctl** CLI can also deploy/delete other Clover services within the k8s cluster
for convenience.

The **clover-controller** service provides the following functions:

 * **REST API:** interface allows CI scripts/automation to control sample network
   services, visibility and validation services. Analyzed visibility data can be
   consumed by other services with REST messaging.

 * **CLI Endpoint:** acts as an endpoint for many **cloverctl** CLI commands using the
   **clover-controller** REST API and relays messages to other services via gRPC.

 * **UI Dashboard:** provides a web interface exposing visibility views to interact
   with Clover visibility services. It presents analyzed visibility data and provides
   basic controls such as selecting which user services visibility will track.

.. image:: imgs/controller_services.png
   :align: center
   :scale: 100%

The **cloverctl** CLI command syntax is similar to the k8s kubectl or istio istioctl
CLI tools, using a <verb> <noun> convention.

Help can be accessed using the ``--help`` option, as shown below::

    $ cloverctl --help

Deploying Clover system services
================================

Prerequisites
-------------

The following assumptions must be met before continuing on to deployment:

 * Installation of Docker has already been performed. It's preferable to install
   Docker CE.
 * Installation of k8s in a single-node or multi-node cluster has been performed.

.. _controller_services_cli:

Download Clover CLI
-------------------

Download the cloverctl binary from the location below::

    $ curl -L https://github.com/opnfv/clover/raw/stable/gambia/download/cloverctl.tar.gz | tar xz
    $ cd cloverctl
    $ export PATH=$PWD:$PATH

To begin deploying Clover services, ensure the correct k8s context is enabled.
Validate that the CLI can interact with the k8s API with the command::

    $ cloverctl get services

The command above must return a listing of the current k8s services, similar to the
output of 'kubectl get svc --all-namespaces'.

.. _controller_services_controller:

Deploying clover-controller
---------------------------

To deploy the **clover-controller** service, use the command below:

.. code-block:: bash

    $ cloverctl create system controller

The k8s pod listing below must include the **clover-controller** pod in the
**clover-system** namespace:

.. code-block:: bash

    $ kubectl get pods --all-namespaces | grep clover-controller

    NAMESPACE       NAME                                  READY   STATUS
    clover-system   clover-controller-74d8596bb5-jczqz    1/1     Running

Exposing clover-controller
==========================

To expose the **clover-controller** deployment outside of the k8s cluster, a k8s
NodePort or LoadBalancer service must be employed.

Using NodePort
--------------

To use a NodePort for the **clover-controller** service, use the following command::

    $ cloverctl create system controller nodeport

The NodePort default is to use port 32044. To modify this, edit the yaml relative
to the **cloverctl** path at ``yaml/controller/service_nodeport.yaml`` before invoking
the command above. Delete the ``nodePort:`` key in the yaml to let k8s select an
available port within the range 30000-32767.
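When the ``nodePort:`` key is deleted and k8s selects the port itself, the assigned
value can be read back from the service listing. A sketch, assuming the created
service name contains **clover-controller** (the node port appears after the colon in
the PORT(S) column)::

    $ kubectl get svc --all-namespaces | grep clover-controller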
Using LoadBalancer
------------------

For k8s clusters that support a LoadBalancer service, such as GKE, one can be created
for **clover-controller** with the following command::

    $ cloverctl create system controller lb

Setup with cloverctl CLI
------------------------

The **cloverctl** CLI communicates with **clover-controller** on the service exposed
above and requires the IP address of either the load balancer or a cluster node IP
address, if a NodePort service is used. For a LoadBalancer service, **cloverctl** will
automatically find the IP address to use and no further action is required.

However, if a NodePort service is used, an additional step is required to configure
the IP address for **cloverctl** to target. This may be the CNI (ex. flannel/weave) IP
address or the IP address of a k8s node interface. The **cloverctl** CLI will
automatically determine the NodePort port number configured. To configure the IP
address, create a file named ``.cloverctl.yaml`` and add a single line to the yaml
file with the following::

    ControllerIP: <IP address>

This file must be located in your ``HOME`` directory or in the same directory as the
**cloverctl** binary.
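A minimal sketch of creating this file, using a hypothetical node IP address that must
be replaced with one from your cluster::

    $ echo "ControllerIP: 192.0.2.50" > ~/.cloverctl.yaml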
Uninstall from Kubernetes environment
=====================================

Delete with Clover CLI
----------------------

When you're finished working with Clover system services, you can uninstall them with
the following commands:

.. code-block:: bash

    $ cloverctl delete system controller
    $ cloverctl delete system controller nodeport # for NodePort
    $ cloverctl delete system controller lb       # for LoadBalancer

The commands above will remove the clover-controller deployment and service resources
created from the current k8s context.

Uninstall from Docker environment
=================================

The OPNFV docker image for the **clover-controller** can be removed with the following
command from nodes in the k8s cluster:

.. code-block:: bash

    $ docker rmi opnfv/clover-controller

diff --git a/docs/release/configguide/index.rst b/docs/release/configguide/index.rst
index daf8986..41c1eca 100644
--- a/docs/release/configguide/index.rst
+++ b/docs/release/configguide/index.rst
@@ -3,14 +3,20 @@
 .. http://creativecommons.org/licenses/by/4.0
 .. (c) OPNFV, Authors of Clover
 
-.. _clover_config_guides:
+.. _clover_configguide:
 
-=================================
-OPNFV Clover Configuration Guides
-=================================
+==========================
+Clover Configuration Guide
+==========================
 
 .. toctree::
    :maxdepth: 2
 
+   controller_services_config_guide.rst
    sdc_config_guide.rst
    a_b_config_guide.rst
+   jmeter_config_guide.rst
+   visibility_config_guide.rst
+   modsecurity_config_guide.rst
+   spinnaker_config_guide.rst
+   clovisor_config_guide.rst

diff --git a/docs/release/configguide/jmeter_config_guide.rst b/docs/release/configguide/jmeter_config_guide.rst
new file mode 100644
index 0000000..de1d2f5
--- /dev/null
+++ b/docs/release/configguide/jmeter_config_guide.rst
@@ -0,0 +1,298 @@

.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
.. SPDX-License-Identifier CC-BY-4.0
.. (c) Authors of Clover

.. _jmeter_config_guide:

=====================================
JMeter Validation Configuration Guide
=====================================

This document provides a guide to using the JMeter validation service, which is
introduced in the Clover Gambia release.

Overview
========

Apache JMeter is a mature, open source application that supports web client emulation.
Its functionality has been integrated into the Clover project to allow various CI
validations and performance tests to be performed. The system under test can either be
REST services/APIs directly or a set of L7 network services. In the latter scenario,
Clover nginx servers may be employed as endpoints to allow traffic to be sent
end-to-end across a service chain.

The Clover JMeter integration is packaged as docker containers with manifests to
deploy in a Kubernetes (k8s) cluster. The Clover CLI (**cloverctl**) can be used to
configure and control the JMeter service within the k8s cluster via
**clover-controller**.

The Clover JMeter integration has the following attributes:

 * **Master/Slave Architecture:** uses the native master/slave implementation of
   JMeter. The master and slaves have distinct OPNFV docker containers for rapid
   deployment and usage. Slaves allow the scale of the emulation to be increased
   linearly for performance testing. However, for functional validations and modest
   scale, the master may be employed without any slaves.

 * **Test Creation & Control:** JMeter makes use of a rich XML-based test plan. While
   this offers a plethora of configurable options, it can be daunting for a beginner
   to edit directly. Clover provides an abstracted yaml syntax exposing a subset of
   the available configuration parameters. JMeter test plans are generated on the
   master and tests can be started from the **cloverctl** CLI.

 * **Result Collection:** summary log results and detailed per-request results can be
   retrieved from the JMeter master during and after tests, from **cloverctl** or from
   a REST API exposed via **clover-controller**.

.. image:: imgs/jmeter_overview.png
   :align: center
   :scale: 100%

Deploying Clover JMeter service
===============================

Prerequisites
-------------

The following assumptions must be met before continuing on to deployment:

 * Installation of Docker has already been performed. It's preferable to install
   Docker CE.
 * Installation of k8s in a single-node or multi-node cluster has been performed.
 * The Clover CLI (**cloverctl**) has been downloaded and set up. Instructions to
   deploy can be found at :ref:`controller_services_controller`.
 * The **clover-controller** service is deployed in the k8s cluster the validation
   services will be deployed in. Instructions to deploy can be found at
   :ref:`controller_services_controller`.
Deploy with Clover CLI
----------------------

The easiest way to deploy the Clover JMeter validation services into your k8s cluster
is to use the **cloverctl** CLI, with the following command:

.. code-block:: bash

    $ cloverctl create system validation

Container images with the Gambia release tag will be pulled if the tag is unspecified.
The release tag is **opnfv-7.0.0** for the Gambia release. To deploy the latest
containers from master, use the command shown below::

    $ cloverctl create system validation -t latest

The Clover CLI will add master/slave pods to the k8s cluster in the default namespace.

The JMeter master/slave docker images will automatically be pulled from the OPNFV
public Dockerhub registry. Deployments and respective services will be created, with
three slave replica pods added with the **clover-jmeter-slave** prefix. A single
master pod will be created with the **clover-jmeter-master** prefix.

Deploy from source
------------------

To deploy from the source code instead, clone the Clover git repository and navigate
to the directory shown below:

.. code-block:: bash

    $ git clone https://gerrit.opnfv.org/gerrit/clover
    $ cd clover/clover/tools/jmeter/yaml
    $ git checkout stable/gambia

To deploy the master, use the following two commands, which render a manifest with the
Gambia release tags and create the deployment in the k8s cluster::

    $ python render_master.py --image_tag=opnfv-7.0.0 --image_path=opnfv
    $ kubectl create -f clover-jmeter-master.yaml

JMeter can be injected into an Istio service mesh. To deploy in the default namespace
within the service mesh, use the following command for manual sidecar injection::

    $ istioctl kube-inject -f clover-jmeter-master.yaml | kubectl apply -f -

**Note: when injecting JMeter into the service mesh, only the master will function for
the Clover integration, as master-slave communication is known not to function with
the Java RMI API. Ensure 'istioctl' is in your path for the above command.**

To deploy slave replicas, render the manifest yaml and create it in k8s, adjusting the
``--replica_count`` value for the number of slave pods desired::

    $ python render_slave.py --image_tag=opnfv-7.0.0 --image_path=opnfv --replica_count=3
    $ kubectl create -f clover-jmeter-slave.yaml

Verifying the deployment
------------------------

To verify the validation services are deployed, ensure the following pods are present
with the command below:

.. code-block:: bash

    $ kubectl get pod --all-namespaces

The listing below must include the following pods, assuming deployment in the default
namespace:

.. code-block:: bash

    NAMESPACE   NAME                                    READY   STATUS
    default     clover-jmeter-master-688677c96f-8nnnr   1/1     Running
    default     clover-jmeter-slave-7f9695d56-8xh67     1/1     Running
    default     clover-jmeter-slave-7f9695d56-fmpz5     1/1     Running
    default     clover-jmeter-slave-7f9695d56-kg76s     1/1     Running
    default     clover-jmeter-slave-7f9695d56-qfgqj     1/1     Running
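If a different number of slaves is needed after deployment, the slave deployment can
likely be scaled directly with kubectl rather than re-rendering manifests. A sketch,
assuming the deployment object is named **clover-jmeter-slave**, matching the pod
prefix above::

    $ kubectl scale deployment clover-jmeter-slave --replicas=5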
Using JMeter Validation
=======================

Creating a test plan
--------------------

To employ a test plan that can be used against the :ref:`sdc_config_guide` sample,
navigate to the cloverctl yaml directory and use the sample named
``jmeter_testplan.yaml``, which is shown below.

.. code-block:: yaml

    load_spec:
      num_threads: 5
      loops: 2
      ramp_time: 60
      duration: 80
    url_list:
      - name: url1
        url: http://proxy-access-control.default:9180
        method: GET
        user-agent: chrome
      - name: url2
        url: http://proxy-access-control.default:9180
        method: GET
        user-agent: safari

The composition of the yaml file breaks down as follows:

 * The ``load_spec`` section of the yaml defines the load profile of the test.
 * The ``num_threads`` parameter defines the maximum number of clients/users the test
   will emulate.
 * ``ramp_time`` determines the rate at which threads/users will be set up.
 * The ``loops`` parameter reruns the same test and can be set to 0 to loop forever.
 * The ``duration`` parameter is used to limit the test run time and acts as a hard
   cutoff when looping forever.
 * The ``url_list`` section of the yaml defines a set of HTTP requests that each user
   will perform. It includes the request URL, which is given a name (used as a
   reference in detailed per-request results), and the HTTP method to use (ex. GET,
   POST). The ``user-agent`` parameter allows this HTTP header to be specified per
   request and can be used to emulate browsers and devices.

The ``url`` syntax is <domain or IP>:<port #>. The colon and port number may be
omitted if port 80 is intended.

The test plan yaml is an abstraction of the JMeter XML syntax (which uses the .jmx
extension) and can be pushed to the master using the **cloverctl** CLI with the
following command:

.. code-block:: bash

    $ cloverctl create testplan -f jmeter_testplan.yaml

The test plan can now be executed and will automatically be distributed to available
JMeter slaves.

Starting the test
-----------------

Once a test plan has been created on the JMeter master, a test can be started for the
test plan with the following command:

.. code-block:: bash

    $ cloverctl start testplan

The test will be executed from the **clover-jmeter-master** pod, whereby HTTP requests
will originate directly from the master. The number of aggregate threads/users and
request rates can be scaled by increasing the thread count or decreasing the ramp time
respectively in the test plan yaml. However, the scale of the test can also be
controlled by adding slaves to the test. When slaves are employed, the master will
only be used to control slaves and will not be a source of traffic. Each slave pod
will execute the test plan in its entirety.

To execute tests using slaves, add the ``-s`` flag to the start command of the Clover
CLI, as shown below:

.. code-block:: bash

    $ cloverctl start testplan -s <slave count>

The **clover-jmeter-slave** pods must be deployed before executing the above command.
If the steps outlined in section `Deploy with Clover CLI`_ have been followed, three
slaves will already have been deployed.

Retrieving Results
------------------

Results for the test can be obtained by executing the following commands:

.. code-block:: bash

    $ cloverctl get testresult
    $ cloverctl get testresult log

The bottom of the log will display a summary of the test results, as shown below::

    3 in 00:00:00 = 111.1/s Avg:  7 Min:  6 Max:  8 Err:  0 (0.00%)
    20 in 00:00:48 =  0.4/s Avg: 10 Min:  6 Max: 31 Err:  0 (0.00%)

Each row of the summary table is a snapshot in time, with the final numbers in the
last row. In this example, 20 requests (5 users/threads x 2 URLs x 2 loops) were sent
successfully, with no HTTP responses with invalid/error (4xx/5xx) status codes.
Longer tests will produce a larger number of snapshot rows. Minimum, maximum and
average response times are output per snapshot.

To obtain detailed, per-request results use the ``detail`` option, as shown below::

    $ cloverctl get testresult detail

    1541567388622,14,url1,200,OK,ThreadGroup 1-4,text,true,,843,0,1,1,14,0,0
    1541567388637,8,url2,200,OK,ThreadGroup 1-4,text,true,,843,0,1,1,8,0,0
    1541567388646,6,url1,200,OK,ThreadGroup 1-4,text,true,,843,0,1,1,6,0,0
    1541567388653,7,url2,200,OK,ThreadGroup 1-4,text,true,,843,0,1,1,7,0,0
    1541567400622,12,url1,200,OK,ThreadGroup 1-5,text,true,,843,0,1,1,12,0,0
    1541567400637,8,url2,200,OK,ThreadGroup 1-5,text,true,,843,0,1,1,8,0,0
    1541567400645,7,url1,200,OK,ThreadGroup 1-5,text,true,,843,0,1,1,7,0,0
    1541567400653,6,url2,200,OK,ThreadGroup 1-5,text,true,,843,0,1,1,6,0,0

Columns are broken down into the following fields:

 * timeStamp, elapsed, label, responseCode, responseMessage, threadName, dataType,
   success
 * failureMessage, bytes, sentBytes, grpThreads, allThreads, Latency, IdleTime,
   Connect

The ``elapsed`` and ``Latency`` values are in milliseconds.
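Because the detailed output is plain CSV, quick ad-hoc analysis can be scripted around
it. A minimal sketch, assuming the detail output is captured to a file and that the
second column is the ``elapsed`` time as listed above (the ``NF`` guard skips any
non-CSV lines that may surround the data)::

    $ cloverctl get testresult detail > results.csv
    $ awk -F',' 'NF > 10 { sum += $2; n++ } END { if (n) printf "%d requests, avg elapsed %.1f ms\n", n, sum/n }' results.csv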
Uninstall from Kubernetes environment
=====================================

Delete with Clover CLI
----------------------

When you're finished working with the JMeter validation services, you can uninstall
them with the following command:

.. code-block:: bash

    $ cloverctl delete system validation

The command above will remove the clover-jmeter-master and clover-jmeter-slave
deployment and service resources from the current k8s context.

Delete from source
------------------

The JMeter validation services can be uninstalled from the source code using the
commands below:

.. code-block:: bash

    $ cd clover/samples/scenarios
    $ kubectl delete -f clover-jmeter-master.yaml
    $ kubectl delete -f clover-jmeter-slave.yaml

Uninstall from Docker environment
=================================

The OPNFV docker images can be removed with the following commands from nodes in the
k8s cluster:

.. code-block:: bash

    $ docker rmi opnfv/clover-jmeter-master
    $ docker rmi opnfv/clover-jmeter-slave
    $ docker rmi opnfv/clover-controller
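Note that ``docker rmi`` with a bare image name only removes the ``latest`` tag. If
release-tagged images were pulled, as in the deployment steps earlier in this guide,
the tag must be included when removing them, as sketched below with the Gambia tag::

    $ docker rmi opnfv/clover-jmeter-master:opnfv-7.0.0
    $ docker rmi opnfv/clover-jmeter-slave:opnfv-7.0.0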