- Uses the client-go package to interface with the k8s API and
  implements functions in the cloverkube package.
- Identifies the GKE LB IP for clover-controller for the user
- Identifies the NodePort port number for clover-controller for the
  user if the environment is local k8s (currently assumes the
  flannel CNI)
- Deploys and deletes clover-collector and clover-controller with
  native client-go constructs (images are currently defined with a
  local registry). Future work will implement other Clover services
  and Istio components. Uses the clover-system namespace.
- Uses the Cobra go package to implement the CLI (as used in kubectl
  and istioctl), following the cloverctl <verb> <noun> convention.
- Interfaces with clover-controller to configure Clover services
  (visibility, IDS, ...) within the cluster via REST messaging (see
  the sketch below)
- Starts the visibility (collector) engine using an input yaml file
  or defaults
- Inits, stops and clears (truncates Cassandra tables) the
  visibility engine, or gets basic stats.
- Adds custom rules to the IDS from an input yaml file and
  starts/stops the IDS
- Generates a jmeter testplan on jmeter-master using an input yaml
  file. Starts tests and outputs logs/results from the CLI.
- Specifies the number of jmeter slaves on which to initiate tests
  from the CLI. Automatically finds the IP addresses of jmeter
  slaves within the k8s cluster.
- Sample yaml files for adding IDS rules, starting the visibility
  engine and generating jmeter test plans.
- Build script to install go and get dependent packages.
- Implements a custom Istio inject package for manual sidecar
  injection (cloverinject). Currently unused, as it is built from
  the Istio 0.8.0/1.0.0 code base.
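A minimal sketch, in Python for brevity, of the kind of REST
message cloverctl sends to clover-controller; the endpoint path,
payload fields and address are assumptions:

    import requests
    import yaml

    # address reported by cloverctl (GKE LB IP or NodePort); assumed here
    controller = 'http://10.244.0.1:32044'

    # load the input yaml params file ('visibility.yaml' is assumed)
    with open('visibility.yaml') as f:
        params = yaml.safe_load(f)

    # '/visibility/start' is a hypothetical route
    resp = requests.post(controller + '/visibility/start', json=params)
    print(resp.status_code, resp.text)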
Change-Id: Ibb8d08cb98267bdffb8905c221473f177d51bbb3
Signed-off-by: Eddie Arrage <eddie.arrage@huawei.com>
- First pass of clover-controller, which resides within the k8s
  cluster and provides interfaces to all Clover services
- Only service that should need to be exposed outside of the
  cluster
- Docker build of a container that uses a stack of nginx, gunicorn
  and flask to provide the REST interface
- The REST interface is intended to serve the cloverctl CLI and
  the dashboard browser UI (see the blueprint sketch below)
- Implements gRPC messaging to clover-collector and snort
- gRPC interface files for snort/nginx are added to the container
  from the repo. Collector gRPC files will be removed from
  controller/control/api once the patch below is merged
  https://gerrit.opnfv.org/gerrit/#/c/57245/ and added similarly
- Provides a first-pass callback for file upload from
  clover-server.
- Some REST messages implement JSON for passing params to internal
  services
- Redis interface added to obtain data from services. Currently a
  simple interface to retrieve snort event information
- YAML manifest renderer to add to k8s. Currently uses a NodePort
  service, defaulting to port 32044.
- Removed collector gRPC interface files with the merge of the
  collector
- Exposed tracing and monitoring host/port parameters, as these
  vary depending on the Istio and Jaeger versions
- Added logging to flask blueprints
- Added a jmeter blueprint interface with REST for testplan
  generation, test start and result retrieval
- Added flask Response to REST reply messages
- Retrieves some basic stats from the collector in a JSON response
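A minimal sketch of a flask blueprint of the sort described above,
serving snort event data out of redis; the route, blueprint and
key names are assumptions:

    import logging
    import redis
    from flask import Blueprint, jsonify

    snort_bp = Blueprint('snort', __name__)  # hypothetical blueprint
    r = redis.StrictRedis(host='redis', port=6379, decode_responses=True)

    @snort_bp.route('/snort/events', methods=['GET'])
    def snort_events():
        # 'snort_events' is an assumed redis key for IDS alerts
        events = r.lrange('snort_events', 0, -1)
        logging.info('returning %d snort events', len(events))
        return jsonify(events)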
Change-Id: I59eaeb860445ade4b45bba22747a61fb0cf0bbd4
Signed-off-by: Eddie Arrage <eddie.arrage@huawei.com>
- Jmeter can be used for L4-7 functional and performance testing
- The Jmeter master has a gRPC server for management
- Generates Jmeter test plans from a minimal yaml params file
  (sample to be added with cloverctl) using a template
- Optionally spans tests across slave containers to allow greater
  loads to be generated
- Specify loop/thread/slave counts and a URL list, which dictate
  the targets and the number of connections that will be attempted
- clover-controller will interface with the gRPC interface on the
  Jmeter master (see the sketch below)
- Starts tests on the master and retrieves log/result files
- Renders master and slave k8s manifest files
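A minimal sketch of the kind of gRPC call clover-controller could
make to the Jmeter master; the stub, service, message and port
names are assumptions standing in for the generated interface
files:

    import grpc
    # assumed names for the stubs generated from the management proto
    import jmeter_pb2
    import jmeter_pb2_grpc

    channel = grpc.insecure_channel('clover-jmeter-master:50054')
    stub = jmeter_pb2_grpc.JmeterStub(channel)
    resp = stub.StartTest(jmeter_pb2.TestParams(
        num_threads=5, num_loops=10,
        url_list=['http://clover-server1']))
    print(resp.message)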
Change-Id: Id144c8f551b7d375ff252c8de0611f895b50387c
Signed-off-by: Eddie Arrage <eddie.arrage@huawei.com>
- Left the file samples/scenarios/service_delivery_controller_opnfv.yaml unchanged.
- Added a yaml definition of the Cassandra StatefulSet and its
  service in a separate file under the tools directory
- The Cassandra service runs with 1 replica
- Deleted 'data-plane-ns' and used 'default' instead for the
  Cassandra containers.
- Reverted changes to samples/scenarios/service_delivery_controller_opnfv.yaml.
- Added a newline (suggested by Wutien)
JIRA: CLOVER-000
Change-Id: I2bb4249cf2523f5011d6fefc69dc469a90e20eaf
Signed-off-by: iharijono <indra.harijono@huawei.com>
Change-Id: I0335fa912a3ca2dff5c989fa06183065216f10e4
Signed-off-by: wutianwei <wutianwei1@huawei.com>
If we set the test id and start the test immediately,
the first test's results cannot be retrieved from Jaeger.
Change-Id: Ia2ab8a91d8c5f9956ea4d3d7c2436fb05490acee
Signed-off-by: wutianwei <wutianwei1@huawei.com>
- Added a container named clover-collector, using the clover
  container as a base, with a build script
- gRPC server to manage the collector process
- Cassandra DB client interface to initialize the visibility keyspace
- Init messaging adds table schemas for tracing - traces & spans
- Adds a table for monitoring - metrics
- Does not implement a Cassandra server; developed using the
  public Cassandra docker container
- Collector process runs in a simple loop that periodically fetches
  traces and monitoring data and inserts them into Cassandra - not
  yet optimized for batch retrieval for monitoring
- CLI interface added to the collector process and used by the
  gRPC server for configuration
- Simple gRPC client script to test the gRPC server and the
  start/stop of the collector process
- Collector process can be configured with access for tracing,
  monitoring and Cassandra
- Added a return value in the monitoring query method
- Added the ability to truncate the tracing, metrics and spans
  tables in cql
- Added cql prepared statements and batch inserts for metrics
  and spans (see the sketch below)
- Aligned the cql connection to the cql deployment within k8s
- Fixed an issue with the cql host list (parsed with ast) and
  collect process args with a background argument
- Added a redis interface to accept the service/metric list
  externally for monitoring (will work in conjunction
  with clover-controller)
- Use k8s DNS names and default ports for monitoring, tracing
  and cassandra
- Added a yaml manifest renderer/template for the collector
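A minimal sketch of the prepared-statement and batch-insert
pattern with the cassandra-driver package; the keyspace, table
and column names are assumptions:

    from cassandra.cluster import Cluster
    from cassandra.query import BatchStatement

    # 'cassandra' resolves via k8s DNS; 'visibility' keyspace as above
    session = Cluster(['cassandra']).connect('visibility')
    insert_metric = session.prepare(
        "INSERT INTO metrics (m_name, m_value, m_time) VALUES (?, ?, ?)")

    # metrics as fetched periodically from monitoring (sample values)
    fetched = [('requests_total', '42', '2018-07-01T00:00:00Z')]

    batch = BatchStatement()
    for name, value, ts in fetched:
        batch.add(insert_metric, (name, value, ts))
    session.execute(batch)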
Change-Id: I3e4353e28844c4ce9c185ff4638012b66c7fff67
Signed-off-by: Eddie Arrage <eddie.arrage@huawei.com>
Change-Id: Ib5b2240de3276164fe9e272bf36f0d1f89f409c0
Signed-off-by: Yujun Zhang <zhang.yujunz@zte.com.cn>
Change-Id: I6a1e526bec4160bcdac32d4124acb110b9cf6959
Signed-off-by: Yujun Zhang <zhang.yujunz@zte.com.cn>
the SDC application"
and on the SDC application
Change-Id: I6e1bd84a6d674a2c4c4484722b20415f5402a59c
Signed-off-by: Stephen Wong <stephen.kf.wong@gmail.com>
- cluster health is not red
- indices are found
- log entries created by istio are found
- requests in and out of the http load balancer match
pytest is used as the test runner, wrapped in `validate.py` (see
the sketch below)
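A minimal pytest sketch of the first two checks; the
elasticsearch address is an assumption:

    # test_validation.py - checks run by validate.py via pytest.main()
    import requests

    ES = 'http://localhost:9200'  # elasticsearch address is assumed

    def test_cluster_health_not_red():
        health = requests.get(ES + '/_cluster/health').json()
        assert health['status'] != 'red'

    def test_indices_found():
        indices = requests.get(ES + '/_cat/indices?format=json').json()
        assert len(indices) > 0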
Change-Id: Iad540b69d05118fadc97df679cf3424513c15e38
Signed-off-by: Yujun Zhang <zhang.yujunz@zte.com.cn>
- Changed default Jaeger ports to 16686 for use with basic
  kubernetes port-forward and CI scripts
- Added a CLI to the validate script that disables the istio
  service check by default, as this check requires at least a
  single http request to istio-ingress after Jaeger deployment.
  It can be enabled with 'python validate.py -s'. The port and IP
  address for Jaeger can optionally be specified with the '-ip'
  and '-port' options (see the sketch below)
- Modified the tracing doc to add a k8s port-forward example in
  addition to k8s expose
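A minimal argparse sketch of the described options; the defaults
beyond port 16686 are assumptions:

    import argparse

    parser = argparse.ArgumentParser(description='Validate Jaeger tracing')
    parser.add_argument('-s', action='store_true',
                        help='enable the istio service check (off by default)')
    parser.add_argument('-ip', default='localhost',
                        help='Jaeger IP (e.g. when using kubectl port-forward)')
    parser.add_argument('-port', default='16686', help='Jaeger query port')
    args = parser.parse_args()
    jaeger = 'http://{}:{}'.format(args.ip, args.port)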
Change-Id: I10fb4d3cccfa50370d44ed7446f67a49c538bba9
Signed-off-by: Eddie Arrage <eddie.arrage@huawei.com>
- Use a community yaml for redis in k8s as a simple data store
- Redis can be used for tracing and also by the snort-ids to store
  alerts that can be processed by other services (see the sketch
  below)
- If flannel is used, the redis CLI can be accessed on the host OS
  with redis-cli -h <flannel ip>
- Within the k8s cluster, the redis service can be accessed via
  DNS using the name 'redis'
- The same yaml for redis is also included in the top-level
  manifest for the SDC scenario. It is included here in case the
  intention is to use it separately (tracing only)
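A minimal sketch of services sharing alert data through the
in-cluster redis service; the key name and payload are
assumptions:

    import redis

    # 'redis' resolves via k8s DNS inside the cluster
    r = redis.StrictRedis(host='redis', port=6379, decode_responses=True)

    # a producer such as snort-ids could push alerts
    r.lpush('snort_alerts', '{"sid": 10000001, "msg": "sample alert"}')

    # a consumer service can read them back
    print(r.lrange('snort_alerts', 0, -1))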
Change-Id: Ibad283a4cc8938fe01f5de6b7743bdb5511be3af
Signed-off-by: Eddie Arrage <eddie.arrage@huawei.com>
- install dependent deb/pip packages
- install basic tools: istioctl, kubectl
- install clover source code
- build/upload docker image script
- update requirements.txt
- update module import path
- To use this image you need to set up the kube-config file,
  e.g. `docker run -v /root/config:/root/.kube/config -it clover bash`
Change-Id: I91044bb99ce8e2b785ef03212d961a97b3d42233
Signed-off-by: QiLiang <liangqi1@huawei.com>
orchestration/kube_client, and tools/clover_validate_rr
Add an 'orchestration' directory. Please note that
'orchestration' does NOT mean Clover does any orchestration ---
similar to how Clover doesn't by itself implement tracing or
logging, orchestration is a directory for code related to Docker
orchestration clients --- such as the k8s client.
kube_client utilizes the Kubernetes python client (a dependency)
to perform tasks against the Kubernetes API server. For this
commit, it has only been tested for weighted route rule
verification, where it performs three tasks (see the sketch
below):
(1) get a list of pods under a namespace --- the pod dictionary
    currently only contains the pod name and label dictionary:
    used to match pod names with the node names in traces from
    OpenTracing
(2) check whether a particular pod is up in a particular
    namespace: used to check if Istio pods are running in the
    istio-system namespace
(3) check whether a container exists in a list of pods under a
    namespace: used to check if application pods have the
    istio-proxy container running
route_rule directly invokes istioctl, as there isn't an Istio
Python client yet. Currently it reads and parses route rules
from Istio, and validates whether a particular trace result
matches the route rules.
Finally, a sample tool, clover_validate_rr, is provided. This
tool assumes a previous test has been run (with an id, and with
both the route-rule-under-test and the corresponding traces
stored --- currently the assumption is that tests were run with
redis-master running on the system). The tool can be invoked as:
python clover_validate_rr.py -t <test-id> -s <service name>
where test-id is the ID of the test (most likely a uuid) and
service name is the name of the service running in the
Kubernetes cluster against which test traces should be fetched.
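A minimal sketch of the three kube_client tasks using the
Kubernetes python client; the packaging into functions and the
namespace/pod names are assumptions:

    from kubernetes import client, config

    config.load_kube_config()  # in-cluster: config.load_incluster_config()
    v1 = client.CoreV1Api()

    # (1) pod names and labels under a namespace
    pods = v1.list_namespaced_pod('default').items
    pod_labels = {p.metadata.name: p.metadata.labels for p in pods}

    # (2) is a particular pod up in istio-system? ('istio-pilot' assumed)
    istio = v1.list_namespaced_pod('istio-system').items
    pilot_up = any(p.metadata.name.startswith('istio-pilot')
                   and p.status.phase == 'Running' for p in istio)

    # (3) do the application pods carry an istio-proxy container?
    injected = all(any(c.name == 'istio-proxy' for c in p.spec.containers)
                   for p in pods)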
Change-Id: Ic8ab6efc23c71ac4643bee796ef986a86f6fc7dd
Signed-off-by: Stephen Wong <stephen.kf.wong@gmail.com>
- Uses the REST interface to obtain traces for services from
  Jaeger (see the sketch below)
- Discovers services available in tracing
- Works only with Jaeger at the moment (not zipkin)
- Optional Redis interface added to store traces per test
- Install doc and validation script added for Jaeger
- Renamed doc to docs
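A minimal sketch of the Jaeger query REST endpoints involved; the
service name and port-forwarded address are assumptions:

    import requests

    jaeger = 'http://localhost:16686'  # e.g. via kubectl port-forward

    # discover services available in tracing
    services = requests.get(jaeger + '/api/services').json()['data']

    # fetch recent traces for one service ('istio-ingress' assumed)
    traces = requests.get(jaeger + '/api/traces',
                          params={'service': 'istio-ingress',
                                  'limit': 10}).json()['data']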
Change-Id: I420137c818df290ecd40aa8d318c6961c511a947
Signed-off-by: Eddie Arrage <eddie.arrage@huawei.com>
- install prometheus
- validate the installation
- add a prometheus query function (see the sketch below)
- TODO: test collecting telemetry data from istio
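A minimal sketch of a query function against the standard
Prometheus HTTP API; the address and metric name are assumptions:

    import requests

    def query_prometheus(query, host='http://localhost:9090'):
        # /api/v1/query is Prometheus' instant-query endpoint
        resp = requests.get(host + '/api/v1/query',
                            params={'query': query})
        return resp.json()['data']['result']

    # e.g. a metric reported by Istio ('istio_request_count' assumed)
    print(query_prometheus('istio_request_count'))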
JIRA: CLOVER-7
Change-Id: I983be2db78c8c5c20c0acee9ae81e891884e07fb
Signed-off-by: QiLiang <liangqi1@huawei.com>
Change-Id: Idbe25c162fb19c59ad4e57fd32a749d1d5a29f63
Signed-off-by: QiLiang <liangqi1@huawei.com>
- install fluentd with elastic stack
- validate the installation (see the sketch below)
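A minimal sketch of checking that fluentd is shipping logs into
elasticsearch; the address and index pattern are assumptions:

    import requests

    es = 'http://localhost:9200'  # elasticsearch address is assumed
    # fluentd's elasticsearch output typically writes 'logstash-*' indices
    hits = requests.get(es + '/logstash-*/_search',
                        params={'size': 1}).json()
    assert hits['hits']['total'], 'no log entries shipped by fluentd'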
JIRA: CLOVER-5
Change-Id: I181a7277bc332ceac549d384cf2c3817a182b06e
Signed-off-by: Yujun Zhang <zhang.yujunz@zte.com.cn>
Adding the basic directories and the corresponding __init__.py (empty for now)
Change-Id: I811620e170ea4aa9363238f1949f299c6fd9d751
Signed-off-by: Stephen Wong <stephen.kf.wong@gmail.com>