- Left the file samples/scenarios/service_delivery_controller_opnfv.yaml unchanged.
- Added a YAML definition of the Cassandra StatefulSet and its service
in a separate file under the tools directory
- The Cassandra service runs with 1 replica (a readiness check is
sketched after this message)
- Deleted 'data-plane-ns' and used 'default' instead for the Cassandra containers.
- Reverted the changes to samples/scenarios/service_delivery_controller_opnfv.yaml.
- Added a trailing newline (suggested by wutianwei)
JIRA: CLOVER-000
Change-Id: I2bb4249cf2523f5011d6fefc69dc469a90e20eaf
Signed-off-by: iharijono <indra.harijono@huawei.com>
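A minimal sketch of how the single-replica deployment could be verified
with the kubernetes Python client; the StatefulSet name 'cassandra' is an
assumption, not taken from the manifest:

```python
# Sketch: check that the Cassandra StatefulSet from the tools manifest
# is up. The StatefulSet name 'cassandra' is hypothetical; the 'default'
# namespace and the single replica come from the commit message.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside the cluster
apps = client.AppsV1Api()

sts = apps.read_namespaced_stateful_set(name="cassandra", namespace="default")
print("replicas desired:", sts.spec.replicas)          # expected: 1
print("replicas ready:  ", sts.status.ready_replicas)  # expected: 1
```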
Change-Id: I0335fa912a3ca2dff5c989fa06183065216f10e4
Signed-off-by: wutianwei <wutianwei1@huawei.com>
If we set the test ID and start the test immediately,
the first test's result cannot be retrieved from Jaeger.
Change-Id: Ia2ab8a91d8c5f9956ea4d3d7c2436fb05490acee
Signed-off-by: wutianwei <wutianwei1@huawei.com>
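A hedged sketch of the polling this fix implies: wait until Jaeger has
ingested traces for the tested service before reading results (host, port
and service name are assumptions):

```python
# Sketch: poll Jaeger until traces for the tested service appear,
# rather than fetching results immediately after setting the test id.
import time

import requests

JAEGER = "http://localhost:16686"  # assumed Jaeger query address

def wait_for_traces(service, attempts=10, delay=2):
    for _ in range(attempts):
        r = requests.get(f"{JAEGER}/api/traces",
                         params={"service": service, "limit": 1})
        if r.ok and r.json().get("data"):
            return r.json()["data"]
        time.sleep(delay)  # give Jaeger time to ingest the first spans
    raise RuntimeError(f"no traces for {service} after {attempts} tries")
```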
- Added a container named clover-collector, using the clover
container as a base, with a build script
- GRPC server to manage the collector process
- Cassandra DB client interface to initialize the visibility keyspace
- Init messaging adds table schemas for tracing (traces & spans)
- Adds a table for monitoring (metrics)
- Does not implement a Cassandra server; developed using the
public Cassandra docker container
- Collector process runs in a simple loop that periodically fetches
traces and monitoring data and inserts them into Cassandra (not yet
optimized for batch retrieval of monitoring data)
- CLI interface added to the collector process, used
by the GRPC server for configuration
- Simple GRPC client script to test the GRPC server and the
start/stop of the collector process
- Collector process can be configured with access for tracing,
monitoring and Cassandra
- Added a return value to the monitoring query method
- Added the ability to truncate the tracing, metrics and spans tables
in CQL
- Added CQL prepared statements and batch inserts for metrics
and spans (sketched after this message)
- Aligned the CQL connection to the CQL deployment within k8s
- Fixed an issue with the CQL host list (parsed with ast) and with
collect process args (background argument)
- Added a redis interface to accept the service/metric list
externally for monitoring (will work in conjunction
with clover-controller)
- Use k8s DNS names and default ports for monitoring, tracing
and Cassandra
- Added a YAML manifest renderer/template for the collector
Change-Id: I3e4353e28844c4ce9c185ff4638012b66c7fff67
Signed-off-by: Eddie Arrage <eddie.arrage@huawei.com>
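A minimal sketch of the prepared-statement/batch-insert pattern described
above, assuming the cassandra-driver package; the metrics column names are
hypothetical, while the 'visibility' keyspace and the 'cassandra' DNS name
come from the message:

```python
# Sketch: batched inserts of metrics rows using a CQL prepared statement.
# Column names are hypothetical; the 'visibility' keyspace and the k8s
# DNS name 'cassandra' (default port 9042) come from the commit message.
from cassandra.cluster import Cluster
from cassandra.query import BatchStatement

session = Cluster(["cassandra"]).connect("visibility")

insert = session.prepare(
    "INSERT INTO metrics (m_name, m_time, m_value) VALUES (?, ?, ?)")

fetched_metrics = [("requests_total", "2018-05-01T00:00:00", 42.0)]  # sample
batch = BatchStatement()
for name, ts, value in fetched_metrics:
    batch.add(insert, (name, ts, value))
session.execute(batch)  # one round trip for the whole batch
```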
Change-Id: Ib5b2240de3276164fe9e272bf36f0d1f89f409c0
Signed-off-by: Yujun Zhang <zhang.yujunz@zte.com.cn>
Change-Id: I6a1e526bec4160bcdac32d4124acb110b9cf6959
Signed-off-by: Yujun Zhang <zhang.yujunz@zte.com.cn>
the SDC application"
and on the SDC application
Change-Id: I6e1bd84a6d674a2c4c4484722b20415f5402a59c
Signed-off-by: Stephen Wong <stephen.kf.wong@gmail.com>
- cluster health is not red
- indices are found
- log entries created by Istio are found
- requests in and out of the HTTP load balancer match
pytest is used as the test runner, wrapped in `validate.py`
(a sketch of such checks follows this message)
Change-Id: Iad540b69d05118fadc97df679cf3424513c15e38
Signed-off-by: Yujun Zhang <zhang.yujunz@zte.com.cn>
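A hedged sketch of what such pytest checks could look like, assuming
Elasticsearch is reachable on localhost:9200 (the real checks are the ones
wrapped in `validate.py`):

```python
# Sketch: pytest checks against the Elasticsearch REST API.
# The host/port are assumptions; validate.py wraps the real versions.
import requests

ES = "http://localhost:9200"

def test_cluster_health_is_not_red():
    assert requests.get(f"{ES}/_cluster/health").json()["status"] != "red"

def test_indices_found():
    assert requests.get(f"{ES}/_cat/indices",
                        params={"format": "json"}).json()

def test_log_entries_found():
    # Match-all search; a real check would filter for Istio fields.
    body = requests.get(f"{ES}/_search", params={"q": "*"}).json()
    total = body["hits"]["total"]
    assert (total["value"] if isinstance(total, dict) else total) > 0
```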
- Changed the default Jaeger port to 16686 for use with basic
kubernetes port-forward and CI scripts
- Added a CLI option to the validate script that disables the Istio
service check by default. The check requires at least a single
HTTP request to istio-ingress after the Jaeger deployment; it can
be enabled with 'python validate.py -s'. The port and IP address
for Jaeger can optionally be specified with the '-port' and '-ip'
options (the CLI surface is sketched after this message)
- Modified the tracing doc to add a k8s port-forward example in
addition to k8s expose
Change-Id: I10fb4d3cccfa50370d44ed7446f67a49c538bba9
Signed-off-by: Eddie Arrage <eddie.arrage@huawei.com>
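A minimal sketch of the CLI surface described above; the flag names '-s',
'-ip' and '-port' come from the message, while the defaults are assumptions:

```python
# Sketch: the validate script's CLI options. Flag names come from the
# commit message; default values here are assumptions.
import argparse

parser = argparse.ArgumentParser(description="Validate Jaeger deployment")
parser.add_argument("-s", dest="service_check", action="store_true",
                    help="enable the Istio service check (off by default)")
parser.add_argument("-ip", default="localhost",
                    help="IP address of the Jaeger query service")
parser.add_argument("-port", default=16686, type=int,
                    help="port of the Jaeger query service")
args = parser.parse_args()

jaeger = f"http://{args.ip}:{args.port}"
```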
- Use a community yaml for redis in k8s as a simple data store
- Redis can be used for tracing and also by the snort-ids service
to store alerts that can be processed by other services (access
is sketched after this message)
- If flannel is used, the redis CLI can be accessed on the
host OS with redis-cli -h <flannel ip>
- Within the k8s cluster, the redis service can be accessed via
DNS using the name 'redis'
- The same yaml for redis is also included in the toplevel manifest
for the SDC scenario. It is included here in case the intention is
to use it separately (tracing only)
Change-Id: Ibad283a4cc8938fe01f5de6b7743bdb5511be3af
Signed-off-by: Eddie Arrage <eddie.arrage@huawei.com>
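A hedged sketch of in-cluster access using redis-py; the 'snort_alerts'
list key is hypothetical, while the DNS name 'redis' and the snort-ids use
case come from the message:

```python
# Sketch: reach the redis service by its k8s DNS name from inside the
# cluster. The list key 'snort_alerts' is hypothetical.
import redis

r = redis.StrictRedis(host="redis", port=6379, decode_responses=True)

r.lpush("snort_alerts", "alert: ICMP ping detected")  # e.g. from snort-ids
print(r.lrange("snort_alerts", 0, -1))                # other services read here
```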
- install dependent deb/pip packages
- install basic tools istioctl, kubectl
- install clover source code
- build/upload docker image script
- update requirements.txt
- update module import path
- To use this image, you need to set up the kube-config file,
e.g. `docker run -v /root/config:/root/.kube/config -it clover bash`
Change-Id: I91044bb99ce8e2b785ef03212d961a97b3d42233
Signed-off-by: QiLiang <liangqi1@huawei.com>
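A minimal sketch of checking, from inside the container, that the mounted
kube-config works; the path comes from the docker run example, and it is an
assumption that the kubernetes Python client is among the installed pip
packages:

```python
# Sketch: verify the kubeconfig mounted at /root/.kube/config (path from
# the docker run example above) can reach the cluster.
from kubernetes import client, config

config.load_kube_config("/root/.kube/config")
print([node.metadata.name for node in client.CoreV1Api().list_node().items])
```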
orchestration/kube_client, and tools/clover_validate_rr
Add an 'orchestration' directory. Please note that
'orchestration' does NOT mean Clover does any orchestration:
just as Clover doesn't by itself implement tracing or
logging, 'orchestration' is a directory for code related to Docker
orchestration clients, such as the k8s client.
kube_client utilizes the Kubernetes python client (a dependency)
to perform tasks against the Kubernetes API server. For this commit
it is only tested for weighted route rule verification; it performs
three tasks (sketched after this message):
(1) get a list of pods under a namespace --- the pod dictionary
currently contains only the pod name and label dictionary: used to
match pod names with the node names in traces from OpenTracing
(2) check whether a particular pod is up in a particular
namespace: used to check if Istio pods are running in the
istio-system namespace
(3) check whether a container exists in a list of pods under a
namespace: used to check if application pods have the
istio-proxy container running
route_rule directly invokes istioctl, as there isn't an Istio
Python client yet. Currently it reads and parses route rules
from Istio and validates whether a particular trace result matches
the route rules.
Finally, a sample tool, clover_validate_rr, is provided. This
tool assumes a previous test has been run (with an id under which
both the route-rule-under-test and the corresponding traces are
stored --- currently the assumption is that tests were run with
redis-master running on the system). The tool can be invoked as:
python clover_validate_rr.py -t <test-id> -s <service name>
where test-id is the ID of the test (most likely a uuid) and
service name is the name of the service running in the
Kubernetes cluster against which the test traces should be
fetched.
Change-Id: Ic8ab6efc23c71ac4643bee796ef986a86f6fc7dd
Signed-off-by: Stephen Wong <stephen.kf.wong@gmail.com>
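A minimal sketch of the three checks using the Kubernetes python client
directly; kube_client's actual method names aren't in the message, so none
are assumed here:

```python
# Sketch of the three kube_client tasks via the Kubernetes python client.
# kube_client's own method names may differ; these are raw API calls.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# (1) pods under a namespace, reduced to pod name -> label dictionary
pods = {p.metadata.name: p.metadata.labels
        for p in v1.list_namespaced_pod("default").items}

# (2) is a particular pod up in a particular namespace?
def pod_running(name_prefix, namespace):
    return any(p.metadata.name.startswith(name_prefix) and
               p.status.phase == "Running"
               for p in v1.list_namespaced_pod(namespace).items)

# (3) does a container exist in the pods of a namespace?
def container_exists(container, namespace):
    return any(c.name == container
               for p in v1.list_namespaced_pod(namespace).items
               for c in p.spec.containers)

print(pod_running("istio-pilot", "istio-system"))
print(container_exists("istio-proxy", "default"))
```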
- Uses the Jaeger REST interface to obtain traces for services
(sketched after this message)
- Discovers services available in tracing
- Works only with Jaeger at the moment (not Zipkin)
- Optional Redis interface added to store traces per test
- Install doc and validation script added for Jaeger
- Renamed doc to docs
Change-Id: I420137c818df290ecd40aa8d318c6961c511a947
Signed-off-by: Eddie Arrage <eddie.arrage@huawei.com>
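A hedged sketch of the two REST calls this implies (service discovery, then
per-service traces); /api/services and /api/traces are Jaeger query
endpoints, but the address and the exact response shape should be treated
as assumptions:

```python
# Sketch: discover services and fetch their traces via Jaeger's REST API.
# The address is an assumption; 16686 matches the port-forward commit above.
import requests

JAEGER = "http://localhost:16686"

services = requests.get(f"{JAEGER}/api/services").json().get("data") or []
for svc in services:
    traces = requests.get(f"{JAEGER}/api/traces",
                          params={"service": svc, "limit": 20}).json()
    print(svc, "->", len(traces.get("data") or []), "traces")
```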
- install prometheus
- validate the installation
- add a prometheus query function (sketched after this message)
- TODO: test collecting telemetry data from istio
JIRA: CLOVER-7
Change-Id: I983be2db78c8c5c20c0acee9ae81e891884e07fb
Signed-off-by: QiLiang <liangqi1@huawei.com>
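A minimal sketch of a prometheus query function like the one described,
using the standard /api/v1/query instant-query endpoint (the server address
is an assumption):

```python
# Sketch: instant queries against the Prometheus HTTP API.
# The server address is an assumption (e.g. a port-forward).
import requests

PROMETHEUS = "http://localhost:9090"

def query(promql):
    r = requests.get(f"{PROMETHEUS}/api/v1/query", params={"query": promql})
    r.raise_for_status()
    body = r.json()
    if body["status"] != "success":
        raise RuntimeError(body)
    return body["data"]["result"]

print(query("up"))  # e.g. istio telemetry targets once scraping is set up
```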
Change-Id: Idbe25c162fb19c59ad4e57fd32a749d1d5a29f63
Signed-off-by: QiLiang <liangqi1@huawei.com>
- install fluentd with elastic stack
- validate the installation
JIRA: CLOVER-5
Change-Id: I181a7277bc332ceac549d384cf2c3817a182b06e
Signed-off-by: Yujun Zhang <zhang.yujunz@zte.com.cn>
Adding the basic directories and the corresponding __init__.py files (empty for now)
Change-Id: I811620e170ea4aa9363238f1949f299c6fd9d751
Signed-off-by: Stephen Wong <stephen.kf.wong@gmail.com>