author    Mytnyk, Volodymyr <volodymyrx.mytnyk@intel.com>  2017-10-25 14:07:58 +0100
committer Mytnyk, Volodymyr <volodymyrx.mytnyk@intel.com>  2017-10-25 17:35:02 +0100
commit    981d0b4208c31265344c52ff3ad2913c0284cb9f (patch)
tree      e3260652728476527fb73507696edfd1b711b0f2 /docs/release/userguide/collectd.ves.userguide.rst
parent    ecf1ba1c5000718d1f0d90270af33039b488c835 (diff)
ves: update VES app user guide
- Updated VES application user guide according to latest changes of VES application.
- Fixed sphinx user guide warnings

Change-Id: I6e402f2ab5f05ace47d779f87fe650e305973128
Signed-off-by: Mytnyk, Volodymyr <volodymyrx.mytnyk@intel.com>
Diffstat (limited to 'docs/release/userguide/collectd.ves.userguide.rst')
-rw-r--r--  docs/release/userguide/collectd.ves.userguide.rst  333
1 file changed, 166 insertions(+), 167 deletions(-)
diff --git a/docs/release/userguide/collectd.ves.userguide.rst b/docs/release/userguide/collectd.ves.userguide.rst
index 7077cecd..e7fcf738 100644
--- a/docs/release/userguide/collectd.ves.userguide.rst
+++ b/docs/release/userguide/collectd.ves.userguide.rst
@@ -10,52 +10,10 @@ The Barometer repository contains a python based application for VES.
The application currently supports pushing platform relevant metrics through the
additional measurements field for VES.
-Collectd has a write_kafka plugin that will send collectd metrics and values to
-a Kafka Broker.
-The VES application uses Kafka Consumer to receive metrics from the Kafka
-Broker.
+Collectd has a ``write_kafka`` plugin that sends collectd metrics and values to
+a Kafka Broker. The VES application uses a Kafka consumer to receive these
+metrics from the Kafka Broker.
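+
+As a quick illustration of this flow: once the Kafka Broker is installed (see
+below) and collectd is publishing to it, the metrics can be watched on the
+topic with the console consumer shipped with Kafka (the topic name ``collectd``
+is an assumption matching the collectd ``write_kafka`` configuration used in
+this guide):
+
+.. code:: bash
+
+   $ kafka_2.11-0.11.0.0/bin/kafka-console-consumer.sh --zookeeper \
+     localhost:2181 --topic collectd --from-beginning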
-Installation Instructions:
---------------------------
-1. Clone this repo:
-
- .. code:: bash
-
- git clone https://gerrit.opnfv.org/gerrit/barometer
-
-2. Install collectd
-
- .. code:: bash
-
- $ sudo apt-get install collectd
-
- CentOS 7.x use:
-
- .. code:: bash
-
- $ sudo yum install collectd
-
- .. note:: You may need to add epel repository if the above does not work.
-
- .. code:: bash
-
- $ sudo yun install epel-release
-
-3. Modify the collectd configuration script: `/etc/collectd/collectd.conf`
-
- .. code:: bash
-
- <Plugin write_kafka>
- Property "metadata.broker.list" "localhost:9092"
- <Topic "collectd">
- Format JSON
- </Topic>
- </Plugin>
-
- .. note::
-
- The above configuration is for a single host setup. Simply change localhost to remote
- server IP addess or hostname.
Install Kafka Broker
--------------------
@@ -86,7 +44,9 @@ Install Kafka Broker
$ sudo yum install zookeeper
- .. note:: You may need to add the the repository that contains zookeeper
+   .. note:: You may need to add the repository that contains `zookeeper`.
+      To do so, follow the step below and then try to install `zookeeper`
+      again using the steps above; otherwise, skip the next step.
.. code:: bash
@@ -99,11 +59,20 @@ Install Kafka Broker
$ sudo zookeeper-server start
+   If you get an error message like ``ZooKeeper data directory is missing at /var/lib/zookeeper``
+   when starting zookeeper, initialize the zookeeper data directory using
+   the command below and start zookeeper again; otherwise, skip this step.
+
+ .. code:: bash
+
+ $ sudo /usr/lib/zookeeper/bin/zkServer-initialize.sh
+ No myid provided, be sure to specify it in /var/lib/zookeeper/myid if using non-standalone
+
To test if Zookeeper is running as a daemon.
.. code:: bash
- $ sudo telnet localhost 2181
+ $ telnet localhost 2181
Type 'ruok' & hit enter.
Expected response is 'imok'. Zookeeper is running fine.
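+
+   Alternatively, if ``telnet`` is not available, the same check can be done
+   with netcat (assuming the ``nc`` utility is installed):
+
+   .. code:: bash
+
+      $ echo ruok | nc localhost 2181
+      imok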
@@ -115,25 +84,26 @@ Install Kafka Broker
.. code:: bash
+ $ sudo yum install python-pip
$ sudo pip install kafka-python
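+
+   Optionally, verify that the library can be imported (the Python module
+   installed by ``kafka-python`` is called ``kafka``):
+
+   .. code:: bash
+
+      $ python -c "import kafka; print(kafka.__version__)"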
2. Download Kafka:
.. code:: bash
- $ sudo wget "http://www-eu.apache.org/dist/kafka/0.11.0.0/kafka_2.11-0.11.0.0.tgz"
+ $ wget "http://www-eu.apache.org/dist/kafka/0.11.0.0/kafka_2.11-0.11.0.0.tgz"
3. Extract the archive:
.. code:: bash
- $ sudo tar -xvzf kafka_2.11-0.11.0.0.tgz
+ $ tar -xvzf kafka_2.11-0.11.0.0.tgz
4. Configure Kafka Server:
.. code:: bash
- $ sudo vi kafka_2.11-0.11.0.0/config/server.properties
+ $ vi kafka_2.11-0.11.0.0/config/server.properties
By default Kafka does not allow you to delete topics. Please uncomment:
@@ -170,180 +140,209 @@ Install Kafka Broker
localhost:2181 --topic TopicTest --from-beginning
-VES application configuration description:
-------------------------------------------
+Install collectd
+----------------
-Within the VES directory there is a configuration file called 'ves_app.conf'.
+Install development tools:
-.. note:: Details of the Vendor Event Listener REST service
+.. code:: bash
-REST resources are defined with respect to a ServerRoot:
+ $ sudo yum group install 'Development Tools'
+
+.. The librdkafka installed via the yum package manager is 0.11.0, which doesn't
+   work with collectd (compilation issue). Thus, we have to use the library
+   installed from sources, using the latest stable version that works with collectd.
+
+Install Apache Kafka C/C++ client library:
.. code:: bash
- ServerRoot = https://{Domain}:{Port}/{optionalRoutingPath}
+ $ git clone https://github.com/edenhill/librdkafka.git ~/librdkafka
+ $ cd ~/librdkafka
+ $ git checkout -b v0.9.5 v0.9.5
+ $ ./configure --prefix=/usr
+ $ make
+ $ sudo make install
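+
+To verify that the library landed under the ``--prefix=/usr`` used above, you
+can list the installed shared objects (an optional sanity check, not a required
+step):
+
+.. code:: bash
+
+   $ ls /usr/lib/librdkafka*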
-REST resources are of the form:
+Build collectd with Kafka support:
.. code:: bash
- {ServerRoot}/eventListener/v{apiVersion}`
- {ServerRoot}/eventListener/v{apiVersion}/{topicName}`
- {ServerRoot}/eventListener/v{apiVersion}/eventBatch`
+ $ git clone https://github.com/collectd/collectd.git ~/collectd
+ $ cd ~/collectd
+ $ ./build.sh
+ $ ./configure --with-librdkafka=/usr --without-perl-bindings --enable-perl=no
+ $ make && sudo make install
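+
+Optionally, check that the ``write_kafka`` plugin was built and installed; the
+command below should list ``write_kafka.so`` (assuming the default
+``/opt/collectd`` prefix that the rest of this guide refers to):
+
+.. code:: bash
+
+   $ ls /opt/collectd/lib/collectd/ | grep write_kafka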
-**Domain** *"host"*
- VES domain name. It can be IP address or hostname of VES collector
- (default: `127.0.0.1`)
+Configure and start collectd. Create the collectd configuration file
+``/opt/collectd/etc/collectd.conf`` as follows:
-**Port** *port*
- VES port (default: `30000`)
+.. note:: The following collectd configuration file allows the user to run the
+   VES application in guest mode. To run the VES application in host mode,
+   please follow the `Configure VES in host mode`_ steps.
-**Path** *"path"*
- Used as the "optionalRoutingPath" element in the REST path (default: `empty`)
-
-**Topic** *"path"*
- Used as the "topicName" element in the REST path (default: `empty`)
+.. include:: collectd-ves-guest.conf
+ :code: bash
-**UseHttps** *true|false*
- Allow application to use HTTPS instead of HTTP (default: `false`)
+Start the collectd process as a service as described in :ref:`install-collectd-as-a-service`.
-**Username** *"username"*
- VES collector user name (default: `empty`)
+.. Start collectd process as a service as described in `Barometer User Guide
+ <http://artifacts.opnfv.org/barometer/docs/index.html#installing-collectd-as-a-service>`_.
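+
+Once collectd has been started, a quick way to confirm that the service is up
+(assuming it was installed as a systemd service as described in the referenced
+section) is:
+
+.. code:: bash
+
+   $ sudo systemctl status collectd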
-**Password** *"passwd"*
- VES collector password (default: `empty`)
-**FunctionalRole** *"role"*
- Used as the 'functionalRole' field of 'commonEventHeader' event (default:
- `Collectd VES Agent`)
+Setup VES Test Collector
+------------------------
-**SendEventInterval** *interval*
- This configuration option controls how often (sec) collectd data is sent to
- Vendor Event Listener (default: `20`)
+.. note:: Test Collector setup is required only for VES application testing
+ purposes.
-**ApiVersion** *version*
- Used as the "apiVersion" element in the REST path (default: `5.1`)
+Install dependencies:
-**KafkaPort** *port*
- Kafka Port (Default ``9092``)
+.. code:: bash
-**KafkaBroker** *host*
- Kafka Broker domain name. It can be an IP address or hostname of local or remote server
- (default: localhost)
+ $ sudo pip install jsonschema
-Other collectd.conf configurations
-----------------------------------
-Please ensure that FQDNLookup is set to false
+Clone VES Test Collector:
.. code:: bash
- FQDNLookup false
+ $ git clone https://github.com/att/evel-test-collector.git ~/evel-test-collector
-Please ensure that the virt plugin is enabled and configured as follows.
+Modify the VES Test Collector config file to point to a writable log file
+location and to the correct schema file:
.. code:: bash
- LoadPlugin virt
+ $ sed -i.back 's/^\(log_file[ ]*=[ ]*\).*/\1collector.log/' ~/evel-test-collector/config/collector.conf
+ $ sed -i.back 's/^\(schema_file[ ]*=.*\)event_format_updated.json$/\1CommonEventFormat.json/' ~/evel-test-collector/config/collector.conf
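+
+After these substitutions the relevant entries in ``collector.conf`` should
+look similar to the lines below (the directory part of the schema path is left
+unchanged by the ``sed`` command and depends on the repository layout):
+
+.. code:: bash
+
+   log_file = collector.log
+   schema_file = <unchanged path>/CommonEventFormat.json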
- <Plugin virt>
- Connection "qemu:///system"
- RefreshInterval 60
- HostnameFormat uuid
- PluginInstanceFormat name
- ExtraStats "cpu_util perf"
- </Plugin>
+Start VES Test Collector:
+.. code:: bash
-.. note:: For more detailed information on the `virt` plugin configuration,
- requirements etc., please see the userguide of the collectd virt plugin.
+ $ cd ~/evel-test-collector/code/collector
+ $ nohup python ./collector.py --config ../../config/collector.conf > collector.stdout.log &
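+
+To confirm that the Test Collector started correctly, inspect the log file
+produced by the ``nohup`` redirection above:
+
+.. code:: bash
+
+   $ tail -f collector.stdout.log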
+
+
+Setup VES application (guest mode)
+----------------------------------
-Please ensure that the cpu plugin is enabled and configured as follows
+Install dependencies:
.. code:: bash
- LoadPlugin cpu
+ $ sudo pip install pyyaml
- <Plugin cpu>
- ReportByCpu false
- ValuesPercentage true
- </Plugin>
+Clone Barometer repo:
+
+.. code:: bash
+
+ $ git clone https://gerrit.opnfv.org/gerrit/barometer ~/barometer
+ $ cd ~/barometer/3rd_party/collectd-ves-app/ves_app
+ $ nohup python ves_app.py --events-schema=guest.yaml --config=ves_app_config.conf > ves_app.stdout.log &
.. note::
- The ``ReportByCpu`` option should be set to `true` (default)
- if VES application is running on guest machine ('GuestRunning' = true).
+   The above configuration is for a localhost setup. The VES application can be
+   configured to use a remote VES collector and a remote Kafka server. To do
+   so, the IP addresses/host names need to be changed in the ``collector.conf``
+   and ``ves_app_config.conf`` files accordingly.
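+
+Once collectd and the VES application are both running, incoming VES events can
+be observed in the Test Collector log configured earlier (assuming the relative
+``log_file`` path resolves against the collector's working directory):
+
+.. code:: bash
+
+   $ tail -f ~/evel-test-collector/code/collector/collector.log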
-Please ensure that the aggregation plugin is enabled and configured as follows
-(required if 'GuestRunning' = true)
-.. code:: bash
+Configure VES in host mode
+--------------------------
- LoadPlugin aggregation
+Running the VES application in host mode is similar to the steps described in
+`Setup VES application (guest mode)`_, but with the following exceptions:
- <Plugin aggregation>
- <Aggregation>
- Plugin "cpu"
- Type "percent"
- GroupBy "Host"
- GroupBy "TypeInstance"
- SetPlugin "cpu-aggregation"
- CalculateAverage true
- </Aggregation>
- </Plugin>
+- The ``host.yaml`` configuration file should be used instead of the ``guest.yaml``
+  file when the VES application is running.
-If application is running on a guest side, it is important to enable uuid plugin
-too. In this case the hostname in event message will be represented as UUID
-instead of system host name.
+- Collectd should be running on the host machine only.
-.. code:: bash
+- Additional ``libvirtd`` dependencies need to be installed on the host where
+  the collectd daemon is running. To install those dependencies, see the
+  :ref:`virt-plugin` section of the Barometer user guide.
- LoadPlugin uuid
+- At least one VM instance should be up and running in the hypervisor on the host.
-If a custom UUID needs to be provided, the following configuration is required in collectd.conf
-file:
+- The following (minimum) configuration needs to be provided to collectd so that
+  it is able to generate VES messages for the VES collector.
-.. code:: bash
+ .. include:: collectd-ves-host.conf
+ :code: bash
- <Plugin uuid>
- UUIDFile "/etc/uuid"
- </Plugin>
+  To apply this configuration, the ``/opt/collectd/etc/collectd.conf`` file
+  needs to be modified based on the example above and the collectd daemon needs
+  to be restarted using the command below:
-Where "/etc/uuid" is a file containing custom UUID.
+ .. code:: bash
-Please also ensure that the following plugins are enabled:
+ $ sudo systemctl restart collectd
-.. code:: bash
+.. note:: The list of plugins can be extended depending on your needs.
- LoadPlugin disk
- LoadPlugin interface
- LoadPlugin memory
-VES application with collectd notifications example
----------------------------------------------------
+VES application configuration description
+-----------------------------------------
-A good example of collectD notification is monitoring of the total CPU usage on a VM
-using the 'threshold' plugin. The following configuration will setup VES plugin to send 'Fault'
-event every time a total VM CPU value is out of range (e.g.: WARNING: VM CPU TOTAL > 50%,
-CRITICAL: VM CPU TOTAL > 96%) and send 'Fault' NORMAL event if the CPU value is back
-to normal. In the example below, there is one VM with two CPUs configured which is running
-on the host with a total of 48 cores. Thus, the threshold value 2.08 (100/48) means that
-one CPU of the VM is fully loaded (e.g.: 50% of total CPU usage of the VM) and 4.0 means
-96% of total CPU usage of the VM. Those values can also be obtained by virt-top
-command line tool.
+**Details of the Vendor Event Listener REST service**
-.. code:: bash
+REST resources are defined with respect to a ``ServerRoot``::
+
+ ServerRoot = https://{Domain}:{Port}/{optionalRoutingPath}
+
+REST resources are of the form::
+
+   {ServerRoot}/eventListener/v{apiVersion}
+   {ServerRoot}/eventListener/v{apiVersion}/{topicName}
+   {ServerRoot}/eventListener/v{apiVersion}/eventBatch
+
+Within the VES directory (``3rd_party/collectd-ves-app/ves_app``) there is a
+configuration file called ``ves_app.conf``. The configuration options are
+described below:
+
+**Domain** *"host"*
+ VES domain name. It can be IP address or hostname of VES collector
+ (default: ``127.0.0.1``)
+
+**Port** *port*
+ VES port (default: ``30000``)
+
+**Path** *"path"*
+ Used as the "optionalRoutingPath" element in the REST path (default: empty)
+
+**Topic** *"path"*
+ Used as the "topicName" element in the REST path (default: empty)
+
+**UseHttps** *true|false*
+ Allow application to use HTTPS instead of HTTP (default: ``false``)
+
+**Username** *"username"*
+ VES collector user name (default: empty)
+
+**Password** *"passwd"*
+ VES collector password (default: empty)
+
+**SendEventInterval** *interval*
+ This configuration option controls how often (sec) collectd data is sent to
+ Vendor Event Listener (default: ``20``)
+
+**ApiVersion** *version*
+ Used as the "apiVersion" element in the REST path (default: ``5.1``)
+
+**KafkaPort** *port*
+  Kafka Port (default: ``9092``)
+
+**KafkaBroker** *host*
+ Kafka Broker domain name. It can be an IP address or hostname of local or remote server
+ (default: ``localhost``)
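+
+For illustration, a ``ves_app.conf`` using the defaults listed above might look
+like the sketch below. The INI-style ``[config]`` section is an assumption;
+please consult the sample configuration file shipped in
+``3rd_party/collectd-ves-app/ves_app`` for the exact syntax.
+
+.. code:: bash
+
+   [config]
+   Domain = 127.0.0.1
+   Port = 30000
+   UseHttps = false
+   SendEventInterval = 20
+   ApiVersion = 5.1
+   KafkaPort = 9092
+   KafkaBroker = localhost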
- LoadPlugin threshold
- <Plugin "threshold">
- <Plugin "virt">
- <Type "percent">
- WarningMax 2.08
- FailureMax 4.0
- Instance "virt_cpu_total"
- </Type>
- </Plugin>
- </Plugin>
+VES notification support
+------------------------
-More detailed information on how to configure collectD thresholds can be found at
-https://collectd.org/documentation/manpages/collectd-threshold.5.shtml
+The VES application already supports notification definitions in YAML, but due to
+collectd Kafka plugin limitations, collectd notifications cannot be received
+by the VES application. Thus, VES notifications (defined in YAML) will not be
+generated and sent to the VES collector.