Diffstat (limited to 'docs/release/userguide')
-rw-r--r--  docs/release/userguide/collectd.ves.userguide.rst   |   20
-rw-r--r--  docs/release/userguide/docker.userguide.rst         |  703
-rw-r--r--  docs/release/userguide/feature.userguide.rst        |  402
-rw-r--r--  docs/release/userguide/index.rst                    |   15
-rw-r--r--  docs/release/userguide/installguide.docker.rst      | 1045
-rw-r--r--  docs/release/userguide/installguide.oneclick.rst    |  410
-rw-r--r--  docs/release/userguide/ves-app-guest-mode.png       |  bin  6066 -> 26468 bytes
-rw-r--r--  docs/release/userguide/ves-app-host-mode.png        |  bin 21203 -> 18878 bytes
-rw-r--r--  docs/release/userguide/ves-app-hypervisor-mode.png  |  bin  8057 -> 30392 bytes
9 files changed, 1617 insertions, 978 deletions
diff --git a/docs/release/userguide/collectd.ves.userguide.rst b/docs/release/userguide/collectd.ves.userguide.rst index 8b666114..2d3760b8 100644 --- a/docs/release/userguide/collectd.ves.userguide.rst +++ b/docs/release/userguide/collectd.ves.userguide.rst @@ -1,6 +1,7 @@ .. This work is licensed under a Creative Commons Attribution 4.0 International License. .. http://creativecommons.org/licenses/by/4.0 -.. (c) OPNFV, Intel Corporation and others. +.. (c) Anuket, Intel Corporation and others. +.. _barometer-ves-userguide: ========================== VES Application User Guide @@ -208,7 +209,7 @@ Clone Barometer repo and start the VES application: $ git clone https://gerrit.opnfv.org/gerrit/barometer $ cd barometer/3rd_party/collectd-ves-app/ves_app - $ nohup python ves_app.py --events-schema=guest.yaml --config=ves_app_config.conf > ves_app.stdout.log & + $ nohup python ves_app.py --events-schema=yaml/guest.yaml --config=config/ves_app_config.conf > ves_app.stdout.log & Modify Collectd configuration file ``collectd.conf`` as following: @@ -291,7 +292,7 @@ Clone Barometer repo and start the VES application: $ git clone https://gerrit.opnfv.org/gerrit/barometer $ cd barometer/3rd_party/collectd-ves-app/ves_app - $ nohup python ves_app.py --events-schema=host.yaml --config=ves_app_config.conf > ves_app.stdout.log & + $ nohup python ves_app.py --events-schema=yaml/host.yaml --config=config/ves_app_config.conf > ves_app.stdout.log & .. figure:: ves-app-host-mode.png @@ -316,6 +317,7 @@ Start collectd process as a service as described in :ref:`install-collectd-as-a- .. note:: The list of the plugins can be extented depends on your needs. +.. 
_Setup VES Test Collector: Setup VES Test Collector ------------------------ @@ -366,7 +368,7 @@ REST resources are of the form:: {ServerRoot}/eventListener/v{apiVersion}/{topicName}` {ServerRoot}/eventListener/v{apiVersion}/eventBatch` -Within the VES directory (``3rd_party/collectd-ves-app/ves_app``) there is a +Within the VES directory (``3rd_party/collectd-ves-app/ves_app/config``) there is a configuration file called ``ves_app_conf.conf``. The description of the configuration options are described below: @@ -930,13 +932,13 @@ Limitations definition and the format is descibed in the document. -.. _collectd: http://collectd.org/ +.. _collectd: https://collectd.org/ .. _Kafka: https://kafka.apache.org/ -.. _`VES`: https://wiki.opnfv.org/display/fastpath/VES+plugin+updates +.. _`VES`: https://wiki.anuket.io/display/HOME/VES+plugin+updates .. _`VES shema definition`: https://gerrit.onap.org/r/gitweb?p=demo.git;a=tree;f=vnfs/VES5.0/evel/evel-test-collector/docs/att_interface_definition;hb=refs/heads/master .. _`PyYAML documentation`: https://pyyaml.org/wiki/PyYAMLDocumentation -.. _`collectd plugin description`: https://github.com/collectd/collectd/blob/master/src/collectd.conf.pod -.. _`collectd data types file`: https://github.com/collectd/collectd/blob/master/src/types.db -.. _`collectd data types description`: https://github.com/collectd/collectd/blob/master/src/types.db.pod +.. _`collectd plugin description`: https://github.com/collectd/collectd/blob/main/src/collectd.conf.pod +.. _`collectd data types file`: https://github.com/collectd/collectd/blob/main/src/types.db +.. _`collectd data types description`: https://github.com/collectd/collectd/blob/main/src/types.db.pod .. _`python regular expression syntax`: https://docs.python.org/2/library/re.html#regular-expression-syntax .. 
_`Kafka collectd plugin`: https://collectd.org/wiki/index.php/Plugin:Write_Kafka diff --git a/docs/release/userguide/docker.userguide.rst b/docs/release/userguide/docker.userguide.rst deleted file mode 100644 index 33e060af..00000000 --- a/docs/release/userguide/docker.userguide.rst +++ /dev/null @@ -1,703 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International License. -.. http://creativecommons.org/licenses/by/4.0 -.. (c) <optionally add copywriters name> - -=================================== -OPNFV Barometer Docker User Guide -=================================== - -.. contents:: - :depth: 3 - :local: - -The intention of this user guide is to outline how to install and test the Barometer project's -docker images. The `OPNFV docker hub <https://hub.docker.com/u/opnfv/?page=1>`_ contains 5 docker -images from the Barometer project: - - 1. `Collectd docker image <https://hub.docker.com/r/opnfv/barometer-collectd/>`_ - 2. `Influxdb docker image <https://hub.docker.com/r/opnfv/barometer-influxdb/>`_ - 3. `Grafana docker image <https://hub.docker.com/r/opnfv/barometer-grafana/>`_ - 4. `Kafka docker image <https://hub.docker.com/r/opnfv/barometer-kafka/>`_ - 5. 
`VES application docker image <https://hub.docker.com/r/opnfv/barometer-ves/>`_ - -For description of images please see section `Barometer Docker Images Description`_ - -For steps to build and run Collectd image please see section `Build and Run Collectd Docker Image`_ - -For steps to build and run InfluxDB and Grafana images please see section `Build and Run InfluxDB and Grafana Docker Images`_ - -For steps to build and run VES and Kafka images please see section `Build and Run VES and Kafka Docker Images`_ - -For overview of running VES application with Kafka please see the `VES Application User Guide -<http://docs.opnfv.org/en/latest/submodules/barometer/docs/release/userguide/collectd.ves.userguide.html>`_ - -Barometer Docker Images Description ------------------------------------ - -.. Describe the specific features and how it is realised in the scenario in a brief manner -.. to ensure the user understand the context for the user guide instructions to follow. - -Barometer Collectd Image -^^^^^^^^^^^^^^^^^^^^^^^^ -The barometer collectd docker image gives you a collectd installation that includes all -the barometer plugins. - -.. note:: - The Dockerfile is available in the docker/barometer-collectd directory in the barometer repo. - The Dockerfile builds a CentOS 7 docker image. - The container MUST be run as a privileged container. - -Collectd is a daemon which collects system performance statistics periodically -and provides a variety of mechanisms to publish the collected metrics. It -supports more than 90 different input and output plugins. Input plugins -retrieve metrics and publish them to the collectd deamon, while output plugins -publish the data they receive to an end point. Collectd also has infrastructure -to support thresholding and notification. 
- -Collectd docker image has enabled the following collectd plugins (in addition -to the standard collectd plugins): - -* hugepages plugin -* Open vSwitch events Plugin -* Open vSwitch stats Plugin -* mcelog plugin -* PMU plugin -* RDT plugin -* virt -* SNMP Agent -* Kafka_write plugin - -Plugins and third party applications in Barometer repository that will be available in the -docker image: - -* Open vSwitch PMD stats -* ONAP VES application -* gnocchi plugin -* aodh plugin -* Legacy/IPMI - -InfluxDB + Grafana Docker Images -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -The Barometer project's InfluxDB and Grafana docker images are 2 docker images that database and graph -statistics reported by the Barometer collectd docker. InfluxDB is an open-source time series database -tool which stores the data from collectd for future analysis via Grafana, which is a open-source -metrics anlytics and visualisation suite which can be accessed through any browser. - -VES + Kafka Docker Images -^^^^^^^^^^^^^^^^^^^^^^^^^ - -The Barometer project's VES application and Kafka docker images are based on a CentOS 7 image. Kafka -docker image has a dependancy on `Zookeeper <https://zookeeper.apache.org/>`_. Kafka must be able to -connect and register with an instance of Zookeeper that is either running on local or remote host. -Kafka recieves and stores metrics recieved from Collectd. VES application pulls latest metrics from Kafka -which it normalizes into VES format for sending to a VES collector. Please see details in `VES Application User Guide -<http://docs.opnfv.org/en/latest/submodules/barometer/docs/release/userguide/collectd.ves.userguide.html>`_ - -Installing Docker ------------------ -.. Describe the specific capabilities and usage for <XYZ> feature. -.. Provide enough information that a user will be able to operate the feature on a deployed scenario. - -On Ubuntu -^^^^^^^^^^ -.. note:: - * sudo permissions are required to install docker. 
- * These instructions are for Ubuntu 16.10 - -To install docker: - -.. code:: bash - - $ sudo apt-get install curl - $ sudo curl -fsSL https://get.docker.com/ | sh - $ sudo usermod -aG docker <username> - $ sudo systemctl status docker - -Replace <username> above with an appropriate user name. - -On CentOS -^^^^^^^^^^ -.. note:: - * sudo permissions are required to install docker. - * These instructions are for CentOS 7 - -To install docker: - -.. code:: bash - - $ sudo yum remove docker docker-common docker-selinux docker-engine - $ sudo yum install -y yum-utils device-mapper-persistent-data lvm2 - $ sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo - $ sudo yum-config-manager --enable docker-ce-edge - $ sudo yum-config-manager --enable docker-ce-test - $ sudo yum install docker-ce - $ sudo usermod -aG docker <username> - $ sudo systemctl status docker - -Replace <username> above with an appropriate user name. - -.. note:: - If this is the first time you are installing a package from a recently added - repository, you will be prompted to accept the GPG key, and the key’s - fingerprint will be shown. Verify that the fingerprint is correct, and if so, - accept the key. The fingerprint should match060A 61C5 1B55 8A7F 742B 77AA C52F - EB6B 621E 9F35. - - Retrieving key from https://download.docker.com/linux/centos/gpg - Importing GPG key 0x621E9F35: - Userid : "Docker Release (CE rpm) <docker@docker.com>" - Fingerprint: 060a 61c5 1b55 8a7f 742b 77aa c52f eb6b 621e 9f35 - From : https://download.docker.com/linux/centos/gpg - Is this ok [y/N]: y - -Proxy Configuration: -^^^^^^^^^^^^^^^^^^^^ -.. note:: - This applies for both CentOS and Ubuntu. - -If you are behind an HTTP or HTTPS proxy server, you will need to add this -configuration in the Docker systemd service file. - -1. Create a systemd drop-in directory for the docker service: - -.. code:: bash - - $ sudo mkdir -p /etc/systemd/system/docker.service.d - -2. 
Create a file -called /etc/systemd/system/docker.service.d/http-proxy.conf that adds -the HTTP_PROXY environment variable: - -.. code:: bash - - [Service] - Environment="HTTP_PROXY=http://proxy.example.com:80/" - -Or, if you are behind an HTTPS proxy server, create a file -called /etc/systemd/system/docker.service.d/https-proxy.conf that adds -the HTTPS_PROXY environment variable: - -.. code:: bash - - [Service] - Environment="HTTPS_PROXY=https://proxy.example.com:443/" - -Or create a single file with all the proxy configurations: -/etc/systemd/system/docker.service.d/proxy.conf - -.. code:: bash - - [Service] - Environment="HTTP_PROXY=http://proxy.example.com:80/" - Environment="HTTPS_PROXY=https://proxy.example.com:443/" - Environment="FTP_PROXY=ftp://proxy.example.com:443/" - Environment="NO_PROXY=localhost" - -3. Flush changes: - -.. code:: bash - - $ sudo systemctl daemon-reload - -4. Restart Docker: - -.. code:: bash - - $ sudo systemctl restart docker - -5. Check docker environment variables: - -.. code:: bash - - sudo systemctl show --property=Environment docker - -Test docker installation -^^^^^^^^^^^^^^^^^^^^^^^^ -.. note:: - This applies for both CentOS and Ubuntu. - -.. code:: bash - - $ sudo docker run hello-world - -The output should be something like: - -.. code:: bash - - Unable to find image 'hello-world:latest' locally - latest: Pulling from library/hello-world - 5b0f327be733: Pull complete - Digest: sha256:07d5f7800dfe37b8c2196c7b1c524c33808ce2e0f74e7aa00e603295ca9a0972 - Status: Downloaded newer image for hello-world:latest - - Hello from Docker! - This message shows that your installation appears to be working correctly. - - To generate this message, Docker took the following steps: - 1. The Docker client contacted the Docker daemon. - 2. The Docker daemon pulled the "hello-world" image from the Docker Hub. - 3. 
The Docker daemon created a new container from that image which runs the - executable that produces the output you are currently reading. - 4. The Docker daemon streamed that output to the Docker client, which sent it - to your terminal. - -To try something more ambitious, you can run an Ubuntu container with: - -.. code:: bash - - $ docker run -it ubuntu bash - -Build and Run Collectd Docker Image ------------------------------------ - -Download the collectd docker image -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -If you wish to use a pre-built barometer image, you can pull the barometer -image from https://hub.docker.com/r/opnfv/barometer-collectd/ - -.. code:: bash - - $ docker pull opnfv/barometer-collectd - -Build the collectd docker image -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -.. code:: bash - - $ git clone https://gerrit.opnfv.org/gerrit/barometer - $ cd barometer/docker/barometer-collectd - $ sudo docker build -t opnfv/barometer-collectd --build-arg http_proxy=`echo $http_proxy` \ - --build-arg https_proxy=`echo $https_proxy` -f Dockerfile . - -.. note:: - In the above mentioned ``docker build`` command, http_proxy & https_proxy arguments needs to be - passed only if system is behind an HTTP or HTTPS proxy server. - -Check the docker images: - -.. code:: bash - - $ sudo docker images - -Output should contain a barometer-collectd image: - -.. code:: - - REPOSITORY TAG IMAGE ID CREATED SIZE - opnfv/barometer-collectd latest 05f2a3edd96b 3 hours ago 1.2GB - centos 7 196e0ce0c9fb 4 weeks ago 197MB - centos latest 196e0ce0c9fb 4 weeks ago 197MB - hello-world latest 05a3bd381fc2 4 weeks ago 1.84kB - -Run the collectd docker image -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -.. code:: bash - - $ sudo docker run -tid --net=host -v `pwd`/../src/collectd_sample_configs:/opt/collectd/etc/collectd.conf.d \ - -v /var/run:/var/run -v /tmp:/tmp --privileged opnfv/barometer-collectd /run_collectd.sh - -.. note:: - The docker collectd image contains configuration for all the collectd plugins. 
In the command - above we are overriding /opt/collectd/etc/collectd.conf.d by mounting a host directory - `pwd`/../src/collectd_sample_configs that contains only the sample configurations we are interested - in running. *It's important to do this if you don't have DPDK, or RDT installed on the host*. - Sample configurations can be found at: - https://github.com/opnfv/barometer/tree/master/src/collectd/collectd_sample_configs - -Check your docker image is running - -.. code:: bash - - sudo docker ps - -To make some changes when the container is running run: - -.. code:: bash - - sudo docker exec -ti <CONTAINER ID> /bin/bash - -Build and Run InfluxDB and Grafana docker images ------------------------------------------------- - -Overview -^^^^^^^^ -The barometer-influxdb image is based on the influxdb:1.3.7 image from the influxdb dockerhub. To -view detils on the base image please visit -`https://hub.docker.com/_/influxdb/ <https://hub.docker.com/_/influxdb/>`_ Page includes details of -exposed ports and configurable enviromental variables of the base image. - -The barometer-grafana image is based on grafana:4.6.3 image from the grafana dockerhub. To view -details on the base image please visit -`https://hub.docker.com/r/grafana/grafana/ <https://hub.docker.com/r/grafana/grafana/>`_ Page -includes details on exposed ports and configurable enviromental variables of the base image. - -The barometer-grafana image includes pre-configured source and dashboards to display statistics exposed -by the barometer-collectd image. The default datasource is an influxdb database running on localhost -but the address of the influxdb server can be modified when launching the image by setting the -environmental variables influxdb_host to IP or hostname of host on which influxdb server is running. - -Additional dashboards can be added to barometer-grafana by mapping a volume to /opt/grafana/dashboards. 
-Incase where a folder is mounted to this volume only files included in this folder will be visible -inside barometer-grafana. To ensure all default files are also loaded please ensure they are included in -volume folder been mounted. Appropriate example are given in section `Run the Grafana docker image`_ - -Download the InfluxDB and Grafana docker images -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -If you wish to use pre-built barometer project's influxdb and grafana images, you can pull the -images from https://hub.docker.com/r/opnfv/barometer-influxdb/ and https://hub.docker.com/r/opnfv/barometer-grafana/ - -.. note:: - If your preference is to build images locally please see sections `Build InfluxDB Docker Image`_ and - `Build Grafana Docker Image`_ - -.. code:: bash - - $ docker pull opnfv/barometer-influxdb - $ docker pull opnfv/barometer-grafana - -.. note:: - If you have pulled the pre-built barometer-influxdb and barometer-grafana images there is no - requirement to complete steps outlined in sections `Build InfluxDB Docker Image`_ and - `Build Grafana Docker Image`_ and you can proceed directly to section - `Run the Influxdb and Grafana Images`_ If you wish to run the barometer-influxdb and - barometer-grafana images via Docker Compose proceed directly to section - `Docker Compose`_. - -Build InfluxDB docker image -^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -Build influxdb image from Dockerfile - -.. code:: bash - - $ cd barometer/docker/barometer-influxdb - $ sudo docker build -t opnfv/barometer-influxdb --build-arg http_proxy=`echo $http_proxy` \ - --build-arg https_proxy=`echo $https_proxy` -f Dockerfile . - -.. note:: - In the above mentioned ``docker build`` command, http_proxy & https_proxy arguments needs to - be passed only if system is behind an HTTP or HTTPS proxy server. - -Check the docker images: - -.. code:: bash - - $ sudo docker images - -Output should contain an influxdb image: - -.. 
code:: - - REPOSITORY TAG IMAGE ID CREATED SIZE - opnfv/barometer-influxdb latest 1e4623a59fe5 3 days ago 191MB - -Build Grafana docker image -^^^^^^^^^^^^^^^^^^^^^^^^^^ - -Build Grafana image from Dockerfile - -.. code:: bash - - $ cd barometer/docker/barometer-grafana - $ sudo docker build -t opnfv/barometer-grafana --build-arg http_proxy=`echo $http_proxy` \ - --build-arg https_proxy=`echo $https_proxy` -f Dockerfile . - -.. note:: - In the above mentioned ``docker build`` command, http_proxy & https_proxy arguments needs to - be passed only if system is behind an HTTP or HTTPS proxy server. - -Check the docker images: - -.. code:: bash - - $ sudo docker images - -Output should contain an influxdb image: - -.. code:: - - REPOSITORY TAG IMAGE ID CREATED SIZE - opnfv/barometer-grafana latest 05f2a3edd96b 3 hours ago 1.2GB - -Run the Influxdb and Grafana Images ------------------------------------ - -Run the InfluxDB docker image -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -.. code:: bash - - $ sudo docker run -tid --net=host -v /var/lib/influxdb:/var/lib/influxdb -p 8086:8086 -p 25826:25826 opnfv/barometer-influxdb - -Check your docker image is running - -.. code:: bash - - sudo docker ps - -To make some changes when the container is running run: - -.. code:: bash - - sudo docker exec -ti <CONTAINER ID> /bin/bash - -Run the Grafana docker image -^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -Connecting to an influxdb instance running on local system and adding own custom dashboards - -.. code:: bash - - $ sudo docker run -tid --net=host -v /var/lib/grafana:/var/lib/grafana -v ${PWD}/dashboards:/opt/grafana/dashboards \ - -p 3000:3000 opnfv/barometer-grafana - -Connecting to an influxdb instance running on remote system with hostname of someserver and IP address -of 192.168.121.111 - -.. 
code:: bash - - $ sudo docker run -tid --net=host -v /var/lib/grafana:/var/lib/grafana -p 3000:3000 -e \ - influxdb_host=someserver --add-host someserver:192.168.121.111 opnfv/barometer-grafana - -Check your docker image is running - -.. code:: bash - - sudo docker ps - -To make some changes when the container is running run: - -.. code:: bash - - sudo docker exec -ti <CONTAINER ID> /bin/bash - -Connect to <host_ip>:3000 with a browser and log into grafana: admin/admin - - -Build and Run VES and Kafka Docker Images ------------------------------------------- - -Download VES and Kafka docker images -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -If you wish to use pre-built barometer project's VES and kafka images, you can pull the -images from https://hub.docker.com/r/opnfv/barometer-ves/ and https://hub.docker.com/r/opnfv/barometer-kafka/ - -.. note:: - If your preference is to build images locally please see sections `Build the Kafka Image`_ and - `Build VES Image`_ - -.. code:: bash - - $ docker pull opnfv/barometer-kafka - $ docker pull opnfv/barometer-ves - -.. note:: - If you have pulled the pre-built images there is no requirement to complete steps outlined - in sections `Build Kafka Docker Image`_ and `Build VES Docker Image`_ and you can proceed directly to section - `Run Kafka Docker Image`_ If you wish to run the docker images via Docker Compose proceed directly to section `Docker Compose`_. - -Build Kafka docker image -^^^^^^^^^^^^^^^^^^^^^^^^ - -Build Kafka docker image: - -.. code:: bash - - $ cd barometer/docker/barometer-kafka - $ sudo docker build -t opnfv/barometer-kafka --build-arg http_proxy=`echo $http_proxy` \ - --build-arg https_proxy=`echo $https_proxy` -f Dockerfile . - -.. note:: - In the above mentioned ``docker build`` command, http_proxy & https_proxy arguments needs - to be passed only if system is behind an HTTP or HTTPS proxy server. - -Check the docker images: - -.. 
code:: bash - - $ sudo docker images - -Output should contain a barometer image: - -.. code:: - - REPOSITORY TAG IMAGE ID CREATED SIZE - opnfv/barometer-kafka latest 05f2a3edd96b 3 hours ago 1.2GB - -Build VES docker image -^^^^^^^^^^^^^^^^^^^^^^ - -Build VES application docker image: - -.. code:: bash - - $ cd barometer/docker/barometer-ves - $ sudo docker build -t opnfv/barometer-ves --build-arg http_proxy=`echo $http_proxy` \ - --build-arg https_proxy=`echo $https_proxy` -f Dockerfile . - -.. note:: - In the above mentioned ``docker build`` command, http_proxy & https_proxy arguments needs - to be passed only if system is behind an HTTP or HTTPS proxy server. - -Check the docker images: - -.. code:: bash - - $ sudo docker images - -Output should contain a barometer image: - -.. code:: - - REPOSITORY TAG IMAGE ID CREATED SIZE - opnfv/barometer-ves latest 05f2a3edd96b 3 hours ago 1.2GB - -Run Kafka docker image -^^^^^^^^^^^^^^^^^^^^^^ - -.. note:: - Before running Kafka an instance of Zookeeper must be running for the Kafka broker to register - with. Zookeeper can be running locally or on a remote platform. Kafka's broker_id and address of - its zookeeper instance can be configured by setting values for environmental variables 'broker_id' - and 'zookeeper_node'. In instance where 'broker_id' and/or 'zookeeper_node' is not set the default - setting of broker_id=0 and zookeeper_node=localhost is used. In intance where Zookeeper is running - on same node as Kafka and there is a one to one relationship between Zookeeper and Kafka, default - setting can be used. The docker argument `add-host` adds hostname and IP address to - /etc/hosts file in container - -Run zookeeper docker image: - -.. code:: bash - - $ sudo docker run -tid --net=host -p 2181:2181 zookeeper:3.4.11 - -Run kafka docker image which connects with a zookeeper instance running on same node with a 1:1 relationship - -.. 
code:: bash - - $ sudo docker run -tid --net=host -p 9092:9092 opnfv/barometer-kafka - - -Run kafka docker image which connects with a zookeeper instance running on a node with IP address of -192.168.121.111 using broker ID of 1 - -.. code:: bash - - $ sudo docker run -tid --net=host -p 9092:9092 --env broker_id=1 --env zookeeper_node=zookeeper --add-host \ - zookeeper:192.168.121.111 opnfv/barometer-kafka - -Run VES Application docker image -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -.. note:: - VES application uses configuration file ves_app_config.conf from directory - barometer/3rd_party/collectd-ves-app/ves_app/config/ and host.yaml file from - barometer/3rd_party/collectd-ves-app/ves_app/yaml/ by default. If you wish to use a custom config - file it should be mounted to mount point /opt/ves/config/ves_app_config.conf. To use an alternative yaml - file from folder barometer/3rd_party/collectd-ves-app/ves_app/yaml the name of the yaml file to use - should be passed as an additional command. If you wish to use a custom file the file should be - mounted to mount point /opt/ves/yaml/ Please see examples below - -Run VES docker image with default configuration - -.. code:: bash - - $ sudo docker run -tid --net=host opnfv/barometer-ves - -Run VES docker image with guest.yaml files from barometer/3rd_party/collectd-ves-app/ves_app/yaml/ - -.. code:: bash - - $ sudo docker run -tid --net=host opnfv/barometer-ves guest.yaml - - -Run VES docker image with using custom config and yaml files. In example below yaml/ folder cotains -file named custom.yaml - -.. code:: bash - - $ sudo docker run -tid --net=host -v ${PWD}/custom.config:/opt/ves/config/ves_app_config.conf \ - -v ${PWD}/yaml/:/opt/ves/yaml/ opnfv/barometer-ves custom.yaml - -Docker Compose --------------- - -Install docker-compose -^^^^^^^^^^^^^^^^^^^^^^ - -On the node where you want to run influxdb + grafana or the node where you want to run the VES app -zookeeper and Kafka containers together: - -.. 
note:: - The default configuration for all these containers is to run on the localhost. If this is not - the model you want to use then please make the appropriate configuration changes before launching - the docker containers. - -1. Start by installing docker compose - -.. code:: bash - - $ sudo curl -L https://github.com/docker/compose/releases/download/1.17.0/docker-compose-`uname -s`-`uname -m` -o /usr/bin/docker-compose - -.. note:: - Use the latest Compose release number in the download command. The above command is an example, - and it may become out-of-date. To ensure you have the latest version, check the Compose repository - release page on GitHub. - -2. Apply executable permissions to the binary: - -.. code:: bash - - $ sudo chmod +x /usr/bin/docker-compose - -3. Test the installation. - -.. code:: bash - - $ sudo docker-compose --version - -Run the InfluxDB and Grafana containers using docker compose -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -Launch containers: - -.. code:: bash - - $ cd barometer/docker/compose/influxdb-grafana/ - $ sudo docker-compose up -d - -Check your docker images are running - -.. code:: bash - - $ sudo docker ps - -Connect to <host_ip>:3000 with a browser and log into grafana: admin/admin - -Run the Kafka, zookeeper and VES containers using docker compose -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -Launch containers: - -.. code:: bash - - $ cd barometer/docker/compose/ves/ - $ sudo docker-compose up -d - -Check your docker images are running - -.. code:: bash - - $ sudo docker ps - -Testing the docker image -^^^^^^^^^^^^^^^^^^^^^^^^ -TODO - -References -^^^^^^^^^^^ -.. [1] https://docs.docker.com/engine/admin/systemd/#httphttps-proxy -.. [2] https://docs.docker.com/engine/installation/linux/docker-ce/centos/#install-using-the-repository -.. 
[3] https://docs.docker.com/engine/userguide/ - - diff --git a/docs/release/userguide/feature.userguide.rst b/docs/release/userguide/feature.userguide.rst index 55a248b9..2750bd8d 100644 --- a/docs/release/userguide/feature.userguide.rst +++ b/docs/release/userguide/feature.userguide.rst @@ -1,10 +1,12 @@ +.. _feature-userguide: + .. This work is licensed under a Creative Commons Attribution 4.0 International License. .. http://creativecommons.org/licenses/by/4.0 -.. (c) <optionally add copywriters name> +.. (c) Anuket and others -=================================== -OPNFV Barometer User Guide -=================================== +=========================== +Anuket Barometer User Guide +=========================== Barometer collectd plugins description --------------------------------------- @@ -20,11 +22,15 @@ to support thresholding and notification. Barometer has enabled the following collectd plugins: -* *dpdkstat plugin*: A read plugin that retrieves stats from the DPDK extended - NIC stats API. +* *dpdk_telemetry plugin*: A read plugin to collect dpdk interface stats and + application or global stats from dpdk telemetry library. The ``dpdk_telemetry`` + plugin provides both DPDK NIC Stats and DPDK application stats. + This plugin doesn't deal with dpdk events. + The mimimum dpdk version required to use this plugin is 19.08. -* *dpdkevents plugin*: A read plugin that retrieves DPDK link status and DPDK - forwarding cores liveliness status (DPDK Keep Alive). +.. note:: + The ``dpdk_telemetry`` plugin should only be used if your dpdk application + doesn't already have more relevant metrics available (e.g.ovs_stats). * `gnocchi plugin`_: A write plugin that pushes the retrieved stats to Gnocchi. It's capable of pushing any stats read through collectd to @@ -62,12 +68,13 @@ Barometer has enabled the following collectd plugins: from collectd and translates requested values from collectd's internal format to SNMP format. 
Supports SNMP: get, getnext and walk requests. -All the plugins above are available on the collectd master, except for the -Gnocchi and Aodh plugins as they are Python-based plugins and only C plugins -are accepted by the collectd community. The Gnocchi and Aodh plugins live in -the OpenStack repositories. +All the plugins above are available on the collectd main branch, except for +the Gnocchi and Aodh plugins as they are Python-based plugins and only C +plugins are accepted by the collectd community. The Gnocchi and Aodh plugins +live in the OpenStack repositories. -Other plugins existing as a pull request into collectd master: +.. TODO: Update this to reflect merging of these PRs +Other plugins existing as a pull request into collectd main: * *Legacy/IPMI*: A read plugin that reports platform thermals, voltages, fanspeed, current, flow, power etc. Also, the plugin monitors Intelligent @@ -91,19 +98,16 @@ Read Plugins/application: Intel RDT plugin, virt plugin, Open vSwitch stats plug Open vSwitch PMD stats application. Collectd capabilities and usage ------------------------------------- +------------------------------- .. Describe the specific capabilities and usage for <XYZ> feature. .. Provide enough information that a user will be able to operate the feature on a deployed scenario. -.. note:: Plugins included in the OPNFV E release will be built-in for Apex integration - and can be configured as shown in the examples below. - - The collectd plugins in OPNFV are configured with reasonable defaults, but can - be overridden. +The collectd plugins in Anuket are configured with reasonable defaults, but can +be overridden. Building all Barometer upstreamed plugins from scratch ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -The plugins that have been merged to the collectd master branch can all be +The plugins that have been merged to the collectd main branch can all be built and configured through the barometer repository. .. 
note:: @@ -136,12 +140,12 @@ Sample configuration files can be found in '/opt/collectd/etc/collectd.conf.d' By default, `collectd_exec` user is used in the exec.conf provided in the sample configurations directory under src/collectd in the Barometer repo. These scripts *DO NOT* create this user. You need to create this user or modify the configuration in the sample configurations directory - under src/collectd to use another existing non root user before running build_base_machine.sh. + under src/collectd to use another existing non root user before running build_base_machine.sh. .. note:: If you are using any Open vSwitch plugins you need to run: -.. code:: bash + .. code:: bash $ sudo ovs-vsctl set-manager ptcp:6640 @@ -160,18 +164,18 @@ collectd, check out the `collectd-openstack-plugins GSG`_. Below is the per plugin installation and configuration guide, if you only want to install some/particular plugins. -DPDK plugins -^^^^^^^^^^^^^ +DPDK telemetry plugin +^^^^^^^^^^^^^^^^^^^^^ Repo: https://github.com/collectd/collectd -Branch: master +Branch: main -Dependencies: DPDK (http://dpdk.org/) +Dependencies: `DPDK <https://www.dpdk.org/>`_ (runtime), libjansson (compile-time) -.. note:: DPDK statistics plugin requires DPDK version 16.04 or later. +.. note:: DPDK telemetry plugin requires DPDK version 19.08 or later. To build and install DPDK to /usr please see: -https://github.com/collectd/collectd/blob/master/docs/BUILD.dpdkstat.md +https://github.com/collectd/collectd/blob/main/docs/BUILD.dpdkstat.md Building and installing collectd: @@ -184,83 +188,35 @@ Building and installing collectd: $ make $ sudo make install -.. note:: If DPDK was installed in a non standard location you will need to - specify paths to the header files and libraries using *LIBDPDK_CPPFLAGS* and - *LIBDPDK_LDFLAGS*. You will also need to add the DPDK library symbols to the - shared library path using *ldconfig*. Note that this update to the shared - library path is not persistant (i.e. 
it will not survive a reboot). - -Example of specifying custom paths to DPDK headers and libraries: - -.. code:: bash - - $ ./configure LIBDPDK_CPPFLAGS="path to DPDK header files" LIBDPDK_LDFLAGS="path to DPDK libraries" - This will install collectd to default folder ``/opt/collectd``. The collectd configuration file (``collectd.conf``) can be found at ``/opt/collectd/etc``. -To configure the dpdkstats plugin you need to modify the configuration file to -include: - -.. code:: bash - - LoadPlugin dpdkstat - <Plugin dpdkstat> - Coremask "0xf" - ProcessType "secondary" - FilePrefix "rte" - EnabledPortMask 0xffff - PortName "interface1" - PortName "interface2" - </Plugin> - -To configure the dpdkevents plugin you need to modify the configuration file to +To configure the dpdk_telemetry plugin you need to modify the configuration file to include: .. code:: bash - <LoadPlugin dpdkevents> - Interval 1 - </LoadPlugin> - - <Plugin "dpdkevents"> - <EAL> - Coremask "0x1" - MemoryChannels "4" - FilePrefix "rte" - </EAL> - <Event "link_status"> - SendEventsOnUpdate false - EnabledPortMask 0xffff - SendNotification true - </Event> - <Event "keep_alive"> - SendEventsOnUpdate false - LCoreMask "0xf" - KeepAliveShmName "/dpdk_keepalive_shm_name" - SendNotification true - </Event> + LoadPlugin dpdk_telemetry + <Plugin dpdk_telemetry> + #ClientSocketPath "/var/run/.client" + #DpdkSocketPath "/var/run/dpdk/rte/telemetry" </Plugin> -.. note:: Currently, the DPDK library doesn’t support API to de-initialize - the DPDK resources allocated on the initialization. It means, the collectd - plugin will not be able to release the allocated DPDK resources - (locks/memory/pci bindings etc.) correctly on collectd shutdown or reinitialize - the DPDK library if primary DPDK process is restarted. The only way to release - those resources is to terminate the process itself. For this reason, the plugin - forks off a separate collectd process. 
This child process becomes a secondary
-  DPDK process which can be run on specific CPU cores configured by user through
-  collectd configuration file (“Coremask” EAL configuration option, the
-  hexadecimal bitmask of the cores to run on).
+The plugin uses default values (as shown) for the socket paths; if you use different values,
+uncomment and update ``ClientSocketPath`` and ``DpdkSocketPath`` as required.

 For more information on the plugin parameters, please see:
-https://github.com/collectd/collectd/blob/master/src/collectd.conf.pod
+https://github.com/collectd/collectd/blob/main/src/collectd.conf.pod

-.. note:: dpdkstat plugin initialization time depends on read interval. It
-   requires 5 read cycles to set up internal buffers and states, during that time
-   no statistics are submitted. Also, if plugin is running and the number of DPDK
-   ports is increased, internal buffers are resized. That requires 3 read cycles
-   and no port statistics are submitted during that time.
+.. note::
+
+   To gather metrics from a DPDK application, telemetry needs to be enabled.
+   This can be done by setting the ``CONFIG_RTE_LIBRTE_TELEMETRY=y`` config flag.
+   The application then needs to be run with the ``--telemetry`` EAL option,
+   e.g.::
+
+      $dpdk/app/testpmd --telemetry -l 2,3,4 -n 4
+
+For more information on the ``dpdk_telemetry`` plugin, see the `Anuket wiki <https://wiki.anuket.io/display/HOME/DPDK+Telemetry+Plugin>`_.

 The Address-Space Layout Randomization (ASLR) security feature in Linux should
 be disabled, in order for the same hugepage memory mappings to be present in all
@@ -283,31 +239,14 @@ To fully enable ASLR:
    and only when all implications of this change have been understood.

 For more information on multi-process support, please see:
-http://dpdk.org/doc/guides/prog_guide/multi_proc_support.html
-
-**DPDK stats plugin limitations:**
-
-1. The DPDK primary process application should use the same version of DPDK
-   that collectd DPDK plugin is using;
-
-2.
L2 statistics are only supported; - -3. The plugin has been tested on Intel NIC’s only. - -**DPDK stats known issues:** - -* DPDK port visibility +https://doc.dpdk.org/guides/prog_guide/multi_proc_support.html - When network port controlled by Linux is bound to DPDK driver, the port - will not be available in the OS. It affects the SNMP write plugin as those - ports will not be present in standard IF-MIB. Thus, additional work is - required to be done to support DPDK ports and statistics. Hugepages Plugin ^^^^^^^^^^^^^^^^^ Repo: https://github.com/collectd/collectd -Branch: master +Branch: main Dependencies: None, but assumes hugepages are configured. @@ -335,25 +274,18 @@ configuration file (``collectd.conf``) can be found at ``/opt/collectd/etc``. To configure the hugepages plugin you need to modify the configuration file to include: -.. code:: bash - - LoadPlugin hugepages - <Plugin hugepages> - ReportPerNodeHP true - ReportRootHP true - ValuesPages true - ValuesBytes false - ValuesPercentage false - </Plugin> +.. literalinclude:: ../../../src/collectd/collectd_sample_configs/hugepages.conf + :start-at: LoadPlugin + :language: bash For more information on the plugin parameters, please see: -https://github.com/collectd/collectd/blob/master/src/collectd.conf.pod +https://github.com/collectd/collectd/blob/main/src/collectd.conf.pod Intel PMU Plugin ^^^^^^^^^^^^^^^^ Repo: https://github.com/collectd/collectd -Branch: master +Branch: main Dependencies: @@ -381,7 +313,7 @@ CPU event list json file: .. code:: bash - $ wget https://raw.githubusercontent.com/andikleen/pmu-tools/master/event_download.py + $ wget https://raw.githubusercontent.com/andikleen/pmu-tools/main/event_download.py $ python event_download.py This will download the json files to the location: $HOME/.cache/pmu-events/. If you don't want to @@ -404,36 +336,27 @@ configuration file (``collectd.conf``) can be found at ``/opt/collectd/etc``. 
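The json files downloaded by ``event_download.py`` are named after the CPU identification string, as in the ``GenuineIntel-6-2D-core.json`` ``EventList`` example that appears in the sample configuration. The sketch below is illustrative only — the ``<vendor>-<family>-<model-hex>-core.json`` naming pattern is an assumption inferred from that example, not something the Barometer tooling guarantees:

```shell
# Hypothetical helper: build a pmu-events json file name from the CPU vendor,
# family and model, following the <vendor>-<family>-<model-hex>-core.json
# pattern seen in the EventList example (GenuineIntel-6-2D-core.json).
event_file_name() {
    local vendor=$1 family=$2 model=$3
    # /proc/cpuinfo reports the model in decimal; the file name uses hex
    printf '%s-%d-%X-core.json\n' "$vendor" "$family" "$model"
}

# family 6, model 45 (0x2D) corresponds to the EventList example above
event_file_name GenuineIntel 6 45
```

On a live system the vendor, family and model values can be read from ``/proc/cpuinfo``.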
To configure the PMU plugin you need to modify the configuration file to include: -.. code:: bash - - <LoadPlugin intel_pmu> - Interval 1 - </LoadPlugin> - <Plugin "intel_pmu"> - ReportHardwareCacheEvents true - ReportKernelPMUEvents true - ReportSoftwareEvents true - </Plugin> - -If you want to monitor Intel CPU specific CPU events, make sure to enable the -additional two options shown below: - -.. code:: bash +.. literalinclude:: ../../../src/collectd/collectd_sample_configs/intel_pmu.conf + :start-at: LoadPlugin + :language: bash - <Plugin intel_pmu> - ReportHardwareCacheEvents true - ReportKernelPMUEvents true - ReportSoftwareEvents true - EventList "$HOME/.cache/pmu-events/GenuineIntel-6-2D-core.json" - HardwareEvents "L2_RQSTS.CODE_RD_HIT,L2_RQSTS.CODE_RD_MISS" "L2_RQSTS.ALL_CODE_RD" - </Plugin> +If you want to monitor Intel CPU specific CPU events, make sure to uncomment the +``EventList`` and ``HardwareEvents`` options above. .. note:: If you set XDG_CACHE_HOME to anything other than the variable above - you will need to modify the path for the EventList configuration. +Use ``Cores`` option to monitor metrics only for configured cores. If an empty string is provided +as value for this field default cores configuration is applied - that is all available cores +are monitored separately. To limit monitoring to cores 0-7 set the option as shown below: + +.. code:: bash + + Cores "[0-7]" + For more information on the plugin parameters, please see: -https://github.com/collectd/collectd/blob/master/src/collectd.conf.pod +https://github.com/collectd/collectd/blob/main/src/collectd.conf.pod .. note:: @@ -448,18 +371,18 @@ Intel RDT Plugin ^^^^^^^^^^^^^^^^ Repo: https://github.com/collectd/collectd -Branch: master +Branch: main Dependencies: - * PQoS/Intel RDT library https://github.com/01org/intel-cmt-cat.git - * msr kernel module +* PQoS/Intel RDT library https://github.com/intel/intel-cmt-cat +* msr kernel module Building and installing PQoS/Intel RDT library: .. 
code:: bash - $ git clone https://github.com/01org/intel-cmt-cat.git + $ git clone https://github.com/intel/intel-cmt-cat $ cd intel-cmt-cat $ make $ make install PREFIX=/usr @@ -486,17 +409,12 @@ configuration file (``collectd.conf``) can be found at ``/opt/collectd/etc``. To configure the RDT plugin you need to modify the configuration file to include: -.. code:: bash - - <LoadPlugin intel_rdt> - Interval 1 - </LoadPlugin> - <Plugin "intel_rdt"> - Cores "" - </Plugin> +.. literalinclude:: ../../../src/collectd/collectd_sample_configs/rdt.conf + :start-at: LoadPlugin + :language: bash For more information on the plugin parameters, please see: -https://github.com/collectd/collectd/blob/master/src/collectd.conf.pod +https://github.com/collectd/collectd/blob/main/src/collectd.conf.pod IPMI Plugin ^^^^^^^^^^^^ @@ -504,7 +422,7 @@ Repo: https://github.com/collectd/collectd Branch: feat_ipmi_events, feat_ipmi_analog -Dependencies: OpenIPMI library (http://openipmi.sourceforge.net/) +Dependencies: `OpenIPMI library <https://openipmi.sourceforge.io/>`_ The IPMI plugin is already implemented in the latest collectd and sensors like temperature, voltage, fanspeed, current are already supported there. @@ -597,7 +515,7 @@ To configure the IPMI plugin you need to modify the file to include: dispatch the values to collectd and send SEL notifications. For more information on the IPMI plugin parameters and SEL feature configuration, -please see: https://github.com/collectd/collectd/blob/master/src/collectd.conf.pod +please see: https://github.com/collectd/collectd/blob/main/src/collectd.conf.pod Extended analog sensors support doesn't require additional configuration. 
The usual collectd IPMI documentation can be used: @@ -608,15 +526,15 @@ collectd IPMI documentation can be used: IPMI documentation: - https://www.kernel.org/doc/Documentation/IPMI.txt -- http://www.intel.com/content/www/us/en/servers/ipmi/ipmi-second-gen-interface-spec-v2-rev1-1.html +- https://www.intel.com/content/www/us/en/products/docs/servers/ipmi/ipmi-second-gen-interface-spec-v2-rev1-1.html Mcelog Plugin ^^^^^^^^^^^^^^ Repo: https://github.com/collectd/collectd -Branch: master +Branch: main -Dependencies: mcelog +Dependencies: `mcelog <http://mcelog.org/>`_ Start by installing mcelog. @@ -699,21 +617,12 @@ configuration file (``collectd.conf``) can be found at ``/opt/collectd/etc``. To configure the mcelog plugin you need to modify the configuration file to include: -.. code:: bash - - <LoadPlugin mcelog> - Interval 1 - </LoadPlugin> - <Plugin mcelog> - <Memory> - McelogClientSocket "/var/run/mcelog-client" - PersistentNotification false - </Memory> - #McelogLogfile "/var/log/mcelog" - </Plugin> +.. literalinclude:: ../../../src/collectd/collectd_sample_configs/mcelog.conf + :start-at: LoadPlugin + :language: bash For more information on the plugin parameters, please see: -https://github.com/collectd/collectd/blob/master/src/collectd.conf.pod +https://github.com/collectd/collectd/blob/main/src/collectd.conf.pod Simulating a Machine Check Exception can be done in one of 3 ways: @@ -809,15 +718,15 @@ To inject corrected memory errors: * Check the MCE statistic: mcelog --client. Check the mcelog log for injected error details: less /var/log/mcelog. 
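After injecting errors, the count reported by ``mcelog --client`` can be cross-checked against the log. The snippet below is an illustrative sketch only — the sample log content is fabricated for the example; on a real host you would inspect ``/var/log/mcelog`` itself:

```shell
# Illustrative sketch: count corrected-error records in a captured copy of the
# mcelog log. The sample content below is fabricated for this example; on a
# real host you would inspect /var/log/mcelog or the output of mcelog --client.
cat > /tmp/mcelog.sample <<'EOF'
Hardware event. This is not a software error.
MCE 0
CPU 0 BANK 8
STATUS 9c00004000010090 MCGSTATUS 0
corrected memory error
EOF

grep -ci 'corrected' /tmp/mcelog.sample
```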
Open vSwitch Plugins -^^^^^^^^^^^^^^^^^^^^^ +^^^^^^^^^^^^^^^^^^^^ OvS Plugins Repo: https://github.com/collectd/collectd -OvS Plugins Branch: master +OvS Plugins Branch: main OvS Events MIBs: The SNMP OVS interface link status is provided by standard -IF-MIB (http://www.net-snmp.org/docs/mibs/IF-MIB.txt) +`IF-MIB <http://www.net-snmp.org/docs/mibs/IF-MIB.txt>`_ -Dependencies: Open vSwitch, Yet Another JSON Library (https://github.com/lloyd/yajl) +Dependencies: Open vSwitch, `Yet Another JSON Library <https://github.com/lloyd/yajl>`_ On Centos, install the dependencies and Open vSwitch: @@ -826,7 +735,7 @@ On Centos, install the dependencies and Open vSwitch: $ sudo yum install yajl-devel Steps to install Open vSwtich can be found at -http://docs.openvswitch.org/en/latest/intro/install/fedora/ +https://docs.openvswitch.org/en/latest/intro/install/fedora/ Start the Open vSwitch service: @@ -846,7 +755,7 @@ Clone and install the collectd ovs plugin: $ git clone $REPO $ cd collectd - $ git checkout master + $ git checkout main $ ./build.sh $ ./configure --enable-syslog --enable-logfile --enable-debug $ make @@ -854,47 +763,33 @@ Clone and install the collectd ovs plugin: This will install collectd to default folder ``/opt/collectd``. The collectd configuration file (``collectd.conf``) can be found at ``/opt/collectd/etc``. -To configure the OVS events plugin you need to modify the configuration file to include: +To configure the OVS events plugin you need to modify the configuration file +(uncommenting and updating values as appropriate) to include: -.. code:: bash - - <LoadPlugin ovs_events> - Interval 1 - </LoadPlugin> - <Plugin ovs_events> - Port "6640" - Address "127.0.0.1" - Socket "/var/run/openvswitch/db.sock" - Interfaces "br0" "veth0" - SendNotification true - </Plugin> +.. 
literalinclude:: ../../../src/collectd/collectd_sample_configs/ovs_events.conf + :start-at: LoadPlugin + :language: bash To configure the OVS stats plugin you need to modify the configuration file -to include: - -.. code:: bash +(uncommenting and updating values as appropriate) to include: - <LoadPlugin ovs_stats> - Interval 1 - </LoadPlugin> - <Plugin ovs_stats> - Port "6640" - Address "127.0.0.1" - Socket "/var/run/openvswitch/db.sock" - Bridges "br0" - </Plugin> +.. literalinclude:: ../../../src/collectd/collectd_sample_configs/ovs_stats.conf + :start-at: LoadPlugin + :language: bash For more information on the plugin parameters, please see: -https://github.com/collectd/collectd/blob/master/src/collectd.conf.pod +https://github.com/collectd/collectd/blob/main/src/collectd.conf.pod OVS PMD stats -^^^^^^^^^^^^^^ -Repo: https://gerrit.opnfv.org/gerrit/barometer +^^^^^^^^^^^^^ +Repo: https://gerrit.opnfv.org/gerrit/gitweb?p=barometer.git Prequistes: -1. Open vSwitch dependencies are installed. -2. Open vSwitch service is running. -3. Ovsdb-server manager is configured. + +#. Open vSwitch dependencies are installed. +#. Open vSwitch service is running. +#. Ovsdb-server manager is configured. + You can refer `Open vSwitch Plugins`_ section above for each one of them. OVS PMD stats application is run through the exec plugin. @@ -913,18 +808,17 @@ to include: .. note:: Exec plugin configuration has to be changed to use appropriate user before starting collectd service. -ovs_pmd_stat.sh calls the script for OVS PMD stats application with its argument: +``ovs_pmd_stat.sh`` calls the script for OVS PMD stats application with its argument: -.. code:: bash - - sudo python /usr/local/src/ovs_pmd_stats.py" "--socket-pid-file" - "/var/run/openvswitch/ovs-vswitchd.pid" +.. 
literalinclude:: ../../../src/collectd/collectd_sample_configs/ovs_pmd_stats.sh + :start-at: python + :language: bash SNMP Agent Plugin ^^^^^^^^^^^^^^^^^ Repo: https://github.com/collectd/collectd -Branch: master +Branch: main Dependencies: NET-SNMP library @@ -1062,7 +956,7 @@ The ``snmpwalk`` command can be used to validate the collectd configuration: retreived using standard IF-MIB tables. For more information on the plugin parameters, please see: -https://github.com/collectd/collectd/blob/master/src/collectd.conf.pod +https://github.com/collectd/collectd/blob/main/src/collectd.conf.pod For more details on AgentX subagent, please see: http://www.net-snmp.org/tutorial/tutorial-5/toolkit/demon/ @@ -1070,12 +964,12 @@ http://www.net-snmp.org/tutorial/tutorial-5/toolkit/demon/ .. _virt-plugin: virt plugin -^^^^^^^^^^^^ +^^^^^^^^^^^ Repo: https://github.com/collectd/collectd -Branch: master +Branch: main -Dependencies: libvirt (https://libvirt.org/), libxml2 +Dependencies: `libvirt <https://libvirt.org/>`_, libxml2 On Centos, install the dependencies: @@ -1103,7 +997,7 @@ metrics depends on running libvirt daemon version. .. note:: Please keep in mind that RDT metrics (part of *Performance monitoring events*) have to be supported by hardware. For more details on hardware support, please see: - https://github.com/01org/intel-cmt-cat + https://github.com/intel/intel-cmt-cat Additionally perf metrics **cannot** be collected if *Intel RDT* plugin is enabled. @@ -1206,14 +1100,12 @@ statistics are disabled. They can be enabled with ``ExtraStats`` option. </Plugin> For more information on the plugin parameters, please see: -https://github.com/collectd/collectd/blob/master/src/collectd.conf.pod +https://github.com/collectd/collectd/blob/main/src/collectd.conf.pod .. _install-collectd-as-a-service: Installing collectd as a service ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -**NOTE**: In an OPNFV installation, collectd is installed and configured as a -service. 
Collectd service scripts are available in the collectd/contrib directory. To install collectd as a service: @@ -1244,33 +1136,27 @@ Reload $ sudo systemctl status collectd.service should show success Additional useful plugins -^^^^^^^^^^^^^^^^^^^^^^^^^^ +^^^^^^^^^^^^^^^^^^^^^^^^^ **Exec Plugin** : Can be used to show you when notifications are being generated by calling a bash script that dumps notifications to file. (handy -for debug). Modify /opt/collectd/etc/collectd.conf: +for debug). Modify ``/opt/collectd/etc/collectd.conf`` to include the +``NotificationExec`` config option, taking care to add the right directory path +to the ``write_notification.sh`` script: -.. code:: bash - - LoadPlugin exec - <Plugin exec> - # Exec "user:group" "/path/to/exec" - NotificationExec "user" "<path to barometer>/barometer/src/collectd/collectd_sample_configs/write_notification.sh" - </Plugin> +.. literalinclude:: ../../../src/collectd/collectd_sample_configs/exec.conf + :start-at: LoadPlugin + :emphasize-lines: 6 + :language: bash -write_notification.sh (just writes the notification passed from exec through -STDIN to a file (/tmp/notifications)): - -.. code:: bash +``write_notification.sh`` writes the notification passed from exec through +STDIN to a file (``/tmp/notifications``): - #!/bin/bash - rm -f /tmp/notifications - while read x y - do - echo $x$y >> /tmp/notifications - done +.. literalinclude:: ../../../src/collectd/collectd_sample_configs/write_notification.sh + :start-at: rm -f + :language: bash -output to /tmp/notifications should look like: +output to ``/tmp/notifications`` should look like: .. code:: bash @@ -1317,7 +1203,7 @@ For more information on configuring and installing OpenStack plugins for collectd, check out the `collectd-openstack-plugins GSG`_. Security -^^^^^^^^^ +^^^^^^^^ * AAA – on top of collectd there secure agents like SNMP V3, Openstack agents etc. with their own AAA methods. 
@@ -1328,7 +1214,7 @@ Security * Ensuring that only one instance of the program is executed by collectd at any time * Forcing the plugin to check that custom programs are never executed with superuser - privileges. + privileges. * Protection of Data in flight: @@ -1347,14 +1233,14 @@ Security * `CVE-2010-4336`_ fixed https://mailman.verplant.org/pipermail/collectd/2010-November/004277.html in Version 4.10.2. - * http://www.cvedetails.com/product/20310/Collectd-Collectd.html?vendor_id=11242 + * https://www.cvedetails.com/product/20310/Collectd-Collectd.html?vendor_id=11242 * It's recommended to only use collectd plugins from signed packages. References ^^^^^^^^^^^ .. [1] https://collectd.org/wiki/index.php/Naming_schema -.. [2] https://github.com/collectd/collectd/blob/master/src/daemon/plugin.h +.. [2] https://github.com/collectd/collectd/blob/main/src/daemon/plugin.h .. [3] https://collectd.org/wiki/index.php/Value_list_t .. [4] https://collectd.org/wiki/index.php/Data_set .. [5] https://collectd.org/documentation/manpages/types.db.5.shtml @@ -1362,10 +1248,10 @@ References .. [7] https://collectd.org/wiki/index.php/Meta_Data_Interface .. _Barometer OPNFV Summit demo: https://prezi.com/kjv6o8ixs6se/software-fastpath-service-quality-metrics-demo/ -.. _gnocchi plugin: https://github.com/openstack/collectd-openstack-plugins/tree/stable/ocata/ -.. _aodh plugin: https://github.com/openstack/collectd-openstack-plugins/tree/stable/ocata/ -.. _collectd-openstack-plugins GSG: https://github.com/openstack/collectd-openstack-plugins/blob/master/doc/source/GSG.rst -.. _grafana guide: https://wiki.opnfv.org/display/fastpath/Installing+and+configuring+InfluxDB+and+Grafana+to+display+metrics+with+collectd +.. _gnocchi plugin: https://opendev.org/x/collectd-openstack-plugins/src/branch/stable/ocata/ +.. _aodh plugin: https://opendev.org/x/collectd-openstack-plugins/src/branch/stable/ocata/ +.. 
_collectd-openstack-plugins GSG: https://opendev.org/x/collectd-openstack-plugins/src/branch/master/doc/source/GSG.rst +.. _grafana guide: https://wiki.anuket.io/display/HOME/Installing+and+configuring+InfluxDB+and+Grafana+to+display+metrics+with+collectd .. _CVE-2017-7401: https://www.cvedetails.com/cve/CVE-2017-7401/ .. _CVE-2016-6254: https://www.cvedetails.com/cve/CVE-2016-6254/ .. _CVE-2010-4336: https://www.cvedetails.com/cve/CVE-2010-4336/ diff --git a/docs/release/userguide/index.rst b/docs/release/userguide/index.rst index e880f3a9..566bb692 100644 --- a/docs/release/userguide/index.rst +++ b/docs/release/userguide/index.rst @@ -2,24 +2,23 @@ .. This work is licensed under a Creative Commons Attribution 4.0 International License. .. http://creativecommons.org/licenses/by/4.0 -.. (c) Intel and OPNFV +.. (c) Intel, Anuket and others =========================== -OPNFV Barometer User Guide +Anuket Barometer User Guide =========================== -.. The feature user guide should provide an OPNFV user with enough information to -.. use the features provided by the feature project in the supported scenarios. -.. This guide should walk a user through the usage of the features once a scenario -.. has been deployed and is active according to the installation guide provided -.. by the installer project. +.. The feature user guide should provide an Anuket user with enough information +.. to use the features provided by the feature project. .. toctree:: :maxdepth: 1 feature.userguide collectd.ves.userguide.rst - docker.userguide.rst + installguide.docker.rst + installguide.oneclick.rst + .. The feature.userguide.rst file should contain the text for this document .. additional documents can be added to this directory and added in the right order .. to this file as a list below. 
diff --git a/docs/release/userguide/installguide.docker.rst b/docs/release/userguide/installguide.docker.rst new file mode 100644 index 00000000..9141eef6 --- /dev/null +++ b/docs/release/userguide/installguide.docker.rst @@ -0,0 +1,1045 @@ +.. _barometer-docker-userguide: +.. This work is licensed under a Creative Commons Attribution 4.0 International License. +.. http://creativecommons.org/licenses/by/4.0 +.. (c) Anuket and others + +===================================== +Anuket Barometer Docker Install Guide +===================================== + +.. contents:: + :depth: 3 + :local: + +The intention of this user guide is to outline how to install and test the Barometer project's +docker images. The `Anuket docker hub <https://hub.docker.com/u/anuket/>`_ contains 5 docker +images from the Barometer project: + + 1. `Collectd docker image <https://hub.docker.com/r/anuket/barometer-collectd/>`_ + 2. `Influxdb docker image <https://hub.docker.com/r/anuket/barometer-influxdb/>`_ + 3. `Grafana docker image <https://hub.docker.com/r/anuket/barometer-grafana/>`_ + 4. `Kafka docker image <https://hub.docker.com/r/anuket/barometer-kafka>`_ + 5. `VES application docker image <https://hub.docker.com/r/anuket/barometer-ves/>`_ + +For description of images please see section `Barometer Docker Images Description`_ + +For steps to build and run Collectd image please see section `Build and Run Collectd Docker Image`_ + +For steps to build and run InfluxDB and Grafana images please see section `Build and Run InfluxDB and Grafana Docker Images`_ + +For steps to build and run VES and Kafka images please see section `Build and Run VES and Kafka Docker Images`_ + +For overview of running VES application with Kafka please see the :ref:`VES Application User Guide <barometer-ves-userguide>` + +For an alternative installation method using ansible, please see the :ref:`Barometer One Click Install Guide <barometer-oneclick-userguide>`. 
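The five images above are all published under the ``anuket`` namespace on docker hub. As a small convenience sketch (the image names are taken from the list above; this only prints the pull commands rather than contacting a registry):

```shell
# Convenience sketch: print `docker pull` commands for the Barometer images
# listed above. This only prints the commands; it does not contact a registry.
for image in barometer-collectd barometer-influxdb barometer-grafana \
             barometer-kafka barometer-ves; do
    echo "docker pull anuket/${image}"
done
```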
+
+For post-installation verification and troubleshooting, please see the :ref:`Barometer post installation guide <barometer-postinstall>`.
+
+Barometer Docker Images Description
+-----------------------------------
+
+.. Describe the specific features and how it is realised in the scenario in a brief manner
+.. to ensure the user understand the context for the user guide instructions to follow.
+
+Barometer Collectd Image
+^^^^^^^^^^^^^^^^^^^^^^^^
+The barometer collectd docker image gives you a collectd installation that includes all
+the barometer plugins.
+
+.. note::
+   The Dockerfile is available in the docker/barometer-collectd directory in the barometer repo.
+   The Dockerfile builds a CentOS 8 docker image.
+   The container MUST be run as a privileged container.
+
+Collectd is a daemon which collects system performance statistics periodically
+and provides a variety of mechanisms to publish the collected metrics. It
+supports more than 90 different input and output plugins. Input plugins
+retrieve metrics and publish them to the collectd daemon, while output plugins
+publish the data they receive to an end point. Collectd also has infrastructure
+to support thresholding and notification.
+
+The collectd docker image has the following collectd plugins enabled (in addition
+to the standard collectd plugins):
+
+* hugepages plugin
+* Open vSwitch events Plugin
+* Open vSwitch stats Plugin
+* mcelog plugin
+* PMU plugin
+* RDT plugin
+* virt
+* SNMP Agent
+* Kafka_write plugin
+
+Plugins and third party applications in the Barometer repository that will be available in the
+docker image:
+
+* Open vSwitch PMD stats
+* ONAP VES application
+* gnocchi plugin
+* aodh plugin
+* Legacy/IPMI
+
+InfluxDB + Grafana Docker Images
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The Barometer project's InfluxDB and Grafana docker images are two docker images that store and graph
+statistics reported by the Barometer collectd docker image.
InfluxDB is an open-source time series database
+tool which stores the data from collectd for future analysis via Grafana, which is an
+open-source metrics analytics and visualisation suite which can be accessed through any browser.
+
+VES + Kafka Docker Images
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The Barometer project's VES application and Kafka docker images are based on a CentOS 7 image. The Kafka
+docker image has a dependency on `Zookeeper <https://zookeeper.apache.org/>`_. Kafka must be able to
+connect and register with an instance of Zookeeper that is running on either a local or a remote host.
+Kafka receives and stores metrics received from collectd. The VES application pulls the latest metrics
+from Kafka and normalizes them into the VES format for sending to a VES collector. Please see details in the
+:ref:`VES Application User Guide <barometer-ves-userguide>`.
+
+Installing Docker
+-----------------
+.. Describe the specific capabilities and usage for <XYZ> feature.
+.. Provide enough information that a user will be able to operate the feature on a deployed scenario.
+
+.. note::
+   The below sections provide steps for manual installation and configuration
+   of docker images. They are not necessary if the docker images were installed
+   using the Ansible playbooks.
+
+On Ubuntu
+^^^^^^^^^
+.. note::
+   * sudo permissions are required to install docker.
+   * These instructions are for Ubuntu 16.10
+
+To install docker:
+
+.. code:: bash
+
+   $ sudo apt-get install curl
+   $ sudo curl -fsSL https://get.docker.com/ | sh
+   $ sudo usermod -aG docker <username>
+   $ sudo systemctl status docker
+
+Replace <username> above with an appropriate user name.
+
+On CentOS
+^^^^^^^^^^
+.. note::
+   * sudo permissions are required to install docker.
+   * These instructions are for CentOS 7
+
+To install docker:
+
+.. code:: bash
+
+   $ sudo yum remove docker docker-common docker-selinux docker-engine
+   $ sudo yum install -y yum-utils device-mapper-persistent-data lvm2
+   $ sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
+   $ sudo yum-config-manager --enable docker-ce-edge
+   $ sudo yum-config-manager --enable docker-ce-test
+   $ sudo yum install docker-ce
+   $ sudo usermod -aG docker <username>
+   $ sudo systemctl status docker
+
+Replace <username> above with an appropriate user name.
+
+.. note::
+   If this is the first time you are installing a package from a recently added
+   repository, you will be prompted to accept the GPG key, and the key’s
+   fingerprint will be shown. Verify that the fingerprint is correct, and if so,
+   accept the key. The fingerprint should match 060A 61C5 1B55 8A7F 742B 77AA C52F
+   EB6B 621E 9F35.
+
+   Retrieving key from https://download.docker.com/linux/centos/gpg
+   Importing GPG key 0x621E9F35:
+.. ::
+   Userid : "Docker Release (CE rpm) <docker@docker.com>"
+   Fingerprint: 060a 61c5 1b55 8a7f 742b 77aa c52f eb6b 621e 9f35
+   From : https://download.docker.com/linux/centos/gpg
+   Is this ok [y/N]: y
+
+Manual proxy configuration for docker
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. note::
+   This applies for both CentOS and Ubuntu.
+
+If you are behind an HTTP or HTTPS proxy server, you will need to add this
+configuration to the Docker systemd service file.
+
+1. Create a systemd drop-in directory for the docker service:
+
+.. code:: bash
+
+   $ sudo mkdir -p /etc/systemd/system/docker.service.d
+
+2. Create a file called /etc/systemd/system/docker.service.d/http-proxy.conf
+that adds the HTTP_PROXY environment variable:
+
+.. code:: bash
+
+   [Service]
+   Environment="HTTP_PROXY=http://proxy.example.com:80/"
+
+Or, if you are behind an HTTPS proxy server, create a file
+called /etc/systemd/system/docker.service.d/https-proxy.conf that adds
+the HTTPS_PROXY environment variable:
+
+..
code:: bash + + [Service] + Environment="HTTPS_PROXY=https://proxy.example.com:443/" + +Or create a single file with all the proxy configurations: +/etc/systemd/system/docker.service.d/proxy.conf + +.. code:: bash + + [Service] + Environment="HTTP_PROXY=http://proxy.example.com:80/" + Environment="HTTPS_PROXY=https://proxy.example.com:443/" + Environment="FTP_PROXY=ftp://proxy.example.com:443/" + Environment="NO_PROXY=localhost" + +3. Flush changes: + +.. code:: bash + + $ sudo systemctl daemon-reload + +4. Restart Docker: + +.. code:: bash + + $ sudo systemctl restart docker + +5. Check docker environment variables: + +.. code:: bash + + sudo systemctl show --property=Environment docker + +Test docker installation +^^^^^^^^^^^^^^^^^^^^^^^^ +.. note:: + This applies for both CentOS and Ubuntu. + +.. code:: bash + + $ sudo docker run hello-world + +The output should be something like: + +.. code:: bash + + Trying to pull docker.io/library/hello-world...Getting image source signatures + Copying blob 0e03bdcc26d7 done + Copying config bf756fb1ae done + Writing manifest to image destination + Storing signatures + + Hello from Docker! + This message shows that your installation appears to be working correctly. + + To generate this message, Docker took the following steps: + 1. The Docker client contacted the Docker daemon. + 2. The Docker daemon pulled the "hello-world" image from the Docker Hub. + 3. The Docker daemon created a new container from that image which runs the + executable that produces the output you are currently reading. + 4. The Docker daemon streamed that output to the Docker client, which sent it + to your terminal. 
+
+   To try something more ambitious, you can run an Ubuntu container with:
+    $ docker run -it ubuntu bash
+
+   Share images, automate workflows, and more with a free Docker ID:
+    https://hub.docker.com/
+
+   For more examples and ideas, visit:
+    https://docs.docker.com/get-started/
+
+Build and Run Collectd Docker Image
+-----------------------------------
+
+Collectd-barometer flavors
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Before starting to build and run the Collectd container, understand the available
+flavors of Collectd containers:
+
+* barometer-collectd - stable release, based on collectd 5.12
+* barometer-collectd-latest - release based on collectd 'main' branch
+* barometer-collectd-experimental - release based on collectd 'main'
+  branch that can also include a set of experimental (not yet merged into
+  upstream) pull requests
+
+.. note::
+   The experimental container is not tested across various OSes and the
+   stability of the container can change. Usage of the experimental flavor is
+   at the user's risk.
+
+The stable `barometer-collectd` container is intended for use in production
+environments, as it is based on the latest official collectd release.
+The `barometer-collectd-latest` and `barometer-collectd-experimental` containers
+can be used in order to try new collectd features.
+All flavors are located in the `barometer` git repository - the respective
+Dockerfiles are stored in subdirectories of the `docker/` directory.
+
+
+.. code:: bash
+
+   $ git clone https://gerrit.opnfv.org/gerrit/barometer
+   $ ls barometer/docker|grep collectd
+   barometer-collectd
+   barometer-collectd-latest
+   barometer-collectd-experimental
+
+.. 
note::
+   The main directory of the barometer source code (the directory that contains
+   the 'docker', 'docs', 'src' and systems sub-directories) will be referred to
+   as ``<BAROMETER_REPO_DIR>``.
+
+Download the collectd docker image
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+If you wish to use a pre-built barometer image, you can pull the barometer
+image from `dockerhub <https://hub.docker.com/r/anuket/barometer-collectd/>`_:
+
+.. code:: bash
+
+   $ docker pull anuket/barometer-collectd
+
+Build stable collectd container
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code:: bash
+
+   $ cd <BAROMETER_REPO_DIR>/docker/barometer-collectd
+   $ sudo docker build -t anuket/barometer-collectd --build-arg http_proxy=`echo $http_proxy` \
+     --build-arg https_proxy=`echo $https_proxy` --network=host -f Dockerfile .
+
+.. note::
+   In the above mentioned ``docker build`` command, the http_proxy and https_proxy
+   arguments need to be passed only if the system is behind an HTTP or HTTPS
+   proxy server.
+
+Check the docker images:
+
+.. code:: bash
+
+   $ sudo docker images
+
+Output should contain a ``barometer-collectd`` image:
+
+.. code::
+
+   REPOSITORY                 TAG     IMAGE ID      CREATED       SIZE
+   anuket/barometer-collectd  latest  39f5e0972178  2 months ago  1.28GB
+   centos                     7       196e0ce0c9fb  4 weeks ago   197MB
+   centos                     latest  196e0ce0c9fb  4 weeks ago   197MB
+   hello-world                latest  05a3bd381fc2  4 weeks ago   1.84kB
+
+.. note::
+   If you do not plan to use the `barometer-collectd-latest` and
+   `barometer-collectd-experimental` containers, then you can proceed directly
+   to section `Run the collectd stable docker image`_.
+
+
+Build barometer-collectd-latest container
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code:: bash
+
+   $ cd <BAROMETER_REPO_DIR>
+   $ sudo docker build -t anuket/barometer-collectd-latest \
+     --build-arg http_proxy=`echo $http_proxy` \
+     --build-arg https_proxy=`echo $https_proxy` --network=host -f \
+     docker/barometer-collectd-latest/Dockerfile .
+
+.. 
note::
+   For the `barometer-collectd-latest` and `barometer-collectd-experimental`
+   containers, proxy parameters should be passed only if the system is behind
+   an HTTP or HTTPS proxy server (same as for the stable collectd container).
+
+Build barometer-collectd-experimental container
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The barometer-collectd-experimental container uses the ``main`` branch of
+collectd, but allows the user to apply a number of pull requests, which are
+passed via the COLLECTD_PULL_REQUESTS build arg, which is passed to docker as
+shown in the example below.
+COLLECTD_PULL_REQUESTS should be a comma-delimited string of pull request IDs.
+
+.. code:: bash
+
+   $ cd <BAROMETER_REPO_DIR>
+   $ sudo docker build -t anuket/barometer-collectd-experimental \
+     --build-arg http_proxy=`echo $http_proxy` \
+     --build-arg https_proxy=`echo $https_proxy` \
+     --build-arg COLLECTD_PULL_REQUESTS=1234,5678 \
+     --network=host -f docker/barometer-collectd-experimental/Dockerfile .
+
+.. note::
+   For the `barometer-collectd-latest` and `barometer-collectd-experimental`
+   containers, proxy parameters should be passed only if the system is behind
+   an HTTP or HTTPS proxy server (same as for the stable collectd container).
+
+Build collectd-6
+^^^^^^^^^^^^^^^^
+
+The barometer-collectd-experimental Dockerfile can be used to build
+collectd-6.0, which is currently under development. In order to do this, the
+``COLLECTD_FLAVOR`` build arg can be passed to the docker build command.
+The optional ``COLLECTD_PULL_REQUESTS`` arg can be passed as well, to test
+proposed patches to collectd.
+
+.. code:: bash
+
+   $ cd <BAROMETER_REPO_DIR>
+   $ sudo docker build -t anuket/barometer-collectd-6 \
+     --build-arg COLLECTD_FLAVOR=collectd-6 \
+     --build-arg COLLECTD_PULL_REQUESTS=1234,5678 \
+     --network=host -f docker/barometer-collectd-experimental/Dockerfile .
+
+The instructions for running the collectd-6 container are the same as for the
+collectd-experimental container.
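As a sketch (the image name ``anuket/barometer-collectd-6`` comes from the build command above; the mount points are assumptions mirroring the run commands used for the other collectd flavors in this guide), the run command can be composed like this:

```bash
# Sketch: compose the run command for the collectd-6 image built above.
# The mount points mirror the other collectd flavors; adjust
# BAROMETER_REPO_DIR to point at your checkout of the barometer repo.
BAROMETER_REPO_DIR=${BAROMETER_REPO_DIR:-$PWD}
CONFIG_DIR="$BAROMETER_REPO_DIR/src/collectd/collectd_sample_configs-latest"
RUN_CMD="docker run -ti --net=host -v $CONFIG_DIR:/opt/collectd/etc/collectd.conf.d -v /var/run:/var/run -v /tmp:/tmp --privileged anuket/barometer-collectd-6"
echo "$RUN_CMD"
```

Execute the composed command with ``sudo``, as for the experimental container.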
+
+There are a few useful build args that can be used to further customise the
+collectd-6 build:
+
+* **COLLECTD_CONFIG_CMD_ARGS**
+  For testing with new plugins for collectd-6, as un-ported plugins are
+  disabled by default.
+  This option lets the ./configure command be run with extra args,
+  e.g. --enable-cpu --enable-<my-newly-ported-plugin>, which means that the
+  plugin can be enabled for the PR that is being tested.
+
+* **COLLECTD_TAG**
+  This overrides the default tag selected by the flavors, and allows checking
+  out an arbitrary branch (e.g. a PR branch) instead of using the
+  ``COLLECTD_PULL_REQUESTS`` arg, which rebases each PR on top of the
+  nominal branch.
+  To check out a PR, use the following args with the docker build command:
+  ``--build-arg COLLECTD_TAG=pull/<PR_ID>/head``
+
+Run the collectd stable docker image
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+.. code:: bash
+
+   $ cd <BAROMETER_REPO_DIR>
+   $ sudo docker run -ti --net=host -v \
+     `pwd`/src/collectd/collectd_sample_configs:/opt/collectd/etc/collectd.conf.d \
+     -v /var/run:/var/run -v /tmp:/tmp -v /sys/fs/resctrl:/sys/fs/resctrl \
+     --privileged anuket/barometer-collectd
+
+.. note::
+   The docker collectd image contains configuration for all the collectd
+   plugins. In the command above we are overriding
+   /opt/collectd/etc/collectd.conf.d by mounting a host directory
+   src/collectd/collectd_sample_configs that contains only the sample
+   configurations we are interested in running.
+
+   *If some dependencies for plugins listed in the configuration directory
+   aren't met, then collectd startup may fail (collectd tries to
+   initialize plugin configurations for all config files that can
+   be found in the shared configs directory and may fail if some dependency
+   is missing).*
+
+   If `DPDK` or `RDT` can't be installed on the host, then the corresponding
+   config files should be removed from the shared configuration directory
+   (`<BAROMETER_REPO_DIR>/src/collectd/collectd_sample_configs/`) prior
+   to starting the barometer-collectd container. For example: in case of
+   missing `DPDK` functionality on the host, `dpdk_telemetry.conf` should be
+   removed.
+
+   Sample configurations can be found at:
+   https://github.com/opnfv/barometer/tree/master/src/collectd/collectd_sample_configs
+
+   A list of barometer-collectd dependencies on the host for the various
+   plugins can be found at:
+   https://wiki.anuket.io/display/HOME/Barometer-collectd+host+dependencies
+
+   The Resource Control file system (/sys/fs/resctrl) can be bound from host to
+   container only if this directory exists on the host system. Otherwise omit
+   the '-v /sys/fs/resctrl:/sys/fs/resctrl' part in the docker run command.
+   More information about resctrl can be found at:
+   https://github.com/intel/intel-cmt-cat/wiki/resctrl
+
+Check that your docker image is running:
+
+.. code:: bash
+
+   sudo docker ps
+
+To make some changes when the container is running run:
+
+.. code:: bash
+
+   sudo docker exec -ti <CONTAINER ID> /bin/bash
+
+Run the barometer-collectd-latest docker image
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+The run command for the ``barometer-collectd-latest`` container is very similar
+to the command used for the stable container - the only differences are the name
+of the image and the location of the sample configuration files (as different
+versions of the collectd plugins require different configuration files).
+
+
+.. 
code:: bash
+
+   $ cd <BAROMETER_REPO_DIR>
+   $ sudo docker run -ti --net=host -v \
+     `pwd`/src/collectd/collectd_sample_configs-latest:/opt/collectd/etc/collectd.conf.d \
+     -v /var/run:/var/run -v /tmp:/tmp -v /sys/fs/resctrl:/sys/fs/resctrl \
+     --privileged anuket/barometer-collectd-latest
+
+.. note::
+   The Barometer collectd docker images share some directories with the host
+   (e.g. /tmp), therefore only one of the collectd barometer flavors can be run
+   at a time. In other words, if you want to try the `barometer-collectd-latest`
+   or `barometer-collectd-experimental` image, please stop the instance of the
+   `barometer-collectd` (stable) image first.
+
+   The Resource Control file system (/sys/fs/resctrl) can be bound from host to
+   container only if this directory exists on the host system. Otherwise omit
+   the '-v /sys/fs/resctrl:/sys/fs/resctrl' part in the docker run command.
+   More information about resctrl can be found at:
+   https://github.com/intel/intel-cmt-cat/wiki/resctrl
+
+Run the barometer-collectd-experimental docker image
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+The barometer-collectd-experimental container shares its default configuration
+files with the 'barometer-collectd-latest' container, but some experimental pull
+requests may require modified configuration. Additional configuration files that
+are required specifically by the experimental container can be found in the
+`docker/barometer-collectd-experimental/experimental-configs/`
+directory. The content of this directory (all \*.conf files) should be copied to
+the ``src/collectd/collectd_sample_configs-latest`` directory before the first
+run of the experimental container.
+
+.. code:: bash
+
+   $ cd <BAROMETER_REPO_DIR>
+   $ cp docker/barometer-collectd-experimental/experimental-configs/*.conf \
+     src/collectd/collectd_sample_configs-latest
+
+When the configuration files are up to date for the experimental container, it
+can be launched using the following command (almost identical to the run command
+for the ``latest`` collectd container):
+
+.. 
code:: bash
+
+   $ cd <BAROMETER_REPO_DIR>
+   $ sudo docker run -ti --net=host -v \
+     `pwd`/src/collectd/collectd_sample_configs-latest:/opt/collectd/etc/collectd.conf.d \
+     -v /var/run:/var/run -v /tmp:/tmp -v /sys/fs/resctrl:/sys/fs/resctrl --privileged \
+     anuket/barometer-collectd-experimental
+
+.. note::
+   The Resource Control file system (/sys/fs/resctrl) can be bound from host to
+   container only if this directory exists on the host system. Otherwise omit
+   the '-v /sys/fs/resctrl:/sys/fs/resctrl' part in the docker run command.
+   More information about resctrl can be found at:
+   https://github.com/intel/intel-cmt-cat/wiki/resctrl
+
+
+Build and Run InfluxDB and Grafana docker images
+------------------------------------------------
+
+Overview
+^^^^^^^^
+The barometer-influxdb image is based on the influxdb:1.3.7 image from the
+influxdb dockerhub. To view details on the base image, please visit
+`https://hub.docker.com/_/influxdb/ <https://hub.docker.com/_/influxdb/>`_.
+The page includes details of the exposed ports and the configurable environment
+variables of the base image.
+
+The barometer-grafana image is based on the grafana:4.6.3 image from the grafana
+dockerhub. To view details on the base image, please visit
+`https://hub.docker.com/r/grafana/grafana/ <https://hub.docker.com/r/grafana/grafana/>`_.
+The page includes details on the exposed ports and the configurable environment
+variables of the base image.
+
+The barometer-grafana image includes a pre-configured datasource and dashboards
+to display statistics exposed by the barometer-collectd image. The default
+datasource is an influxdb database running on localhost, but the address of the
+influxdb server can be modified when launching the image by setting the
+environment variable ``influxdb_host`` to the IP or hostname of the host on
+which the influxdb server is running.
+
+Additional dashboards can be added to barometer-grafana by mapping a volume to /opt/grafana/dashboards.
+In the case where a folder is mounted to this volume, only files included in this
+folder will be visible inside barometer-grafana. To ensure all default files are
+also loaded, please ensure they are included in the volume folder being mounted.
+Appropriate examples are given in section `Run the Grafana docker image`_.
+
+Download the InfluxDB and Grafana docker images
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+If you wish to use the barometer project's pre-built influxdb and grafana images, you can pull the
+images from https://hub.docker.com/r/anuket/barometer-influxdb/ and https://hub.docker.com/r/anuket/barometer-grafana/
+
+.. note::
+   If your preference is to build the images locally, please see sections `Build InfluxDB Docker Image`_ and
+   `Build Grafana Docker Image`_.
+
+.. code:: bash
+
+   $ docker pull anuket/barometer-influxdb
+   $ docker pull anuket/barometer-grafana
+
+.. note::
+   If you have pulled the pre-built barometer-influxdb and barometer-grafana images, there is no
+   requirement to complete the steps outlined in sections `Build InfluxDB Docker Image`_ and
+   `Build Grafana Docker Image`_ and you can proceed directly to section
+   `Run the Influxdb and Grafana Images`_.
+
+Build InfluxDB docker image
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Build the influxdb image from the Dockerfile:
+
+.. code:: bash
+
+   $ cd barometer/docker/barometer-influxdb
+   $ sudo docker build -t anuket/barometer-influxdb --build-arg http_proxy=`echo $http_proxy` \
+     --build-arg https_proxy=`echo $https_proxy` --network=host -f Dockerfile .
+
+.. note::
+   In the above mentioned ``docker build`` command, the http_proxy and https_proxy arguments need to
+   be passed only if the system is behind an HTTP or HTTPS proxy server.
+
+Check the docker images:
+
+.. code:: bash
+
+   $ sudo docker images
+
+Output should contain an influxdb image:
+
+.. 
code::
+
+   REPOSITORY                 TAG     IMAGE ID      CREATED       SIZE
+   anuket/barometer-influxdb  latest  c5a09a117067  2 months ago  191MB
+
+Build Grafana docker image
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Build the Grafana image from the Dockerfile:
+
+.. code:: bash
+
+   $ cd barometer/docker/barometer-grafana
+   $ sudo docker build -t anuket/barometer-grafana --build-arg http_proxy=`echo $http_proxy` \
+     --build-arg https_proxy=`echo $https_proxy` -f Dockerfile .
+
+.. note::
+   In the above mentioned ``docker build`` command, the http_proxy and https_proxy arguments need to
+   be passed only if the system is behind an HTTP or HTTPS proxy server.
+
+Check the docker images:
+
+.. code:: bash
+
+   $ sudo docker images
+
+Output should contain a grafana image:
+
+.. code::
+
+   REPOSITORY                TAG     IMAGE ID      CREATED       SIZE
+   anuket/barometer-grafana  latest  3724ab87f0b1  2 months ago  284MB
+
+Run the Influxdb and Grafana Images
+-----------------------------------
+
+Run the InfluxDB docker image
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+.. code:: bash
+
+   $ sudo docker run -tid -v /var/lib/influxdb:/var/lib/influxdb --net=host \
+     --name bar-influxdb anuket/barometer-influxdb
+
+Check that your docker image is running:
+
+.. code:: bash
+
+   sudo docker ps
+
+To make some changes when the container is running run:
+
+.. code:: bash
+
+   sudo docker exec -ti <CONTAINER ID> /bin/bash
+
+When both the collectd and InfluxDB containers are located
+on the same host, no additional configuration has to be added and you
+can proceed directly to the `Run the Grafana docker image`_ section.
+
+Modify collectd to support InfluxDB on another host
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+If the InfluxDB and collectd containers are located on separate hosts, then
+additional configuration has to be done in the ``collectd`` container - it
+normally sends data using the network plugin to 'localhost/127.0.0.1',
+therefore changing the output location is required:
+
+1. Stop and remove the running bar-collectd container (if it is running)
+
+   .. 
code:: bash
+
+      $ sudo docker ps #to get the collectd container name
+      $ sudo docker rm -f <COLLECTD_CONTAINER_NAME>
+
+2. Go to the location where the shared collectd config files are stored
+
+   .. code:: bash
+
+      $ cd <BAROMETER_REPO_DIR>
+      $ cd src/collectd/collectd_sample_configs
+
+3. Edit the content of the ``network.conf`` file.
+   By default this file looks like this:
+
+   .. code::
+
+      LoadPlugin network
+      <Plugin network>
+      Server "127.0.0.1" "25826"
+      </Plugin>
+
+   The ``127.0.0.1`` string has to be replaced with the IP address of the host
+   where the InfluxDB container is running (e.g. ``192.168.121.111``). Edit this
+   using your favorite text editor.
+
+4. Start the collectd container again as described in the
+   `Run the collectd stable docker image`_ chapter
+
+   .. code:: bash
+
+      $ cd <BAROMETER_REPO_DIR>
+      $ sudo docker run -ti --name bar-collectd --net=host -v \
+        `pwd`/src/collectd/collectd_sample_configs:/opt/collectd/etc/collectd.conf.d \
+        -v /var/run:/var/run -v /tmp:/tmp --privileged anuket/barometer-collectd
+
+Now the collectd container will send data to the InfluxDB container located on
+the remote host at the IP address configured in step 3.
+
+Run the Grafana docker image
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Connecting to an influxdb instance running on the local system and adding your
+own custom dashboards:
+
+.. code:: bash
+
+   $ cd <BAROMETER_REPO_DIR>
+   $ sudo docker run -tid -v /var/lib/grafana:/var/lib/grafana \
+     -v ${PWD}/docker/barometer-grafana/dashboards:/opt/grafana/dashboards \
+     --name bar-grafana --net=host anuket/barometer-grafana
+
+Connecting to an influxdb instance running on a remote system with a hostname of
+someserver and an IP address of 192.168.121.111:
+
+.. code:: bash
+
+   $ sudo docker run -tid -v /var/lib/grafana:/var/lib/grafana --net=host -e \
+     influxdb_host=someserver --add-host someserver:192.168.121.111 --name \
+     bar-grafana anuket/barometer-grafana
+
+Check that your docker image is running:
+
+.. 
code:: bash
+
+   sudo docker ps
+
+To make some changes when the container is running run:
+
+.. code:: bash
+
+   sudo docker exec -ti <CONTAINER ID> /bin/bash
+
+Connect to <host_ip>:3000 with a browser and log into grafana: admin/admin
+
+Cleanup of influxdb/grafana configuration
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+When the user wants to remove the current grafana and influxdb configuration,
+the following actions have to be performed:
+
+1. Stop and remove the running influxdb and grafana containers
+
+.. code:: bash
+
+   sudo docker rm -f bar-grafana bar-influxdb
+
+2. Remove the shared influxdb and grafana folders from the host
+
+.. code:: bash
+
+   sudo rm -rf /var/lib/grafana
+   sudo rm -rf /var/lib/influxdb
+
+.. note::
+   The shared folders store the configuration of the grafana and influxdb
+   containers. When changing the influxdb or grafana configuration
+   (e.g. moving influxdb to another host), it is good practice to clean up the
+   shared folders so that the new setup is not affected by an old configuration.
+
+Build and Run VES and Kafka Docker Images
+-----------------------------------------
+
+Download VES and Kafka docker images
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+If you wish to use the barometer project's pre-built VES and kafka images, you can pull the
+images from https://hub.docker.com/r/anuket/barometer-ves/ and https://hub.docker.com/r/anuket/barometer-kafka/
+
+.. note::
+   If your preference is to build the images locally, please see sections `Build Kafka Docker Image`_ and
+   `Build VES Docker Image`_.
+
+.. code:: bash
+
+   $ docker pull anuket/barometer-kafka
+   $ docker pull anuket/barometer-ves
+
+.. note::
+   If you have pulled the pre-built images, there is no requirement to complete the steps outlined
+   in sections `Build Kafka Docker Image`_ and `Build VES Docker Image`_ and you can proceed directly to section
+   `Run Kafka Docker Image`_.
+
+Build Kafka docker image
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+Build the Kafka docker image:
+
+.. 
code:: bash
+
+   $ cd barometer/docker/barometer-kafka
+   $ sudo docker build -t anuket/barometer-kafka --build-arg http_proxy=`echo $http_proxy` \
+     --build-arg https_proxy=`echo $https_proxy` -f Dockerfile .
+
+.. note::
+   In the above mentioned ``docker build`` command, the http_proxy and https_proxy arguments need
+   to be passed only if the system is behind an HTTP or HTTPS proxy server.
+
+Check the docker images:
+
+.. code:: bash
+
+   $ sudo docker images
+
+Output should contain a barometer image:
+
+.. code::
+
+   REPOSITORY              TAG     IMAGE ID      CREATED       SIZE
+   anuket/barometer-kafka  latest  75a0860b8d6e  2 months ago  902MB
+
+Build VES docker image
+^^^^^^^^^^^^^^^^^^^^^^
+
+Build the VES application docker image:
+
+.. code:: bash
+
+   $ cd barometer/docker/barometer-ves
+   $ sudo docker build -t anuket/barometer-ves --build-arg http_proxy=`echo $http_proxy` \
+     --build-arg https_proxy=`echo $https_proxy` -f Dockerfile .
+
+.. note::
+   In the above mentioned ``docker build`` command, the http_proxy and https_proxy arguments need
+   to be passed only if the system is behind an HTTP or HTTPS proxy server.
+
+Check the docker images:
+
+.. code:: bash
+
+   $ sudo docker images
+
+Output should contain a barometer image:
+
+.. code::
+
+   REPOSITORY            TAG     IMAGE ID      CREATED       SIZE
+   anuket/barometer-ves  latest  36a4a953e1b4  2 months ago  723MB
+
+Run Kafka docker image
+^^^^^^^^^^^^^^^^^^^^^^
+
+.. note::
+   Before running Kafka, an instance of Zookeeper must be running for the Kafka broker to register
+   with. Zookeeper can be running locally or on a remote platform. Kafka's broker_id and the address
+   of its zookeeper instance can be configured by setting values for the environment variables
+   'broker_id' and 'zookeeper_node'. Where 'broker_id' and/or 'zookeeper_node' is not set, the default
+   setting of broker_id=0 and zookeeper_node=localhost is used. 
In instances where Zookeeper is running
+   on the same node as Kafka and there is a one-to-one relationship between Zookeeper and Kafka, the
+   default settings can be used. The docker argument `add-host` adds a hostname and IP address to the
+   /etc/hosts file in the container.
+
+Run the zookeeper docker image:
+
+.. code:: bash
+
+   $ sudo docker run -tid --net=host -p 2181:2181 zookeeper:3.4.11
+
+Run the kafka docker image, which connects with a zookeeper instance running on the same node with a 1:1 relationship:
+
+.. code:: bash
+
+   $ sudo docker run -tid --net=host -p 9092:9092 anuket/barometer-kafka
+
+
+Run the kafka docker image, which connects with a zookeeper instance running on a node with an IP address of
+192.168.121.111, using a broker ID of 1:
+
+.. code:: bash
+
+   $ sudo docker run -tid --net=host -p 9092:9092 --env broker_id=1 --env zookeeper_node=zookeeper --add-host \
+     zookeeper:192.168.121.111 anuket/barometer-kafka
+
+Run VES Application docker image
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+.. note::
+   The VES application uses the configuration file ves_app_config.conf from the directory
+   barometer/3rd_party/collectd-ves-app/ves_app/config/ and the host.yaml file from
+   barometer/3rd_party/collectd-ves-app/ves_app/yaml/ by default. If you wish to use a custom config
+   file, it should be mounted to mount point /opt/ves/config/ves_app_config.conf. To use an alternative yaml
+   file from the folder barometer/3rd_party/collectd-ves-app/ves_app/yaml, the name of the yaml file to use
+   should be passed as an additional command. If you wish to use a custom file, the file should be
+   mounted to mount point /opt/ves/yaml/. Please see the examples below.
+
+Run the VES docker image with the default configuration:
+
+.. code:: bash
+
+   $ sudo docker run -tid --net=host anuket/barometer-ves
+
+Run the VES docker image with the guest.yaml file from barometer/3rd_party/collectd-ves-app/ves_app/yaml/:
+
+.. code:: bash
+
+   $ sudo docker run -tid --net=host anuket/barometer-ves guest.yaml
+
+
+Run the VES docker image using custom config and yaml files. 
In the example below, the yaml/ folder contains
+a file named custom.yaml:
+
+.. code:: bash
+
+   $ sudo docker run -tid --net=host -v ${PWD}/custom.config:/opt/ves/config/ves_app_config.conf \
+     -v ${PWD}/yaml/:/opt/ves/yaml/ anuket/barometer-ves custom.yaml
+
+Run VES Test Collector application
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The VES Test Collector application can be used for displaying platform-wide
+metrics that are collected by the barometer-ves container.
+Setup instructions are located in: :ref:`Setup VES Test Collector`
+
+Build and Run DMA and Redis Docker Images
+-----------------------------------------
+
+Download DMA docker images
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+If you wish to use the barometer project's pre-built DMA images, you can pull the
+images from https://hub.docker.com/r/opnfv/barometer-dma/
+
+.. note::
+   If your preference is to build the images locally, please see section `Build DMA Docker Image`_.
+
+.. code:: bash
+
+   $ docker pull opnfv/barometer-dma
+
+.. note::
+   If you have pulled the pre-built images, there is no requirement to complete the steps outlined
+   in section `Build DMA Docker Image`_ and you can proceed directly to section
+   `Run DMA Docker Image`_.
+
+Build DMA docker image
+^^^^^^^^^^^^^^^^^^^^^^
+
+Build the DMA docker image:
+
+.. code:: bash
+
+   $ cd barometer/docker/barometer-dma
+   $ sudo docker build -t opnfv/barometer-dma --build-arg http_proxy=`echo $http_proxy` \
+     --build-arg https_proxy=`echo $https_proxy` -f Dockerfile .
+
+.. note::
+   In the above mentioned ``docker build`` command, the http_proxy and https_proxy arguments need
+   to be passed only if the system is behind an HTTP or HTTPS proxy server.
+
+Check the docker images:
+
+.. code:: bash
+
+   $ sudo docker images
+
+Output should contain a barometer image:
+
+.. code::
+
+   REPOSITORY           TAG     IMAGE ID      CREATED      SIZE
+   opnfv/barometer-dma  latest  2f14fbdbd498  3 hours ago  941 MB
+
+Run Redis docker image
+^^^^^^^^^^^^^^^^^^^^^^
+
+.. note::
+   Before running DMA, Redis must be running.
+
+Run the Redis docker image:
+
+.. code:: bash
+
+   $ sudo docker run -tid -p 6379:6379 --name barometer-redis redis
+
+Check that your docker image is running:
+
+.. code:: bash
+
+   sudo docker ps
+
+Run DMA docker image
+^^^^^^^^^^^^^^^^^^^^
+
+Run the DMA docker image with the default configuration:
+
+.. code:: bash
+
+   $ cd barometer/docker/barometer-dma
+   $ sudo mkdir /etc/barometer-dma
+   $ sudo cp ../../src/dma/examples/config.toml /etc/barometer-dma/
+   $ sudo vi /etc/barometer-dma/config.toml
+   (edit amqp_password and os_password: OpenStack admin password)
+
+   $ sudo su -
+   (When there is no key for SSH access authentication)
+   # ssh-keygen
+   (Press Enter until done)
+   (Backup if necessary)
+   # cp ~/.ssh/authorized_keys ~/.ssh/authorized_keys_org
+   # cat ~/.ssh/authorized_keys_org ~/.ssh/id_rsa.pub \
+     > ~/.ssh/authorized_keys
+   # exit
+
+   $ sudo docker run -tid --net=host --name server \
+     -v /etc/barometer-dma:/etc/barometer-dma \
+     -v /root/.ssh/id_rsa:/root/.ssh/id_rsa \
+     -v /etc/collectd/collectd.conf.d:/etc/collectd/collectd.conf.d \
+     opnfv/barometer-dma /server
+
+   $ sudo docker run -tid --net=host --name infofetch \
+     -v /etc/barometer-dma:/etc/barometer-dma \
+     -v /var/run/libvirt:/var/run/libvirt \
+     opnfv/barometer-dma /infofetch
+
+   (Execute when installing the threshold evaluation binary)
+   $ sudo docker cp infofetch:/threshold ./
+   $ sudo ln -s ${PWD}/threshold /usr/local/bin/
+
+References
+^^^^^^^^^^
+.. [1] https://docs.docker.com/config/daemon/systemd/#httphttps-proxy
+.. [2] https://docs.docker.com/engine/install/centos/#install-using-the-repository
+.. [3] https://docs.docker.com/engine/userguide/
+
+
diff --git a/docs/release/userguide/installguide.oneclick.rst b/docs/release/userguide/installguide.oneclick.rst
new file mode 100644
index 00000000..78203a12
--- /dev/null
+++ b/docs/release/userguide/installguide.oneclick.rst
@@ -0,0 +1,410 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. 
http://creativecommons.org/licenses/by/4.0
+.. (c) Anuket and others
+.. _barometer-oneclick-userguide:
+
+========================================
+Anuket Barometer One Click Install Guide
+========================================
+
+.. contents::
+   :depth: 3
+   :local:
+
+The intention of this user guide is to outline how to use the ansible
+playbooks for a one-click installation of Barometer. A more in-depth
+installation guide is available with the
+:ref:`Docker user guide <barometer-docker-userguide>`.
+
+
+One Click Install with Ansible
+------------------------------
+
+
+Proxy for package manager on host
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+.. note::
+   This step has to be performed only if the host is behind an HTTP/HTTPS proxy.
+
+The proxy URL has to be set in a dedicated config file:
+
+1. CentOS - ``/etc/yum.conf``
+
+.. code:: bash
+
+   proxy=http://your.proxy.domain:1234
+
+2. Ubuntu - ``/etc/apt/apt.conf``
+
+.. code:: bash
+
+   Acquire::http::Proxy "http://your.proxy.domain:1234";
+
+After updating the config file, the apt mirrors have to be updated via
+``apt-get update``:
+
+.. code:: bash
+
+   $ sudo apt-get update
+
+Proxy environment variables (for docker and pip)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+.. note::
+   This step has to be performed only if the host is behind an HTTP/HTTPS proxy.
+
+Configuring a proxy for the packaging system is not enough; some proxy
+environment variables also have to be set in the system before the ansible
+scripts can be started.
+Barometer configures the docker proxy automatically via an ansible task as a
+part of the *one click install* process - the user only has to provide the
+proxy URL using common shell environment variables and ansible will
+automatically configure proxies for docker (to be able to fetch barometer
+images). Other components used by ansible (e.g. pip, which is used for
+downloading python dependencies) will also benefit from the proxy variables
+being set properly in the system.
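Before starting the playbooks, it can help to confirm that the variables are actually exported in the current shell. A minimal POSIX shell sketch (the variable names are the standard lowercase proxy variables; unset values are shown as ``<unset>``):

```bash
# Sketch: print each proxy-related variable so a missing one is easy to
# spot before running the one click install.
summary=""
for v in http_proxy https_proxy ftp_proxy no_proxy; do
    # indirect lookup via eval, with a visible placeholder for unset vars
    eval "val=\${$v:-<unset>}"
    summary="${summary}${v}=${val}
"
done
printf '%s' "$summary"
```

Remember that the same variables must also be visible to the superuser, e.g. via ``/etc/environment`` as shown below.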
+
+Proxy variables used by the ansible One Click Install:
+ * ``http_proxy``
+ * ``https_proxy``
+ * ``ftp_proxy``
+ * ``no_proxy``
+
+The variables mentioned above have to be visible to the superuser (because most
+actions involving the ``ansible-barometer`` installation require root privileges).
+Proxy variables are commonly defined in the ``/etc/environment`` file (but any
+other place is fine as long as the variables can be seen by commands using ``su``).
+
+Sample proxy configuration in ``/etc/environment``:
+
+.. code:: bash
+
+   http_proxy=http://your.proxy.domain:1234
+   https_proxy=http://your.proxy.domain:1234
+   ftp_proxy=http://your.proxy.domain:1234
+   no_proxy=localhost
+
+Install Ansible
+^^^^^^^^^^^^^^^
+.. note::
+   * sudo permissions or root access are required to install ansible.
+   * the ansible version needs to be 2.4+, because of the usage of
+     import/include statements
+
+The following steps have been verified with Ansible 2.6.3 on Ubuntu 16.04 and 18.04.
+To install Ansible 2.6.3 on Ubuntu:
+
+.. code:: bash
+
+   $ sudo apt-get install python
+   $ sudo apt-get install python-pip
+   $ sudo -H pip install 'ansible==2.6.3'
+   $ sudo apt-get install git
+
+The following steps have been verified with Ansible 2.6.3 on CentOS 7.5.
+To install Ansible 2.6.3 on CentOS:
+
+.. code:: bash
+
+   $ sudo yum install python
+   $ sudo yum install epel-release
+   $ sudo yum install python-pip
+   $ sudo -H pip install 'ansible==2.6.3'
+   $ sudo yum install git
+
+.. note::
+   When using a multi-node setup, please make sure that the ``python`` package
+   is installed on all of the target nodes (during the 'Gathering facts' phase
+   ansible uses ``python2``, which may not be installed by default on some
+   distributions - e.g. on Ubuntu 16.04 it has to be installed manually)
+
+Clone barometer repo
+^^^^^^^^^^^^^^^^^^^^
+
+.. 
code:: bash + + $ git clone https://gerrit.opnfv.org/gerrit/barometer + $ cd barometer + +Install ansible dependencies +^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +To run the ansible playbooks for the one-click install, additional dependencies are needed. +There are listed in requirements.yml and can be installed using:: + + $ ansible-galaxy install -r $barometer_dir/requirements.yml + + +Edit inventory file +^^^^^^^^^^^^^^^^^^^ +Edit inventory file and add hosts: +``$barometer_dir/docker/ansible/default.inv`` + +.. code:: bash + + [collectd_hosts] + localhost + + [collectd_hosts:vars] + install_mcelog=true + insert_ipmi_modules=true + #to use master or experimental container set the collectd flavor below + #possible values: stable|master|experimental + flavor=stable + + [influxdb_hosts] + #hostname or ip must be used. + #using localhost will cause issues with collectd network plugin. + #hostname + + [grafana_hosts] + #NOTE: As per current support, Grafana and Influxdb should be same host. + #hostname + + [prometheus_hosts] + #localhost + + [zookeeper_hosts] + #NOTE: currently one zookeeper host is supported + #hostname + + [kafka_hosts] + #hostname + + [ves_hosts] + #hostname + +Change localhost to different hosts where neccessary. +Hosts for influxdb and grafana are required only for ``collectd_service.yml``. +Hosts for zookeeper, kafka and ves are required only for ``collectd_ves.yml``. + +.. note:: + Zookeeper, Kafka and VES need to be on the same host, there is no + support for multi node setup. + +To change host for kafka edit ``kafka_ip_addr`` in +``./roles/config_files/vars/main.yml``. + +Additional plugin dependencies +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +By default ansible will try to fulfill dependencies for ``mcelog`` and +``ipmi`` plugin. For ``mcelog`` plugin it installs mcelog daemon. For ipmi it +tries to insert ``ipmi_devintf`` and ``ipmi_si`` kernel modules. 
+This can be changed in the inventory file with the use of the variables
+``install_mcelog`` and ``insert_ipmi_modules``; both variables are
+independent:
+
+.. code:: bash
+
+   [collectd_hosts:vars]
+   install_mcelog=false
+   insert_ipmi_modules=false
+
+.. note::
+   On Ubuntu 18.04 the deb package for the mcelog daemon is not available in
+   the official Ubuntu repository. In that case the ansible scripts will try
+   to download, make and install the daemon from the mcelog git repository.
+
+Configure ssh keys
+^^^^^^^^^^^^^^^^^^
+
+Generate ssh keys if not present; otherwise move on to the next step.
+ssh keys are required for Ansible to connect to the hosts you use for the
+Barometer installation.
+
+.. code:: bash
+
+   $ sudo ssh-keygen
+
+Copy the ssh key to all target hosts. You will be asked to provide the root
+password. The example is for ``localhost``.
+
+.. code:: bash
+
+   $ sudo -i
+   $ ssh-copy-id root@localhost
+
+Verify that the key is added and a password is not required to connect.
+
+.. code:: bash
+
+   $ sudo ssh root@localhost
+
+.. note::
+   Keys should be added to every target host and [localhost] is only used as
+   an example. For a multi-node installation keys need to be copied for each
+   node: [collectd_hostname], [influxdb_hostname] etc.
+
+Build the Collectd containers
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+This is an optional step; if you do not wish to build the containers locally,
+please continue to `Download and run Collectd+Influxdb+Grafana containers`_.
+This step will build the container images locally, allowing for testing of new
+changes to collectd. This is particularly useful for the ``experimental``
+flavor for testing PRs, and for building a ``collectd-6`` container.
+
+To run the playbook and build the containers, run::
+
+   sudo ansible-playbook docker/ansible/collectd_build.yml
+
+By default, all containers will be built.
+Since this can take a while, it is recommended that you choose a flavor to
+build using tags::
+
+   sudo ansible-playbook docker/ansible/collectd_build.yml --tags='collectd-6,latest'
+
+The available tags are:
+
+* *stable* builds the ``barometer-collectd`` image
+* *latest* builds the ``barometer-collectd-latest`` image
+* *experimental* builds the ``barometer-collectd-experimental`` container, with optional PRs
+* *collectd-6* builds the ``barometer-collectd-6`` container, with optional PR(s)
+
+* *flask_test* builds a small webapp that displays the metrics sent via the write_http plugin
+
+.. note::
+   The flask_test tag must be explicitly enabled.
+   This can be done either through ``--tags='flask_test'`` (to build just
+   this container) or with ``--tags=all`` to build this and all the other
+   containers as well.
+
+Download and run Collectd+Influxdb+Grafana containers
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The One Click installation features an easy and scalable deployment of the
+Collectd, Influxdb and Grafana containers using an Ansible playbook. The
+following steps go through more details.
+
+.. code:: bash
+
+   $ sudo -H ansible-playbook -i default.inv collectd_service.yml
+
+Check that the three containers are running; the output of ``docker ps``
+should be similar to:
+
+.. code:: bash
+
+   $ sudo docker ps
+   CONTAINER ID   IMAGE                       COMMAND                  CREATED             STATUS         PORTS   NAMES
+   4c2143fb6bbd   anuket/barometer-grafana    "/run.sh"                59 minutes ago      Up 4 minutes           bar-grafana
+   5e356cb1cb04   anuket/barometer-influxdb   "/entrypoint.sh infl…"   59 minutes ago      Up 4 minutes           bar-influxdb
+   2ddac8db21e2   anuket/barometer-collectd   "/run_collectd.sh"       About an hour ago   Up 4 minutes           bar-collectd
+
+To make changes while a container is running, run:
+
+.. code:: bash
+
+   $ sudo docker exec -ti <CONTAINER ID> /bin/bash
+
+Connect to ``<host_ip>:3000`` with a browser and log into Grafana: admin/admin.
+For a short introduction, please see the
+`Grafana guide <https://grafana.com/docs/grafana/latest/guides/getting_started/>`_.
+
+The collectd configuration files can be accessed directly on the target system
+in ``/opt/collectd/etc/collectd.conf.d``. They can be used for manual changes
+or to enable/disable plugins. If the configuration has been modified, collectd
+has to be restarted:
+
+.. code:: bash
+
+   $ sudo docker restart bar-collectd
+
+Download and run collectd+kafka+ves containers
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code:: bash
+
+   $ sudo ansible-playbook -i default.inv collectd_ves.yml
+
+Check that the containers are running; the output of ``docker ps`` should be
+similar to:
+
+.. code:: bash
+
+   $ sudo docker ps
+   CONTAINER ID   IMAGE                       COMMAND                  CREATED         STATUS         PORTS   NAMES
+   d041d8fff849   zookeeper:3.4.11            "/docker-entrypoint.…"   2 minutes ago   Up 2 minutes           bar-zookeeper
+   da67b81274bc   anuket/barometer-ves        "./start_ves_app.sh …"   2 minutes ago   Up 2 minutes           bar-ves
+   2c25e0c79f93   anuket/barometer-kafka      "/src/start_kafka.sh"    2 minutes ago   Up 2 minutes           bar-kafka
+   b161260c90ed   anuket/barometer-collectd   "/run_collectd.sh"       2 minutes ago   Up 2 minutes           bar-collectd
+
+
+To make changes while a container is running, run:
+
+.. code:: bash
+
+   $ sudo docker exec -ti <CONTAINER ID> /bin/bash
+
+List of default plugins for collectd container
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+.. note::
+   From the Jerma release, the supported dpdk version is 19.11.
+
+   If you would like to use v18.11, make the following changes:
+
+   1. Update the dpdk version to v18.11 in ``<barometer>/src/package-list.mk``
+   2. Replace all ``common_linux`` strings with ``common_linuxapp`` in ``<barometer>/src/dpdk/Makefile``
+
+   If you would like to downgrade to a version lower than v18.11, make the following changes:
+
+   1. Update the dpdk version to a version lower than v18.11 (e.g. v16.11) in ``<barometer>/src/package-list.mk``
+   2. Replace all ``common_linux`` strings with ``common_linuxapp`` in ``<barometer>/src/dpdk/Makefile``
+   3. Change the Makefile path from ``(WORKDIR)/kernel/linux/kni/Makefile`` to ``(WORKDIR)/lib/librte_eal/linuxapp/kni/Makefile`` in ``(WORK_DIR)/src/dpdk/Makefile``.
+
+By default, collectd is started with a default configuration which includes
+the following plugins:
+
+* ``csv``, ``contextswitch``, ``cpu``, ``cpufreq``, ``df``, ``disk``,
+  ``ethstat``, ``ipc``, ``irq``, ``load``, ``memory``, ``numa``,
+  ``processes``, ``swap``, ``turbostat``, ``uuid``, ``uptime``, ``exec``,
+  ``hugepages``, ``intel_pmu``, ``ipmi``, ``write_kafka``, ``logfile``,
+  ``logparser``, ``mcelog``, ``network``, ``intel_rdt``, ``rrdtool``,
+  ``snmp_agent``, ``syslog``, ``virt``, ``ovs_stats``, ``ovs_events``,
+  ``dpdk_telemetry``.
+
+.. note::
+   Some of the plugins are loaded depending on specific system requirements
+   and can be omitted if a dependency is not met; this is the case for:
+
+   * ``hugepages``, ``ipmi``, ``mcelog``, ``intel_rdt``, ``virt``, ``ovs_stats``, ``ovs_events``
+
+   For instructions on how to disable certain plugins see the `List and description of tags used in ansible scripts`_ section.
+
+List and description of tags used in ansible scripts
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Tags can be used to run a specific part of the configuration without running
+the whole playbook. To run specific parts only:
+
+.. code:: bash
+
+   $ sudo ansible-playbook -i default.inv collectd_service.yml --tags "syslog,cpu,uuid"
+
+To disable some parts or plugins:
+
+.. code:: bash
+
+   $ sudo ansible-playbook -i default.inv collectd_service.yml --skip-tags "en_default_all,syslog,cpu,uuid"
+
+List of available tags:
+
+``install_docker``
+   Install docker and the required dependencies with the package manager.
+
+``add_docker_proxy``
+   Configure the proxy file for the docker service if a proxy is set in the
+   host environment.
+
+``rm_config_dir``
+   Remove the collectd config files.
+
+``copy_additional_configs``
+   Copy additional configuration files to the target system. The path to the
+   additional configuration is stored in
+   ``$barometer_dir/docker/ansible/roles/config_files/docs/main.yml`` as
+   ``additional_configs_path``.
+
+``en_default_all``
+   Set of default read plugins: ``contextswitch``, ``cpu``, ``cpufreq``, ``df``,
+   ``disk``, ``ethstat``, ``ipc``, ``irq``, ``load``, ``memory``, ``numa``,
+   ``processes``, ``swap``, ``turbostat``, ``uptime``.
+
+``plugins tags``
+   The following tags can be used to enable/disable plugins: ``csv``,
+   ``contextswitch``, ``cpu``, ``cpufreq``, ``df``, ``disk``, ``ethstat``,
+   ``ipc``, ``irq``, ``load``, ``memory``, ``numa``, ``processes``, ``swap``,
+   ``turbostat``, ``uptime``, ``exec``, ``hugepages``, ``ipmi``, ``kafka``,
+   ``logfile``, ``logparser``, ``mcelog``, ``network``, ``pmu``, ``rdt``,
+   ``rrdtool``, ``snmp``, ``syslog``, ``unixsock``, ``virt``, ``ovs_stats``,
+   ``ovs_events``, ``uuid``, ``dpdk_telemetry``.
+
diff --git a/docs/release/userguide/ves-app-guest-mode.png b/docs/release/userguide/ves-app-guest-mode.png
Binary files differ
index 4d05dae2..45ffbc43 100644
--- a/docs/release/userguide/ves-app-guest-mode.png
+++ b/docs/release/userguide/ves-app-guest-mode.png
diff --git a/docs/release/userguide/ves-app-host-mode.png b/docs/release/userguide/ves-app-host-mode.png
Binary files differ
index 5a21d3a8..fd9e5592 100644
--- a/docs/release/userguide/ves-app-host-mode.png
+++ b/docs/release/userguide/ves-app-host-mode.png
diff --git a/docs/release/userguide/ves-app-hypervisor-mode.png b/docs/release/userguide/ves-app-hypervisor-mode.png
Binary files differ
index 5a58787c..25dac94b 100644
--- a/docs/release/userguide/ves-app-hypervisor-mode.png
+++ b/docs/release/userguide/ves-app-hypervisor-mode.png