==============================================================================
Readme for collectd docker container in barometer project
==============================================================================

This text file includes information about environment preparation and
deployment of collectd in a docker container.

Table of contents:
1. DESCRIPTION
2. SYSTEM REQUIREMENTS
3. INSTALLATION NOTES - barometer-collectd
4. INSTALLATION NOTES - barometer-collectd-master
5. ADDITIONAL STEPS

------------------------------------------------------------------------------
1. DESCRIPTION

This Dockerfile provides instructions for building collectd in an isolated container.
There are currently two variants of the collectd container:
 - barometer-collectd - based on a stable collectd release
 - barometer-collectd-master - a development container based on the
   latest 'master' branch of the collectd project. It contains all collectd
   plugins and features available on the 'master' branch, but some
   configuration or stability issues may occur

------------------------------------------------------------------------------
2. SYSTEM REQUIREMENTS

  Docker >= 17.06.0-ce
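
To check which Docker version is installed on the host, you can run:
sudo docker --version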

------------------------------------------------------------------------------
3. INSTALLATION NOTES: barometer-collectd (stable container)

To build the docker container based on the stable collectd release, run the
following command from the barometer folder:
sudo docker build -f ./docker/barometer-collectd/Dockerfile ./docker/barometer-collectd
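
Optionally, the image can be tagged at build time so that it can be started by
name instead of by image id (the tag name below is only an example):
sudo docker build -t opnfv/barometer-collectd -f ./docker/barometer-collectd/Dockerfile ./docker/barometer-collectd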

To run the built image, find the image id and start the container:
sudo docker images # get docker image id
sudo docker run -ti --net=host -v `pwd`/src/collectd/collectd_sample_configs:/opt/collectd/etc/collectd.conf.d \
-v /var/run:/var/run -v /tmp:/tmp --privileged <image id>
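
To verify that the container started correctly, you can check from another terminal:
sudo docker ps                    # the container should be listed as running
sudo docker logs <container id>   # container id as shown by docker ps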

To modify the configuration or debug interactively, start the container with a
bash entrypoint:
sudo docker run -ti --net=host -v `pwd`/src/collectd/collectd_sample_configs:/opt/collectd/etc/collectd.conf.d \
-v /var/run:/var/run -v /tmp:/tmp --privileged --entrypoint=/bin/bash <image id>

Inside the container, collectd can then be started in the foreground with:
/opt/collectd/sbin/collectd -f

------------------------------------------------------------------------------
4. INSTALLATION NOTES: barometer-collectd-master (development container)

To build the barometer-collectd-master docker container (based on the collectd
'master' branch), run the following from the root barometer folder:
sudo docker build -f ./docker/barometer-collectd-master/Dockerfile .

To run the built image, find the image id and start the container:
sudo docker images # get docker image id
sudo docker run -ti --net=host -v `pwd`/src/collectd/collectd_sample_configs-master:/opt/collectd/etc/collectd.conf.d \
-v /var/run:/var/run -v /tmp:/tmp --privileged <image id>

NOTE: the barometer-collectd-master container uses a different set of sample
configuration files (src/collectd/collectd_sample_configs-master) than the
regular barometer-collectd container.

To modify the configuration or debug interactively, start the container with a
bash entrypoint:
sudo docker run -ti --net=host -v `pwd`/src/collectd/collectd_sample_configs-master:/opt/collectd/etc/collectd.conf.d \
-v /var/run:/var/run -v /tmp:/tmp --privileged --entrypoint=/bin/bash <image id>

Inside the container, collectd can then be started in the foreground with:
/opt/collectd/sbin/collectd -f

------------------------------------------------------------------------------
5. ADDITIONAL STEPS

To check that the container works properly, additional packages should be
installed on the host system.

DEVELOPMENT TOOLS
sudo yum group install "Development Tools"

MCELOG
To simulate an mcelog message, follow the instructions at http://artifacts.opnfv.org/barometer/docs/index.html#mcelog-plugin

git clone https://github.com/andikleen/mce-inject
cd mce-inject/
make
sudo make install
sudo modprobe mce-inject

Then go to the mcelog folder and run:
sudo make test

If the test is run multiple times, the mcelog service should be restarted,
because running the test causes the mcelog daemon to exit.
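
Assuming mcelog is managed as a systemd service on the host, it can be restarted with:
sudo systemctl restart mcelog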

VIRT
For details see http://artifacts.opnfv.org/barometer/docs/index.html#virt-plugin
Check that libvirtd is running on the remote host:
systemctl status libvirtd
virsh list
virsh perf instance-00000003
sudo virsh perf instance-00000003 --enable cpu_cycles --live
sudo virsh perf instance-00000003 --enable cmt --live
sudo virsh perf instance-00000003 --enable mbmt --live
sudo virsh perf instance-00000003 --enable mbml --live
sudo virsh perf instance-00000003 --enable instructions --live
sudo virsh perf instance-00000003 --enable cache_references --live
sudo virsh perf instance-00000003 --enable cache_misses --live
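
For reference, a minimal sketch of a collectd virt plugin configuration is shown
below; the connection URI assumes a local qemu/KVM hypervisor and may need
adjusting for your setup:
LoadPlugin virt
<Plugin virt>
  Connection "qemu:///system"
  RefreshInterval 60
</Plugin>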

OVS
To successfully run the ovs plugins in Docker, you need an ovs instance to connect to:

sudo yum install -y openvswitch
sudo systemctl start openvswitch.service
sudo ovs-vsctl set-manager ptcp:6640

Alternatively, you can build ovs from source:
sudo yum -y install make gcc openssl-devel autoconf automake rpm-build \
       redhat-rpm-config python-devel kernel-devel \
       kernel-debug-devel libtool wget python-six selinux-policy-devel
mkdir -p ~/rpmbuild/SOURCES
cd ~/rpmbuild/SOURCES
wget http://openvswitch.org/releases/openvswitch-2.5.3.tar.gz
tar xfz openvswitch-2.5.3.tar.gz
sed 's/openvswitch-kmod, //g' openvswitch-2.5.3/rhel/openvswitch.spec > openvswitch-2.5.3/rhel/openvswitch_no_kmod.spec
rpmbuild -bb --nocheck openvswitch-2.5.3/rhel/openvswitch_no_kmod.spec
cd ../RPMS/x86_64/
sudo yum install -y openvswitch-2.5.3-1.x86_64.rpm
sudo systemctl start openvswitch.service
sudo ovs-vsctl set-manager ptcp:6640

To check that the connection is successful, run the following on the host:
sudo ovs-vsctl show

Example output:
319efc53-b321-49a9-b628-e8d70f9bd8a9
    Manager "ptcp:6640"
        is_connected: true
    ovs_version: "2.5.3"

The "is_connected: true" line under the Manager entry indicates that the ovs
plugins can successfully connect to the ovs instance.
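
For reference, a minimal sketch of a collectd ovs_events plugin configuration
pointing at the manager opened above (the address and port values assume the
local ptcp:6640 manager configured earlier):
LoadPlugin ovs_events
<Plugin ovs_events>
  Port "6640"
  Address "127.0.0.1"
  SendNotification true
</Plugin>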