==============================================================================
Readme for collectd docker container in barometer project
==============================================================================
This text file includes information about environment preparation and
deployment of collectd in a docker container.
Table of contents:
1. DESCRIPTION
2. SYSTEM REQUIREMENTS
3. INSTALLATION NOTES - barometer-collectd
4. INSTALLATION NOTES - barometer-collectd-master
5. ADDITIONAL STEPS
------------------------------------------------------------------------------
1. DESCRIPTION
This Dockerfile provides instructions for building collectd in an isolated container.
There are currently two variants of the collectd container:
- barometer-collectd - based on a stable collectd release
- barometer-collectd-master - development container that is based on the
  latest 'master' branch of the collectd project. It contains all collectd
  plugins and features available on the 'master' branch, but some issues
  with configuration or stability may occur
------------------------------------------------------------------------------
2. SYSTEM REQUIREMENTS
Docker >= 17.06.0-ce
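The installed docker version can be checked against this minimum with a small
shell snippet. This is a sketch; it assumes `docker --version` prints a line
such as "Docker version 17.06.0-ce, build ...".

```shell
# Sketch: verify the host docker meets the 17.06.0-ce minimum.
# Assumes 'docker --version' prints "Docker version <x.y.z>-ce, build ...".
required="17.06.0"
current=$(docker --version 2>/dev/null | sed -e 's/^Docker version \([0-9.]*\).*/\1/')
# sort -V orders version strings; if the required version sorts first
# (or equal), the installed version is new enough.
if [ -n "$current" ] && \
   [ "$(printf '%s\n' "$required" "$current" | sort -V | head -n1)" = "$required" ]; then
  echo "docker $current: OK (>= $required)"
else
  echo "docker too old or not installed (need >= $required)"
fi
```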
------------------------------------------------------------------------------
3. INSTALLATION NOTES: barometer-collectd (stable container)
To build a docker container based on a stable collectd release, run the
following from the barometer folder:
sudo docker build -f ./docker/barometer-collectd/Dockerfile ./docker/barometer-collectd
To run the built image:
sudo docker images # get docker image id
sudo docker run -ti --net=host -v `pwd`/src/collectd/collectd_sample_configs:/opt/collectd/etc/collectd.conf.d \
-v /var/run:/var/run -v /tmp:/tmp --privileged <image id>
To make changes inside the container, override the entrypoint with a shell and
start collectd manually:
sudo docker run -ti --net=host -v `pwd`/src/collectd/collectd_sample_configs:/opt/collectd/etc/collectd.conf.d \
-v /var/run:/var/run -v /tmp:/tmp --privileged --entrypoint=/bin/bash <image id>
/opt/collectd/sbin/collectd -f
------------------------------------------------------------------------------
4. INSTALLATION NOTES: barometer-collectd-master (development container)
To build the barometer-collectd-master docker container (based on the collectd
'master' branch), run the following from the root barometer folder:
sudo docker build -f ./docker/barometer-collectd-master/Dockerfile .
To run the built image:
sudo docker images # get docker image id
sudo docker run -ti --net=host -v `pwd`/src/collectd/collectd_sample_configs-master:/opt/collectd/etc/collectd.conf.d \
-v /var/run:/var/run -v /tmp:/tmp --privileged <image id>
NOTE: the barometer-collectd-master container uses different sample
configuration files compared to the regular barometer-collectd container
(src/collectd/collectd_sample_configs-master)
To make changes inside the container, override the entrypoint with a shell and
start collectd manually:
sudo docker run -ti --net=host -v `pwd`/src/collectd/collectd_sample_configs-master:/opt/collectd/etc/collectd.conf.d \
-v /var/run:/var/run -v /tmp:/tmp --privileged --entrypoint=/bin/bash <image id>
/opt/collectd/sbin/collectd -f
------------------------------------------------------------------------------
5. ADDITIONAL STEPS
To check that the container works properly, additional packages should be
installed on the host system.
DEVELOPMENT TOOLS
sudo yum group install "Development Tools"
MCELOG
To simulate an mcelog message, follow the instructions at http://artifacts.opnfv.org/barometer/docs/index.html#mcelog-plugin
git clone https://github.com/andikleen/mce-inject
cd mce-inject/
make
sudo make install
modprobe mce-inject
Go to the mcelog folder and run:
sudo make test
If run multiple times, the mcelog service should be restarted (because
'make test' closes mcelog on exit)
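A single corrected error can also be injected by hand instead of running the
whole 'make test' target. The description below is a sketch: the exact field
grammar is defined by mce-inject itself, and the sample files shipped in the
mce-inject test/ directory are the authoritative reference for it.

```shell
# Sketch: inject one corrected memory error manually.
# The CPU/BANK/STATUS/ADDR values here are illustrative; check the sample
# descriptions under mce-inject/test/ for the exact input grammar.
cat > /tmp/corrected-mce <<'EOF'
CPU 0 BANK 2
STATUS CORRECTED
ADDR 0x1000
EOF
sudo modprobe mce-inject        # the injection interface must be loaded
sudo mce-inject /tmp/corrected-mce
```

The injected event should then appear in the mcelog output picked up by the
collectd mcelog plugin.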
VIRT
http://artifacts.opnfv.org/barometer/docs/index.html#virt-plugin
Check that libvirtd is running on the remote host
systemctl status libvirtd
virsh list
virsh perf instance-00000003
sudo virsh perf instance-00000003 --enable cpu_cycles --live
sudo virsh perf instance-00000003 --enable cmt --live
sudo virsh perf instance-00000003 --enable mbmt --live
sudo virsh perf instance-00000003 --enable mbml --live
sudo virsh perf instance-00000003 --enable instructions --live
sudo virsh perf instance-00000003 --enable cache_references --live
sudo virsh perf instance-00000003 --enable cache_misses --live
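The per-event commands above can be collapsed into one loop. This is a sketch:
instance-00000003 is the example domain name from 'virsh list' above, and the
VIRSH variable can be pointed at 'echo' for a dry run on a host without
libvirt.

```shell
# Sketch: enable a set of perf events for one domain in a loop.
# DOM defaults to the example domain used above; override it for your guest.
# Set VIRSH=echo to dry-run the commands without libvirt installed.
VIRSH="${VIRSH:-sudo virsh}"
DOM="${DOM:-instance-00000003}"
for ev in cpu_cycles cmt mbmt mbml instructions cache_references cache_misses; do
  $VIRSH perf "$DOM" --enable "$ev" --live
done
```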
OVS
To successfully run the ovs plugins in Docker you need an ovs instance to
connect to
sudo yum install -y openvswitch-switch
sudo service openvswitch-switch start
sudo ovs-vsctl set-manager ptcp:6640
Alternatively you can build ovs from source
yum -y install make gcc openssl-devel autoconf automake rpm-build \
redhat-rpm-config python-devel openssl-devel kernel-devel \
kernel-debug-devel libtool wget python-six selinux-policy-devel
mkdir -p ~/rpmbuild/SOURCES
cd ~/rpmbuild/SOURCES
wget http://openvswitch.org/releases/openvswitch-2.5.3.tar.gz
tar xfz openvswitch-2.5.3.tar.gz
cd openvswitch-2.5.3
sed 's/openvswitch-kmod, //g' rhel/openvswitch.spec > rhel/openvswitch_no_kmod.spec
rpmbuild -bb --nocheck rhel/openvswitch_no_kmod.spec
cd ~/rpmbuild/RPMS/x86_64/
yum install -y openvswitch-2.5.3-1.x86_64.rpm
sudo systemctl start openvswitch.service
sudo ovs-vsctl set-manager ptcp:6640
To check if the connection is successful, run
sudo ovs-vsctl show
on the host. Example output:
319efc53-b321-49a9-b628-e8d70f9bd8a9
Manager "ptcp:6640"
is_connected: true
ovs_version: "2.5.3"
The 'is_connected: true' line indicates that the ovs plugins have successfully
connected to the ovs instance.
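The same check can be scripted: ovs-vsctl can list just the Manager table, so
the connection state can be tested without reading the full 'show' output by
eye. A minimal sketch:

```shell
# Sketch: test the manager connection state directly.
# 'list Manager' prints the Manager table record; its is_connected column
# becomes true once a client (e.g. the collectd ovs plugins) has connected.
if sudo ovs-vsctl --columns=is_connected list Manager | grep -q 'true'; then
  echo "ovs manager connection is up"
else
  echo "no client connected to ptcp:6640 yet"
fi
```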