|
Change-Id: I3c9b7acb8972bd34afc01f3aad9a1f5ec9a1b99f
JIRA: 0
Signed-off-by: Dan Smith <daniel.smith@ericsson.com>
|
|
Adds change.sh (for automation of ODL after deployment) and updates the odl_docker.pp file
Change-Id: I7868f8e7381d726f1ce7fd9600ba048ee3119ab7
JIRA: 0
Signed-off-by: Dan Smith <daniel.smith@ericsson.com>
|
|
An external router is needed for Rally to execute correctly, even though it
is not required for tenants to access external networks. This patch
creates that router. Also, the metadata server was not being used because
the password was not set.
JIRA: BGS-55
Change-Id: If25f4f8ee2be3e49193e9e49c370cce68dde45cf
Signed-off-by: Tim Rozet <trozet@redhat.com>
|
|
This patch changes behavior as follows:
External Network:
- openvswitch is now installed at the beginning of the puppet run
- public interface config is changed to be an ovsport on br-ex
- br-ex is created with the IP address formerly on public interface
- neutron is configured to use br-ex
- after neutron is running, an external provider_network and
provider_subnet are created
New global parameters required (only if external_network_flag is true):
- public_gateway
- public_dns
- public_network
- public_subnet
- public_allocation_start
- public_allocation_end
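The post-neutron step described above could look roughly like the following dry-run sketch. The parameter values, the flat network type, and the physnet1 name are hypothetical stand-ins, not taken from the commit; only the provider_network/provider_subnet names follow the commit text:

```shell
#!/bin/sh
# Hypothetical values standing in for the global parameters listed above.
public_network=172.30.9.0/24
public_gateway=172.30.9.1
public_dns=8.8.8.8
public_allocation_start=172.30.9.100
public_allocation_end=172.30.9.200

run() { echo "$@"; }  # dry-run: print the command instead of executing it

# physnet1 and the flat network type are assumptions for illustration.
run neutron net-create provider_network --router:external=True \
    --provider:network_type flat --provider:physical_network physnet1
run neutron subnet-create provider_network "$public_network" \
    --name provider_subnet --gateway "$public_gateway" \
    --dns-nameserver "$public_dns" \
    --allocation-pool "start=$public_allocation_start,end=$public_allocation_end"
```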
Heat is now in HA and added to deployment by default:
Introduces 6 new required global params:
- heat_admin_vip
- heat_private_vip
- heat_public_vip
- heat_cfn_admin_vip
- heat_cfn_private_vip
- heat_cfn_public_vip
JIRA: BGS-31
Change-Id: Ic4428b31c2a3028aa46c4da73e4d0f338b6651d3
Signed-off-by: Tim Rozet <trozet@redhat.com>
|
|
This patch changes parameters from being Linux interface names to being
network subnets. This removes the need to specify the network interface
to the puppet module; the interface is discovered dynamically at puppet
runtime.
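The dynamic lookup could work along these lines; ip_to_int and ip_in_cidr are hypothetical helpers sketching the idea, not the actual quickstack code:

```shell
#!/bin/sh
# Sketch: match a configured subnet to the interface that holds an address
# inside it, instead of naming the interface explicitly.

ip_to_int() {  # dotted quad -> 32-bit integer
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

ip_in_cidr() {  # ip_in_cidr 10.4.9.25 10.4.9.0/24 -> true if ip is in subnet
  net=${2%/*}; bits=${2#*/}
  mask=$(( (0xffffffff << (32 - bits)) & 0xffffffff ))
  [ $(( $(ip_to_int "$1") & mask )) -eq $(( $(ip_to_int "$net") & mask )) ]
}

# At puppet runtime the module would walk the host's addresses, e.g.:
#   ip -o -4 addr show | awk '{print $4}'
# and pick the interface whose address satisfies ip_in_cidr.
```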
JIRA: BGS-42
Change-Id: Ibab114c46dd2ec9fde244b6687bf272849b15d6b
Signed-off-by: Tim Rozet <trozet@redhat.com>
|
|
whitespace"
|
|
PATCHSET3: fixes whitespace
Uses ceph_deploy.pp to create a Ceph cluster that is integrated into
OpenStack. The current model is 1 OSD and 1 Ceph mon per controller,
clustered together, resulting in 3 OSDs and 3 Mons. The network used
for storage is optional, provided by storage_iface. If that variable is
unset, the storage network runs on the private network.
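The storage_iface fallback described above amounts to a simple default. A minimal sketch, assuming a hypothetical private-interface name (the real parameter handling lives in the puppet module):

```shell
#!/bin/sh
# storage_iface is the optional global parameter; empty here to show the fallback.
storage_iface=""
private_iface=eth1   # assumed name for the private-network interface
storage_net_iface=${storage_iface:-$private_iface}  # fall back to private network
echo "$storage_net_iface"
```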
JIRA: BGS-13
Change-Id: I242bfeb18c3f3b1e2fc7f7ed21dbfaa9f58337e8
Signed-off-by: Tim Rozet <trozet@redhat.com>
|
|
PATCHSET2: Fixes whitespace
Module can be used to deploy Ceph monitor and OSD per host.
Relies on https://github.com/stackforge/puppet-ceph/
JIRA: BGS-13
Change-Id: Icf15f85a09f48feed6a2cc7160f03fb0fcfbe9ce
Signed-off-by: Tim Rozet <trozet@redhat.com>
|
|
The NTP class is needed to keep the Ceph cluster in sync. The python-rados
package is now provided by EPEL for Ceph and replaces python-ceph.
QuickStack originally provided python-ceph, but that is now removed.
JIRA: BGS-13
Change-Id: Ia6fb79fc2e5dc54630c7949a1d65629d7b36877c
Signed-off-by: Tim Rozet <trozet@redhat.com>
|
|
These repos are not needed. EPEL contains the necessary packages for
CentOS 7 to install Ceph. The package "python-ceph" has also been
renamed to "python-rados" and that dependency has been removed from
quickstack.
JIRA: BGS-13
Change-Id: I8f76da0acd98ad5bd7348bfd13451dbca58677a5
Signed-off-by: Tim Rozet <trozet@redhat.com>
|
|
installation"
|
|
Uses the Ceph Giant release, as this has been tested to work in the Intel lab.
JIRA: BGS-13
Change-Id: I3c0f533c7fe6104122ce1845acbaffd1ed7bfd48
Signed-off-by: Tim Rozet <trozet@redhat.com>
|
|
Moves the installation functionality from controller_networker to this
puppet module in order to break up functionality that is only needed on
1 out of 3 control nodes. Defaults to port 8081 to avoid conflict with
Swift service.
JIRA: BGS-6
Change-Id: I45550a7e95be04b39c2817d18f4d8c2ea0df69c2
Signed-off-by: Tim Rozet <trozet@redhat.com>
|
|
PATCH SET3: Fixes whitespace: L21, L26
Changes to the module include:
- ha_flag used to indicate mandatory HA parameters, such as vips for
each openstack service, instead of one single controller IP.
- Ceph variables introduced and defaulted for use with Cinder. Control
node also uses these same defaults, along with the Ceph installer.
- Minor fix so the VNC proxy works inside Horizon when consoling in to a
  compute node.
JIRA: BGS-6
Change-Id: I61a2ebc5598e7c044a8b3d694de3daceaabcf53b
Signed-off-by: Tim Rozet <trozet@redhat.com>
|
|
Changes include:
- If ha_flag is set to "true", the Control node will be set up in HA for the
following services: rabbitmq, galera mariadb, keystone, glance, nova,
cinder, horizon, neutron
- Required parameters for HA:
https://gist.github.com/trozet/d3a2a2f88ba750b83008
- Removes the OpenDaylight installation from this puppet manifest. It will
  be part of a separate commit that only installs OpenDaylight, because
  ODL will only run on a single control node in non-HA mode.
JIRA: BGS-6
Change-Id: I77836a5eefc99de265f8f8120ff2fdfd7d6bb72a
Signed-off-by: Tim Rozet <trozet@redhat.com>
|
|
When 'make all' was used to build the ISO, only the release section of the
f_odl_docker Makefile was executed, so the ODL docker image and docker
binary were not created.
This patch also tries to resolve problems that appeared during
execution of start_odl_container.sh on the controller node:
* install and run cgroup-lite to mitigate the 'failed to find the cgroup
  root' error
* use the proper path to the docker image and binary
* do not import the docker image before the daemon is fully initialized
* because the docker binary is delivered by puppet, execute commands
  against it rather than the system binary, which is probably not
  present on the controller node
* do not use the daemon mode ('-d') of 'docker run' if the user wants
  access to the container shell
* fix the name of the start script that is run inside the container when
  daemon mode is used
The file 'fuel/build/f_odl_docker/scripts/start_odl_container.sh' appears
to be unused and a duplicate of:
'fuel/build/f_odl_docker/puppet/modules/opnfv/scripts/start_odl_container.sh'
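Two of the fixes above (running the puppet-delivered binary rather than a system docker, and dropping '-d' when the user wants a shell) could be sketched with a hypothetical command builder; the path and image name are assumptions, not the actual patch code:

```shell
#!/bin/sh
# Hypothetical helper, not the actual patch code.
DOCKER_BIN=/opt/opnfv/odl_docker/docker-latest  # puppet-delivered binary (assumed path)

docker_run_cmd() {
  # $1 = "shell" for an interactive container shell, anything else for daemon mode
  if [ "$1" = "shell" ]; then
    echo "$DOCKER_BIN run -i -t odl_docker /bin/bash"  # no '-d': user keeps the shell
  else
    echo "$DOCKER_BIN run -d odl_docker"               # daemon mode
  fi
}
```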
JIRA:
Change-Id: Ia6064dbacf30902bda557e5d0b631b5f5f207b5e
Signed-off-by: Michal Skalski <mskalski@mirantis.com>
|
|
AUTHOR: DANIEL SMITH
*** PLEASE MERGE ***
UPDATED TO REFLECT INPUT FROM J. BJUREL
- fixed whitespaces
- fixed location of the .erb files (they should have been in templates - the directory was there, just in the wrong spot)
- removed opcheck.pp from common opnfv class
- Removed Debug from fuel/build/Makefile :)
UPDATE: Input from S. Berg and F. Bockners incorporated. A merge is required before we can refactor to a common puppet manifest directory.
This patchset delivers the following functionality:
- implementation of the common/opnfv-puppet structure / move of .pp files and update of f_odl_docker to build / source from there
- creation of f_odl_doc subclass
- fetch of latest stable release of ODL
- fetch of latest docker binary release (TODO: this will be changed in next patchset push)
- build of the docker container with all needed libs and port exposure for DLUX and OVSDB/ODL with OpenStack integration (OVS Manager)
- deployment of both the target odl docker image as well as the docker runtime binary to the control nodes via puppet script.
GENERATES:
- docker-latest - binary of docker for use on control node
- odl_docker_image.tar - a docker container with a ODL controller running DLUX and OVSDB
ENABLE / DISABLE:
- Comment/un-comment "SUBDIRS += f_odl_doc" in the base (fuel/build/) Makefile
Breakdown of Update / Edits per File:
=====================================
fuel/build/Makefile
- Modified include to capture the newly created f_odl_doc directory
fuel/build/f_odl_docker/Makefile
- Fetches libraries and produces two outputs:
docker-latest - binary of docker (actually lxc-docker, because the ODL container runs 12.04 (precise) libraries - i.e. java7, tz 12.04, etc.)
odl_docker_image.tar - this is a docker image defined in ./dockerfile/Dockerfile and contains the ODL distro plus setup and deployment scripts for
runtime on the target control node.
fuel/build/f_odl_docker/dockerfile/Dockerfile
- This Dockerfile defines the packages used in the docker container that will run ODL with the DLUX and OVSDB submodules. It also defines the ports to be
exposed to the HOST OS; the ODL controller thus lives on a private network but is routable via the fuel (10.20.0.0/16) and the
tenant public networks, since docker handles the mapping (see the docker run command in the start_odl_docker.sh script)
fuel/build/f_odl_docker/dockerfile/check_feature.sh
- Simple expect script that starts up a client and checks that the features are installed (used during visual demo only)
TODO - Replace with an API call to ODL Karaf to install features (LOOKUP - don't know how to address Karaf programmatically - LOOKUP)
fuel/build/f_odl_docker/dockerfile/speak.sh
- Expect script called by start_odl_docker.sh once Karaf is up to install the features that we need (runs inside the container); called via ENTRYPOINT in the Dockerfile
at runtime on the control node.
fuel/build/f_odl_docker/dockerfile/start_odl_docker.sh
- This is the CMD/ENTRYPOINT defined in docker and is what is called from the controller when you start the container (note: this runs inside the container, not
to be confused with starting the actual container on the control node). This script fires up Karaf for the first time, loads the DLUX and OVSDB modules,
and monitors that the container is up. The container itself is started on the control node via /opt/opnfv/start_odl_container.sh, which includes the syntax
for the port mapping (RANDOM or 1:1).
TODO -
integrate into controller monitor script to ensure better handling (stop, start, monitor) of docker processes
remove expect helper scripts and replace with API/JSON or some other appropriate method to KARAF
fuel/build/f_odl_docker/puppet/modules/opnfv/manifests/odl_doc.pp
- This puppet manifest defines where the docker binary and docker image should be placed on the target control node. /opt/opnfv/start_odl_container.sh will install
the docker binary package (if necessary), load the ODL docker image into docker, and start the image. This file just ensures placement in /opt/opnfv/odl_docker
fuel/build/f_opnfv_puppet/puppet/modules/opnfv/manifests/init.pp (MODIFICATION):
- Removed previous includes and updated to have only f_odl_doc added
fuel/build/f_odl_doc/scripts/start_odl_container.sh
- this is the control script that starts the docker container (to be run on the control node); it is deployed
to /opt/opnfv on the control node via the odl_doc puppet manifest file.
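The RANDOM vs 1:1 port mapping mentioned for /opt/opnfv/start_odl_container.sh could be built along these lines. This is a hypothetical helper, and the port numbers (8181 for DLUX, 6640 for OVSDB) are illustrative assumptions; the real script's flags may differ:

```shell
#!/bin/sh
# Hypothetical sketch of the two port-mapping modes.
port_args() {
  mode=$1; shift
  if [ "$mode" = "RANDOM" ]; then
    echo "-P"                   # let docker pick random host ports
  else
    args=""
    for p in "$@"; do
      args="$args -p $p:$p"     # 1:1 host:container mapping
    done
    echo "${args# }"
  fi
}
port_args 1:1 8181 6640
```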
JIRA:
DEPRECATED:
Deleted files are no longer needed due to new implementation of ODL.
Change-Id: I26c13cc468a2aba18af78b7a3c78a719033f03e0
Signed-off-by: Daniel Smith <daniel.smith@ericsson.com>
|
|
OPNFV installation.
The intent of this commit is to be a common place to contain a
common set of puppet modules that installers should leverage when
installing/configuring an OPNFV target system.
Change-Id: I3a694b05a35a6e6025489b74c7bb38256dd84f12
Signed-off-by: Tim Rozet <trozet@redhat.com>
|