-rw-r--r-- | ci/README.rst                    | 58  |
-rw-r--r-- | mcp/config/scenario/README.rst   | 1   |
-rw-r--r-- | mcp/patches/README.rst           | 150 |
-rw-r--r-- | prototypes/sfc_tacker/README.rst | 47  |
4 files changed, 158 insertions, 98 deletions
diff --git a/ci/README.rst b/ci/README.rst
index 9f1230dbd..dc860c003 100644
--- a/ci/README.rst
+++ b/ci/README.rst
@@ -13,7 +13,10 @@ OPNFV CI pipeline guideline:
 USAGE
 =====
 For usage information of the CI/CD scripts, please run:
-./deploy.sh -h
+
+    .. code-block:: bash
+
+        $ ./deploy.sh -h
 
 Details on the CI/CD deployment framework
 =========================================
@@ -41,8 +44,11 @@ Executing a deployment
 deploy.sh must be executed locally at the target lab/pod/jumpserver.
 A configuration structure must be provided - see the section below.
 It is straightforward to execute a deployment task - as an example:
-$ sudo deploy.sh -b file:///home/jenkins/config \
-    -l lf -p pod2 -s os-nosdn-nofeature-ha
+
+    .. code-block:: bash
+
+        $ sudo deploy.sh -b file:///home/jenkins/config \
+          -l lf -p pod2 -s os-nosdn-nofeature-ha
 
 -b and -i arguments should be expressed in URI style (e.g.: file://...
 or http://...). The resources can thus be local or remote.
@@ -65,30 +71,32 @@ A local stripped version of this configuration structure with virtual
 deployment configurations also exists under build/config/.
 The following configuration directory and file structure should be adhered to:
-TOP
-!
-+---- labs
-      !
-      +---- lab-name-1
-      !     !
-      !     +---- pod-name-1
-      !     !     !
-      !     !     +---- fuel
-      !     !           !
-      !     !           +---- config
-      !     !                 !
-      !     !                 +---- dea-pod-override.yaml
-      !     !                 !
-      !     !                 +---- dha.yaml
-      !     !
-      !     +---- pod-name-2
-      !     !
-      !
-      +---- lab-name-2
-      !     !
+    .. code-block:: bash
+
+        TOP
+        !
+        +---- labs
+              !
+              +---- lab-name-1
+              !     !
+              !     +---- pod-name-1
+              !     !     !
+              !     !     +---- fuel
+              !     !           !
+              !     !           +---- config
+              !     !                 !
+              !     !                 +---- dea-pod-override.yaml
+              !     !                 !
+              !     !                 +---- dha.yaml
+              !     !
+              !     +---- pod-name-2
+              !     !
+              !
+              +---- lab-name-2
+              !     !
 
 Creating a deployment scenario
 ------------------------------
-Please find deploy/scenario/README for instructions on how to create a new
+Please find `mcp/config/README.rst` for instructions on how to create a new
 deployment scenario.
diff --git a/mcp/config/scenario/README.rst b/mcp/config/scenario/README.rst
index e0f9848ea..389877ac4 100644
--- a/mcp/config/scenario/README.rst
+++ b/mcp/config/scenario/README.rst
@@ -9,6 +9,7 @@ Abstract:
 ---------
 This directory contains configuration files for different OPNFV deployment
 feature scenarios used by Fuel@OPNFV, e.g.:
+
 - High availability configuration;
 - Type of SDN controller to be deployed;
 - OPNFV collaboration project features to be deployed;
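Tying the above together, a minimal sketch of preparing such a configuration tree locally and pointing deploy.sh at it might look as follows (the lab/PoD names are the placeholders from the example tree above; substitute the real ones for your installation):

    .. code-block:: bash

        # Create the expected labs/<lab>/<pod>/fuel/config layout
        $ mkdir -p /home/jenkins/config/labs/lab-name-1/pod-name-1/fuel/config
        $ cd /home/jenkins/config/labs/lab-name-1/pod-name-1/fuel/config
        $ vi dea-pod-override.yaml dha.yaml    # fill in PoD-specific settings

        # Point deploy.sh at the structure via a file:// URI and select
        # lab (-l), PoD (-p) and deployment scenario (-s)
        $ sudo deploy.sh -b file:///home/jenkins/config \
                         -l lab-name-1 -p pod-name-1 -s os-nosdn-nofeature-ha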
diff --git a/mcp/patches/README.rst b/mcp/patches/README.rst
index 81717b946..735b70341 100644
--- a/mcp/patches/README.rst
+++ b/mcp/patches/README.rst
@@ -2,6 +2,7 @@
 .. SPDX-License-Identifier: CC-BY-4.0
 .. (c) 2017 Mirantis Inc., Enea AB and others.
 
+==========================================
 Fuel@OPNFV submodule fetching and patching
 ==========================================
 
@@ -10,98 +11,135 @@ working with upstream Fuel/MCP components (e.g.: reclass-system-salt-model) in
 developing/applying OPNFV patches (backports, custom fixes etc.).
 
 The scripts should be friendly to the following 2 use-cases:
+
 - development work: easily cloning, binding repos to specific commits,
   remote tracking, patch development etc.;
 - to provide parent build scripts an easy method of tracking upstream
   references and applying OPNFV patches on top;
 
 Also, we need to support at least the following modes of operation:
+
 - submodule bind - each submodule's patches will be based on the commit ID
   saved in the .gitmodules config file;
 - remote tracking - each submodule will sync with the upstream remote and
   patches will be applied on top of <sub_remote>/<sub_branch>/HEAD;
 
 Workflow (development)
-----------------------
+======================
+
 The standard development workflow should look as follows:
 
-1. Decide whether remote tracking should be active or not:
-   NOTE: Setting the following var to any non-empty str enables remote track.
-   NOTE: Leaving unset will enable remote track for anything but stable branch.
+Decide whether remote tracking should be active or not
+------------------------------------------------------
+
+NOTE: Setting the following var to any non-empty str enables remote track.
+
+NOTE: Leaving unset will enable remote track for anything but stable branch.
+
+    .. code-block:: bash
+
+        $ export FUEL_TRACK_REMOTES=""
+
+Initialize git submodules
+-------------------------
+
+All Fuel sub-projects are registered as submodules.
+If remote tracking is active, upstream remote is queried and latest remote
+branch HEAD is fetched. Otherwise, checkout commit IDs from .gitmodules.
-     $ export FUEL_TRACK_REMOTES=""
+    .. code-block:: bash
-2. All Fuel sub-projects are registered as submodules. To initialize them, call:
-   If remote tracking is active, upstream remote is queried and latest remote
-   branch HEAD is fetched. Otherwise, checkout commit IDs from .gitmodules.
+        $ make sub
-     $ make sub
+Apply patches from `patches/<sub-project>/*` to respective submodules
+---------------------------------------------------------------------
-3. Apply patches from `patches/<sub-project>/*` to respective submodules via:
+This will result in creation of:
-     $ make patches-import
+- a tag called `${FUEL_MAIN_TAG}-opnfv-root` at the same commit as Fuel@OPNFV
+  upstream reference (bound to git submodule OR tracking remote HEAD);
+- a new branch `opnfv-fuel` which will hold all the OPNFV patches,
+  each patch is applied on this new branch with `git-am`;
+- a tag called `${FUEL_MAIN_TAG}-opnfv` at `opnfv-fuel/HEAD`;
-     This will result in creation of:
-     - a tag called `${FUEL_MAIN_TAG}-opnfv-root` at the same commit as Fuel@OPNFV
-       upstream reference (bound to git submodule OR tracking remote HEAD);
-     - a new branch `opnfv-fuel` which will hold all the OPNFV patches,
-       each patch is applied on this new branch with `git-am`;
-     - a tag called `${FUEL_MAIN_TAG}-opnfv` at `opnfv-fuel/HEAD`;
+    .. code-block:: bash
-4. Modify sub-projects for whatever you need.
-   Commit your changes when you want them taken into account in the build.
+        $ make patches-import
-5. Re-create patches via:
+Modify sub-projects for whatever you need
+-----------------------------------------
-     $ make patches-export
+Commit your changes when you want them taken into account in the build.
-     Each commit on `opnfv-fuel` branch of each subproject will be
-     exported to `patches/subproject/` via `git format-patch`.
+Re-create patches
+-----------------
-     NOTE: Only commit (-f) submodules when you need to bump upstream ref.
-     NOTE: DO NOT commit patched submodules!
+Each commit on `opnfv-fuel` branch of each subproject will be
+exported to `patches/subproject/` via `git format-patch`.
-6. Clean workbench branches and tags with:
+NOTE: Only commit (-f) submodules when you need to bump upstream ref.
-     $ make clean
+NOTE: DO NOT commit patched submodules!
-7. De-initialize submodules and force a clean clone with:
+    .. code-block:: bash
-     $ make deepclean
+        $ make patches-export
+
+Clean workbench branches and tags
+---------------------------------
+
+    .. code-block:: bash
+
+        $ make clean
+
+De-initialize submodules and force a clean clone
+------------------------------------------------
+
+    .. code-block:: bash
+
+        $ make deepclean
 
 Sub-project maintenance
-----------------------
-1. Adding a new submodule
-   If you need to add another subproject, you can do it with `git submodule`.
-   Make sure that you specify branch (with `-b`), short name (with `--name`):
+=======================
+
+Adding a new submodule
+----------------------
+
+If you need to add another subproject, you can do it with `git submodule`.
+Make sure that you specify branch (with `-b`), short name (with `--name`):
+
+    .. code-block:: bash
+
+        $ git submodule -b master add --name reclass-system-salt-model \
+          https://github.com/Mirantis/reclass-system-salt-model \
+          relative/path/to/submodule
-     $ git submodule -b master add --name reclass-system-salt-model \
-       https://github.com/Mirantis/reclass-system-salt-model \
-       relative/path/to/submodule
+Working with remote tracking for upgrading Fuel components
+----------------------------------------------------------
-2. Working with remote tracking for upgrading Fuel components
-   Enable remote tracking as described above, which at `make sub` will update
-   ALL submodules (e.g. reclass-system-salt-model) to remote branch (set in
-   .gitmodules) HEAD.
+Enable remote tracking as described above, which at `make sub` will update
+ALL submodules (e.g. reclass-system-salt-model) to remote branch (set in
+.gitmodules) HEAD.
-  * If upstream has NOT already tagged a new version, we can still work on
-    our patches, make sure they apply etc., then check for new upstream
-    changes (and that our patches still apply on top of them) by:
+* If upstream has NOT already tagged a new version, we can still work on
+  our patches, make sure they apply etc., then check for new upstream
+  changes (and that our patches still apply on top of them) by:
-      $ make deepclean patches-import
+* If upstream has already tagged a new version we want to pick up, checkout
+  the new tag in each submodule:
-  * If upstream has already tagged a new version we want to pick up, checkout
-    the new tag in each submodule:
+* Once satisfied with the patch and submodule changes, commit them:
-      $ git submodule foreach 'git checkout <newtag>'
+  - enforce FUEL_TRACK_REMOTES to "yes" if you want to constantly use the
+    latest remote branch HEAD (as soon as upstream pushes a change on that
+    branch, our next build will automatically include it - risk of our
+    patches colliding with new upstream changes);
+  - stage patch changes if any;
+  - if submodule tags have been updated (relevant when remote tracking is
+    disabled, i.e. we have a stable upstream baseline), add submodules;
-  * Once satisfied with the patch and submodule changes, commit them:
-    - enforce FUEL_TRACK_REMOTES to "yes" if you want to constatly use the
-      latest remote branch HEAD (as soon as upstream pushes a change on that
-      branch, our next build will automatically include it - risk of our
-      patches colliding with new upstream changes);
-    - stage patch changes if any;
-    - if submodule tags have been updated (relevant when remote tracking is
-      disabled, i.e. we have a stable upstream baseline), add submodules:
+    .. code-block:: bash
-      $ make deepclean sub && git add -f relative/path/to/submodule
+        $ make deepclean patches-import
+        $ git submodule foreach 'git checkout <newtag>'
+        $ make deepclean sub && git add -f relative/path/to/submodule
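Read end to end, the sections above describe a single development loop. A condensed, illustrative sketch using only the make targets and paths introduced in this README (the commit message is a placeholder):

    .. code-block:: bash

        # Stable baseline: bind submodules to the commit IDs in .gitmodules
        $ export FUEL_TRACK_REMOTES=""
        $ make deepclean sub

        # Apply the existing OPNFV patches on the opnfv-fuel branch
        $ make patches-import

        # Hack on a sub-project, then commit the change on opnfv-fuel
        $ cd relative/path/to/submodule
        $ git commit -as -m "Describe the OPNFV-specific change"  # placeholder
        $ cd -

        # Regenerate patches/<sub-project>/* from the commits on opnfv-fuel
        $ make patches-export

        # Drop workbench branches and tags when done
        $ make clean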
diff --git a/prototypes/sfc_tacker/README.rst b/prototypes/sfc_tacker/README.rst
index 27574a66f..e219abd34 100644
--- a/prototypes/sfc_tacker/README.rst
+++ b/prototypes/sfc_tacker/README.rst
@@ -5,10 +5,11 @@ README SFC + Tacker
 ===================
- The Enclosed shell script builds, deploys, orchestrates Tacker,
+The enclosed shell script builds, deploys and orchestrates Tacker,
 an Open NFV Orchestrator with in-built general purpose VNF Manager
 to deploy and operate Virtual Network Functions (VNFs).
- The provided deployment tool is experimental, not fault
+
+The provided deployment tool is experimental, not fault
 tolerant but as idempotent as possible. To use the provided
 shell script for provision/deployment, transfer the script to the
 OpenStack primary controller node, where your deployed OpenDaylight SDN
@@ -16,20 +17,28 @@ controller runs. The deployment tool (poc.tacker-up.sh), expects that
 your primary controller reaches all your OPNFV/Fuel cluster nodes and
 has internet connection either directly or via an http proxy; note
 that a working and consistent DNS name resolution is a must.
- Theory of operation: the deployment tool downloads the source
+
+Theory of operation: the deployment tool downloads the source
 python packages from GitHub and a json rpc library developed by
 Josh Marshall. Besides these sources, it downloads software for
 python/debian software release. When the build succeeds, the script
 deploys the software components to the OPNFV cluster nodes. Finally,
 it orchestrates the deployed tacker binaries as an infrastructure service.
 The Tacker has two components:
-o Tacker server - what interacts with Openstack and OpenDayLight.
-o Tacker client - a command line software talks with the server,
-                  available on all cluster nodes and the access point
-                  to the Tacker service. Note that the tacker
-                  distribution provides a a plugin to the Horizon
-                  OpenStack Gui, but thus Horizon plugin is out of the
-                  scope of this Proof of Concept setup/deployment.
+
+#. Tacker server
+
+   - what interacts with OpenStack and OpenDaylight.
+
+#. Tacker client
+
+   - a command line software that talks with the server,
+     available on all cluster nodes and the access point
+     to the Tacker service. Note that the tacker
+     distribution provides a plugin to the Horizon
+     OpenStack GUI, but this Horizon plugin is out of the
+     scope of this Proof of Concept setup/deployment.
+
 As mentioned, this compilation contains an OpenDaylight SDN
 controller with Service Function Chaining and Group based Policy
 features enabled.
@@ -37,13 +46,17 @@ To access your cluster information, ssh to the fuel master
 (10.20.0.2) and issue the command: fuel node.
 Here is an output of an example deployment:
-id | status | name             | cluster | ip        | mac               | roles                            | pending_roles | online | group_id
----|--------|------------------|---------|-----------|-------------------|----------------------------------|---------------|--------|---------
-3  | ready  | Untitled (a2:4c) | 1       | 10.20.0.5 | 52:54:00:d3:a2:4c | compute                          |               | True   | 1
-4  | ready  | Untitled (c7:d8) | 1       | 10.20.0.3 | 52:54:00:00:c7:d8 | cinder, controller, opendaylight |               | True   | 1
-1  | ready  | Untitled (cc:51) | 1       | 10.20.0.6 | 52:54:00:1e:cc:51 | compute                          |               | True   | 1
-2  | ready  | Untitled (e6:3e) | 1       | 10.20.0.4 | 52:54:00:0c:e6:3e | compute                          |               | True   | 1
-[root@fuel-sfc-virt ~]#
++--------+------------+------------------+-------------+-----------+-------------------+----------------------------------+-------------------+------------+--------------+
+| **id** | **status** | **name**         | **cluster** | **ip**    | **mac**           | **roles**                        | **pending_roles** | **online** | **group_id** |
++--------+------------+------------------+-------------+-----------+-------------------+----------------------------------+-------------------+------------+--------------+
+| 1      | ready      | Untitled (cc:51) | 1           | 10.20.0.6 | 52:54:00:1e:cc:51 | compute                          |                   | True       | 1            |
++--------+------------+------------------+-------------+-----------+-------------------+----------------------------------+-------------------+------------+--------------+
+| 2      | ready      | Untitled (e6:3e) | 1           | 10.20.0.4 | 52:54:00:0c:e6:3e | compute                          |                   | True       | 1            |
++--------+------------+------------------+-------------+-----------+-------------------+----------------------------------+-------------------+------------+--------------+
+| 3      | ready      | Untitled (a2:4c) | 1           | 10.20.0.5 | 52:54:00:d3:a2:4c | compute                          |                   | True       | 1            |
++--------+------------+------------------+-------------+-----------+-------------------+----------------------------------+-------------------+------------+--------------+
+| 4      | ready      | Untitled (c7:d8) | 1           | 10.20.0.3 | 52:54:00:00:c7:d8 | cinder, controller, opendaylight |                   | True       | 1            |
++--------+------------+------------------+-------------+-----------+-------------------+----------------------------------+-------------------+------------+--------------+
 
 As you can see, in this case the poc.tacker-up.sh script should be
 transferred to and run on the node having IP address 10.20.0.3
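As a rough illustration of that last step (the destination path and the proxy URL below are examples only; adjust them to your environment), the script could be transferred and started like this:

    .. code-block:: bash

        # Copy the deployment tool to the primary controller (10.20.0.3 above)
        $ scp poc.tacker-up.sh root@10.20.0.3:/root/

        # Run it on the controller; export an http proxy first if the node
        # has no direct internet access
        $ ssh root@10.20.0.3 'export http_proxy=http://proxy.example.com:8080; bash /root/poc.tacker-up.sh'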