Scenarios are implemented as deployable compositions through integration with an installation tool.
OPNFV supports multiple installation tools and for any given release not all tools will support all
scenarios. While our target is to establish parity across the installation tools to ensure they
can provide all scenarios, the practical challenge of achieving that goal for any given feature and
release results in some disparity.

Brahmaputra scenario overview
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The following table provides an overview of the installation tools and the scenarios available
in the Brahmaputra release of OPNFV.

.. image:: ../images/brahmaputrascenariomatrix.jpg
   :alt: OPNFV Brahmaputra Scenario Matrix

Scenario status is indicated by a weather pattern icon. All scenarios listed with
a weather pattern icon can be deployed and run in your environment or a Pharos lab;
however, they may have known limitations or issues, as indicated by the icon.

Weather pattern icon legend:

+---------------------------------------------+----------------------------------------------------------+
| Weather Icon                                | Scenario Status                                          |
+=============================================+==========================================================+
| .. image:: ../images/weather-clear.jpg      | Stable, no known issues                                  |
+---------------------------------------------+----------------------------------------------------------+
| .. image:: ../images/weather-few-clouds.jpg | Stable, documented limitations                           |
+---------------------------------------------+----------------------------------------------------------+
| .. image:: ../images/weather-overcast.jpg   | Deployable, stability or feature limitations             |
+---------------------------------------------+----------------------------------------------------------+
| .. image:: ../images/weather-dash.jpg       | Not deployed with this installer                         |
+---------------------------------------------+----------------------------------------------------------+

Scenarios that have not yet reached the status of "Stable, no known issues" will continue to be
stabilised, and updates will be made on the stable/brahmaputra branch. While we intend all
Brahmaputra scenarios to become stable, it is worth checking regularly for the current status.
Due to our dependency on upstream communities and code, some issues may not be resolved prior to
the C release.

Scenario Naming
^^^^^^^^^^^^^^^

In OPNFV, scenarios are identified by short scenario names. These names follow a scheme that
identifies the key components and behaviours of the scenario. The rules for scenario naming
are as follows::

  os-[controller]-[feature]-[mode]-[option]

Details of the fields are as follows:
  * os: mandatory

    * Refers to the platform type used
    * possible value: os (OpenStack)

  * [controller]: mandatory

    * Refers to the SDN controller integrated in the platform
    * example values: nosdn, ocl, odl, onos

  * [feature]: mandatory

    * Refers to the feature projects supported by the scenario
    * example values: nofeature, kvm, ovs, sfc

  * [mode]: mandatory

    * Refers to the deployment type, which may include for instance high availability
    * possible values: ha, noha

  * [option]: optional

    * Used for scenarios that do not fit into the naming scheme.
    * The optional field should be omitted from the short scenario name when it is not needed.

Some examples of supported scenario names are:

  * os-nosdn-kvm-noha

    * This is an OpenStack based deployment using neutron, without high availability, and including the OPNFV enhanced KVM hypervisor

  * os-onos-nofeature-ha

    * This is an OpenStack deployment in high availability mode including ONOS as the SDN controller

  * os-odl_l2-sfc

    * This is an OpenStack deployment using OpenDaylight in Layer 2 mode and OVS, enabled with SFC features
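The naming scheme above can be illustrated with a short, hypothetical helper that splits a
scenario name into its fields. This is a sketch for explanatory purposes only; the function
name and the field labels are assumptions, not part of any OPNFV tooling:

```python
# Hypothetical helper illustrating the scenario naming scheme.
# Field labels here are assumptions for illustration only.
def parse_scenario(name):
    """Split a short scenario name into its component fields."""
    parts = name.split("-")
    if len(parts) < 3 or parts[0] != "os":
        raise ValueError("expected os-[controller]-[feature]-[mode]-[option]")
    # Pair each name component with its field; trailing fields
    # (mode, option) are simply absent when not present in the name.
    fields = ["platform", "controller", "feature", "mode", "option"]
    return dict(zip(fields, parts))

# Example: a high-availability ONOS scenario
print(parse_scenario("os-onos-nofeature-ha"))
```

Note that names such as ``os-odl_l2-sfc`` parse without a mode field, which is why the sketch
treats the trailing fields as optional.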

Installing your scenario
^^^^^^^^^^^^^^^^^^^^^^^^

There are two main methods of deploying your target scenario: you can follow this guide, which
will walk you through the process of deploying to your hardware using scripts or ISO images, or
you can set up a Jenkins slave and connect your infrastructure to the OPNFV Jenkins master.

For the purposes of evaluation and development, a number of Brahmaputra scenarios can be deployed
virtually to reduce the requirements on physical infrastructure. Details and instructions on
performing virtual deployments can be found in the installer-specific installation instructions.

To set up a Jenkins slave for automated deployment to your lab, refer to the
`Jenkins slave connect guide <http://artifacts.opnfv.org/brahmaputra.1.0/docs/opnfv-jenkins-slave-connection.brahmaputra.1.0.html>`_.