##############################################################################
# Copyright (c) 2017 Ericsson AB and others.
#
# All rights reserved. This program and the accompanying materials
# are made available under the terms of the Apache License, Version 2.0
# which accompanies this distribution, and is available at
# http://www.apache.org/licenses/LICENSE-2.0
##############################################################################
---
# Sample benchmark task config file
# Measure network latency using ping, destination is an external server
# Make sure the servers have internet access before running this test.
# For example, when using a virtual MOS deployment, run something like this on the host:
# sudo iptables -t nat -A POSTROUTING -s 172.16.0.0/24 \! -d 172.16.0.0/24 -j MASQUERADE
#
# This sample demonstrates the use of runner actions: hooks inserted at
# different points of the runner execution.
#

schema: "yardstick:task:0.1"

scenarios:
-
  type: Ping
  host: goofy.demo
  target: 8.8.8.8
  runner:
    type: Duration
    duration: 60
    interval: 1
    pre-start-action:
        command: "heat stack-show demo"
    periodic-action:
        interval: 10
        command: "ifconfig vboxnet1"
    single-shot-action:
        after: 30
        command: "nova show goofy.demo"
    post-stop-action:
        command: "nova list"
  sla:
    max_rtt: 10
    action: monitor

context:
  name: demo
  image: yardstick-image
  flavor: yardstick-flavor
  user: ubuntu
  servers:
    goofy:
      floating_ip: true
  networks:
    test:
      cidr: '10.0.1.0/24'
* Consistent resource state awareness (compute); see `Doctor User Guide`_ for
  details.

Functest includes different test suites, each with several test cases. Some of
the tests are developed by Functest team members, whereas others are integrated
from upstream communities or other OPNFV projects. For example,
`Tempest <http://docs.openstack.org/developer/tempest/overview.html>`_ is the
OpenStack integration test suite, and Functest is in charge of the selection,
integration and automation of the tests that fit OPNFV. The Tempest suite has
been customized, but no new test cases have been created.

The results produced by the tests run from CI are pushed to and collected in a
NoSQL database. The goal is to populate the database with results from
different sources and scenarios and to show them on a Dashboard.

There is no real notion of test domain or test coverage. Basic components
(VIM, controllers) are tested through their own suites. Feature projects also
provide their own test suites, with different ways of running their tests.

The vIMS test case was integrated to demonstrate the capability to deploy a
relatively complex NFV scenario on top of the OPNFV infrastructure.

Functest considers OPNFV as a black box. Since the Brahmaputra release, OPNFV
offers many potential combinations:

* 2 controllers (OpenDaylight, ONOS)
* 4 installers (Apex, Compass, Fuel, Joid)

Most of the tests are runnable on any combination, but some might have
restrictions imposed by the installers or the available deployed features.

.. _`[2]`: http://docs.openstack.org/developer/tempest/overview.html
.. _`[3]`: https://rally.readthedocs.org/en/latest/index.html
.. _`Doctor User Guide`: http://artifacts.opnfv.org/opnfvdocs/brahmaputra/docs/userguide/featureusage-doctor.html
.. _`Promise User Guide`: http://artifacts.opnfv.org/promise/brahmaputra/docs/userguide/index.html
.. _`ONOSFW User Guide`: http://artifacts.opnfv.org/onosfw/brahmaputra/docs/userguide/index.html