From ccae9496c217020455acfe337aaf2b2f0c5644d8 Mon Sep 17 00:00:00 2001
From: Luc Provoost
Date: Sun, 30 Jun 2019 09:46:10 +0200
Subject: Multiple changes for June release

- Changed inittest into warmuptest. It now takes a warmuptime, warmupspeed,
  packet size and flow size.
- Change in centos.json: it now also copies deploycentostools.sh during image
  building so that PROX can be re-compiled in the VM by typing
  "./deploycentostools.sh compile".
- runrapid.py parameters that take a file name now need complete file names:
  the scripts no longer append extensions.
- Changes in createrapid.py to handle the OpenStack CLI output in a simpler
  way.
- The management interface of the VMs can now also be an SRIOV interface.
  There is now an extra optional parameter in the VM sections of the
  rapidVMs.vms file: SRIOV_mgmt_port.
- Changed the name of some sections and keys in the environment file since
  runrapid.py will not always communicate with OpenStack VMs; these could be
  containers or any other (virtual) machines.
- The previous MachineMap.cfg has been renamed to machine.map.
- A new test has been added: monitorswap just shows the statistics of a swap
  (virtual) machine without generating any packets. This is useful in
  situations where an external tester is used.
- Latency and core statistics can now be measured even if multiple PROX cores
  and tasks are running. A new parameter has been added to the test files with
  the following default value: tasks=[0]. During statistics collection, all
  tasks in this list are queried. It is OK to include a non-existing task in
  such a query since it will be ignored.
- A --screenlog parameter was added to runrapid.py, allowing more detailed
  output on the screen during debugging, so there is no need to check the log
  file.
- The previous tests that ran multiple packet sizes with a given flow size,
  and multiple flow sizes with a given packet size, are now combined by
  specifying two lists: packetsizes & flows (see the sketch below).
- The screen output of this test has also been reworked with more meaningful
  column names, and the test result is now in the field "core received". This
  also allows faster termination of the test: when all packets sent by the Gen
  NIC are received by the cores within the packet loss and latency thresholds,
  the test is stopped, even if we requested more packets to be sent.
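As an illustration only (not part of the patch itself), the new per-test keys
can be read with the same libraries the scripts already use (ConfigParser and
ast, Python 2 as in runrapid.py); the helper name below is hypothetical and the
values shown correspond to the bare.test file added in this change:

    # Minimal sketch, assuming Python 2; read_flowsizetest_params is a
    # hypothetical helper, not a function from this patch.
    import ast
    import ConfigParser

    def read_flowsizetest_params(test_file, section='test2'):
        cfg = ConfigParser.RawConfigParser()
        cfg.read(test_file)
        # tasks=[0] lives in [DEFAULT], so it is visible from every [testN] section
        tasks = ast.literal_eval(cfg.get(section, 'tasks'))
        packetsizes = ast.literal_eval(cfg.get(section, 'packetsizes'))
        flows = ast.literal_eval(cfg.get(section, 'flows'))
        return tasks, packetsizes, flows

    # For the bare.test shipped in this change,
    # read_flowsizetest_params('bare.test') returns ([0], [64, 128], [1, 512]).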
Change-Id: I3307e7a972f2140e739f376f146fe875df0303e6 Signed-off-by: Luc Provoost --- VNFs/DPPD-PROX/helper-scripts/rapid/MachineMap.cfg | 30 - VNFs/DPPD-PROX/helper-scripts/rapid/README | 53 +- VNFs/DPPD-PROX/helper-scripts/rapid/bare.test | 56 ++ .../DPPD-PROX/helper-scripts/rapid/basicrapid.test | 31 +- VNFs/DPPD-PROX/helper-scripts/rapid/centos.json | 24 +- .../rapid/check-prox-system-setup.service | 1 + .../rapid/check_prox_system_setup.sh | 13 +- VNFs/DPPD-PROX/helper-scripts/rapid/createrapid.py | 279 ++++--- .../DPPD-PROX/helper-scripts/rapid/deploycentos.sh | 139 ---- .../helper-scripts/rapid/deploycentostools.sh | 148 ++++ VNFs/DPPD-PROX/helper-scripts/rapid/gen.cfg | 4 +- VNFs/DPPD-PROX/helper-scripts/rapid/gen_gw.cfg | 4 +- VNFs/DPPD-PROX/helper-scripts/rapid/impair.test | 9 +- VNFs/DPPD-PROX/helper-scripts/rapid/irq.test | 1 + .../helper-scripts/rapid/l2framerate.test | 11 +- VNFs/DPPD-PROX/helper-scripts/rapid/l2gen.cfg | 4 +- VNFs/DPPD-PROX/helper-scripts/rapid/l2gen_bare.cfg | 59 ++ .../DPPD-PROX/helper-scripts/rapid/l2zeroloss.test | 28 +- .../helper-scripts/rapid/l3framerate.test | 13 +- VNFs/DPPD-PROX/helper-scripts/rapid/machine.map | 30 + .../helper-scripts/rapid/monitorswap.test | 31 + VNFs/DPPD-PROX/helper-scripts/rapid/prox_ctrl.py | 50 +- VNFs/DPPD-PROX/helper-scripts/rapid/rapidVMs.vms | 3 +- VNFs/DPPD-PROX/helper-scripts/rapid/runrapid.py | 837 ++++++++++----------- VNFs/DPPD-PROX/helper-scripts/rapid/secgw.test | 16 +- .../DPPD-PROX/helper-scripts/rapid/sharkproxlog.sh | 1 + VNFs/DPPD-PROX/helper-scripts/rapid/swap.cfg | 1 + 27 files changed, 1012 insertions(+), 864 deletions(-) delete mode 100644 VNFs/DPPD-PROX/helper-scripts/rapid/MachineMap.cfg create mode 100644 VNFs/DPPD-PROX/helper-scripts/rapid/bare.test delete mode 100644 VNFs/DPPD-PROX/helper-scripts/rapid/deploycentos.sh create mode 100644 VNFs/DPPD-PROX/helper-scripts/rapid/deploycentostools.sh create mode 100644 VNFs/DPPD-PROX/helper-scripts/rapid/l2gen_bare.cfg create mode 100644 VNFs/DPPD-PROX/helper-scripts/rapid/machine.map create mode 100644 VNFs/DPPD-PROX/helper-scripts/rapid/monitorswap.test create mode 100755 VNFs/DPPD-PROX/helper-scripts/rapid/sharkproxlog.sh diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/MachineMap.cfg b/VNFs/DPPD-PROX/helper-scripts/rapid/MachineMap.cfg deleted file mode 100644 index b6e199d7..00000000 --- a/VNFs/DPPD-PROX/helper-scripts/rapid/MachineMap.cfg +++ /dev/null @@ -1,30 +0,0 @@ -## -## Copyright (c) 2010-2018 Intel Corporation -## -## Licensed under the Apache License, Version 2.0 (the "License"); -## you may not use this file except in compliance with the License. -## You may obtain a copy of the License at -## -## http://www.apache.org/licenses/LICENSE-2.0 -## -## Unless required by applicable law or agreed to in writing, software -## distributed under the License is distributed on an "AS IS" BASIS, -## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -## See the License for the specific language governing permissions and -## limitations under the License. -## -## This file contains the mapping for each test machine. 
The test machine will -## be deployed on a machine defined in the *.env file, as defined by the -## machine_index - -[DEFAULT] -machine_index=0 - -[TestM1] -machine_index=1 - -[TestM2] -machine_index=2 - -[TestM3] -machine_index=3 diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/README b/VNFs/DPPD-PROX/helper-scripts/rapid/README index 43243a6c..cb3a4fd8 100644 --- a/VNFs/DPPD-PROX/helper-scripts/rapid/README +++ b/VNFs/DPPD-PROX/helper-scripts/rapid/README @@ -18,21 +18,21 @@ rapid (Rapid Automated Performance Indication for Dataplane) ************************************************************ rapid is a set of files offering an easy way to do a sanity check of the -dataplane performance of an OpenStack environment. +dataplane performance of an OpenStack or container environment. -Copy the files in a directory on a machine that can run the OpenStack CLI -commands and that can reach the OpenStack networks to connect to the VMs. +In case of OpenStack, copy the files in a directory on a machine that can run the OpenStack CLI +commands and that can reach the networks to connect to the VMs. You will need an image that has the PROX tool installed. A good way to do this is to use the packer tool to build an image for a target of your choice. -You can also build this image manually by executing all the commands described in the deploycentos.sh. +You can also build this image manually by executing all the commands described in the deploycentostools.sh. The default name of the qcow2 file is rapidVM.qcow2 When using the packer tool, the first step is to upload an existing CentOS cloud image from the internet into OpenStack. Check out: https://cloud.centos.org/centos/7/images/ You should now source the proper .rc file so Packer can connect to your OpenStack. -There are 2 files: centos.json and deploycentos.sh, allowing you to create +There are 2 files: centos.json and deploycentostools.sh, allowing you to create an image automatically. Run # packer build centos.json Edit centos.json to reflect the settings of your environment: The following fields need to @@ -46,7 +46,7 @@ be the ID's of your system: - "security_groups": ID of the security group being used Note that this procedure is not only installing the necessary tools to run PROX, -but also does some system optimizations (tuned). Check deploycentos.sh for more details. +but also does some system optimizations (tuned). Check deploycentostools.sh for more details. Now you can run the createrapid.py file. Use help for more info on the usage: # ./createrapid.py --help @@ -106,44 +106,43 @@ different way (not using the createrapid.py). This can be useful in case you are not using OpenStack as a VIM or when using special configurations that cannot be achieved using createrapid.py. 
Fields needed for runrapid are: * all info in the [Mx] sections -* the key information in the [OpenStack] section +* the key information in the [ssh] section * the total_number_of_vms information in the [rapid] section -[DEFAULT] -admin_ip = none +[rapid] +loglevel = DEBUG +version = 19.6.30 +total_number_of_machines = 3 [M1] name = rapid-VM1 -admin_ip = 10.25.1.116 -dp_ip = 10.10.10.7 -dp_mac = fa:16:3e:59:b8:28 +admin_ip = 10.25.1.109 +dp_ip = 10.10.10.4 +dp_mac = fa:16:3e:25:be:25 [M2] name = rapid-VM2 -admin_ip = 10.25.1.126 -dp_ip = 10.10.10.11 -dp_mac = fa:16:3e:c9:54:c7 +admin_ip = 10.25.1.110 +dp_ip = 10.10.10.7 +dp_mac = fa:16:3e:72:bf:e8 [M3] name = rapid-VM3 -admin_ip = 10.25.1.108 +admin_ip = 10.25.1.125 dp_ip = 10.10.10.15 -dp_mac = fa:16:3e:72:90:3e +dp_mac = fa:16:3e:69:f3:e7 -[OpenStack] -stack = rapid -vms = rapidVMs +[ssh] key = prox + +[Varia] +vim = OpenStack +stack = rapid +vms = rapidVMs.vms image = rapidVM image_file = rapidVM.qcow2 dataplane_network = dataplane-network subnet = dpdk-subnet subnet_cidr = 10.10.10.0/24 internal_network = admin_internal_net -floating_network = floating-ip-net - -[rapid] -loglevel = DEBUG -version = 19.4.15 -total_number_of_machines = 3 - +floating_network = admin_floating_net diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/bare.test b/VNFs/DPPD-PROX/helper-scripts/rapid/bare.test new file mode 100644 index 00000000..e686e15e --- /dev/null +++ b/VNFs/DPPD-PROX/helper-scripts/rapid/bare.test @@ -0,0 +1,56 @@ +## +## Copyright (c) 2010-2018 Intel Corporation +## +## Licensed under the Apache License, Version 2.0 (the "License"); +## you may not use this file except in compliance with the License. +## You may obtain a copy of the License at +## +## http://www.apache.org/licenses/LICENSE-2.0 +## +## Unless required by applicable law or agreed to in writing, software +## distributed under the License is distributed on an "AS IS" BASIS, +## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +## See the License for the specific language governing permissions and +## limitations under the License. 
+## + +[DEFAULT] +name = BareTesting +number_of_tests = 2 +total_number_of_test_machines = 2 +prox_socket = true +prox_launch_exit = true +tasks=[0] + +[TestM1] +name = Generator +config_file = l2gen_bare.cfg +dest_vm = 2 +gencores = [1] +latcores = [3] + +[TestM2] +name = Swap +config_file = l2swap.cfg +swapcores = [1] + +[BinarySearchParams] +drop_rate_threshold = 0 +lat_avg_threshold = 500 +lat_max_threshold = 1000 +accuracy = 0.1 +startspeed = 10 + +[test1] +test=warmuptest +flowsize=1024 +packetsize=64 +warmupspeed=10 +warmuptime=2 + +[test2] +test=flowsizetest +packetsizes=[64,128] +# the number of flows in the list need to be powers of 2, max 2^20 +# # Select from following numbers: 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768, 65535, 131072, 262144, 524280, 1048576 +flows=[1,512] diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/basicrapid.test b/VNFs/DPPD-PROX/helper-scripts/rapid/basicrapid.test index b2f8f230..4bdfdda4 100644 --- a/VNFs/DPPD-PROX/helper-scripts/rapid/basicrapid.test +++ b/VNFs/DPPD-PROX/helper-scripts/rapid/basicrapid.test @@ -16,10 +16,11 @@ [DEFAULT] name = BasicSwapTesting -number_of_tests = 4 +number_of_tests = 2 total_number_of_test_machines = 2 prox_socket = true prox_launch_exit = true +tasks=[0] [TestM1] name = Generator @@ -27,7 +28,6 @@ config_file = gen.cfg dest_vm = 2 gencores = [1] latcores = [3] -startspeed = 10 [TestM2] name = Swap @@ -36,25 +36,22 @@ swapcores = [1] [BinarySearchParams] drop_rate_threshold = 0.1 -lat_avg_threshold = 500 -lat_max_threshold = 1000 +lat_avg_threshold = 400 +lat_max_threshold = 800 accuracy = 0.1 +startspeed = 10 [test1] -test=inittest - -[test2] -test=speedtest +test=warmuptest +flowsize=1024 packetsize=64 +warmupspeed=1 +warmuptime=2 -[test3] -test=sizetest -flow=1 -packetsizes=[64,256,1024] - -[test4] -test=flowtest -packetsize=64 +[test2] +test=flowsizetest +packetsizes=[64,128] # the number of flows in the list need to be powers of 2, max 2^20 # Select from following numbers: 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768, 65535, 131072, 262144, 524280, 1048576 -flows=[1,512,1024] +flows=[1,512] + diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/centos.json b/VNFs/DPPD-PROX/helper-scripts/rapid/centos.json index 237a6483..3754ea09 100644 --- a/VNFs/DPPD-PROX/helper-scripts/rapid/centos.json +++ b/VNFs/DPPD-PROX/helper-scripts/rapid/centos.json @@ -6,12 +6,14 @@ "type": "openstack", "ssh_username": "centos", "image_name": "rapidVM", -"source_image": "06101dd4-f162-49c0-a072-2fe32ac446a9", -"flavor": "3ea20ea9-855c-4a6e-b454-63ad6e8e0db9", -"networks": "451a030a-eb1e-4c74-85b8-782ef8a6ad38", +"source_image": "09ada6c0-21aa-49aa-ad7b-d364f279ee92", +"flavor": "2d198ea9-7cbc-4211-9983-531493b07a96", +"networks": "2d2bb4ec-58ae-47a4-8af9-f58f14533337", "use_floating_ip": true, -"floating_ip_pool": "d629619f-8f96-4f32-9b90-1b62e5ea3809", -"security_groups": "460b7929-c6de-4b1c-ae83-901c2042a894" +"floating_ip_pool": "ff966059-9dd2-4ed1-ad9a-5fae516eb0fa", +"security_groups": "1da93e77-29c2-42d7-b611-f2ae094aa8df", +"ssh_timeout":"1000s", +"ssh_pty":"true" } ], "provisioners": [ @@ -26,8 +28,18 @@ "destination": "/home/centos/" }, { + "type": "file", + "source": "./sharkproxlog.sh", + "destination": "/home/centos/" + }, + { + "type": "file", + "source": "./deploycentostools.sh", + "destination": "/home/centos/" + }, + { "type": "shell", - "script": "deploycentos.sh" + "script": "deploycentostools.sh" } ] } diff --git 
a/VNFs/DPPD-PROX/helper-scripts/rapid/check-prox-system-setup.service b/VNFs/DPPD-PROX/helper-scripts/rapid/check-prox-system-setup.service index a55e0c08..6339d3ea 100644 --- a/VNFs/DPPD-PROX/helper-scripts/rapid/check-prox-system-setup.service +++ b/VNFs/DPPD-PROX/helper-scripts/rapid/check-prox-system-setup.service @@ -9,3 +9,4 @@ ExecStart=/usr/local/libexec/check_prox_system_setup.sh [Install] WantedBy=multi-user.target + diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/check_prox_system_setup.sh b/VNFs/DPPD-PROX/helper-scripts/rapid/check_prox_system_setup.sh index 48999510..9effa53c 100755 --- a/VNFs/DPPD-PROX/helper-scripts/rapid/check_prox_system_setup.sh +++ b/VNFs/DPPD-PROX/helper-scripts/rapid/check_prox_system_setup.sh @@ -26,16 +26,16 @@ then case $line in isolated_cores=1-$MAXCOREID*) echo "Isolated CPU(s) OK, no reboot: $line">>$logfile - modprobe uio - insmod /root/dpdk/build/kmod/igb_uio.ko + modprobe uio + insmod /root/dpdk/build/kmod/igb_uio.ko exit 0 ;; isolated_cores=*) echo "Isolated CPU(s) NOK, change the config and reboot: $line">>$logfile sed -i "/^isolated_cores=.*/c\isolated_cores=1-$MAXCOREID" $filename - tuned-adm profile realtime-virtual-guest + tuned-adm profile realtime-virtual-guest reboot - exit 0 + exit 0 ;; *) echo "$line" @@ -43,9 +43,10 @@ then esac done < "$filename" echo "isolated_cores=1-$MAXCOREID" >> $filename - echo "No Isolated CPU(s) defined in config, line added: $line">>$logfile - tuned-adm profile realtime-virtual-guest + echo "No Isolated CPU(s) defined in config, line added: $line">>$logfile + tuned-adm profile realtime-virtual-guest reboot else echo "$filename not found.">>$logfile fi + diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/createrapid.py b/VNFs/DPPD-PROX/helper-scripts/rapid/createrapid.py index a1c1de60..3fbdc4c3 100755 --- a/VNFs/DPPD-PROX/helper-scripts/rapid/createrapid.py +++ b/VNFs/DPPD-PROX/helper-scripts/rapid/createrapid.py @@ -31,10 +31,10 @@ from logging import handlers from prox_ctrl import prox_ctrl import ConfigParser -version="19.4.15" +version="19.6.30" stack = "rapid" #Default string for stack. This is not an OpenStack Heat stack, just a group of VMs -vms = "rapidVMs" #Default string for vms file -key = "prox" # default name for kay +vms = "rapidVMs.vms" #Default string for vms file +key = "prox" # default name for key image = "rapidVM" # default name for the image image_file = "rapidVM.qcow2" dataplane_network = "dataplane-network" # default name for the dataplane network @@ -43,7 +43,6 @@ subnet_cidr="10.10.10.0/24" # cidr for dataplane internal_network="admin_internal_net" floating_network="admin_floating_net" loglevel="DEBUG" # sets log level for writing to file -runtime=10 # time in seconds for 1 test run def usage(): print("usage: createrapid [--version] [-v]") @@ -65,7 +64,7 @@ def usage(): print("optional arguments:") print(" -v, --version Show program's version number and exit") print(" --stack STACK_NAME Specify a name for the stack. Default is %s."%stack) - print(" --vms VMS_FILE Specify the vms file to be used. Default is %s.vms."%vms) + print(" --vms VMS_FILE Specify the vms file to be used. Default is %s."%vms) print(" --key KEY_NAME Specify the key to be used. Default is %s."%key) print(" --image IMAGE_NAME Specify the image to be used. Default is %s."%image) print(" --image_file IMAGE_FILE Specify the image qcow2 file to be used. 
Default is %s."%image_file) @@ -142,7 +141,7 @@ file_formatter = logging.Formatter("%(asctime)s - %(levelname)s - %(message)s") log = logging.getLogger() numeric_level = getattr(logging, loglevel.upper(), None) if not isinstance(numeric_level, int): - raise ValueError('Invalid log level: %s' % loglevel) + raise ValueError('Invalid log level: %s' % loglevel) log.setLevel(numeric_level) log.propagate = 0 @@ -173,58 +172,82 @@ needRoll = os.path.isfile(log_file) # This is a stale log, so roll it if needRoll: - # Add timestamp - log.debug('\n---------\nLog closed on %s.\n---------\n' % time.asctime()) - - # Roll over on application start - log.handlers[0].doRollover() + # Add timestamp + log.debug('\n---------\nLog closed on %s.\n---------\n' % time.asctime()) + # Roll over on application start + log.handlers[0].doRollover() # Add timestamp log.debug('\n---------\nLog started on %s.\n---------\n' % time.asctime()) log.debug("createrapid.py version: "+version) # Checking if the control network already exists, if not, stop the script -log.debug("Checking control plane network: "+internal_network) -cmd = 'openstack network show '+internal_network +log.debug("Checking control plane network: " + internal_network) +cmd = 'openstack network list -f value -c Name' log.debug (cmd) -cmd = cmd + ' |grep "status " | tr -s " " | cut -d" " -f 4' -NetworkExist = subprocess.check_output(cmd , shell=True).strip() -if NetworkExist == 'ACTIVE': - log.info("Control plane network ("+internal_network+") already active") +Networks = subprocess.check_output(cmd , shell=True).decode().strip() +if internal_network in Networks: + log.info("Control plane network (" + internal_network+") already active") else: log.exception("Control plane network " + internal_network + " not existing") raise Exception("Control plane network " + internal_network + " not existing") -# Checking if the floating ip network already exists, if not, stop the script -if floating_network <>'NO': - log.debug("Checking floating ip network: "+floating_network) - cmd = 'openstack network show '+floating_network - log.debug (cmd) - cmd = cmd + ' |grep "status " | tr -s " " | cut -d" " -f 4' - NetworkExist = subprocess.check_output(cmd , shell=True).strip() - if NetworkExist == 'ACTIVE': - log.info("Floating ip network ("+floating_network+") already active") +# Checking if the floating ip network should be used. 
If yes, check if it exists and stop the script if it doesn't +if floating_network !='NO': + log.debug("Checking floating ip network: " + floating_network) + if floating_network in Networks: + log.info("Floating ip network (" + floating_network + ") already active") else: log.exception("Floating ip network " + floating_network + " not existing") raise Exception("Floating ip network " + floating_network + " not existing") +# Checking if the dataplane network already exists, if not create it +log.debug("Checking dataplane network: " + dataplane_network) +if dataplane_network in Networks: + log.info("Dataplane network (" + dataplane_network + ") already active") +else: + log.info('Creating dataplane network ...') + cmd = 'openstack network create '+dataplane_network+' -f value -c status' + log.debug(cmd) + NetworkExist = subprocess.check_output(cmd , shell=True).decode().strip() + if 'ACTIVE' in NetworkExist: + log.info("Dataplane network created") + # Checking if the dataplane subnet already exists, if not create it + log.debug("Checking subnet: "+subnet) + cmd = 'openstack subnet list -f value -c Name' + log.debug (cmd) + Subnets = subprocess.check_output(cmd , shell=True).decode().strip() + if subnet in Subnets: + log.info("Subnet (" +subnet+ ") already exists") + else: + log.info('Creating subnet ...') + cmd = 'openstack subnet create --network ' + dataplane_network + ' --subnet-range ' + subnet_cidr +' --gateway none ' + subnet+' -f value -c name' + log.debug(cmd) + Subnets = subprocess.check_output(cmd , shell=True).decode().strip() + if subnet in Subnets: + log.info("Subnet created") + else : + log.exception("Failed to create subnet: " + subnet) + raise Exception("Failed to create subnet: " + subnet) + else : + log.exception("Failed to create dataplane network: " + dataplane_network) + raise Exception("Failed to create dataplane network: " + dataplane_network) + # Checking if the image already exists, if not create it -log.debug("Checking image: "+image) -cmd = 'openstack image show '+image +log.debug("Checking image: " + image) +cmd = 'openstack image list -f value -c Name' log.debug(cmd) -cmd = cmd +' |grep "status " | tr -s " " | cut -d" " -f 4' -ImageExist = subprocess.check_output(cmd , shell=True).strip() -if ImageExist == 'active': - log.info("Image ("+image+") already available") +Images = subprocess.check_output(cmd , shell=True).decode().strip() +if image in Images: + log.info("Image (" + image + ") already available") else: log.info('Creating image ...') - cmd = 'openstack image create --disk-format qcow2 --container-format bare --public --file ./'+image_file+ ' ' +image + cmd = 'openstack image create -f value -c status --disk-format qcow2 --container-format bare --public --file ./'+image_file+ ' ' +image log.debug(cmd) - cmd = cmd + ' |grep "status " | tr -s " " | cut -d" " -f 4' - ImageExist = subprocess.check_output(cmd , shell=True).strip() - if ImageExist == 'active': + ImageExist = subprocess.check_output(cmd , shell=True).decode().strip() + if 'active' in ImageExist: log.info('Image created and active') - cmd = 'openstack image set --property hw_vif_multiqueue_enabled="true" ' +image +# cmd = 'openstack image set --property hw_vif_multiqueue_enabled="true" ' +image # subprocess.check_call(cmd , shell=True) else : log.exception("Failed to create image") @@ -232,11 +255,10 @@ else: # Checking if the key already exists, if not create it log.debug("Checking key: "+key) -cmd = 'openstack keypair show '+key +cmd = 'openstack keypair list -f value -c Name' log.debug (cmd) 
-cmd = cmd + ' |grep "name " | tr -s " " | cut -d" " -f 4' -KeyExist = subprocess.check_output(cmd , shell=True).strip() -if KeyExist == key: +KeyExist = subprocess.check_output(cmd , shell=True).decode().strip() +if key in KeyExist: log.info("Key ("+key+") already installed") else: log.info('Creating key ...') @@ -245,95 +267,51 @@ else: subprocess.check_call(cmd , shell=True) cmd = 'chmod 600 ' +key+'.pem' subprocess.check_call(cmd , shell=True) - cmd = 'openstack keypair show '+key + cmd = 'openstack keypair list -f value -c Name' log.debug(cmd) - cmd = cmd + ' |grep "name " | tr -s " " | cut -d" " -f 4' - KeyExist = subprocess.check_output(cmd , shell=True).strip() - if KeyExist == key: + KeyExist = subprocess.check_output(cmd , shell=True).decode().strip() + if key in KeyExist: log.info("Key created") else : log.exception("Failed to create key: " + key) raise Exception("Failed to create key: " + key) - -# Checking if the dataplane network already exists, if not create it -log.debug("Checking dataplane network: "+dataplane_network) -cmd = 'openstack network show '+dataplane_network -log.debug (cmd) -cmd = cmd + ' |grep "status " | tr -s " " | cut -d" " -f 4' -NetworkExist = subprocess.check_output(cmd , shell=True).strip() -if NetworkExist == 'ACTIVE': - log.info("Dataplane network ("+dataplane_network+") already active") -else: - log.info('Creating dataplane network ...') - cmd = 'openstack network create '+dataplane_network - log.debug(cmd) - cmd = cmd + ' |grep "status " | tr -s " " | cut -d" " -f 4' - NetworkExist = subprocess.check_output(cmd , shell=True).strip() - if NetworkExist == 'ACTIVE': - log.info("Dataplane network created") - else : - log.exception("Failed to create dataplane network: " + dataplane_network) - raise Exception("Failed to create dataplane network: " + dataplane_network) - -# Checking if the dataplane subnet already exists, if not create it -log.debug("Checking subnet: "+subnet) -cmd = 'openstack subnet show '+ subnet -log.debug (cmd) -cmd = cmd +' |grep "name " | tr -s " " | cut -d"|" -f 3' -SubnetExist = subprocess.check_output(cmd , shell=True).strip() -if SubnetExist == subnet: - log.info("Subnet (" +subnet+ ") already exists") -else: - log.info('Creating subnet ...') - cmd = 'openstack subnet create --network ' + dataplane_network + ' --subnet-range ' + subnet_cidr +' --gateway none ' + subnet - log.debug(cmd) - cmd = cmd + ' |grep "name " | tr -s " " | cut -d"|" -f 3' - SubnetExist = subprocess.check_output(cmd , shell=True).strip() - if SubnetExist == subnet: - log.info("Subnet created") - else : - log.exception("Failed to create subnet: " + subnet) - raise Exception("Failed to create subnet: " + subnet) - ServerToBeCreated=[] ServerName=[] config = ConfigParser.RawConfigParser() vmconfig = ConfigParser.RawConfigParser() -vmconfig.read(vms+'.vms') +vmconfig.read(vms) total_number_of_VMs = vmconfig.get('DEFAULT', 'total_number_of_vms') +cmd = 'openstack server list -f value -c Name' +log.debug (cmd) +Servers = subprocess.check_output(cmd , shell=True).decode().strip() +cmd = 'openstack flavor list -f value -c Name' +log.debug (cmd) +Flavors = subprocess.check_output(cmd , shell=True).decode().strip() for vm in range(1, int(total_number_of_VMs)+1): flavor_info = vmconfig.get('VM%d'%vm, 'flavor_info') flavor_meta_data = vmconfig.get('VM%d'%vm, 'flavor_meta_data') boot_info = vmconfig.get('VM%d'%vm, 'boot_info') SRIOV_port = vmconfig.get('VM%d'%vm, 'SRIOV_port') + SRIOV_mgmt_port = vmconfig.get('VM%d'%vm, 'SRIOV_mgmt_port') 
ServerName.append('%s-VM%d'%(stack,vm)) flavor_name = '%s-VM%d-flavor'%(stack,vm) - log.debug("Checking server: "+ServerName[-1]) - cmd = 'openstack server show '+ServerName[-1] - log.debug (cmd) - cmd = cmd + ' |grep "\sname\s" | tr -s " " | cut -d" " -f 4' - ServerExist = subprocess.check_output(cmd , shell=True).strip() - if ServerExist == ServerName[-1]: - log.info("Server ("+ServerName[-1]+") already active") + log.debug("Checking server: " + ServerName[-1]) + if ServerName[-1] in Servers: + log.info("Server (" + ServerName[-1] + ") already active") ServerToBeCreated.append("no") else: ServerToBeCreated.append("yes") # Checking if the flavor already exists, if not create it - log.debug("Checking flavor: "+flavor_name) - cmd = 'openstack flavor show '+flavor_name - log.debug (cmd) - cmd = cmd + ' |grep "\sname\s" | tr -s " " | cut -d" " -f 4' - FlavorExist = subprocess.check_output(cmd , shell=True).strip() - if FlavorExist == flavor_name: - log.info("Flavor ("+flavor_name+") already installed") + log.debug("Checking flavor: " + flavor_name) + if flavor_name in Flavors: + log.info("Flavor (" + flavor_name+") already installed") else: log.info('Creating flavor ...') - cmd = 'openstack flavor create %s %s'%(flavor_name,flavor_info) + cmd = 'openstack flavor create %s %s -f value -c name'%(flavor_name,flavor_info) log.debug(cmd) - cmd = cmd + ' |grep "\sname\s" | tr -s " " | cut -d" " -f 4' - FlavorExist = subprocess.check_output(cmd , shell=True).strip() - if FlavorExist == flavor_name: + NewFlavor = subprocess.check_output(cmd , shell=True).decode().strip() + if flavor_name in NewFlavor: cmd = 'openstack flavor set %s %s'%(flavor_name, flavor_meta_data) log.debug(cmd) subprocess.check_call(cmd , shell=True) @@ -341,55 +319,61 @@ for vm in range(1, int(total_number_of_VMs)+1): else : log.exception("Failed to create flavor: " + flavor_name) raise Exception("Failed to create flavor: " + flavor_name) + if SRIOV_mgmt_port == 'NO': + nic_info = '--nic net-id=%s'%(internal_network) + else: + for port in SRIOV_mgmt_port.split(','): + nic_info = '--nic port-id=%s'%(port) if SRIOV_port == 'NO': - nic_info = '--nic net-id=%s --nic net-id=%s'%(internal_network,dataplane_network) + nic_info = nic_info + ' --nic net-id=%s'%(dataplane_network) else: - nic_info = '--nic net-id=%s'%(internal_network) for port in SRIOV_port.split(','): nic_info = nic_info + ' --nic port-id=%s'%(port) if vm==int(total_number_of_VMs): # For the last server, we want to wait for the server creation to complete, so the next operations will succeeed (e.g. IP allocation) # Note that this waiting is not bullet proof. Imagine, we loop through all the VMs, and the last VM was already running, while the previous # VMs still needed to be created. Or the previous server creations take much longer than the last one. - # In that case, we might be to fast when we query for the IP & MAC addresses. - wait = ' --wait ' + # In that case, we might be too fast when we query for the IP & MAC addresses. 
+ wait = '--wait' else: - wait = ' ' + wait = '' log.info("Creating server...") - cmd = 'openstack server create --flavor %s --key-name %s --image %s %s %s%s%s'%(flavor_name,key,image,nic_info,boot_info,wait,ServerName[-1]) + cmd = 'openstack server create --flavor %s --key-name %s --image %s %s %s %s %s'%(flavor_name,key,image,nic_info,boot_info,wait,ServerName[-1]) log.debug(cmd) - cmd = cmd + ' |grep "\sname\s" | tr -s " " | cut -d" " -f 4' - ServerExist = subprocess.check_output(cmd , shell=True).strip() -if floating_network <> 'NO': + output = subprocess.check_output(cmd , shell=True).decode().strip() +if floating_network != 'NO': for vm in range(0, int(total_number_of_VMs)): if ServerToBeCreated[vm] =="yes": - log.info('Creating & Associating floating IP for ('+ServerName[vm]+')...') - cmd = 'openstack server show %s -c addresses -f value |grep -Eo "%s=[0-9]*\.[0-9]*\.[0-9]*\.[0-9]*" | cut -d"=" -f2'%(ServerName[vm],internal_network) - log.debug(cmd) - vmportIP = subprocess.check_output(cmd , shell=True).strip() - cmd = 'openstack port list -c ID -c "Fixed IP Addresses" | grep %s' %(vmportIP) - cmd = cmd + ' | cut -d" " -f 2 ' - log.debug(cmd) - vmportID = subprocess.check_output(cmd , shell=True).strip() - cmd = 'openstack floating ip create --port %s %s'%(vmportID,floating_network) - log.debug(cmd) - output = subprocess.check_output(cmd , shell=True).strip() - + log.info('Creating & Associating floating IP for ('+ServerName[vm]+')...') + cmd = 'openstack server show %s -c addresses -f value |grep -Eo "%s=[0-9]*\.[0-9]*\.[0-9]*\.[0-9]*" | cut -d"=" -f2'%(ServerName[vm],internal_network) + log.debug(cmd) + vmportIP = subprocess.check_output(cmd , shell=True).decode().strip() + cmd = 'openstack port list -c ID -c "Fixed IP Addresses" | grep %s | cut -d" " -f 2 ' %(vmportIP) + log.debug(cmd) + vmportID = subprocess.check_output(cmd , shell=True).decode().strip() + cmd = 'openstack floating ip create --port %s %s'%(vmportID,floating_network) + log.debug(cmd) + output = subprocess.check_output(cmd , shell=True).decode().strip() + +config.add_section('rapid') +config.set('rapid', 'loglevel', loglevel) +config.set('rapid', 'version', version) +config.set('rapid', 'total_number_of_machines', total_number_of_VMs) for vm in range(1, int(total_number_of_VMs)+1): cmd = 'openstack server show %s'%(ServerName[vm-1]) log.debug(cmd) - output = subprocess.check_output(cmd , shell=True).strip() - searchString = '.*%s=([0-9]*\.[0-9]*\.[0-9]*\.[0-9]*)' %(dataplane_network) - matchObj = re.search(searchString, output, re.DOTALL) + output = subprocess.check_output(cmd , shell=True).decode().strip() + searchString = '.*%s=([0-9]*\.[0-9]*\.[0-9]*\.[0-9]*)' %(dataplane_network) + matchObj = re.search(searchString, output, re.DOTALL) vmDPIP = matchObj.group(1) - searchString = '.*%s=([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+),*\s*([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)*' %(internal_network) - matchObj = re.search(searchString, output, re.DOTALL) + searchString = '.*%s=([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+),*\s*([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)*' %(internal_network) + matchObj = re.search(searchString, output, re.DOTALL) vmAdminIP = matchObj.group(2) if vmAdminIP == None: vmAdminIP = matchObj.group(1) cmd = 'openstack port list |egrep "\\b%s\\b" | tr -s " " | cut -d"|" -f 4'%(vmDPIP) log.debug(cmd) - vmDPmac = subprocess.check_output(cmd , shell=True).strip() + vmDPmac = subprocess.check_output(cmd , shell=True).decode().strip() config.add_section('M%d'%vm) config.set('M%d'%vm, 'name', ServerName[vm-1]) config.set('M%d'%vm, 
'admin_ip', vmAdminIP) @@ -397,22 +381,19 @@ for vm in range(1, int(total_number_of_VMs)+1): config.set('M%d'%vm, 'dp_mac', vmDPmac) log.info('%s: (admin IP: %s), (dataplane IP: %s), (dataplane MAC: %s)' % (ServerName[vm-1],vmAdminIP,vmDPIP,vmDPmac)) -config.add_section('OpenStack') -config.set('OpenStack', 'stack', stack) -config.set('OpenStack', 'VMs', vms) -config.set('OpenStack', 'key', key) -config.set('OpenStack', 'image', image) -config.set('OpenStack', 'image_file', image_file) -config.set('OpenStack', 'dataplane_network', dataplane_network) -config.set('OpenStack', 'subnet', subnet) -config.set('OpenStack', 'subnet_cidr', subnet_cidr) -config.set('OpenStack', 'internal_network', internal_network) -config.set('OpenStack', 'floating_network', floating_network) -config.add_section('rapid') -config.set('rapid', 'loglevel', loglevel) -config.set('rapid', 'version', version) -config.set('rapid', 'total_number_of_machines', total_number_of_VMs) -config.set('DEFAULT', 'admin_ip', 'none') +config.add_section('ssh') +config.set('ssh', 'key', key) +config.add_section('Varia') +config.set('Varia', 'VIM', 'OpenStack') +config.set('Varia', 'stack', stack) +config.set('Varia', 'VMs', vms) +config.set('Varia', 'image', image) +config.set('Varia', 'image_file', image_file) +config.set('Varia', 'dataplane_network', dataplane_network) +config.set('Varia', 'subnet', subnet) +config.set('Varia', 'subnet_cidr', subnet_cidr) +config.set('Varia', 'internal_network', internal_network) +config.set('Varia', 'floating_network', floating_network) # Writing the environment file with open(stack+'.env', 'wb') as envfile: - config.write(envfile) + config.write(envfile) diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/deploycentos.sh b/VNFs/DPPD-PROX/helper-scripts/rapid/deploycentos.sh deleted file mode 100644 index 848520ce..00000000 --- a/VNFs/DPPD-PROX/helper-scripts/rapid/deploycentos.sh +++ /dev/null @@ -1,139 +0,0 @@ -#!/usr/bin/env bash -## -## Copyright (c) 2010-2019 Intel Corporation -## -## Licensed under the Apache License, Version 2.0 (the "License"); -## you may not use this file except in compliance with the License. -## You may obtain a copy of the License at -## -## http://www.apache.org/licenses/LICENSE-2.0 -## -## Unless required by applicable law or agreed to in writing, software -## distributed under the License is distributed on an "AS IS" BASIS, -## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -## See the License for the specific language governing permissions and -## limitations under the License. 
-## - -BUILD_DIR="/opt/openstackrapid" -COPY_DIR="/home/centos" # Directory where the packer tool has copied some files -DPDK_VERSION="18.08" -PROX_COMMIT="af95b812" -export RTE_SDK="${BUILD_DIR}/dpdk-${DPDK_VERSION}" -export RTE_TARGET="x86_64-native-linuxapp-gcc" - -function os_pkgs_install() -{ - # NASM repository for AESNI MB library - sudo yum-config-manager --add-repo http://www.nasm.us/nasm.repo - - sudo yum install -y deltarpm - sudo yum update -y - sudo yum install -y git wget gcc unzip libpcap-devel ncurses-devel \ - libedit-devel lua-devel kernel-devel iperf3 pciutils \ - numactl-devel vim tuna openssl-devel nasm -} - -function os_cfg() -{ - # Enabling root ssh access - sudo sed -i '/disable_root: 1/c\disable_root: 0' /etc/cloud/cloud.cfg - - # Mounting huge pages to be used by DPDK, mounting already done by CentOS - # sudo mkdir -p /mnt/huge - # sudo umount `awk '/hugetlbfs/ { print $2 }' /proc/mounts` >/dev/null 2>&1 - # sudo mount -t hugetlbfs nodev /mnt/huge/ - sudo sh -c '(echo "vm.nr_hugepages = 1024") > /etc/sysctl.conf' - - # Enabling tuned with the realtime-virtual-guest profile - pushd ${BUILD_DIR} > /dev/null 2>&1 - wget http://linuxsoft.cern.ch/cern/centos/7/rt/x86_64/Packages/tuned-profiles-realtime-2.8.0-5.el7_4.2.noarch.rpm - wget http://linuxsoft.cern.ch/cern/centos/7/rt/x86_64/Packages/tuned-profiles-nfv-guest-2.8.0-5.el7_4.2.noarch.rpm - # Install with --nodeps. The latest CentOS cloud images come with a tuned version higher than 2.8. These 2 packages however - # do not depend on v2.8 and also work with tuned 2.9. Need to be careful in the future - sudo rpm -ivh ${BUILD_DIR}/tuned-profiles-realtime-2.8.0-5.el7_4.2.noarch.rpm --nodeps - sudo rpm -ivh ${BUILD_DIR}/tuned-profiles-nfv-guest-2.8.0-5.el7_4.2.noarch.rpm --nodeps - # Although we do no know how many cores the VM will have when begin deployed for real testing, we already put a number for the - # isolated CPUs so we can start the realtime-virtual-guest profile. If we don't, that command will fail. - # When the VM will be instantiated, the check_kernel_params service will check for the real number of cores available to this VM - # and update the realtime-virtual-guest-variables.conf accordingly. - echo "isolated_cores=1" | sudo tee -a /etc/tuned/realtime-virtual-guest-variables.conf - sudo tuned-adm profile realtime-virtual-guest - - # Install the check_tuned_params service to make sure that the grub cmd line has the right cpus in isolcpu. The actual number of cpu's - # assigned to this VM depends on the flavor used. We don't know at this time what that will be. - sudo chmod +x ${COPY_DIR}/check_prox_system_setup.sh - sudo cp -r ${COPY_DIR}/check_prox_system_setup.sh /usr/local/libexec/ - sudo cp -r ${COPY_DIR}/check-prox-system-setup.service /etc/systemd/system/ - sudo systemctl daemon-reload - sudo systemctl enable check-prox-system-setup.service - - popd > /dev/null 2>&1 -} - -function mblib_install() -{ - export AESNI_MULTI_BUFFER_LIB_PATH="${BUILD_DIR}/intel-ipsec-mb-0.50" - - # Downloading the Multi-buffer library. 
Note that the version to download is linked to the DPDK version being used - pushd ${BUILD_DIR} > /dev/null 2>&1 - wget https://github.com/01org/intel-ipsec-mb/archive/v0.50.zip - unzip v0.50.zip - pushd ${AESNI_MULTI_BUFFER_LIB_PATH} - make -j`getconf _NPROCESSORS_ONLN` - sudo make install - popd > /dev/null 2>&1 - popd > /dev/null 2>&1 -} - -function dpdk_install() -{ - # Build DPDK for the latest kernel installed - LATEST_KERNEL_INSTALLED=`ls -v1 /lib/modules/ | tail -1` - export RTE_KERNELDIR="/lib/modules/${LATEST_KERNEL_INSTALLED}/build" - - # Get and compile DPDK - pushd ${BUILD_DIR} > /dev/null 2>&1 - wget http://fast.dpdk.org/rel/dpdk-${DPDK_VERSION}.tar.xz - tar -xf ./dpdk-${DPDK_VERSION}.tar.xz - popd > /dev/null 2>&1 - - # Runtime scripts are assuming /root as the directory for PROX - sudo ln -s ${RTE_SDK} /root/dpdk - - pushd ${RTE_SDK} > /dev/null 2>&1 - make config T=${RTE_TARGET} - # The next sed lines make sure that we can compile DPDK 17.11 with a relatively new OS. Using a newer DPDK (18.5) should also resolve this issue - #sudo sed -i '/CONFIG_RTE_LIBRTE_KNI=y/c\CONFIG_RTE_LIBRTE_KNI=n' ${RTE_SDK}/build/.config - #sudo sed -i '/CONFIG_RTE_LIBRTE_PMD_KNI=y/c\CONFIG_RTE_LIBRTE_PMD_KNI=n' ${RTE_SDK}/build/.config - #sudo sed -i '/CONFIG_RTE_KNI_KMOD=y/c\CONFIG_RTE_KNI_KMOD=n' ${RTE_SDK}/build/.config - #sudo sed -i '/CONFIG_RTE_KNI_PREEMPT_DEFAULT=y/c\CONFIG_RTE_KNI_PREEMPT_DEFAULT=n' ${RTE_SDK}/build/.config - # Compile with MB library - sed -i '/CONFIG_RTE_LIBRTE_PMD_AESNI_MB=n/c\CONFIG_RTE_LIBRTE_PMD_AESNI_MB=y' ${RTE_SDK}/build/.config - make -j`getconf _NPROCESSORS_ONLN` - ln -s ${RTE_SDK}/build ${RTE_SDK}/${RTE_TARGET} - popd > /dev/null 2>&1 -} - -function prox_install() -{ - # Clone and compile PROX - pushd ${BUILD_DIR} > /dev/null 2>&1 - git clone https://git.opnfv.org/samplevnf - pushd ${BUILD_DIR}/samplevnf/VNFs/DPPD-PROX -# git checkout ffc6be26 - git checkout ${PROX_COMMIT} - make -j`getconf _NPROCESSORS_ONLN` - sudo ln -s ${BUILD_DIR}/samplevnf/VNFs/DPPD-PROX /root/prox - popd > /dev/null 2>&1 - popd > /dev/null 2>&1 -} - -[ ! -d ${BUILD_DIR} ] && sudo mkdir -p ${BUILD_DIR} -sudo chmod 0777 ${BUILD_DIR} - -os_pkgs_install -os_cfg -mblib_install -dpdk_install -prox_install diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/deploycentostools.sh b/VNFs/DPPD-PROX/helper-scripts/rapid/deploycentostools.sh new file mode 100644 index 00000000..883244fa --- /dev/null +++ b/VNFs/DPPD-PROX/helper-scripts/rapid/deploycentostools.sh @@ -0,0 +1,148 @@ +#!/usr/bin/env bash +## +## Copyright (c) 2010-2019 Intel Corporation +## +## Licensed under the Apache License, Version 2.0 (the "License"); +## you may not use this file except in compliance with the License. +## You may obtain a copy of the License at +## +## http://www.apache.org/licenses/LICENSE-2.0 +## +## Unless required by applicable law or agreed to in writing, software +## distributed under the License is distributed on an "AS IS" BASIS, +## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +## See the License for the specific language governing permissions and +## limitations under the License. +## + +BUILD_DIR="/opt/openstackrapid" +COPY_DIR="/home/centos" # Directory where the packer tool has copied some files (e.g. 
check_prox_system_setup.sh) +DPDK_VERSION="18.08" +PROX_COMMIT="c8e9e6bb696363a397b2e718eb4d3e5f38a8ef22" +export RTE_SDK="${BUILD_DIR}/dpdk-${DPDK_VERSION}" +export RTE_TARGET="x86_64-native-linuxapp-gcc" + +function os_pkgs_install() +{ + # NASM repository for AESNI MB library + sudo yum-config-manager --add-repo http://www.nasm.us/nasm.repo + + sudo yum install -y deltarpm + sudo yum update -y + sudo yum install -y git wget gcc unzip libpcap-devel ncurses-devel \ + libedit-devel lua-devel kernel-devel iperf3 pciutils \ + numactl-devel vim tuna openssl-devel nasm wireshark +} + +function os_cfg() +{ + # Enabling root ssh access + sudo sed -i '/disable_root: 1/c\disable_root: 0' /etc/cloud/cloud.cfg + + # huge pages to be used by DPDK + sudo sh -c '(echo "vm.nr_hugepages = 1024") > /etc/sysctl.conf' + + # Enabling tuned with the realtime-virtual-guest profile + pushd ${BUILD_DIR} > /dev/null 2>&1 + wget http://linuxsoft.cern.ch/cern/centos/7/rt/x86_64/Packages/tuned-profiles-realtime-2.8.0-5.el7_4.2.noarch.rpm + wget http://linuxsoft.cern.ch/cern/centos/7/rt/x86_64/Packages/tuned-profiles-nfv-guest-2.8.0-5.el7_4.2.noarch.rpm + # Install with --nodeps. The latest CentOS cloud images come with a tuned version higher than 2.8. These 2 packages however + # do not depend on v2.8 and also work with tuned 2.9. Need to be careful in the future + sudo rpm -ivh ${BUILD_DIR}/tuned-profiles-realtime-2.8.0-5.el7_4.2.noarch.rpm --nodeps + sudo rpm -ivh ${BUILD_DIR}/tuned-profiles-nfv-guest-2.8.0-5.el7_4.2.noarch.rpm --nodeps + # Although we do no know how many cores the VM will have when begin deployed for real testing, we already put a number for the + # isolated CPUs so we can start the realtime-virtual-guest profile. If we don't, that command will fail. + # When the VM will be instantiated, the check_kernel_params service will check for the real number of cores available to this VM + # and update the realtime-virtual-guest-variables.conf accordingly. + echo "isolated_cores=1" | sudo tee -a /etc/tuned/realtime-virtual-guest-variables.conf + sudo tuned-adm profile realtime-virtual-guest + + # Install the check_tuned_params service to make sure that the grub cmd line has the right cpus in isolcpu. The actual number of cpu's + # assigned to this VM depends on the flavor used. We don't know at this time what that will be. + sudo chmod +x ${COPY_DIR}/check_prox_system_setup.sh + sudo cp -r ${COPY_DIR}/check_prox_system_setup.sh /usr/local/libexec/ + sudo cp -r ${COPY_DIR}/check-prox-system-setup.service /etc/systemd/system/ + sudo systemctl daemon-reload + sudo systemctl enable check-prox-system-setup.service + + popd > /dev/null 2>&1 +} + +function mblib_install() +{ + export AESNI_MULTI_BUFFER_LIB_PATH="${BUILD_DIR}/intel-ipsec-mb-0.50" + + # Downloading the Multi-buffer library. 
Note that the version to download is linked to the DPDK version being used + pushd ${BUILD_DIR} > /dev/null 2>&1 + wget https://github.com/01org/intel-ipsec-mb/archive/v0.50.zip + unzip v0.50.zip + pushd ${AESNI_MULTI_BUFFER_LIB_PATH} + make -j`getconf _NPROCESSORS_ONLN` + sudo make install + popd > /dev/null 2>&1 + popd > /dev/null 2>&1 +} + +function dpdk_install() +{ + # Build DPDK for the latest kernel installed + LATEST_KERNEL_INSTALLED=`ls -v1 /lib/modules/ | tail -1` + export RTE_KERNELDIR="/lib/modules/${LATEST_KERNEL_INSTALLED}/build" + + # Get and compile DPDK + pushd ${BUILD_DIR} > /dev/null 2>&1 + wget http://fast.dpdk.org/rel/dpdk-${DPDK_VERSION}.tar.xz + tar -xf ./dpdk-${DPDK_VERSION}.tar.xz + popd > /dev/null 2>&1 + + # Runtime scripts are assuming /root as the directory for PROX + sudo ln -s ${RTE_SDK} /root/dpdk + + pushd ${RTE_SDK} > /dev/null 2>&1 + make config T=${RTE_TARGET} + # The next sed lines make sure that we can compile DPDK 17.11 with a relatively new OS. Using a newer DPDK (18.5) should also resolve this issue + #sudo sed -i '/CONFIG_RTE_LIBRTE_KNI=y/c\CONFIG_RTE_LIBRTE_KNI=n' ${RTE_SDK}/build/.config + #sudo sed -i '/CONFIG_RTE_LIBRTE_PMD_KNI=y/c\CONFIG_RTE_LIBRTE_PMD_KNI=n' ${RTE_SDK}/build/.config + #sudo sed -i '/CONFIG_RTE_KNI_KMOD=y/c\CONFIG_RTE_KNI_KMOD=n' ${RTE_SDK}/build/.config + #sudo sed -i '/CONFIG_RTE_KNI_PREEMPT_DEFAULT=y/c\CONFIG_RTE_KNI_PREEMPT_DEFAULT=n' ${RTE_SDK}/build/.config + # Compile with MB library + sed -i '/CONFIG_RTE_LIBRTE_PMD_AESNI_MB=n/c\CONFIG_RTE_LIBRTE_PMD_AESNI_MB=y' ${RTE_SDK}/build/.config + make -j`getconf _NPROCESSORS_ONLN` + ln -s ${RTE_SDK}/build ${RTE_SDK}/${RTE_TARGET} + popd > /dev/null 2>&1 +} + +function prox_compile() +{ + # Compile PROX + pushd ${BUILD_DIR}/samplevnf/VNFs/DPPD-PROX + make -j`getconf _NPROCESSORS_ONLN` + popd > /dev/null 2>&1 +} + +function prox_install() +{ + # Clone and compile PROX + pushd ${BUILD_DIR} > /dev/null 2>&1 + git clone https://git.opnfv.org/samplevnf + pushd ${BUILD_DIR}/samplevnf/VNFs/DPPD-PROX + git checkout ${PROX_COMMIT} + popd > /dev/null 2>&1 + prox_compile + sudo ln -s ${BUILD_DIR}/samplevnf/VNFs/DPPD-PROX /root/prox + popd > /dev/null 2>&1 +} + +if [ "$1" == "compile" ]; then + prox_compile +else + echo "Positional parameter 1 is empty" + [ ! 
-d ${BUILD_DIR} ] && sudo mkdir -p ${BUILD_DIR} + sudo chmod 0777 ${BUILD_DIR} + + os_pkgs_install + os_cfg + mblib_install + dpdk_install + prox_install +fi diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/gen.cfg b/VNFs/DPPD-PROX/helper-scripts/rapid/gen.cfg index 42cfdc1b..0b52430f 100644 --- a/VNFs/DPPD-PROX/helper-scripts/rapid/gen.cfg +++ b/VNFs/DPPD-PROX/helper-scripts/rapid/gen.cfg @@ -56,7 +56,7 @@ drop=yes lat pos=42 accuracy pos=46 packet id pos=50 -signature=0x6789abcd +signature=0x98765432 signature pos=56 ;arp update time=1 @@ -69,7 +69,7 @@ rx port=p0 lat pos=42 accuracy pos=46 packet id pos=50 -signature=0x6789abcd +signature=0x98765432 signature pos=56 accuracy limit nsec=1000000 ;arp update time=1 diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/gen_gw.cfg b/VNFs/DPPD-PROX/helper-scripts/rapid/gen_gw.cfg index e819041c..d6a2fa98 100644 --- a/VNFs/DPPD-PROX/helper-scripts/rapid/gen_gw.cfg +++ b/VNFs/DPPD-PROX/helper-scripts/rapid/gen_gw.cfg @@ -56,7 +56,7 @@ drop=yes lat pos=42 accuracy pos=46 packet id pos=50 -signature=0x6789abcd +signature=0x98765432 signature pos=56 ;arp update time=1 @@ -69,6 +69,6 @@ rx port=p0 lat pos=42 accuracy pos=46 packet id pos=50 -signature=0x6789abcd +signature=0x98765432 signature pos=56 ;arp update time=1 diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/impair.test b/VNFs/DPPD-PROX/helper-scripts/rapid/impair.test index 9b633d99..d1b0e368 100644 --- a/VNFs/DPPD-PROX/helper-scripts/rapid/impair.test +++ b/VNFs/DPPD-PROX/helper-scripts/rapid/impair.test @@ -20,6 +20,7 @@ number_of_tests = 2 total_number_of_test_machines = 3 prox_socket = true prox_launch_exit = true +tasks=[0] [TestM1] name = Generator @@ -28,7 +29,6 @@ gw_vm = 2 dest_vm = 3 gencores = [1] latcores = [3] -startspeed = 10 [TestM2] name = ImpairGW @@ -45,9 +45,14 @@ drop_rate_threshold = 0.1 lat_avg_threshold = 500 lat_max_threshold = 1000 accuracy = 0.1 +startspeed = 10 [test1] -test=inittest +test=warmuptest +flowsize=1024 +packetsize=64 +warmupspeed=10 +warmuptime=2 [test2] test=impairtest diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/irq.test b/VNFs/DPPD-PROX/helper-scripts/rapid/irq.test index b8dc706b..78b68483 100644 --- a/VNFs/DPPD-PROX/helper-scripts/rapid/irq.test +++ b/VNFs/DPPD-PROX/helper-scripts/rapid/irq.test @@ -20,6 +20,7 @@ number_of_tests = 1 total_number_of_test_machines = 1 prox_socket = true prox_launch_exit = true +tasks=[0] [TestM1] name = InterruptTesting diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/l2framerate.test b/VNFs/DPPD-PROX/helper-scripts/rapid/l2framerate.test index 44fefdda..a9f8d0ae 100644 --- a/VNFs/DPPD-PROX/helper-scripts/rapid/l2framerate.test +++ b/VNFs/DPPD-PROX/helper-scripts/rapid/l2framerate.test @@ -20,6 +20,7 @@ number_of_tests = 2 total_number_of_test_machines = 2 prox_socket = true prox_launch_exit = true +tasks=[0] [TestM1] name = Generator @@ -35,8 +36,14 @@ config_file = l2swap.cfg swapcores = [1] [test1] -test=inittest +test=warmuptest +flowsize=1024 +packetsize=64 +warmupspeed=10 +warmuptime=2 + [test2] -test=max_frame_rate +test=fixed_rate packetsizes=[256] +speed=10 diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/l2gen.cfg b/VNFs/DPPD-PROX/helper-scripts/rapid/l2gen.cfg index 1469604b..3a3cf2c8 100644 --- a/VNFs/DPPD-PROX/helper-scripts/rapid/l2gen.cfg +++ b/VNFs/DPPD-PROX/helper-scripts/rapid/l2gen.cfg @@ -55,7 +55,7 @@ drop=yes lat pos=42 accuracy pos=46 packet id pos=50 -signature=0x6789abcd +signature=0x98765432 signature pos=56 [core $latcores] @@ -66,5 +66,5 @@ rx port=p0 lat pos=42 accuracy pos=46 packet 
id pos=50 -signature=0x6789abcd +signature=0x98765432 signature pos=56 diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/l2gen_bare.cfg b/VNFs/DPPD-PROX/helper-scripts/rapid/l2gen_bare.cfg new file mode 100644 index 00000000..79140623 --- /dev/null +++ b/VNFs/DPPD-PROX/helper-scripts/rapid/l2gen_bare.cfg @@ -0,0 +1,59 @@ +;; +;; Copyright (c) 2010-2017 Intel Corporation +;; +;; Licensed under the Apache License, Version 2.0 (the "License"); +;; you may not use this file except in compliance with the License. +;; You may obtain a copy of the License at +;; +;; http://www.apache.org/licenses/LICENSE-2.0 +;; +;; Unless required by applicable law or agreed to in writing, software +;; distributed under the License is distributed on an "AS IS" BASIS, +;; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +;; See the License for the specific language governing permissions and +;; limitations under the License. +;; + +[eal options] +-n=4 ; force number of memory channels +no-output=no ; disable DPDK debug output + +[lua] +dofile("parameters.lua") + +[port 0] +name=p0 +rx desc=2048 +tx desc=2048 +vlan=yes + +[variables] +$mbs=8 + +[defaults] +mempool size=8K + +[global] +name=${name} + +[core 0] +mode=master + +[core $gencores] +name=p0 +task=0 +mode=gen +tx port=p0 +bps=1250000000 +pkt inline=${dest_hex_mac} 00 00 00 00 00 00 08 00 45 00 00 2e 00 01 00 00 40 11 f7 7d ${local_hex_ip} ${dest_hex_ip} 0b b8 0b b9 00 1a 55 7b +pkt size=60 +local ipv4=${local_ip} +min bulk size=$mbs +max bulk size=64 +drop=yes + +[core $latcores] +name=drop +task=0 +mode=none +rx port=p0 diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/l2zeroloss.test b/VNFs/DPPD-PROX/helper-scripts/rapid/l2zeroloss.test index 04065909..af60c407 100644 --- a/VNFs/DPPD-PROX/helper-scripts/rapid/l2zeroloss.test +++ b/VNFs/DPPD-PROX/helper-scripts/rapid/l2zeroloss.test @@ -16,10 +16,11 @@ [DEFAULT] name = L2BasicSwapTesting -number_of_tests = 4 +number_of_tests = 2 total_number_of_test_machines = 2 prox_socket = true prox_launch_exit = true +tasks=[0] [TestM1] name = Generator @@ -27,7 +28,6 @@ config_file = l2gen.cfg dest_vm = 2 gencores = [1] latcores = [3] -startspeed = 10 [TestM2] name = Swap @@ -39,18 +39,20 @@ drop_rate_threshold = 0 lat_avg_threshold = 500 lat_max_threshold = 1000 accuracy = 0.1 - +startspeed = 10 [test1] -test=inittest - -[test2] -test=speedtest +test=warmuptest +flowsize=1024 packetsize=64 +warmupspeed=1 +warmuptime=2 -[test3] -test=sizetest - -[test4] -test=flowtest -packetsize=64 +[test2] +test=flowsizetest +packetsizes=[64] +# the number of flows in the list need to be powers of 2, max 2^20 +# # Select from following numbers: 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768, 65535, 131072, 262144, 524280, 1048576 +# flows=[1,512] +# +# diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/l3framerate.test b/VNFs/DPPD-PROX/helper-scripts/rapid/l3framerate.test index 21bf8106..81d9989d 100644 --- a/VNFs/DPPD-PROX/helper-scripts/rapid/l3framerate.test +++ b/VNFs/DPPD-PROX/helper-scripts/rapid/l3framerate.test @@ -20,6 +20,7 @@ number_of_tests = 2 total_number_of_test_machines = 2 prox_socket = true prox_launch_exit = true +tasks=[0] [TestM1] name = Generator @@ -27,7 +28,6 @@ config_file = gen.cfg dest_vm = 2 gencores = [1] latcores = [3] -startspeed = 10 [TestM2] name = Swap @@ -35,8 +35,13 @@ config_file = swap.cfg swapcores = [1] [test1] -test=inittest +test=warmuptest +flowsize=1024 +packetsize=64 +warmupspeed=10 +warmuptime=2 [test2] -test=max_frame_rate 
-packetsizes=[256] +test=fixed_rate +packetsizes=[64] +speed=10 diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/machine.map b/VNFs/DPPD-PROX/helper-scripts/rapid/machine.map new file mode 100644 index 00000000..b6e199d7 --- /dev/null +++ b/VNFs/DPPD-PROX/helper-scripts/rapid/machine.map @@ -0,0 +1,30 @@ +## +## Copyright (c) 2010-2018 Intel Corporation +## +## Licensed under the Apache License, Version 2.0 (the "License"); +## you may not use this file except in compliance with the License. +## You may obtain a copy of the License at +## +## http://www.apache.org/licenses/LICENSE-2.0 +## +## Unless required by applicable law or agreed to in writing, software +## distributed under the License is distributed on an "AS IS" BASIS, +## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +## See the License for the specific language governing permissions and +## limitations under the License. +## +## This file contains the mapping for each test machine. The test machine will +## be deployed on a machine defined in the *.env file, as defined by the +## machine_index + +[DEFAULT] +machine_index=0 + +[TestM1] +machine_index=1 + +[TestM2] +machine_index=2 + +[TestM3] +machine_index=3 diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/monitorswap.test b/VNFs/DPPD-PROX/helper-scripts/rapid/monitorswap.test new file mode 100644 index 00000000..76da2347 --- /dev/null +++ b/VNFs/DPPD-PROX/helper-scripts/rapid/monitorswap.test @@ -0,0 +1,31 @@ +## +## Copyright (c) 2010-2019 Intel Corporation +## +## Licensed under the Apache License, Version 2.0 (the "License"); +## you may not use this file except in compliance with the License. +## You may obtain a copy of the License at +## +## http://www.apache.org/licenses/LICENSE-2.0 +## +## Unless required by applicable law or agreed to in writing, software +## distributed under the License is distributed on an "AS IS" BASIS, +## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +## See the License for the specific language governing permissions and +## limitations under the License. 
+## + +[DEFAULT] +name = MonitorSwap +number_of_tests = 1 +total_number_of_test_machines = 1 +prox_socket = true +prox_launch_exit = false +tasks=[0] + +[TestM1] +name = Swap +config_file = swap.cfg +swapcores = [1] + +[test1] +test=measureswap diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/prox_ctrl.py b/VNFs/DPPD-PROX/helper-scripts/rapid/prox_ctrl.py index 3ee28c00..bda3e5d9 100644 --- a/VNFs/DPPD-PROX/helper-scripts/rapid/prox_ctrl.py +++ b/VNFs/DPPD-PROX/helper-scripts/rapid/prox_ctrl.py @@ -183,15 +183,21 @@ class prox_sock(object): def reset_stats(self): self._send('reset stats') - def lat_stats(self, cores, task=0): + def lat_stats(self, cores, tasks={0}): min_lat = 999999999 max_lat = avg_lat = 0 - self._send('lat stats %s %s' % (','.join(map(str, cores)), task)) + self._send('lat stats %s %s' % (','.join(map(str, cores)), ','.join(map(str, tasks)))) for core in cores: - stats = self._recv().split(',') - min_lat = min(int(stats[0]),min_lat) - max_lat = max(int(stats[1]),max_lat) - avg_lat += int(stats[2]) + for task in tasks: + stats = self._recv().split(',') + if stats[0].startswith('error'): + if stats[0].startswith('error: invalid syntax'): + log.critical("dp core stats error: unexpected invalid syntax (potential incompatibility between scripts and PROX)") + raise Exception("dp core stats error") + continue + min_lat = min(int(stats[0]),min_lat) + max_lat = max(int(stats[1]),max_lat) + avg_lat += int(stats[2]) avg_lat = avg_lat/len(cores) self._send('stats latency(0).used') used = float(self._recv()) @@ -211,19 +217,27 @@ class prox_sock(object): buckets = buckets[:-1] return buckets - def core_stats(self, cores, task=0): - rx = tx = drop = tsc = hz = rx_non_dp = tx_non_dp = 0 - self._send('dp core stats %s %s' % (','.join(map(str, cores)), task)) + def core_stats(self, cores, tasks={0}): + rx = tx = drop = tsc = hz = rx_non_dp = tx_non_dp = tx_fail = 0 + self._send('dp core stats %s %s' % (','.join(map(str, cores)), ','.join(map(str, tasks)))) for core in cores: - stats = self._recv().split(',') - rx += int(stats[0]) - tx += int(stats[1]) - rx_non_dp += int(stats[2]) - tx_non_dp += int(stats[3]) - drop += int(stats[4]) - tsc = int(stats[5]) - hz = int(stats[6]) - return rx-rx_non_dp, tx-tx_non_dp, drop, tsc, hz + for task in tasks: + stats = self._recv().split(',') + if stats[0].startswith('error'): + if stats[0].startswith('error: invalid syntax'): + log.critical("dp core stats error: unexpected invalid syntax (potential incompatibility between scripts and PROX)") + raise Exception("dp core stats error") + continue + rx += int(stats[0]) + tx += int(stats[1]) + rx_non_dp += int(stats[2]) + tx_non_dp += int(stats[3]) + drop += int(stats[4]) + tx_fail += int(stats[5]) + tsc = int(stats[6]) + hz = int(stats[7]) + return rx,rx_non_dp, tx,tx_non_dp, drop, tx_fail, tsc, hz + #return rx-rx_non_dp, tx-tx_non_dp, drop, tx_fail, tsc, hz def set_random(self, cores, task, offset, mask, length): self._send('set random %s %s %s %s %s' % (','.join(map(str, cores)), task, offset, mask, length)) diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/rapidVMs.vms b/VNFs/DPPD-PROX/helper-scripts/rapid/rapidVMs.vms index d18184d6..b83c0d07 100644 --- a/VNFs/DPPD-PROX/helper-scripts/rapid/rapidVMs.vms +++ b/VNFs/DPPD-PROX/helper-scripts/rapid/rapidVMs.vms @@ -22,8 +22,9 @@ flavor_info=--ram 4096 --disk 40 --vcpus 4 ;flavor_meta_data=--property hw:mem_page_size=large --property hw:cpu_policy=dedicated --property hw:cpu_thread_policy=isolate flavor_meta_data=--property hw:mem_page_size=large --property 
hw:cpu_policy=dedicated ;flavor_meta_data=--property hw:mem_page_size=large --property hw:cpu_policy=dedicated --property hw:cpu_realtime=yes --property hw:cpu_realtime_mask=^0 -;boot_info=--availability-zone nova --security-group default +;boot_info=--availability-zone nova --security-group default --config-drive=true boot_info=--availability-zone nova --security-group prox +SRIOV_mgmt_port=NO SRIOV_port=NO [VM1] diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/runrapid.py b/VNFs/DPPD-PROX/helper-scripts/rapid/runrapid.py index 3b0eeb8b..159550ca 100755 --- a/VNFs/DPPD-PROX/helper-scripts/rapid/runrapid.py +++ b/VNFs/DPPD-PROX/helper-scripts/rapid/runrapid.py @@ -34,11 +34,12 @@ import ast import atexit import csv -version="19.4.15" -env = "rapid" #Default string for environment -test = "basicrapid" #Default string for test -machine_map_file = "MachineMap" #Default string for machine map file +version="19.6.30" +env = "rapid.env" #Default string for environment +test_file = "basicrapid.test" #Default string for test +machine_map_file = "machine.map" #Default string for machine map file loglevel="DEBUG" # sets log level for writing to file +screenloglevel="INFO" # sets log level for writing to screen runtime=10 # time in seconds for 1 test run configonly = False # IF True, the system will upload all the necessary config fiels to the VMs, but not start PROX and the actual testing @@ -56,17 +57,18 @@ def usage(): print("") print("optional arguments:") print(" -v, --version Show program's version number and exit") - print(" --env ENVIRONMENT_NAME Parameters will be read from ENVIRONMENT_NAME.env Default is %s."%env) - print(" --test TEST_NAME Test cases will be read from TEST_NAME.test Default is %s."%test) - print(" --map MACHINE_MAP_FILE Machine mapping will be read from MACHINE_MAP_FILE.cfg Default is %s."%machine_map_file) + print(" --env ENVIRONMENT_NAME Parameters will be read from ENVIRONMENT_NAME. Default is %s."%env) + print(" --test TEST_NAME Test cases will be read from TEST_NAME. Default is %s."%test_file) + print(" --map MACHINE_MAP_FILE Machine mapping will be read from MACHINE_MAP_FILE. Default is %s."%machine_map_file) print(" --runtime Specify time in seconds for 1 test run") print(" --configonly If True, only upload all config files to the VMs, do not run the tests. 
Default is %s."%configonly) - print(" --log Specify logging level for log file output, screen output level is hard coded") + print(" --log Specify logging level for log file output, default is DEBUG") + print(" --screenlog Specify logging level for screen output, default is INFO") print(" -h, --help Show help message and exit.") print("") try: - opts, args = getopt.getopt(sys.argv[1:], "vh", ["version","help", "env=", "test=", "map=", "runtime=","configonly=","log="]) + opts, args = getopt.getopt(sys.argv[1:], "vh", ["version","help", "env=", "test=", "map=", "runtime=","configonly=","log=","screenlog="]) except getopt.GetoptError as err: print("===========================================") print(str(err)) @@ -85,32 +87,41 @@ for opt, arg in opts: sys.exit() if opt in ("--env"): env = arg - print ("Using '"+env+"' as name for the environment") if opt in ("--test"): - test = arg - print ("Using '"+test+".test' for test case definition") + test_file = arg if opt in ("--map"): machine_map_file = arg - print ("Using '"+machine_map_file+".cfg' for machine mapping") if opt in ("--runtime"): runtime = arg - print ("Runtime: "+ runtime) if opt in ("--configonly"): configonly = arg - print ("configonly: "+ configonly) + if configonly == 'True': + configonly = True + print('No actual runs, only uploading configuration files') + else: + configonly = False + print('--configonly parameter is defaulted to False') if opt in ("--log"): loglevel = arg print ("Log level: "+ loglevel) + if opt in ("--screenlog"): + screenloglevel = arg + print ("Screen Log level: "+ screenloglevel) + +print ("Using '"+env+"' as name for the environment") +print ("Using '"+test_file+"' for test case definition") +print ("Using '"+machine_map_file+"' for machine mapping") +print ("Runtime: "+ runtime) class bcolors: - HEADER = '\033[95m' - OKBLUE = '\033[94m' - OKGREEN = '\033[92m' - WARNING = '\033[93m' - FAIL = '\033[91m' - ENDC = '\033[0m' - BOLD = '\033[1m' - UNDERLINE = '\033[4m' + HEADER = '\033[95m' + OKBLUE = '\033[94m' + OKGREEN = '\033[92m' + WARNING = '\033[93m' + FAIL = '\033[91m' + ENDC = '\033[0m' + BOLD = '\033[1m' + UNDERLINE = '\033[4m' # create formatters screen_formatter = logging.Formatter("%(message)s") @@ -123,7 +134,7 @@ file_formatter = logging.Formatter("%(asctime)s - %(levelname)s - %(message)s") log = logging.getLogger() numeric_level = getattr(logging, loglevel.upper(), None) if not isinstance(numeric_level, int): - raise ValueError('Invalid log level: %s' % loglevel) + raise ValueError('Invalid log level: %s' % loglevel) log.setLevel(numeric_level) log.propagate = 0 @@ -131,13 +142,17 @@ log.propagate = 0 # and set its log level to the command-line option # console_handler = logging.StreamHandler(sys.stdout) -console_handler.setLevel(logging.INFO) +#console_handler.setLevel(logging.INFO) +numeric_screenlevel = getattr(logging, screenloglevel.upper(), None) +if not isinstance(numeric_screenlevel, int): + raise ValueError('Invalid screenlog level: %s' % screenloglevel) +console_handler.setLevel(numeric_screenlevel) console_handler.setFormatter(screen_formatter) # create a file handler -# and set its log level to DEBUG +# and set its log level # -log_file = 'RUN{}.{}.log'.format(env,test) +log_file = 'RUN{}.{}.log'.format(env,test_file) file_handler = logging.handlers.RotatingFileHandler(log_file, backupCount=10) #file_handler = log.handlers.TimedRotatingFileHandler(log_file, 'D', 1, 5) file_handler.setLevel(numeric_level) @@ -154,11 +169,11 @@ needRoll = os.path.isfile(log_file) # This is a stale 
log, so roll it if needRoll: - # Add timestamp - log.debug('\n---------\nLog closed on %s.\n---------\n' % time.asctime()) + # Add timestamp + log.debug('\n---------\nLog closed on %s.\n---------\n' % time.asctime()) - # Roll over on application start - log.handlers[0].doRollover() + # Roll over on application start + log.handlers[0].doRollover() # Add timestamp log.debug('\n---------\nLog started on %s.\n---------\n' % time.asctime()) @@ -199,41 +214,53 @@ def connect_client(client): log.debug("Connected to VM on %s" % client.ip()) def run_iteration(gensock,sutsock): - sleep_time = 3 + sleep_time = 2 # Sleep_time is needed to be able to do accurate measurements to check for packet loss. We need to make this time large enough so that we do not take the first measurement while some packets from the previous tests migth still be in flight time.sleep(sleep_time) - abs_old_rx, abs_old_tx, abs_old_drop, abs_old_tsc, abs_tsc_hz = gensock.core_stats(genstatcores) + abs_old_rx, abs_old_non_dp_rx, abs_old_tx, abs_old_non_dp_tx, abs_old_drop, abs_old_tx_fail, abs_old_tsc, abs_tsc_hz = gensock.core_stats(genstatcores,tasks) + abs_old_rx = abs_old_rx - abs_old_non_dp_rx + abs_old_tx = abs_old_tx - abs_old_non_dp_tx gensock.start(gencores) time.sleep(sleep_time) if sutsock!='none': - old_sut_rx, old_sut_tx, old_sut_drop, old_sut_tsc, sut_tsc_hz = sutsock.core_stats(sutstatcores) - old_rx, old_tx, old_drop, old_tsc, tsc_hz = gensock.core_stats(genstatcores) + old_sut_rx, old_sut_non_dp_rx, old_sut_tx, old_sut_non_dp_tx, old_sut_drop, old_sut_tx_fail, old_sut_tsc, sut_tsc_hz = sutsock.core_stats(sutstatcores,tasks) + old_sut_rx = old_sut_rx - old_sut_non_dp_rx + old_sut_tx = old_sut_tx - old_sut_non_dp_tx + old_rx, old_non_dp_rx, old_tx, old_non_dp_tx, old_drop, old_tx_fail, old_tsc, tsc_hz = gensock.core_stats(genstatcores,tasks) + old_rx = old_rx - old_non_dp_rx + old_tx = old_tx - old_non_dp_tx # Measure latency statistics per second - n_loops = 0 - lat_min = 0 - lat_max = 0 - lat_avg = 0 + n_loops = 0 + lat_min = 0 + lat_max = 0 + lat_avg = 0 used_avg = 0 - while n_loops < float(runtime): - n_loops +=1 - time.sleep(1) - lat_min_sample, lat_max_sample, lat_avg_sample, used_sample = gensock.lat_stats(latcores) - if lat_min > lat_min_sample: - lat_min = lat_min_sample - if lat_max < lat_max_sample: - lat_max = lat_max_sample - lat_avg = lat_avg + lat_avg_sample + while n_loops < float(runtime): + n_loops +=1 + time.sleep(1) + lat_min_sample, lat_max_sample, lat_avg_sample, used_sample = gensock.lat_stats(latcores) + if lat_min > lat_min_sample: + lat_min = lat_min_sample + if lat_max < lat_max_sample: + lat_max = lat_max_sample + lat_avg = lat_avg + lat_avg_sample used_avg = used_avg + used_sample - lat_avg = lat_avg / n_loops + lat_avg = lat_avg / n_loops used_avg = used_avg / n_loops # Get statistics after some execution time - new_rx, new_tx, new_drop, new_tsc, tsc_hz = gensock.core_stats(genstatcores) + new_rx, new_non_dp_rx, new_tx, new_non_dp_tx, new_drop, new_tx_fail, new_tsc, tsc_hz = gensock.core_stats(genstatcores,tasks) + new_rx = new_rx - new_non_dp_rx + new_tx = new_tx - new_non_dp_tx if sutsock!='none': - new_sut_rx, new_sut_tx, new_sut_drop, new_sut_tsc, sut_tsc_hz = sutsock.core_stats(sutstatcores) + new_sut_rx, new_sut_non_dp_rx, new_sut_tx, new_sut_non_dp_tx, new_sut_drop, new_sut_tx_fail, new_sut_tsc, sut_tsc_hz = sutsock.core_stats(sutstatcores,tasks) + new_sut_rx = new_sut_rx - new_sut_non_dp_rx + new_sut_tx = new_sut_tx - new_sut_non_dp_tx #Stop generating gensock.stop(gencores) 
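# Illustrative sketch, not part of the patch: the in-line arithmetic above in
# run_iteration() can be read as one helper that turns two of the new 8-value
# core_stats() tuples (rx, non_dp_rx, tx, non_dp_tx, drop, tx_fail, tsc, hz)
# into data-plane-only deltas. The tuple layout comes from the prox_ctrl.py
# hunk in this commit; dp_deltas() itself is a hypothetical name used only
# for illustration.
def dp_deltas(old, new):
    o_rx, o_ndp_rx, o_tx, o_ndp_tx, o_drop, o_fail, o_tsc, _hz = old
    n_rx, n_ndp_rx, n_tx, n_ndp_tx, n_drop, n_fail, n_tsc, _hz = new
    rx = (n_rx - n_ndp_rx) - (o_rx - o_ndp_rx)   # data-plane packets received
    tx = (n_tx - n_ndp_tx) - (o_tx - o_ndp_tx)   # data-plane packets sent
    return rx, tx, n_drop - o_drop, n_fail - o_fail, n_tsc - o_tsc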
time.sleep(sleep_time) - abs_new_rx, abs_new_tx, abs_new_drop, abs_new_tsc, abs_tsc_hz = gensock.core_stats(genstatcores) + abs_new_rx, abs_new_non_dp_rx, abs_new_tx, abs_new_non_dp_tx, abs_new_drop, abs_new_tx_fail, abs_new_tsc, abs_tsc_hz = gensock.core_stats(genstatcores,tasks) + abs_new_rx = abs_new_rx - abs_new_non_dp_rx + abs_new_tx = abs_new_tx - abs_new_non_dp_tx drop = new_drop-old_drop # drop is all packets dropped by all tasks. This includes packets dropped at the generator task + packets dropped by the nop task. In steady state, this equals to the number of packets received by this VM rx = new_rx - old_rx # rx is all packets received by the nop task = all packets received in the gen VM tx = new_tx - old_tx # tx is all generated packets actually accepted by the interface @@ -252,9 +279,9 @@ def run_iteration(gensock,sutsock): pps_sut_tx = 0 pps_sut_tx_str = 'NO MEAS.' if (tx == 0): - log.critical("TX = 0. Test interrupted since no packet has been sent.") + log.critical("TX = 0. Test interrupted since no packet has been sent.") raise Exception("TX = 0") - return(pps_req_tx,pps_tx,pps_sut_tx_str,pps_rx,lat_avg,lat_max,abs_dropped,(abs_new_tx - abs_old_tx),lat_min,used_avg) + return(pps_req_tx,pps_tx,pps_sut_tx_str,pps_rx,lat_avg,lat_max,abs_dropped,(abs_new_tx_fail - abs_old_tx_fail),(abs_new_tx - abs_old_tx),lat_min,used_avg) def new_speed(speed,minspeed,maxspeed,success): if success: @@ -265,197 +292,126 @@ def new_speed(speed,minspeed,maxspeed,success): return (newspeed,minspeed,maxspeed) def get_pps(speed,size): + # speed is given in % of 10Gb/s, returning Mpps return (speed * 100.0 / (8*(size+24))) -def run_speedtest(gensock,sutsock): - maxspeed = speed = STARTSPEED - minspeed = 0 - size=PACKETSIZE-4 - attempts = 0 - log.info("+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+") - log.info("| Generator is sending UDP (1 flow) packets ("+ '{:>5}'.format(size+4) +" bytes) to SUT. SUT sends packets back |") - log.info("+--------+--------------------+----------------+----------------+----------------+----------------+----------------+----------------+----------------+------------+------------+") - log.info("| Test | Speed requested | Sent to NIC | Sent by Gen | Forward by SUT | Rec. by Gen | Avg. Latency | Max. Latency | Packets Lost | Loss Ratio | Result |") - log.info("+--------+--------------------+----------------+----------------+----------------+----------------+----------------+----------------+----------------+------------+------------+") - endpps_sut_tx_str = 'NO_RESULTS' - gensock.set_size(gencores,0,size) # This is setting the frame size - gensock.set_value(gencores,0,16,(size-14),2) # 18 is the difference between the frame size and IP size = size of (MAC addresses, ethertype and FCS) - gensock.set_value(gencores,0,38,(size-34),2) # 38 is the difference between the frame size and UDP size = 18 + size of IP header (=20) - # This will only work when using sending UDP packets. 
For different protocols and ethernet types, we would need a different calculation - gensock.start(latcores) - while (maxspeed-minspeed > ACCURACY): - attempts += 1 - print('Measurement ongoing at speed: ' + str(round(speed,2)) + '% ',end='\r') - sys.stdout.flush() - # Start generating packets at requested speed (in % of a 10Gb/s link) - gensock.speed(speed / len(gencores), gencores) - time.sleep(1) - # Get statistics now that the generation is stable and initial ARP messages are dealt with. - pps_req_tx,pps_tx,pps_sut_tx_str,pps_rx,lat_avg,lat_max, abs_dropped, abs_tx, lat_min, lat_used = run_iteration(gensock,sutsock) - drop_rate = 100.0*abs_dropped/abs_tx - if lat_used < 0.95: - lat_warning = bcolors.FAIL + ' Potential latency accuracy problem: {:>3.0f}%'.format(lat_used*100) + bcolors.ENDC - else: - lat_warning = '' - if ((get_pps(speed,size) - pps_tx)/get_pps(speed,size))<0.001 and ((drop_rate < DROP_RATE_TRESHOLD) or (abs_dropped==DROP_RATE_TRESHOLD ==0)) and (lat_avg< LAT_AVG_TRESHOLD) and (lat_max < LAT_MAX_TRESHOLD): - log.info('|{:>7}'.format(str(attempts))+" | " + '{:>5.1f}'.format(speed) + '% ' +'{:>6.3f}'.format(get_pps(speed,size)) + ' Mpps | '+ '{:>9.3f}'.format(pps_req_tx)+' Mpps | '+ '{:>9.3f}'.format(pps_tx) +' Mpps | ' + '{:>9}'.format(pps_sut_tx_str) +' Mpps | '+ '{:>9.3f}'.format(pps_rx)+' Mpps | '+ '{:>9.0f}'.format(lat_avg)+' us | '+ '{:>9.0f}'.format(lat_max)+' us | '+ '{:>14d}'.format(abs_dropped)+ ' |''{:>9.2f}'.format(drop_rate)+ '% | SUCCESS |'+lat_warning) - endspeed = speed - endpps_req_tx = pps_req_tx - endpps_tx = pps_tx - endpps_sut_tx_str = pps_sut_tx_str - endpps_rx = pps_rx - endlat_avg = lat_avg - endlat_max = lat_max - endabs_dropped = abs_dropped - enddrop_rate = drop_rate - endwarning = lat_warning - success = True - else: - abs_drop_rate_prefix = bcolors.ENDC - if ((abs_dropped>0) and (DROP_RATE_TRESHOLD ==0)): - abs_drop_rate_prefix = bcolors.FAIL - if (drop_rate < DROP_RATE_TRESHOLD): - drop_rate_prefix = bcolors.ENDC - else: - drop_rate_prefix = bcolors.FAIL - if (lat_avg< LAT_AVG_TRESHOLD): - lat_avg_prefix = bcolors.ENDC - else: - lat_avg_prefix = bcolors.FAIL - if (lat_max< LAT_MAX_TRESHOLD): - lat_max_prefix = bcolors.ENDC - else: - lat_max_prefix = bcolors.FAIL - if (((get_pps(speed,size) - pps_tx)/get_pps(speed,size))<0.001): - speed_prefix = bcolors.ENDC - else: - speed_prefix = bcolors.FAIL - log.info('|{:>7}'.format(str(attempts))+" | " + '{:>5.1f}'.format(speed) + '% '+speed_prefix +'{:>6.3f}'.format(get_pps(speed,size)) + ' Mpps | '+ '{:>9.3f}'.format(pps_req_tx)+' Mpps | ' + '{:>9.3f}'.format(pps_tx) +' Mpps | '+ bcolors.ENDC + '{:>9}'.format(pps_sut_tx_str) +' Mpps | '+ '{:>9.3f}'.format(pps_rx)+' Mpps | '+lat_avg_prefix+ '{:>9.0f}'.format(lat_avg)+' us | '+lat_max_prefix+ '{:>9.0f}'.format(lat_max)+' us | '+ abs_drop_rate_prefix + '{:>14d}'.format(abs_dropped)+drop_rate_prefix+ ' |''{:>9.2f}'.format(drop_rate)+bcolors.ENDC+ '% | FAILED |'+lat_warning) - success = False - speed,minspeed,maxspeed = new_speed(speed,minspeed,maxspeed,success) - if endpps_sut_tx_str <> 'NO_RESULTS': - log.info("+--------+--------------------+----------------+----------------+----------------+----------------+----------------+----------------+----------------+------------+------------+") - log.info('|{:>7}'.format('END')+" | " + '{:>5.1f}'.format(endspeed) + '% ' +'{:>6.3f}'.format(get_pps(endspeed,size)) + ' Mpps | '+ '{:>9.3f}'.format(endpps_req_tx)+' Mpps | '+ '{:>9.3f}'.format(endpps_tx) +' Mpps | ' + '{:>9}'.format(endpps_sut_tx_str) +' Mpps | '+ 
'{:>9.3f}'.format(endpps_rx)+' Mpps | '+ '{:>9.0f}'.format(endlat_avg)+' us | '+ '{:>9.0f}'.format(endlat_max)+' us | '+'{:>14d}'.format(endabs_dropped)+ ' |''{:>9.2f}'.format(enddrop_rate)+ '% | SUCCESS |'+endwarning) - log.info("+--------+--------------------+----------------+----------------+----------------+----------------+----------------+----------------+----------------+------------+------------+") - writer.writerow({'flow':'1','size':(size+4),'endspeed':endspeed,'endspeedpps':get_pps(endspeed,size),'endpps_req_tx':endpps_req_tx,'endpps_tx':endpps_tx,'endpps_sut_tx_str':endpps_sut_tx_str,'endpps_rx':endpps_rx,'endlat_avg':endlat_avg,'endlat_max':endlat_max,'endabs_dropped':endabs_dropped,'enddrop_rate':enddrop_rate}) - else: - log.info('| Speed 0 or close to 0') - gensock.stop(latcores) +def get_speed(packet_speed,size): + # return speed in Gb/s + return (packet_speed / 1000.0 * (8*(size+24))) -def run_flowtest(gensock,sutsock): - size=PACKETSIZE-4 - log.info("+-----------------------------------------------------------------------------------------------------------------------------------------------------------------+") - log.info("| UDP, "+ '{:>5}'.format(size+4) +" bytes, different number of flows by randomizing SRC & DST UDP port |") - log.info("+--------+--------------------+----------------+----------------+----------------+----------------+----------------+----------------+----------------+------------+") - log.info("| Flows | Speed requested | Sent to NIC | Sent by Gen | Forward by SUT | Rec. by Gen | Avg. Latency | Max. Latency | Packets Lost | Loss Ratio |") - log.info("+--------+--------------------+----------------+----------------+----------------+----------------+----------------+----------------+----------------+------------+") - # To generate a desired number of flows, PROX will randomize the bits in source and destination ports, as specified by the bit masks in the flows variable. - flows={\ - 1: ['1000000000000000','1000000000000000'],\ - 2: ['1000000000000000','100000000000000X'],\ - 4: ['100000000000000X','100000000000000X'],\ - 8: ['100000000000000X','10000000000000XX'],\ - 16: ['10000000000000XX','10000000000000XX'],\ - 32: ['10000000000000XX','1000000000000XXX'],\ - 64: ['1000000000000XXX','1000000000000XXX'],\ - 128: ['1000000000000XXX','100000000000XXXX'],\ - 256: ['100000000000XXXX','100000000000XXXX'],\ - 512: ['100000000000XXXX','10000000000XXXXX'],\ - 1024: ['10000000000XXXXX','10000000000XXXXX'],\ - 2048: ['10000000000XXXXX','1000000000XXXXXX'],\ - 4096: ['1000000000XXXXXX','1000000000XXXXXX'],\ - 8192: ['1000000000XXXXXX','100000000XXXXXXX'],\ - 16384: ['100000000XXXXXXX','100000000XXXXXXX'],\ - 32768: ['100000000XXXXXXX','10000000XXXXXXXX'],\ - 65535: ['10000000XXXXXXXX','10000000XXXXXXXX'],\ - 131072: ['10000000XXXXXXXX','1000000XXXXXXXXX'],\ - 262144: ['1000000XXXXXXXXX','1000000XXXXXXXXX'],\ - 524280: ['1000000XXXXXXXXX','100000XXXXXXXXXX'],\ - 1048576:['100000XXXXXXXXXX','100000XXXXXXXXXX'],} - gensock.set_size(gencores,0,size) # This is setting the frame size - gensock.set_value(gencores,0,16,(size-14),2) # 18 is the difference between the frame size and IP size = size of (MAC addresses, ethertype and FCS) - gensock.set_value(gencores,0,38,(size-34),2) # 38 is the difference between the frame size and UDP size = 18 + size of IP header (=20) - # This will only work when using sending UDP packets. 
For different protocls and ehternet types, we would need a differnt calculation + +def run_flow_size_test(gensock,sutsock): gensock.start(latcores) - for flow_number in flow_size_list: - attempts = 0 - gensock.reset_stats() - if sutsock!='none': - sutsock.reset_stats() - source_port,destination_port = flows[flow_number] - gensock.set_random(gencores,0,34,source_port,2) - gensock.set_random(gencores,0,36,destination_port,2) - endpps_sut_tx_str = 'NO_RESULTS' - maxspeed = speed = STARTSPEED - minspeed = 0 - while (maxspeed-minspeed > ACCURACY): - attempts += 1 - print(str(flow_number)+' flows: Measurement ongoing at speed: ' + str(round(speed,2)) + '% ',end='\r') - sys.stdout.flush() - # Start generating packets at requested speed (in % of a 10Gb/s link) - gensock.speed(speed / len(gencores), gencores) - time.sleep(1) - # Get statistics now that the generation is stable and initial ARP messages are dealt with - pps_req_tx,pps_tx,pps_sut_tx_str,pps_rx,lat_avg,lat_max, abs_dropped, abs_tx, lat_min, lat_used = run_iteration(gensock,sutsock) - drop_rate = 100.0*abs_dropped/abs_tx - if lat_used < 0.95: - lat_warning = bcolors.FAIL + ' Potential latency accuracy problem: {:>3.0f}%'.format(lat_used*100) + bcolors.ENDC - else: - lat_warning = '' - if ((get_pps(speed,size) - pps_tx)/get_pps(speed,size))<0.001 and ((drop_rate < DROP_RATE_TRESHOLD) or (abs_dropped==DROP_RATE_TRESHOLD ==0)) and (lat_avg< LAT_AVG_TRESHOLD) and (lat_max < LAT_MAX_TRESHOLD): - log.debug('|{:>7}'.format(str(attempts))+" | " + '{:>5.1f}'.format(speed) + '% ' +'{:>6.3f}'.format(get_pps(speed,size)) + ' Mpps | '+ '{:>9.3f}'.format(pps_req_tx)+' Mpps | '+ '{:>9.3f}'.format(pps_tx) +' Mpps | ' + '{:>9}'.format(pps_sut_tx_str) +' Mpps | '+ '{:>9.3f}'.format(pps_rx)+' Mpps | '+ '{:>9.0f}'.format(lat_avg)+' us | '+ '{:>9.0f}'.format(lat_max)+' us | '+ '{:>14d}'.format(abs_dropped)+ ' |''{:>9.2f}'.format(drop_rate)+ '% | SUCCESS |'+lat_warning) - endspeed = speed - endpps_req_tx = pps_req_tx - endpps_tx = pps_tx - endpps_sut_tx_str = pps_sut_tx_str - endpps_rx = pps_rx - endlat_avg = lat_avg - endlat_max = lat_max - endabs_dropped = abs_dropped - enddrop_rate = drop_rate - endwarning = lat_warning - success = True - else: - abs_drop_rate_prefix = bcolors.ENDC - if ((abs_dropped>0) and (DROP_RATE_TRESHOLD ==0)): - abs_drop_rate_prefix = bcolors.FAIL - if (drop_rate < DROP_RATE_TRESHOLD): - drop_rate_prefix = bcolors.ENDC + for size in packet_size_list: + size = size-4 + gensock.set_size(gencores,0,size) # This is setting the frame size + gensock.set_value(gencores,0,16,(size-14),2) # 18 is the difference between the frame size and IP size = size of (MAC addresses, ethertype and FCS) + gensock.set_value(gencores,0,38,(size-34),2) # 38 is the difference between the frame size and UDP size = 18 + size of IP header (=20) + # This will only work when using sending UDP packets. 
For different protocls and ehternet types, we would need a different calculation + log.info("+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------+") + log.info("| UDP, "+ '{:>5}'.format(size+4) +" bytes, different number of flows by randomizing SRC & DST UDP port |") + log.info("+--------+--------------------+----------------+----------------+----------------+------------------------+----------------+----------------+----------------+------------+") + log.info("| Flows | Speed requested | core generated | Sent by Gen NIC| Forward by SUT | core received | Avg. Latency | Max. Latency | Packets Lost | Loss Ratio |") + log.info("+--------+--------------------+----------------+----------------+----------------+------------------------+----------------+----------------+----------------+------------+") + for flow_number in flow_size_list: + attempts = 0 + gensock.reset_stats() + if sutsock!='none': + sutsock.reset_stats() + source_port,destination_port = flows[flow_number] + gensock.set_random(gencores,0,34,source_port,2) + gensock.set_random(gencores,0,36,destination_port,2) + endpps_sut_tx_str = 'NO_RESULTS' + maxspeed = speed = STARTSPEED + minspeed = 0 + while (maxspeed-minspeed > ACCURACY): + attempts += 1 + print(str(flow_number)+' flows: Measurement ongoing at speed: ' + str(round(speed,2)) + '% ',end='\r') + sys.stdout.flush() + # Start generating packets at requested speed (in % of a 10Gb/s link) + gensock.speed(speed / len(gencores), gencores) + time.sleep(1) + # Get statistics now that the generation is stable and initial ARP messages are dealt with + pps_req_tx,pps_tx,pps_sut_tx_str,pps_rx,lat_avg,lat_max, abs_dropped, abs_tx_fail, abs_tx, lat_min, lat_used = run_iteration(gensock,sutsock) + drop_rate = 100.0*abs_dropped/abs_tx + if lat_used < 0.95: + lat_warning = bcolors.WARNING + ' Latency accuracy issue?: {:>3.0f}%'.format(lat_used*100) + bcolors.ENDC else: - drop_rate_prefix = bcolors.FAIL - if (lat_avg< LAT_AVG_TRESHOLD): + lat_warning = '' + if ((drop_rate < DROP_RATE_TRESHOLD) or (abs_dropped==DROP_RATE_TRESHOLD ==0)) and (lat_avg< LAT_AVG_TRESHOLD) and (lat_max < LAT_MAX_TRESHOLD): lat_avg_prefix = bcolors.ENDC - else: - lat_avg_prefix = bcolors.FAIL - if (lat_max< LAT_MAX_TRESHOLD): lat_max_prefix = bcolors.ENDC + abs_drop_rate_prefix = bcolors.ENDC + drop_rate_prefix = bcolors.ENDC + if ((get_pps(speed,size) - pps_tx)/get_pps(speed,size))>0.01: + speed_prefix = bcolors.WARNING + if abs_tx_fail > 0: + gen_warning = bcolors.WARNING + ' Network limit?: requesting {:<.3f} Mpps and getting {:<.3f} Mpps - {} failed to be transmitted'.format(get_pps(speed,size), pps_tx, abs_tx_fail) + bcolors.ENDC + else: + gen_warning = bcolors.WARNING + ' Generator limit?: requesting {:<.3f} Mpps and getting {:<.3f} Mpps'.format(get_pps(speed,size), pps_tx) + bcolors.ENDC + else: + speed_prefix = bcolors.ENDC + gen_warning = '' + endspeed = speed + endpps_req_tx = pps_req_tx + endpps_tx = pps_tx + endpps_sut_tx_str = pps_sut_tx_str + endpps_rx = pps_rx + endlat_avg = lat_avg + endlat_max = lat_max + endabs_dropped = abs_dropped + enddrop_rate = drop_rate + endwarning = '| |' + lat_warning + gen_warning + success = True + success_message='% | SUCCESS' else: - lat_max_prefix = bcolors.FAIL - if (((get_pps(speed,size) - pps_tx)/get_pps(speed,size))<0.001): - speed_prefix = bcolors.ENDC - else: - speed_prefix = bcolors.FAIL - log.debug('|{:>7}'.format(str(attempts))+" | " + 
'{:>5.1f}'.format(speed) + '% '+speed_prefix +'{:>6.3f}'.format(get_pps(speed,size)) + ' Mpps | '+ '{:>9.3f}'.format(pps_req_tx)+' Mpps | ' + '{:>9.3f}'.format(pps_tx) +' Mpps | '+ bcolors.ENDC + '{:>9}'.format(pps_sut_tx_str) +' Mpps | '+ '{:>9.3f}'.format(pps_rx)+' Mpps | '+lat_avg_prefix+ '{:>9.0f}'.format(lat_avg)+' us | '+lat_max_prefix+ '{:>9.0f}'.format(lat_max)+' us | '+ abs_drop_rate_prefix + '{:>14d}'.format(abs_dropped)+drop_rate_prefix+ ' |''{:>9.2f}'.format(drop_rate)+bcolors.ENDC+ '% | FAILED |'+lat_warning) - success = False - speed,minspeed,maxspeed = new_speed(speed,minspeed,maxspeed,success) - if endpps_sut_tx_str <> 'NO_RESULTS': - log.info('|{:>7}'.format(str(flow_number))+" | " + '{:>5.1f}'.format(endspeed) + '% ' +'{:>6.3f}'.format(get_pps(endspeed,size)) + ' Mpps | '+ '{:>9.3f}'.format(endpps_req_tx)+' Mpps | '+ '{:>9.3f}'.format(endpps_tx) +' Mpps | ' + '{:>9}'.format(endpps_sut_tx_str) +' Mpps | '+ '{:>9.3f}'.format(endpps_rx)+' Mpps | '+ '{:>9.0f}'.format(endlat_avg)+' us | '+ '{:>9.0f}'.format(endlat_max)+' us | '+ '{:>14d}'.format(endabs_dropped)+ ' |'+'{:>9.2f}'.format(enddrop_rate)+ '% |'+endwarning) - log.info("+--------+--------------------+----------------+----------------+----------------+----------------+----------------+----------------+----------------+------------+") - writer.writerow({'flow':flow_number,'size':(size+4),'endspeed':endspeed,'endspeedpps':get_pps(endspeed,size),'endpps_req_tx':endpps_req_tx,'endpps_tx':endpps_tx,'endpps_sut_tx_str':endpps_sut_tx_str,'endpps_rx':endpps_rx,'endlat_avg':endlat_avg,'endlat_max':endlat_max,'endabs_dropped':endabs_dropped,'enddrop_rate':enddrop_rate}) - else: - log.info('|{:>7}'.format(str(flow_number))+" | Speed 0 or close to 0") + success_message='% | FAILED' + gen_warning = '' + abs_drop_rate_prefix = bcolors.ENDC + if ((abs_dropped>0) and (DROP_RATE_TRESHOLD ==0)): + abs_drop_rate_prefix = bcolors.FAIL + if (drop_rate < DROP_RATE_TRESHOLD): + drop_rate_prefix = bcolors.ENDC + else: + drop_rate_prefix = bcolors.FAIL + if (lat_avg< LAT_AVG_TRESHOLD): + lat_avg_prefix = bcolors.ENDC + else: + lat_avg_prefix = bcolors.FAIL + if (lat_max< LAT_MAX_TRESHOLD): + lat_max_prefix = bcolors.ENDC + else: + lat_max_prefix = bcolors.FAIL + if (((get_pps(speed,size) - pps_tx)/get_pps(speed,size))<0.001): + speed_prefix = bcolors.ENDC + else: + speed_prefix = bcolors.FAIL + success = False + log.debug('|step{:>3}'.format(str(attempts))+" | " + '{:>5.1f}'.format(speed) + '% '+speed_prefix +'{:>6.3f}'.format(get_pps(speed,size)) + ' Mpps | '+ '{:>9.3f}'.format(pps_req_tx)+' Mpps | ' + '{:>9.3f}'.format(pps_tx) +' Mpps | '+ bcolors.ENDC + '{:>9}'.format(pps_sut_tx_str) +' Mpps | '+bcolors.OKBLUE + '{:>4.1f}'.format(get_speed(pps_rx,size)) + 'Gb/s{:>9.3f}'.format(pps_rx)+' Mpps'+bcolors.ENDC+' | '+lat_avg_prefix+ '{:>9.0f}'.format(lat_avg)+' us | '+lat_max_prefix+ '{:>9.0f}'.format(lat_max)+' us | '+ abs_drop_rate_prefix + '{:>14d}'.format(abs_dropped)+drop_rate_prefix+ ' |''{:>9.2f}'.format(drop_rate)+bcolors.ENDC+ success_message +lat_warning + gen_warning) + speed,minspeed,maxspeed = new_speed(speed,minspeed,maxspeed,success) + if endpps_sut_tx_str != 'NO_RESULTS': + log.info('|{:>7}'.format(str(flow_number))+" | " + '{:>5.1f}'.format(endspeed) + '% ' + speed_prefix + '{:>6.3f}'.format(get_pps(endspeed,size)) + ' Mpps | '+ '{:>9.3f}'.format(endpps_req_tx)+ ' Mpps | '+ bcolors.ENDC + '{:>9.3f}'.format(endpps_tx) +' Mpps | ' + '{:>9}'.format(endpps_sut_tx_str) +' Mpps | '+bcolors.OKBLUE + 
'{:>4.1f}'.format(get_speed(pps_rx,size)) + 'Gb/s{:>9.3f}'.format(endpps_rx)+' Mpps'+bcolors.ENDC+' | '+ '{:>9.0f}'.format(endlat_avg)+' us | '+ '{:>9.0f}'.format(endlat_max)+' us | '+ '{:>14d}'.format(endabs_dropped)+ ' |'+'{:>9.2f}'.format(enddrop_rate)+ '% |') + if endwarning: + log.info (endwarning) + log.info("+--------+--------------------+----------------+----------------+----------------+------------------------+----------------+----------------+----------------+------------+") + writer.writerow({'flow':flow_number,'size':(size+4),'endspeed':endspeed,'endspeedpps':get_pps(endspeed,size),'endpps_req_tx':endpps_req_tx,'endpps_tx':endpps_tx,'endpps_sut_tx_str':endpps_sut_tx_str,'endpps_rx':endpps_rx,'endlat_avg':endlat_avg,'endlat_max':endlat_max,'endabs_dropped':endabs_dropped,'enddrop_rate':enddrop_rate}) + else: + log.info('|{:>7}'.format(str(flow_number))+" | Speed 0 or close to 0") gensock.stop(latcores) -def run_sizetest(gensock,sutsock): - log.info("+-----------------------------------------------------------------------------------------------------------------------------------------------------------------+") - log.info("| UDP, 1 flow, different packet sizes |") - log.info("+--------+--------------------+----------------+----------------+----------------+----------------+----------------+----------------+----------------+------------+") - log.info("| Pktsize| Speed requested | Sent to NIC | Sent by Gen | Forward by SUT | Rec. by Gen | Avg. Latency | Max. Latency | Packets Lost | Loss Ratio |") - log.info("+--------+--------------------+----------------+----------------+----------------+----------------+----------------+----------------+----------------+------------+") + +def run_fixed_rate(gensock,sutsock): + log.info("+-----------------------------------------------------------------------------------------------------------------------------------------------------------+") + log.info("| UDP, 1 flow, different packet sizes |") + log.info("+-----+------------------+-------------+-------------+-------------+-------------+-------------+-------------+-----------+-----------+---------+------------+") + log.info("|Pktsz| Speed requested | Gen by core | Sent by NIC | Fwrd by SUT | Rec. by core| Avg. Latency| Max. Latency| Sent | Received | Lost | Total Lost |") + log.info("+-----+------------------+-------------+-------------+-------------+-------------+-------------+-------------+-----------+-----------+---------+------------+") + sleep_time = 3 gensock.start(latcores) for size in packet_size_list: + # Sleep_time is needed to be able to do accurate measurements to check for packet loss. We need to make this time large enough so that we do not take the first measurement while some packets from the previous tests migth still be in flight + time.sleep(sleep_time) size = size-4 - attempts = 0 gensock.reset_stats() if sutsock!='none': sutsock.reset_stats() @@ -463,190 +419,157 @@ def run_sizetest(gensock,sutsock): gensock.set_value(gencores,0,16,(size-14),2) # 18 is the difference between the frame size and IP size = size of (MAC addresses, ethertype and FCS) gensock.set_value(gencores,0,38,(size-34),2) # 38 is the difference between the frame size and UDP size = 18 + size of IP header (=20) # This will only work when using sending UDP packets. 
For different protocls and ehternet types, we would need a differnt calculation - endpps_sut_tx_str = 'NO_RESULTS' - maxspeed = speed = STARTSPEED - minspeed = 0 - while (maxspeed-minspeed > ACCURACY): - attempts += 1 - print(str(size+4)+' bytes: Measurement ongoing at speed: ' + str(round(speed,2)) + '% ',end='\r') - sys.stdout.flush() - # Start generating packets at requested speed (in % of a 10Gb/s link) - gensock.speed(speed / len(gencores), gencores) - # Get statistics now that the generation is stable and initial ARP messages are dealt with - pps_req_tx,pps_tx,pps_sut_tx_str,pps_rx,lat_avg,lat_max, abs_dropped, abs_tx, lat_min, lat_used = run_iteration(gensock,sutsock) - drop_rate = 100.0*abs_dropped/abs_tx - if lat_used < 0.95: - lat_warning = bcolors.FAIL + ' Potential latency accuracy problem: {:>3.0f}%'.format(lat_used*100) + bcolors.ENDC - else: - lat_warning = '' - if ((get_pps(speed,size) - pps_tx)/get_pps(speed,size))<0.001 and ((drop_rate < DROP_RATE_TRESHOLD) or (abs_dropped==DROP_RATE_TRESHOLD ==0)) and (lat_avg< LAT_AVG_TRESHOLD) and (lat_max < LAT_MAX_TRESHOLD): - log.debug('|{:>7}'.format(str(attempts))+" | " + '{:>5.1f}'.format(speed) + '% ' +'{:>6.3f}'.format(get_pps(speed,size)) + ' Mpps | '+ '{:>9.3f}'.format(pps_req_tx)+' Mpps | '+ '{:>9.3f}'.format(pps_tx) +' Mpps | ' + '{:>9}'.format(pps_sut_tx_str) +' Mpps | '+ '{:>9.3f}'.format(pps_rx)+' Mpps | '+ '{:>9.0f}'.format(lat_avg)+' us | '+ '{:>9.0f}'.format(lat_max)+' us | '+ '{:>14d}'.format(abs_dropped)+ ' |''{:>9.2f}'.format(drop_rate)+ '% | SUCCESS |'+lat_warning) - endspeed = speed - endpps_req_tx = pps_req_tx - endpps_tx = pps_tx - endpps_sut_tx_str = pps_sut_tx_str - endpps_rx = pps_rx - endlat_avg = lat_avg - endlat_max = lat_max - endabs_dropped = abs_dropped - enddrop_rate = drop_rate - endwarning = lat_warning - success = True - else: - abs_drop_rate_prefix = bcolors.ENDC - if ((abs_dropped>0) and (DROP_RATE_TRESHOLD ==0)): - abs_drop_rate_prefix = bcolors.FAIL - if (drop_rate < DROP_RATE_TRESHOLD): - drop_rate_prefix = bcolors.ENDC - else: - drop_rate_prefix = bcolors.FAIL - if (lat_avg< LAT_AVG_TRESHOLD): - lat_avg_prefix = bcolors.ENDC - else: - lat_avg_prefix = bcolors.FAIL - if (lat_max< LAT_MAX_TRESHOLD): - lat_max_prefix = bcolors.ENDC - else: - lat_max_prefix = bcolors.FAIL - if (((get_pps(speed,size) - pps_tx)/get_pps(speed,size))<0.001): - speed_prefix = bcolors.ENDC - else: - speed_prefix = bcolors.FAIL - log.debug('|{:>7}'.format(str(attempts))+" | " + '{:>5.1f}'.format(speed) + '% '+speed_prefix +'{:>6.3f}'.format(get_pps(speed,size)) + ' Mpps | '+ '{:>9.3f}'.format(pps_req_tx)+' Mpps | ' + '{:>9.3f}'.format(pps_tx) +' Mpps | '+ bcolors.ENDC + '{:>9}'.format(pps_sut_tx_str) +' Mpps | '+ '{:>9.3f}'.format(pps_rx)+' Mpps | '+lat_avg_prefix+ '{:>9.0f}'.format(lat_avg)+' us | '+lat_max_prefix+ '{:>9.0f}'.format(lat_max)+' us | '+ abs_drop_rate_prefix + '{:>14d}'.format(abs_dropped)+drop_rate_prefix+ ' |''{:>9.2f}'.format(drop_rate)+bcolors.ENDC+ '% | FAILED |'+ lat_warning) - success = False - speed,minspeed,maxspeed = new_speed(speed,minspeed,maxspeed,success) - if endpps_sut_tx_str <> 'NO_RESULTS': - log.info('|{:>7}'.format(size+4)+" | " + '{:>5.1f}'.format(endspeed) + '% ' +'{:>6.3f}'.format(get_pps(endspeed,size)) + ' Mpps | '+ '{:>9.3f}'.format(endpps_req_tx)+' Mpps | '+ '{:>9.3f}'.format(endpps_tx) +' Mpps | ' + '{:>9}'.format(endpps_sut_tx_str) +' Mpps | '+ '{:>9.3f}'.format(endpps_rx)+' Mpps | '+ '{:>9.0f}'.format(endlat_avg)+' us | '+'{:>9.0f}'.format(endlat_max)+' us | '+ 
'{:>14d}'.format(endabs_dropped)+ ' |'+'{:>9.2f}'.format(enddrop_rate)+ '% |'+ endwarning) - log.info("+--------+--------------------+----------------+----------------+----------------+----------------+----------------+----------------+----------------+------------+") - writer.writerow({'flow':'1','size':(size+4),'endspeed':endspeed,'endspeedpps':get_pps(endspeed,size),'endpps_req_tx':endpps_req_tx,'endpps_tx':endpps_tx,'endpps_sut_tx_str':endpps_sut_tx_str,'endpps_rx':endpps_rx,'endlat_avg':endlat_avg,'endlat_max':endlat_max,'endabs_dropped':endabs_dropped,'enddrop_rate':enddrop_rate}) - else: - log.debug('|{:>7}'.format(str(size))+" | Speed 0 or close to 0") - gensock.stop(latcores) - -def run_max_frame_rate(gensock,sutsock): - log.info("+-----------------------------------------------------------------------------------------------------------------------------------------------------------+") - log.info("| UDP, 1 flow, different packet sizes |") - log.info("+-----+------------------+-------------+-------------+-------------+-------------+-------------+-------------+-----------+-----------+---------+------------+") - log.info("|Pktsz| Speed requested | Sent to NIC | Sent by Gen | Fwrd by SUT | Rec. by Gen | Avg. Latency| Max. Latency| Sent | Received | Lost | Total Lost |") - log.info("+-----+------------------+-------------+-------------+-------------+-------------+-------------+-------------+-----------+-----------+---------+------------+") - sleep_time = 3 - gensock.start(latcores) - for size in packet_size_list: - # Sleep_time is needed to be able to do accurate measurements to check for packet loss. We need to make this time large enough so that we do not take the first measurement while some packets from the previous tests migth still be in flight - time.sleep(sleep_time) - size = size-4 - gensock.reset_stats() - if sutsock!='none': - sutsock.reset_stats() - gensock.set_size(gencores,0,size) # This is setting the frame size - gensock.set_value(gencores,0,16,(size-14),2) # 18 is the difference between the frame size and IP size = size of (MAC addresses, ethertype and FCS) - gensock.set_value(gencores,0,38,(size-34),2) # 38 is the difference between the frame size and UDP size = 18 + size of IP header (=20) - # This will only work when using sending UDP packets. 
For different protocls and ehternet types, we would need a differnt calculation - pps_sut_tx_str = 'NO_RESULTS' - speed = STARTSPEED - # Start generating packets at requested speed (in % of a 10Gb/s link) - gensock.speed(speed / len(gencores), gencores) - duration = float(runtime) - first = 1 - tot_drop = 0 - if sutsock!='none': - old_sut_rx, old_sut_tx, old_sut_drop, old_sut_tsc, sut_tsc_hz = sutsock.core_stats(sutstatcores) - old_rx, old_tx, old_drop, old_tsc, tsc_hz = gensock.core_stats(genstatcores) - gensock.start(gencores) - while (duration > 0): - time.sleep(0.5) - lat_min, lat_max, lat_avg, lat_used = gensock.lat_stats(latcores) + pps_sut_tx_str = 'NO_RESULTS' + speed = STARTSPEED + # Start generating packets at requested speed (in % of a 10Gb/s link) + gensock.speed(speed / len(gencores), gencores) + duration = float(runtime) + first = 1 + tot_drop = 0 + if sutsock!='none': + old_sut_rx, old_sut_non_dp_rx, old_sut_tx, old_sut_non_dp_tx, old_sut_drop, old_sut_tx_fail, old_sut_tsc, sut_tsc_hz = sutsock.core_stats(sutstatcores,tasks) + old_sut_rx = old_sut_rx - old_sut_non_dp_rx + old_sut_tx = old_sut_tx - old_sut_non_dp_tx + old_rx, old_non_dp_rx, old_tx, old_non_dp_tx, old_drop, old_tx_fail, old_tsc, tsc_hz = gensock.core_stats(genstatcores,tasks) + old_rx = old_rx - old_non_dp_rx + old_tx = old_tx - old_non_dp_tx + gensock.start(gencores) + while (duration > 0): + time.sleep(0.5) + lat_min, lat_max, lat_avg, lat_used = gensock.lat_stats(latcores) if lat_used < 0.95: lat_warning = bcolors.FAIL + ' Potential latency accuracy problem: {:>3.0f}%'.format(lat_used*100) + bcolors.ENDC else: lat_warning = '' - # Get statistics after some execution time - new_rx, new_tx, new_drop, new_tsc, tsc_hz = gensock.core_stats(genstatcores) - if sutsock!='none': - new_sut_rx, new_sut_tx, new_sut_drop, new_sut_tsc, sut_tsc_hz = sutsock.core_stats(sutstatcores) - drop = new_drop-old_drop # drop is all packets dropped by all tasks. This includes packets dropped at the generator task + packets dropped by the nop task. In steady state, this equals to the number of packets received by this VM - rx = new_rx - old_rx # rx is all packets received by the nop task = all packets received in the gen VM - tx = new_tx - old_tx # tx is all generated packets actually accepted by the interface - tsc = new_tsc - old_tsc # time difference between the 2 measurements, expressed in cycles. + # Get statistics after some execution time + new_rx, new_non_dp_rx, new_tx, new_non_dp_tx, new_drop, new_tx_fail, new_tsc, tsc_hz = gensock.core_stats(genstatcores,tasks) + new_rx = new_rx - new_non_dp_rx + new_tx = new_tx - new_non_dp_tx + if sutsock!='none': + new_sut_rx, new_sut_non_dp_rx, new_sut_tx, new_sut_non_dp_tx, new_sut_drop, new_sut_tx_fail, new_sut_tsc, sut_tsc_hz = sutsock.core_stats(sutstatcores,tasks) + new_sut_rx = new_sut_rx - new_sut_non_dp_rx + new_sut_tx = new_sut_tx - new_sut_non_dp_tx + drop = new_drop-old_drop # drop is all packets dropped by all tasks. This includes packets dropped at the generator task + packets dropped by the nop task. In steady state, this equals to the number of packets received by this VM + rx = new_rx - old_rx # rx is all packets received by the nop task = all packets received in the gen VM + tx = new_tx - old_tx # tx is all generated packets actually accepted by the interface + tsc = new_tsc - old_tsc # time difference between the 2 measurements, expressed in cycles. 
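# Illustrative sketch, not part of the patch: the pps_req_tx/pps_tx/pps_rx
# expressions a few lines below all follow the same pattern, i.e. a packet
# counter delta divided by the elapsed time derived from the tsc delta and
# the tsc frequency (hz). delta_to_mpps() is a hypothetical helper name.
def delta_to_mpps(pkt_delta, tsc_delta, tsc_hz):
    elapsed_s = tsc_delta / float(tsc_hz)   # cycles / (cycles per second)
    return pkt_delta / elapsed_s / 1e6      # packets per second, in millions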
if tsc == 0 : continue - if sutsock!='none': - sut_rx = new_sut_rx - old_sut_rx - sut_tx = new_sut_tx - old_sut_tx - sut_tsc = new_sut_tsc - old_sut_tsc + if sutsock!='none': + sut_rx = new_sut_rx - old_sut_rx + sut_tx = new_sut_tx - old_sut_tx + sut_tsc = new_sut_tsc - old_sut_tsc if sut_tsc == 0 : continue - duration = duration - 1 - old_drop = new_drop - old_rx = new_rx - old_tx = new_tx - old_tsc = new_tsc - pps_req_tx = (tx+drop-rx)*tsc_hz*1.0/(tsc*1000000) - pps_tx = tx*tsc_hz*1.0/(tsc*1000000) - pps_rx = rx*tsc_hz*1.0/(tsc*1000000) - if sutsock!='none': - old_sut_tx = new_sut_tx - old_sut_rx = new_sut_rx - old_sut_tsc= new_sut_tsc - pps_sut_tx = sut_tx*sut_tsc_hz*1.0/(sut_tsc*1000000) - pps_sut_tx_str = '{:>7.3f}'.format(pps_sut_tx) - else: - pps_sut_tx = 0 - pps_sut_tx_str = 'NO MEAS.' - if (tx == 0): - log.critical("TX = 0. Test interrupted since no packet has been sent.") - raise Exception("TX = 0") - tot_drop = tot_drop + tx - rx - - if pps_sut_tx_str <> 'NO_RESULTS': - # First second mpps are not valid as there is no alignement between time the generator is started and per seconds stats - if (first): - log.info('|{:>4}'.format(size+4)+" |" + '{:>5.1f}'.format(speed) + '% ' +'{:>6.3f}'.format(get_pps(speed,size)) + ' Mpps|'+' |' +' |' +' |'+ ' |'+ '{:>8.0f}'.format(lat_avg)+' us |'+'{:>8.0f}'.format(lat_max)+' us | ' + '{:>9.0f}'.format(tx) + ' | '+ '{:>9.0f}'.format(rx) + ' | '+ '{:>7.0f}'.format(tx-rx) + ' | '+'{:>7.0f}'.format(tot_drop) +' |'+lat_warning) - else: - log.info('|{:>4}'.format(size+4)+" |" + '{:>5.1f}'.format(speed) + '% ' +'{:>6.3f}'.format(get_pps(speed,size)) + ' Mpps|'+ '{:>7.3f}'.format(pps_req_tx)+' Mpps |'+ '{:>7.3f}'.format(pps_tx) +' Mpps |' + '{:>7}'.format(pps_sut_tx_str) +' Mpps |'+ '{:>7.3f}'.format(pps_rx)+' Mpps |'+ '{:>8.0f}'.format(lat_avg)+' us |'+'{:>8.0f}'.format(lat_max)+' us | ' + '{:>9.0f}'.format(tx) + ' | '+ '{:>9.0f}'.format(rx) + ' | '+ '{:>7.0f}'.format(tx-rx) + ' | '+ '{:>7.0f}'.format(tot_drop) +' |'+lat_warning) - else: - log.debug('|{:>7}'.format(str(size))+" | Speed 0 or close to 0") - first = 0 - if (duration <= 0): - #Stop generating - gensock.stop(gencores) - time.sleep(sleep_time) - lat_min, lat_max, lat_avg, lat_used = gensock.lat_stats(latcores) + duration = duration - 1 + old_drop = new_drop + old_rx = new_rx + old_tx = new_tx + old_tsc = new_tsc + pps_req_tx = (tx+drop-rx)*tsc_hz*1.0/(tsc*1000000) + pps_tx = tx*tsc_hz*1.0/(tsc*1000000) + pps_rx = rx*tsc_hz*1.0/(tsc*1000000) + if sutsock!='none': + old_sut_tx = new_sut_tx + old_sut_rx = new_sut_rx + old_sut_tsc= new_sut_tsc + pps_sut_tx = sut_tx*sut_tsc_hz*1.0/(sut_tsc*1000000) + pps_sut_tx_str = '{:>7.3f}'.format(pps_sut_tx) + else: + pps_sut_tx = 0 + pps_sut_tx_str = 'NO MEAS.' + if (tx == 0): + log.critical("TX = 0. 
Test interrupted since no packet has been sent.") + raise Exception("TX = 0") + tot_drop = tot_drop + tx - rx + + if pps_sut_tx_str != 'NO_RESULTS': + # First second mpps are not valid as there is no alignement between time the generator is started and per seconds stats + if (first): + log.info('|{:>4}'.format(size+4)+" |" + '{:>5.1f}'.format(speed) + '% ' +'{:>6.3f}'.format(get_pps(speed,size)) + ' Mpps|'+' |' +' |' +' |'+ ' |'+ '{:>8.0f}'.format(lat_avg)+' us |'+'{:>8.0f}'.format(lat_max)+' us | ' + '{:>9.0f}'.format(tx) + ' | '+ '{:>9.0f}'.format(rx) + ' | '+ '{:>7.0f}'.format(tx-rx) + ' | '+'{:>7.0f}'.format(tot_drop) +' |'+lat_warning) + else: + log.info('|{:>4}'.format(size+4)+" |" + '{:>5.1f}'.format(speed) + '% ' +'{:>6.3f}'.format(get_pps(speed,size)) + ' Mpps|'+ '{:>7.3f}'.format(pps_req_tx)+' Mpps |'+ '{:>7.3f}'.format(pps_tx) +' Mpps |' + '{:>7}'.format(pps_sut_tx_str) +' Mpps |'+ '{:>7.3f}'.format(pps_rx)+' Mpps |'+ '{:>8.0f}'.format(lat_avg)+' us |'+'{:>8.0f}'.format(lat_max)+' us | ' + '{:>9.0f}'.format(tx) + ' | '+ '{:>9.0f}'.format(rx) + ' | '+ '{:>7.0f}'.format(tx-rx) + ' | '+ '{:>7.0f}'.format(tot_drop) +' |'+lat_warning) + else: + log.debug('|{:>7}'.format(str(size))+" | Speed 0 or close to 0") + first = 0 + if (duration <= 0): + #Stop generating + gensock.stop(gencores) + time.sleep(sleep_time) + lat_min, lat_max, lat_avg, lat_used = gensock.lat_stats(latcores) if lat_used < 0.95: lat_warning = bcolors.FAIL + ' Potential latency accuracy problem: {:>3.0f}%'.format(lat_used*100) + bcolors.ENDC else: lat_warning = '' - # Get statistics after some execution time - new_rx, new_tx, new_drop, new_tsc, tsc_hz = gensock.core_stats(genstatcores) - if sutsock!='none': - new_sut_rx, new_sut_tx, new_sut_drop, new_sut_tsc, sut_tsc_hz = sutsock.core_stats(sutstatcores) - drop = new_drop-old_drop # drop is all packets dropped by all tasks. This includes packets dropped at the generator task + packets dropped by the nop task. In steady state, this equals to the number of packets received by this VM - rx = new_rx - old_rx # rx is all packets received by the nop task = all packets received in the gen VM - tx = new_tx - old_tx # tx is all generated packets actually accepted by the interface - tsc = new_tsc - old_tsc # time difference between the 2 measurements, expressed in cycles. 
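# Illustrative sketch, not part of the patch: the relation between the
# get_pps() and get_speed() helpers defined earlier in this script. "size"
# is the frame size without CRC; the +24 covers CRC (4B), preamble/SFD (8B)
# and the inter-frame gap (12B), so feeding get_pps() output into get_speed()
# recovers the requested rate in Gb/s (speed is a % of a 10 Gb/s link).
# The assert is only a sanity check for this sketch.
def _pps_from_percent(speed, size):        # same formula as get_pps()
    return speed * 100.0 / (8 * (size + 24))

def _gbps_from_mpps(packet_speed, size):   # same formula as get_speed()
    return packet_speed / 1000.0 * (8 * (size + 24))

assert abs(_gbps_from_mpps(_pps_from_percent(100.0, 60), 60) - 10.0) < 1e-9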
- tot_drop = tot_drop + tx - rx - if sutsock!='none': - sut_rx = new_sut_rx - old_sut_rx - sut_tx = new_sut_tx - old_sut_tx - sut_tsc = new_sut_tsc - old_sut_tsc - if pps_sut_tx_str <> 'NO_RESULTS': - log.info('|{:>4}'.format(size+4)+" |" + '{:>5.1f}'.format(speed) + '% ' +'{:>6.3f}'.format(get_pps(speed,size)) + ' Mpps|'+' |' +' |' +' |'+ ' |'+ '{:>8.0f}'.format(lat_avg)+' us |'+'{:>8.0f}'.format(lat_max)+' us | ' + '{:>9.0f}'.format(tx) + ' | '+ '{:>9.0f}'.format(rx) + ' | '+ '{:>7.0f}'.format(tx-rx) + ' | '+ '{:>7.0f}'.format(tot_drop) +' |'+lat_warning) - log.info("+-----+------------------+-------------+-------------+-------------+-------------+-------------+-------------+-----------+-----------+---------+------------+") + # Get statistics after some execution time + new_rx, new_non_dp_rx, new_tx, new_non_dp_tx, new_drop, new_tx_fail, new_tsc, tsc_hz = gensock.core_stats(genstatcores,tasks) + new_rx = new_rx - new_non_dp_rx + new_tx = new_tx - new_non_dp_tx + if sutsock!='none': + new_sut_rx, new_sut_non_dp_rx, new_sut_tx, new_sut_non_dp_tx, new_sut_drop, new_sut_tx_fail, new_sut_tsc, sut_tsc_hz = sutsock.core_stats(sutstatcores,tasks) + new_sut_rx = new_sut_rx - new_sut_non_dp_rx + new_sut_tx = new_sut_tx - new_sut_non_dp_tx + drop = new_drop-old_drop # drop is all packets dropped by all tasks. This includes packets dropped at the generator task + packets dropped by the nop task. In steady state, this equals to the number of packets received by this VM + rx = new_rx - old_rx # rx is all packets received by the nop task = all packets received in the gen VM + tx = new_tx - old_tx # tx is all generated packets actually accepted by the interface + tsc = new_tsc - old_tsc # time difference between the 2 measurements, expressed in cycles. + tot_drop = tot_drop + tx - rx + if sutsock!='none': + sut_rx = new_sut_rx - old_sut_rx + sut_tx = new_sut_tx - old_sut_tx + sut_tsc = new_sut_tsc - old_sut_tsc + if pps_sut_tx_str != 'NO_RESULTS': + log.info('|{:>4}'.format(size+4)+" |" + '{:>5.1f}'.format(speed) + '% ' +'{:>6.3f}'.format(get_pps(speed,size)) + ' Mpps|'+' |' +' |' +' |'+ ' |'+ '{:>8.0f}'.format(lat_avg)+' us |'+'{:>8.0f}'.format(lat_max)+' us | ' + '{:>9.0f}'.format(tx) + ' | '+ '{:>9.0f}'.format(rx) + ' | '+ '{:>7.0f}'.format(tx-rx) + ' | '+ '{:>7.0f}'.format(tot_drop) +' |'+lat_warning) + log.info("+-----+------------------+-------------+-------------+-------------+-------------+-------------+-------------+-----------+-----------+---------+------------+") gensock.stop(latcores) +def run_measure_swap(sutsock): + log.info("+------------------------------------------------------------------------------------------------------+") + log.info("| Measuring packets on SWAP system |") + log.info("+-----------+------------+------------+------------+------------+------------+------------+------------+") + log.info("| Time | RX | TX | non DP RX | non DP TX | TX - RX | nonDP TX-RX| DROP TOT |") + log.info("+-----------+------------+------------+------------+------------+------------+------------+------------+") + sutsock.reset_stats() + duration = float(runtime) + first = 1 + tot_drop = 0 + old_rx, old_non_dp_rx, old_tx, old_non_dp_tx, old_drop, old_tx_fail, old_tsc, tsc_hz = sutsock.core_stats(sutstatcores,tasks) + while (duration > 0): + time.sleep(0.5) + # Get statistics after some execution time + new_rx, new_non_dp_rx, new_tx, new_non_dp_tx, new_drop, new_tx_fail, new_tsc, tsc_hz = sutsock.core_stats(sutstatcores,tasks) + drop = new_drop-old_drop + rx = new_rx - old_rx + tx = new_tx - old_tx 
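# Illustrative sketch, not part of the patch: how the cores/tasks lists used
# by core_stats() above are flattened into a single PROX command. PROX then
# answers with one comma-separated line per (core, task) pair, which is why
# the reworked core_stats()/lat_stats() in prox_ctrl.py loop over both lists
# when reading replies. build_dp_core_stats_cmd() is a hypothetical helper.
def build_dp_core_stats_cmd(cores, tasks):
    return 'dp core stats %s %s' % (','.join(map(str, cores)),
                                    ','.join(map(str, tasks)))

# e.g. cores=[1,2], tasks=[0] -> 'dp core stats 1,2 0' (two reply lines)
print(build_dp_core_stats_cmd([1, 2], [0]))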
+ non_dp_rx = new_non_dp_rx - old_non_dp_rx + non_dp_tx = new_non_dp_tx - old_non_dp_tx + tsc = new_tsc - old_tsc + if tsc == 0 : + continue + duration = duration - 1 + old_drop = new_drop + old_rx = new_rx + old_tx = new_tx + old_non_dp_rx = new_non_dp_rx + old_non_dp_tx = new_non_dp_tx + old_tsc = new_tsc + tot_drop = tot_drop + tx - rx + + log.info('|{:>10.0f}'.format(duration)+' | ' + '{:>10.0f}'.format(rx) + ' | ' +'{:>10.0f}'.format(tx) + ' | '+'{:>10.0f}'.format(non_dp_rx)+' | '+'{:>10.0f}'.format(non_dp_tx)+' | ' + '{:>10.0f}'.format(tx-rx) + ' | '+ '{:>10.0f}'.format(non_dp_tx-non_dp_rx) + ' | '+'{:>10.0f}'.format(tot_drop) +' |') + log.info("+------------------------------------------------------------------------------------------------------+") def run_irqtest(sock): - log.info("+----------------------------------------------------------------------------------------------------------------------------") - log.info("| Measuring time probably spent dealing with an interrupt. Interrupting DPDK cores for more than 50us might be problematic ") - log.info("| and result in packet loss. The first row shows the interrupted time buckets: first number is the bucket between 0us and ") + log.info("+----------------------------------------------------------------------------------------------------------------------------") + log.info("| Measuring time probably spent dealing with an interrupt. Interrupting DPDK cores for more than 50us might be problematic ") + log.info("| and result in packet loss. The first row shows the interrupted time buckets: first number is the bucket between 0us and ") log.info("| that number expressed in us and so on. The numbers in the other rows show how many times per second, the program was ") - log.info("| interrupted for a time as specified by its bucket. '0' is printed when there are no interrupts in this bucket throughout ") - log.info("| the duration of the test. This is to avoid rounding errors in the case of 0.0 ") - log.info("+----------------------------------------------------------------------------------------------------------------------------") - sys.stdout.flush() + log.info("| interrupted for a time as specified by its bucket. '0' is printed when there are no interrupts in this bucket throughout ") + log.info("| the duration of the test. This is to avoid rounding errors in the case of 0.0 ") + log.info("+----------------------------------------------------------------------------------------------------------------------------") + sys.stdout.flush() buckets=sock.show_irq_buckets(1) - print('Measurement ongoing ... ',end='\r') + print('Measurement ongoing ... ',end='\r') sock.stop(irqcores) old_irq = [[0 for x in range(len(buckets)+1)] for y in range(len(irqcores)+1)] irq = [[0 for x in range(len(buckets)+1)] for y in range(len(irqcores)+1)] @@ -671,12 +594,11 @@ def run_irqtest(sock): irq[i][j] = str(round(diff/float(runtime), 2)) for row in irq: log.info(''.join(['{:>12}'.format(item) for item in row])) -# log.info('\n'.join([''.join(['{:>12}'.format(item) for item in row]) for row in irq])) def run_impairtest(gensock,sutsock): size=PACKETSIZE-4 - log.info("+-----------------------------------------------------------------------------------------------------------------------------------------------------------------+") - log.info("| Generator is sending UDP (1 flow) packets ("+ '{:>5}'.format(size+4) +" bytes) to SUT via GW dropping and delaying packets. SUT sends packets back. 
Use ctrl-c to stop the test |") + log.info("+-----------------------------------------------------------------------------------------------------------------------------------------------------------------+") + log.info("| Generator is sending UDP (1 flow) packets ("+ '{:>5}'.format(size+4) +" bytes) to SUT via GW dropping and delaying packets. SUT sends packets back. Use ctrl-c to stop the test |") log.info("+--------+--------------------+----------------+----------------+----------------+----------------+----------------+----------------+----------------+------------+") log.info("| Test | Speed requested | Sent to NIC | Sent by Gen | Forward by SUT | Rec. by Gen | Avg. Latency | Max. Latency | Packets Lost | Loss Ratio |") log.info("+--------+--------------------+----------------+----------------+----------------+----------------+----------------+----------------+----------------+------------+") @@ -688,31 +610,41 @@ def run_impairtest(gensock,sutsock): gensock.start(latcores) speed = STARTSPEED gensock.speed(speed / len(gencores), gencores) - while True: - attempts += 1 - print('Measurement ongoing at speed: ' + str(round(speed,2)) + '% ',end='\r') - sys.stdout.flush() - time.sleep(1) - # Get statistics now that the generation is stable and NO ARP messages any more - pps_req_tx,pps_tx,pps_sut_tx_str,pps_rx,lat_avg,lat_max, abs_dropped, abs_tx, lat_min, lat_used = run_iteration(gensock,sutsock) + while True: + attempts += 1 + print('Measurement ongoing at speed: ' + str(round(speed,2)) + '% ',end='\r') + sys.stdout.flush() + time.sleep(1) + # Get statistics now that the generation is stable and NO ARP messages any more + pps_req_tx,pps_tx,pps_sut_tx_str,pps_rx,lat_avg,lat_max, abs_dropped, abs_tx_fail, abs_tx, lat_min, lat_used = run_iteration(gensock,sutsock) drop_rate = 100.0*abs_dropped/abs_tx if lat_used < 0.95: lat_warning = bcolors.FAIL + ' Potential latency accuracy problem: {:>3.0f}%'.format(lat_used*100) + bcolors.ENDC else: lat_warning = '' - log.info('|{:>7}'.format(str(attempts))+" | " + '{:>5.1f}'.format(speed) + '% ' +'{:>6.3f}'.format(get_pps(speed,size)) + ' Mpps | '+ '{:>9.3f}'.format(pps_req_tx)+' Mpps | '+ '{:>9.3f}'.format(pps_tx) +' Mpps | ' + '{:>9}'.format(pps_sut_tx_str) +' Mpps | '+ '{:>9.3f}'.format(pps_rx)+' Mpps | '+ '{:>9.0f}'.format(lat_avg)+' us | '+ '{:>9.0f}'.format(lat_max)+' us | '+ '{:>14d}'.format(abs_dropped)+ ' |''{:>9.2f}'.format(drop_rate)+ '% |'+lat_warning) + log.info('|{:>7}'.format(str(attempts))+" | " + '{:>5.1f}'.format(speed) + '% ' +'{:>6.3f}'.format(get_pps(speed,size)) + ' Mpps | '+ '{:>9.3f}'.format(pps_req_tx)+' Mpps | '+ '{:>9.3f}'.format(pps_tx) +' Mpps | ' + '{:>9}'.format(pps_sut_tx_str) +' Mpps | '+ '{:>9.3f}'.format(pps_rx)+' Mpps | '+ '{:>9.0f}'.format(lat_avg)+' us | '+ '{:>9.0f}'.format(lat_max)+' us | '+ '{:>14d}'.format(abs_dropped)+ ' |''{:>9.2f}'.format(drop_rate)+ '% |'+lat_warning) writer.writerow({'flow':'1','size':(size+4),'endspeed':speed,'endspeedpps':get_pps(speed,size),'endpps_req_tx':pps_req_tx,'endpps_tx':pps_tx,'endpps_sut_tx_str':pps_sut_tx_str,'endpps_rx':pps_rx,'endlat_avg':lat_avg,'endlat_max':lat_max,'endabs_dropped':abs_dropped,'enddrop_rate':drop_rate}) gensock.stop(latcores) -def run_inittest(gensock): +def run_warmuptest(gensock): # Running at low speed to make sure the ARP messages can get through. 
# If not doing this, the ARP message could be dropped by a switch in overload and then the test will not give proper results # Note hoever that if we would run the test steps during a very long time, the ARP would expire in the switch. # PROX will send a new ARP request every seconds so chances are very low that they will all fail to get through - gensock.speed(0.01 / len(gencores), gencores) + gensock.speed(WARMUPSPEED / len(gencores), gencores) + size=PACKETSIZE-4 + gensock.set_size(gencores,0,size) # This is setting the frame size + gensock.set_value(gencores,0,16,(size-14),2) # 18 is the difference between the frame size and IP size = size of (MAC addresses, ethertype and FCS) + gensock.set_value(gencores,0,38,(size-34),2) # 38 is the difference between the frame size and UDP size = 18 + size of IP header (=20) + gensock.set_value(gencores,0,56,1,1) + # This will only work when using sending UDP packets. For different protocols and ethernet types, we would need a different calculation + source_port,destination_port = flows[FLOWSIZE] + gensock.set_random(gencores,0,34,source_port,2) + gensock.set_random(gencores,0,36,destination_port,2) gensock.start(genstatcores) - time.sleep(2) + time.sleep(WARMUPTIME) gensock.stop(genstatcores) + gensock.set_value(gencores,0,56,50,1) global sutstatcores global genstatcores @@ -721,8 +653,34 @@ global gencores global irqcores global PACKETSIZE global packet_size_list +global FLOWSIZE global flow_size_list +global WARMUPTIME +global WARMUPSPEED global required_number_of_test_machines +# To generate a desired number of flows, PROX will randomize the bits in source and destination ports, as specified by the bit masks in the flows variable. +flows={\ +1: ['1000000000000000','1000000000000000'],\ +2: ['1000000000000000','100000000000000X'],\ +4: ['100000000000000X','100000000000000X'],\ +8: ['100000000000000X','10000000000000XX'],\ +16: ['10000000000000XX','10000000000000XX'],\ +32: ['10000000000000XX','1000000000000XXX'],\ +64: ['1000000000000XXX','1000000000000XXX'],\ +128: ['1000000000000XXX','100000000000XXXX'],\ +256: ['100000000000XXXX','100000000000XXXX'],\ +512: ['100000000000XXXX','10000000000XXXXX'],\ +1024: ['10000000000XXXXX','10000000000XXXXX'],\ +2048: ['10000000000XXXXX','1000000000XXXXXX'],\ +4096: ['1000000000XXXXXX','1000000000XXXXXX'],\ +8192: ['1000000000XXXXXX','100000000XXXXXXX'],\ +16384: ['100000000XXXXXXX','100000000XXXXXXX'],\ +32768: ['100000000XXXXXXX','10000000XXXXXXXX'],\ +65535: ['10000000XXXXXXXX','10000000XXXXXXXX'],\ +131072: ['10000000XXXXXXXX','1000000XXXXXXXXX'],\ +262144: ['1000000XXXXXXXXX','1000000XXXXXXXXX'],\ +524280: ['1000000XXXXXXXXX','100000XXXXXXXXXX'],\ +1048576:['100000XXXXXXXXXX','100000XXXXXXXXXX'],} clients =[] socks =[] socks_control =[] @@ -737,16 +695,16 @@ auto_start =[] mach_type =[] sock_type =[] -data_file = 'RUN{}.{}.csv'.format(env,test) +data_file = 'RUN{}.{}.csv'.format(env,test_file) data_csv_file = open(data_file,'w') testconfig = ConfigParser.RawConfigParser() -testconfig.read(test+'.test') +testconfig.read(test_file) required_number_of_test_machines = testconfig.get('DEFAULT', 'total_number_of_test_machines') config = ConfigParser.RawConfigParser() -config.read(env+'.env') +config.read(env) machine_map = ConfigParser.RawConfigParser() -machine_map.read(machine_map_file +'.cfg') -key = config.get('OpenStack', 'key') +machine_map.read(machine_map_file) +key = config.get('ssh', 'key') total_number_of_machines = config.get('rapid', 'total_number_of_machines') if 
int(required_number_of_test_machines) > int(total_number_of_machines): log.exception("Not enough VMs for this test: %s needed and only %s available" % (required_number_of_test_machines,total_number_of_machines)) @@ -765,14 +723,13 @@ for vm in range(1, int(required_number_of_test_machines)+1): if prox_socket[vm-1]: prox_launch_exit.append(testconfig.getboolean('TestM%d'%vm, 'prox_launch_exit')) config_file.append(testconfig.get('TestM%d'%vm, 'config_file')) - with open('{}_{}_parameters{}.lua'.format(env,test,vm), "w") as f: + with open('{}_{}_parameters{}.lua'.format(env,test_file,vm), "w") as f: f.write('name="%s"\n'% testconfig.get('TestM%d'%vm, 'name')) f.write('local_ip="%s"\n'% vmDPIP[machine_index[vm-1]]) f.write('local_hex_ip="%s"\n'% hexDPIP[machine_index[vm-1]]) - if re.match('(l2){0,1}gen\.cfg',config_file[-1]): + if re.match('(l2){0,1}gen(_bare){0,1}\.cfg',config_file[-1]): gencores = ast.literal_eval(testconfig.get('TestM%d'%vm, 'gencores')) latcores = ast.literal_eval(testconfig.get('TestM%d'%vm, 'latcores')) - STARTSPEED = float(testconfig.get('TestM%d'%vm, 'startspeed')) genstatcores = gencores + latcores auto_start.append(False) mach_type.append('gen') @@ -785,7 +742,6 @@ for vm in range(1, int(required_number_of_test_machines)+1): elif re.match('(l2){0,1}gen_gw\.cfg',config_file[-1]): gencores = ast.literal_eval(testconfig.get('TestM%d'%vm, 'gencores')) latcores = ast.literal_eval(testconfig.get('TestM%d'%vm, 'latcores')) - STARTSPEED = float(testconfig.get('TestM%d'%vm, 'startspeed')) genstatcores = gencores + latcores auto_start.append(False) mach_type.append('gen') @@ -831,9 +787,9 @@ def exit_handler(): log.debug ('exit cleanup') for index, sock in enumerate(socks): if socks_control[index]: - sock.quit() + sock.quit() for client in clients: - client.close() + client.close() data_csv_file.close sys.exit(0) @@ -844,7 +800,7 @@ for vm in range(0, int(required_number_of_test_machines)): clients.append(prox_ctrl(vmAdminIP[machine_index[vm]], key+'.pem','root')) connect_client(clients[-1]) # Creating script to bind the right network interface to the poll mode driver - devbindfile = '{}_{}_devbindvm{}.sh'.format(env,test, vm+1) + devbindfile = '{}_{}_devbindvm{}.sh'.format(env,test_file, vm+1) with open("devbind.sh") as f: newText=f.read().replace('MACADDRESS', vmDPmac[machine_index[vm]]) with open(devbindfile, "w") as f: @@ -856,7 +812,7 @@ for vm in range(0, int(required_number_of_test_machines)): clients[-1].run_cmd(cmd) log.debug("devbind.sh running on VM%d"%(vm+1)) clients[-1].scp_put('./%s'%config_file[vm], '/root/%s'%config_file[vm]) - clients[-1].scp_put('./{}_{}_parameters{}.lua'.format(env,test, vm+1), '/root/parameters.lua') + clients[-1].scp_put('./{}_{}_parameters{}.lua'.format(env,test_file, vm+1), '/root/parameters.lua') if not configonly: if prox_launch_exit[vm]: log.debug("Starting PROX on VM%d"%(vm+1)) @@ -881,6 +837,7 @@ def get_BinarySearchParams() : LAT_AVG_TRESHOLD = float(testconfig.get('BinarySearchParams', 'lat_avg_threshold')) LAT_MAX_TRESHOLD = float(testconfig.get('BinarySearchParams', 'lat_max_threshold')) ACCURACY = float(testconfig.get('BinarySearchParams', 'accuracy')) + STARTSPEED = float(testconfig.get('BinarySearchParams', 'startspeed')) if configonly: sys.exit() @@ -898,30 +855,30 @@ with data_csv_file: writer.writeheader() for test_nr in range(1, int(number_of_tests)+1): test=testconfig.get('test%d'%test_nr,'test') + tasks= ast.literal_eval(testconfig.get('test%d'%test_nr, 'tasks')) log.info(test) - if test == 'speedtest': - 
get_BinarySearchParams() - PACKETSIZE = int(testconfig.get('test%d'%test_nr, 'packetsize')) - run_speedtest(socks[gensock_index],socks[sutsock_index]) - elif test == 'flowtest': - get_BinarySearchParams() - PACKETSIZE = int(testconfig.get('test%d'%test_nr, 'packetsize')) - flow_size_list = ast.literal_eval(testconfig.get('test%d'%test_nr, 'flows')) - run_flowtest(socks[gensock_index],socks[sutsock_index]) - elif test == 'sizetest': + if test == 'flowsizetest': get_BinarySearchParams() packet_size_list = ast.literal_eval(testconfig.get('test%d'%test_nr, 'packetsizes')) - run_sizetest(socks[gensock_index],socks[sutsock_index]) - elif test == 'max_frame_rate': -# PACKETSIZE = int(testconfig.get('test%d'%test_nr, 'packetsize')) + flow_size_list = ast.literal_eval(testconfig.get('test%d'%test_nr, 'flows')) + run_flow_size_test(socks[gensock_index],socks[sutsock_index]) + elif test == 'fixed_rate': packet_size_list = ast.literal_eval(testconfig.get('test%d'%test_nr, 'packetsizes')) - run_max_frame_rate(socks[gensock_index],socks[sutsock_index]) + STARTSPEED = float(testconfig.get('test%d'%test_nr, 'speed')) + run_fixed_rate(socks[gensock_index],socks[sutsock_index]) + elif test == 'measureswap': + #packet_size_list = ast.literal_eval(testconfig.get('test%d'%test_nr, 'packetsizes')) + run_measure_swap(socks[sutsock_index]) elif test == 'impairtest': get_BinarySearchParams() PACKETSIZE = int(testconfig.get('test%d'%test_nr, 'packetsize')) run_impairtest(socks[gensock_index],socks[sutsock_index]) elif test == 'irqtest': run_irqtest(socks[irqsock_index]) - elif test == 'inittest': - run_inittest(socks[gensock_index]) + elif test == 'warmuptest': + PACKETSIZE = int(testconfig.get('test%d'%test_nr, 'packetsize')) + FLOWSIZE = int(testconfig.get('test%d'%test_nr, 'flowsize')) + WARMUPSPEED = int(testconfig.get('test%d'%test_nr, 'warmupspeed')) + WARMUPTIME = int(testconfig.get('test%d'%test_nr, 'warmuptime')) + run_warmuptest(socks[gensock_index]) #################################################### diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/secgw.test b/VNFs/DPPD-PROX/helper-scripts/rapid/secgw.test index cf1b5522..5c5813f0 100644 --- a/VNFs/DPPD-PROX/helper-scripts/rapid/secgw.test +++ b/VNFs/DPPD-PROX/helper-scripts/rapid/secgw.test @@ -20,6 +20,7 @@ number_of_tests = 2 total_number_of_test_machines = 3 prox_socket = true prox_launch_exit = true +tasks=[0] [TestM1] name = Generator @@ -28,7 +29,6 @@ dest_vm = 3 gw_vm = 2 gencores = [1] latcores = [3] -startspeed = 10 [TestM2] name = GW1 @@ -46,10 +46,18 @@ drop_rate_threshold = 0.1 lat_avg_threshold = 500 lat_max_threshold = 1000 accuracy = 0.1 +startspeed = 10 [test1] -test=inittest +test=warmuptest +flowsize=1024 +packetsize=64 +warmupspeed=1 +warmuptime=2 [test2] -test=speedtest -packetsize=64 +test=flowsizetest +packetsizes=[64] +# the number of flows in the list need to be powers of 2, max 2^20 +# Select from following numbers: 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768, 65535, 131072, 262144, 524280, 1048576 +flows=[512] diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/sharkproxlog.sh b/VNFs/DPPD-PROX/helper-scripts/rapid/sharkproxlog.sh new file mode 100755 index 00000000..f52e5766 --- /dev/null +++ b/VNFs/DPPD-PROX/helper-scripts/rapid/sharkproxlog.sh @@ -0,0 +1 @@ +egrep '^[0-9]{4}|^[0-9]+\.' 
prox.log | text2pcap -q - - | tshark -r - diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/swap.cfg b/VNFs/DPPD-PROX/helper-scripts/rapid/swap.cfg index 47cb0b07..02300f82 100644 --- a/VNFs/DPPD-PROX/helper-scripts/rapid/swap.cfg +++ b/VNFs/DPPD-PROX/helper-scripts/rapid/swap.cfg @@ -24,6 +24,7 @@ dofile("parameters.lua") [port 0] name=if0 mac=hardware +vlan=yes [defaults] mempool size=2K -- cgit 1.2.3-korg From 1b650efa968fa10a5fed1ecd8bd5ca5a7cb46660 Mon Sep 17 00:00:00 2001 From: Provoost Date: Mon, 1 Jul 2019 11:21:00 -0400 Subject: Cosmetic change when printing warnings Warnings can be printed in the following cases: - When not enough packets are taken into account for measuring packet latency accuracy. - When there us a potential network throughput issue, meaning the generator is generating more packets than the the NIC can handle - When the generator cannot generate the requested load If there are no warnings for a certain measurement, nothing gets printed, suppressing an empty line Change-Id: Iee07c12142e28dcc0ac406bfed7626731ab08f98 Signed-off-by: Luc Provoost --- VNFs/DPPD-PROX/helper-scripts/rapid/runrapid.py | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/runrapid.py b/VNFs/DPPD-PROX/helper-scripts/rapid/runrapid.py index 159550ca..8964f2de 100755 --- a/VNFs/DPPD-PROX/helper-scripts/rapid/runrapid.py +++ b/VNFs/DPPD-PROX/helper-scripts/rapid/runrapid.py @@ -326,6 +326,7 @@ def run_flow_size_test(gensock,sutsock): minspeed = 0 while (maxspeed-minspeed > ACCURACY): attempts += 1 + endwarning ='' print(str(flow_number)+' flows: Measurement ongoing at speed: ' + str(round(speed,2)) + '% ',end='\r') sys.stdout.flush() # Start generating packets at requested speed (in % of a 10Gb/s link) @@ -361,7 +362,8 @@ def run_flow_size_test(gensock,sutsock): endlat_max = lat_max endabs_dropped = abs_dropped enddrop_rate = drop_rate - endwarning = '| |' + lat_warning + gen_warning + if lat_warning or gen_warning: + endwarning = '| | {:167.167} |'.format(lat_warning + gen_warning) success = True success_message='% | SUCCESS' else: -- cgit 1.2.3-korg From d55e457cfc09e84c0a3fb8c32a21517c4388a131 Mon Sep 17 00:00:00 2001 From: Luc Provoost Date: Fri, 5 Jul 2019 06:05:20 -0400 Subject: Some fixes after code review Taking into account comments from Patrice and Xavier Change-Id: Ifdabd1945e074c9ee97b059956f107901392c020 Signed-off-by: Luc Provoost --- VNFs/DPPD-PROX/helper-scripts/rapid/README | 2 +- VNFs/DPPD-PROX/helper-scripts/rapid/bare.test | 6 ++-- .../DPPD-PROX/helper-scripts/rapid/basicrapid.test | 6 ++-- VNFs/DPPD-PROX/helper-scripts/rapid/centos.json | 2 +- .../rapid/check_prox_system_setup.sh | 2 +- VNFs/DPPD-PROX/helper-scripts/rapid/createrapid.py | 36 ++++++++++---------- .../helper-scripts/rapid/deploycentostools.sh | 1 - VNFs/DPPD-PROX/helper-scripts/rapid/gen.cfg | 2 +- VNFs/DPPD-PROX/helper-scripts/rapid/gen_gw.cfg | 2 +- VNFs/DPPD-PROX/helper-scripts/rapid/impair.cfg | 2 +- VNFs/DPPD-PROX/helper-scripts/rapid/impair.test | 2 +- VNFs/DPPD-PROX/helper-scripts/rapid/irq.test | 2 +- .../helper-scripts/rapid/l2framerate.test | 10 +----- VNFs/DPPD-PROX/helper-scripts/rapid/l2gen.cfg | 2 +- VNFs/DPPD-PROX/helper-scripts/rapid/l2gen_bare.cfg | 2 +- VNFs/DPPD-PROX/helper-scripts/rapid/l2swap.cfg | 2 +- .../DPPD-PROX/helper-scripts/rapid/l2zeroloss.test | 9 +++-- .../helper-scripts/rapid/l3framerate.test | 9 +---- VNFs/DPPD-PROX/helper-scripts/rapid/machine.map | 2 +- VNFs/DPPD-PROX/helper-scripts/rapid/prox_ctrl.py | 17 +++++----- 
VNFs/DPPD-PROX/helper-scripts/rapid/rapidVMs.vms | 2 +- VNFs/DPPD-PROX/helper-scripts/rapid/runrapid.py | 39 ++++++++++------------ VNFs/DPPD-PROX/helper-scripts/rapid/secgw.test | 4 +-- .../DPPD-PROX/helper-scripts/rapid/sharkproxlog.sh | 18 ++++++++++ VNFs/DPPD-PROX/helper-scripts/rapid/swap.cfg | 2 +- 25 files changed, 92 insertions(+), 91 deletions(-) diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/README b/VNFs/DPPD-PROX/helper-scripts/rapid/README index cb3a4fd8..602346da 100644 --- a/VNFs/DPPD-PROX/helper-scripts/rapid/README +++ b/VNFs/DPPD-PROX/helper-scripts/rapid/README @@ -1,5 +1,5 @@ ## -## Copyright (c) 2010-2017 Intel Corporation +## Copyright (c) 2010-2019 Intel Corporation ## ## Licensed under the Apache License, Version 2.0 (the "License"); ## you may not use this file except in compliance with the License. diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/bare.test b/VNFs/DPPD-PROX/helper-scripts/rapid/bare.test index e686e15e..c3f4965f 100644 --- a/VNFs/DPPD-PROX/helper-scripts/rapid/bare.test +++ b/VNFs/DPPD-PROX/helper-scripts/rapid/bare.test @@ -1,5 +1,5 @@ ## -## Copyright (c) 2010-2018 Intel Corporation +## Copyright (c) 2010-2019 Intel Corporation ## ## Licensed under the Apache License, Version 2.0 (the "License"); ## you may not use this file except in compliance with the License. @@ -43,7 +43,7 @@ startspeed = 10 [test1] test=warmuptest -flowsize=1024 +flowsize=512 packetsize=64 warmupspeed=10 warmuptime=2 @@ -53,4 +53,4 @@ test=flowsizetest packetsizes=[64,128] # the number of flows in the list need to be powers of 2, max 2^20 # # Select from following numbers: 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768, 65535, 131072, 262144, 524280, 1048576 -flows=[1,512] +flows=[512,1] diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/basicrapid.test b/VNFs/DPPD-PROX/helper-scripts/rapid/basicrapid.test index 4bdfdda4..0a751d8c 100644 --- a/VNFs/DPPD-PROX/helper-scripts/rapid/basicrapid.test +++ b/VNFs/DPPD-PROX/helper-scripts/rapid/basicrapid.test @@ -1,5 +1,5 @@ ## -## Copyright (c) 2010-2018 Intel Corporation +## Copyright (c) 2010-2019 Intel Corporation ## ## Licensed under the Apache License, Version 2.0 (the "License"); ## you may not use this file except in compliance with the License. 
@@ -43,7 +43,7 @@ startspeed = 10 [test1] test=warmuptest -flowsize=1024 +flowsize=512 packetsize=64 warmupspeed=1 warmuptime=2 @@ -53,5 +53,5 @@ test=flowsizetest packetsizes=[64,128] # the number of flows in the list need to be powers of 2, max 2^20 # Select from following numbers: 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768, 65535, 131072, 262144, 524280, 1048576 -flows=[1,512] +flows=[512,1] diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/centos.json b/VNFs/DPPD-PROX/helper-scripts/rapid/centos.json index 3754ea09..df43393a 100644 --- a/VNFs/DPPD-PROX/helper-scripts/rapid/centos.json +++ b/VNFs/DPPD-PROX/helper-scripts/rapid/centos.json @@ -1,5 +1,5 @@ { -"_Copyright": "Copyright (c) 2010-2018 Intel Corporation", +"_Copyright": "Copyright (c) 2010-2019 Intel Corporation", "_License": "SPDX-License-Identifier: Apache-2.0", "builders": [ { diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/check_prox_system_setup.sh b/VNFs/DPPD-PROX/helper-scripts/rapid/check_prox_system_setup.sh index 9effa53c..7d66bd39 100755 --- a/VNFs/DPPD-PROX/helper-scripts/rapid/check_prox_system_setup.sh +++ b/VNFs/DPPD-PROX/helper-scripts/rapid/check_prox_system_setup.sh @@ -1,6 +1,6 @@ #!/usr/bin/env bash ## -## Copyright (c) 2010-2018 Intel Corporation +## Copyright (c) 2010-2019 Intel Corporation ## ## Licensed under the Apache License, Version 2.0 (the "License"); ## you may not use this file except in compliance with the License. diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/createrapid.py b/VNFs/DPPD-PROX/helper-scripts/rapid/createrapid.py index 3fbdc4c3..fc5e97b4 100755 --- a/VNFs/DPPD-PROX/helper-scripts/rapid/createrapid.py +++ b/VNFs/DPPD-PROX/helper-scripts/rapid/createrapid.py @@ -1,7 +1,7 @@ #!/usr/bin/python ## -## Copyright (c) 2010-2017 Intel Corporation +## Copyright (c) 2010-2019 Intel Corporation ## ## Licensed under the Apache License, Version 2.0 (the "License"); ## you may not use this file except in compliance with the License. 
@@ -89,43 +89,43 @@ if args: usage() sys.exit(2) for opt, arg in opts: - if opt in ("-h", "--help"): + if opt in ["-h", "--help"]: usage() sys.exit() - if opt in ("-v", "--version"): + if opt in ["-v", "--version"]: print("Rapid Automated Performance Indication for Dataplane "+version) sys.exit() - if opt in ("--stack"): + if opt in ["--stack"]: stack = arg print ("Using '"+stack+"' as name for the stack") - elif opt in ("--vms"): + elif opt in ["--vms"]: vms = arg print ("Using Virtual Machines Description: "+vms) - elif opt in ("--key"): + elif opt in ["--key"]: key = arg print ("Using key: "+key) - elif opt in ("--image"): + elif opt in ["--image"]: image = arg print ("Using image: "+image) - elif opt in ("--image_file"): + elif opt in ["--image_file"]: image_file = arg print ("Using qcow2 file: "+image_file) - elif opt in ("--dataplane_network"): + elif opt in ["--dataplane_network"]: dataplane_network = arg print ("Using dataplane network: "+ dataplane_network) - elif opt in ("--subnet"): + elif opt in ["--subnet"]: subnet = arg print ("Using dataplane subnet: "+ subnet) - elif opt in ("--subnet_cidr"): + elif opt in ["--subnet_cidr"]: subnet_cidr = arg print ("Using dataplane subnet: "+ subnet_cidr) - elif opt in ("--internal_network"): + elif opt in ["--internal_network"]: internal_network = arg print ("Using control plane network: "+ internal_network) - elif opt in ("--floating_network"): + elif opt in ["--floating_network"]: floating_network = arg print ("Using floating ip network: "+ floating_network) - elif opt in ("--log"): + elif opt in ["--log"]: loglevel = arg print ("Log level: "+ loglevel) @@ -204,6 +204,7 @@ if floating_network !='NO': # Checking if the dataplane network already exists, if not create it log.debug("Checking dataplane network: " + dataplane_network) if dataplane_network in Networks: + # If the dataplane already exists, we are assuming that this network is already created before with the proper configuration, hence we do not check if the subnet is created etc... log.info("Dataplane network (" + dataplane_network + ") already active") else: log.info('Creating dataplane network ...') @@ -280,7 +281,9 @@ ServerToBeCreated=[] ServerName=[] config = ConfigParser.RawConfigParser() vmconfig = ConfigParser.RawConfigParser() -vmconfig.read(vms) +vmname = os.path.dirname(os.path.realpath(__file__))+'/' + vms +#vmconfig.read_file(open(vmname)) +vmconfig.readfp(open(vmname)) total_number_of_VMs = vmconfig.get('DEFAULT', 'total_number_of_vms') cmd = 'openstack server list -f value -c Name' log.debug (cmd) @@ -322,8 +325,7 @@ for vm in range(1, int(total_number_of_VMs)+1): if SRIOV_mgmt_port == 'NO': nic_info = '--nic net-id=%s'%(internal_network) else: - for port in SRIOV_mgmt_port.split(','): - nic_info = '--nic port-id=%s'%(port) + nic_info = '--nic port-id=%s'%(SRIOV_mgmt_port) if SRIOV_port == 'NO': nic_info = nic_info + ' --nic net-id=%s'%(dataplane_network) else: diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/deploycentostools.sh b/VNFs/DPPD-PROX/helper-scripts/rapid/deploycentostools.sh index 883244fa..2695735c 100644 --- a/VNFs/DPPD-PROX/helper-scripts/rapid/deploycentostools.sh +++ b/VNFs/DPPD-PROX/helper-scripts/rapid/deploycentostools.sh @@ -136,7 +136,6 @@ function prox_install() if [ "$1" == "compile" ]; then prox_compile else - echo "Positional parameter 1 is empty" [ ! 
-d ${BUILD_DIR} ] && sudo mkdir -p ${BUILD_DIR} sudo chmod 0777 ${BUILD_DIR} diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/gen.cfg b/VNFs/DPPD-PROX/helper-scripts/rapid/gen.cfg index 0b52430f..0a34a83f 100644 --- a/VNFs/DPPD-PROX/helper-scripts/rapid/gen.cfg +++ b/VNFs/DPPD-PROX/helper-scripts/rapid/gen.cfg @@ -1,5 +1,5 @@ ;; -;; Copyright (c) 2010-2017 Intel Corporation +;; Copyright (c) 2010-2019 Intel Corporation ;; ;; Licensed under the Apache License, Version 2.0 (the "License"); ;; you may not use this file except in compliance with the License. diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/gen_gw.cfg b/VNFs/DPPD-PROX/helper-scripts/rapid/gen_gw.cfg index d6a2fa98..6744d54f 100644 --- a/VNFs/DPPD-PROX/helper-scripts/rapid/gen_gw.cfg +++ b/VNFs/DPPD-PROX/helper-scripts/rapid/gen_gw.cfg @@ -1,5 +1,5 @@ ;; -;; Copyright (c) 2010-2017 Intel Corporation +;; Copyright (c) 2010-2019 Intel Corporation ;; ;; Licensed under the Apache License, Version 2.0 (the "License"); ;; you may not use this file except in compliance with the License. diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/impair.cfg b/VNFs/DPPD-PROX/helper-scripts/rapid/impair.cfg index 8ca9e05d..16b6ac99 100644 --- a/VNFs/DPPD-PROX/helper-scripts/rapid/impair.cfg +++ b/VNFs/DPPD-PROX/helper-scripts/rapid/impair.cfg @@ -1,5 +1,5 @@ ;; -;; Copyright (c) 2010-2017 Intel Corporation +;; Copyright (c) 2010-2019 Intel Corporation ;; ;; Licensed under the Apache License, Version 2.0 (the "License"); ;; you may not use this file except in compliance with the License. diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/impair.test b/VNFs/DPPD-PROX/helper-scripts/rapid/impair.test index d1b0e368..806762a1 100644 --- a/VNFs/DPPD-PROX/helper-scripts/rapid/impair.test +++ b/VNFs/DPPD-PROX/helper-scripts/rapid/impair.test @@ -1,5 +1,5 @@ ## -## Copyright (c) 2010-2018 Intel Corporation +## Copyright (c) 2010-2019 Intel Corporation ## ## Licensed under the Apache License, Version 2.0 (the "License"); ## you may not use this file except in compliance with the License. diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/irq.test b/VNFs/DPPD-PROX/helper-scripts/rapid/irq.test index 78b68483..4dbb0cc6 100644 --- a/VNFs/DPPD-PROX/helper-scripts/rapid/irq.test +++ b/VNFs/DPPD-PROX/helper-scripts/rapid/irq.test @@ -1,5 +1,5 @@ ## -## Copyright (c) 2010-2018 Intel Corporation +## Copyright (c) 2010-2019 Intel Corporation ## ## Licensed under the Apache License, Version 2.0 (the "License"); ## you may not use this file except in compliance with the License. 
diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/l2framerate.test b/VNFs/DPPD-PROX/helper-scripts/rapid/l2framerate.test index a9f8d0ae..51710fe9 100644 --- a/VNFs/DPPD-PROX/helper-scripts/rapid/l2framerate.test +++ b/VNFs/DPPD-PROX/helper-scripts/rapid/l2framerate.test @@ -16,7 +16,7 @@ [DEFAULT] name = L2BasicSwapTesting -number_of_tests = 2 +number_of_tests = 1 total_number_of_test_machines = 2 prox_socket = true prox_launch_exit = true @@ -36,14 +36,6 @@ config_file = l2swap.cfg swapcores = [1] [test1] -test=warmuptest -flowsize=1024 -packetsize=64 -warmupspeed=10 -warmuptime=2 - - -[test2] test=fixed_rate packetsizes=[256] speed=10 diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/l2gen.cfg b/VNFs/DPPD-PROX/helper-scripts/rapid/l2gen.cfg index 3a3cf2c8..37612c3d 100644 --- a/VNFs/DPPD-PROX/helper-scripts/rapid/l2gen.cfg +++ b/VNFs/DPPD-PROX/helper-scripts/rapid/l2gen.cfg @@ -1,5 +1,5 @@ ;; -;; Copyright (c) 2010-2017 Intel Corporation +;; Copyright (c) 2010-2019 Intel Corporation ;; ;; Licensed under the Apache License, Version 2.0 (the "License"); ;; you may not use this file except in compliance with the License. diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/l2gen_bare.cfg b/VNFs/DPPD-PROX/helper-scripts/rapid/l2gen_bare.cfg index 79140623..380b6646 100644 --- a/VNFs/DPPD-PROX/helper-scripts/rapid/l2gen_bare.cfg +++ b/VNFs/DPPD-PROX/helper-scripts/rapid/l2gen_bare.cfg @@ -1,5 +1,5 @@ ;; -;; Copyright (c) 2010-2017 Intel Corporation +;; Copyright (c) 2010-2019 Intel Corporation ;; ;; Licensed under the Apache License, Version 2.0 (the "License"); ;; you may not use this file except in compliance with the License. diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/l2swap.cfg b/VNFs/DPPD-PROX/helper-scripts/rapid/l2swap.cfg index 004588c0..366d8ac2 100644 --- a/VNFs/DPPD-PROX/helper-scripts/rapid/l2swap.cfg +++ b/VNFs/DPPD-PROX/helper-scripts/rapid/l2swap.cfg @@ -1,5 +1,5 @@ ;; -;; Copyright (c) 2010-2017 Intel Corporation +;; Copyright (c) 2010-2019 Intel Corporation ;; ;; Licensed under the Apache License, Version 2.0 (the "License"); ;; you may not use this file except in compliance with the License. diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/l2zeroloss.test b/VNFs/DPPD-PROX/helper-scripts/rapid/l2zeroloss.test index af60c407..95b2d492 100644 --- a/VNFs/DPPD-PROX/helper-scripts/rapid/l2zeroloss.test +++ b/VNFs/DPPD-PROX/helper-scripts/rapid/l2zeroloss.test @@ -1,5 +1,5 @@ ## -## Copyright (c) 2010-2018 Intel Corporation +## Copyright (c) 2010-2019 Intel Corporation ## ## Licensed under the Apache License, Version 2.0 (the "License"); ## you may not use this file except in compliance with the License. 
@@ -43,7 +43,7 @@ startspeed = 10 [test1] test=warmuptest -flowsize=1024 +flowsize=512 packetsize=64 warmupspeed=1 warmuptime=2 @@ -53,6 +53,5 @@ test=flowsizetest packetsizes=[64] # the number of flows in the list need to be powers of 2, max 2^20 # # Select from following numbers: 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768, 65535, 131072, 262144, 524280, 1048576 -# flows=[1,512] -# -# +flows=[512,1] + diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/l3framerate.test b/VNFs/DPPD-PROX/helper-scripts/rapid/l3framerate.test index 81d9989d..2095da4c 100644 --- a/VNFs/DPPD-PROX/helper-scripts/rapid/l3framerate.test +++ b/VNFs/DPPD-PROX/helper-scripts/rapid/l3framerate.test @@ -16,7 +16,7 @@ [DEFAULT] name = L3FrameRateTesting -number_of_tests = 2 +number_of_tests = 1 total_number_of_test_machines = 2 prox_socket = true prox_launch_exit = true @@ -35,13 +35,6 @@ config_file = swap.cfg swapcores = [1] [test1] -test=warmuptest -flowsize=1024 -packetsize=64 -warmupspeed=10 -warmuptime=2 - -[test2] test=fixed_rate packetsizes=[64] speed=10 diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/machine.map b/VNFs/DPPD-PROX/helper-scripts/rapid/machine.map index b6e199d7..1f7ce99d 100644 --- a/VNFs/DPPD-PROX/helper-scripts/rapid/machine.map +++ b/VNFs/DPPD-PROX/helper-scripts/rapid/machine.map @@ -1,5 +1,5 @@ ## -## Copyright (c) 2010-2018 Intel Corporation +## Copyright (c) 2010-2019 Intel Corporation ## ## Licensed under the Apache License, Version 2.0 (the "License"); ## you may not use this file except in compliance with the License. diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/prox_ctrl.py b/VNFs/DPPD-PROX/helper-scripts/rapid/prox_ctrl.py index bda3e5d9..5d5fb181 100644 --- a/VNFs/DPPD-PROX/helper-scripts/rapid/prox_ctrl.py +++ b/VNFs/DPPD-PROX/helper-scripts/rapid/prox_ctrl.py @@ -1,5 +1,5 @@ ## -## Copyright (c) 2010-2017 Intel Corporation +## Copyright (c) 2010-2019 Intel Corporation ## ## Licensed under the Apache License, Version 2.0 (the "License"); ## you may not use this file except in compliance with the License. 
@@ -183,22 +183,24 @@ class prox_sock(object): def reset_stats(self): self._send('reset stats') - def lat_stats(self, cores, tasks={0}): + def lat_stats(self, cores, tasks=[0]): min_lat = 999999999 max_lat = avg_lat = 0 + number_tasks_returning_stats = 0 self._send('lat stats %s %s' % (','.join(map(str, cores)), ','.join(map(str, tasks)))) for core in cores: for task in tasks: stats = self._recv().split(',') if stats[0].startswith('error'): if stats[0].startswith('error: invalid syntax'): - log.critical("dp core stats error: unexpected invalid syntax (potential incompatibility between scripts and PROX)") - raise Exception("dp core stats error") + log.critical("lat stats error: unexpected invalid syntax (potential incompatibility between scripts and PROX)") + raise Exception("lat stats error") continue + number_tasks_returning_stats += 1 min_lat = min(int(stats[0]),min_lat) max_lat = max(int(stats[1]),max_lat) avg_lat += int(stats[2]) - avg_lat = avg_lat/len(cores) + avg_lat = avg_lat/number_tasks_returning_stats self._send('stats latency(0).used') used = float(self._recv()) self._send('stats latency(0).total') @@ -217,7 +219,7 @@ class prox_sock(object): buckets = buckets[:-1] return buckets - def core_stats(self, cores, tasks={0}): + def core_stats(self, cores, tasks=[0]): rx = tx = drop = tsc = hz = rx_non_dp = tx_non_dp = tx_fail = 0 self._send('dp core stats %s %s' % (','.join(map(str, cores)), ','.join(map(str, tasks)))) for core in cores: @@ -236,8 +238,7 @@ class prox_sock(object): tx_fail += int(stats[5]) tsc = int(stats[6]) hz = int(stats[7]) - return rx,rx_non_dp, tx,tx_non_dp, drop, tx_fail, tsc, hz - #return rx-rx_non_dp, tx-tx_non_dp, drop, tx_fail, tsc, hz + return rx, rx_non_dp, tx, tx_non_dp, drop, tx_fail, tsc, hz def set_random(self, cores, task, offset, mask, length): self._send('set random %s %s %s %s %s' % (','.join(map(str, cores)), task, offset, mask, length)) diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/rapidVMs.vms b/VNFs/DPPD-PROX/helper-scripts/rapid/rapidVMs.vms index b83c0d07..6032f68b 100644 --- a/VNFs/DPPD-PROX/helper-scripts/rapid/rapidVMs.vms +++ b/VNFs/DPPD-PROX/helper-scripts/rapid/rapidVMs.vms @@ -1,5 +1,5 @@ ## -## Copyright (c) 2010-2018 Intel Corporation +## Copyright (c) 2010-2019 Intel Corporation ## ## Licensed under the Apache License, Version 2.0 (the "License"); ## you may not use this file except in compliance with the License. diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/runrapid.py b/VNFs/DPPD-PROX/helper-scripts/rapid/runrapid.py index 8964f2de..d0ee68a3 100755 --- a/VNFs/DPPD-PROX/helper-scripts/rapid/runrapid.py +++ b/VNFs/DPPD-PROX/helper-scripts/rapid/runrapid.py @@ -1,7 +1,7 @@ #!/usr/bin/python ## -## Copyright (c) 2010-2017 Intel Corporation +## Copyright (c) 2010-2019 Intel Corporation ## ## Licensed under the Apache License, Version 2.0 (the "License"); ## you may not use this file except in compliance with the License. @@ -61,14 +61,14 @@ def usage(): print(" --test TEST_NAME Test cases will be read from TEST_NAME. Default is %s."%test_file) print(" --map MACHINE_MAP_FILE Machine mapping will be read from MACHINE_MAP_FILE. Default is %s."%machine_map_file) print(" --runtime Specify time in seconds for 1 test run") - print(" --configonly If True, only upload all config files to the VMs, do not run the tests. 
Default is %s."%configonly) + print(" --configonly If this option is specified, only upload all config files to the VMs, do not run the tests") print(" --log Specify logging level for log file output, default is DEBUG") print(" --screenlog Specify logging level for screen output, default is INFO") print(" -h, --help Show help message and exit.") print("") try: - opts, args = getopt.getopt(sys.argv[1:], "vh", ["version","help", "env=", "test=", "map=", "runtime=","configonly=","log=","screenlog="]) + opts, args = getopt.getopt(sys.argv[1:], "vh", ["version","help", "env=", "test=", "map=", "runtime=","configonly","log=","screenlog="]) except getopt.GetoptError as err: print("===========================================") print(str(err)) @@ -79,32 +79,27 @@ if args: usage() sys.exit(2) for opt, arg in opts: - if opt in ("-h", "--help"): + if opt in ["-h", "--help"]: usage() sys.exit() - if opt in ("-v", "--version"): + if opt in ["-v", "--version"]: print("Rapid Automated Performance Indication for Dataplane "+version) sys.exit() - if opt in ("--env"): + if opt in ["--env"]: env = arg - if opt in ("--test"): + if opt in ["--test"]: test_file = arg - if opt in ("--map"): + if opt in ["--map"]: machine_map_file = arg - if opt in ("--runtime"): + if opt in ["--runtime"]: runtime = arg - if opt in ("--configonly"): - configonly = arg - if configonly == 'True': - configonly = True - print('No actual runs, only uploading configuration files') - else: - configonly = False - print('--configonly parameter is defaulted to False') - if opt in ("--log"): + if opt in ["--configonly"]: + configonly = True + print('No actual runs, only uploading configuration files') + if opt in ["--log"]: loglevel = arg print ("Log level: "+ loglevel) - if opt in ("--screenlog"): + if opt in ["--screenlog"]: screenloglevel = arg print ("Screen Log level: "+ screenloglevel) @@ -245,7 +240,7 @@ def run_iteration(gensock,sutsock): lat_max = lat_max_sample lat_avg = lat_avg + lat_avg_sample used_avg = used_avg + used_sample - lat_avg = lat_avg / n_loops + lat_avg = lat_avg / n_loops used_avg = used_avg / n_loops # Get statistics after some execution time new_rx, new_non_dp_rx, new_tx, new_non_dp_tx, new_drop, new_tx_fail, new_tsc, tsc_hz = gensock.core_stats(genstatcores,tasks) @@ -339,6 +334,9 @@ def run_flow_size_test(gensock,sutsock): lat_warning = bcolors.WARNING + ' Latency accuracy issue?: {:>3.0f}%'.format(lat_used*100) + bcolors.ENDC else: lat_warning = '' + # The following if statement is testing if we pass the success criteria of a certain drop rate, average latenecy and maximum latency below the threshold + # The drop rate success can be achieved in 2 ways: either the drop rate is below a treshold, either we want that no packet has been lost during the test + # This can be specified by putting 0 in the .test file if ((drop_rate < DROP_RATE_TRESHOLD) or (abs_dropped==DROP_RATE_TRESHOLD ==0)) and (lat_avg< LAT_AVG_TRESHOLD) and (lat_max < LAT_MAX_TRESHOLD): lat_avg_prefix = bcolors.ENDC lat_max_prefix = bcolors.ENDC @@ -626,7 +624,6 @@ def run_impairtest(gensock,sutsock): lat_warning = '' log.info('|{:>7}'.format(str(attempts))+" | " + '{:>5.1f}'.format(speed) + '% ' +'{:>6.3f}'.format(get_pps(speed,size)) + ' Mpps | '+ '{:>9.3f}'.format(pps_req_tx)+' Mpps | '+ '{:>9.3f}'.format(pps_tx) +' Mpps | ' + '{:>9}'.format(pps_sut_tx_str) +' Mpps | '+ '{:>9.3f}'.format(pps_rx)+' Mpps | '+ '{:>9.0f}'.format(lat_avg)+' us | '+ '{:>9.0f}'.format(lat_max)+' us | '+ '{:>14d}'.format(abs_dropped)+ ' |''{:>9.2f}'.format(drop_rate)+ 
'% |'+lat_warning) writer.writerow({'flow':'1','size':(size+4),'endspeed':speed,'endspeedpps':get_pps(speed,size),'endpps_req_tx':pps_req_tx,'endpps_tx':pps_tx,'endpps_sut_tx_str':pps_sut_tx_str,'endpps_rx':pps_rx,'endlat_avg':lat_avg,'endlat_max':lat_max,'endabs_dropped':abs_dropped,'enddrop_rate':drop_rate}) - gensock.stop(latcores) def run_warmuptest(gensock): # Running at low speed to make sure the ARP messages can get through. diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/secgw.test b/VNFs/DPPD-PROX/helper-scripts/rapid/secgw.test index 5c5813f0..d3693f29 100644 --- a/VNFs/DPPD-PROX/helper-scripts/rapid/secgw.test +++ b/VNFs/DPPD-PROX/helper-scripts/rapid/secgw.test @@ -1,5 +1,5 @@ ## -## Copyright (c) 2010-2018 Intel Corporation +## Copyright (c) 2010-2019 Intel Corporation ## ## Licensed under the Apache License, Version 2.0 (the "License"); ## you may not use this file except in compliance with the License. @@ -50,7 +50,7 @@ startspeed = 10 [test1] test=warmuptest -flowsize=1024 +flowsize=512 packetsize=64 warmupspeed=1 warmuptime=2 diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/sharkproxlog.sh b/VNFs/DPPD-PROX/helper-scripts/rapid/sharkproxlog.sh index f52e5766..3c1a90ee 100755 --- a/VNFs/DPPD-PROX/helper-scripts/rapid/sharkproxlog.sh +++ b/VNFs/DPPD-PROX/helper-scripts/rapid/sharkproxlog.sh @@ -1 +1,19 @@ +## +## Copyright (c) 2010-2019 Intel Corporation +## +## Licensed under the Apache License, Version 2.0 (the "License"); +## you may not use this file except in compliance with the License. +## You may obtain a copy of the License at +## +## http://www.apache.org/licenses/LICENSE-2.0 +## +## Unless required by applicable law or agreed to in writing, software +## distributed under the License is distributed on an "AS IS" BASIS, +## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +## See the License for the specific language governing permissions and +## limitations under the License. +## +## This code will help in using tshark to decode packets that were dumped +## in the prox.log file as a result of dump, dump_tx or dump_rx commands + egrep '^[0-9]{4}|^[0-9]+\.' prox.log | text2pcap -q - - | tshark -r - diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/swap.cfg b/VNFs/DPPD-PROX/helper-scripts/rapid/swap.cfg index 02300f82..b2f39c9a 100644 --- a/VNFs/DPPD-PROX/helper-scripts/rapid/swap.cfg +++ b/VNFs/DPPD-PROX/helper-scripts/rapid/swap.cfg @@ -1,5 +1,5 @@ ;; -;; Copyright (c) 2010-2017 Intel Corporation +;; Copyright (c) 2010-2019 Intel Corporation ;; ;; Licensed under the Apache License, Version 2.0 (the "License"); ;; you may not use this file except in compliance with the License. -- cgit 1.2.3-korg
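
Editorial note on the warm-up hunk above: run_warmuptest() rewrites two header length fields with set_value(gencores, 0, 16, size-14, 2) and set_value(gencores, 0, 38, size-34, 2). Those offsets and values follow from the usual Ethernet/IPv4/UDP layout; the plain-Python sketch below only reworks that arithmetic for a 64-byte frame (PACKETSIZE and size mirror the names in runrapid.py; the extra write at offset 56 is not covered here). It is an illustration, not part of the scripts.

# Worked example of the header length fields rewritten during the warm-up,
# assuming a 64-byte frame and plain Ethernet/IPv4/UDP encapsulation.
PACKETSIZE = 64                 # frame size taken from the .test file
size = PACKETSIZE - 4           # the generator works without the 4-byte FCS
ip_total_length = size - 14     # strip the Ethernet header: 2 x 6B MAC + 2B ethertype
udp_length = size - 34          # additionally strip the 20-byte IPv4 header
ip_len_offset = 14 + 2          # 16: offset of the IPv4 total-length field in the frame
udp_len_offset = 14 + 20 + 4    # 38: offset of the UDP length field in the frame
print("set_value(..., %d, %d, 2) and set_value(..., %d, %d, 2)"
      % (ip_len_offset, ip_total_length, udp_len_offset, udp_length))
# prints: set_value(..., 16, 46, 2) and set_value(..., 38, 26, 2)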
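
The flows table added to runrapid.py maps a requested flow count onto two 16-character bit masks, one per UDP port, where 'X' marks a bit PROX randomizes via set_random(). The sketch below shows how such a pair of masks can be derived for an exact power of two; flow_masks() is a hypothetical helper for illustration only — the scripts keep a hard-coded table, and its 65535 and 524280 keys are near-power-of-two shorthands this sketch does not reproduce.

def flow_masks(flow_count):
    # Build (source_mask, destination_mask) in the style of the flows table:
    # 16 characters per port, a fixed leading '1', 'X' for every randomized bit.
    assert flow_count & (flow_count - 1) == 0, "flow count must be a power of two"
    random_bits = flow_count.bit_length() - 1   # total number of bits to randomize
    dst_bits = (random_bits + 1) // 2           # the destination port gets the odd bit
    src_bits = random_bits - dst_bits
    mask = lambda n: '1' + '0' * (15 - n) + 'X' * n
    return mask(src_bits), mask(dst_bits)

# 512 flows -> 4 random bits in the source port, 5 in the destination port
print(flow_masks(512))   # ('100000000000XXXX', '10000000000XXXXX')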
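
The prox_ctrl.py rework lets lat_stats() take a tasks list and divides the average latency by the number of (core, task) pairs that actually returned statistics, instead of by len(cores), so querying a task that does not exist no longer skews the average. The aggregation is summarized below as a self-contained sketch; aggregate_lat() and its replies argument are illustrative stand-ins, not the prox_sock API.

def aggregate_lat(replies):
    # replies: one (min, max, avg) tuple per queried (core, task) pair,
    # or None when that task returned an error and must be ignored.
    # Assumes at least one task answered, like the reworked lat_stats().
    min_lat, max_lat, avg_sum, answered = 999999999, 0, 0, 0
    for reply in replies:
        if reply is None:
            continue
        answered += 1
        min_lat = min(reply[0], min_lat)
        max_lat = max(reply[1], max_lat)
        avg_sum += reply[2]
    # divide by the tasks that answered, not by the number of cores queried
    return min_lat, max_lat, float(avg_sum) / answered

# Two tasks answered, one query hit a non-existing task and is skipped
print(aggregate_lat([(10, 250, 40), None, (12, 300, 55)]))   # (10, 300, 47.5)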
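
In runrapid.py the --configonly option is now an argument-less switch: the getopt long-option spec drops the trailing '=' and the handler simply sets a boolean when the flag is present. The standalone sketch below shows that getopt pattern under the assumption of the same getopt-based parsing; the names and defaults are illustrative, not the script's full option set.

import getopt
import sys

configonly = False      # upload configuration files only, do not run the tests
runtime = 10            # seconds per test run

# '--configonly' has no trailing '=' so it takes no argument; '--runtime=' does.
opts, args = getopt.getopt(sys.argv[1:], "", ["configonly", "runtime="])
for opt, arg in opts:
    if opt in ["--configonly"]:
        configonly = True
    elif opt in ["--runtime"]:
        runtime = int(arg)
print("configonly=%s runtime=%d" % (configonly, runtime))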
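
The success test commented in run_flow_size_test() combines three thresholds from the BinarySearchParams section; the chained comparison abs_dropped == DROP_RATE_TRESHOLD == 0 is the zero-packet-loss case selected by putting 0 in the .test file. A compact restatement of that criterion is given below; test_passes() and its lower-case parameter names are illustrative, not part of the script.

def test_passes(drop_rate, abs_dropped, lat_avg, lat_max,
                drop_rate_threshold, lat_avg_threshold, lat_max_threshold):
    # A run succeeds when both latency figures stay under their thresholds and
    # either the drop rate is below the threshold, or the threshold is 0 and
    # not a single packet was dropped (what 'abs_dropped == DROP_RATE_TRESHOLD == 0'
    # expresses in run_flow_size_test).
    drop_ok = (drop_rate < drop_rate_threshold) or (abs_dropped == drop_rate_threshold == 0)
    return drop_ok and lat_avg < lat_avg_threshold and lat_max < lat_max_threshold

print(test_passes(0.0, 0, 120, 800, 0, 500, 1000))    # True: zero-loss criterion met
print(test_passes(0.2, 17, 120, 800, 0.1, 500, 1000)) # False: drop rate above threshold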