{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Metrics Analysis Notebook (local)\n",
    "\n",
    "#### Used to analyse / visualize the metrics when uploaded via csv file\n",
    "\n",
    "### Contributor:    Aditya Srivastava <adityasrivastava301199@gmail.com>\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from datetime import datetime\n",
    "import json\n",
    "import matplotlib.pyplot as plt\n",
    "import matplotlib.dates as mdates\n",
    "import numpy as np\n",
    "import os\n",
    "import pandas as pd\n",
    "from pprint import pprint\n",
    "import re\n",
    "import requests\n",
    "import time"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Helper Functions"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "DATETIME_FORMAT = \"%Y-%m-%d %H:%M:%S\"\n",
    "\n",
    "def convert_to_timestamp(s):\n",
    "    global DATETIME_FORMAT\n",
    "    return time.mktime(datetime.strptime(s, DATETIME_FORMAT).timetuple())\n",
    "\n",
    "def convert_to_time_string(epoch):\n",
    "    global DATETIME_FORMAT\n",
    "    t = datetime.fromtimestamp(float(epoch)/1000.)\n",
    "    return t.strftime(DATETIME_FORMAT)"
   ]
  },
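  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick round-trip check of the helpers (a sketch; `convert_to_timestamp` returns seconds, while `convert_to_time_string` expects milliseconds, as its division by 1000 suggests):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# seconds -> milliseconds before feeding the value back in\n",
    "ts_ms = convert_to_timestamp(\"2021-01-01 12:00:00\") * 1000\n",
    "print(ts_ms, \"->\", convert_to_time_string(ts_ms))"
   ]
  },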
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Note: \n",
    "    \n",
    "Path will be used as a parameter in almost every function\n",
    "\n",
    "path / rootdir / csv : (str) Path to the folder whose direct children are metric folders\n",
    "\n",
    "example: /path/to/folder\n",
    "\n",
    "When : \n",
    "```sh\n",
    "ls /path/to/folder\n",
    "\n",
    "# output should be directories such as\n",
    "# cpu-0 cpu-1 cpu-2  ..........................\n",
    "# processes-ovs-vswitchd ........processes-ovsdb-server\n",
    "```"
   ]
  },
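  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick sanity check of the layout (a sketch; assumes the metrics were extracted to `metrics_data/`, the default used throughout this notebook):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "rootdir = 'metrics_data/'  # assumption: the metric folders live here\n",
    "\n",
    "# direct children of rootdir should be the metric folders (cpu-0, interface-*, ...)\n",
    "for name in sorted(os.listdir(rootdir)):\n",
    "    if os.path.isdir(os.path.join(rootdir, name)):\n",
    "        print(name)"
   ]
  },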
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Analysis Function"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### CPU"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "rootdir = 'metrics_data/'\n",
    "\n",
    "def fetch_cpu_data(rootdir):\n",
    "    df = pd.DataFrame()\n",
    "    reg_compile = re.compile(\"cpu-\\d{1,2}\")\n",
    "    for dirpath, dirnames, filenames in os.walk(rootdir):\n",
    "        dirname = dirpath.split(os.sep)[-1] \n",
    "        if reg_compile.match(dirname):\n",
    "            # read 3 files from this folder...\n",
    "            _df = pd.DataFrame()\n",
    "            for file in filenames:\n",
    "                if 'user' in file:\n",
    "                    temp_df = pd.read_csv(dirpath + os.sep + file)\n",
    "                    _df['user'] = temp_df['value']\n",
    "                    _df['epoch'] = temp_df['epoch']\n",
    "\n",
    "                if 'system' in file:\n",
    "                    temp_df = pd.read_csv(dirpath + os.sep + file)\n",
    "                    _df['system'] = temp_df['value']\n",
    "                    _df['epoch'] = temp_df['epoch']\n",
    "\n",
    "                if 'idle' in file:\n",
    "                    temp_df = pd.read_csv(dirpath + os.sep + file)\n",
    "                    _df['idle'] = temp_df['value']\n",
    "                    _df['epoch'] = temp_df['epoch']\n",
    "\n",
    "            _df['cpu'] = dirname.split('-')[-1]\n",
    "\n",
    "            df = df.append(_df, ignore_index=True)\n",
    "\n",
    "    total = df['user'] + df['system'] + df['idle']\n",
    "\n",
    "    df['user_percentage'] = df['user']*100 / total\n",
    "    df['system_percentage'] = df['system']*100 / total\n",
    "    df['idle_percentage'] = df['idle']*100 / total\n",
    "    \n",
    "    return df\n"
   ]
  },
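  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Quick look at the combined per-core DataFrame (each row is one sample; the `cpu` column identifies the core):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "cpu_df = fetch_cpu_data(rootdir)\n",
    "cpu_df.head()"
   ]
  },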
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# CPU Unused Cores\n",
    "def unused_cores(rootdir, verbose=False):\n",
    "    \n",
    "    df = fetch_cpu_data(rootdir)\n",
    "    groups = df.groupby(['cpu'])\n",
    "    if verbose: print(\"Unused Cores :\")\n",
    "\n",
    "    unused_cores = []\n",
    "    for key, item in groups:\n",
    "        curr_df = item\n",
    "        unused_cores.append(key)\n",
    "        idle_values = curr_df.loc[curr_df['idle_percentage'] < 99.999]\n",
    "        if np.any(idle_values):\n",
    "            unused_cores.pop(-1)\n",
    "\n",
    "    unused_cores = set(unused_cores)\n",
    "    for key, item in groups:\n",
    "        if key not in unused_cores:\n",
    "            continue\n",
    "        fig = plt.figure(figsize=(24,6), facecolor='oldlace', edgecolor='red')\n",
    "\n",
    "        ax1 = fig.add_subplot(131)\n",
    "        ax1.title.set_text(\"System\")\n",
    "        ax1.plot(item['epoch'], item['system_percentage'])\n",
    "    \n",
    "        ax2 = fig.add_subplot(132)\n",
    "        ax2.title.set_text(\"User\")\n",
    "        ax2.plot(item['epoch'], item['user_percentage'])\n",
    "            \n",
    "        ax3 = fig.add_subplot(133)\n",
    "        ax3.title.set_text(\"Idle\")\n",
    "        ax3.plot(item['epoch'], item['idle_percentage'])\n",
    "\n",
    "        plt.suptitle('Used CPU Core {}'.format(key), fontsize=14)\n",
    "        plt.show()\n",
    "\n",
    "    print(\"Number of unused cores:   \", len(unused_cores))\n",
    "    return unused_cores\n",
    "\n",
    "\n",
    "#CPU fully used cores\n",
    "def fully_used_cores(rootdir, verbose=False):\n",
    "    \n",
    "\n",
    "    df = fetch_cpu_data(rootdir)\n",
    "    groups = df.groupby(['cpu'])\n",
    "    if verbose: print(\"Fully Used Cores :\")\n",
    "\n",
    "    fully_used_cores = []\n",
    "    for key, item in groups:\n",
    "        curr_df = item\n",
    "        idle_values = curr_df.loc[curr_df['idle_percentage'] <= 10]\n",
    "        if np.any(idle_values):\n",
    "            fully_used_cores.append(key)\n",
    "\n",
    "    fully_used_cores = set(fully_used_cores)\n",
    "    for key, item in groups:\n",
    "        if key not in fully_used_cores:\n",
    "            continue\n",
    "        fig = plt.figure(figsize=(24,6), facecolor='oldlace', edgecolor='red')\n",
    "\n",
    "        ax1 = fig.add_subplot(131)\n",
    "        ax1.title.set_text(\"System\")\n",
    "        ax1.plot(item['epoch'], item['system_percentage'])\n",
    "\n",
    "        ax2 = fig.add_subplot(132)\n",
    "        ax2.title.set_text(\"User\")\n",
    "        ax2.plot(item['epoch'], item['user_percentage'])\n",
    "\n",
    "        ax3 = fig.add_subplot(133)\n",
    "        ax3.title.set_text(\"Idle\")\n",
    "        ax3.plot(item['epoch'], item['idle_percentage'])\n",
    "\n",
    "        plt.suptitle('Used CPU Core {}'.format(key), fontsize=14)\n",
    "        plt.show()\n",
    "\n",
    "    print(\"Number of fully used cores:   \", len(fully_used_cores))\n",
    "    return fully_used_cores\n",
    "\n",
    "\n",
    "# CPU used cores plots\n",
    "def used_cores(rootdir, verbose=False):\n",
    "\n",
    "    df = fetch_cpu_data(rootdir)\n",
    "    groups = df.groupby(['cpu'])\n",
    "    if verbose: print(\"Used Cores :\")\n",
    "\n",
    "    used_cores = []\n",
    "    for key, item in groups:\n",
    "        curr_df = item\n",
    "        idle_values = curr_df.loc[curr_df['idle_percentage'] < 99.999]\n",
    "        if np.any(idle_values):\n",
    "            used_cores.append(key)\n",
    "\n",
    "    used_cores = set(used_cores)\n",
    "    for key, item in groups:\n",
    "        if key not in used_cores:\n",
    "            continue\n",
    "        fig = plt.figure(figsize=(24,6), facecolor='oldlace', edgecolor='red')\n",
    "\n",
    "        ax1 = fig.add_subplot(131)\n",
    "        ax1.title.set_text(\"System\")\n",
    "        ax1.plot(item['epoch'], item['system_percentage'])\n",
    "\n",
    "        ax2 = fig.add_subplot(132)\n",
    "        ax2.title.set_text(\"User\")\n",
    "        ax2.plot(item['epoch'], item['user_percentage'])\n",
    "\n",
    "        ax3 = fig.add_subplot(133)\n",
    "        ax3.title.set_text(\"Idle\")\n",
    "        ax3.plot(item['epoch'], item['idle_percentage'])\n",
    "\n",
    "        plt.suptitle('Used CPU Core {}'.format(key), fontsize=14)\n",
    "        plt.show()\n",
    "\n",
    "    print(\"Number of used cores:   \", len(used_cores))\n",
    "    return used_cores\n"
   ]
  },
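  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Example: classify the cores. Each call re-reads the CSVs and renders the system/user/idle plots for the matching cores:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "idle_cores = unused_cores(rootdir)\n",
    "busy_cores = fully_used_cores(rootdir)"
   ]
  },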
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Interface"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "rootdir = 'metrics_data/'\n",
    "\n",
    "def fetch_interfaces_data(rootdir):\n",
    "\n",
    "    df = pd.DataFrame()\n",
    "    reg_compile = re.compile(\"interface-.*\")\n",
    "    for dirpath, dirnames, filenames in os.walk(rootdir):\n",
    "        dirname = dirpath.split(os.sep)[-1] \n",
    "        if reg_compile.match(dirname):\n",
    "            # read 3 files from this folder...\n",
    "            _df = pd.DataFrame()\n",
    "            for file in filenames:\n",
    "                if 'errors' in file:\n",
    "                    temp_df = pd.read_csv(dirpath + os.sep + file)\n",
    "                    _df['error_rx'] = temp_df['rx']\n",
    "                    _df['error_tx'] = temp_df['tx']\n",
    "                    _df['epoch'] = temp_df['epoch']\n",
    "\n",
    "                if 'dropped' in file:\n",
    "                    temp_df = pd.read_csv(dirpath + os.sep + file)\n",
    "                    _df['dropped_rx'] = temp_df['rx']\n",
    "                    _df['dropped_tx'] = temp_df['tx']\n",
    "                    _df['epoch'] = temp_df['epoch']\n",
    "\n",
    "            _df['interface'] = '-'.join(dirname.split('-')[1:])\n",
    "            df = df.append(_df, ignore_index=True)\n",
    "    return df\n"
   ]
  },
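  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Preview the per-interface samples (one `interface` value per `interface-*` folder):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "iface_df = fetch_interfaces_data(rootdir)\n",
    "iface_df.head()"
   ]
  },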
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Interface Dropped (both type 1 and 2, i.e rx and tx)\n",
    "def interface_dropped(rootdir, verbose=False):\n",
    "        \n",
    "    df = fetch_interfaces_data(rootdir)\n",
    "    group = df.groupby(['interface'])\n",
    "    color = ['oldlace', 'mistyrose']\n",
    "\n",
    "    dropped = {'rx':[], 'tx':[]}\n",
    "\n",
    "    itr = 0\n",
    "    for key, item in group:\n",
    "        curr_df = item\n",
    "\n",
    "        if np.any(curr_df['dropped_rx'] == 1):\n",
    "            dropped_rows = curr_df[curr_df['dropped_rx'] == 1]\n",
    "            dropped['rx'].append([key, dropped_row['epoch'].iloc[0]])\n",
    "        if np.any(curr_df['dropped_tx'] == 1):\n",
    "            dropped_rows = curr_df[curr_df['dropped_tx'] == 1]\n",
    "            dropped['tx'].append([key, dropped_row['epoch'].iloc[0]])\n",
    "\n",
    "        fig = plt.figure(figsize=(24,6), facecolor=color[itr%2], edgecolor='red')\n",
    "        ax = fig.add_subplot(211)\n",
    "        ax.title.set_text(\"Interface: {} Dropped (rx)\".format(key))\n",
    "        ax.plot(item['epoch'], item['dropped_rx'])\n",
    "\n",
    "        ax1 = fig.add_subplot(212)\n",
    "        ax1.title.set_text(\"Interface: {} Dropped (tx)\".format(key))\n",
    "        ax1.plot(item['epoch'], item['dropped_tx'])\n",
    "\n",
    "        itr += 1\n",
    "\n",
    "    plt.suptitle('Interface Dropped', fontsize=14)\n",
    "    plt.show()\n",
    "\n",
    "    return dropped\n",
    "\n",
    "\n",
    "# Interface Errors (both type 1 and 2, i.e rx and tx)\n",
    "def interface_errors(rootdir, verbose=False):\n",
    "        \n",
    "    df = fetch_interfaces_data(rootdir)\n",
    "    group = df.groupby(['interface'])\n",
    "    color = ['oldlace', 'mistyrose']\n",
    "\n",
    "    errors = {'rx':[], 'tx':[]}\n",
    "\n",
    "    itr = 0\n",
    "    for key, item in group:\n",
    "        curr_df = item\n",
    "\n",
    "        if np.any(curr_df['error_rx'] == 1):\n",
    "            err_rows = curr_df[curr_df['error_rx'] == 1]\n",
    "            errors['rx'].append([key, err_row['epoch'].iloc[0]])\n",
    "        if np.any(curr_df['error_tx'] == 1):\n",
    "            err_rows = curr_df[curr_df['error_tx'] == 1]\n",
    "            errors['tx'].append([key, err_row['epoch'].iloc[0]])\n",
    "\n",
    "        fig = plt.figure(figsize=(24,6), facecolor=color[itr%2], edgecolor='red')\n",
    "        ax = fig.add_subplot(211)\n",
    "        ax.title.set_text(\"Interface: {} Errors (rx)\".format(key))\n",
    "        ax.plot(item['epoch'], item['error_rx'])\n",
    "\n",
    "        ax1 = fig.add_subplot(212)\n",
    "        ax1.title.set_text(\"Interface: {} Errors (tx)\".format(key))\n",
    "        ax1.plot(item['epoch'], item['error_tx'])\n",
    "\n",
    "        itr += 1\n",
    "\n",
    "    plt.suptitle('Interface Erros', fontsize=14)\n",
    "    plt.show()\n",
    "\n",
    "    return errors\n"
   ]
  },
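  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Example: plot drops and errors for every interface. Both functions also return, per direction, the interfaces that reported a counter hit along with the first offending epoch:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "dropped = interface_dropped(rootdir)\n",
    "errors = interface_errors(rootdir)"
   ]
  },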
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### OVS Stats (Non DPDK)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "rootdir = 'metrics_data/'\n",
    "\n",
    "def fetch_ovs_stats_data(rootdir):\n",
    "    df = pd.DataFrame()\n",
    "    reg_compile = re.compile(\"ovs_stats-.*\")\n",
    "    for dirpath, dirnames, filenames in os.walk(rootdir):\n",
    "        dirname = dirpath.split(os.sep)[-1] \n",
    "        if reg_compile.match(dirname):\n",
    "            if 'dpdk' in dirname:\n",
    "                continue #ignoring dpdk\n",
    "\n",
    "            _df = pd.DataFrame()\n",
    "            for file in filenames:\n",
    "                if 'errors' in file:\n",
    "                    col_name = '-'.join(file.split('_')[1:])\n",
    "                    temp_df = pd.read_csv(dirpath + os.sep + file)\n",
    "\n",
    "                    _df['epoch'] = temp_df['epoch']\n",
    "                    temp_df = temp_df.drop(['epoch'], axis=1)\n",
    "                    new_cols = [i + '_' + col_name for i in temp_df.columns]\n",
    "                    _df[new_cols] = temp_df\n",
    "\n",
    "                if 'dropped' in file:\n",
    "                    col_name = '-'.join(file.split('_')[1:])\n",
    "                    temp_df = pd.read_csv(dirpath + os.sep + file)\n",
    "                    _df['epoch'] = temp_df['epoch']\n",
    "                    temp_df = temp_df.drop(['epoch'], axis=1)\n",
    "                    new_cols = [i + '_' + col_name for i in temp_df.columns]\n",
    "                    _df[new_cols] = temp_df            \n",
    "            _df['interface'] = '-'.join(dirname.split('-')[1:])\n",
    "            df = df.append(_df, ignore_index=True)\n",
    "    return df\n"
   ]
  },
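  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Preview the OVS stats DataFrame (DPDK ports are excluded here and handled in the DPDK section below):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "ovs_df = fetch_ovs_stats_data(rootdir)\n",
    "ovs_df.head()"
   ]
  },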
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def ovs_stats_dropped(rootdir, verbose=False):\n",
    "    \n",
    "    df = fetch_ovs_stats_data(rootdir)\n",
    "    group = df.groupby(['interface'])\n",
    "    color = ['oldlace', 'mistyrose']\n",
    "\n",
    "    i = 0\n",
    "    for key, item in group:\n",
    "        curr_df = item\n",
    "        for col in curr_df:\n",
    "            if 'dropped' in col:\n",
    "                if item[col].isnull().all():\n",
    "                    continue\n",
    "                fig = plt.figure(figsize=(24,6), facecolor=color[i%2], edgecolor='red')\n",
    "                plt.plot(item['epoch'], item[col])\n",
    "                plt.title(\"Interface: {} Dropped {}\".format(key, col))\n",
    "        i += 1\n",
    "    plt.show()\n",
    "    return\n",
    "\n",
    "\n",
    "# Interface Errors (both type 1 and 2, i.e rx and tx)\n",
    "def ovs_stats_errors(rootdir, verbose=False):\n",
    "\n",
    "\n",
    "    df = fetch_ovs_stats_data(rootdir)\n",
    "    group = df.groupby(['interface'])\n",
    "    color = ['oldlace', 'mistyrose']\n",
    "\n",
    "    i = 0\n",
    "    for key, item in group:\n",
    "        curr_df = item\n",
    "        for col in curr_df:\n",
    "            if 'error' in col:\n",
    "                if item[col].isnull().all():\n",
    "                    continue\n",
    "                fig = plt.figure(figsize=(24,6), facecolor=color[i%2], edgecolor='red')\n",
    "                plt.plot(item['epoch'], item[col])\n",
    "                plt.title(\"Interface: {} Errors {}\".format(key, col))\n",
    "        i += 1\n",
    "    plt.show()"
   ]
  },
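  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Example: render the OVS drop and error plots:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "ovs_stats_dropped(rootdir)\n",
    "ovs_stats_errors(rootdir)"
   ]
  },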
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### DPDK"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "rootdir = 'metrics_data/'\n",
    "\n",
    "def fetch_dpdk_data(rootdir):\n",
    "    df = pd.DataFrame()\n",
    "    reg_compile = re.compile(\".*dpdk.*\")\n",
    "    for dirpath, dirnames, filenames in os.walk(rootdir):\n",
    "        dirname = dirpath.split(os.sep)[-1] \n",
    "        if reg_compile.match(dirname):\n",
    "            _df = pd.DataFrame()\n",
    "            for file in filenames:\n",
    "                if 'errors' in file:\n",
    "                    col_name = '-'.join(file.split('_')[1:])\n",
    "                    temp_df = pd.read_csv(dirpath + os.sep + file)\n",
    "\n",
    "                    _df['epoch'] = temp_df['epoch']\n",
    "                    temp_df = temp_df.drop(['epoch'], axis=1)\n",
    "                    new_cols = [i + '_' + col_name for i in temp_df.columns]\n",
    "                    _df[new_cols] = temp_df\n",
    "\n",
    "                if 'dropped' in file:\n",
    "                    col_name = '-'.join(file.split('_')[1:])\n",
    "                    temp_df = pd.read_csv(dirpath + os.sep + file)\n",
    "                    _df['epoch'] = temp_df['epoch']\n",
    "                    temp_df = temp_df.drop(['epoch'], axis=1)\n",
    "                    new_cols = [i + '_' + col_name for i in temp_df.columns]\n",
    "                    _df[new_cols] = temp_df            \n",
    "            _df['dpdk'] = '-'.join(dirname.split('-')[1:])\n",
    "            df = df.append(_df, ignore_index=True)\n",
    "    return df\n"
   ]
  },
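  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Preview the DPDK samples (the call below returns an empty DataFrame if no `*dpdk*` folders are present):"
   ]
  },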
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "fetch_dpdk_data(rootdir)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def dpdk_dropped(rootdir, verbose=False):\n",
    "    \n",
    "    df = fetch_dpdk_data(rootdir)\n",
    "    group = df.groupby(['dpdk'])\n",
    "    color = ['oldlace', 'mistyrose']\n",
    "\n",
    "    i = 0\n",
    "    for key, item in group:\n",
    "        curr_df = item\n",
    "        for col in curr_df:\n",
    "            if 'dropped' in col:\n",
    "                if item[col].isnull().all():\n",
    "                    continue\n",
    "                fig = plt.figure(figsize=(24,6), facecolor=color[i%2], edgecolor='red')\n",
    "                plt.plot(item['epoch'], item[col])\n",
    "                plt.title(\"DpDK: {} Dropped {}\".format(key, col))\n",
    "        i += 1\n",
    "    plt.show()\n",
    "    return\n",
    "\n",
    "\n",
    "# Interface Errors (both type 1 and 2, i.e rx and tx)\n",
    "def dpdk_errors(rootdir, verbose=False):\n",
    "\n",
    "\n",
    "    df = fetch_dpdk_data(rootdir)\n",
    "    group = df.groupby(['dpdk'])\n",
    "    color = ['oldlace', 'mistyrose']\n",
    "\n",
    "    i = 0\n",
    "    for key, item in group:\n",
    "        curr_df = item\n",
    "        for col in curr_df:\n",
    "            if 'error' in col:\n",
    "                if item[col].isnull().all():\n",
    "                    continue\n",
    "                fig = plt.figure(figsize=(24,6), facecolor=color[i%2], edgecolor='red')\n",
    "                plt.plot(item['epoch'], item[col])\n",
    "                plt.title(\"DpDK: {} Errors {}\".format(key, col))\n",
    "        i += 1\n",
    "    plt.show()"
   ]
  },
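  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Render the DPDK dropped-packet plots; `dpdk_errors(rootdir)` works the same way for the error counters:"
   ]
  },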
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "dpdk_dropped(rootdir)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### RDT  (need to be testes)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "rootdir = 'metrics_data/'\n",
    "\n",
    "def fetch_rdt_data(rootdir):\n",
    "    df = pd.DataFrame()\n",
    "    reg_compile = re.compile(\".*rdt.*\")\n",
    "    for dirpath, dirnames, filenames in os.walk(rootdir):\n",
    "        dirname = dirpath.split(os.sep)[-1] \n",
    "        if reg_compile.match(dirname):\n",
    "            _df = pd.DataFrame()\n",
    "            for file in filenames:\n",
    "                if 'bytes' in file:\n",
    "                    col_name = '-'.join(file.split('_')[1:])\n",
    "                    temp_df = pd.read_csv(dirpath + os.sep + file)\n",
    "\n",
    "                    _df['epoch'] = temp_df['epoch']\n",
    "                    temp_df = temp_df.drop(['epoch'], axis=1)\n",
    "                    new_cols = [i + '_' + col_name for i in temp_df.columns]\n",
    "                    _df[new_cols] = temp_df\n",
    "                    \n",
    "                if 'bandwidth' in file:\n",
    "                    col_name = '-'.join(file.split('_')[1:])\n",
    "                    temp_df = pd.read_csv(dirpath + os.sep + file)\n",
    "\n",
    "                    _df['epoch'] = temp_df['epoch']\n",
    "                    temp_df = temp_df.drop(['epoch'], axis=1)\n",
    "                    new_cols = [i + '_' + col_name for i in temp_df.columns]\n",
    "                    _df[new_cols] = temp_df\n",
    "\n",
    "                if 'ipc' in file:\n",
    "                    col_name = '-'.join(file.split('_')[1:])\n",
    "                    temp_df = pd.read_csv(dirpath + os.sep + file)\n",
    "                    _df['epoch'] = temp_df['epoch']\n",
    "                    temp_df = temp_df.drop(['epoch'], axis=1)\n",
    "                    new_cols = [i + '_' + col_name for i in temp_df.columns]\n",
    "                    _df[new_cols] = temp_df            \n",
    "            _df['intel_rdt'] = '-'.join(dirname.split('-')[1:])\n",
    "            df = df.append(_df, ignore_index=True)\n",
    "    return df\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# L3 cache bytes\n",
    "def plot_rdt_bytes(start=None, end=None, node=None, steps='15s', csv=None, verbose=False):\n",
    "    \n",
    "    df = fetch_rdt_data(rootdir)\n",
    "    group = df.groupby(['intel_rdt'])\n",
    "    color = ['oldlace', 'mistyrose']\n",
    "\n",
    "    i = 0\n",
    "    for key, item in group:\n",
    "        curr_df = item\n",
    "        for col in curr_df:\n",
    "            if 'bytes' in col:\n",
    "                if item[col].isnull().all():\n",
    "                    continue\n",
    "                fig = plt.figure(figsize=(24,6), facecolor=color[i%2], edgecolor='red')\n",
    "                plt.plot(item['epoch'], item[col])\n",
    "                plt.title(\"RDT BYTES, RDT: {}\".format(key, col))\n",
    "        i += 1\n",
    "    plt.show()\n",
    "\n",
    "\n",
    "# L3 IPC values\n",
    "def plot_rdt_ipc(start=None, end=None, node=None, steps='15s', csv=None, verbose=False):\n",
    "    \n",
    "    \n",
    "    df = fetch_rdt_data(rootdir)\n",
    "    group = df.groupby(['intel_rdt'])\n",
    "    color = ['oldlace', 'mistyrose']\n",
    "\n",
    "    i = 0\n",
    "    for key, item in group:\n",
    "        curr_df = item\n",
    "        for col in curr_df:\n",
    "            if 'ipc' in col:\n",
    "                if item[col].isnull().all():\n",
    "                    continue\n",
    "                fig = plt.figure(figsize=(24,6), facecolor=color[i%2], edgecolor='red')\n",
    "                plt.plot(item['epoch'], item[col])\n",
    "                plt.title(\"RDT IPC, RDT: {}\".format(key, col))\n",
    "        i += 1\n",
    "    plt.show()\n",
    "\n",
    "\n",
    "\n",
    "# memeory bandwidtdh\n",
    "def get_rdt_memory_bandwidth(start=None, end=None, node=None, steps='15s', csv=None, verbose=False):\n",
    "    \n",
    "        \n",
    "    df = fetch_rdt_data(rootdir)\n",
    "    group = df.groupby(['intel_rdt'])\n",
    "    color = ['oldlace', 'mistyrose']\n",
    "\n",
    "    i = 0\n",
    "    for key, item in group:\n",
    "        curr_df = item\n",
    "        for col in curr_df:\n",
    "            if 'bandwidht' in col:\n",
    "                if item[col].isnull().all():\n",
    "                    continue\n",
    "                fig = plt.figure(figsize=(24,6), facecolor=color[i%2], edgecolor='red')\n",
    "                plt.plot(item['epoch'], item[col])\n",
    "                plt.title(\"RDT Memory Bandwidht, RDT: {}\".format(key, col))\n",
    "        i += 1\n",
    "    plt.show()\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Memory (following functions still need to written for csv)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "rootdir = 'metrics_data/'\n",
    "\n",
    "def fetch_memory_data(rootdir):\n",
    "    df = pd.DataFrame()\n",
    "    reg_compile = re.compile(\"memory\")\n",
    "    for dirpath, dirnames, filenames in os.walk(rootdir):\n",
    "        dirname = dirpath.split(os.sep)[-1] \n",
    "        if reg_compile.match(dirname):\n",
    "            print(dirname)\n",
    "            _df = pd.DataFrame()\n",
    "            for file in filenames:                \n",
    "                col_name = file.split('-')[1]\n",
    "                temp_df = pd.read_csv(dirpath + os.sep + file)\n",
    "                _df['epoch'] = temp_df['epoch']\n",
    "                temp_df = temp_df.drop(['epoch'], axis=1)\n",
    "                new_cols = [col_name for i in temp_df.columns]\n",
    "                _df[new_cols] = temp_df\n",
    "            df = df.append(_df, ignore_index=True)\n",
    "    return df"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "def get_memory_usage(rootdir, verbose=False):\n",
    "    df = fetch_memory_data(rootdir)\n",
    "    color = ['oldlace', 'mistyrose']\n",
    "    i = 0\n",
    "    for col in df:\n",
    "        if df[col].isnull().all():\n",
    "            continue\n",
    "        fig = plt.figure(figsize=(24,6), facecolor=color[i%2], edgecolor='red')\n",
    "        plt.plot(df['epoch'], df[col])\n",
    "        plt.title(\"{} Memory\".format(col))\n",
    "        i += 1\n",
    "    plt.show()\n",
    "\n"
   ]
  },
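  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Example: plot every non-empty memory series:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "get_memory_usage(rootdir)"
   ]
  },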
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Usage / Examples\n",
    "\n",
    "\n",
    "##### CPU \n",
    "\n",
    "- For calling cpu unsued cores\n",
    "\n",
    "```py\n",
    "cores = unused_cores(rootdir='metrics_data')\n",
    "```\n",
    "\n",
    "- For finding fully used cores\n",
    "\n",
    "```py\n",
    "fully_used = fully_used_cores('metrics_data')\n",
    "```\n",
    "\n",
    "- Similarly for plotting used cores\n",
    "\n",
    "```py\n",
    "plot_used_cores(csv='metrics_data')\n",
    "```\n",
    "\n",
    "\n",
    "##### Interface\n",
    "\n",
    "- Interface Dropped  \n",
    "\n",
    "```py\n",
    "# Using CSV\n",
    "dropped_interfaces = interface_dropped('metrics_data')\n",
    "```\n",
    "\n",
    "- Interface Errors\n",
    "\n",
    "```py\n",
    "# Using CSV\n",
    "interface_errors('metrics_data')\n",
    "```\n",
    "\n",
    "##### OVS Stats\n",
    "\n",
    "- OVS Stats Dropped  \n",
    "\n",
    "```py\n",
    "# Using CSV\n",
    "ovs_stats_dropped('metrics_data')\n",
    "```\n",
    "\n",
    "- OVS Stats Errors\n",
    "\n",
    "```py\n",
    "# Using CSV\n",
    "ovs_stats_errors('metrics_data')\n",
    "```\n",
    "\n",
    "##### DPDK \n",
    "\n",
    "- DPDK Dropped  \n",
    "\n",
    "```py\n",
    "# Using CSV\n",
    "dpdk_dropped('metrics_data')\n",
    "```\n",
    "\n",
    "- DPDK Errors\n",
    "\n",
    "```py\n",
    "# Using CSV\n",
    "dpdk_errors('metrics_data')\n",
    "```\n",
    "\n",
    "\n",
    "\n",
    "##### RDT (Do not run yet)\n",
    "\n",
    "- Plot bytes\n",
    "\n",
    "```py\n",
    "#csv\n",
    "plot_rdt_bytes('metrics_data')\n",
    "```\n",
    "\n",
    "- Plot ipc values\n",
    "\n",
    "```py\n",
    "#csv\n",
    "plot_rdt_ipc('metrics_data')\n",
    "```\n",
    "\n",
    "- Memory bandwidth\n",
    "\n",
    "```py\n",
    "#csv\n",
    "get_rdt_memory_bandwidth('metrics_data')\n",
    "```\n",
    "\n",
    "##### Memory\n",
    "\n",
    "```py\n",
    "#csv\n",
    "get_memory_usage('metrics_data')\n",
    "```"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}