.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) Bin Hu (AT&T) and Sridhar Gaddam (RedHat)

===============================================================
IPv6 Configuration - Setting Up a Service VM as an IPv6 vRouter
===============================================================

This section provides instructions to set up a service VM as an IPv6 vRouter using OPNFV Colorado Release
installers. The environment may be a pure OpenStack environment or an OpenStack with Open Daylight L2-only
environment. The deployment model may be HA or non-HA. The infrastructure may be bare metal or a virtual environment.

For complete instructions and documentation on setting up a service VM as an IPv6 vRouter using ANY method,
please refer to:

1. IPv6 Configuration Guide (HTML): http://artifacts.opnfv.org/ipv6/docs/setupservicevm/index.html
2. IPv6 User Guide (HTML): http://artifacts.opnfv.org/ipv6/docs/gapanalysis/index.html

****************************
Pre-configuration Activities
****************************

The configuration will work in two environments:

1. OpenStack-only environment
2. OpenStack with Open Daylight L2-only environment

Depending on which installer is used to deploy OPNFV, each environment may be deployed
on bare metal or virtualized infrastructure. Each deployment may be HA or non-HA.

Refer to the previous installer configuration chapters, installation guides and release notes.

******************************************
Setup Manual in OpenStack-Only Environment
******************************************

If you intend to set up a service VM as an IPv6 vRouter in an OpenStack-only environment of
OPNFV Colorado Release, please **NOTE** that:

* Because the anti-spoofing rules of the Security Group feature in OpenStack prevent
  a VM from forwarding packets, we need to disable the Security Group feature in the
  OpenStack-only environment.
* The hostnames, IP addresses, and usernames used in the instructions are examples.
  Please change them as needed to fit your environment.
* The instructions apply to both the single controller node deployment model and the
  HA (High Availability) deployment model where multiple controller nodes are used.

-----------------------------
Install OPNFV and Preparation
-----------------------------

**OPNFV-NATIVE-INSTALL-1**: To install an OpenStack-only environment of OPNFV Colorado Release:

**Apex Installer**:

.. code-block:: bash

    # HA, Virtual deployment in OpenStack-only environment
    ./opnfv-deploy -v -d /etc/opnfv-apex/os-nosdn-nofeature-ha.yaml \
    -n /etc/opnfv-apex/network_setting.yaml

    # HA, Bare Metal deployment in OpenStack-only environment
    ./opnfv-deploy -d /etc/opnfv-apex/os-nosdn-nofeature-ha.yaml \
    -i <inventory file> -n /etc/opnfv-apex/network_setting.yaml

    # Non-HA, Virtual deployment in OpenStack-only environment
    ./opnfv-deploy -v -d /etc/opnfv-apex/os-nosdn-nofeature-noha.yaml \
    -n /etc/opnfv-apex/network_setting.yaml

    # Non-HA, Bare Metal deployment in OpenStack-only environment
    ./opnfv-deploy -d /etc/opnfv-apex/os-nosdn-nofeature-noha.yaml \
    -i <inventory file> -n /etc/opnfv-apex/network_setting.yaml

    # Note:
    #
    # 1. Parameter "-v" is mandatory for Virtual deployment
    # 2. Parameter "-i <inventory file>" is mandatory for Bare Metal deployment
    # 2.1 Refer to https://git.opnfv.org/cgit/apex/tree/config/inventory for examples of inventory file
    # 3. You can use "-n /etc/opnfv-apex/network_setting_v6.yaml" for deployment in IPv6-only infrastructure

**Compass** Installer:

.. code-block:: bash

    # HA deployment in OpenStack-only environment
    export ISO_URL=file://$BUILD_DIRECTORY/compass.iso
    export OS_VERSION=${{COMPASS_OS_VERSION}}
    export OPENSTACK_VERSION=${{COMPASS_OPENSTACK_VERSION}}
    export CONFDIR=$WORKSPACE/deploy/conf/vm_environment
    ./deploy.sh --dha $CONFDIR/os-nosdn-nofeature-ha.yml \
    --network $CONFDIR/$NODE_NAME/network.yml

    # Non-HA deployment in OpenStack-only environment
    # Non-HA deployment is currently not supported by Compass installer

**Fuel** Installer:

.. code-block:: bash

    # HA deployment in OpenStack-only environment
    # Scenario Name: os-nosdn-nofeature-ha
    # Scenario Configuration File: ha_heat_ceilometer_scenario.yaml
    # You can use either Scenario Name or Scenario Configuration File Name in "-s" parameter
    sudo ./deploy.sh -b <stack-config-uri> -l <lab-name> -p <pod-name> \
    -s os-nosdn-nofeature-ha -i <iso-uri>

    # Non-HA deployment in OpenStack-only environment
    # Scenario Name: os-nosdn-nofeature-noha
    # Scenario Configuration File: no-ha_heat_ceilometer_scenario.yaml
    # You can use either Scenario Name or Scenario Configuration File Name in "-s" parameter
    sudo ./deploy.sh -b <stack-config-uri> -l <lab-name> -p <pod-name> \
    -s os-nosdn-nofeature-noha -i <iso-uri>

    # Note:
    #
    # 1. Refer to http://git.opnfv.org/cgit/fuel/tree/deploy/scenario/scenario.yaml for scenarios
    # 2. Refer to http://git.opnfv.org/cgit/fuel/tree/ci/README for description of
    #    stack configuration directory structure
    # 3. <stack-config-uri> is the base URI of stack configuration directory structure
    # 3.1 Example: http://git.opnfv.org/cgit/fuel/tree/deploy/config
    # 4. <lab-name> and <pod-name> must match the directory structure in stack configuration
    # 4.1 Example of <lab-name>: -l devel-pipeline
    # 4.2 Example of <pod-name>: -p elx
    # 5. <iso-uri> could be local or remote ISO image of Fuel Installer
    # 5.1 Example: http://artifacts.opnfv.org/fuel/colorado/opnfv-colorado.1.0.iso
    #
    # Please refer to Fuel Installer's documentation for further information and any update

**Joid** Installer:

.. code-block:: bash

    # HA deployment in OpenStack-only environment
    ./deploy.sh -o mitaka -s nosdn -t ha -l default -f ipv6

    # Non-HA deployment in OpenStack-only environment
    ./deploy.sh -o mitaka -s nosdn -t nonha -l default -f ipv6

Please **NOTE** that:

* You need to refer to the **installer's documentation** for other necessary
  parameters applicable to your deployment.
* You need to refer to the **Release Notes** and the **installer's documentation** if there is
  any issue during installation.

**OPNFV-NATIVE-INSTALL-2**: Clone the following GitHub repository to get the
configuration and metadata files

.. code-block:: bash

    git clone https://github.com/sridhargaddam/opnfv_os_ipv6_poc.git \
    /opt/stack/opnfv_os_ipv6_poc

----------------------------------------------
Disable Security Groups in OpenStack ML2 Setup
----------------------------------------------

**OPNFV-NATIVE-SEC-1**: Change the settings in
``/etc/neutron/plugins/ml2/ml2_conf.ini`` as follows

.. code-block:: bash

    # /etc/neutron/plugins/ml2/ml2_conf.ini
    [securitygroup]
    extension_drivers = port_security
    enable_security_group = False
    firewall_driver = neutron.agent.firewall.NoopFirewallDriver

**OPNFV-NATIVE-SEC-2**: Change the settings in ``/etc/nova/nova.conf`` as follows

.. code-block:: bash

    # /etc/nova/nova.conf
    [DEFAULT]
    security_group_api = nova
    firewall_driver = nova.virt.firewall.NoopFirewallDriver

**OPNFV-NATIVE-SEC-3**: After updating the settings, you will have to restart the
``Neutron`` and ``Nova`` services.

**Please note that the commands for restarting** ``Neutron`` **and** ``Nova`` **vary
depending on the installer. Please refer to the relevant documentation of the specific installer**.
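
For example, on a systemd-based controller or compute node the restart could look like the
following. These service names are only an illustration and differ between distributions and
installers; check your installer's documentation for the exact names.

.. code-block:: bash

    # Neutron services on the controller node (names are distribution-dependent)
    sudo systemctl restart neutron-server
    sudo systemctl restart neutron-openvswitch-agent

    # Nova compute service on each compute node
    sudo systemctl restart openstack-nova-compute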

---------------------------------
Set Up Service VM as IPv6 vRouter
---------------------------------

**OPNFV-NATIVE-SETUP-1**: Now we assume that the OpenStack multi-node setup is up and running.
In this step, we have to source the tenant credentials on the OpenStack controller node.
Please **NOTE** that the method of sourcing tenant credentials may vary depending on the installer.
For example:

**Apex** installer:

.. code-block:: bash

    # On jump host, source the tenant credentials using /bin/opnfv-util provided by Apex installer
    opnfv-util undercloud "source overcloudrc; keystone service-list"

    # Alternatively, you can copy the file /home/stack/overcloudrc from the installer VM called "undercloud"
    # to a location in controller node, for example, in the directory /opt, and do:
    # source /opt/overcloudrc

**Compass** installer:

.. code-block:: bash

    # source the tenant credentials using Compass installer of OPNFV
    source /opt/admin-openrc.sh

**Fuel** installer:

.. code-block:: bash

    # source the tenant credentials using Fuel installer of OPNFV
    source /root/openrc

**Joid** installer:

.. code-block:: bash

    # source the tenant credentials using Joid installer of OPNFV
    source $HOME/joid_config/admin-openrc

**devstack**:

.. code-block:: bash

    # source the tenant credentials in devstack
    source openrc admin demo

**Please refer to the relevant documentation of the installers if you encounter any issue**.
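
To confirm that the credentials have been sourced correctly, any simple read-only API call
should succeed, for example (illustrative only):

.. code-block:: bash

    # An authentication or endpoint error here usually means the credentials
    # were not sourced correctly
    neutron net-list
    nova list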

**OPNFV-NATIVE-SETUP-2**: Download the ``Fedora22`` image, which will be used for the ``vRouter``

.. code-block:: bash

    wget https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/\
    Images/Fedora-Cloud-Base-22-20150521.x86_64.qcow2

**OPNFV-NATIVE-SETUP-3**: Import the ``Fedora22`` image into ``glance``

.. code-block:: bash

    glance image-create --name 'Fedora22' --disk-format qcow2 --container-format bare \
    --file ./Fedora-Cloud-Base-22-20150521.x86_64.qcow2

**OPNFV-NATIVE-SETUP-4**: **This step is informational. The OPNFV installer has taken care of this step
during deployment. You may refer to this step only if there is any issue, or if you are using other installers**.

We have to move the physical interface (i.e. the public network interface) to ``br-ex``, including moving
the public IP address and setting up a default route. Please refer to ``OS-NATIVE-SETUP-4`` and
``OS-NATIVE-SETUP-5`` in our `more complete instruction <http://artifacts.opnfv.org/ipv6/docs/setupservicevm/5-ipv6-configguide-scenario-1-native-os.html#set-up-service-vm-as-ipv6-vrouter>`_.
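
As an illustration only, moving the public interface to ``br-ex`` typically involves commands
similar to the following. The interface name ``eth2`` and the addresses are assumptions for
this example; the installer normally performs this step for you.

.. code-block:: bash

    # Assumed example values: public interface eth2, public address 198.59.156.113/24,
    # default gateway 198.59.156.1 -- adjust to your environment
    sudo ovs-vsctl add-port br-ex eth2
    sudo ip addr del 198.59.156.113/24 dev eth2
    sudo ip addr add 198.59.156.113/24 dev br-ex
    sudo ip link set br-ex up
    sudo ip route add default via 198.59.156.1 dev br-ex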

**OPNFV-NATIVE-SETUP-5**: Create Neutron routers ``ipv4-router`` and ``ipv6-router``
which need to provide external connectivity.

.. code-block:: bash

    neutron router-create ipv4-router
    neutron router-create ipv6-router

**OPNFV-NATIVE-SETUP-6**: Create an external network/subnet ``ext-net`` using
the appropriate values based on the data-center physical network setup.

Please **NOTE** that you may only need to create the subnet of ``ext-net`` because OPNFV installers
should have created an external network during installation. When you create the subnet, you must use
the same name as the external network that the installer creates. For example:

* **Apex** installer: ``external``
* **Compass** installer: ``ext-net``
* **Fuel** installer: ``admin_floating_net``
* **Joid** installer: ``ext-net``

**Please refer to the documentation of installers if there is any issue**
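
If you are not sure which external network your installer created, you can list the external
networks first, for example with the following command (assuming the ``neutron`` CLI command
``net-external-list`` is available in your release):

.. code-block:: bash

    # Display the external network(s) created by the installer;
    # use that name in the subnet-create and router-gateway-set commands below
    neutron net-external-list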

.. code-block:: bash

    # This is needed only if the installer does not create an external network
    # Otherwise, skip this command "net-create"
    neutron net-create --router:external ext-net

    # Note that the name "ext-net" may work for some installers such as Compass and Joid
    # Change the name "ext-net" to match the name of external network that an installer creates
    neutron subnet-create --disable-dhcp --allocation-pool start=198.59.156.251,\
    end=198.59.156.254 --gateway 198.59.156.1 ext-net 198.59.156.0/24

**OPNFV-NATIVE-SETUP-7**: Create Neutron networks ``ipv4-int-network1`` and
``ipv6-int-network2`` with port_security disabled

.. code-block:: bash

    neutron net-create --port_security_enabled=False ipv4-int-network1
    neutron net-create --port_security_enabled=False ipv6-int-network2

**OPNFV-NATIVE-SETUP-8**: Create IPv4 subnet ``ipv4-int-subnet1`` in the internal network
``ipv4-int-network1``, and associate it to ``ipv4-router``.

.. code-block:: bash

    neutron subnet-create --name ipv4-int-subnet1 --dns-nameserver 8.8.8.8 \
    ipv4-int-network1 20.0.0.0/24

    neutron router-interface-add ipv4-router ipv4-int-subnet1

**OPNFV-NATIVE-SETUP-9**: Associate the ``ext-net`` to the Neutron routers ``ipv4-router``
and ``ipv6-router``.

.. code-block:: bash

    # Note that the name "ext-net" may work for some installers such as Compass and Joid
    # Change the name "ext-net" to match the name of external network that an installer creates
    neutron router-gateway-set ipv4-router ext-net
    neutron router-gateway-set ipv6-router ext-net

**OPNFV-NATIVE-SETUP-10**: Create two subnets, one IPv4 subnet ``ipv4-int-subnet2`` and
one IPv6 subnet ``ipv6-int-subnet2`` in ``ipv6-int-network2``, and associate both subnets to
``ipv6-router``

.. code-block:: bash

    neutron subnet-create --name ipv4-int-subnet2 --dns-nameserver 8.8.8.8 \
    ipv6-int-network2 10.0.0.0/24

    neutron subnet-create --name ipv6-int-subnet2 --ip-version 6 --ipv6-ra-mode slaac \
    --ipv6-address-mode slaac ipv6-int-network2 2001:db8:0:1::/64

    neutron router-interface-add ipv6-router ipv4-int-subnet2
    neutron router-interface-add ipv6-router ipv6-int-subnet2

**OPNFV-NATIVE-SETUP-11**: Create a keypair

.. code-block:: bash

    nova keypair-add vRouterKey > ~/vRouterKey

**OPNFV-NATIVE-SETUP-12**: Create ports for the vRouter with specific MAC addresses, so that
(mainly for automation) the IPv6 addresses that will be assigned to the ports are known in advance;
see the sketch after the commands below.

.. code-block:: bash

    neutron port-create --name eth0-vRouter --mac-address fa:16:3e:11:11:11 ipv6-int-network2
    neutron port-create --name eth1-vRouter --mac-address fa:16:3e:22:22:22 ipv4-int-network1
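
As a minimal sketch of why fixed MAC addresses make the IPv6 addresses predictable: with
``SLAAC``, the interface identifier is derived from the MAC address using EUI-64. The
following is illustrative only and not part of the setup steps.

.. code-block:: bash

    # Derive the EUI-64 interface identifier from the MAC address of eth0-vRouter
    mac=fa:16:3e:11:11:11
    IFS=: read -r o1 o2 o3 o4 o5 o6 <<< "$mac"
    o1=$(printf '%02x' $(( 0x$o1 ^ 0x02 )))   # flip the universal/local bit
    echo "Interface ID: ${o1}${o2}:${o3}ff:fe${o4}:${o5}${o6}"
    # Prints f816:3eff:fe11:1111, so on prefix 2001:db8:0:1::/64 the port gets
    # 2001:db8:0:1:f816:3eff:fe11:1111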

**OPNFV-NATIVE-SETUP-13**: Create ports for VM1 and VM2.

.. code-block:: bash

    neutron port-create --name eth0-VM1 --mac-address fa:16:3e:33:33:33 ipv4-int-network1
    neutron port-create --name eth0-VM2 --mac-address fa:16:3e:44:44:44 ipv4-int-network1

**OPNFV-NATIVE-SETUP-14**: Update ``ipv6-router`` with routing information to subnet
``2001:db8:0:2::/64``

.. code-block:: bash

    neutron router-update ipv6-router --routes type=dict list=true \
    destination=2001:db8:0:2::/64,nexthop=2001:db8:0:1:f816:3eff:fe11:1111

**OPNFV-NATIVE-SETUP-15**: Boot Service VM (``vRouter``), VM1 and VM2

.. code-block:: bash

    nova boot --image Fedora22 --flavor m1.small \
    --user-data /opt/stack/opnfv_os_ipv6_poc/metadata.txt \
    --availability-zone nova:opnfv-os-compute \
    --nic port-id=$(neutron port-list | grep -w eth0-vRouter | awk '{print $2}') \
    --nic port-id=$(neutron port-list | grep -w eth1-vRouter | awk '{print $2}') \
    --key-name vRouterKey vRouter

    nova list

    # Please wait for some 10 to 15 minutes so that necessary packages (like radvd)
    # are installed and vRouter is up.
    nova console-log vRouter

    nova boot --image cirros-0.3.4-x86_64-uec --flavor m1.tiny \
    --user-data /opt/stack/opnfv_os_ipv6_poc/set_mtu.sh \
    --availability-zone nova:opnfv-os-controller \
    --nic port-id=$(neutron port-list | grep -w eth0-VM1 | awk '{print $2}') \
    --key-name vRouterKey VM1

    nova boot --image cirros-0.3.4-x86_64-uec --flavor m1.tiny \
    --user-data /opt/stack/opnfv_os_ipv6_poc/set_mtu.sh \
    --availability-zone nova:opnfv-os-compute \
    --nic port-id=$(neutron port-list | grep -w eth0-VM2 | awk '{print $2}') \
    --key-name vRouterKey VM2

    nova list # Verify that all the VMs are in ACTIVE state.

**OPNFV-NATIVE-SETUP-16**: If all goes well, the IPv6 addresses assigned to the VMs
would be as follows:

.. code-block:: bash

    # vRouter eth0 interface would have the following IPv6 address:
    #     2001:db8:0:1:f816:3eff:fe11:1111/64
    # vRouter eth1 interface would have the following IPv6 address:
    #     2001:db8:0:2::1/64
    # VM1 would have the following IPv6 address:
    #     2001:db8:0:2:f816:3eff:fe33:3333/64
    # VM2 would have the following IPv6 address:
    #     2001:db8:0:2:f816:3eff:fe44:4444/64

**OPNFV-NATIVE-SETUP-17**: Now we can ``SSH`` into the VMs. You can execute the following commands.

.. code-block:: bash

    # 1. Create a floatingip and associate it with VM1, VM2 and vRouter (to the port id that is passed).
    #    Note that the name "ext-net" may work for some installers such as Compass and Joid
    #    Change the name "ext-net" to match the name of external network that an installer creates
    neutron floatingip-create --port-id $(neutron port-list | grep -w eth0-VM1 | \
    awk '{print $2}') ext-net
    neutron floatingip-create --port-id $(neutron port-list | grep -w eth0-VM2 | \
    awk '{print $2}') ext-net
    neutron floatingip-create --port-id $(neutron port-list | grep -w eth1-vRouter | \
    awk '{print $2}') ext-net

    # 2. To know / display the floatingip associated with VM1, VM2 and vRouter.
    neutron floatingip-list -F floating_ip_address -F port_id | grep $(neutron port-list | \
    grep -w eth0-VM1 | awk '{print $2}') | awk '{print $2}'
    neutron floatingip-list -F floating_ip_address -F port_id | grep $(neutron port-list | \
    grep -w eth0-VM2 | awk '{print $2}') | awk '{print $2}'
    neutron floatingip-list -F floating_ip_address -F port_id | grep $(neutron port-list | \
    grep -w eth1-vRouter | awk '{print $2}') | awk '{print $2}'

    # 3. To ssh to the vRouter, VM1 and VM2, user can execute the following command.
    ssh -i ~/vRouterKey fedora@<floating-ip-of-vRouter>
    ssh -i ~/vRouterKey cirros@<floating-ip-of-VM1>
    ssh -i ~/vRouterKey cirros@<floating-ip-of-VM2>

****************************************************************
Setup Manual in OpenStack with Open Daylight L2-Only Environment
****************************************************************

If you intend to set up a service VM as an IPv6 vRouter in an OpenStack with Open Daylight
L2-only environment of OPNFV Colorado Release, please **NOTE** that:

* We **SHOULD** use the ``odl-ovsdb-openstack`` version of Open Daylight Boron
  in OPNFV Colorado Release. Please refer to our
  `Gap Analysis <http://artifacts.opnfv.org/ipv6/docs/gapanalysis/gap-analysis-odl-boron.html>`_
  for more information.
* The hostnames, IP addresses, and usernames used in the instructions are examples.
  Please change them as needed to fit your environment.
* The instructions apply to both the single controller node deployment model and the
  HA (High Availability) deployment model where multiple controller nodes are used.
* However, in the case of HA, when ``ipv6-router`` is created in step **SETUP-SVM-11**,
  it could be created on any of the controller nodes. Thus you need to identify on which
  controller node ``ipv6-router`` is created in order to manually spawn the ``radvd`` daemon
  inside the ``ipv6-router`` namespace in steps **SETUP-SVM-24** through **SETUP-SVM-30**.

-----------------------------
Install OPNFV and Preparation
-----------------------------

**OPNFV-INSTALL-1**: To install an OpenStack with Open Daylight L2-only environment
of OPNFV Colorado Release:

**Apex Installer**:

.. code-block:: bash

    # HA, Virtual deployment in OpenStack with Open Daylight L2-only environment
    ./opnfv-deploy -v -d /etc/opnfv-apex/os-odl_l2-nofeature-ha.yaml \
    -n /etc/opnfv-apex/network_setting.yaml

    # HA, Bare Metal deployment in OpenStack with Open Daylight L2-only environment
    ./opnfv-deploy -d /etc/opnfv-apex/os-odl_l2-nofeature-ha.yaml \
    -i <inventory file> -n /etc/opnfv-apex/network_setting.yaml

    # Non-HA deployment in OpenStack with Open Daylight L2-only environment
    # There is no settings file provided by default for odl_l2 non-HA deployment
    # You need to copy /etc/opnfv-apex/os-odl_l2-nofeature-ha.yaml to another file
    # e.g. /etc/opnfv-apex/os-odl_l2-nofeature-noha.yaml
    # and change the "ha_enabled" parameter to be "false", i.e.: "ha_enabled: false", and:

    # - For Non-HA, Virtual deployment
    ./opnfv-deploy -v -d /etc/opnfv-apex/os-odl_l2-nofeature-noha.yaml \
    -n /etc/opnfv-apex/network_setting.yaml

    # - For Non-HA, Bare Metal deployment
    ./opnfv-deploy -d /etc/opnfv-apex/os-odl_l2-nofeature-noha.yaml \
    -i <inventory file> -n /etc/opnfv-apex/network_setting.yaml

    # Note:
    #
    # 1. Parameter "-v" is mandatory for Virtual deployment
    # 2. Parameter "-i <inventory file>" is mandatory for Bare Metal deployment
    # 2.1 Refer to https://git.opnfv.org/cgit/apex/tree/config/inventory for examples of inventory file
    # 3. You can use "-n /etc/opnfv-apex/network_setting_v6.yaml" for deployment in IPv6-only infrastructure

**Compass** Installer:

.. code-block:: bash

    # HA deployment in OpenStack with Open Daylight L2-only environment
    export ISO_URL=file://$BUILD_DIRECTORY/compass.iso
    export OS_VERSION=${{COMPASS_OS_VERSION}}
    export OPENSTACK_VERSION=${{COMPASS_OPENSTACK_VERSION}}
    export CONFDIR=$WORKSPACE/deploy/conf/vm_environment
    ./deploy.sh --dha $CONFDIR/os-odl_l2-nofeature-ha.yml \
    --network $CONFDIR/$NODE_NAME/network.yml

    # Non-HA deployment in OpenStack with Open Daylight L2-only environment
    # Non-HA deployment is currently not supported by Compass installer

**Fuel** Installer:

.. code-block:: bash

    # HA deployment in OpenStack with Open Daylight L2-only environment
    # Scenario Name: os-odl_l2-nofeature-ha
    # Scenario Configuration File: ha_odl-l2_heat_ceilometer_scenario.yaml
    # You can use either Scenario Name or Scenario Configuration File Name in "-s" parameter
    sudo ./deploy.sh -b <stack-config-uri> -l <lab-name> -p <pod-name> \
    -s os-odl_l2-nofeature-ha -i <iso-uri>

    # Non-HA deployment in OpenStack with Open Daylight L2-only environment
    # Scenario Name: os-odl_l2-nofeature-noha
    # Scenario Configuration File: no-ha_odl-l2_heat_ceilometer_scenario.yaml
    # You can use either Scenario Name or Scenario Configuration File Name in "-s" parameter
    sudo ./deploy.sh -b <stack-config-uri> -l <lab-name> -p <pod-name> \
    -s os-odl_l2-nofeature-noha -i <iso-uri>

    # Note:
    #
    # 1. Refer to http://git.opnfv.org/cgit/fuel/tree/deploy/scenario/scenario.yaml for scenarios
    # 2. Refer to http://git.opnfv.org/cgit/fuel/tree/ci/README for description of
    #    stack configuration directory structure
    # 3. <stack-config-uri> is the base URI of stack configuration directory structure
    # 3.1 Example: http://git.opnfv.org/cgit/fuel/tree/deploy/config
    # 4. <lab-name> and <pod-name> must match the directory structure in stack configuration
    # 4.1 Example of <lab-name>: -l devel-pipeline
    # 4.2 Example of <pod-name>: -p elx
    # 5. <iso-uri> could be local or remote ISO image of Fuel Installer
    # 5.1 Example: http://artifacts.opnfv.org/fuel/colorado/opnfv-colorado.1.0.iso
    #
    # Please refer to Fuel Installer's documentation for further information and any update

**Joid** Installer:

.. code-block:: bash

    # HA deployment in OpenStack with Open Daylight L2-only environment
    ./deploy.sh -o mitaka -s odl -t ha -l default -f ipv6

    # Non-HA deployment in OpenStack with Open Daylight L2-only environment
    ./deploy.sh -o mitaka -s odl -t nonha -l default -f ipv6

Please **NOTE** that:

* You need to refer to the **installer's documentation** for other necessary
  parameters applicable to your deployment.
* You need to refer to the **Release Notes** and the **installer's documentation** if there is
  any issue during installation.

**OPNFV-INSTALL-2**: Clone the following GitHub repository to get the
configuration and metadata files

.. code-block:: bash

    git clone https://github.com/sridhargaddam/opnfv_os_ipv6_poc.git \
    /opt/stack/opnfv_os_ipv6_poc

----------------------------------------------
Disable Security Groups in OpenStack ML2 Setup
----------------------------------------------

Please **NOTE** that although the Security Groups feature has been disabled automatically
through the ``local.conf`` configuration file by some installers such as ``devstack``, it is very likely
that other installers such as ``Apex``, ``Compass``, ``Fuel`` or ``Joid`` will enable the Security
Groups feature after installation.

**Please make sure that Security Groups are disabled in the setup**
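
After applying the changes in **OPNFV-SEC-1** and **OPNFV-SEC-2** below, a quick way to
double-check the result is to grep the configuration files (illustrative only):

.. code-block:: bash

    # Both files should show the Noop firewall drivers and security groups disabled
    grep -E 'enable_security_group|firewall_driver' /etc/neutron/plugins/ml2/ml2_conf.ini
    grep -E 'security_group_api|firewall_driver' /etc/nova/nova.conf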

**OPNFV-SEC-1**: Change the settings in
``/etc/neutron/plugins/ml2/ml2_conf.ini`` as follows

.. code-block:: bash

    # /etc/neutron/plugins/ml2/ml2_conf.ini
    [securitygroup]
    enable_security_group = False
    firewall_driver = neutron.agent.firewall.NoopFirewallDriver

**OPNFV-SEC-2**: Change the settings in ``/etc/nova/nova.conf`` as follows

.. code-block:: bash

    # /etc/nova/nova.conf
    [DEFAULT]
    security_group_api = nova
    firewall_driver = nova.virt.firewall.NoopFirewallDriver

**OPNFV-SEC-3**: After updating the settings, you will have to restart the
``Neutron`` and ``Nova`` services.

**Please note that the commands for restarting** ``Neutron`` **and** ``Nova`` **vary
depending on the installer. Please refer to the relevant documentation of the specific installer**.

---------------------------------------------------
Source the Credentials in OpenStack Controller Node
---------------------------------------------------

**SETUP-SVM-1**: Log in to the OpenStack Controller Node. Start a new terminal,
and change directory to where OpenStack is installed.

**SETUP-SVM-2**: We have to source the tenant credentials in this step. Please **NOTE**
that the method of sourcing tenant credentials may vary depending on the installer. For example:

**Apex** installer:

.. code-block:: bash

    # On jump host, source the tenant credentials using /bin/opnfv-util provided by Apex installer
    opnfv-util undercloud "source overcloudrc; keystone service-list"

    # Alternatively, you can copy the file /home/stack/overcloudrc from the installer VM called "undercloud"
    # to a location in controller node, for example, in the directory /opt, and do:
    # source /opt/overcloudrc

**Compass** installer:

.. code-block:: bash

    # source the tenant credentials using Compass installer of OPNFV
    source /opt/admin-openrc.sh

**Fuel** installer:

.. code-block:: bash

    # source the tenant credentials using Fuel installer of OPNFV
    source /root/openrc

**Joid** installer:

.. code-block:: bash

    # source the tenant credentials using Joid installer of OPNFV
    source $HOME/joid_config/admin-openrc

**devstack**:

.. code-block:: bash

    # source the tenant credentials in devstack
    source openrc admin demo

**Please refer to the relevant documentation of the installers if you encounter any issue**.

------------------------------------------------------------------------------------
Informational Note: Move Public Network from Physical Network Interface to ``br-ex``
------------------------------------------------------------------------------------

**SETUP-SVM-3**: Move the physical interface (i.e. the public network interface) to ``br-ex``

**SETUP-SVM-4**: Verify setup of ``br-ex``

**These two steps are informational. The OPNFV installer has taken care of them during deployment.
You may refer to them only if there is any issue, or if you are using other installers**.

We have to move the physical interface (i.e. the public network interface) to ``br-ex``, including moving
the public IP address and setting up a default route. Please refer to ``SETUP-SVM-3`` and
``SETUP-SVM-4`` in our `more complete instruction <http://artifacts.opnfv.org/ipv6/docs/setupservicevm/4-ipv6-configguide-servicevm.html#add-external-connectivity-to-br-ex>`_.
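
A quick, illustrative way to verify the ``br-ex`` setup on the controller node (the output will
vary with your environment):

.. code-block:: bash

    # The physical interface should be a port of br-ex, and br-ex should carry
    # the public IP address and the default route
    sudo ovs-vsctl list-ports br-ex
    ip addr show br-ex
    ip route show default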

--------------------------------------------------------
Create IPv4 Subnet and Router with External Connectivity
--------------------------------------------------------

**SETUP-SVM-5**: Create a Neutron router ``ipv4-router`` which needs to provide external connectivity.

.. code-block:: bash

    neutron router-create ipv4-router

**SETUP-SVM-6**: Create an external network/subnet ``ext-net`` using the appropriate values based on the
data-center physical network setup.

Please **NOTE** that you may only need to create the subnet of ``ext-net`` because OPNFV installers
should have created an external network during installation. When you create the subnet, you must use
the same name as the external network that the installer creates. For example:

* **Apex** installer: ``external``
* **Compass** installer: ``ext-net``
* **Fuel** installer: ``admin_floating_net``
* **Joid** installer: ``ext-net``

**Please refer to the documentation of installers if there is any issue**

.. code-block:: bash

    # This is needed only if the installer does not create an external network
    # Otherwise, skip this command "net-create"
    neutron net-create --router:external ext-net

    # Note that the name "ext-net" may work for some installers such as Compass and Joid
    # Change the name "ext-net" to match the name of external network that an installer creates
    neutron subnet-create --disable-dhcp --allocation-pool start=198.59.156.251,\
    end=198.59.156.254 --gateway 198.59.156.1 ext-net 198.59.156.0/24

Please note that the IP addresses in the command above are examples. **Please replace them with the IP addresses
of your actual network**.

**SETUP-SVM-7**: Associate the ``ext-net`` to the Neutron router ``ipv4-router``.

.. code-block:: bash

    # Note that the name "ext-net" may work for some installers such as Compass and Joid
    # Change the name "ext-net" to match the name of external network that an installer creates
    neutron router-gateway-set ipv4-router ext-net

**SETUP-SVM-8**: Create an internal/tenant IPv4 network ``ipv4-int-network1``

.. code-block:: bash

    neutron net-create ipv4-int-network1

**SETUP-SVM-9**: Create an IPv4 subnet ``ipv4-int-subnet1`` in the internal network ``ipv4-int-network1``

.. code-block:: bash

    neutron subnet-create --name ipv4-int-subnet1 --dns-nameserver 8.8.8.8 \
    ipv4-int-network1 20.0.0.0/24

**SETUP-SVM-10**: Associate the IPv4 internal subnet ``ipv4-int-subnet1`` to the Neutron router ``ipv4-router``.

.. code-block:: bash

    neutron router-interface-add ipv4-router ipv4-int-subnet1

--------------------------------------------------------
Create IPv6 Subnet and Router with External Connectivity
--------------------------------------------------------

Now, let us create a second Neutron router where we can "manually" spawn a ``radvd`` daemon to simulate an external
IPv6 router.

**SETUP-SVM-11**:  Create a second Neutron router ``ipv6-router`` which needs to provide external connectivity

.. code-block:: bash

    neutron router-create ipv6-router

**SETUP-SVM-12**: Associate the ``ext-net`` to the Neutron router ``ipv6-router``

.. code-block:: bash

    # Note that the name "ext-net" may work for some installers such as Compass and Joid
    # Change the name "ext-net" to match the name of external network that an installer creates
    neutron router-gateway-set ipv6-router ext-net

**SETUP-SVM-13**: Create a second internal/tenant IPv4 network ``ipv4-int-network2``

.. code-block:: bash

    neutron net-create ipv4-int-network2

**SETUP-SVM-14**: Create an IPv4 subnet ``ipv4-int-subnet2`` for the ``ipv6-router`` internal network
``ipv4-int-network2``

.. code-block:: bash

    neutron subnet-create --name ipv4-int-subnet2 --dns-nameserver 8.8.8.8 \
    ipv4-int-network2 10.0.0.0/24

**SETUP-SVM-15**: Associate the IPv4 internal subnet ``ipv4-int-subnet2`` to the Neutron router ``ipv6-router``.

.. code-block:: bash

    neutron router-interface-add ipv6-router ipv4-int-subnet2

--------------------------------------------------
Prepare Image, Metadata and Keypair for Service VM
--------------------------------------------------

**SETUP-SVM-16**: Download the ``Fedora22`` image, which will be used as the ``vRouter``

.. code-block:: bash

    wget https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/\
    Images/Fedora-Cloud-Base-22-20150521.x86_64.qcow2

    glance image-create --name 'Fedora22' --disk-format qcow2 --container-format bare \
    --file ./Fedora-Cloud-Base-22-20150521.x86_64.qcow2

**SETUP-SVM-17**: Create a keypair

.. code-block:: bash

    nova keypair-add vRouterKey > ~/vRouterKey

**SETUP-SVM-18**: Create ports for the ``vRouter`` and both VMs with specific MAC addresses.

.. code-block:: bash

    neutron port-create --name eth0-vRouter --mac-address fa:16:3e:11:11:11 ipv4-int-network2
    neutron port-create --name eth1-vRouter --mac-address fa:16:3e:22:22:22 ipv4-int-network1
    neutron port-create --name eth0-VM1 --mac-address fa:16:3e:33:33:33 ipv4-int-network1
    neutron port-create --name eth0-VM2 --mac-address fa:16:3e:44:44:44 ipv4-int-network1

----------------------------------------------------------------------------------------------------------
Boot Service VM (``vRouter``) with ``eth0`` on ``ipv4-int-network2`` and ``eth1`` on ``ipv4-int-network1``
----------------------------------------------------------------------------------------------------------

Let us boot the service VM (``vRouter``) with ``eth0`` interface on ``ipv4-int-network2`` connecting to ``ipv6-router``,
and ``eth1`` interface on ``ipv4-int-network1`` connecting to ``ipv4-router``.

**SETUP-SVM-19**: Boot the ``vRouter`` using ``Fedora22`` image on the OpenStack Compute Node with hostname
``opnfv-os-compute``

.. code-block:: bash

    nova boot --image Fedora22 --flavor m1.small \
    --user-data /opt/stack/opnfv_os_ipv6_poc/metadata.txt \
    --availability-zone nova:opnfv-os-compute \
    --nic port-id=$(neutron port-list | grep -w eth0-vRouter | awk '{print $2}') \
    --nic port-id=$(neutron port-list | grep -w eth1-vRouter | awk '{print $2}') \
    --key-name vRouterKey vRouter

Please **note** that ``/opt/stack/opnfv_os_ipv6_poc/metadata.txt`` is used to enable the ``vRouter`` to automatically
spawn a ``radvd`` daemon, which will:

* Act as an IPv6 vRouter that advertises RAs (Router Advertisements) with the prefix
  ``2001:db8:0:2::/64`` on its internal interface (``eth1``).
* Forward IPv6 traffic from the internal interface (``eth1``).
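
For reference, a minimal ``radvd`` configuration that produces this behaviour could look like
the sketch below. This is an assumption for illustration only; the actual configuration applied
through ``metadata.txt`` in the cloned repository may differ.

.. code-block:: bash

    # Inside the vRouter (as root): advertise 2001:db8:0:2::/64 on eth1
    cat > /etc/radvd.conf << 'EOF'
    interface eth1
    {
        AdvSendAdvert on;
        prefix 2001:db8:0:2::/64
        {
            AdvOnLink on;
            AdvAutonomous on;
        };
    };
    EOF
    radvd -C /etc/radvd.conf -m syslog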

**SETUP-SVM-20**: Verify that the ``Fedora22`` image boots up successfully and the vRouter has ``ssh`` keys properly injected

.. code-block:: bash

    nova list
    nova console-log vRouter

Please note that **it may take a few minutes** for the necessary packages to get installed and ``ssh`` keys
to be injected.

.. code-block:: bash

    # Sample Output
    [  762.884523] cloud-init[871]: ec2: #############################################################
    [  762.909634] cloud-init[871]: ec2: -----BEGIN SSH HOST KEY FINGERPRINTS-----
    [  762.931626] cloud-init[871]: ec2: 2048 e3:dc:3d:4a:bc:b6:b0:77:75:a1:70:a3:d0:2a:47:a9   (RSA)
    [  762.957380] cloud-init[871]: ec2: -----END SSH HOST KEY FINGERPRINTS-----
    [  762.979554] cloud-init[871]: ec2: #############################################################

-------------------------------------------
Boot Two Other VMs in ``ipv4-int-network1``
-------------------------------------------

In order to verify that the setup is working, let us create two cirros VMs with their ``eth0`` interface on the
``ipv4-int-network1``, i.e., connecting to the ``vRouter`` ``eth1`` interface for the internal network.

We will have to configure an appropriate ``mtu`` on the VMs' interfaces by taking into account the tunneling
overhead and any physical switch requirements. If necessary, push the ``mtu`` to the VM either using ``dhcp``
options or via ``meta-data``, as sketched below.
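
For illustration, a user-data script that pushes the ``mtu`` to the VM can be as simple as the
sketch below. This is an assumption; the actual ``set_mtu.sh`` in the cloned repository may use
a different value or mechanism.

.. code-block:: bash

    #!/bin/sh
    # Lower the MTU of the tenant interface to leave room for tunnel encapsulation
    # (the value 1400 is an example; derive it from your underlay MTU)
    ip link set dev eth0 mtu 1400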

**SETUP-SVM-21**: Create VM1 on OpenStack Controller Node with hostname ``opnfv-os-controller``

.. code-block:: bash

    nova boot --image cirros-0.3.4-x86_64-uec --flavor m1.tiny \
    --user-data /opt/stack/opnfv_os_ipv6_poc/set_mtu.sh \
    --availability-zone nova:opnfv-os-controller \
    --nic port-id=$(neutron port-list | grep -w eth0-VM1 | awk '{print $2}') \
    --key-name vRouterKey VM1

**SETUP-SVM-22**: Create VM2 on OpenStack Compute Node with hostname ``opnfv-os-compute``

.. code-block:: bash

    nova boot --image cirros-0.3.4-x86_64-uec --flavor m1.tiny \
    --user-data /opt/stack/opnfv_os_ipv6_poc/set_mtu.sh \
    --availability-zone nova:opnfv-os-compute \
    --nic port-id=$(neutron port-list | grep -w eth0-VM2 | awk '{print $2}') \
    --key-name vRouterKey VM2

**SETUP-SVM-23**: Confirm that both the VMs are successfully booted.

.. code-block:: bash

    nova list
    nova console-log VM1
    nova console-log VM2

----------------------------------
Spawn ``RADVD`` in ``ipv6-router``
----------------------------------

Let us manually spawn a ``radvd`` daemon inside ``ipv6-router`` namespace to simulate an external router.
First of all, we will have to identify the ``ipv6-router`` namespace and move to the namespace.

Please **NOTE** that in the case of the HA (High Availability) deployment model where multiple controller
nodes are used, the ``ipv6-router`` created in step **SETUP-SVM-11** could be on any of the controller
nodes. Thus you need to identify on which controller node ``ipv6-router`` is created in order to manually
spawn the ``radvd`` daemon inside the ``ipv6-router`` namespace in steps **SETUP-SVM-24** through
**SETUP-SVM-30**. The following Neutron command will display the controller on which the
``ipv6-router`` is spawned.

.. code-block:: bash

    neutron l3-agent-list-hosting-router ipv6-router

Then log in to that controller and execute steps **SETUP-SVM-24**
through **SETUP-SVM-30**.

**SETUP-SVM-24**: Identify the ``ipv6-router`` namespace and move into the namespace

.. code-block:: bash

    sudo ip netns exec qrouter-$(neutron router-list | grep -w ipv6-router | \
    awk '{print $2}') bash

**SETUP-SVM-25**: Upon successful execution of the above command, you will be in the router namespace.
Now let us configure the IPv6 address on the ``<qr-xxx>`` interface.

.. code-block:: bash

    export router_interface=$(ip a s | grep -w "global qr-*" | awk '{print $7}')
    ip -6 addr add 2001:db8:0:1::1 dev $router_interface

**SETUP-SVM-26**: Update the sample file ``/opt/stack/opnfv_os_ipv6_poc/scenario2/radvd.conf``
with ``$router_interface``.

.. code-block:: bash

    cp /opt/stack/opnfv_os_ipv6_poc/scenario2/radvd.conf /tmp/radvd.$router_interface.conf
    sed -i 's/$router_interface/'$router_interface'/g' /tmp/radvd.$router_interface.conf

**SETUP-SVM-27**: Spawn a ``radvd`` daemon to simulate an external router. This ``radvd`` daemon advertises an IPv6
subnet prefix of ``2001:db8:0:1::/64`` using RA (Router Advertisement) on its ``$router_interface`` so that the ``eth0``
interface of ``vRouter`` automatically configures an IPv6 SLAAC address.

.. code-block:: bash

    radvd -C /tmp/radvd.$router_interface.conf -p /tmp/br-ex.pid.radvd -m syslog

**SETUP-SVM-28**: Add an IPv6 downstream route pointing to the ``eth0`` interface of vRouter.

.. code-block:: bash

    ip -6 route add 2001:db8:0:2::/64 via 2001:db8:0:1:f816:3eff:fe11:1111

**SETUP-SVM-29**: The routing table should now look similar to what is shown below.

.. code-block:: bash

    ip -6 route show
    2001:db8:0:1::1 dev qr-42968b9e-62 proto kernel metric 256
    2001:db8:0:1::/64 dev qr-42968b9e-62 proto kernel metric 256 expires 86384sec
    2001:db8:0:2::/64 via 2001:db8:0:1:f816:3eff:fe11:1111 dev qr-42968b9e-62 proto ra metric 1024 expires 29sec
    fe80::/64 dev qg-3736e0c7-7c proto kernel metric 256
    fe80::/64 dev qr-42968b9e-62 proto kernel metric 256

**SETUP-SVM-30**: If all goes well, the IPv6 addresses assigned to the VMs would be as follows:

.. code-block:: bash

    # vRouter eth0 interface would have the following IPv6 address:
    #     2001:db8:0:1:f816:3eff:fe11:1111/64
    # vRouter eth1 interface would have the following IPv6 address:
    #     2001:db8:0:2::1/64
    # VM1 would have the following IPv6 address:
    #     2001:db8:0:2:f816:3eff:fe33:3333/64
    # VM2 would have the following IPv6 address:
    #     2001:db8:0:2:f816:3eff:fe44:4444/64

--------------------------------
Testing to Verify Setup Complete
--------------------------------

Now, let us ``SSH`` into those VMs, e.g. VM1 and / or VM2 and / or vRouter, to confirm that
they have successfully configured their IPv6 addresses using ``SLAAC`` with the prefix
``2001:db8:0:2::/64`` advertised by ``vRouter``.

We use the ``floatingip`` mechanism to reach the VMs over ``SSH``.

**SETUP-SVM-31**: Now we can ``SSH`` into the VMs. You can execute the following commands.

.. code-block:: bash

    # 1. Create a floatingip and associate it with VM1, VM2 and vRouter (to the port id that is passed).
    #    Note that the name "ext-net" may work for some installers such as Compass and Joid
    #    Change the name "ext-net" to match the name of external network that an installer creates
    neutron floatingip-create --port-id $(neutron port-list | grep -w eth0-VM1 | \
    awk '{print $2}') ext-net
    neutron floatingip-create --port-id $(neutron port-list | grep -w eth0-VM2 | \
    awk '{print $2}') ext-net
    neutron floatingip-create --port-id $(neutron port-list | grep -w eth1-vRouter | \
    awk '{print $2}') ext-net

    # 2. To know / display the floatingip associated with VM1, VM2 and vRouter.
    neutron floatingip-list -F floating_ip_address -F port_id | grep $(neutron port-list | \
    grep -w eth0-VM1 | awk '{print $2}') | awk '{print $2}'
    neutron floatingip-list -F floating_ip_address -F port_id | grep $(neutron port-list | \
    grep -w eth0-VM2 | awk '{print $2}') | awk '{print $2}'
    neutron floatingip-list -F floating_ip_address -F port_id | grep $(neutron port-list | \
    grep -w eth1-vRouter | awk '{print $2}') | awk '{print $2}'

    # 3. To ssh to the vRouter, VM1 and VM2, user can execute the following command.
    ssh -i ~/vRouterKey fedora@<floating-ip-of-vRouter>
    ssh -i ~/vRouterKey cirros@<floating-ip-of-VM1>
    ssh -i ~/vRouterKey cirros@<floating-ip-of-VM2>

If everything goes well, ``ssh`` will be successful and you will be logged into those VMs.
Run some commands to verify that IPv6 addresses are configured on the ``eth0`` interface.

**SETUP-SVM-32**: Show an IPv6 address with a prefix of ``2001:db8:0:2::/64``

.. code-block:: bash

    ip address show

**SETUP-SVM-33**: Ping an external IPv6 address, e.g. that of ``ipv6-router``

.. code-block:: bash

    ping6 2001:db8:0:1::1

If the above ``ping6`` command succeeds, it implies that ``vRouter`` was able to successfully forward the IPv6 traffic
to reach the external ``ipv6-router``.
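
If the ping fails, an illustrative first check is whether IPv6 forwarding is enabled inside the
``vRouter``:

.. code-block:: bash

    # Should print "net.ipv6.conf.all.forwarding = 1" when forwarding is enabled
    ssh -i ~/vRouterKey fedora@<floating-ip-of-vRouter> "sysctl net.ipv6.conf.all.forwarding"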

*********************************
IPv6 Post Installation Procedures
*********************************

Congratulations, you have completed the setup of using a service VM to act as an IPv6 vRouter.
You have validated the setup based on the instructions in the previous sections. If you want to further
test your setup, you can ``ping6`` among ``VM1``, ``VM2``, ``vRouter`` and ``ipv6-router``, for example
as shown below.
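
For example, from inside ``VM1``, using the addresses listed in **SETUP-SVM-30**:

.. code-block:: bash

    # Ping VM2, the vRouter internal interface and the external ipv6-router over IPv6
    ping6 -c 4 2001:db8:0:2:f816:3eff:fe44:4444   # VM2
    ping6 -c 4 2001:db8:0:2::1                    # vRouter eth1
    ping6 -c 4 2001:db8:0:1::1                    # ipv6-router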

This setup allows further open innovation by any third party. For more instructions and documentation,
please refer to:

1. IPv6 Configuration Guide (HTML): http://artifacts.opnfv.org/ipv6/docs/setupservicevm/index.html
2. IPv6 User Guide (HTML): http://artifacts.opnfv.org/ipv6/docs/gapanalysis/index.html

**************************************
Automated post installation activities
**************************************

Refer to the relevant testing guides, results, and release notes of the Yardstick Project.