Diffstat (limited to 'docs/testing')
16 files changed, 4201 insertions, 1319 deletions
diff --git a/docs/testing/developer/testscope/index.rst b/docs/testing/developer/testscope/index.rst index ffa91fd1..09901333 100644 --- a/docs/testing/developer/testscope/index.rst +++ b/docs/testing/developer/testscope/index.rst @@ -1,13 +1,13 @@ .. This work is licensed under a Creative Commons Attribution 4.0 International License. .. http://creativecommons.org/licenses/by/4.0 -.. (c) Ericsson AB +.. (c) OPNFV ======================================================= Compliance and Verification program accepted test cases ======================================================= -.. toctree:: - :maxdepth: 2 + .. toctree:: + :maxdepth: 2 Mandatory CVP Test Areas @@ -19,105 +19,112 @@ Test Area VIM Operations - Compute Image operations within the Compute API --------------------------------------- -tempest.api.compute.images.test_images_oneserver.ImagesOneServerTestJSON.test_create_delete_image -tempest.api.compute.images.test_images_oneserver.ImagesOneServerTestJSON.test_create_image_specify_multibyte_character_image_name + +| tempest.api.compute.images.test_images_oneserver.ImagesOneServerTestJSON.test_create_delete_image +| tempest.api.compute.images.test_images_oneserver.ImagesOneServerTestJSON.test_create_image_specify_multibyte_character_image_name Basic support Compute API for server actions such as reboot, rebuild, resize ----------------------------------------------------------------------------- -tempest.api.compute.servers.test_instance_actions.InstanceActionsTestJSON.test_get_instance_action -tempest.api.compute.servers.test_instance_actions.InstanceActionsTestJSON.test_list_instance_actions + +| tempest.api.compute.servers.test_instance_actions.InstanceActionsTestJSON.test_get_instance_action +| tempest.api.compute.servers.test_instance_actions.InstanceActionsTestJSON.test_list_instance_actions Generate, import, and delete SSH keys within Compute services ------------------------------------------------------------- -tempest.api.compute.servers.test_servers.ServersTestJSON.test_create_specify_keypair + +| tempest.api.compute.servers.test_servers.ServersTestJSON.test_create_specify_keypair List supported versions of the Compute API ------------------------------------------ -tempest.api.compute.test_versions.TestVersions.test_list_api_versions + +| tempest.api.compute.test_versions.TestVersions.test_list_api_versions Quotas management in Compute API -------------------------------- -tempest.api.compute.test_quotas.QuotasTestJSON.test_get_default_quotas -tempest.api.compute.test_quotas.QuotasTestJSON.test_get_quotas + +| tempest.api.compute.test_quotas.QuotasTestJSON.test_get_default_quotas +| tempest.api.compute.test_quotas.QuotasTestJSON.test_get_quotas Basic server operations in the Compute API ------------------------------------------ -tempest.api.compute.servers.test_servers.ServersTestJSON.test_create_server_with_admin_password -tempest.api.compute.servers.test_servers.ServersTestJSON.test_create_with_existing_server_name -tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_create_numeric_server_name -tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_create_server_metadata_exceeds_length_limit -tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_create_server_name_length_exceeds_256 -tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_create_with_invalid_flavor
-tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_create_with_invalid_image -tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_create_with_invalid_network_uuid -tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_delete_server_pass_id_exceeding_length_limit -tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_delete_server_pass_negative_id -tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_get_non_existent_server -tempest.api.compute.servers.test_create_server.ServersTestJSON.test_host_name_is_same_as_server_name -tempest.api.compute.servers.test_create_server.ServersTestManualDisk.test_host_name_is_same_as_server_name -tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_invalid_ip_v6_address -tempest.api.compute.servers.test_create_server.ServersTestJSON.test_list_servers -tempest.api.compute.servers.test_create_server.ServersTestJSON.test_list_servers_with_detail -tempest.api.compute.servers.test_create_server.ServersTestManualDisk.test_list_servers -tempest.api.compute.servers.test_create_server.ServersTestManualDisk.test_list_servers_with_detail -tempest.api.compute.servers.test_list_server_filters.ListServerFiltersTestJSON.test_list_servers_detailed_filter_by_flavor -tempest.api.compute.servers.test_list_server_filters.ListServerFiltersTestJSON.test_list_servers_detailed_filter_by_image -tempest.api.compute.servers.test_list_server_filters.ListServerFiltersTestJSON.test_list_servers_detailed_filter_by_server_name -tempest.api.compute.servers.test_list_server_filters.ListServerFiltersTestJSON.test_list_servers_detailed_filter_by_server_status -tempest.api.compute.servers.test_list_server_filters.ListServerFiltersTestJSON.test_list_servers_detailed_limit_results -tempest.api.compute.servers.test_list_server_filters.ListServerFiltersTestJSON.test_list_servers_filter_by_flavor -tempest.api.compute.servers.test_list_server_filters.ListServerFiltersTestJSON.test_list_servers_filter_by_image -tempest.api.compute.servers.test_list_server_filters.ListServerFiltersTestJSON.test_list_servers_filter_by_limit -tempest.api.compute.servers.test_list_server_filters.ListServerFiltersTestJSON.test_list_servers_filter_by_server_name -tempest.api.compute.servers.test_list_server_filters.ListServerFiltersTestJSON.test_list_servers_filter_by_server_status -tempest.api.compute.servers.test_list_server_filters.ListServerFiltersTestJSON.test_list_servers_filtered_by_name_wildcard -tempest.api.compute.servers.test_list_servers_negative.ListServersNegativeTestJSON.test_list_servers_by_changes_since_future_date -tempest.api.compute.servers.test_list_servers_negative.ListServersNegativeTestJSON.test_list_servers_by_changes_since_invalid_date -tempest.api.compute.servers.test_list_servers_negative.ListServersNegativeTestJSON.test_list_servers_by_limits -tempest.api.compute.servers.test_list_servers_negative.ListServersNegativeTestJSON.test_list_servers_by_limits_greater_than_actual_count -tempest.api.compute.servers.test_list_servers_negative.ListServersNegativeTestJSON.test_list_servers_by_limits_pass_negative_value -tempest.api.compute.servers.test_list_servers_negative.ListServersNegativeTestJSON.test_list_servers_by_limits_pass_string -tempest.api.compute.servers.test_list_servers_negative.ListServersNegativeTestJSON.test_list_servers_by_non_existing_flavor 
-tempest.api.compute.servers.test_list_servers_negative.ListServersNegativeTestJSON.test_list_servers_by_non_existing_image -tempest.api.compute.servers.test_list_servers_negative.ListServersNegativeTestJSON.test_list_servers_by_non_existing_server_name -tempest.api.compute.servers.test_list_servers_negative.ListServersNegativeTestJSON.test_list_servers_detail_server_is_deleted -tempest.api.compute.servers.test_list_servers_negative.ListServersNegativeTestJSON.test_list_servers_status_non_existing -tempest.api.compute.servers.test_list_servers_negative.ListServersNegativeTestJSON.test_list_servers_with_a_deleted_server -tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_lock_unlock_server -tempest.api.compute.servers.test_server_metadata.ServerMetadataTestJSON.test_delete_server_metadata_item -tempest.api.compute.servers.test_server_metadata.ServerMetadataTestJSON.test_get_server_metadata_item -tempest.api.compute.servers.test_server_metadata.ServerMetadataTestJSON.test_list_server_metadata -tempest.api.compute.servers.test_server_metadata.ServerMetadataTestJSON.test_set_server_metadata -tempest.api.compute.servers.test_server_metadata.ServerMetadataTestJSON.test_set_server_metadata_item -tempest.api.compute.servers.test_server_metadata.ServerMetadataTestJSON.test_update_server_metadata -tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_server_name_blank -tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_reboot_server_hard -tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_reboot_non_existent_server -tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_rebuild_server -tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_rebuild_deleted_server -tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_rebuild_non_existent_server -tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_stop_start_server -tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_stop_non_existent_server -tempest.api.compute.servers.test_servers.ServersTestJSON.test_update_access_server_address -tempest.api.compute.servers.test_servers.ServersTestJSON.test_update_server_name -tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_update_name_of_non_existent_server -tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_update_server_name_length_exceeds_256 -tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_update_server_set_empty_name -tempest.api.compute.servers.test_create_server.ServersTestJSON.test_verify_created_server_vcpus -tempest.api.compute.servers.test_create_server.ServersTestJSON.test_verify_server_details -tempest.api.compute.servers.test_create_server.ServersTestManualDisk.test_verify_created_server_vcpus -tempest.api.compute.servers.test_create_server.ServersTestManualDisk.test_verify_server_details + +| tempest.api.compute.servers.test_servers.ServersTestJSON.test_create_server_with_admin_password +| tempest.api.compute.servers.test_servers.ServersTestJSON.test_create_with_existing_server_name +| tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_create_numeric_server_name +| tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_create_server_metadata_exceeds_length_limit +| 
tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_create_server_name_length_exceeds_256 +| tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_create_with_invalid_flavor +| tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_create_with_invalid_image +| tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_create_with_invalid_network_uuid +| tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_delete_server_pass_id_exceeding_length_limit +| tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_delete_server_pass_negative_id +| tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_get_non_existent_server +| tempest.api.compute.servers.test_create_server.ServersTestJSON.test_host_name_is_same_as_server_name +| tempest.api.compute.servers.test_create_server.ServersTestManualDisk.test_host_name_is_same_as_server_name +| tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_invalid_ip_v6_address +| tempest.api.compute.servers.test_create_server.ServersTestJSON.test_list_servers +| tempest.api.compute.servers.test_create_server.ServersTestJSON.test_list_servers_with_detail +| tempest.api.compute.servers.test_create_server.ServersTestManualDisk.test_list_servers +| tempest.api.compute.servers.test_create_server.ServersTestManualDisk.test_list_servers_with_detail +| tempest.api.compute.servers.test_list_server_filters.ListServerFiltersTestJSON.test_list_servers_detailed_filter_by_flavor +| tempest.api.compute.servers.test_list_server_filters.ListServerFiltersTestJSON.test_list_servers_detailed_filter_by_image +| tempest.api.compute.servers.test_list_server_filters.ListServerFiltersTestJSON.test_list_servers_detailed_filter_by_server_name +| tempest.api.compute.servers.test_list_server_filters.ListServerFiltersTestJSON.test_list_servers_detailed_filter_by_server_status +| tempest.api.compute.servers.test_list_server_filters.ListServerFiltersTestJSON.test_list_servers_detailed_limit_results +| tempest.api.compute.servers.test_list_server_filters.ListServerFiltersTestJSON.test_list_servers_filter_by_flavor +| tempest.api.compute.servers.test_list_server_filters.ListServerFiltersTestJSON.test_list_servers_filter_by_image +| tempest.api.compute.servers.test_list_server_filters.ListServerFiltersTestJSON.test_list_servers_filter_by_limit +| tempest.api.compute.servers.test_list_server_filters.ListServerFiltersTestJSON.test_list_servers_filter_by_server_name +| tempest.api.compute.servers.test_list_server_filters.ListServerFiltersTestJSON.test_list_servers_filter_by_server_status +| tempest.api.compute.servers.test_list_server_filters.ListServerFiltersTestJSON.test_list_servers_filtered_by_name_wildcard +| tempest.api.compute.servers.test_list_servers_negative.ListServersNegativeTestJSON.test_list_servers_by_changes_since_future_date +| tempest.api.compute.servers.test_list_servers_negative.ListServersNegativeTestJSON.test_list_servers_by_changes_since_invalid_date +| tempest.api.compute.servers.test_list_servers_negative.ListServersNegativeTestJSON.test_list_servers_by_limits +| tempest.api.compute.servers.test_list_servers_negative.ListServersNegativeTestJSON.test_list_servers_by_limits_greater_than_actual_count +| tempest.api.compute.servers.test_list_servers_negative.ListServersNegativeTestJSON.test_list_servers_by_limits_pass_negative_value +| 
tempest.api.compute.servers.test_list_servers_negative.ListServersNegativeTestJSON.test_list_servers_by_limits_pass_string +| tempest.api.compute.servers.test_list_servers_negative.ListServersNegativeTestJSON.test_list_servers_by_non_existing_flavor +| tempest.api.compute.servers.test_list_servers_negative.ListServersNegativeTestJSON.test_list_servers_by_non_existing_image +| tempest.api.compute.servers.test_list_servers_negative.ListServersNegativeTestJSON.test_list_servers_by_non_existing_server_name +| tempest.api.compute.servers.test_list_servers_negative.ListServersNegativeTestJSON.test_list_servers_detail_server_is_deleted +| tempest.api.compute.servers.test_list_servers_negative.ListServersNegativeTestJSON.test_list_servers_status_non_existing +| tempest.api.compute.servers.test_list_servers_negative.ListServersNegativeTestJSON.test_list_servers_with_a_deleted_server +| tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_lock_unlock_server +| tempest.api.compute.servers.test_server_metadata.ServerMetadataTestJSON.test_delete_server_metadata_item +| tempest.api.compute.servers.test_server_metadata.ServerMetadataTestJSON.test_get_server_metadata_item +| tempest.api.compute.servers.test_server_metadata.ServerMetadataTestJSON.test_list_server_metadata +| tempest.api.compute.servers.test_server_metadata.ServerMetadataTestJSON.test_set_server_metadata +| tempest.api.compute.servers.test_server_metadata.ServerMetadataTestJSON.test_set_server_metadata_item +| tempest.api.compute.servers.test_server_metadata.ServerMetadataTestJSON.test_update_server_metadata +| tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_server_name_blank +| tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_reboot_server_hard +| tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_reboot_non_existent_server +| tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_rebuild_server +| tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_rebuild_deleted_server +| tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_rebuild_non_existent_server +| tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_stop_start_server +| tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_stop_non_existent_server +| tempest.api.compute.servers.test_servers.ServersTestJSON.test_update_access_server_address +| tempest.api.compute.servers.test_servers.ServersTestJSON.test_update_server_name +| tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_update_name_of_non_existent_server +| tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_update_server_name_length_exceeds_256 +| tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_update_server_set_empty_name +| tempest.api.compute.servers.test_create_server.ServersTestJSON.test_verify_created_server_vcpus +| tempest.api.compute.servers.test_create_server.ServersTestJSON.test_verify_server_details +| tempest.api.compute.servers.test_create_server.ServersTestManualDisk.test_verify_created_server_vcpus +| tempest.api.compute.servers.test_create_server.ServersTestManualDisk.test_verify_server_details Retrieve volume information through the Compute API --------------------------------------------------- -tempest.api.compute.volumes.test_attach_volume.AttachVolumeTestJSON.test_attach_detach_volume 
-tempest.api.compute.volumes.test_attach_volume.AttachVolumeTestJSON.test_list_get_volume_attachments + +| tempest.api.compute.volumes.test_attach_volume.AttachVolumeTestJSON.test_attach_detach_volume +| tempest.api.compute.volumes.test_attach_volume.AttachVolumeTestJSON.test_list_get_volume_attachments @@ -127,15 +134,16 @@ Test Area VIM Operations - Identity API discovery operations within the Identity v3 API --------------------------------------------------- -tempest.api.identity.v3.test_api_discovery.TestApiDiscovery.test_api_media_types -tempest.api.identity.v3.test_api_discovery.TestApiDiscovery.test_api_version_resources -tempest.api.identity.v3.test_api_discovery.TestApiDiscovery.test_api_version_statuses + +| tempest.api.identity.v3.test_api_discovery.TestApiDiscovery.test_api_media_types +| tempest.api.identity.v3.test_api_discovery.TestApiDiscovery.test_api_version_resources +| tempest.api.identity.v3.test_api_discovery.TestApiDiscovery.test_api_version_statuses Auth operations within the Identity API --------------------------------------- -tempest.api.identity.v3.test_tokens.TokensV3Test.test_create_token +| tempest.api.identity.v3.test_tokens.TokensV3Test.test_create_token -------------------------------- @@ -144,42 +152,47 @@ Test Area VIM Operations - Image Image deletion tests using the Glance v2 API -------------------------------------------- -tempest.api.image.v2.test_images.BasicOperationsImagesTest.test_delete_image -tempest.api.image.v2.test_images_negative.ImagesNegativeTest.test_delete_image_null_id -tempest.api.image.v2.test_images_negative.ImagesNegativeTest.test_delete_non_existing_image -tempest.api.image.v2.test_images_tags_negative.ImagesTagsNegativeTest.test_delete_non_existing_tag + +| tempest.api.image.v2.test_images.BasicOperationsImagesTest.test_delete_image +| tempest.api.image.v2.test_images_negative.ImagesNegativeTest.test_delete_image_null_id +| tempest.api.image.v2.test_images_negative.ImagesNegativeTest.test_delete_non_existing_image +| tempest.api.image.v2.test_images_tags_negative.ImagesTagsNegativeTest.test_delete_non_existing_tag Image get tests using the Glance v2 API --------------------------------------- -tempest.api.image.v2.test_images.ListImagesTest.test_get_image_schema -tempest.api.image.v2.test_images.ListImagesTest.test_get_images_schema -tempest.api.image.v2.test_images_negative.ImagesNegativeTest.test_get_delete_deleted_image -tempest.api.image.v2.test_images_negative.ImagesNegativeTest.test_get_image_null_id -tempest.api.image.v2.test_images_negative.ImagesNegativeTest.test_get_non_existent_image + +| tempest.api.image.v2.test_images.ListImagesTest.test_get_image_schema +| tempest.api.image.v2.test_images.ListImagesTest.test_get_images_schema +| tempest.api.image.v2.test_images_negative.ImagesNegativeTest.test_get_delete_deleted_image +| tempest.api.image.v2.test_images_negative.ImagesNegativeTest.test_get_image_null_id +| tempest.api.image.v2.test_images_negative.ImagesNegativeTest.test_get_non_existent_image CRUD image operations in Images API v2 -------------------------------------- -tempest.api.image.v2.test_images.ListImagesTest.test_list_no_params + +| tempest.api.image.v2.test_images.ListImagesTest.test_list_no_params Image list tests using the Glance v2 API ---------------------------------------- -tempest.api.image.v2.test_images.ListImagesTest.test_list_images_param_container_format -tempest.api.image.v2.test_images.ListImagesTest.test_list_images_param_disk_format 
-tempest.api.image.v2.test_images.ListImagesTest.test_list_images_param_limit -tempest.api.image.v2.test_images.ListImagesTest.test_list_images_param_min_max_size -tempest.api.image.v2.test_images.ListImagesTest.test_list_images_param_size -tempest.api.image.v2.test_images.ListImagesTest.test_list_images_param_status -tempest.api.image.v2.test_images.ListImagesTest.test_list_images_param_visibility + +| tempest.api.image.v2.test_images.ListImagesTest.test_list_images_param_container_format +| tempest.api.image.v2.test_images.ListImagesTest.test_list_images_param_disk_format +| tempest.api.image.v2.test_images.ListImagesTest.test_list_images_param_limit +| tempest.api.image.v2.test_images.ListImagesTest.test_list_images_param_min_max_size +| tempest.api.image.v2.test_images.ListImagesTest.test_list_images_param_size +| tempest.api.image.v2.test_images.ListImagesTest.test_list_images_param_status +| tempest.api.image.v2.test_images.ListImagesTest.test_list_images_param_visibility Image update tests using the Glance v2 API ------------------------------------------ -tempest.api.image.v2.test_images.BasicOperationsImagesTest.test_update_image -tempest.api.image.v2.test_images_tags.ImagesTagsTest.test_update_delete_tags_for_image -tempest.api.image.v2.test_images_tags_negative.ImagesTagsNegativeTest.test_update_tags_for_non_existing_image + +| tempest.api.image.v2.test_images.BasicOperationsImagesTest.test_update_image +| tempest.api.image.v2.test_images_tags.ImagesTagsTest.test_update_delete_tags_for_image +| tempest.api.image.v2.test_images_tags_negative.ImagesTagsNegativeTest.test_update_tags_for_non_existing_image ---------------------------------- @@ -189,56 +202,57 @@ Test Area VIM Operations - Network Basic CRUD operations on L2 networks and L2 network ports --------------------------------------------------------- -tempest.api.network.test_networks.NetworksTest.test_create_delete_subnet_all_attributes -tempest.api.network.test_networks.NetworksTest.test_create_delete_subnet_with_allocation_pools -tempest.api.network.test_networks.NetworksTest.test_create_delete_subnet_with_dhcp_enabled -tempest.api.network.test_networks.NetworksTest.test_create_delete_subnet_with_gw -tempest.api.network.test_networks.NetworksTest.test_create_delete_subnet_with_gw_and_allocation_pools -tempest.api.network.test_networks.NetworksTest.test_create_delete_subnet_with_host_routes_and_dns_nameservers -tempest.api.network.test_networks.NetworksTest.test_create_delete_subnet_without_gateway -tempest.api.network.test_networks.NetworksTest.test_create_update_delete_network_subnet -tempest.api.network.test_networks.NetworksTest.test_delete_network_with_subnet -tempest.api.network.test_networks.NetworksTest.test_list_networks -tempest.api.network.test_networks.NetworksTest.test_list_networks_fields -tempest.api.network.test_networks.NetworksTest.test_list_subnets -tempest.api.network.test_networks.NetworksTest.test_list_subnets_fields -tempest.api.network.test_networks.NetworksTest.test_show_network -tempest.api.network.test_networks.NetworksTest.test_show_network_fields -tempest.api.network.test_networks.NetworksTest.test_show_subnet -tempest.api.network.test_networks.NetworksTest.test_show_subnet_fields -tempest.api.network.test_networks.NetworksTest.test_update_subnet_gw_dns_host_routes_dhcp -tempest.api.network.test_ports.PortsTestJSON.test_create_bulk_port -tempest.api.network.test_ports.PortsTestJSON.test_create_port_in_allowed_allocation_pools 
-tempest.api.network.test_ports.PortsTestJSON.test_create_update_delete_port -tempest.api.network.test_ports.PortsTestJSON.test_list_ports -tempest.api.network.test_ports.PortsTestJSON.test_list_ports_fields -tempest.api.network.test_ports.PortsTestJSON.test_show_port -tempest.api.network.test_ports.PortsTestJSON.test_show_port_fields -tempest.api.network.test_ports.PortsTestJSON.test_update_port_with_security_group_and_extra_attributes -tempest.api.network.test_ports.PortsTestJSON.test_update_port_with_two_security_groups_and_extra_attributes +| tempest.api.network.test_networks.NetworksTest.test_create_delete_subnet_all_attributes +| tempest.api.network.test_networks.NetworksTest.test_create_delete_subnet_with_allocation_pools +| tempest.api.network.test_networks.NetworksTest.test_create_delete_subnet_with_dhcp_enabled +| tempest.api.network.test_networks.NetworksTest.test_create_delete_subnet_with_gw +| tempest.api.network.test_networks.NetworksTest.test_create_delete_subnet_with_gw_and_allocation_pools +| tempest.api.network.test_networks.NetworksTest.test_create_delete_subnet_with_host_routes_and_dns_nameservers +| tempest.api.network.test_networks.NetworksTest.test_create_delete_subnet_without_gateway +| tempest.api.network.test_networks.NetworksTest.test_create_update_delete_network_subnet +| tempest.api.network.test_networks.NetworksTest.test_delete_network_with_subnet +| tempest.api.network.test_networks.NetworksTest.test_list_networks +| tempest.api.network.test_networks.NetworksTest.test_list_networks_fields +| tempest.api.network.test_networks.NetworksTest.test_list_subnets +| tempest.api.network.test_networks.NetworksTest.test_list_subnets_fields +| tempest.api.network.test_networks.NetworksTest.test_show_network +| tempest.api.network.test_networks.NetworksTest.test_show_network_fields +| tempest.api.network.test_networks.NetworksTest.test_show_subnet +| tempest.api.network.test_networks.NetworksTest.test_show_subnet_fields +| tempest.api.network.test_networks.NetworksTest.test_update_subnet_gw_dns_host_routes_dhcp +| tempest.api.network.test_ports.PortsTestJSON.test_create_bulk_port +| tempest.api.network.test_ports.PortsTestJSON.test_create_port_in_allowed_allocation_pools +| tempest.api.network.test_ports.PortsTestJSON.test_create_update_delete_port +| tempest.api.network.test_ports.PortsTestJSON.test_list_ports +| tempest.api.network.test_ports.PortsTestJSON.test_list_ports_fields +| tempest.api.network.test_ports.PortsTestJSON.test_show_port +| tempest.api.network.test_ports.PortsTestJSON.test_show_port_fields +| tempest.api.network.test_ports.PortsTestJSON.test_update_port_with_security_group_and_extra_attributes +| tempest.api.network.test_ports.PortsTestJSON.test_update_port_with_two_security_groups_and_extra_attributes Basic CRUD operations on security groups ---------------------------------------- -tempest.api.network.test_security_groups.SecGroupTest.test_create_list_update_show_delete_security_group -tempest.api.network.test_security_groups.SecGroupTest.test_create_security_group_rule_with_additional_args -tempest.api.network.test_security_groups.SecGroupTest.test_create_security_group_rule_with_icmp_type_code -tempest.api.network.test_security_groups.SecGroupTest.test_create_security_group_rule_with_protocol_integer_value -tempest.api.network.test_security_groups.SecGroupTest.test_create_security_group_rule_with_remote_group_id -tempest.api.network.test_security_groups.SecGroupTest.test_create_security_group_rule_with_remote_ip_prefix 
-tempest.api.network.test_security_groups.SecGroupTest.test_create_show_delete_security_group_rule -tempest.api.network.test_security_groups.SecGroupTest.test_list_security_groups -tempest.api.network.test_security_groups_negative.NegativeSecGroupTest.test_create_additional_default_security_group_fails -tempest.api.network.test_security_groups_negative.NegativeSecGroupTest.test_create_duplicate_security_group_rule_fails -tempest.api.network.test_security_groups_negative.NegativeSecGroupTest.test_create_security_group_rule_with_bad_ethertype -tempest.api.network.test_security_groups_negative.NegativeSecGroupTest.test_create_security_group_rule_with_bad_protocol -tempest.api.network.test_security_groups_negative.NegativeSecGroupTest.test_create_security_group_rule_with_bad_remote_ip_prefix -tempest.api.network.test_security_groups_negative.NegativeSecGroupTest.test_create_security_group_rule_with_invalid_ports -tempest.api.network.test_security_groups_negative.NegativeSecGroupTest.test_create_security_group_rule_with_non_existent_remote_groupid -tempest.api.network.test_security_groups_negative.NegativeSecGroupTest.test_create_security_group_rule_with_non_existent_security_group -tempest.api.network.test_security_groups_negative.NegativeSecGroupTest.test_delete_non_existent_security_group -tempest.api.network.test_security_groups_negative.NegativeSecGroupTest.test_show_non_existent_security_group -tempest.api.network.test_security_groups_negative.NegativeSecGroupTest.test_show_non_existent_security_group_rule + +| tempest.api.network.test_security_groups.SecGroupTest.test_create_list_update_show_delete_security_group +| tempest.api.network.test_security_groups.SecGroupTest.test_create_security_group_rule_with_additional_args +| tempest.api.network.test_security_groups.SecGroupTest.test_create_security_group_rule_with_icmp_type_code +| tempest.api.network.test_security_groups.SecGroupTest.test_create_security_group_rule_with_protocol_integer_value +| tempest.api.network.test_security_groups.SecGroupTest.test_create_security_group_rule_with_remote_group_id +| tempest.api.network.test_security_groups.SecGroupTest.test_create_security_group_rule_with_remote_ip_prefix +| tempest.api.network.test_security_groups.SecGroupTest.test_create_show_delete_security_group_rule +| tempest.api.network.test_security_groups.SecGroupTest.test_list_security_groups +| tempest.api.network.test_security_groups_negative.NegativeSecGroupTest.test_create_additional_default_security_group_fails +| tempest.api.network.test_security_groups_negative.NegativeSecGroupTest.test_create_duplicate_security_group_rule_fails +| tempest.api.network.test_security_groups_negative.NegativeSecGroupTest.test_create_security_group_rule_with_bad_ethertype +| tempest.api.network.test_security_groups_negative.NegativeSecGroupTest.test_create_security_group_rule_with_bad_protocol +| tempest.api.network.test_security_groups_negative.NegativeSecGroupTest.test_create_security_group_rule_with_bad_remote_ip_prefix +| tempest.api.network.test_security_groups_negative.NegativeSecGroupTest.test_create_security_group_rule_with_invalid_ports +| tempest.api.network.test_security_groups_negative.NegativeSecGroupTest.test_create_security_group_rule_with_non_existent_remote_groupid +| tempest.api.network.test_security_groups_negative.NegativeSecGroupTest.test_create_security_group_rule_with_non_existent_security_group +| tempest.api.network.test_security_groups_negative.NegativeSecGroupTest.test_delete_non_existent_security_group +| 
tempest.api.network.test_security_groups_negative.NegativeSecGroupTest.test_show_non_existent_security_group +| tempest.api.network.test_security_groups_negative.NegativeSecGroupTest.test_show_non_existent_security_group_rule --------------------------------- @@ -247,117 +261,300 @@ Test Area VIM Operations - Volume Volume attach and detach operations with the Cinder v2 API ---------------------------------------------------------- -tempest.api.volume.test_volumes_actions.VolumesV2ActionsTest.test_attach_detach_volume_to_instance -tempest.api.volume.test_volumes_actions.VolumesV2ActionsTest.test_get_volume_attachment -tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_attach_volumes_with_nonexistent_volume_id -tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_detach_volumes_with_invalid_volume_id + +| tempest.api.volume.test_volumes_actions.VolumesV2ActionsTest.test_attach_detach_volume_to_instance +| tempest.api.volume.test_volumes_actions.VolumesV2ActionsTest.test_get_volume_attachment +| tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_attach_volumes_with_nonexistent_volume_id +| tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_detach_volumes_with_invalid_volume_id Volume service availability zone operations with the Cinder v2 API ------------------------------------------------------------------ -tempest.api.volume.test_availability_zone.AvailabilityZoneV2TestJSON.test_get_availability_zone_list + +| tempest.api.volume.test_availability_zone.AvailabilityZoneV2TestJSON.test_get_availability_zone_list Volume cloning operations with the Cinder v2 API ------------------------------------------------ -tempest.api.volume.test_volumes_get.VolumesV2GetTest.test_volume_create_get_update_delete_as_clone + +| tempest.api.volume.test_volumes_get.VolumesV2GetTest.test_volume_create_get_update_delete_as_clone Image copy-to-volume operations with the Cinder v2 API ------------------------------------------------------ -tempest.api.volume.test_volumes_actions.VolumesV2ActionsTest.test_volume_bootable -tempest.api.volume.test_volumes_get.VolumesV2GetTest.test_volume_create_get_update_delete_from_image + +| tempest.api.volume.test_volumes_actions.VolumesV2ActionsTest.test_volume_bootable +| tempest.api.volume.test_volumes_get.VolumesV2GetTest.test_volume_create_get_update_delete_from_image Volume creation and deletion operations with the Cinder v2 API -------------------------------------------------------------- -tempest.api.volume.test_volumes_get.VolumesV2GetTest.test_volume_create_get_update_delete -tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_create_volume_with_invalid_size -tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_create_volume_with_nonexistent_source_volid -tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_create_volume_with_nonexistent_volume_type -tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_create_volume_with_out_passing_size -tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_create_volume_with_size_negative -tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_create_volume_with_size_zero + +| tempest.api.volume.test_volumes_get.VolumesV2GetTest.test_volume_create_get_update_delete +| tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_create_volume_with_invalid_size +| tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_create_volume_with_nonexistent_source_volid +| 
tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_create_volume_with_nonexistent_volume_type +| tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_create_volume_with_out_passing_size +| tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_create_volume_with_size_negative +| tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_create_volume_with_size_zero Volume service extension listing operations with the Cinder v2 API ------------------------------------------------------------------ -tempest.api.volume.test_extensions.ExtensionsV2TestJSON.test_list_extensions + +| tempest.api.volume.test_extensions.ExtensionsV2TestJSON.test_list_extensions Volume GET operations with the Cinder v2 API -------------------------------------------- -tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_get_invalid_volume_id -tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_get_volume_without_passing_volume_id -tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_volume_get_nonexistent_volume_id + +| tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_get_invalid_volume_id +| tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_get_volume_without_passing_volume_id +| tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_volume_get_nonexistent_volume_id + Volume listing operations with the Cinder v2 API ------------------------------------------------ -tempest.api.volume.test_volumes_list.VolumesV2ListTestJSON.test_volume_list -tempest.api.volume.test_volumes_list.VolumesV2ListTestJSON.test_volume_list_by_name -tempest.api.volume.test_volumes_list.VolumesV2ListTestJSON.test_volume_list_details_by_name -tempest.api.volume.test_volumes_list.VolumesV2ListTestJSON.test_volume_list_param_display_name_and_status -tempest.api.volume.test_volumes_list.VolumesV2ListTestJSON.test_volume_list_with_detail_param_display_name_and_status -tempest.api.volume.test_volumes_list.VolumesV2ListTestJSON.test_volume_list_with_detail_param_metadata -tempest.api.volume.test_volumes_list.VolumesV2ListTestJSON.test_volume_list_with_details -tempest.api.volume.test_volumes_list.VolumesV2ListTestJSON.test_volume_list_with_param_metadata -tempest.api.volume.test_volumes_list.VolumesV2ListTestJSON.test_volumes_list_by_availability_zone -tempest.api.volume.test_volumes_list.VolumesV2ListTestJSON.test_volumes_list_by_status -tempest.api.volume.test_volumes_list.VolumesV2ListTestJSON.test_volumes_list_details_by_availability_zone -tempest.api.volume.test_volumes_list.VolumesV2ListTestJSON.test_volumes_list_details_by_status -tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_list_volumes_detail_with_invalid_status -tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_list_volumes_detail_with_nonexistent_name -tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_list_volumes_with_invalid_status -tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_list_volumes_with_nonexistent_name -tempest.api.volume.v2.test_volumes_list.VolumesV2ListTestJSON.test_volume_list_details_pagination -tempest.api.volume.v2.test_volumes_list.VolumesV2ListTestJSON.test_volume_list_details_with_multiple_params -tempest.api.volume.v2.test_volumes_list.VolumesV2ListTestJSON.test_volume_list_pagination + +| tempest.api.volume.test_volumes_list.VolumesV2ListTestJSON.test_volume_list +| 
tempest.api.volume.test_volumes_list.VolumesV2ListTestJSON.test_volume_list_by_name +| tempest.api.volume.test_volumes_list.VolumesV2ListTestJSON.test_volume_list_details_by_name +| tempest.api.volume.test_volumes_list.VolumesV2ListTestJSON.test_volume_list_param_display_name_and_status +| tempest.api.volume.test_volumes_list.VolumesV2ListTestJSON.test_volume_list_with_detail_param_display_name_and_status +| tempest.api.volume.test_volumes_list.VolumesV2ListTestJSON.test_volume_list_with_detail_param_metadata +| tempest.api.volume.test_volumes_list.VolumesV2ListTestJSON.test_volume_list_with_details +| tempest.api.volume.test_volumes_list.VolumesV2ListTestJSON.test_volume_list_with_param_metadata +| tempest.api.volume.test_volumes_list.VolumesV2ListTestJSON.test_volumes_list_by_availability_zone +| tempest.api.volume.test_volumes_list.VolumesV2ListTestJSON.test_volumes_list_by_status +| tempest.api.volume.test_volumes_list.VolumesV2ListTestJSON.test_volumes_list_details_by_availability_zone +| tempest.api.volume.test_volumes_list.VolumesV2ListTestJSON.test_volumes_list_details_by_status +| tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_list_volumes_detail_with_invalid_status +| tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_list_volumes_detail_with_nonexistent_name +| tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_list_volumes_with_invalid_status +| tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_list_volumes_with_nonexistent_name +| tempest.api.volume.v2.test_volumes_list.VolumesV2ListTestJSON.test_volume_list_details_pagination +| tempest.api.volume.v2.test_volumes_list.VolumesV2ListTestJSON.test_volume_list_details_with_multiple_params +| tempest.api.volume.v2.test_volumes_list.VolumesV2ListTestJSON.test_volume_list_pagination Volume metadata operations with the Cinder v2 API ------------------------------------------------- -tempest.api.volume.test_volume_metadata.VolumesV2MetadataTest.test_create_get_delete_volume_metadata -tempest.api.volume.test_volume_metadata.VolumesV2MetadataTest.test_update_volume_metadata_item +| tempest.api.volume.test_volume_metadata.VolumesV2MetadataTest.test_create_get_delete_volume_metadata +| tempest.api.volume.test_volume_metadata.VolumesV2MetadataTest.test_update_volume_metadata_item Verification of read-only status on volumes with the Cinder v2 API ------------------------------------------------------------------ -tempest.api.volume.test_volumes_actions.VolumesV2ActionsTest.test_volume_readonly_update + +| tempest.api.volume.test_volumes_actions.VolumesV2ActionsTest.test_volume_readonly_update Volume reservation operations with the Cinder v2 API ---------------------------------------------------- -tempest.api.volume.test_volumes_actions.VolumesV2ActionsTest.test_reserve_unreserve_volume -tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_reserve_volume_with_negative_volume_status -tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_reserve_volume_with_nonexistent_volume_id -tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_unreserve_volume_with_nonexistent_volume_id + +| tempest.api.volume.test_volumes_actions.VolumesV2ActionsTest.test_reserve_unreserve_volume +| tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_reserve_volume_with_negative_volume_status +| tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_reserve_volume_with_nonexistent_volume_id +| 
tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_unreserve_volume_with_nonexistent_volume_id Volume snapshot creation/deletion operations with the Cinder v2 API ------------------------------------------------------------------- -tempest.api.volume.test_snapshot_metadata.SnapshotV2MetadataTestJSON.test_create_get_delete_snapshot_metadata -tempest.api.volume.test_snapshot_metadata.SnapshotV2MetadataTestJSON.test_update_snapshot_metadata_item -tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_create_volume_with_nonexistent_snapshot_id -tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_delete_invalid_volume_id -tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_delete_volume_without_passing_volume_id -tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_volume_delete_nonexistent_volume_id -tempest.api.volume.test_volumes_snapshots.VolumesV2SnapshotTestJSON.test_snapshot_create_get_list_update_delete -tempest.api.volume.test_volumes_snapshots.VolumesV2SnapshotTestJSON.test_volume_from_snapshot -tempest.api.volume.test_volumes_snapshots.VolumesV2SnapshotTestJSON.test_snapshots_list_details_with_params -tempest.api.volume.test_volumes_snapshots.VolumesV2SnapshotTestJSON.test_snapshots_list_with_params -tempest.api.volume.test_volumes_snapshots_negative.VolumesV2SnapshotNegativeTestJSON.test_create_snapshot_with_nonexistent_volume_id -tempest.api.volume.test_volumes_snapshots_negative.VolumesV2SnapshotNegativeTestJSON.test_create_snapshot_without_passing_volume_id + +| tempest.api.volume.test_snapshot_metadata.SnapshotV2MetadataTestJSON.test_create_get_delete_snapshot_metadata +| tempest.api.volume.test_snapshot_metadata.SnapshotV2MetadataTestJSON.test_update_snapshot_metadata_item +| tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_create_volume_with_nonexistent_snapshot_id +| tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_delete_invalid_volume_id +| tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_delete_volume_without_passing_volume_id +| tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_volume_delete_nonexistent_volume_id +| tempest.api.volume.test_volumes_snapshots.VolumesV2SnapshotTestJSON.test_snapshot_create_get_list_update_delete +| tempest.api.volume.test_volumes_snapshots.VolumesV2SnapshotTestJSON.test_volume_from_snapshot +| tempest.api.volume.test_volumes_snapshots.VolumesV2SnapshotTestJSON.test_snapshots_list_details_with_params +| tempest.api.volume.test_volumes_snapshots.VolumesV2SnapshotTestJSON.test_snapshots_list_with_params +| tempest.api.volume.test_volumes_snapshots_negative.VolumesV2SnapshotNegativeTestJSON.test_create_snapshot_with_nonexistent_volume_id +| tempest.api.volume.test_volumes_snapshots_negative.VolumesV2SnapshotNegativeTestJSON.test_create_snapshot_without_passing_volume_id Volume update operations with the Cinder v2 API ----------------------------------------------- -tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_update_volume_with_empty_volume_id -tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_update_volume_with_invalid_volume_id -tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_update_volume_with_nonexistent_volume_id + +| tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_update_volume_with_empty_volume_id +| tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_update_volume_with_invalid_volume_id +| 
tempest.api.volume.test_volumes_negative.VolumesV2NegativeTest.test_update_volume_with_nonexistent_volume_id + + +--------------------------- +Test Area High Availability +--------------------------- + +Verify high availability of OpenStack controller services +---------------------------------------------------------- + +| opnfv.ha.tc001.nova-api_service_down +| opnfv.ha.tc003.neutron-server_service_down +| opnfv.ha.tc004.keystone_service_down +| opnfv.ha.tc005.glance-api_service_down +| opnfv.ha.tc006.cinder-api_service_down +| opnfv.ha.tc009.cpu_overload +| opnfv.ha.tc010.disk_I/O_block +| opnfv.ha.tc011.load_balance_service_down + +---------------------------------------- +Test Area vPing - Basic VNF Connectivity +---------------------------------------- + +| opnfv.vping.userdata +| opnfv.vping.ssh Optional CVP Test Areas ======================== + +----------------- +Test Area BGP VPN +----------------- + +Verify association and disassociation of a node using route targets +--------------------------------------------------------------------- + +| opnfv.sdnvpn.subnet_connectivity +| opnfv.sdnvpn.tenant_separation +| opnfv.sdnvpn.router_association +| opnfv.sdnvpn.router_association_floating_ip + +-------------------------------------------------- +IPv6 Compliance Testing Methodology and Test Cases +-------------------------------------------------- + +Test Case 1: Create and Delete an IPv6 Network, Port and Subnet +--------------------------------------------------------------- + +| tempest.api.network.test_networks.BulkNetworkOpsIpV6Test.test_bulk_create_delete_network +| tempest.api.network.test_networks.BulkNetworkOpsIpV6Test.test_bulk_create_delete_port +| tempest.api.network.test_networks.BulkNetworkOpsIpV6Test.test_bulk_create_delete_subnet + +Test Case 2: Create, Update and Delete an IPv6 Network and Subnet +----------------------------------------------------------------- + +| tempest.api.network.test_networks.NetworksIpV6Test.test_create_update_delete_network_subnet + +Test Case 3: Check External Network Visibility +---------------------------------------------- + +| tempest.api.network.test_networks.NetworksIpV6Test.test_external_network_visibility + +Test Case 4: List IPv6 Networks and Subnets of a Tenant +------------------------------------------------------- + +| tempest.api.network.test_networks.NetworksIpV6Test.test_list_networks +| tempest.api.network.test_networks.NetworksIpV6Test.test_list_subnets + +Test Case 5: Show Information of an IPv6 Network and Subnet +----------------------------------------------------------- + +| tempest.api.network.test_networks.NetworksIpV6Test.test_show_network +| tempest.api.network.test_networks.NetworksIpV6Test.test_show_subnet + +Test Case 6: Create an IPv6 Port in Allowed Allocation Pools +------------------------------------------------------------ + +| tempest.api.network.test_ports.PortsIpV6TestJSON.test_create_port_in_allowed_allocation_pools + +Test Case 7: Create an IPv6 Port without Security Groups +-------------------------------------------------------- + +| tempest.api.network.test_ports.PortsIpV6TestJSON.test_create_port_with_no_securitygroups + +Test Case 8: Create, Update and Delete an IPv6 Port +--------------------------------------------------- + +| tempest.api.network.test_ports.PortsIpV6TestJSON.test_create_update_delete_port + +Test Case 9: List IPv6 Ports of a Tenant +---------------------------------------- + +| tempest.api.network.test_ports.PortsIpV6TestJSON.test_list_ports + +Test Case 10: Show Information of an
IPv6 Port +---------------------------------------------- + +| tempest.api.network.test_ports.PortsIpV6TestJSON.test_show_port + +Test Case 11: Add Multiple Interfaces for an IPv6 Router +-------------------------------------------------------- + +| tempest.api.network.test_routers.RoutersIpV6Test.test_add_multiple_router_interfaces + +Test Case 12: Add and Remove an IPv6 Router Interface with port_id +------------------------------------------------------------------ + +| tempest.api.network.test_routers.RoutersIpV6Test.test_add_remove_router_interface_with_port_id + +Test Case 13: Add and Remove an IPv6 Router Interface with subnet_id +-------------------------------------------------------------------- + +| tempest.api.network.test_routers.RoutersIpV6Test.test_add_remove_router_interface_with_subnet_id + +Test Case 14: Create, Update, Delete, List and Show an IPv6 Router +------------------------------------------------------------------ + +| tempest.api.network.test_routers.RoutersIpV6Test.test_create_show_list_update_delete_router + +Test Case 15: Create, Update, Delete, List and Show an IPv6 Security Group +-------------------------------------------------------------------------- + +| tempest.api.network.test_security_groups.SecGroupIPv6Test.test_create_list_update_show_delete_security_group + +Test Case 16: Create, Delete and Show Security Group Rules +---------------------------------------------------------- + +| tempest.api.network.test_security_groups.SecGroupIPv6Test.test_create_show_delete_security_group_rule + +Test Case 17: List All Security Groups +-------------------------------------- + +| tempest.api.network.test_security_groups.SecGroupIPv6Test.test_list_security_groups + +Test Case 18: IPv6 Address Assignment - Dual Stack, SLAAC, DHCPv6 Stateless +--------------------------------------------------------------------------- + +| tempest.scenario.test_network_v6.TestGettingAddress.test_dhcp6_stateless_from_os + +Test Case 19: IPv6 Address Assignment - Dual Net, Dual Stack, SLAAC, DHCPv6 Stateless +------------------------------------------------------------------------------------- + +| tempest.scenario.test_network_v6.TestGettingAddress.test_dualnet_dhcp6_stateless_from_os + +Test Case 20: IPv6 Address Assignment - Multiple Prefixes, Dual Stack, SLAAC, DHCPv6 Stateless +---------------------------------------------------------------------------------------------- + +| tempest.scenario.test_network_v6.TestGettingAddress.test_multi_prefix_dhcpv6_stateless + +Test Case 21: IPv6 Address Assignment - Dual Net, Multiple Prefixes, Dual Stack, SLAAC, DHCPv6 Stateless +-------------------------------------------------------------------------------------------------------- + +| tempest.scenario.test_network_v6.TestGettingAddress.test_dualnet_multi_prefix_dhcpv6_stateless + +Test Case 22: IPv6 Address Assignment - Dual Stack, SLAAC +--------------------------------------------------------- + +| tempest.scenario.test_network_v6.TestGettingAddress.test_slaac_from_os + +Test Case 23: IPv6 Address Assignment - Dual Net, Dual Stack, SLAAC +------------------------------------------------------------------- + +| tempest.scenario.test_network_v6.TestGettingAddress.test_dualnet_slaac_from_os + +Test Case 24: IPv6 Address Assignment - Multiple Prefixes, Dual Stack, SLAAC +---------------------------------------------------------------------------- + +| tempest.scenario.test_network_v6.TestGettingAddress.test_multi_prefix_slaac + +Test Case 25: IPv6 Address Assignment - Dual Net, Dual 
Stack, Multiple Prefixes, SLAAC +-------------------------------------------------------------------------------------- + +| tempest.scenario.test_network_v6.TestGettingAddress.test_dualnet_multi_prefix_slaac + diff --git a/docs/testing/user/testspecification/highavailability/index.rst b/docs/testing/user/testspecification/highavailability/index.rst index e69de29b..715f84d0 100644 --- a/docs/testing/user/testspecification/highavailability/index.rst +++ b/docs/testing/user/testspecification/highavailability/index.rst @@ -0,0 +1,743 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International +.. License. +.. http://creativecommons.org/licenses/by/4.0 +.. (c) OPNFV, China Mobile and others. + +========================================== +OpenStack Services HA test specification +========================================== + +.. toctree:: +   :maxdepth: 2 + +Scope +===== + +The HA test area evaluates the ability of the System Under Test to support service +continuity and recovery from component failures in a subset of the OpenStack controller services ("nova-api", +"neutron-server", "keystone", "glance-api", "cinder-api") and in the "load balancer" service. + +The tests in this test area will emulate component failures by killing the +processes of the above target services, stressing the CPU load or blocking +disk I/O on the selected controller node, and then check if the impacted +services are still available and the killed processes are recovered on the +selected controller node within a given time interval. + + +References +================ + +This test area references the following specifications: + +- ETSI GS NFV-REL 001 + + - http://www.etsi.org/deliver/etsi_gs/NFV-REL/001_099/001/01.01.01_60/gs_nfv-rel001v010101p.pdf + +- OpenStack High Availability Guide + + - https://docs.openstack.org/ha-guide/ + + +Definitions and abbreviations +============================= + +The following terms and abbreviations are used in conjunction with this test area: + +- SUT - system under test +- Monitor - tools used to measure the service outage time and the process + outage time +- Service outage time - the outage time (seconds) of the specific OpenStack + service +- Process outage time - the outage time (seconds) from the specific processes + being killed until they are recovered + + +System Under Test (SUT) +======================= + +The system under test is assumed to be the NFVI and VIM in operation on a +Pharos-compliant infrastructure. + +The SUT is assumed to be in a high availability configuration, which typically means +that more than one controller node is in the System Under Test. + +Test Area Structure +==================== + +The HA test area is structured with the following test cases in a sequential +manner. + +Each test case is able to run independently. The failure of a preceding test case will +not affect the subsequent test cases. + +Preconditions of each test case will be described in the following test +descriptions. + + +Test Descriptions +================= + +--------------------------------------------------------------- +Test Case 1 - Controller node OpenStack service down - nova-api +--------------------------------------------------------------- + +Short name +---------- + +opnfv.ha.tc001.nova-api_service_down + +Use case specification +---------------------- + +This test case verifies the service continuity capability in the face of a +software process failure.
It kills the processes of the OpenStack "nova-api" +service on the selected controller node, then checks whether the "nova-api" +service is still available during the failure, by creating a VM and then deleting +the VM, and checks whether the killed processes are recovered within a given +time interval. + + +Test preconditions +------------------ + +There is more than one controller node providing the "nova-api" +service for the API end-point. +Denote one controller node as Node1 in the following configuration. + + +Basic test flow execution description and pass/fail criteria +------------------------------------------------------------- + +Methodology for verifying service continuity and recovery +'''''''''''''''''''''''''''''''''''''''''''''''''''''''''' + +The service continuity and process recovery capabilities of the "nova-api" service +are evaluated by monitoring service outage time, process outage time, and results +of nova operations. + +Service outage time is measured by continuously executing the "openstack server list" +command in a loop and checking if the response of the command request is returned +with no failure. +When the response fails, the "nova-api" service is considered in outage. +The time between the first response failure and the last response failure is +considered as the service outage time. + +Process outage time is measured by checking the status of "nova-api" processes on +the selected controller node. The time from the "nova-api" processes being killed to +the time the "nova-api" processes are recovered is the process outage time. +Process recovery is verified by checking the existence of "nova-api" processes. + +If all nova operations are carried out correctly within a given time interval, it +suggests that the "nova-api" service is continuously available. + +Test execution +'''''''''''''' +* Test action 1: Connect to Node1 through SSH, and check that "nova-api" + processes are running on Node1 +* Test action 2: Create an image with "openstack image create test-cirros + --file cirros-0.3.5-x86_64-disk.img --disk-format qcow2 --container-format bare" +* Test action 3: Execute "openstack flavor create m1.test --id auto --ram 512 + --disk 1 --vcpus 1" to create flavor "m1.test". +* Test action 4: Start two monitors: one for the "nova-api" processes and the other + for the "openstack server list" command. + Each monitor will run as an independent process. +* Test action 5: Connect to Node1 through SSH, and then kill the "nova-api" + processes +* Test action 6: When "openstack server list" returns with no error, calculate + the service outage time, and execute the command "openstack server create + --flavor m1.test --image test-cirros test-instance" +* Test action 7: Continuously execute "openstack server show test-instance" + to check if the status of VM "test-instance" is "Active" +* Test action 8: If VM "test-instance" is "Active", execute "openstack server + delete test-instance", then execute "openstack server list" to check if the + VM is not in the list +* Test action 9: Continuously measure process outage time from the monitor until + the process outage time is more than 30s + +Pass / fail criteria +'''''''''''''''''''' + +The process outage time is less than 30s. + +The service outage time is less than 5s. + +The nova operations are carried out in the above order and no errors occur. + +A negative result will be generated if the above is not fully met. + +Post conditions +--------------- + +Restart the "nova-api" processes if they are not running.
+Delete image with "openstack image delete test-cirros" +Delete flavor with "openstack flavor delete m1.test" + + +--------------------------------------------------------------------- +Test Case 2 - Controller node OpenStack service down - neutron-server +--------------------------------------------------------------------- + +Short name +---------- + +opnfv.ha.tc002.neutron-server_service_down + +Use case specification +---------------------- + +This test verifies the high availability of the "neutron-server" service +provided by OpenStack controller nodes. It kills the processes of OpenStack +"neutron-server" service on the selected controller node, then checks whether +the "neutron-server" service is still available, by creating a network and +deleting the network, and checks whether the killed processes are recovered. + +Test preconditions +------------------ + +There is more than one controller node, which is providing the "neutron-server" +service for API end-point. +Denoted a controller node as Node1 in the following configuration. + +Basic test flow execution description and pass/fail criteria +------------------------------------------------------------ + +Methodology for monitoring high availability +'''''''''''''''''''''''''''''''''''''''''''' + +The high availability of "neutron-server" service is evaluated by monitoring +service outage time, process outage time, and results of neutron operations. + +Service outage time is tested by continuously executing "openstack router list" +command in loop and checking if the response of the command request is returned +with no failure. +When the response fails, the "neutron-server" service is considered in outage. +The time between the first response failure and the last response failure is +considered as service outage time. + +Process outage time is tested by checking the status of "neutron-server" +processes on the selected controller node. The time of "neutron-server" +processes being killed to the time of the "neutron-server" processes being +recovered is the process outage time. Process recovery is verified by checking +the existence of "neutron-server" processes. + +Test execution +'''''''''''''' + +* Test action 1: Connect to Node1 through SSH, and check that "neutron-server" + processes are running on Node1 +* Test action 2: Start two monitors: one for "neutron-server" process and the + other for "openstack router list" command. + Each monitor will run as an independent process. +* Test action 3: Connect to Node1 through SSH, and then kill the + "neutron-server" processes +* Test action 4: When "openstack router list" returns with no error, calculate + the service outage time, and execute "openstack network create test-network" +* Test action 5: Continuously executing "openstack network show test-network", + check if the status of "test-network" is "Active" +* Test action 6: If "test-network" is "Active", execute "openstack network + delete test-network", then execute "openstack network list" to check if the + "test-network" is not in the list +* Test action 7: Continuously measure process outage time from the monitor until + the process outage time is more than 30s + +Pass / fail criteria +'''''''''''''''''''' + +The process outage time is less than 30s. + +The service outage time is less than 5s. + +The neutron operations are carried out in above order and no errors occur. + +A negative result will be generated if the above is not met in completion. 
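+
+The process recovery check described in the methodology above can be sketched
+with a small shell script such as the one below. The controller host name, the
+SSH user and the plain process-name matching with pkill/pgrep are illustrative
+assumptions, not part of this specification; deployments that run
+"neutron-server" under a web server or uwsgi will need a different match.
+
+.. code-block:: bash
+
+   #!/bin/bash
+   # Kill the neutron-server processes on Node1 and measure the time until
+   # they are running again (process outage time, pass criterion: < 30s).
+   NODE=node1        # illustrative host name for the selected controller
+
+   ssh root@"$NODE" "pkill -9 neutron-server"
+   start=$(date +%s)
+
+   # Poll until at least one neutron-server process exists again.
+   until ssh root@"$NODE" "pgrep neutron-server > /dev/null"; do
+       sleep 1
+   done
+
+   echo "process outage time: $(( $(date +%s) - start ))s"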
+
+Post conditions
+---------------
+
+Restart the processes of "neutron-server" if they are not running.
+
+
+---------------------------------------------------------------
+Test Case 3 - Controller node OpenStack service down - keystone
+---------------------------------------------------------------
+
+Short name
+----------
+
+opnfv.ha.tc003.keystone_service_down
+
+Use case specification
+----------------------
+
+This test verifies the high availability of the "keystone" service provided by
+OpenStack controller nodes. It kills the processes of OpenStack "keystone"
+service on the selected controller node, then checks whether the "keystone"
+service is still available by executing command "openstack user list" and
+whether the killed processes are recovered.
+
+Test preconditions
+------------------
+
+There is more than one controller node, which is providing the "keystone"
+service for API end-point.
+Denoted a controller node as Node1 in the following configuration.
+
+Basic test flow execution description and pass/fail criteria
+------------------------------------------------------------
+
+Methodology for monitoring high availability
+''''''''''''''''''''''''''''''''''''''''''''
+
+The high availability of "keystone" service is evaluated by monitoring service
+outage time and process outage time.
+
+Service outage time is tested by continuously executing "openstack user list"
+command in loop and checking if the response of the command request is returned
+with no failure.
+When the response fails, the "keystone" service is considered in outage.
+The time between the first response failure and the last response failure is
+considered as service outage time.
+
+Process outage time is tested by checking the status of "keystone" processes on
+the selected controller node. The time of "keystone" processes being killed to
+the time of the "keystone" processes being recovered is the process outage
+time. Process recovery is verified by checking the existence of "keystone"
+processes.
+
+Test execution
+''''''''''''''
+
+* Test action 1: Connect to Node1 through SSH, and check that "keystone"
+  processes are running on Node1
+* Test action 2: Start two monitors: one for "keystone" process and the other
+  for "openstack user list" command.
+  Each monitor will run as an independent process.
+* Test action 3: Connect to Node1 through SSH, and then kill the "keystone"
+  processes
+* Test action 4: Continuously measure service outage time from the monitor until
+  the service outage time is more than 5s
+* Test action 5: Continuously measure process outage time from the monitor until
+  the process outage time is more than 30s
+
+Pass / fail criteria
+''''''''''''''''''''
+
+The process outage time is less than 30s.
+
+The service outage time is less than 5s.
+
+A negative result will be generated if the above is not met in completion.
+
+Post conditions
+---------------
+
+Restart the processes of "keystone" if they are not running.
+
+
+-----------------------------------------------------------------
+Test Case 4 - Controller node OpenStack service down - glance-api
+-----------------------------------------------------------------
+
+Short name
+----------
+
+opnfv.ha.tc004.glance-api_service_down
+
+Use case specification
+----------------------
+
+This test verifies the high availability of the "glance-api" service provided
+by OpenStack controller nodes.
It kills the processes of OpenStack "glance-api" +service on the selected controller node, then checks whether the "glance-api" +service is still available, by creating image and deleting image, and checks +whether the killed processes are recovered. + +Test preconditions +------------------ + +There is more than one controller node, which is providing the "glance-api" +service for API end-point. +Denoted a controller node as Node1 in the following configuration. + + +Basic test flow execution description and pass/fail criteria +------------------------------------------------------------ + +Methodology for monitoring high availability +'''''''''''''''''''''''''''''''''''''''''''' + +The high availability of "glance-api" service is evaluated by monitoring +service outage time, process outage time, and results of glance operations. + +Service outage time is tested by continuously executing "openstack image list" +command in loop and checking if the response of the command request is returned +with no failure. +When the response fails, the "glance-api" service is considered in outage. +The time between the first response failure and the last response failure is +considered as service outage time. + +Process outage time is tested by checking the status of "glance-api" processes +on the selected controller node. The time of "glance-api" processes being +killed to the time of the "glance-api" processes being recovered is the process +outage time. Process recovery is verified by checking the existence of +"glance-api" processes. + +Test execution +'''''''''''''' + +* Test action 1: Connect to Node1 through SSH, and check that "glance-api" + processes are running on Node1 +* Test action 2: Start two monitors: one for "glance-api" process and the other + for "openstack image list" command. + Each monitor will run as an independent process. +* Test action 3: Connect to Node1 through SSH, and then kill the "glance-api" + processes +* Test action 4: When "openstack image list" returns with no error, calculate + the service outage time, and execute "openstack image create test-image + --file cirros-0.3.5-x86_64-disk.img --disk-format qcow2 --container-format bare" +* Test action 5: Continuously execute "openstack image show test-image", check + if status of "test-image" is "active" +* Test action 6: If "test-image" is "active", execute "openstack image delete + test-image". Then execute "openstack image list" to check if "test-image" is + not in the list +* Test action 7: Continuously measure process outage time from the monitor until + the process outage time is more than 30s + +Pass / fail criteria +'''''''''''''''''''' + +The process outage time is less than 30s. + +The service outage time is less than 5s. + +The glance operations are carried out in above order and no errors occur. + +A negative result will be generated if the above is not met in completion. + +Post conditions +--------------- + +Restart the processes of "glance-api" if they are not running. + +Delete image with "openstack image delete test-image". + + +----------------------------------------------------------------- +Test Case 5 - Controller node OpenStack service down - cinder-api +----------------------------------------------------------------- + +Short name +---------- + +opnfv.ha.tc005.cinder-api_service_down + +Use case specification +---------------------- + +This test verifies the high availability of the "cinder-api" service provided +by OpenStack controller nodes. 
It kills the processes of OpenStack "cinder-api" +service on the selected controller node, then checks whether the "cinder-api" +service is still available by executing command "openstack volume list" and +whether the killed processes are recovered. + +Test preconditions +------------------ + +There is more than one controller node, which is providing the "cinder-api" +service for API end-point. +Denoted a controller node as Node1 in the following configuration. + +Basic test flow execution description and pass/fail criteria +------------------------------------------------------------ + +Methodology for monitoring high availability +'''''''''''''''''''''''''''''''''''''''''''' + +The high availability of "cinder-api" service is evaluated by monitoring +service outage time and process outage time + +Service outage time is tested by continuously executing "openstack volume list" +command in loop and checking if the response of the command request is returned +with no failure. +When the response fails, the "cinder-api" service is considered in outage. +The time between the first response failure and the last response failure is +considered as service outage time. + +Process outage time is tested by checking the status of "cinder-api" processes +on the selected controller node. The time of "cinder-api" processes being +killed to the time of the "cinder-api" processes being recovered is the process +outage time. Process recovery is verified by checking the existence of +"cinder-api" processes. + +Test execution +'''''''''''''' + +* Test action 1: Connect to Node1 through SSH, and check that "cinder-api" + processes are running on Node1 +* Test action 2: Start two monitors: one for "cinder-api" process and the other + for "openstack volume list" command. + Each monitor will run as an independent process. +* Test action 3: Connect to Node1 through SSH, and then execute kill the + "cinder-api" processes +* Test action 4: Continuously measure service outage time from the monitor until + the service outage time is more than 5s +* Test action 5: Continuously measure process outage time from the monitor until + the process outage time is more than 30s + +Pass / fail criteria +'''''''''''''''''''' + +The process outage time is less than 30s. + +The service outage time is less than 5s. + +The cinder operations are carried out in above order and no errors occur. + +A negative result will be generated if the above is not met in completion. + +Post conditions +--------------- + +Restart the processes of "cinder-api" if they are not running. + + +------------------------------------------------------------ +Test Case 6 - Controller Node CPU Overload High Availability +------------------------------------------------------------ + +Short name +---------- + +opnfv.ha.tc006.cpu_overload + +Use case specification +---------------------- + +This test verifies the availability of services when one of the controller node +suffers from heavy CPU overload. When the CPU usage of the specified controller +node is up to 100%, which breaks down the OpenStack services on this node, +the Openstack services should continue to be available. This test case stresses +the CPU usage of a specific controller node to 100%, then checks whether all +services provided by the SUT are still available with the monitor tools. + +Test preconditions +------------------ + +There is more than one controller node, which is providing the "cinder-api", +"neutron-server", "glance-api" and "keystone" services for API end-point. 
+Denoted a controller node as Node1 in the following configuration. + +Basic test flow execution description and pass/fail criteria +------------------------------------------------------------ + +Methodology for monitoring high availability +'''''''''''''''''''''''''''''''''''''''''''' + +The high availability of related OpenStack service is evaluated by monitoring service +outage time + +Service outage time is tested by continuously executing "openstack router list", +"openstack stack list", "openstack volume list", "openstack image list" commands +in loop and checking if the response of the command request is returned with no +failure. +When the response fails, the related service is considered in outage. The time +between the first response failure and the last response failure is considered +as service outage time. + + +Methodology for stressing CPU usage +''''''''''''''''''''''''''''''''''' + +To evaluate the high availability of target OpenStack service under heavy CPU +load, the test case will first get the number of logical CPU cores on the +target controller node by shell command, then use the number to execute 'dd' +command to continuously copy from /dev/zero and output to /dev/null in loop. +The 'dd' operation only uses CPU, no I/O operation, which is ideal for +stressing the CPU usage. + +Since the 'dd' command is continuously executed and the CPU usage rate is +stressed to 100%, the scheduler will schedule each 'dd' command to be +processed on a different logical CPU core. Eventually to achieve all logical +CPU cores usage rate to 100%. + +Test execution +'''''''''''''' + +* Test action 1: Start four monitors: one for "openstack image list" command, + one for "openstack router list" command, one for "openstack stack list" + command and the last one for "openstack volume list" command. Each monitor + will run as an independent process. +* Test action 2: Connect to Node1 through SSH, and then stress all logical CPU + cores usage rate to 100% +* Test action 3: Continuously measure all the service outage times until they are + more than 5s +* Test action 4: Kill the process that stresses the CPU usage + +Pass / fail criteria +'''''''''''''''''''' + +All the service outage times are less than 5s. + +A negative result will be generated if the above is not met in completion. + +Post conditions +--------------- + +No impact on the SUT. + + +----------------------------------------------------------------- +Test Case 7 - Controller Node Disk I/O Overload High Availability +----------------------------------------------------------------- + +Short name +---------- + +opnfv.ha.tc007.disk_I/O_overload + +Use case specification +---------------------- + +This test verifies the high availability of control node. When the disk I/O of +the specific disk is overload, which breaks down the OpenStack services on this +node, the read and write services should continue to be available. This test +case blocks the disk I/O of the specific controller node, then checks whether +the services that need to read or write the disk of the controller node are +available with some monitor tools. + +Test preconditions +------------------ + +There is more than one controller node. +Denoted a controller node as Node1 in the following configuration. +The controller node has at least 20GB free disk space. 
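+
+The disk I/O stress referred to in the methodology below can be emulated on
+Node1 with a loop such as the following sketch. The 8 KB block size and the
+/test.dbf path follow the description in this test case; the block count, the
+"conv=fdatasync" flag and running the loop in the background are illustrative
+assumptions.
+
+.. code-block:: bash
+
+   #!/bin/bash
+   # Keep rewriting /test.dbf in 8 KB blocks to saturate disk I/O on Node1.
+   # The loop runs until it is stopped and /test.dbf is deleted (test action 6).
+   while true; do
+       dd if=/dev/zero of=/test.dbf bs=8k count=100000 conv=fdatasync \
+           > /dev/null 2>&1
+   done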
+ +Basic test flow execution description and pass/fail criteria +------------------------------------------------------------ + +Methodology for monitoring high availability +'''''''''''''''''''''''''''''''''''''''''''' + +The high availability of nova service is evaluated by monitoring +service outage time + +Service availability is tested by continuously executing +"openstack flavor list" command in loop and checking if the response of the +command request is returned with no failure. +When the response fails, the related service is considered in outage. + + +Methodology for stressing disk I/O +'''''''''''''''''''''''''''''''''' + +To evaluate the high availability of target OpenStack service under heavy I/O +load, the test case will execute shell command on the selected controller node +to continuously writing 8kb blocks to /test.dbf + +Test execution +'''''''''''''' + +* Test action 1: Connect to Node1 through SSH, and then stress disk I/O by + continuously writing 8kb blocks to /test.dbf +* Test action 2: Start a monitor: for "openstack flavor list" command +* Test action 3: Create a flavor called "test-001" +* Test action 4: Check whether the flavor "test-001" is created +* Test action 5: Continuously measure service outage time from the monitor + until the service outage time is more than 5s +* Test action 6: Stop writing to /test.dbf and delete file /test.dbf + +Pass / fail criteria +'''''''''''''''''''' + +The service outage time is less than 5s. + +The nova operations are carried out in above order and no errors occur. + +A negative result will be generated if the above is not met in completion. + +Post conditions +--------------- + +Delete flavor with "openstack flavor delete test-001". + +-------------------------------------------------------------------- +Test Case 8 - Controller Load Balance as a Service High Availability +-------------------------------------------------------------------- + +Short name +---------- + +opnfv.ha.tc008.load_balance_service_down + +Use case specification +---------------------- + +This test verifies the high availability of "load balancer" service. When +the "load balancer" service of a specified controller node is killed, whether +"load balancer" service on other controller nodes will work, and whether the +controller node will restart the "load balancer" service are checked. This +test case kills the processes of "load balancer" service on the selected +controller node, then checks whether the request of the related OpenStack +command is processed with no failure and whether the killed processes are +recovered. + +Test preconditions +------------------ + +There is more than one controller node, which is providing the "load balancer" +service for rest-api. Denoted as Node1 in the following configuration. + +Basic test flow execution description and pass/fail criteria +------------------------------------------------------------ + +Methodology for monitoring high availability +'''''''''''''''''''''''''''''''''''''''''''' + +The high availability of "load balancer" service is evaluated by monitoring +service outage time and process outage time + +Service outage time is tested by continuously executing "openstack image list" +command in loop and checking if the response of the command request is returned +with no failure. +When the response fails, the "load balancer" service is considered in outage. +The time between the first response failure and the last response failure is +considered as service outage time. 
+ +Process outage time is tested by checking the status of processes of "load +balancer" service on the selected controller node. The time of those processes +being killed to the time of those processes being recovered is the process +outage time. +Process recovery is verified by checking the existence of processes of "load +balancer" service. + +Test execution +'''''''''''''' + +* Test action 1: Connect to Node1 through SSH, and check that processes of + "load balancer" service are running on Node1 +* Test action 2: Start two monitors: one for processes of "load balancer" + service and the other for "openstack image list" command. Each monitor will + run as an independent process +* Test action 3: Connect to Node1 through SSH, and then kill the processes of + "load balancer" service +* Test action 4: Continuously measure service outage time from the monitor until + the service outage time is more than 5s +* Test action 5: Continuously measure process outage time from the monitor until + the process outage time is more than 30s + +Pass / fail criteria +'''''''''''''''''''' + +The process outage time is less than 30s. + +The service outage time is less than 5s. + +A negative result will be generated if the above is not met in completion. + +Post conditions +--------------- +Restart the processes of "load balancer" if they are not running. + + + diff --git a/docs/testing/user/testspecification/ipv6/index.rst b/docs/testing/user/testspecification/ipv6/index.rst new file mode 100644 index 00000000..c3dc844b --- /dev/null +++ b/docs/testing/user/testspecification/ipv6/index.rst @@ -0,0 +1,1787 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International License. +.. http://creativecommons.org/licenses/by/4.0 +.. (c) OPNFV + +======================== +IPv6 test specification +======================== + +.. toctree:: + :maxdepth: 2 + +Scope +===== + +The IPv6 test area will evaluate the ability for a SUT to support IPv6 +Tenant Network features and functionality. The tests in this test area will +evaluate, + +- network, subnet, port, router API CRUD operations +- interface add and remove operations +- security group and security group rule API CRUD operations +- IPv6 address assignment with dual stack, dual net, multiprefix in mode DHCPv6 stateless or SLAAC + +References +================ + +- upstream openstack API reference + + - http://developer.openstack.org/api-ref + +- upstream openstack IPv6 reference + + - https://docs.openstack.org/newton/networking-guide/config-ipv6.html + +Definitions and abbreviations +============================= + +The following terms and abbreviations are used in conjunction with this test area + +- API - Application Programming Interface +- CIDR - Classless Inter-Domain Routing +- CRUD - Create, Read, Update, and Delete +- DHCP - Dynamic Host Configuration Protocol +- DHCPv6 - Dynamic Host Configuration Protocol version 6 +- ICMP - Internet Control Message Protocol +- NFVI - Network Functions Virtualization Infrastructure +- NIC - Network Interface Controller +- RA - Router Advertisements +- radvd - The Router Advertisement Daemon +- SDN - Software Defined Network +- SLAAC - Stateless Address Auto Configuration +- TCP - Transmission Control Protocol +- UDP - User Datagram Protocol +- VM - Virtual Machine +- vNIC - virtual Network Interface Card + +System Under Test (SUT) +======================= + +The system under test is assumed to be the NFVI and VIM deployed with a Pharos compliant infrastructure. 
+ +Test Area Structure +==================== + +The test area is structured based on network, port and subnet operations. Each test case +is able to run independently, i.e. irrelevant of the state created by a previous test. + +Test Descriptions +================= + +API Used and Reference +---------------------- + +Networks: https://developer.openstack.org/api-ref/networking/v2/index.html#networks + +- show network details +- update network +- delete network +- list networks +- create netowrk +- bulk create networks + +Subnets: https://developer.openstack.org/api-ref/networking/v2/index.html#subnets + +- list subnets +- create subnet +- bulk create subnet +- show subnet details +- update subnet +- delete subnet + +Routers and interface: https://developer.openstack.org/api-ref/networking/v2/index.html#routers-routers + +- list routers +- create router +- show router details +- update router +- delete router +- add interface to router +- remove interface from router + +Ports: https://developer.openstack.org/api-ref/networking/v2/index.html#ports + +- show port details +- update port +- delete port +- list port +- create port +- bulk create ports + +Security groups: https://developer.openstack.org/api-ref/networking/v2/index.html#security-groups-security-groups + +- list security groups +- create security groups +- show security group +- update security group +- delete security group + +Security groups rules: https://developer.openstack.org/api-ref/networking/v2/index.html#security-group-rules-security-group-rules + +- list security group rules +- create security group rule +- show security group rule +- delete security group rule + +Servers: https://developer.openstack.org/api-ref/compute/ + +- list servers +- create server +- create multiple servers +- list servers detailed +- show server details +- update server +- delete server + +------------------------------------------------------------------ +Test Case 1 - Create and Delete Bulk Network, IPv6 Subnet and Port +------------------------------------------------------------------ + +Short name +---------- + +opnfv.ipv6.bulk_network_subnet_port_create_delete + +Use case specification +---------------------- + +This test case evaluates the SUT API ability of creating and deleting multiple networks, +IPv6 subnets, ports in one request, the reference is, + +tempest.api.network.test_networks.BulkNetworkOpsIpV6Test.test_bulk_create_delete_network +tempest.api.network.test_networks.BulkNetworkOpsIpV6Test.test_bulk_create_delete_subnet +tempest.api.network.test_networks.BulkNetworkOpsIpV6Test.test_bulk_create_delete_port + +Test preconditions +------------------ + +None + +Basic test flow execution description and pass/fail criteria +------------------------------------------------------------ + +Test execution +''''''''''''''' + +* Test action 1: Create 2 networks using bulk create, storing the "id" parameters returned in the response +* Test action 2: List all networks, verifying the two network id's are found in the list +* **Test assertion 1:** The two "id" parameters are found in the network list +* Test action 3: Delete the 2 created networks using the stored network ids +* Test action 4: List all networks, verifying the network ids are no longer present +* **Test assertion 2:** The two "id" parameters are not present in the network list +* Test action 5: Create 2 networks using bulk create, storing the "id" parameters returned in the response +* Test action 6: Create an IPv6 subnets on each of the two networks using bulk create commands, 
+ storing the associated "id" parameters +* Test action 7: List all subnets, verify the IPv6 subnets are found in the list +* **Test assertion 3:** The two IPv6 subnet "id" parameters are found in the network list +* Test action 8: Delete the 2 IPv6 subnets using the stored "id" parameters +* Test action 9: List all subnets, verify the IPv6 subnets are no longer present in the list +* **Test assertion 4:** The two IPv6 subnet "id" parameters, are not present in list +* Test action 10: Delete the 2 networks created in test action 5, using the stored network ids +* Test action 11: List all networks, verifying the network ids are no longer present +* **Test assertion 5:** The two "id" parameters are not present in the network list +* Test action 12: Create 2 networks using bulk create, storing the "id" parameters returned in the response +* Test action 13: Create a port on each of the two networks using bulk create commands, + storing the associated "port_id" parameters +* Test action 14: List all ports, verify the port_ids are found in the list +* **Test assertion 6:** The two "port_id" parameters are found in the ports list +* Test action 15: Delete the 2 ports using the stored "port_id" parameters +* Test action 16: List all ports, verify port_ids are no longer present in the list +* **Test assertion 7:** The two "port_id" parameters, are not present in list +* Test action 17: Delete the 2 networks created in test action 12, using the stored network ids +* Test action 18: List all networks, verifying the network ids are no longer present +* **Test assertion 8:** The two "id" parameters are not present in the network list + +Pass / fail criteria +''''''''''''''''''''' + +This test evaluates the ability to use bulk create commands to create networks, IPv6 subnets and ports on +the SUT API. 
Specifically it verifies that: + +* Bulk network create commands return valid "id" parameters which are reported in the list commands +* Bulk IPv6 subnet commands return valid "id" parameters which are reported in the list commands +* Bulk port commands return valid "port_id" parameters which are reported in the list commands +* All items created using bulk create commands are able to be removed using the returned identifiers + +Post conditions +--------------- + +N/A + +------------------------------------------------------------------- +Test Case 2 - Create, Update and Delete an IPv6 Network and Subnet +------------------------------------------------------------------- + +Short name +----------- + +opnfv.ipv6.network_subnet_create_update_delete + +Use case specification +---------------------- + +This test case evaluates the SUT API ability of creating, updating, deleting +network and IPv6 subnet with the network, the reference is + +tempest.api.network.test_networks.NetworksIpV6Test.test_create_update_delete_network_subnet + +Test preconditions +------------------ + +None + +Basic test flow execution description and pass/fail criteria +------------------------------------------------------------ + +Test execution +''''''''''''''' + +* Test action 1: Create a network, storing the "id" and "status" parameters returned + in the response +* Test action 2: Verify the value of the created network's "status" is ACTIVE +* **Test assertion 1:** The created network's "status" is ACTIVE +* Test action 3: Update this network with a new_name +* Test action 4: Verify the network's name equals the new_name +* **Test assertion 2:** The network's name equals to the new_name after name updating +* Test action 5: Create an IPv6 subnet within the network, storing the "id" parameters + returned in the response +* Test action 6: Update this IPv6 subnet with a new_name +* Test action 7: Verify the IPv6 subnet's name equals the new_name +* **Test assertion 3:** The IPv6 subnet's name equals to the new_name after name updating +* Test action 8: Delete the IPv6 subnet created in test action 5, using the stored subnet id +* Test action 9: List all subnets, verifying the subnet id is no longer present +* **Test assertion 4:** The IPv6 subnet "id" is not present in the subnet list +* Test action 10: Delete the network created in test action 1, using the stored network id +* Test action 11: List all networks, verifying the network id is no longer present +* **Test assertion 5:** The network "id" is not present in the network list + + +Pass / fail criteria +''''''''''''''''''''' + +This test evaluates the ability to create, update, delete network, IPv6 subnet on the +SUT API. 
Specifically it verifies that: + +* Create network commands return ACTIVE "status" parameters which are reported in the list commands +* Update network commands return updated "name" parameters which equals to the "name" used +* Update subnet commands return updated "name" parameters which equals to the "name" used +* All items created using create commands are able to be removed using the returned identifiers + +Post conditions +--------------- + +None + +------------------------------------------------- +Test Case 3 - Check External Network Visibility +------------------------------------------------- + +Short name +----------- + +opnfv.ipv6.external_network_visibility + +Use case specification +---------------------- + +This test case verifies user can see external networks but not subnets, the reference is, + +tempest.api.network.test_networks.NetworksIpV6Test.test_external_network_visibility + +Test preconditions +------------------ + +1. The SUT has at least one external network. +2. In the external network list, there is no network without external router, i.e., +all networks in this list are with external router. +3. There is one external network with configured public network id and there is +no subnet on this network + +Basic test flow execution description and pass/fail criteria +------------------------------------------------------------ + +Test execution +''''''''''''''' + +* Test action 1: List all networks with external router, storing the "id"s parameters returned in the response +* Test action 2: Verify list in test action 1 is not empty +* **Test assertion 1:** The network with external router list is not empty +* Test action 3: List all netowrks without external router in test action 1 list +* Test action 4: Verify list in test action 3 is empty +* **Test assertion 2:** networks without external router in the external network + list is empty +* Test action 5: Verify the configured public network id is found in test action 1 stored "id"s +* **Test assertion 3:** the public network id is found in the external network "id"s +* Test action 6: List the subnets of the external network with the configured + public network id +* Test action 7: Verify list in test action 6 is empty +* **Test assertion 4:** There is no subnet of the external network with the configured + public network id + +Pass / fail criteria +''''''''''''''''''''' + +This test evaluates the ability to use list commands to list external networks, pre-configured +public network. 
Specifically it verifies that: + +* Network list commands to find visible networks with external router +* Network list commands to find visible network with pre-configured public network id +* Subnet list commands to find no subnet on the pre-configured public network + +Post conditions +--------------- + +None + +--------------------------------------------- +Test Case 4 - List IPv6 Networks and Subnets +--------------------------------------------- + +Short name +----------- + +opnfv.ipv6.network_subnet_list + +Use case specification +---------------------- + +This test case evaluates the SUT API ability of listing netowrks, +subnets after creating a network and an IPv6 subnet, the reference is + +tempest.api.network.test_networks.NetworksIpV6Test.test_list_networks +tempest.api.network.test_networks.NetworksIpV6Test.test_list_subnets + +Test preconditions +------------------ + +None + +Basic test flow execution description and pass/fail criteria +------------------------------------------------------------ + +Test execution +''''''''''''''' + +* Test action 1: Create a network, storing the "id" parameter returned in the response +* Test action 2: List all networks, verifying the network id is found in the list +* **Test assertion 1:** The "id" parameter is found in the network list +* Test action 3: Create an IPv6 subnet of the network created in test action 1. + storing the "id" parameter returned in the response +* Test action 4: List all subnets of this network, verifying the IPv6 subnet id + is found in the list +* **Test assertion 2:** The "id" parameter is found in the IPv6 subnet list +* Test action 5: Delete the IPv6 subnet using the stored "id" parameters +* Test action 6: List all subnets, verify subnet_id is no longer present in the list +* **Test assertion 3:** The IPv6 subnet "id" parameter is not present in list +* Test action 7: Delete the network created in test action 1, using the stored network ids +* Test action 8: List all networks, verifying the network id is no longer present +* **Test assertion 4:** The network "id" parameter is not present in the network list + +Pass / fail criteria +'''''''''''''''''''' + +This test evaluates the ability to use create commands to create network, IPv6 subnet, list +commands to list the created networks, IPv6 subnet on the SUT API. 
Specifically it verifies that: + +* Create commands to create network, IPv6 subnet +* List commands to find that netowrk, IPv6 subnet in the all networks, subnets list after creating +* All items created using create commands are able to be removed using the returned identifiers + +Post conditions +--------------- + +None + +------------------------------------------------------------- +Test Case 5 - Show Details of an IPv6 Network and Subnet +------------------------------------------------------------- + +Short name +---------- + +opnfv.ipv6.network_subnet_show + +Use case specification +---------------------- + +This test case evaluates the SUT API ability of showing the network, subnet +details, the reference is, + +tempest.api.network.test_networks.NetworksIpV6Test.test_show_network +tempest.api.network.test_networks.NetworksIpV6Test.test_show_subnet + +Test preconditions +------------------ + +None + +Basic test flow execution description and pass/fail criteria +------------------------------------------------------------ + +Test execution +''''''''''''''' + +* Test action 1: Create a network, storing the "id" and "name" parameter returned in the response +* Test action 2: Show the network id and name, verifying the network id and name equal to the + "id" and "name" stored in test action 1 +* **Test assertion 1:** The id and name equal to the "id" and "name" stored in test action 1 +* Test action 3: Create an IPv6 subnet of the network, storing the "id" and CIDR parameter + returned in the response +* Test action 4: Show the details of the created IPv6 subnet, verifying the + id and CIDR in the details are equal to the stored id and CIDR in test action 3. +* **Test assertion 2:** The "id" and CIDR in show details equal to "id" and CIDR stored in test action 3 +* Test action 5: Delete the IPv6 subnet using the stored "id" parameter +* Test action 6: List all subnets on the network, verify the IPv6 subnet id is no longer present in the list +* **Test assertion 3:** The IPv6 subnet "id" parameter is not present in list +* Test action 7: Delete the network created in test action 1, using the stored network id +* Test action 8: List all networks, verifying the network id is no longer present +* **Test assertion 4:** The "id" parameter is not present in the network list + +Pass / fail criteria +''''''''''''''''''''' + +This test evaluates the ability to use create commands to create network, IPv6 subnet and show +commands to show network, IPv6 subnet details on the SUT API. 
Specifically it verifies that:
+
+* Network show commands return correct "id" and "name" parameter which equal to the returned response in the create commands
+* IPv6 subnet show commands return correct "id" and CIDR parameter which equal to the returned response in the create commands
+* All items created using create commands are able to be removed using the returned identifiers
+
+Post conditions
+---------------
+
+None
+
+-------------------------------------------------------------
+Test Case 6 - Create an IPv6 Port in Allowed Allocation Pools
+-------------------------------------------------------------
+
+Short name
+----------
+
+opnfv.ipv6.port_create_in_allocation_pool
+
+Use case specification
+----------------------
+
+This test case evaluates the SUT API ability of creating
+an IPv6 subnet within allowed IPv6 address allocation pool and creating
+a port whose address is in the range of the pool, the reference is,
+
+tempest.api.network.test_ports.PortsIpV6TestJSON.test_create_port_in_allowed_allocation_pools
+
+Test preconditions
+------------------
+
+There should be an IPv6 CIDR configuration, whose prefixlen is less than 126.
+
+Basic test flow execution description and pass/fail criteria
+------------------------------------------------------------
+
+Test execution
+'''''''''''''''
+
+* Test action 1: Create a network, storing the "id" parameter returned in the response
+* Test action 2: Check the allocation pools configuration, verifying the prefixlen
+  of the IPv6 CIDR configuration is less than 126.
+* **Test assertion 1:** The prefixlen of the IPv6 CIDR configuration is less than 126
+* Test action 3: Get the allocation pool by setting the start_ip and end_ip
+  based on the IPv6 CIDR configuration.
+* Test action 4: Create an IPv6 subnet of the network within the allocation pools,
+  storing the "id" parameter returned in the response
+* Test action 5: Create a port of the network, storing the "id" parameter returned in the response
+* Test action 6: Verify the port's IP address is in the range of the allocation pool obtained in test action 3
+* **Test assertion 2:** The port's IP address is in the range of the allocation pool
+* Test action 7: Delete the port using the stored "id" parameter
+* Test action 8: List all ports, verify the port id is no longer present in the list
+* **Test assertion 3:** The port "id" parameter is not present in list
+* Test action 9: Delete the IPv6 subnet using the stored "id" parameter
+* Test action 10: List all subnets on the network, verify the IPv6 subnet id is no longer present in the list
+* **Test assertion 4:** The IPv6 subnet "id" parameter is not present in list
+* Test action 11: Delete the network created in test action 1, using the stored network id
+* Test action 12: List all networks, verifying the network id is no longer present
+* **Test assertion 5:** The "id" parameter is not present in the network list
+
+Pass / fail criteria
+'''''''''''''''''''''
+
+This test evaluates the ability to use create commands to create an IPv6 subnet within allowed
+IPv6 address allocation pool and create a port whose address is in the range of the pool.
Specifically it verifies that: + +* IPv6 subnet create command to create an IPv6 subnet within allowed IPv6 address allocation pool +* Port create command to create a port whose id is in the range of the allocation pools +* All items created using create commands are able to be removed using the returned identifiers + +Post conditions +--------------- + +None + +------------------------------------------------------------- +Test Case 7 - Create an IPv6 Port with Empty Security Groups +------------------------------------------------------------- + +Short name +----------- + +opnfv.ipv6.port_create_empty_security_group + +Use case specification +---------------------- + +This test case evaluates the SUT API ability of creating port with empty +security group, the reference is, + +tempest.api.network.test_ports.PortsIpV6TestJSON.test_create_port_with_no_securitygroups + +Test preconditions +------------------ + +None + +Basic test flow execution description and pass/fail criteria +------------------------------------------------------------ + +Test execution +''''''''''''''' + +* Test action 1: Create a network, storing the "id" parameter returned in the response +* Test action 2: Create an IPv6 subnet of the network, storing the "id" parameter returned in the response +* Test action 3: Create a port of the network with an empty security group, storing the "id" parameter returned in the response +* Test action 4: Verify the security group of the port is not none but is empty +* **Test assertion 1:** the security group of the port is not none but is empty +* Test action 5: Delete the port using the stored "id" parameter +* Test action 6: List all ports, verify the port id is no longer present in the list +* **Test assertion 2:** The port "id" parameter is not present in list +* Test action 7: Delete the IPv6 subnet using the stored "id" parameter +* Test action 8: List all subnets on the network, verify the IPv6 subnet id is no longer present in the list +* **Test assertion 3:** The IPv6 subnet "id" parameter is not present in list +* Test action 9: Delete the network created in test action 1, using the stored network id +* Test action 10: List all networks, verifying the network id is no longer present +* **Test assertion 4:** The "id" parameter is not present in the network list + +Pass / fail criteria +''''''''''''''''''''' + +This test evaluates the ability to use create commands to create port with +empty security group of the SUT API. 
Specifically it verifies that: + +* Port create commands to create a port with an empty security group +* All items created using create commands are able to be removed using the returned identifiers + +Post conditions +--------------- + +None + +----------------------------------------------------- +Test Case 8 - Create, Update and Delete an IPv6 Port +----------------------------------------------------- + +Short name +---------- + +opnfv.ipv6.port_create_update_delete + +Use case specification +---------------------- + +This test case evaluates the SUT API ability of creating, updating, +deleting IPv6 port, the reference is, + +tempest.api.network.test_ports.PortsIpV6TestJSON.test_create_update_delete_port + +Test preconditions +------------------ + +None + +Basic test flow execution description and pass/fail criteria +------------------------------------------------------------ + +Test execution +''''''''''''''' + +* Test action 1: Create a network, storing the "id" parameter returned in the response +* Test action 2: Create a port of the network, storing the "id" and "admin_state_up" parameters + returned in the response +* Test action 3: Verify the value of port's 'admin_state_up' is True +* **Test assertion 1:** the value of port's 'admin_state_up' is True after creating +* Test action 4: Update the port's name with a new_name and set port's admin_state_up to False, + storing the name and admin_state_up parameters returned in the response +* Test action 5: Verify the stored port's name equals to new_name and the port's admin_state_up is False. +* **Test assertion 2:** the stored port's name equals to new_name and the port's admin_state_up is False +* Test action 6: Delete the port using the stored "id" parameter +* Test action 7: List all ports, verify the port is no longer present in the list +* **Test assertion 3:** The port "id" parameter is not present in list +* Test action 8: Delete the network created in test action 1, using the stored network id +* Test action 9: List all networks, verifying the network id is no longer present +* **Test assertion 4:** The "id" parameter is not present in the network list + +Pass / fail criteria +'''''''''''''''''''' + +This test evaluates the ability to use create/update/delete commands to create/update/delete port +of the SUT API. 
Specifically it verifies that: + +* Port create commands return True of 'admin_state_up' in response +* Port update commands to update 'name' to new_name and 'admin_state_up' to false +* All items created using create commands are able to be removed using the returned identifiers + +Post conditions +--------------- + +None + +------------------------------ +Test Case 9 - List IPv6 Ports +------------------------------ + +Short name +---------- + +opnfv.ipv6.tc009.port_list + +Use case specification +---------------------- + +This test case evaluates the SUT ability of creating a port on a network and +finding the port in the all ports list, the reference is, + +tempest.api.network.test_ports.PortsIpV6TestJSON.test_list_ports + +Test preconditions +------------------ + +None + +Basic test flow execution description and pass/fail criteria +------------------------------------------------------------ + +Test execution +''''''''''''''' + +* Test action 1: Create a network, storing the "id" parameter returned in the response +* Test action 2: Create a port of the network, storing the "id" parameter returned in the response +* Test action 3: List all ports, verify the port id is found in the list +* **Test assertion 1:** The "id" parameter is found in the port list +* Test action 4: Delete the port using the stored "id" parameter +* Test action 5: List all ports, verify the port is no longer present in the list +* **Test assertion 2:** The port "id" parameter is not present in list +* Test action 6: Delete the network created in test action 1, using the stored network id +* Test action 7: List all networks, verifying the network id is no longer present +* **Test assertion 3:** The "id" parameter is not present in the network list + +Pass / fail criteria +''''''''''''''''''''' + +This test evaluates the ability to use list commands to list the networks and ports on +the SUT API. Specifically it verifies that: + +* Port list command to list all ports, the created port is found in the list. 
+* All items created using create commands are able to be removed using the returned identifiers + +Post conditions +--------------- + +None + +------------------------------------------------------- +Test Case 10 - Show Key/Valus Details of an IPv6 Port +------------------------------------------------------- + +Short name +---------- + +opnfv.ipv6.tc010.port_show_details + +Use case specification +---------------------- + +This test case evaluates the SUT ability of showing the port +details, the values in the details should be equal to the values to create the port, +the reference is, + +tempest.api.network.test_ports.PortsIpV6TestJSON.test_show_port + +Test preconditions +------------------ + +None + +Basic test flow execution description and pass/fail criteria +------------------------------------------------------------ + +Test execution +''''''''''''''' + +* Test action 1: Create a network, storing the "id" parameter returned in the response +* Test action 2: Create a port of the network, storing the "id" parameter returned in the response +* Test action 3: Show the details of the port, verify the stored port's id + in test action 2 exists in the details +* **Test assertion 1:** The "id" parameter is found in the port shown details +* Test action 4: Verify the values in the details of the port are the same as the values + to create the port +* **Test assertion 2:** The values in the details of the port are the same as the values + to create the port +* Test action 5: Delete the port using the stored "id" parameter +* Test action 6: List all ports, verify the port is no longer present in the list +* **Test assertion 3:** The port "id" parameter is not present in list +* Test action 7: Delete the network created in test action 1, using the stored network id +* Test action 8: List all networks, verifying the network id is no longer present +* **Test assertion 4:** The "id" parameter is not present in the network list + +Pass / fail criteria +''''''''''''''''''''' + +This test evaluates the ability to use show commands to show port details on the SUT API. 
+Specifically it verifies that:
+
+* Port show commands to show the details of the port, whose id is included in the details
+* Port show commands to show the details of the port, whose values are the same as the values
+  to create the port
+* All items created using create commands are able to be removed using the returned identifiers
+
+Post conditions
+---------------
+
+None
+
+---------------------------------------------------------
+Test Case 11 - Add Multiple Interfaces for an IPv6 Router
+---------------------------------------------------------
+
+Short name
+-----------
+
+opnfv.ipv6.router_add_multiple_interface
+
+Use case specification
+----------------------
+
+This test case evaluates the SUT ability of adding multiple interfaces
+to a router, the reference is,
+
+tempest.api.network.test_routers.RoutersIpV6Test.test_add_multiple_router_interfaces
+
+Test preconditions
+------------------
+
+None
+
+Basic test flow execution description and pass/fail criteria
+------------------------------------------------------------
+
+Test execution
+'''''''''''''''
+
+* Test action 1: Create 2 networks named network01 and network02 sequentially,
+  storing the "id" parameters returned in the response
+* Test action 2: Create an IPv6 subnet01 in network01, an IPv6 subnet02 in network02 sequentially,
+  storing the "id" parameters returned in the response
+* Test action 3: Create a router, storing the "id" parameter returned in the response
+* Test action 4: Create interface01 with subnet01 and the router
+* Test action 5: Verify the router_id stored in test action 3 equals to the interface01's 'device_id'
+  and subnet01_id stored in test action 2 equals to the interface01's 'subnet_id'
+* **Test assertion 1:** the router_id equals to the interface01's 'device_id'
+  and subnet01_id equals to the interface01's 'subnet_id'
+* Test action 6: Create interface02 with subnet02 and the router
+* Test action 7: Verify the router_id stored in test action 3 equals to the interface02's 'device_id'
+  and subnet02_id stored in test action 2 equals to the interface02's 'subnet_id'
+* **Test assertion 2:** the router_id equals to the interface02's 'device_id'
+  and subnet02_id equals to the interface02's 'subnet_id'
+* Test action 8: Delete the interfaces, router, IPv6 subnets and networks, then list
+  all interfaces, ports, IPv6 subnets and networks; the test passes if the deleted ones
+  are not found in the lists.
+* **Test assertion 3:** The interfaces, router, IPv6 subnets and networks ids are not present in the lists
+  after deleting
+
+Pass / fail criteria
+'''''''''''''''''''''
+
+This test evaluates the ability to add multiple interfaces with IPv6 subnets to the same
+router on the SUT API. Specifically it verifies that:
+
+* Interface create commands to create an interface with an IPv6 subnet and a router; the interface's 'device_id' and
+  'subnet_id' should equal to the router id and IPv6 subnet id, respectively.
+* Interface create commands to create multiple interfaces with the same router and multiple IPv6 subnets.
+* All items created using create commands are able to be removed using the returned identifiers + +Post conditions +--------------- + +None + +------------------------------------------------------------------- +Test Case 12 - Add and Remove an IPv6 Router Interface with port_id +------------------------------------------------------------------- + +Short name +---------- + +opnfv.ipv6.router_interface_add_remove_with_port + +Use case specification +---------------------- + +This test case evaluates the SUT abiltiy of adding, removing router interface to +a port, the subnet_id and port_id of the interface will be checked, +the port's device_id will be checked if equals to the router_id or not. The +reference is, + +tempest.api.network.test_routers.RoutersIpV6Test.test_add_remove_router_interface_with_port_id + +Test preconditions +------------------ + +None + +Basic test flow execution description and pass/fail criteria +------------------------------------------------------------ + +Test execution +''''''''''''''' + +* Test action 1: Create a network, storing the "id" parameter returned in the response +* Test action 2: Create an IPv6 subnet of the network, storing the "id" parameter returned in the response +* Test action 3: Create a router, storing the "id" parameter returned in the response +* Test action 4: Create a port of the network, storing the "id" parameter returned in the response +* Test action 5: Add router interface to the port created, storing the "id" parameter returned in the response +* Test action 6: Verify the interface's keys include 'subnet_id' and 'port_id' +* **Test assertion 1:** the interface's keys include 'subnet_id' and 'port_id' +* Test action 7: Show the port details, verify the 'device_id' in port details equals to the router id stored + in test action 3 +* **Test assertion 2:** 'device_id' in port details equals to the router id +* Test action 8: Delete the interface, port, router, subnet and network, then list + all interfaces, ports, routers, subnets and networks, the test passes if the deleted + ones are not found in the list. +* **Test assertion 3:** interfaces, ports, routers, subnets and networks are not found in the lists after deleting + +Pass / fail criteria +''''''''''''''''''''' + +This test evaluates the ability to use add/remove commands to add/remove router interface to the port, +show commands to show port details on the SUT API. 
+
+Post conditions
+---------------
+
+None
+
+-----------------------------------------------------------------------
+Test Case 13 - Add and Remove an IPv6 Router Interface with subnet_id
+-----------------------------------------------------------------------
+
+Short name
+----------
+
+opnfv.ipv6.router_interface_add_remove
+
+Use case specification
+----------------------
+
+This test case evaluates the ability of the SUT API to add and remove a router interface using
+the IPv6 subnet id. The reference is
+
+tempest.api.network.test_routers.RoutersIpV6Test.test_add_remove_router_interface_with_subnet_id
+
+Test preconditions
+------------------
+
+None
+
+Basic test flow execution description and pass/fail criteria
+------------------------------------------------------------
+
+Test execution
+'''''''''''''''
+
+* Test action 1: Create a network, storing the "id" parameter returned in the response
+* Test action 2: Create an IPv6 subnet of the created network, storing the "id" parameter
+  returned in the response
+* Test action 3: Create a router, storing the "id" parameter returned in the response
+* Test action 4: Add a router interface with the stored ids of the router and IPv6 subnet
+* **Test assertion 1:** Key 'subnet_id' is included in the added interface's keys
+* **Test assertion 2:** Key 'port_id' is included in the added interface's keys
+* Test action 5: Show the port info with the stored interface's port id
+* **Test assertion 3:** The stored router id is equal to the device id shown in the port info
+* Test action 6: Delete the router interface created in test action 4, using the stored subnet id
+* Test action 7: List all router interfaces, verifying the router interface is no longer present
+* **Test assertion 4:** The router interface with the stored subnet id is not present
+  in the router interface list
+* Test action 8: Delete the router created in test action 3, using the stored router id
+* Test action 9: List all routers, verifying the router id is no longer present
+* **Test assertion 5:** The router "id" parameter is not present in the router list
+* Test action 10: Delete the subnet created in test action 2, using the stored subnet id
+* Test action 11: List all subnets, verifying the subnet id is no longer present
+* **Test assertion 6:** The subnet "id" parameter is not present in the subnet list
+* Test action 12: Delete the network created in test action 1, using the stored network id
+* Test action 13: List all networks, verifying the network id is no longer present
+* **Test assertion 7:** The network "id" parameter is not present in the network list
+
+Pass / fail criteria
+''''''''''''''''''''
+
+This test evaluates the ability to add and remove a router interface using the subnet id on the
+SUT API. 
Specifically it verifies that: + +* Router interface add command returns valid 'subnet_id' parameter which is reported + in the interface's keys +* Router interface add command returns valid 'port_id' parameter which is reported + in the interface's keys +* All items created using create commands are able to be removed using the returned identifiers + +Post conditions +--------------- + +None + +------------------------------------------------------------------- +Test Case 14 - Create, Show, List, Update and Delete an IPv6 router +------------------------------------------------------------------- + +Short name +---------- + +opnfv.ipv6.router_create_show_list_update_delete + +Use case specification +---------------------- + +This test case evaluates the SUT API ability of creating, showing, listing, updating +and deleting routers, the reference is + +tempest.api.network.test_routers.RoutersIpV6Test.test_create_show_list_update_delete_router + +Test preconditions +------------------ + +There should exist an OpenStack external network. + +Basic test flow execution description and pass/fail criteria +------------------------------------------------------------ + +Test execution +''''''''''''''' + +* Test action 1: Create a router, set the admin_state_up to be False and external_network_id + to be public network id, storing the "id" parameter returned in the response +* **Test assertion 1:** The created router's admin_state_up is False +* **Test assertion 2:** The created router's external network id equals to the public network id +* Test action 2: Show details of the router created in test action 1, using the stored router id +* **Test assertion 3:** The router's name shown is the same as the router created +* **Test assertion 4:** The router's external network id shown is the same as the public network id +* Test action 3: List all routers and verify if created router is in response message +* **Test assertion 5:** The stored router id is in the router list +* Test action 4: Update the name of router and verify if it is updated +* **Test assertion 6:** The name of router equals to the name used to update in test action 4 +* Test action 5: Show the details of router, using the stored router id +* **Test assertion 7:** The router's name shown equals to the name used to update in test action 4 +* Test action 6: Delete the router created in test action 1, using the stored router id +* Test action 7: List all routers, verifying the router id is no longer present +* **Test assertion 8:** The "id" parameter is not present in the router list + +Pass / fail criteria +''''''''''''''''''''' + +This test evaluates the ability to create, show, list, update and delete router on +the SUT API. 
Specifically it verifies that:
+
+* Router create commands return a valid "id" parameter and an "admin_state_up" parameter which
+  equals the value specified at creation
+* Router show commands return a "name" parameter which equals the name of the created router
+* Router show commands return an external network id which equals the public network id
+* Router list commands return an "id" parameter which equals the stored router "id"
+* Router update commands return an updated "name" parameter which equals the "name" used in the update
+* The router created using the create command is able to be removed using the returned identifiers
+
+Post conditions
+---------------
+
+None
+
+-----------------------------------------------------------------------------
+Test Case 15 - Create, List, Update, Show and Delete an IPv6 security group
+-----------------------------------------------------------------------------
+
+Short name
+----------
+
+opnfv.ipv6.security_group_create_list_update_show_delete
+
+Use case specification
+----------------------
+
+This test case evaluates the ability of the SUT API to create, list, update, show
+and delete security groups. The reference is
+
+tempest.api.network.test_security_groups.SecGroupIPv6Test.test_create_list_update_show_delete_security_group
+
+Test preconditions
+------------------
+
+None
+
+Basic test flow execution description and pass/fail criteria
+------------------------------------------------------------
+
+Test execution
+'''''''''''''''
+
+* Test action 1: Create a security group, storing the "id" parameter returned in the response
+* Test action 2: List all security groups and verify that the created security group is present in the response
+* **Test assertion 1:** The created security group's "id" is found in the list
+* Test action 3: Update the name and description of this security group, using the stored id
+* Test action 4: Verify that the security group's name and description are updated
+* **Test assertion 2:** The security group's name equals the name used in test action 3
+* **Test assertion 3:** The security group's description equals the description used in test action 3
+* Test action 5: Show details of the updated security group, using the stored id
+* **Test assertion 4:** The security group's name shown equals the name used in test action 3
+* **Test assertion 5:** The security group's description shown equals the description used in test action 3
+* Test action 6: Delete the security group created in test action 1, using the stored id
+* Test action 7: List all security groups, verifying the security group's id is no longer present
+* **Test assertion 6:** The "id" parameter is not present in the security group list
+
+Pass / fail criteria
+''''''''''''''''''''
+
+This test evaluates the ability to create, list, update, show and delete security groups on
+the SUT API. 
Specifically it verifies that: + +* Security group create commands return valid "id" parameter which is reported in the list commands +* Security group update commands return valid "name" and "description" parameters which are + reported in the show commands +* Security group created using create command is able to be removed using the returned identifiers + +Post conditions +--------------- + +None + +--------------------------------------------------------------- +Test Case 16 - Create, Show and Delete IPv6 security group rule +--------------------------------------------------------------- + +Short name +---------- + +opnfv.ipv6.security_group_rule_create_show_delete + +Use case specification +---------------------- + +This test case evaluates the SUT API ability of creating, showing, listing and deleting +security group rules, the reference is + +tempest.api.network.test_security_groups.SecGroupIPv6Test.test_create_show_delete_security_group_rule + +Test preconditions +------------------ + +None + +Basic test flow execution description and pass/fail criteria +------------------------------------------------------------ + +Test execution +''''''''''''''' + +* Test action 1: Create a security group, storing the "id" parameter returned in the response +* Test action 2: Create a rule of the security group with protocol tcp, udp and icmp, respectively, + using the stored security group's id, storing the "id" parameter returned in the response +* Test action 3: Show details of the created security group rule, using the stored id of the + security group rule +* **Test assertion 1:** All the created security group rule's values equal to the rule values + shown in test action 3 +* Test action 4: List all security group rules +* **Test assertion 2:** The stored security group rule's id is found in the list +* Test action 5: Delete the security group rule, using the stored security group rule's id +* Test action 6: List all security group rules, verifying the security group rule's id is no longer present +* **Test assertion 3:** The security group rule "id" parameter is not present in the list +* Test action 7: Delete the security group, using the stored security group's id +* Test action 8: List all security groups, verifying the security group's id is no longer present +* **Test assertion 4:** The security group "id" parameter is not present in the list + +Pass / fail criteria +''''''''''''''''''''' + +This test evaluates the ability to create, show, list and delete security group rules on +the SUT API. Specifically it verifies that: + +* Security group rule create command returns valid values which are reported in the show command +* Security group rule created using create command is able to be removed using the returned identifiers + +Post conditions +--------------- + +None + +---------------------------------------- +Test Case 17 - List IPv6 Security Groups +---------------------------------------- + +Short name +---------- + +opnfv.ipv6.security_group_list + +Use case specification +---------------------- + +This test case evaluates the SUT API ability of listing security groups, the reference is + +tempest.api.network.test_security_groups.SecGroupIPv6Test.test_list_security_groups + +Test preconditions +------------------ + +There should exist a default security group. 
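+
+As an informal illustration of this precondition and of the security group operations used in test
+cases 15 and 16 above, the equivalent checks can be performed with the OpenStack CLI roughly as
+follows (the group name, rule parameters and description are placeholders):
+
+.. code-block:: bash
+
+   # The project's default security group should already be present
+   openstack security group list
+
+   # Create, update, show and delete a security group (cf. test case 15)
+   openstack security group create sg15
+   openstack security group set --name sg15-new --description "updated description" sg15
+   openstack security group show sg15-new
+
+   # Add an IPv6 rule and list the rules of the group (cf. test case 16)
+   openstack security group rule create --ethertype IPv6 --protocol tcp --dst-port 22 sg15-new
+   openstack security group rule list sg15-new
+
+   # Clean up
+   openstack security group delete sg15-new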
+
+Basic test flow execution description and pass/fail criteria
+------------------------------------------------------------
+
+Test execution
+'''''''''''''''
+
+* Test action 1: List all security groups
+* Test action 2: Verify the default security group exists in the list; the test passes
+  if the default security group exists
+* **Test assertion 1:** The default security group is in the list
+
+Pass / fail criteria
+'''''''''''''''''''''
+
+This test evaluates the ability to list security groups on the SUT API.
+Specifically it verifies that:
+
+* Security group list commands return valid security groups, which include the default security group
+
+Post conditions
+---------------
+
+None
+
+------------------------------------------------------------------------------
+Test Case 18 - IPv6 Address Assignment - Dual Stack, SLAAC, DHCPv6 Stateless
+------------------------------------------------------------------------------
+
+Short name
+----------
+
+opnfv.ipv6.dhcpv6_stateless
+
+Use case specification
+----------------------
+
+This test case evaluates IPv6 address assignment in ipv6_ra_mode 'dhcpv6_stateless'
+and ipv6_address_mode 'dhcpv6_stateless'.
+In this case, the guest instance obtains its IPv6 address from the OpenStack managed radvd
+using SLAAC and optional info from dnsmasq using DHCPv6 stateless. This test case then
+verifies that the ping6 available VM can ping the other VM's v4 and v6 addresses
+as well as the v6 subnet's gateway ip in the same network. The reference is
+
+tempest.scenario.test_network_v6.TestGettingAddress.test_dhcp6_stateless_from_os
+
+Test preconditions
+------------------
+
+There should exist a public router or a public network.
+
+Basic test flow execution description and pass/fail criteria
+------------------------------------------------------------
+
+Test execution
+'''''''''''''''
+
+* Test action 1: Create one network, storing the "id" parameter returned in the response
+* Test action 2: Create one IPv4 subnet of the created network, storing the "id"
+  parameter returned in the response
+* Test action 3: If there exists a public router, use it as the router. Otherwise,
+  use the public network to create a router
+* Test action 4: Connect the IPv4 subnet to the router, using the stored IPv4 subnet id
+* Test action 5: Create one IPv6 subnet of the network created in test action 1 in
+  ipv6_ra_mode 'dhcpv6_stateless' and ipv6_address_mode 'dhcpv6_stateless',
+  storing the "id" parameter returned in the response
+* Test action 6: Connect the IPv6 subnet to the router, using the stored IPv6 subnet id
+* Test action 7: Boot two VMs on this network, storing the "id" parameters returned in the response
+* **Test assertion 1:** The vNIC of each VM gets one v4 address and one v6 address actually assigned
+* **Test assertion 2:** Each VM can ping the other's v4 private address
+* **Test assertion 3:** The ping6 available VM can ping the other's v6 address
+  as well as the v6 subnet's gateway ip
+* Test action 8: Delete the 2 VMs created in test action 7, using the stored ids
+* Test action 9: List all VMs, verifying the ids are no longer present
+* **Test assertion 4:** The two "id" parameters are not present in the VM list
+* Test action 10: Delete the IPv4 subnet created in test action 2, using the stored id
+* Test action 11: Delete the IPv6 subnet created in test action 5, using the stored id
+* Test action 12: List all subnets, verifying the ids are no longer present
+* **Test assertion 5:** The "id" parameters of IPv4 and IPv6 are not present in the list
+* Test action 13: Delete the network created in test action 1, using the stored id
+* Test action 14: List all networks, verifying the id is no longer present
+* **Test assertion 6:** The "id" parameter is not present in the network list
+
+Pass / fail criteria
+'''''''''''''''''''''
+
+This test evaluates the ability to assign IPv6 addresses in ipv6_ra_mode
+'dhcpv6_stateless' and ipv6_address_mode 'dhcpv6_stateless',
+and to verify that the ping6 available VM can ping the other VM's v4 and v6 addresses as well as
+the v6 subnet's gateway ip in the same network. Specifically it verifies that:
+
+* The IPv6 addresses in mode 'dhcpv6_stateless' are assigned successfully
+* The VM can ping the other VM's IPv4 and IPv6 private addresses as well as the v6 subnet's gateway ip
+* All items created using create commands are able to be removed using the returned identifiers
+
+Post conditions
+---------------
+
+None
+
+----------------------------------------------------------------------------------------
+Test Case 19 - IPv6 Address Assignment - Dual Net, Dual Stack, SLAAC, DHCPv6 Stateless
+----------------------------------------------------------------------------------------
+
+Short name
+----------
+
+opnfv.ipv6.dualnet_dhcpv6_stateless
+
+Use case specification
+----------------------
+
+This test case evaluates IPv6 address assignment in ipv6_ra_mode 'dhcpv6_stateless'
+and ipv6_address_mode 'dhcpv6_stateless'.
+In this case, the guest instance obtains its IPv6 address from the OpenStack managed radvd
+using SLAAC and optional info from dnsmasq using DHCPv6 stateless. This test case then
+verifies that the ping6 available VM can ping the other VM's v4 address in one network
+and v6 address in another network, as well as the v6 subnet's gateway ip. The reference is
+
+tempest.scenario.test_network_v6.TestGettingAddress.test_dualnet_dhcp6_stateless_from_os
+
+Test preconditions
+------------------
+
+There should exist a public router or a public network.
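+
+The precondition can be checked, and a dual-stack setup of the kind used in this and the previous
+test case can be prepared manually, with OpenStack CLI commands along the following lines. This is
+only an illustrative sketch: the names, address ranges and the external network name "public" are
+placeholders, and the referenced tempest scenario drives the equivalent APIs itself.
+
+.. code-block:: bash
+
+   # Check that an external (public) network is available
+   openstack network list --external
+
+   # Dual-stack tenant network: one IPv4 subnet plus one DHCPv6-stateless IPv6 subnet
+   openstack network create net19
+   openstack subnet create --network net19 --ip-version 4 \
+     --subnet-range 192.0.2.0/24 subnet19-v4
+   openstack subnet create --network net19 --ip-version 6 \
+     --subnet-range 2001:db8:0:19::/64 \
+     --ipv6-ra-mode dhcpv6-stateless --ipv6-address-mode dhcpv6-stateless subnet19-v6
+
+   # Router with an external gateway, attached to both subnets
+   openstack router create router19
+   openstack router set --external-gateway public router19
+   openstack router add subnet router19 subnet19-v4
+   openstack router add subnet router19 subnet19-v6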
+ +Basic test flow execution description and pass/fail criteria +------------------------------------------------------------ + +Test execution +''''''''''''''' + +* Test action 1: Create one network, storing the "id" parameter returned in the response +* Test action 2: Create one IPv4 subnet of the created network, storing the "id" + parameter returned in the response +* Test action 3: If there exists a public router, use it as the router. Otherwise, + use the public network to create a router +* Test action 4: Connect the IPv4 subnet to the router, using the stored IPv4 subnet id +* Test action 5: Create another network, storing the "id" parameter returned in the response +* Test action 6: Create one IPv6 subnet of network created in test action 5 in + ipv6_ra_mode 'dhcpv6_stateless' and ipv6_address_mode 'dhcpv6_stateless', + storing the "id" parameter returned in the response +* Test action 7: Connect the IPv6 subnet to the router, using the stored IPv6 subnet id +* Test action 8: Boot two VMs on these two networks, storing the "id" parameters returned in the response +* Test action 9: Turn on 2nd NIC of each VM for the network created in test action 5 +* **Test assertion 1:** The 1st vNIC of each VM gets one v4 address assigned and + the 2nd vNIC of each VM gets one v6 address actually assigned +* **Test assertion 2:** Each VM can ping the other's v4 private address +* **Test assertion 3:** The ping6 available VM can ping the other's v6 address + as well as the v6 subnet's gateway ip +* Test action 10: Delete the 2 VMs created in test action 8, using the stored ids +* Test action 11: List all VMs, verifying the ids are no longer present +* **Test assertion 4:** The two "id" parameters are not present in the VM list +* Test action 12: Delete the IPv4 subnet created in test action 2, using the stored id +* Test action 13: Delete the IPv6 subnet created in test action 6, using the stored id +* Test action 14: List all subnets, verifying the ids are no longer present +* **Test assertion 5:** The "id" parameters of IPv4 and IPv6 are not present in the list +* Test action 15: Delete the 2 networks created in test action 1 and 5, using the stored ids +* Test action 16: List all networks, verifying the ids are no longer present +* **Test assertion 6:** The two "id" parameters are not present in the network list + +Pass / fail criteria +'''''''''''''''''''' + +This test evaluates the ability to assign IPv6 addresses in ipv6_ra_mode 'dhcpv6_stateless' +and ipv6_address_mode 'dhcpv6_stateless', and verify the ping6 available VM can ping +the other VM's v4 address in one network and v6 address in another network as well as +the v6 subnet's gateway ip. 
Specifically it verifies that: + +* The IPv6 addresses in mode 'dhcpv6_stateless' assigned successfully +* The VM can ping the other VM's IPv4 address in one network and IPv6 address in another + network as well as the v6 subnet's gateway ip +* All items created using create commands are able to be removed using the returned identifiers + +Post conditions +--------------- + +None + +----------------------------------------------------------------------------------------------- +Test Case 20 - IPv6 Address Assignment - Multiple Prefixes, Dual Stack, SLAAC, DHCPv6 Stateless +----------------------------------------------------------------------------------------------- + +Short name +---------- + +opnfv.ipv6.multiple_prefixes_dhcpv6_stateless + +Use case specification +---------------------- + +This test case evaluates IPv6 address assignment in ipv6_ra_mode 'dhcpv6_stateless' +and ipv6_address_mode 'dhcpv6_stateless'. +In this case, guest instance obtains IPv6 addresses from OpenStack managed radvd +using SLAAC and optional info from dnsmasq using DHCPv6 stateless. This test case then +verifies the ping6 available VM can ping the other VM's one v4 address and two v6 +addresses with different prefixes as well as the v6 subnets' gateway ips in the +same network, the reference is + +tempest.scenario.test_network_v6.TestGettingAddress.test_multi_prefix_dhcpv6_stateless + +Test preconditions +------------------ + +There should exist a public router or a public network. + +Basic test flow execution description and pass/fail criteria +------------------------------------------------------------ + +Test execution +''''''''''''''' + +* Test action 1: Create one network, storing the "id" parameter returned in the response +* Test action 2: Create one IPv4 subnet of the created network, storing the "id" + parameter returned in the response +* Test action 3: If there exists a public router, use it as the router. 
Otherwise, + use the public network to create a router +* Test action 4: Connect the IPv4 subnet to the router, using the stored IPv4 subnet id +* Test action 5: Create two IPv6 subnets of the network created in test action 1 in + ipv6_ra_mode 'dhcpv6_stateless' and ipv6_address_mode 'dhcpv6_stateless', + storing the "id" parameters returned in the response +* Test action 6: Connect the two IPv6 subnets to the router, using the stored IPv6 subnet ids +* Test action 7: Boot two VMs on this network, storing the "id" parameters returned in the response +* **Test assertion 1:** The vNIC of each VM gets one v4 address and two v6 addresses with + different prefixes actually assigned +* **Test assertion 2:** Each VM can ping the other's v4 private address +* **Test assertion 3:** The ping6 available VM can ping the other's v6 addresses + as well as the v6 subnets' gateway ips +* Test action 8: Delete the 2 VMs created in test action 7, using the stored ids +* Test action 9: List all VMs, verifying the ids are no longer present +* **Test assertion 4:** The two "id" parameters are not present in the VM list +* Test action 10: Delete the IPv4 subnet created in test action 2, using the stored id +* Test action 11: Delete two IPv6 subnets created in test action 5, using the stored ids +* Test action 12: List all subnets, verifying the ids are no longer present +* **Test assertion 5:** The "id" parameters of IPv4 and IPv6 are not present in the list +* Test action 13: Delete the network created in test action 1, using the stored id +* Test action 14: List all networks, verifying the id is no longer present +* **Test assertion 6:** The "id" parameter is not present in the network list + +Pass / fail criteria +''''''''''''''''''''' + +This test evaluates the ability to assign IPv6 addresses in ipv6_ra_mode 'dhcpv6_stateless' +and ipv6_address_mode 'dhcpv6_stateless', +and verify the ping6 available VM can ping the other VM's v4 address and two +v6 addresses with different prefixes as well as the v6 subnets' gateway ips in the same network. +Specifically it verifies that: + +* The different prefixes IPv6 addresses in mode 'dhcpv6_stateless' assigned successfully +* The VM can ping the other VM's IPv4 and IPv6 private addresses as well as the v6 subnets' gateway ips +* All items created using create commands are able to be removed using the returned identifiers + +Post conditions +--------------- + +None + +--------------------------------------------------------------------------------------------------------- +Test Case 21 - IPv6 Address Assignment - Dual Net, Multiple Prefixes, Dual Stack, SLAAC, DHCPv6 Stateless +--------------------------------------------------------------------------------------------------------- + +Short name +---------- + +opnfv.ipv6.dualnet_multiple_prefixes_dhcpv6_stateless + +Use case specification +---------------------- + +This test case evaluates IPv6 address assignment in ipv6_ra_mode 'dhcpv6_stateless' +and ipv6_address_mode 'dhcpv6_stateless'. +In this case, guest instance obtains IPv6 addresses from OpenStack managed radvd +using SLAAC and optional info from dnsmasq using DHCPv6 stateless. 
This test case then +verifies the ping6 available VM can ping the other VM's v4 address in one network +and two v6 addresses with different prefixes in another network as well as the +v6 subnets' gateway ips, the reference is + +tempest.scenario.test_network_v6.TestGettingAddress.test_dualnet_multi_prefix_dhcpv6_stateless + +Test preconditions +------------------ + +There should exist a public router or a public network. + +Basic test flow execution description and pass/fail criteria +------------------------------------------------------------ + +Test execution +''''''''''''''' + +* Test action 1: Create one network, storing the "id" parameter returned in the response +* Test action 2: Create one IPv4 subnet of the created network, storing the "id" + parameter returned in the response +* Test action 3: If there exists a public router, use it as the router. Otherwise, + use the public network to create a router +* Test action 4: Connect the IPv4 subnet to the router, using the stored IPv4 subnet id +* Test action 5: Create another network, storing the "id" parameter returned in the response +* Test action 6: Create two IPv6 subnets of network created in test action 5 in + ipv6_ra_mode 'dhcpv6_stateless' and ipv6_address_mode 'dhcpv6_stateless', + storing the "id" parameters returned in the response +* Test action 7: Connect the two IPv6 subnets to the router, using the stored IPv6 subnet ids +* Test action 8: Boot two VMs on these two networks, storing the "id" parameters returned in the response +* Test action 9: Turn on 2nd NIC of each VM for the network created in test action 5 +* **Test assertion 1:** The vNIC of each VM gets one v4 address and two v6 addresses + with different prefixes actually assigned +* **Test assertion 2:** Each VM can ping the other's v4 private address +* **Test assertion 3:** The ping6 available VM can ping the other's v6 addresses + as well as the v6 subnets' gateway ips +* Test action 10: Delete the 2 VMs created in test action 8, using the stored ids +* Test action 11: List all VMs, verifying the ids are no longer present +* **Test assertion 4:** The two "id" parameters are not present in the VM list +* Test action 12: Delete the IPv4 subnet created in test action 2, using the stored id +* Test action 13: Delete two IPv6 subnets created in test action 6, using the stored ids +* Test action 14: List all subnets, verifying the ids are no longer present +* **Test assertion 5:** The "id" parameters of IPv4 and IPv6 are not present in the list +* Test action 15: Delete the 2 networks created in test action 1 and 5, using the stored ids +* Test action 16: List all networks, verifying the ids are no longer present +* **Test assertion 6:** The two "id" parameters are not present in the network list + +Pass / fail criteria +''''''''''''''''''''' + +This test evaluates the ability to assign IPv6 addresses in ipv6_ra_mode 'dhcpv6_stateless' +and ipv6_address_mode 'dhcpv6_stateless', +and verify the ping6 available VM can ping the other VM's v4 address in one network and two +v6 addresses with different prefixes in another network as well as the v6 subnets' +gateway ips. 
Specifically it verifies that: + +* The IPv6 addresses in mode 'dhcpv6_stateless' assigned successfully +* The VM can ping the other VM's IPv4 and IPv6 private addresses as well as the v6 subnets' gateway ips +* All items created using create commands are able to be removed using the returned identifiers + +Post conditions +--------------- + +None + +---------------------------------------------------------- +Test Case 22 - IPv6 Address Assignment - Dual Stack, SLAAC +---------------------------------------------------------- + +Short name +---------- + +opnfv.ipv6.slaac + +Use case specification +---------------------- + +This test case evaluates IPv6 address assignment in ipv6_ra_mode 'slaac' and +ipv6_address_mode 'slaac'. +In this case, guest instance obtains IPv6 address from OpenStack managed radvd +using SLAAC. This test case then verifies the ping6 available VM can ping the other +VM's v4 and v6 addresses as well as the v6 subnet's gateway ip in the +same network, the reference is + +tempest.scenario.test_network_v6.TestGettingAddress.test_slaac_from_os + +Test preconditions +------------------ + +There should exist a public router or a public network. + +Basic test flow execution description and pass/fail criteria +------------------------------------------------------------ + +Test execution +''''''''''''''' + +* Test action 1: Create one network, storing the "id" parameter returned in the response +* Test action 2: Create one IPv4 subnet of the created network, storing the "id" + parameter returned in the response +* Test action 3: If there exists a public router, use it as the router. Otherwise, + use the public network to create a router +* Test action 4: Connect the IPv4 subnet to the router, using the stored IPv4 subnet id +* Test action 5: Create one IPv6 subnet of the network created in test action 1 in + ipv6_ra_mode 'slaac' and ipv6_address_mode 'slaac', storing the "id" parameter returned in the response +* Test action 6: Connect the IPv6 subnet to the router, using the stored IPv6 subnet id +* Test action 7: Boot two VMs on this network, storing the "id" parameters returned in the response +* **Test assertion 1:** The vNIC of each VM gets one v4 address and one v6 address actually assigned +* **Test assertion 2:** Each VM can ping the other's v4 private address +* **Test assertion 3:** The ping6 available VM can ping the other's v6 address + as well as the v6 subnet's gateway ip +* Test action 8: Delete the 2 VMs created in test action 7, using the stored ids +* Test action 9: List all VMs, verifying the ids are no longer present +* **Test assertion 4:** The two "id" parameters are not present in the VM list +* Test action 10: Delete the IPv4 subnet created in test action 2, using the stored id +* Test action 11: Delete the IPv6 subnet created in test action 5, using the stored id +* Test action 12: List all subnets, verifying the ids are no longer present +* **Test assertion 5:** The "id" parameters of IPv4 and IPv6 are not present in the list +* Test action 13: Delete the network created in test action 1, using the stored id +* Test action 14: List all networks, verifying the id is no longer present +* **Test assertion 6:** The "id" parameter is not present in the network list + +Pass / fail criteria +''''''''''''''''''''' + +This test evaluates the ability to assign IPv6 addresses in ipv6_ra_mode 'slaac' +and ipv6_address_mode 'slaac', +and verify the ping6 available VM can ping the other VM's v4 and v6 addresses as well as +the v6 subnet's gateway ip in the same 
network. Specifically it verifies that: + +* The IPv6 addresses in mode 'slaac' assigned successfully +* The VM can ping the other VM's IPv4 and IPv6 private addresses as well as the v6 subnet's gateway ip +* All items created using create commands are able to be removed using the returned identifiers + +Post conditions +--------------- + +None + +-------------------------------------------------------------------- +Test Case 23 - IPv6 Address Assignment - Dual Net, Dual Stack, SLAAC +-------------------------------------------------------------------- + +Short name +---------- + +opnfv.ipv6.dualnet_slaac + +Use case specification +---------------------- + +This test case evaluates IPv6 address assignment in ipv6_ra_mode 'slaac' and +ipv6_address_mode 'slaac'. +In this case, guest instance obtains IPv6 address from OpenStack managed radvd +using SLAAC. This test case then verifies the ping6 available VM can ping the other +VM's v4 address in one network and v6 address in another network as well as the +v6 subnet's gateway ip, the reference is + +tempest.scenario.test_network_v6.TestGettingAddress.test_dualnet_slaac_from_os + +Test preconditions +------------------ + +There should exist a public router or a public network. + +Basic test flow execution description and pass/fail criteria +------------------------------------------------------------ + +Test execution +''''''''''''''' + +* Test action 1: Create one network, storing the "id" parameter returned in the response +* Test action 2: Create one IPv4 subnet of the created network, storing the "id" + parameter returned in the response +* Test action 3: If there exists a public router, use it as the router. Otherwise, + use the public network to create a router +* Test action 4: Connect the IPv4 subnet to the router, using the stored IPv4 subnet id +* Test action 5: Create another network, storing the "id" parameter returned in the response +* Test action 6: Create one IPv6 subnet of network created in test action 5 in + ipv6_ra_mode 'slaac' and ipv6_address_mode 'slaac', storing the "id" parameter returned in the response +* Test action 7: Connect the IPv6 subnet to the router, using the stored IPv6 subnet id +* Test action 8: Boot two VMs on these two networks, storing the "id" parameters returned in the response +* Test action 9: Turn on 2nd NIC of each VM for the network created in test action 5 +* **Test assertion 1:** The 1st vNIC of each VM gets one v4 address assigned and + the 2nd vNIC of each VM gets one v6 address actually assigned +* **Test assertion 2:** Each VM can ping the other's v4 private address +* **Test assertion 3:** The ping6 available VM can ping the other's v6 address + as well as the v6 subnet's gateway ip +* Test action 10: Delete the 2 VMs created in test action 8, using the stored ids +* Test action 11: List all VMs, verifying the ids are no longer present +* **Test assertion 4:** The two "id" parameters are not present in the VM list +* Test action 12: Delete the IPv4 subnet created in test action 2, using the stored id +* Test action 13: Delete the IPv6 subnet created in test action 6, using the stored id +* Test action 14: List all subnets, verifying the ids are no longer present +* **Test assertion 5:** The "id" parameters of IPv4 and IPv6 are not present in the list +* Test action 15: Delete the 2 networks created in test action 1 and 5, using the stored ids +* Test action 16: List all networks, verifying the ids are no longer present +* **Test assertion 6:** The two "id" parameters are not present in the 
network list
+
+Pass / fail criteria
+'''''''''''''''''''''
+
+This test evaluates the ability to assign IPv6 addresses in ipv6_ra_mode 'slaac'
+and ipv6_address_mode 'slaac',
+and to verify that the ping6 available VM can ping the other VM's v4 address in one network and
+v6 address in another network, as well as the v6 subnet's gateway ip. Specifically it verifies that:
+
+* The IPv6 addresses in mode 'slaac' are assigned successfully
+* The VM can ping the other VM's IPv4 address in one network and IPv6 address
+  in another network as well as the v6 subnet's gateway ip
+* All items created using create commands are able to be removed using the returned identifiers
+
+Post conditions
+---------------
+
+None
+
+-------------------------------------------------------------------------------
+Test Case 24 - IPv6 Address Assignment - Multiple Prefixes, Dual Stack, SLAAC
+-------------------------------------------------------------------------------
+
+Short name
+----------
+
+opnfv.ipv6.multiple_prefixes_slaac
+
+Use case specification
+----------------------
+
+This test case evaluates IPv6 address assignment in ipv6_ra_mode 'slaac' and
+ipv6_address_mode 'slaac'.
+In this case, the guest instance obtains its IPv6 addresses from the OpenStack managed radvd
+using SLAAC. This test case then verifies that the ping6 available VM can ping the other
+VM's v4 address and two v6 addresses with different prefixes, as well as the v6
+subnets' gateway ips in the same network. The reference is
+
+tempest.scenario.test_network_v6.TestGettingAddress.test_multi_prefix_slaac
+
+Test preconditions
+------------------
+
+There should exist a public router or a public network.
+
+Basic test flow execution description and pass/fail criteria
+------------------------------------------------------------
+
+Test execution
+'''''''''''''''
+
+* Test action 1: Create one network, storing the "id" parameter returned in the response
+* Test action 2: Create one IPv4 subnet of the created network, storing the "id"
+  parameter returned in the response
+* Test action 3: If there exists a public router, use it as the router. 
Otherwise, + use the public network to create a router +* Test action 4: Connect the IPv4 subnet to the router, using the stored IPv4 subnet id +* Test action 5: Create two IPv6 subnets of the network created in test action 1 in + ipv6_ra_mode 'slaac' and ipv6_address_mode 'slaac', storing the "id" parameters returned in the response +* Test action 6: Connect the two IPv6 subnets to the router, using the stored IPv6 subnet ids +* Test action 7: Boot two VMs on this network, storing the "id" parameters returned in the response +* **Test assertion 1:** The vNIC of each VM gets one v4 address and two v6 addresses with + different prefixes actually assigned +* **Test assertion 2:** Each VM can ping the other's v4 private address +* **Test assertion 3:** The ping6 available VM can ping the other's v6 addresses + as well as the v6 subnets' gateway ips +* Test action 8: Delete the 2 VMs created in test action 7, using the stored ids +* Test action 9: List all VMs, verifying the ids are no longer present +* **Test assertion 4:** The two "id" parameters are not present in the VM list +* Test action 10: Delete the IPv4 subnet created in test action 2, using the stored id +* Test action 11: Delete two IPv6 subnets created in test action 5, using the stored ids +* Test action 12: List all subnets, verifying the ids are no longer present +* **Test assertion 5:** The "id" parameters of IPv4 and IPv6 are not present in the list +* Test action 13: Delete the network created in test action 1, using the stored id +* Test action 14: List all networks, verifying the id is no longer present +* **Test assertion 6:** The "id" parameter is not present in the network list + +Pass / fail criteria +''''''''''''''''''''' + +This test evaluates the ability to assign IPv6 addresses in ipv6_ra_mode 'slaac' +and ipv6_address_mode 'slaac', +and verify the ping6 available VM can ping the other VM's v4 address and two +v6 addresses with different prefixes as well as the v6 subnets' gateway ips in the same network. +Specifically it verifies that: + +* The different prefixes IPv6 addresses in mode 'slaac' assigned successfully +* The VM can ping the other VM's IPv4 and IPv6 private addresses as well as the v6 subnets' gateway ips +* All items created using create commands are able to be removed using the returned identifiers + +Post conditions +--------------- + +None + +--------------------------------------------------------------------------------------- +Test Case 25 - IPv6 Address Assignment - Dual Net, Dual Stack, Multiple Prefixes, SLAAC +--------------------------------------------------------------------------------------- + +Short name +---------- + +opnfv.ipv6.dualnet_multiple_prefixes_slaac + +Use case specification +---------------------- + +This test case evaluates IPv6 address assignment in ipv6_ra_mode 'slaac' and +ipv6_address_mode 'slaac'. +In this case, guest instance obtains IPv6 addresses from OpenStack managed radvd +using SLAAC. This test case then verifies the ping6 available VM can ping the other +VM's v4 address in one network and two v6 addresses with different prefixes in another +network as well as the v6 subnets' gateway ips, the reference is + +tempest.scenario.test_network_v6.TestGettingAddress.test_dualnet_multi_prefix_slaac + +Test preconditions +------------------ + +There should exist a public router or a public network. 
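+
+For illustration only, the second network with two SLAAC IPv6 subnets of different prefixes used in
+this flow could be prepared with CLI commands along these lines (names and prefixes are
+placeholders, and "router25" stands for the router already carrying the IPv4 subnet):
+
+.. code-block:: bash
+
+   # Second network with two SLAAC IPv6 subnets using different prefixes
+   openstack network create net25-v6
+   openstack subnet create --network net25-v6 --ip-version 6 \
+     --subnet-range 2001:db8:0:25::/64 \
+     --ipv6-ra-mode slaac --ipv6-address-mode slaac subnet25-a
+   openstack subnet create --network net25-v6 --ip-version 6 \
+     --subnet-range 2001:db8:0:26::/64 \
+     --ipv6-ra-mode slaac --ipv6-address-mode slaac subnet25-b
+
+   # Attach both subnets to the same router so that both prefixes are advertised
+   openstack router add subnet router25 subnet25-a
+   openstack router add subnet router25 subnet25-b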
+ +Basic test flow execution description and pass/fail criteria +------------------------------------------------------------ + +Test execution +''''''''''''''' + +* Test action 1: Create one network, storing the "id" parameter returned in the response +* Test action 2: Create one IPv4 subnet of the created network, storing the "id" + parameter returned in the response +* Test action 3: If there exists a public router, use it as the router. Otherwise, + use the public network to create a router +* Test action 4: Connect the IPv4 subnet to the router, using the stored IPv4 subnet id +* Test action 5: Create another network, storing the "id" parameter returned in the response +* Test action 6: Create two IPv6 subnets of network created in test action 5 in + ipv6_ra_mode 'slaac' and ipv6_address_mode 'slaac', storing the "id" parameters returned in the response +* Test action 7: Connect the two IPv6 subnets to the router, using the stored IPv6 subnet ids +* Test action 8: Boot two VMs on these two networks, storing the "id" parameters returned in the response +* Test action 9: Turn on 2nd NIC of each VM for the network created in test action 5 +* **Test assertion 1:** The vNIC of each VM gets one v4 address and two v6 addresses + with different prefixes actually assigned +* **Test assertion 2:** Each VM can ping the other's v4 private address +* **Test assertion 3:** The ping6 available VM can ping the other's v6 addresses + as well as the v6 subnets' gateway ips +* Test action 10: Delete the 2 VMs created in test action 8, using the stored ids +* Test action 11: List all VMs, verifying the ids are no longer present +* **Test assertion 4:** The two "id" parameters are not present in the VM list +* Test action 12: Delete the IPv4 subnet created in test action 2, using the stored id +* Test action 13: Delete two IPv6 subnets created in test action 6, using the stored ids +* Test action 14: List all subnets, verifying the ids are no longer present +* **Test assertion 5:** The "id" parameters of IPv4 and IPv6 are not present in the list +* Test action 15: Delete the 2 networks created in test action 1 and 5, using the stored ids +* Test action 16: List all networks, verifying the ids are no longer present +* **Test assertion 6:** The two "id" parameters are not present in the network list + +Pass / fail criteria +''''''''''''''''''''' + +This test evaluates the ability to assign IPv6 addresses in ipv6_ra_mode 'slaac' +and ipv6_address_mode 'slaac', +and verify the ping6 available VM can ping the other VM's v4 address in one network and two +v6 addresses with different prefixes in another network as well as the v6 subnets' gateway ips. +Specifically it verifies that: + +* The IPv6 addresses in mode 'slaac' assigned successfully +* The VM can ping the other VM's IPv4 and IPv6 private addresses as well as the v6 subnets' gateway ips +* All items created using create commands are able to be removed using the returned identifiers + +Post conditions +--------------- + +None + + + diff --git a/docs/testing/user/testspecification/old_files/ipv6/designspecification.rst b/docs/testing/user/testspecification/old_files/ipv6/designspecification.rst deleted file mode 100644 index 9e403472..00000000 --- a/docs/testing/user/testspecification/old_files/ipv6/designspecification.rst +++ /dev/null @@ -1,133 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International License. -.. http://creativecommons.org/licenses/by/4.0 -.. 
(c) Christopher Price (Ericsson AB) and others - -============================== -IPv6 test design specification -============================== - -This document outlines the approach and method for testing IPv6 in the OPNFV compliance test -suite. Providing a brief outline of the features to be tested, the methodology for testing, -schema's and criteria. - -Features to be tested -===================== - -The IPv6 compliance test plan outlines the method for testing IPv6 compliance to the OPNFV -platform behaviours and features of IPv6 enabled VNFi platforms. The specific features to -be tested by the IPv6 compliance test suite is outlined in the following table. - -.. table:: - :class: longtable - -+-----------------------------------------------------------+-------------------+--------------------------------------------------------------------+ -|Features / Requirements |Tests available | Test Cases | -+===========================================================+===================+====================================================================+ -|All topologies work in a multi-tenant environment |No | | -| | | | -| | | | -| | | | -| | | | -| | | | -+-----------------------------------------------------------+-------------------+--------------------------------------------------------------------+ -|IPv6 VM to VM only |No | | -| | | | -| | | | -+-----------------------------------------------------------+-------------------+--------------------------------------------------------------------+ -|IPv6 external L2 VLAN directly attached to a VM |No | | -| | | | -+-----------------------------------------------------------+-------------------+--------------------------------------------------------------------+ -|IPv6 subnet routed via L3 agent to an external IPv6 network|No | | -| | | | -|1. Both VLAN and overlay (e.g. GRE, VXLAN) subnet attached | | | -| to VMs; | | | -|2. Must be able to support multiple L3 agents for a given | | | -| external network to support scaling (neutron scheduler | | | -| to assign vRouters to the L3 agents) | | | -+-----------------------------------------------------------+-------------------+--------------------------------------------------------------------+ -|Ability for a NIC to support both IPv4 and IPv6 (dual |No | | -|stack) address. | | | -| | | | -|1. VM with a single interface associated with a network, | | | -| which is then associated with two subnets. | | | -|2. VM with two different interfaces associated with two | | | -| different networks and two different subnets. | | | -+-----------------------------------------------------------+-------------------+--------------------------------------------------------------------+ -|Support IPv6 Address assignment modes. |No | | -| | | | -|1. SLAAC | | | -|2. DHCPv6 Stateless | | | -|3. DHCPv6 Stateful | | | -+-----------------------------------------------------------+-------------------+--------------------------------------------------------------------+ -|Ability to create a port on an IPv6 DHCPv6 Stateful subnet |No | | -|and assign a specific IPv6 address to the port and have it | | | -|taken out of the DHCP address pool. | | | -+-----------------------------------------------------------+-------------------+--------------------------------------------------------------------+ -|Full support for IPv6 matching (i.e., IPv6, ICMPv6, TCP, |No | | -|UDP) in security groups. 
Ability to control and manage all | | | -|IPv6 security group capabilities via Neutron/Nova API (REST| | | -|and CLI) as well as via Horizon. | | | -+-----------------------------------------------------------+-------------------+--------------------------------------------------------------------+ -|During network/subnet/router create, there should be an |No | | -|option to allow user to specify the type of address | | | -|management they would like. This includes all options | | | -|including those low priority if implemented (e.g., toggle | | | -|on/off router and address prefix advertisements); It must | | | -|be supported via Neutron API (REST and CLI) as well as via | | | -|Horizon | | | -+-----------------------------------------------------------+-------------------+--------------------------------------------------------------------+ -|Security groups anti-spoofing: Prevent VM from using a |No | | -|source IPv6/MAC address which is not assigned to the VM | | | -+-----------------------------------------------------------+-------------------+--------------------------------------------------------------------+ -|Protect tenant and provider network from rogue RAs |No | | -| | | | -| | | | -| | | | -| | | | -| | | | -+-----------------------------------------------------------+-------------------+--------------------------------------------------------------------+ -|Support the ability to assign multiple IPv6 addresses to |No | | -|an interface; both for Neutron router interfaces and VM | | | -|interfaces. | | | -+-----------------------------------------------------------+-------------------+--------------------------------------------------------------------+ -|Ability for a VM to support a mix of multiple IPv4 and IPv6|No | | -|networks, including multiples of the same type. | | | -+-----------------------------------------------------------+-------------------+--------------------------------------------------------------------+ -|Support for IPv6 Prefix Delegation. |No | | -+-----------------------------------------------------------+-------------------+--------------------------------------------------------------------+ -|IPv6 First-Hop Security, IPv6 ND spoofing |No | | -+-----------------------------------------------------------+-------------------+--------------------------------------------------------------------+ -|IPv6 support in Neutron Layer3 High Availability |No | | -|(keepalived+VRRP). | | | -+-----------------------------------------------------------+-------------------+--------------------------------------------------------------------+ - - -Test approach for IPv6 -====================== - -The most common approach for testing IPv6 capabilities in the test suite is through interaction with the SUT control plane. -In this instance the test framework will exercise the NBI provided by the VIM to configure and leverage IPv6 related features -in the platform, instantiate workloads, and invoke behaviours in the platform. The suite may also interact directly with the -data plane to exercise platform capabilities and further invoke helper functions on the platform for the same purpose. - -Test result analysis --------------------- - -All functional tests in the IPv6 test suite will provide a pass/fail result on completion of the test. In addition test logs -and relevant additional information will be provided as part of the test log, available on test suite completion. - -Some tests in the compliance suite measure such metrics as latency and performance. 
At this time these tests are intended to -provide a feature based pass/fail metric not related to system performance. -These tests may however provide detailed results of performance and latency in the 'test report'_ document. - -Test identification -=================== - -TBD: WE need to identify the test naming scheme we will use in DoveTail in order that we can cross reference to the test -projects and maintain our suite effectively. This naming scheme needs to be externally relevant to non-OPNFV consumers and as -such some consideration is required on the selection. - -Pass Fail Criteria -================== - -This section requires some further work with the test teams to identify how and where we generate, store and provide results. diff --git a/docs/testing/user/testspecification/old_files/ipv6/index.rst b/docs/testing/user/testspecification/old_files/ipv6/index.rst deleted file mode 100644 index a806d644..00000000 --- a/docs/testing/user/testspecification/old_files/ipv6/index.rst +++ /dev/null @@ -1,19 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International License. -.. http://creativecommons.org/licenses/by/4.0 -.. (c) OPNFV - -******************************* -OPNFV IPv6 Compliance Test Plan -******************************* - -.. toctree:: - :maxdepth: 2 - - ./testplan.rst - ./testprocedure.rst - ./testspecification.rst - ./designspecification.rst - ./ipv6.tc001.specification.rst - ./ipv6.tc026.specification.rst - ./ipv6_all_testcases.rst - diff --git a/docs/testing/user/testspecification/old_files/ipv6/ipv6.tc001.specification.rst b/docs/testing/user/testspecification/old_files/ipv6/ipv6.tc001.specification.rst deleted file mode 100644 index 5afb2095..00000000 --- a/docs/testing/user/testspecification/old_files/ipv6/ipv6.tc001.specification.rst +++ /dev/null @@ -1,59 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International License. -.. http://creativecommons.org/licenses/by/4.0 -.. 
(c) OPNFV - -================================================================================================== -Dovetail IPv6 tc001 specification - Bulk Creation and Deletion of IPv6 Networks, Ports and Subnets -================================================================================================== - - -+-----------------------+----------------------------------------------------------------------------------------------------+ -|test case name |Bulk creation and deletion of IPv6 networks, ports and subnets | -| | | -+-----------------------+----------------------------------------------------------------------------------------------------+ -|id |dovetail.ipv6.tc001 | -+-----------------------+----------------------------------------------------------------------------------------------------+ -|objective |To verify that platform is able to create/delete networks, ports and subnets in bulk operation | -+-----------------------+----------------------------------------------------------------------------------------------------+ -|test items |tempest.api.network.test_networks.BulkNetworkOpsIpV6Test.test_bulk_create_delete_network | -| |{idempotent_id('d4f9024d-1e28-4fc1-a6b1-25dbc6fa11e2')} | -| |tempest.api.network.test_networks.BulkNetworkOpsIpV6Test.test_bulk_create_delete_port | -| |{idempotent_id('48037ff2-e889-4c3b-b86a-8e3f34d2d060')} | -| |tempest.api.network.test_networks.BulkNetworkOpsIpV6Test.test_bulk_create_delete_subnet | -| |{idempotent_id('8936533b-c0aa-4f29-8e53-6cc873aec489')} | -+-----------------------+----------------------------------------------------------------------------------------------------+ -|environmental | | -|requirements & | environment can be deployed on bare metal of virtualized infrastructure | -|preconditions | deployment can be HA or non-HA | -| | | -+-----------------------+----------------------------------------------------------------------------------------------------+ -|scenario dependencies | NA | -+-----------------------+----------------------------------------------------------------------------------------------------+ -|procedural |Step 1: create/delete network: | -|requirements | create 2 networks in one request | -| | asserting that the networks are found in the list after creation | -| | | -| |Step 2: create/delete subnet: | -| | create 2 subnets in one request | -| | asserting that the subnets are found in the list after creation | -| | | -| |Step 3: create/delete port: | -| | create 2 ports in one request | -| | asserting that the ports are found in the list after creation | -| | | -+-----------------------+----------------------------------------------------------------------------------------------------+ -|input specifications |The parameters needed to execute Neutron network APIs. | -| |Refer to Neutron Networking API v2.0 `[1]`_ `[2]`_ | -+-----------------------+----------------------------------------------------------------------------------------------------+ -|output specifications |The responses after executing Network network APIs. | -| |Refer to Neutron Networking API v2.0 `[1]`_ `[2]`_ | -+-----------------------+----------------------------------------------------------------------------------------------------+ -|pass/fail criteria |If normal response code 200 is returned, the test passes. | -| |Otherwise, the test fails with various error codes. 
| -| |Refer to Neutron Networking API v2.0 `[1]`_ `[2]`_ | -+-----------------------+----------------------------------------------------------------------------------------------------+ -|test report |TBD | -+-----------------------+----------------------------------------------------------------------------------------------------+ - -.. _`[1]`: http://developer.openstack.org/api-ref/networking/v2/ -.. _`[2]`: http://wiki.openstack.org/wiki/Neutron/APIv2-specification diff --git a/docs/testing/user/testspecification/old_files/ipv6/ipv6.tc026.specification.rst b/docs/testing/user/testspecification/old_files/ipv6/ipv6.tc026.specification.rst deleted file mode 100644 index e7fd82e7..00000000 --- a/docs/testing/user/testspecification/old_files/ipv6/ipv6.tc026.specification.rst +++ /dev/null @@ -1,54 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International License. -.. http://creativecommons.org/licenses/by/4.0 -.. (c) OPNFV - -============================================================== -Dovetail IPv6 tc026 specification - Service VM as IPv6 vRouter -============================================================== - - -+-----------------------+--------------------------------------------------------------------------+ -|test case name |Service VM as IPv6 vRouter | -| | | -+-----------------------+--------------------------------------------------------------------------+ -|id |dovetail.ipv6.tc026 | -+-----------------------+--------------------------------------------------------------------------+ -|objective |IPv6 connnectivity, service VM as IPv6 vRouter | -+-----------------------+--------------------------------------------------------------------------+ -|modules under test |neutron, nova, etc | -+-----------------------+--------------------------------------------------------------------------+ -|dependent test project |yardstick | -+-----------------------+--------------------------------------------------------------------------+ -|test items |yardstick_tc027 | -+-----------------------+--------------------------------------------------------------------------+ -|environmental | OpenStack-only environment | -|requirements & | environment can be deplyed on bare metal of virtualized infrastructure | -|preconditions | deployment can be HA or non-HA | -| | test case image needs to be installed into Glance with ping6 included | -+-----------------------+--------------------------------------------------------------------------+ -|scenario dependencies | nosdn | -+-----------------------+--------------------------------------------------------------------------+ -|procedural |step 1: to setup IPv6 testing environment | -|requirements | 1.1 disable security group | -| | 1.2 create (ipv6, ipv4) router, network and subnet | -| | 1.3 create vRouter, VM1, VM2 | -| |step 2: to run ping6 to verify IPv6 connectivity | -| | 2.1 ssh to VM1 | -| | 2.2 ping6 to ipv6 router from VM1 | -| | 2.3 get the result and store the logs | -| |step 3: to teardown IPv6 testing environment | -| | 3.1 delete vRouter, VM1, VM2 | -| | 3.2 delete (ipv6, ipv4) router, network and subnet | -| | 3.3 enable security group | -+-----------------------+--------------------------------------------------------------------------+ -|input specifications |packetsize: 56 | -| |ping_count: 5 | -| | | -+-----------------------+--------------------------------------------------------------------------+ -|output specifications |output includes max_rtt, min_rtt, average_rtt | 
-+-----------------------+--------------------------------------------------------------------------+ -|pass/fail criteria |ping6 connectivity success, no SLA | -+-----------------------+--------------------------------------------------------------------------+ -|test report | dovetail dashboard DB here | -+-----------------------+--------------------------------------------------------------------------+ - diff --git a/docs/testing/user/testspecification/old_files/ipv6/ipv6_all_testcases.rst b/docs/testing/user/testspecification/old_files/ipv6/ipv6_all_testcases.rst deleted file mode 100644 index 02115ec3..00000000 --- a/docs/testing/user/testspecification/old_files/ipv6/ipv6_all_testcases.rst +++ /dev/null @@ -1,243 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International License. -.. http://creativecommons.org/licenses/by/4.0 -.. (c) OPNFV - -================================================== -IPv6 Compliance Testing Methodology and Test Cases -================================================== - -IPv6 Compliance Testing focuses on overlay IPv6 capabilities, i.e. to validate that -IPv6 capability is supported in tenant networks, subnets and routers. Both Tempest API -testing and Tempest Scenario testing are reused as much as we can in IPv6 Compliance -Testing. In addition, Yardstick Test Case 027 is also used to validate a specific use case -of using a Service VM as an IPv6 vRouter. - -IPv6 Compliance Testing test cases are described as follows: - ---------------------------------------------------------------- -Test Case 1: Create and Delete an IPv6 Network, Port and Subnet ---------------------------------------------------------------- - -.. code-block:: bash - - tempest.api.network.test_networks.BulkNetworkOpsIpV6Test.test_bulk_create_delete_network - tempest.api.network.test_networks.BulkNetworkOpsIpV6Test.test_bulk_create_delete_port - tempest.api.network.test_networks.BulkNetworkOpsIpV6Test.test_bulk_create_delete_subnet - ------------------------------------------------------------------ -Test Case 2: Create, Update and Delete an IPv6 Network and Subnet ------------------------------------------------------------------ - -.. code-block:: bash - - tempest.api.network.test_networks.NetworksIpV6Test.test_create_update_delete_network_subnet - ----------------------------------------------- -Test Case 3: Check External Network Visibility ----------------------------------------------- - -.. code-block:: bash - - tempest.api.network.test_networks.NetworksIpV6Test.test_external_network_visibility - -------------------------------------------------------- -Test Case 4: List IPv6 Networks and Subnets of a Tenant -------------------------------------------------------- - -.. code-block:: bash - - tempest.api.network.test_networks.NetworksIpV6Test.test_list_networks - tempest.api.network.test_networks.NetworksIpV6Test.test_list_subnets - ------------------------------------------------------------ -Test Case 5: Show Information of an IPv6 Network and Subnet ------------------------------------------------------------ - -.. code-block:: bash - - tempest.api.network.test_networks.NetworksIpV6Test.test_show_network - tempest.api.network.test_networks.NetworksIpV6Test.test_show_subnet - ------------------------------------------------------------- -Test Case 6: Create an IPv6 Port in Allowed Allocation Pools ------------------------------------------------------------- - -.. 
code-block:: bash - - tempest.api.network.test_ports.PortsIpV6TestJSON.test_create_port_in_allowed_allocation_pools - --------------------------------------------------------- -Test Case 7: Create an IPv6 Port without Security Groups --------------------------------------------------------- - -.. code-block:: bash - - tempest.api.network.test_ports.PortsIpV6TestJSON.test_create_port_with_no_securitygroups - ---------------------------------------------------- -Test Case 8: Create, Update and Delete an IPv6 Port ---------------------------------------------------- - -.. code-block:: bash - - tempest.api.network.test_ports.PortsIpV6TestJSON.test_create_update_delete_port - ----------------------------------------- -Test Case 9: List IPv6 Ports of a Tenant ----------------------------------------- - -.. code-block:: bash - - tempest.api.network.test_ports.PortsIpV6TestJSON.test_list_ports - ----------------------------------------------- -Test Case 10: Show Information of an IPv6 Port ----------------------------------------------- - -.. code-block:: bash - - tempest.api.network.test_ports.PortsIpV6TestJSON.test_show_port - --------------------------------------------------------- -Test Case 11: Add Multiple Interfaces for an IPv6 Router --------------------------------------------------------- - -.. code-block:: bash - - tempest.api.network.test_routers.RoutersIpV6Test.test_add_multiple_router_interfaces - ------------------------------------------------------------------- -Test Case 12: Add and Remove an IPv6 Router Interface with port_id ------------------------------------------------------------------- - -.. code-block:: bash - - tempest.api.network.test_routers.RoutersIpV6Test.test_add_remove_router_interface_with_port_id - --------------------------------------------------------------------- -Test Case 13: Add and Remove an IPv6 Router Interface with subnet_id --------------------------------------------------------------------- - -.. code-block:: bash - - tempest.api.network.test_routers.RoutersIpV6Test.test_add_remove_router_interface_with_subnet_id - ------------------------------------------------------------------- -Test Case 14: Create, Update, Delete, List and Show an IPv6 Router ------------------------------------------------------------------- - -.. code-block:: bash - - tempest.api.network.test_routers.RoutersIpV6Test.test_create_show_list_update_delete_router - --------------------------------------------------------------------------- -Test Case 15: Create, Update, Delete, List and Show an IPv6 Security Group --------------------------------------------------------------------------- - -.. code-block:: bash - - tempest.api.network.test_security_groups.SecGroupIPv6Test.test_create_list_update_show_delete_security_group - ----------------------------------------------------------- -Test Case 16: Create, Delete and Show Security Group Rules ----------------------------------------------------------- - -.. code-block:: bash - - tempest.api.network.test_security_groups.SecGroupIPv6Test.test_create_show_delete_security_group_rule - --------------------------------------- -Test Case 17: List All Security Groups --------------------------------------- - -.. code-block:: bash - - tempest.api.network.test_security_groups.SecGroupIPv6Test.test_list_security_groups - --------------------------------------------------------- -Test Case 18: IPv6 Address Assignment - DHCPv6 Stateless --------------------------------------------------------- - -.. 
code-block:: bash - - tempest.scenario.test_network_v6.TestGettingAddress.test_dhcp6_stateless_from_os - --------------------------------------------------------------------- -Test Case 19: IPv6 Address Assignment - Dual Stack, DHCPv6 Stateless --------------------------------------------------------------------- - -.. code-block:: bash - - tempest.scenario.test_network_v6.TestGettingAddress.test_dualnet_dhcp6_stateless_from_os - ---------------------------------------------------------------------------- -Test Case 20: IPv6 Address Assignment - Multiple Prefixes, DHCPv6 Stateless ---------------------------------------------------------------------------- - -.. code-block:: bash - - tempest.scenario.test_network_v6.TestGettingAddress.test_multi_prefix_dhcpv6_stateless - ---------------------------------------------------------------------------------------- -Test Case 21: IPv6 Address Assignment - Dual Stack, Multiple Prefixes, DHCPv6 Stateless ---------------------------------------------------------------------------------------- - -.. code-block:: bash - - tempest.scenario.test_network_v6.TestGettingAddress.test_dualnet_multi_prefix_dhcpv6_stateless - ---------------------------------------------- -Test Case 22: IPv6 Address Assignment - SLAAC ---------------------------------------------- - -.. code-block:: bash - - tempest.scenario.test_network_v6.TestGettingAddress.test_slaac_from_os - ---------------------------------------------------------- -Test Case 23: IPv6 Address Assignment - Dual Stack, SLAAC ---------------------------------------------------------- - -.. code-block:: bash - - tempest.scenario.test_network_v6.TestGettingAddress.test_dualnet_slaac_from_os - ----------------------------------------------------------------- -Test Case 24: IPv6 Address Assignment - Multiple Prefixes, SLAAC ----------------------------------------------------------------- - -.. code-block:: bash - - tempest.scenario.test_network_v6.TestGettingAddress.test_multi_prefix_slaac - ----------------------------------------------------------------------------- -Test Case 25: IPv6 Address Assignment - Dual Stack, Multiple Prefixes, SLAAC ----------------------------------------------------------------------------- - -.. code-block:: bash - - tempest.scenario.test_network_v6.TestGettingAddress.test_dualnet_multi_prefix_slaac - -------------------------------------------- -Test Case 26: Service VM as an IPv6 vRouter -------------------------------------------- - -.. 
code-block:: bash - - # Refer to Yardstick Test Case 027 - # Instruction: http://artifacts.opnfv.org/ipv6/docs/configurationguide/index.html - # Step 1: Set up Service VM as an IPv6 vRouter - # 1.1: Install OPNFV and Preparation - # 1.2: Disable Security Groups in OpenStack ML2 Setup - # 1.3: Create IPv4 and IPv6 Neutron routers, networks and subnets - # 1.4: Boot vRouter VM, and Guest VM1 and Guest VM2 - # Step 2: Verify IPv6 Connectivity - # 2.1: ssh to Guest VM1 - # 2.2: Ping6 from Guest VM1 to Guest VM2 - # 2.3: Ping6 from Guest VM1 to vRouter VM - # 2.4: Ping6 from Guest VM1 to Neutron IPv6 Router Namespace - # Step 3: Tear down Setup - # 3.1: Delete Guest VM1, Guest VM2 and vRouter VM - # 3.2: Delete IPv4 and IPv6 Neutron routers, networks and subnets - # 3.3: Enable Security Groups - diff --git a/docs/testing/user/testspecification/old_files/ipv6/testplan.rst b/docs/testing/user/testspecification/old_files/ipv6/testplan.rst deleted file mode 100644 index 3470e7a6..00000000 --- a/docs/testing/user/testspecification/old_files/ipv6/testplan.rst +++ /dev/null @@ -1,34 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International License. -.. http://creativecommons.org/licenses/by/4.0 -.. (c) OPNFV - -=============================== -OPNFV IPv6 Compliance Test Plan -=============================== - -Introduction -============ - -The IPv6 compliance test plan outlines the method for testing IPv6 Tenant Network feature -compliance with the OPNFV platform. - -Scope ------ - -This test, and other tests in the test suite, are designed to verify an entire SUT, -and not any individual component of the system. - -Test suite scope and procedures -=============================== - -The IPv6 compliance test suite will evaluate the ability for a SUT to support IPv6 -Tenant Network features and functionality provided by OPNFV platform. - -Please refer to the complete list of the test cases for details. - -Test suite execution -==================== - -Please refer to each test case for specific setup and execution procedure. - -.._[1]: http://www.opnfv.org diff --git a/docs/testing/user/testspecification/old_files/ipv6/testprocedure.rst b/docs/testing/user/testspecification/old_files/ipv6/testprocedure.rst deleted file mode 100644 index 2119ed61..00000000 --- a/docs/testing/user/testspecification/old_files/ipv6/testprocedure.rst +++ /dev/null @@ -1,9 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International License. -.. http://creativecommons.org/licenses/by/4.0 -.. (c) Christopher Price (Ericsson AB) and others - -=================== -IPv6 test procedure -=================== - -Draft to be patched this week, someone feel free to work on this in parallel. diff --git a/docs/testing/user/testspecification/old_files/ipv6/testspecification.rst b/docs/testing/user/testspecification/old_files/ipv6/testspecification.rst deleted file mode 100644 index e51f2a5b..00000000 --- a/docs/testing/user/testspecification/old_files/ipv6/testspecification.rst +++ /dev/null @@ -1,57 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International License. -.. http://creativecommons.org/licenses/by/4.0 -.. 
(c) Christopher Price (Ericsson AB) and others - -=============================================== -Test specification - Service VM as IPv6 vRouter -=============================================== - -Draft to be worked on, this represents the YardStick test but I would suggest we need to break -this into a set of tests which provide more details per action with boundary validation. - -Test Item -========= - -TBD -> IPv6 Ping... - -Identify the items or features to be tested by this test case. The item description and -definition can be referenced from any one of several sources, depending on the level of the -test case specification. It may be a good idea to reference the source documents as well. - -Environmental requirements -========================== - -For ipv6 Test Case 18-25, those test cases are scenario tests, they need to boot virtual -machines and ping6 in addition to test APIs, ping6 to vRouter is not supported by SDN controller -yet, such as Opendaylight (Boron and previous releases), so they are scenario dependent, -i.e., currently ipv6 Test Case 18-25 can only run on scenario os-nosdn-nofeature. - -Preconditions and procedural requirements -========================================= - -TBD - -.. <Start> -.. this section may be iterated over for a set of simillar test cases that would be run as one. - -Input Specifications -==================== - -TBD - -Output Specifications -===================== - -TBD - -.. <End> - -Test Reporting -============== - -The test report for this test case will be generated with links to relevant data sources. -This section can be updated once we have a template for the report in place. - -http://testresults.opnfv.org/grafana/dashboard/db/yardstick-tc027 - - diff --git a/docs/testing/user/testspecification/vping/index.rst b/docs/testing/user/testspecification/vping/index.rst new file mode 100644 index 00000000..d7a207c0 --- /dev/null +++ b/docs/testing/user/testspecification/vping/index.rst @@ -0,0 +1,279 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International License. +.. http://creativecommons.org/licenses/by/4.0 +.. (c) Ericsson AB + +======================== +Vping test specification +======================== + +.. toctree:: + :maxdepth: 2 + +Scope +===== + +The vping test area evaluates basic NFVi capabilities of the system under test. +These capabilities include creating a small number of virtual machines, +establishing basic L3 connectivity between them and verifying connectivity by +means of ICMP packets. + + +References +========== + +- Neutron Client + + - https://docs.openstack.org/developer/python-neutronclient/usage/library.html + +- Nova Client + + - https://docs.openstack.org/developer/python-novaclient/ref/v2/servers.html + +- SSHClient + + - http://docs.paramiko.org/en/2.2/ + +- SCPClient + + - https://pypi.python.org/pypi/scp + + +Definitions and abbreviations +============================= + +The following terms and abbreviations are used in conjunction with this test +area + +- ICMP - Internet Control Message Protocol +- L3 - Layer 3 +- NFVi - Network functions virtualization infrastructure +- SCP - Secure Copy +- SSH - Secure Shell +- VM - Virtual machine + + +System Under Test (SUT) +======================= + +The system under test is assumed to be the NFVi and VIM in operation on a +Pharos compliant infrastructure. + + +Test Area Structure +=================== + +The test area is structured in two separate tests which are executed +sequentially. 
The order of the tests is arbitrary as there are no dependencies
+across the tests.
+
+
+Test Descriptions
+=================
+
+--------------------------------------------------------------------
+Test Case 1 - vPing using userdata provided by nova metadata service
+--------------------------------------------------------------------
+
+Short name
+----------
+
+opnfv.vping.userdata
+
+
+Use case specification
+----------------------
+
+This test evaluates the use case where an NFVi tenant boots up two VMs and
+requires L3 connectivity between those VMs. The target IP is passed to the VM
+that will initiate pings by means of a custom userdata script provided by the
+nova metadata service.
+
+
+Test preconditions
+------------------
+
+At least one compute node is available. No further pre-configuration is needed.
+
+
+Basic test flow execution description and pass/fail criteria
+------------------------------------------------------------
+
+Methodology for verifying connectivity
+''''''''''''''''''''''''''''''''''''''
+
+Connectivity between VMs is tested by sending ICMP ping packets between
+selected VMs. The target IP is passed to the VM sending pings by using a
+custom userdata script by means of the config drive mechanism provided by the
+Nova metadata service. Whether or not a ping was successful is determined by
+checking the console output of the source VMs.
+
+
+Test execution
+''''''''''''''
+
+* Test action 1:
+    * Create a private tenant network by using neutron client
+    * Create one subnet and one router in the network by neutron client
+    * Add one interface between the subnet and router
+    * Add one gateway route to the router by neutron client
+    * Store the network id in the response
+* **Test assertion 1:** The network id, subnet id and router id can be found in the response
+* Test action 2:
+    * Create a security group by using neutron client
+    * Store the security group id parameter in the response
+* **Test assertion 2:** The security group id can be found in the response
+* Test action 3: Boot VM1 by using nova client with configured name, image, flavor, private tenant
+  network created in test action 1, security group created in test action 2
+* **Test assertion 3:** The VM1 object can be found in the response
+* Test action 4: Generate a ping script with the IP of VM1, to be passed as userdata provided by
+  the **nova metadata service**
+* Test action 5: Boot VM2 by using nova client with configured name, image, flavor, private tenant
+  network created in test action 1, security group created in test action 2, userdata created
+  in test action 4
+* **Test assertion 4:** The VM2 object can be found in the response
+* Test action 6: Inside VM2, the ping script is executed automatically at boot time. The script
+  loops, pinging the IP of VM1 until the return code is 0 or a timeout is reached. For each ping,
+  "vPing OK" is printed in the VM2 console-log if the return code is 0, otherwise "vPing KO" is
+  printed. Monitor the console-log of VM2 to see the output generated by the script.
+* **Test assertion 5:** "vPing OK" is detected when monitoring the console-log of VM2
+* Test action 7: Delete VM1 and VM2
+* **Test assertion 6:** VM1 and VM2 are not present in the VM list
+* Test action 8: Delete the security group, gateway, interface, router, subnet and network
+* **Test assertion 7:** The security group, gateway, interface, router, subnet and network are
+  no longer present in the lists after deletion
+
+
+Pass / fail criteria
+''''''''''''''''''''
+
+This test evaluates basic NFVi capabilities of the system under test.
+Specifically, the test verifies that:
+
+* Neutron client network, subnet, router, interface create commands return valid "id" parameters
+  which are shown in the create response message
+* The neutron client command adding an interface between the subnet and the router returns a
+  success code
+* The neutron client command adding a gateway to the router returns a success code
+* Neutron client security group create command returns a valid "id" parameter which is shown in
+  the response message
+* Nova client VM create command returns a valid VM attributes response message
+* The nova metadata service can transfer userdata configuration at nova client VM boot time
+* The ping command from one VM to the other in the same private tenant network returns a valid
+  return code
+* All items created using neutron client or nova client create commands can be removed by using
+  the returned identifiers
+
+In order to pass this test, all test assertions listed in the test execution
+above need to pass.
+
+
+Post conditions
+---------------
+
+None
+
+
+----------------------------------------------
+Test Case 2 - vPing using SSH to a floating IP
+----------------------------------------------
+
+Short name
+----------
+
+opnfv.vping.ssh
+
+
+Use case specification
+----------------------
+
+This test evaluates the use case where an NFVi tenant boots up two VMs and requires
+L3 connectivity between those VMs. An SSH connection is established from the host to
+a floating IP associated with VM2 and ``ping`` is executed on VM2 with the IP of VM1 as target.
+
+
+Test preconditions
+------------------
+
+At least one compute node is available. An OpenStack external network, from which
+floating IPs can be allocated, must exist.
+
+
+Basic test flow execution description and pass/fail criteria
+------------------------------------------------------------
+
+Methodology for verifying connectivity
+''''''''''''''''''''''''''''''''''''''
+
+Connectivity between VMs is tested by sending ICMP ping packets between
+selected VMs. To this end, the test establishes an SSH connection from the host
+running the test suite to a floating IP associated with VM2 and executes ``ping``
+on VM2 with the IP of VM1 as target.
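+The following sketch shows roughly how this verification could be reproduced by
+hand with the OpenStack CLI. It is an illustration only, not the automated test
+implementation; the resource names, image (``cirros``), flavor (``m1.tiny``) and
+the external network name are placeholders that depend on the deployment.
+
+.. code-block:: bash
+
+   # Private tenant network, subnet and router
+   openstack network create vping-net
+   openstack subnet create --network vping-net --subnet-range 192.168.130.0/24 vping-subnet
+   openstack router create vping-router
+   openstack router add subnet vping-router vping-subnet
+   openstack router set --external-gateway <external-net> vping-router
+
+   # Security group allowing ICMP and SSH
+   openstack security group create vping-sg
+   openstack security group rule create --protocol icmp vping-sg
+   openstack security group rule create --protocol tcp --dst-port 22 vping-sg
+
+   # Boot the two VMs on the private network
+   openstack server create --image cirros --flavor m1.tiny \
+       --network vping-net --security-group vping-sg vping-vm1
+   openstack server create --image cirros --flavor m1.tiny \
+       --network vping-net --security-group vping-sg vping-vm2
+
+   # Attach a floating IP to VM2, then ping VM1 from inside VM2 over SSH
+   FIP=$(openstack floating ip create <external-net> -f value -c floating_ip_address)
+   openstack server add floating ip vping-vm2 "$FIP"
+   ssh cirros@"$FIP" "ping -c 5 <VM1-private-IP>"   # exit code 0 indicates connectivity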
+
+
+Test execution
+''''''''''''''
+
+
+* Test action 1:
+    * Create a private tenant network by using neutron client
+    * Create one subnet and one router in the network by using neutron client
+    * Create one interface between the subnet and router
+    * Add one gateway route to the router by neutron client
+    * Store the network id in the response
+* **Test assertion 1:** The network id, subnet id and router id can be found in the response
+* Test action 2:
+    * Create a security group by using neutron client
+    * Store the security group id parameter in the response
+* **Test assertion 2:** The security group id can be found in the response
+* Test action 3: Boot VM1 by using nova client with configured name, image, flavor, private tenant
+  network created in test action 1, security group created in test action 2
+* **Test assertion 3:** The VM1 object can be found in the response
+* Test action 4: Boot VM2 by using nova client with configured name, image, flavor, private tenant
+  network created in test action 1, security group created in test action 2
+* **Test assertion 4:** The VM2 object can be found in the response
+* Test action 5: Create one floating IP by using neutron client, storing the floating IP address
+  returned in the response
+* **Test assertion 5:** The floating IP address can be found in the response
+* Test action 6: Assign the floating IP address created in test action 5 to VM2 by using nova client
+* **Test assertion 6:** The assigned floating IP can be found in the VM2 console log file
+* Test action 7: Establish an SSH connection between the test host and VM2 through the floating IP
+* **Test assertion 7:** The SSH connection between the test host and VM2 is established within
+  300 seconds
+* Test action 8: Copy the ping script from the test host to VM2 by using SCPClient
+* **Test assertion 8:** The ping script can be found inside VM2
+* Test action 9: Inside VM2, execute the ping script to ping VM1. The script loops, pinging until
+  the return code is 0 or a timeout is reached. For each ping, "vPing OK" is printed in the VM2
+  console-log if the return code is 0, otherwise "vPing KO" is printed. Monitor the console-log
+  of VM2 to see the output generated by the script.
+* **Test assertion 9:** "vPing OK" is detected when monitoring the console-log of VM2
+* Test action 10: Delete VM1 and VM2
+* **Test assertion 10:** VM1 and VM2 are not present in the VM list
+* Test action 11: Delete the floating IP, security group, gateway, interface, router, subnet and network
+* **Test assertion 11:** The floating IP, security group, gateway, interface, router, subnet and
+  network are no longer present in the lists after deletion
+
+Pass / fail criteria
+''''''''''''''''''''
+
+This test evaluates basic NFVi capabilities of the system under test.
+Specifically, the test verifies that: + +* Neutron client network, subnet, router, interface create commands return valid "id" parameters + which are shown in the create response message +* Neutron client interface add command to add between subnet and router return success code +* Neutron client gateway add command to add to router return success code +* Neutron client security group create command returns valid "id" parameter which is shown in the + response message +* Nova client VM create command returns valid VM attributes response message +* Neutron client floating IP create command return valid floating IP address +* Nova client add floating IP command returns valid response message +* SSH connection can be established using a floating IP +* Ping command from one VM to another in same private tenant network returns valid code +* All items created using neutron client or nova client create commands are able to be removed by + using the returned identifiers + +In order to pass this test, all test assertions listed in the test execution +above need to pass. + + +Post conditions +--------------- + +None diff --git a/docs/testing/user/testspecification/vpn/index.rst b/docs/testing/user/testspecification/vpn/index.rst index 1b5fe439..0a8a8d17 100644 --- a/docs/testing/user/testspecification/vpn/index.rst +++ b/docs/testing/user/testspecification/vpn/index.rst @@ -12,14 +12,17 @@ VPN test specification Scope ===== -The VPN test area evaluates the ability of the system under test to support VPN networking -for virtual workdloads. The tests in this suite will evaluate establishing VPN networks, -publishing and communication between endpoints using BGP and tear down of the networks. +The VPN test area evaluates the ability of the system under test to support VPN +networking for virtual workloads. The tests in this test area will evaluate +establishing VPN networks, publishing and communication between endpoints using +BGP and tear down of the networks. References -================ +========== -This test suite assumes support for the following specifications: +This test area evaluates the ability of the system to perform selected actions +defined in the following specifications. Details of specific features evaluated +are described in the test descriptions. - RFC 4364 - BGP/MPLS IP Virtual Private Networks @@ -33,10 +36,12 @@ This test suite assumes support for the following specifications: - https://tools.ietf.org/html/rfc2547 + Definitions and abbreviations ============================= -The following terms and abreviations are used in conunction with this test suite +The following terms and abbreviations are used in conjunction with this test +area - BGP - Border gateway protocol - eRT - Export route target @@ -48,15 +53,27 @@ The following terms and abreviations are used in conunction with this test suite - VPN - Virtual private network - VLAN - Virtual local area network + System Under Test (SUT) ======================= -The system under test is assumed to be the NFVi in operation on an Pharos compliant infrastructure. +The system under test is assumed to be the NFVi and VIM in operation on a +Pharos compliant infrastructure. + -Test Suite Structure -==================== +Test Area Structure +=================== + +The test area is structured in four separate tests which are executed +sequentially. The order of the tests is arbitrary as there are no dependencies +across the tests. 
Specifially, every test performs clean-up operations which +return the system to the same state as before the test. + +The test area evaluates the ability of the SUT to establish connectivity +between Virtual Machines using an appropriate route target configuration, +reconfigure the route targets to remove connectivity between the VMs, then +reestablish connectivity by re-association. -The test suite is structured in some way that I am unable to articulate at this time. Test Descriptions ================= @@ -65,43 +82,451 @@ Test Descriptions Test Case 1 - VPN provides connectivity between Neutron subnets ---------------------------------------------------------------- +Short name +---------- + +opnfv.sdnvpn.subnet_connectivity + + Use case specification ---------------------- -This test evaluate the instance where an NFVi tenant wants to use a BGPVPN to provide -connectivity between VMs on different Neutron networks and Subnets that reside on different hosts. +This test evaluates the use case where an NFVi tenant uses a BGPVPN to provide +connectivity between VMs on different Neutron networks and subnets that reside +on different hosts. + Test preconditions ------------------ -2 compute nodes are available, denoted Node1 and Node 2 in the following. +2 compute nodes are available, denoted Node1 and Node2 in the following. + Basic test flow execution description and pass/fail criteria ------------------------------------------------------------ -Set up VM1 and VM2 on Node1 and VM3 on Node2, all having ports in the same Neutron Network N1 -and all having 10.10.10/24 addresses (this subnet is denoted SN1 in the following). +Methodology for verifying connectivity +'''''''''''''''''''''''''''''''''''''' + +Connectivity between VMs is tested by sending ICMP ping packets between +selected VMs. The target IPs are passed to the VMs sending pings by means of a +custom user data script. Whether or not a ping was successful is determined by +checking the console output of the source VMs. + + +Test execution +'''''''''''''' + +* Create Neutron network N1 and subnet SN1 with IP range 10.10.10.0/24 +* Create Neutron network N2 and subnet SN2 with IP range 10.10.11.0/24 + +* Create VM1 on Node1 with a port in network N1 +* Create VM2 on Node1 with a port in network N1 +* Create VM3 on Node2 with a port in network N1 +* Create VM4 on Node1 with a port in network N2 +* Create VM5 on Node2 with a port in network N2 + +* Create VPN1 with eRT<>iRT +* Create network association between network N1 and VPN1 + +* VM1 sends ICMP packets to VM2 using ``ping`` + +* **Test assertion 1:** Ping from VM1 to VM2 succeeds: ``ping`` exits with return code 0 + +* VM1 sends ICMP packets to VM3 using ``ping`` + +* **Test assertion 2:** Ping from VM1 to VM3 succeeds: ``ping`` exits with return code 0 + +* VM1 sends ICMP packets to VM4 using ``ping`` + +* **Test assertion 3:** Ping from VM1 to VM4 fails: ``ping`` exits with a non-zero return code + +* Create network association between network N2 and VPN1 + +* VM4 sends ICMP packets to VM5 using ``ping`` -Set up VM4 on Node1 and VM5 on Node2, both having ports in Neutron Network N2 -and having 10.10.11/24 addresses (this subnet is denoted SN2 in the following). 
+* **Test assertion 4:** Ping from VM4 to VM5 succeeds: ``ping`` exits with return code 0 -* Create VPN1 with eRT<>iRT and associate SN1 to it -* Test action 1: SSH into VM1 and ping VM2, test passes if ping works -* Test action 2: SSH into VM1 and ping VM3, test passes is ping works -* Test action 3: SSH into VM1 and ping VM4, test passes if ping does not work -* Associate SN2 to VPN1 -* Test action 4: Ping from VM4 to VM5 should work -* Test action 5: Ping from VM1 to VM4 should not work -* Test action 6: Ping from VM1 to VM5 should not work * Configure iRT=eRT in VPN1 -* Test action 7: Ping from VM1 to VM4 should work -* Test action 8: Ping from VM1 to VM5 should work -The pass criteria for this test case is that all instructions are able to be carried out -according to the described behaviour without deviation. -A negative result will be generated if the above is not met in completion. +* VM1 sends ICMP packets to VM4 using ``ping`` + +* **Test assertion 5:** Ping from VM1 to VM4 succeeds: ``ping`` exits with return code 0 + +* VM1 sends ICMP packets to VM5 using ``ping`` + +* **Test assertion 6:** Ping from VM1 to VM5 succeeds: ``ping`` exits with return code 0 + +* Delete all instances: VM1, VM2, VM3, VM4 and VM5 + +* Delete all networks and subnets: networks N1 and N2 including subnets SN1 and SN2 + +* Delete all network associations and VPN1 + + +Pass / fail criteria +'''''''''''''''''''' + +This test evaluates the capability of the NFVi and VIM to provide routed IP +connectivity between VMs by means of BGP/MPLS VPNs. Specifically, the test +verifies that: + +* VMs in the same Neutron subnet have IP connectivity regardless of BGP/MPLS + VPNs (test assertion 1, 2, 4) + +* VMs in different Neutron subnets do not have IP connectivity by default - in + this case without associating VPNs with the same import and export route + targets to the Neutron networks (test assertion 3) + +* VMs in different Neutron subnets have routed IP connectivity after + associating both networks with BGP/MPLS VPNs which have been configured with + the same import and export route targets (test assertion 5, 6). Hence, + adjusting the ingress and egress route targets enables as well as prohibits + routing. + +In order to pass this test, all test assertions listed in the test execution +above need to pass. + + +Post conditions +--------------- + +N/A + +------------------------------------------------------------ +Test Case 2 - VPNs ensure traffic separation between tenants +------------------------------------------------------------ + +Short Name +---------- + +opnfv.sdnvpn.tenant_separation + + +Use case specification +---------------------- + +This test evaluates if VPNs provide separation of traffic such that overlapping +IP ranges can be used. + + +Test preconditions +------------------ + +2 compute nodes are available, denoted Node1 and Node2 in the following. + + +Basic test flow execution description and pass/fail criteria +------------------------------------------------------------ + +Methodology for verifying connectivity +'''''''''''''''''''''''''''''''''''''' + +Connectivity between VMs is tested by establishing an SSH connection. Moreover, +the command "hostname" is executed at the remote VM in order to retrieve the +hostname of the remote VM. The retrieved hostname is furthermore compared +against an expected value. This is used to verify tenant traffic separation, +i.e., despite overlapping IPs, a connection is made to the correct VM as +determined by means of the hostname of the target VM. 
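+As an illustration, the hostname check described above amounts to the following
+shell commands run from the source VM; the user name ``cirros``, the target IP
+and the expected hostname are example values only.
+
+.. code-block:: bash
+
+   # Run "hostname" on the remote VM and compare it with the expected value
+   expected=vm2
+   actual=$(ssh -o StrictHostKeyChecking=no cirros@10.10.10.12 hostname)
+
+   if [ "$actual" = "$expected" ]; then
+       echo "Reached the intended VM ($actual)"
+   else
+       echo "Traffic separation violated: reached '$actual' instead of '$expected'"
+   fi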
+
+
+
+Test execution
+''''''''''''''
+
+* Create Neutron network N1
+* Create subnet SN1a of network N1 with IP range 10.10.10.0/24
+* Create subnet SN1b of network N1 with IP range 10.10.11.0/24
+
+* Create Neutron network N2
+* Create subnet SN2a of network N2 with IP range 10.10.10.0/24
+* Create subnet SN2b of network N2 with IP range 10.10.11.0/24
+
+* Create VM1 on Node1 with a port in network N1 and IP 10.10.10.11.
+* Create VM2 on Node1 with a port in network N1 and IP 10.10.10.12.
+* Create VM3 on Node2 with a port in network N1 and IP 10.10.11.13.
+* Create VM4 on Node1 with a port in network N2 and IP 10.10.10.12.
+* Create VM5 on Node2 with a port in network N2 and IP 10.10.11.13.
+
+* Create VPN1 with iRT=eRT=RT1
+* Create network association between network N1 and VPN1
+
+* VM1 attempts to execute the command ``hostname`` on the VM with IP 10.10.10.12 via SSH.
+
+* **Test assertion 1:** VM1 can successfully connect to the VM with IP
+  10.10.10.12 via SSH and execute the remote command ``hostname``. The
+  retrieved hostname equals the hostname of VM2.
+
+* VM1 attempts to execute the command ``hostname`` on the VM with IP 10.10.11.13 via SSH.
+
+* **Test assertion 2:** VM1 can successfully connect to the VM with IP
+  10.10.11.13 via SSH and execute the remote command ``hostname``. The
+  retrieved hostname equals the hostname of VM3.
+
+* Create VPN2 with iRT=eRT=RT2
+* Create network association between network N2 and VPN2
+
+* VM4 attempts to execute the command ``hostname`` on the VM with IP 10.10.11.13 via SSH.
+
+* **Test assertion 3:** VM4 can successfully connect to the VM with IP
+  10.10.11.13 via SSH and execute the remote command ``hostname``. The
+  retrieved hostname equals the hostname of VM5.
+
+* VM4 attempts to execute the command ``hostname`` on the VM with IP 10.10.11.11 via SSH.
+
+* **Test assertion 4:** VM4 cannot connect to the VM with IP 10.10.11.11 via SSH.
+
+* Delete all instances: VM1, VM2, VM3, VM4 and VM5
+
+* Delete all networks and subnets: networks N1 and N2 including subnets SN1a, SN1b, SN2a and SN2b
+
+* Delete all network associations, VPN1 and VPN2
+
+
+Pass / fail criteria
+''''''''''''''''''''
+
+This test evaluates the capability of the NFVi and VIM to provide routed IP
+connectivity between VMs by means of BGP/MPLS VPNs. Specifically, the test
+verifies that:
+
+* VMs in the same Neutron subnet (still) have IP connectivity between each
+  other when a BGP/MPLS VPN is associated with the network (test assertion 1).
+
+* VMs in different Neutron subnets have routed IP connectivity between each
+  other when a BGP/MPLS VPN with the same import and export route targets is
+  associated with the network containing those subnets (assertion 2).
+
+* VMs in different Neutron networks and BGP/MPLS VPNs with different import and
+  export route targets can have overlapping IP ranges. The BGP/MPLS VPNs
+  provide traffic separation (assertions 3 and 4).
+
+In order to pass this test, all test assertions listed in the test execution
+above need to pass.
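+A minimal sketch of the route target configuration this test exercises is shown
+below. It assumes the ``networking-bgpvpn`` OpenStack client plugin is installed;
+the exact command and option names may differ between releases, and the route
+target values (``64512:1``, ``64512:2``) are arbitrary examples.
+
+.. code-block:: bash
+
+   # Two VPNs with distinct route targets (iRT = eRT within each VPN)
+   openstack bgpvpn create --route-target 64512:1 --name VPN1
+   openstack bgpvpn create --route-target 64512:2 --name VPN2
+
+   # Associate each tenant network with its own VPN; because the route targets
+   # differ, the overlapping subnets remain isolated from each other
+   openstack bgpvpn network association create VPN1 N1
+   openstack bgpvpn network association create VPN2 N2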
+ + +Post conditions +--------------- + +N/A + +-------------------------------------------------------------------------------- +Test Case 3 - VPN provides connectivity between subnets using router association +-------------------------------------------------------------------------------- + +Short Name +---------- + +opnfv.sdnvpn.router_association + + +Use case specification +---------------------- + +This test evaluates if a VPN provides connectivity between two subnets by +utilizing two different VPN association mechanisms: a router association and a +network association. + +Specifically, the test network topology comprises two networks N1 and N2 with +corresponding subnets. Additionally, network N1 is connected to a router R1. +This test verifies that a VPN V1 provides connectivity between both networks +when applying a router association to router R1 and a network association to +network N2. + + +Test preconditions +------------------ + +2 compute nodes are available, denoted Node1 and Node2 in the following. + +Basic test flow execution description and pass/fail criteria +------------------------------------------------------------ + +Methodology for verifying connectivity +'''''''''''''''''''''''''''''''''''''' + +Connectivity between VMs is tested by sending ICMP ping packets between +selected VMs. The target IPs are passed to the VMs sending pings by means of a +custom user data script. Whether or not a ping was successful is determined by +checking the console output of the source VMs. + + +Test execution +'''''''''''''' + +* Create a network N1, a subnet SN1 with IP range 10.10.10.0/24 and a connected router R1 +* Create a network N2, a subnet SN2 with IP range 10.10.11.0/24 + +* Create VM1 on Node1 with a port in network N1 +* Create VM2 on Node1 with a port in network N1 +* Create VM3 on Node2 with a port in network N1 +* Create VM4 on Node1 with a port in network N2 +* Create VM5 on Node2 with a port in network N2 + +* Create VPN1 with eRT<>iRT so that connected subnets should not reach each other + +* Create route association between router R1 and VPN1 + +* VM1 sends ICMP packets to VM2 using ``ping`` + +* **Test assertion 1:** Ping from VM1 to VM2 succeeds: ``ping`` exits with return code 0 + +* VM1 sends ICMP packets to VM3 using ``ping`` + +* **Test assertion 2:** Ping from VM1 to VM3 succeeds: ``ping`` exits with return code 0 + +* VM1 sends ICMP packets to VM4 using ``ping`` + +* **Test assertion 3:** Ping from VM1 to VM4 fails: ``ping`` exits with a non-zero return code + +* Create network association between network N2 and VPN1 + +* VM4 sends ICMP packets to VM5 using ``ping`` + +* **Test assertion 4:** Ping from VM4 to VM5 succeeds: ``ping`` exits with return code 0 + +* Change VPN1 so that iRT=eRT + +* VM1 sends ICMP packets to VM4 using ``ping`` + +* **Test assertion 5:** Ping from VM1 to VM4 succeeds: ``ping`` exits with return code 0 + +* VM1 sends ICMP packets to VM5 using ``ping`` + +* **Test assertion 6:** Ping from VM1 to VM5 succeeds: ``ping`` exits with return code 0 + +* Delete all instances: VM1, VM2, VM3, VM4 and VM5 + +* Delete all networks, subnets and routers: networks N1 and N2 including subnets SN1 and SN2, router R1 + +* Delete all network and router associations and VPN1 + + +Pass / fail criteria +'''''''''''''''''''' + +This test evaluates the capability of the NFVi and VIM to provide routed IP +connectivity between VMs by means of BGP/MPLS VPNs. 
Specifically, the test +verifies that: + +* VMs in the same Neutron subnet have IP connectivity regardless of the import + and export route target configuration of BGP/MPLS VPNs (test assertion 1, 2, 4) + +* VMs in different Neutron subnets do not have IP connectivity by default - in + this case without associating VPNs with the same import and export route + targets to the Neutron networks or connected Neutron routers (test assertion 3). + +* VMs in two different Neutron subnets have routed IP connectivity after + associating the first network and a router connected to the second network + with BGP/MPLS VPNs which have been configured with the same import and export + route targets (test assertion 5, 6). Hence, adjusting the ingress and egress + route targets enables as well as prohibits routing. + +* Network and router associations are equivalent methods for binding Neutron networks + to VPN. + +In order to pass this test, all test assertions listed in the test execution +above need to pass. + + +Post conditions +--------------- + +N/A + +--------------------------------------------------------------------------------------------------- +Test Case 4 - Verify interworking of router and network associations with floating IP functionality +--------------------------------------------------------------------------------------------------- + +Short Name +---------- + +opnfv.sdnvpn.router_association_floating_ip + + +Use case specification +---------------------- + +This test evaluates if both the router association and network association +mechanisms interwork with floating IP functionality. + +Specifically, the test network topology comprises two networks N1 and N2 with +corresponding subnets. Additionally, network N1 is connected to a router R1. +This test verifies that i) a VPN V1 provides connectivity between both networks +when applying a router association to router R1 and a network association to +network N2 and ii) a VM in network N1 is reachable externally by means of a +floating IP. + + +Test preconditions +------------------ + +At least one compute node is available. + +Basic test flow execution description and pass/fail criteria +------------------------------------------------------------ + +Methodology for verifying connectivity +'''''''''''''''''''''''''''''''''''''' + +Connectivity between VMs is tested by sending ICMP ping packets between +selected VMs. The target IPs are passed to the VMs sending pings by means of a +custom user data script. Whether or not a ping was successful is determined by +checking the console output of the source VMs. 
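+A rough manual equivalent of this methodology using the OpenStack CLI is
+sketched below; the image, flavor and external network names as well as the
+target IP ``10.10.20.5`` are placeholders, and the user data script assumes a
+cirros-like guest that executes it at boot.
+
+.. code-block:: bash
+
+   # Ping script passed as user data; 10.10.20.5 stands for the IP of VM2
+   cat > ping_vm2.sh <<'EOF'
+   #!/bin/sh
+   while ! ping -c 1 10.10.20.5; do
+       echo 'vPing KO'
+       sleep 1
+   done
+   echo 'vPing OK'
+   EOF
+
+   # Boot the source VM with the script and watch its console output
+   openstack server create --image cirros --flavor m1.tiny \
+       --network N1 --user-data ping_vm2.sh VM1
+   openstack console log show VM1 | grep 'vPing OK'
+
+   # External reachability of VM1 through a floating IP (assertion 2)
+   FIP=$(openstack floating ip create <external-net> -f value -c floating_ip_address)
+   openstack server add floating ip VM1 "$FIP"
+   ping -c 5 "$FIP"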
+ + +Test execution +'''''''''''''' + +* Create a network N1, a subnet SN1 with IP range 10.10.10.0/24 and a connected router R1 +* Create a network N2 with IP range 10.10.20.0/24 + +* Create VM1 with a port in network N1 +* Create VM2 with a port in network N2 + +* Create VPN1 +* Create a router association between router R1 and VPN1 +* Create a network association between network N2 and VPN1 + + +* VM1 sends ICMP packets to VM2 using ``ping`` + +* **Test assertion 1:** Ping from VM1 to VM2 succeeds: ``ping`` exits with return code 0 + +* Assign a floating IP to VM1 + +* The host running the test framework sends ICMP packets to VM1 using ``ping`` + +* **Test assertion 2:** Ping from the host running the test framework to the + floating IP of VM1 succeeds: ``ping`` exits with return code 0 + +* Delete floating IP assigned to VM1 + +* Delete all instances: VM1, VM2 + +* Delete all networks, subnets and routers: networks N1 and N2 including subnets SN1 and SN2, router R1 + +* Delete all network and router associations as well as VPN1 + + +Pass / fail criteria +'''''''''''''''''''' + +This test evaluates the capability of the NFVi and VIM to provide routed IP +connectivity between VMs by means of BGP/MPLS VPNs. Specifically, the test +verifies that: + +* VMs in the same Neutron subnet have IP connectivity regardless of the import + and export route target configuration of BGP/MPLS VPNs (test assertion 1) + +* VMs connected to a network which has been associated with a BGP/MPLS VPN are + reachable through floating IPs. + +In order to pass this test, all test assertions listed in the test execution +above need to pass. + Post conditions --------------- -TBD - should there be any other than the system is in the same state it started out as? +N/A diff --git a/docs/testing/user/userguide/cli_reference.rst b/docs/testing/user/userguide/cli_reference.rst new file mode 100644 index 00000000..719a991f --- /dev/null +++ b/docs/testing/user/userguide/cli_reference.rst @@ -0,0 +1,9 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International License. +.. http://creativecommons.org/licenses/by/4.0 +.. (c) OPNFV + +========================================= +Dovetail Command Line Interface Reference +========================================= + + diff --git a/docs/testing/user/userguide/index.rst b/docs/testing/user/userguide/index.rst index d8eb124b..aec3e861 100644 --- a/docs/testing/user/userguide/index.rst +++ b/docs/testing/user/userguide/index.rst @@ -1,479 +1,12 @@ .. This work is licensed under a Creative Commons Attribution 4.0 International License. .. http://creativecommons.org/licenses/by/4.0 -.. (c) Ericsson AB +.. (c) OPNFV -============================================== -Compliance and Verification program user guide -============================================== +******************************************************** +Compliance Verification Program Testing User Guide +******************************************************** .. 
toctree:: :maxdepth: 2 -Version history -=============== - -+------------+----------+------------------+----------------------------------+ -| **Date** | **Ver.** | **Author** | **Comment** | -| | | | | -+------------+----------+------------------+----------------------------------+ -| 2017-03-15 | 0.0.1 | Chris Price | Draft version | -| | | | | -+------------+----------+------------------+----------------------------------+ - - -Dovetail CVP Testing Overview -============================= - -The Dovetail testing framework consists of two major parts: the testing client that executes -all test cases in a vendor lab (self-testing) or a third party lab, and the server system that -is under the OPNFV's administration to store and view test results based on OPNFV Test API. The -following diagram illustrates this overall framework. - -/* here is a draft diagram that needs to be revised when exact information is known and fixed */ - -This section mainly focuses on helping the testers in the vendor's domain attempting to run the -CVP tests. - -Dovetail client tool (or just Dovetail tool or Dovetail for short) can be installed in the -jumphost either directly as Python software, or as a Docker(r) container. Comments of pros -and cons of the two options TBD. - -The section 'Installing the test tool'_ describes the steps the tester needs to take to install -Dovetail directly from the source. In 2.3, we describe steps needed for installing Dovetail -Docker(r) container. Once installed, and properly configured, the remaining test process is mostly -identical for the two options. In 2.4, we go over the steps of actually running the test suite. -In 2.5, we discuss how to view test results and make sense of them, for example, what the tester -may do in case of unexpected test failures. Section 2.6 describes additional Dovetail features -that are not absolutely necessary in CVP testing but users may find useful for other purposes. -One example is to run Dovetail for in-house testing as preparation before official CVP testing; -another example is to run Dovetail experimental test suites other than the CVP test suite. -Experimental tests may be made available by the community for experimenting less mature test -cases or functionalities for the purpose of getting feedbacks for improvement. - -Installing the test tool -======================== - -Before taking this step, testers should check the hardware and networking requirements of -the POD, and the jumphost in particular, to make sure they are compliant. - -In this section, we describe the procedure to install Dovetail client tool that runs the CVP -test suite from the jumphost. The jumphost must have network access to both the public Internet -and to the O&M (Operation and Management) network with access rights to all VIM APIs being tested. - -------------------------------- -Checking the Jumphost Readiness -------------------------------- - -While Dovetail does not have hard requirement on a specific operating system type or version, -these have been validated by the community through some level of exercise in OPNFV labs or PlugFests. 
- -Ubuntu 16.04.2 LTS (Xenial) for x86_64 -Ubuntu 14.04 LTS (Trusty) for x86_64 -CentOS-7-1611 for x86_64 -Red Hat Enterprise Linux 7.3 for x86_64 -Fedora 24 Server for x86_64 -Fedora 25 Server for x86_64 - ------------------------------------- -Configuring the Jumphost Environment ------------------------------------- - -/* First, openstack env variables to be passed to Functest */ - -The jumphost needs to have the right environmental variable setting to enable access to the -Openstack API. This is usually done through the Openstack credential file. - -Sample Openstack credential file environment_config.sh: - -/*Project-level authentication scope (name or ID), recommend admin project.*/ - -export OS_PROJECT_NAME=admin - -/* Authentication username, belongs to the project above, recommend admin user.*/ - -export OS_USERNAME=admin - - -/* Authentication password.*/ - -export OS_PASSWORD=secret - - -/* Authentication URL, one of the endpoints of keystone service. If this is v3 version, there need some extra variables as follows.*/ - -export OS_AUTH_URL='http://xxx.xxx.xxx.xxx:5000/v3' - - -/* Default is 2.0. If use keystone v3 API, this should be set as 3.*/ - -export OS_IDENTITY_API_VERSION=3 - - -/* Domain name or ID containing the user above. Command to check the domain: openstack -user show <OS_USERNAME>*/ - -export OS_USER_DOMAIN_NAME=default - - -/* Domain name or ID containing the project above. Command to check the domain: openstack -project show <OS_PROJECT_NAME>*/ - -export OS_PROJECT_DOMAIN_NAME=default - - -/* home directory for dovetail, if install Dovetail Docker container, DOVETAIL_HOME can -just be /home/opnfv*/ - -export DOVETAIL_HOME=$HOME/cvp - -Export all these variables into environment by, - -% source <OpenStack-credential-file-path> - - -The tester should validate that the Openstack environmental settings are correct by, -% openstack service list - ------------------------------------ -Installing Prerequisite on Jumphost ------------------------------------ - -1. Dovetail requires Python 2.7 and later - -Use the following steps to check if the right version of python is already installed, -and if not, install it. - -% python --version - -2. Dovetail requires Docker 1.8.0 and later - -Use the following steps to check if the right version of Docker is already installed, -and if not, install it. - -% docker --version - -As the docker installation process is much complex, you can refer to the official -document: https://docs.docker.com/engine/installation/linux/ - -------------------------------------- -2.2.4 Installing Dovetail on Jumphost -------------------------------------- - -A tester can choose one of the following two methods for installing and running Dovetail. -In part1, we explain the steps to install Dovetail from the source. In part2, an alternative -using a Docker image with preinstalled Dovetail is introduced. part1. Installing Dovetail directly - -Update and install packages - -a) Ubuntu - -sudo apt-get update - -sudo apt-get -y install gcc git vim python-dev python-pip --no-install-recommends - -b) centos and redhat - -sudo yum -y update - -sudo yum -y install epel-release - -sudo yum -y install gcc git vim-enhanced python-devel python-pip - -c) fedora - -sudo dnf -y update - -sudo dnf -y install gcc git vim-enhanced python-devel python-pip redhat-rpm-config - -p.s When testing SUT's https service, there need some extra packages, such as -apt-transport-https. This still remains to be verified. 
- - -Installing Dovetail - -Now we are ready to install Dovetail. - -/* Version of dovetail is not specified yet? we are still using the latest in the master -- this needs to be fixed before launch. */ - -First change directory to $DOVETAIL_HOME, - -% cd $DOVETAIL_HOME - -% sudo git clone https://git.opnfv.org/dovetail - -% cd $DOVETAIL_HOME/dovetail - -% sudo pip install -e ./ - -/* test dovetail install is successful */ - -% dovetail -h -part2. Installing Dovetail Docker Container - -The Dovetail project also maintains a Docker image that has Dovetail test tools preinstalled. - -Running CVP Test Suite -====================== - ------------------- -Running Test Suite ------------------- - -The Dovetail client CLI allows the tester to specify which test suite to run. -By default the results are stored in a local file $DOVETAIL_HOME/dovetail/results. - -% dovetail run --testsuite <test suite name> --openrc <path-to-openrc-file> /*?? */ - -Multiple test suites may be available, testsuites named "debug" and "proposed_tests" are just provided for testing. But for the purpose of running CVP test suite, the test suite name follows the following format, - -CVP.<major>.<minor>.<patch> /* test if this format works */ - -For example, CVP_1_0_0 - -% dovetail run --testsuite CVP_1_0_0 - -When the SUT's VIM (Virtual Infrastructure Manager) is Openstack, its configuration is commonly defined in the openrc file. In that case, you can specify the openrc file in the command line, - -% dovetail run --testsuite CVP_1_0_0 --openrc <path-to-openrc-file> - -In order to report official results to OPNFV, run the CVP test suite and report to OPNFV official URL, - -% dovetail run --testsuite <test suite name> --openrc <path-to-openrc-file> --report https://www.opnfv.org/cvp - -The official server https://www.opnfv.org/cvp is still under development, there is a temporal server to use http://205.177.226.237:9997/api/v1/results - --------------------------------- -Making Sense of CVP Test Results --------------------------------- - -When a tester is performing trial runs, Dovetail stores results in a local file by default. - -% cd $DOVETAIL_HOME/dovetail/results - - - -1. local file - -a) Log file: dovetail.log - -/* review the dovetail.log to see if all important information has been captured - in default mode without DEBUG */ - -/* the end of the log file has a summary of all test case test results */ - -Additional log files may be of interests: refstack.log, opnfv_yardstick_tcXXX.out ... - -b) Example: Openstack refstack test case example - -can see the log details in refstack.log, which has the passed/skipped/failed test cases result, the failed test cases have rich debug information - -for the users to see why this test case fails. - -c) Example: OPNFV Yardstick test case example - -for yardstick tool, its log is stored in yardstick.log - -for each test case result in Yardstick, the logs are stored in opnfv_yardstick_tcXXX.out, respectively. - - - -2. OPNFV web interface - -wait for the complement of LF, test community, etc. -2.3.3 Updating Dovetail or Test Suite - -% cd $DOVETAIL_HOME/dovetail - -% sudo git pull - -% sudo pip install -e ./ - -This step is necessary if dovetail software or the CVP test suite have updates. 
- - -Other Dovetail Usage -==================== - ------------------------- -Running Dovetail Locally ------------------------- - -/*DB*/ - ---------------------------------------------- -Running Dovetail with Experimental Test Cases ---------------------------------------------- - - --------------------------------------------------- -Running Individual Test Cases or for Special Cases --------------------------------------------------- - -1. Refstack client to run Defcore testcases - -a) By default, for Defcore test cases run by Refstack-client, which are consumed by -DoveTail, are run followed with automatically generated configuration file, i.e., -refstack_tempest.conf. - -In some circumstances, the automatic configuration file may not quite satisfied with -the SUT, DoveTail provide a way for users to set its configuration file according -to its own SUT manually, - -besides, the users should define Defcore testcase file, i.e., defcore.txt, at the -same time. The steps are shown as, - -when "Installing Dovetail Docker Container" method is used, - - -% sudo mkdir /home/opnfv/dovetail/userconfig - -% cd /home/opnfv/dovetail/userconfig - -% touch refstack_tempest.conf defcore.txt - -% vim refstack_tempest.conf - -% vim defcore.txt - - -the recommend way to set refstack_tempest.conf is shown in -https://aptira.com/testing-openstack-tempest-part-1/ - -the recommended way to edit defcore.txt is to open -https://refstack.openstack.org/api/v1/guidelines/2016.08/tests?target=compute&type=required&alias=true&flag=false -and copy all the test cases into defcore.txt. - -Then use “docker run” to create a container, - - -% sudo docker run --privileged=true -it -v <openrc_path>:<openrc_path> \ - --v /home/opnfv/dovetail/results:/home/opnfv/dovetail/results \ - --v /home/opnfv/dovetail/userconfig:/home/opnfv/dovetail/userconfig \ - --v /var/run/docker.sock:/var/run/docker.sock \ - ---name <DoveTail_Container_Name> (optional) \ - -opnfv/dovetail:<Tag> /bin/bash - - - -there is a need to adjust the CVP_1_0_0 testsuite, for dovetail, -defcore.tc001.yml and defcore.tc002.yml are used for automatic and -manual running method, respectively. - -Inside the dovetail container, - - -% cd /home/opnfv/dovetail/compliance - -% vim CVP_1_0_0.yml - - -to add defcore.tc002 and annotate defcore.tc001. - - -b) when "Installing Dovetail Directly" method is used, before to run -the dovetail commands, there is a need to set configuration file and -defcore test cases file - - -% cd $DOVETAIL_HOME/dovetail - -% mkdir userconfig - -% cd userconfig - -% touch refstack_tempest.conf defcore.txt - -% vim refstack_tempest.conf - -% vim defcore.txt - -recommended way to set refstack_tempest.conf and defcore.txt is -same as above in "Installing Dovetail Docker Container" method section. - - - -For Defcore test cases manually running method, there is a need to adjust -the compliance_set test suite, - -for dovetail, defcore.tc001.yml and defcore.tc002.yml are used for automatic -and manual running method, respectively. - - - -% cd $DOVETAIL_HOME/dovetail/compliance - -% vim CVP_1_0_0.yml - - -to add defcore.tc002 and annotate defcore.tc001 - -3 Dovetail Client CLI Manual - -This section contains a brief manual for all the features available through the Dovetail client command line interface (CLI). -3.1 Check dovetail commands - -% dovetail -h - -dovetail.PNG - -Dovetail has three commands: list, run and show. 
-6.2 List -6.2.1 List help - -% dovetail list -h - -list-help.PNG -6.2.2 List a test suite - -List command will list all test cases belong to the given test suite. - -% dovetail list compliance_set - -list-compliance.PNG - -% dovetail list debug - -list-debug.PNG - -The ipv6, example and nfvi are test areas. If no <TESTSUITE> is given, it will list all testsuites. -6.3 Show - -Show command will give the detailed info of one certain test case. -6.3.1 Show help - -% dovetail show -h - -show-help.PNG -6.3.2 Show test case - -show-ipv6.PNG -6.4 Run - -Dovetail supports running a named test suite, or one named test area of a test suite. -6.4.1 Run help - -% dovetail run -h - -run-help.PNGThere are some options: - -func_tag: set FuncTest’s Docker tag, for example stable,latest and danube.1.0 - -openrc: give the path of OpenStack credential file - -yard_tag: set Yardstick’s Docker tag - -testarea: set a certain testarea within a certain testsuite - -offline: run without pull the docker images, and it requires the jumphost to have these images locally. This will ensure DoveTail run in an offline environment. - -report: push results to DB or store with files - -testsuite: set the testsuite to be tested - -debug: flag to show the debug log messages - + testing_guide.rst diff --git a/docs/testing/user/userguide/testing_guide.rst b/docs/testing/user/userguide/testing_guide.rst new file mode 100644 index 00000000..08fd8acf --- /dev/null +++ b/docs/testing/user/userguide/testing_guide.rst @@ -0,0 +1,517 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International License. +.. http://creativecommons.org/licenses/by/4.0 +.. (c) OPNFV, Huawei Technologies Co.,Ltd and others. + +========================================== +Conducting CVP Testing with Dovetail +========================================== + +Overview +------------------------------ + +The Dovetail testing framework for CVP consists of two major parts: the testing client that +executes all test cases in a lab (vendor self-testing or a third party lab), +and the server system that is hosted by the CVP administrator to store and +view test results based on a web API. The following diagram illustrates +this overall framework. + +.. image:: ../../../images/dovetail_online_mode.png + :align: center + :scale: 50% + +Within the tester's lab, the Test Host is the machine where Dovetail executes all +automated test cases. As it hosts the test harness, the Test Host must not be part of +the System Under Test (SUT) itself. +The above diagram assumes that the tester's Test Host is situated in a DMZ which +has internal network access to the SUT and external access to the OPNFV server +via the public Internet. +This arrangement may not be supported in some labs. +Dovetail also supports an offline mode of testing that is +illustrated in the next diagram. + +.. image:: ../../../images/dovetail_offline_mode.png + :align: center + :scale: 50% + +In the offline mode, the Test Host only needs to have access to the SUT +via the internal network, but does not need to connect to the public Internet. This +user guide will highlight differences between the online and offline modes of +the Test Host. While it is possible to run the Test Host as a virtual machine, +this user guide assumes it is a physical machine for simplicity. + +The rest of this guide will describe how to install the Dovetail tool as a +Docker container image, go over the steps of running the CVP test suite, and +then discuss how to view test results and make sense of them. 
+
+Readers interested in using Dovetail for its functionalities beyond CVP testing,
+e.g. for in-house or extended testing, should consult the Dovetail developer's
+guide for additional information.
+
+Installing Dovetail
+--------------------
+
+In this section, we describe the procedure to install the Dovetail client tool on the
+Test Host. The Test Host must have network access to the management network with access
+rights to the Virtual Infrastructure Manager's API.
+
+Checking the Test Host Readiness
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The Test Host must have network access to the Virtual Infrastructure Manager's API
+hosted in the SUT so that the Dovetail tool can exercise the API from the Test Host.
+It must also have ``ssh`` access to the Linux operating system
+of the compute nodes in the SUT. The ``ssh`` mechanism is used by some test cases
+to generate test events in the compute nodes. You can find out which test cases
+use this mechanism in the test specification document.
+
+We have tested the Dovetail tool on the following host operating systems. Other versions
+or distributions of Linux may also work, but community support is more readily available
+for these versions.
+
+- Ubuntu 16.04.2 LTS (Xenial) or 14.04 LTS (Trusty)
+- CentOS-7-1611
+- Red Hat Enterprise Linux 7.3
+- Fedora 24 or 25 Server
+
+Non-Linux operating systems, such as Windows and Mac OS, have not been tested
+and are not supported.
+
+If the online mode is used, the tester should also validate that the Test Host can reach
+the public Internet. For example,
+
+.. code-block:: bash
+
+   $ ping www.opnfv.org
+   PING www.opnfv.org (50.56.49.117): 56 data bytes
+   64 bytes from 50.56.49.117: icmp_seq=0 ttl=48 time=52.952 ms
+   64 bytes from 50.56.49.117: icmp_seq=1 ttl=48 time=53.805 ms
+   64 bytes from 50.56.49.117: icmp_seq=2 ttl=48 time=53.349 ms
+   ...
+
+Or, if the lab environment does not allow ping, try validating it using HTTPS instead.
+
+.. code-block:: bash
+
+   $ curl https://www.opnfv.org
+   <!doctype html>
+
+   <html lang="en-US" class="no-js">
+   <head>
+   ...
+
+Configuring the Test Host Environment
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The Test Host needs a few environment variables set correctly in order to access the
+OpenStack API required to drive the Dovetail tests. For convenience and as a convention,
+we will also create a home directory for storing all Dovetail related config files and
+results files:
+
+.. code-block:: bash
+
+   $ mkdir -p /home/dovetail
+   $ export DOVETAIL_HOME=/home/dovetail
+
+Here we set the Dovetail home directory to ``/home/dovetail`` as an example.
+Then create a directory named ``pre_config`` in this directory to store all
+Dovetail related config files:
+
+.. code-block:: bash
+
+   $ mkdir -p ${DOVETAIL_HOME}/pre_config
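+
+Note that the ``DOVETAIL_HOME`` variable exported above only lasts for the current shell
+session. If you would like it to be set automatically in future sessions as well, one
+option (assuming a bash login shell and the example path above) is to append the export
+to your shell profile, for example:
+
+.. code-block:: bash
+
+   # Optional: make DOVETAIL_HOME persistent across shell sessions (example path).
+   $ echo 'export DOVETAIL_HOME=/home/dovetail' >> ~/.bashrc
+   $ source ~/.bashrc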
+
+At this point, you will need to consult your SUT (OpenStack) administrator to correctly set
+the configurations in a file named ``env_config.sh``.
+The OpenStack settings need to be configured such that the Dovetail client has all the necessary
+credentials and privileges to execute all test operations. If the SUT uses terminology that
+differs from the standard OpenStack naming, you will need to adjust this file accordingly.
+
+In our example, we will use the file ``${DOVETAIL_HOME}/pre_config/env_config.sh``. Create and edit
+the file so that all parameters are set correctly to match your SUT. Here is an example of what
+this file should contain.
+
+.. code-block:: bash
+
+   $ cat ${DOVETAIL_HOME}/pre_config/env_config.sh
+
+   # Project-level authentication scope (name or ID); the admin project is recommended.
+   export OS_PROJECT_NAME=admin
+
+   # For identity API v2, use OS_TENANT_NAME rather than OS_PROJECT_NAME.
+   export OS_TENANT_NAME=admin
+
+   # Authentication username, which belongs to the project above; the admin user is recommended.
+   export OS_USERNAME=admin
+
+   # Authentication password. Use your own password.
+   export OS_PASSWORD=xxxxxxxx
+
+   # Authentication URL, one of the endpoints of the keystone service. If this is a v3
+   # endpoint, the extra variables below are also needed.
+   export OS_AUTH_URL='http://xxx.xxx.xxx.xxx:5000/v3'
+
+   # Default is 2.0. If the keystone v3 API is used, this should be set to 3.
+   export OS_IDENTITY_API_VERSION=3
+
+   # Domain name or ID containing the user above.
+   # Command to check the domain: openstack user show <OS_USERNAME>
+   export OS_USER_DOMAIN_NAME=default
+
+   # Domain name or ID containing the project above.
+   # Command to check the domain: openstack project show <OS_PROJECT_NAME>
+   export OS_PROJECT_DOMAIN_NAME=default
+
+   # Home directory for Dovetail that you have created before.
+   export DOVETAIL_HOME=/home/dovetail
+
+Export all these variables into the environment by,
+
+.. code-block:: bash
+
+   $ source ${DOVETAIL_HOME}/pre_config/env_config.sh
+
+If the OpenStack client is installed, you can validate that the OpenStack environment
+settings are correct by,
+
+.. code-block:: bash
+
+   $ openstack service list
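+
+If the OpenStack client is not installed on the Test Host, you can still catch obvious
+mistakes early with a quick check of the credential variables themselves. The following
+is a minimal sketch, not part of Dovetail, that simply reports any of the commonly
+required variables that are still unset:
+
+.. code-block:: bash
+
+   # Minimal sanity check (bash): report required variables that are still unset.
+   for var in OS_PROJECT_NAME OS_USERNAME OS_PASSWORD OS_AUTH_URL \
+              OS_IDENTITY_API_VERSION OS_USER_DOMAIN_NAME OS_PROJECT_DOMAIN_NAME \
+              DOVETAIL_HOME; do
+       [ -z "${!var}" ] && echo "WARNING: $var is not set"
+   done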
+
+Installing Prerequisites on the Test Host
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The main software prerequisites for Dovetail are Python and Docker.
+
+In the CVP test suite for the Danube release, Dovetail requires Python 2.7. Python 3.x
+is not supported at this time.
+
+Use the following command to check whether the right version of Python is already installed,
+and if not, install it.
+
+.. code-block:: bash
+
+   $ python --version
+   Python 2.7.6
+
+If your Test Host does not have Python installed, or the version is not 2.7, you
+should consult the Python installation guide corresponding to your Test Host's
+operating system on how to install Python 2.7.
+
+Dovetail does not work with Docker versions prior to 1.12.3. We have validated
+Dovetail with Docker 17.03 CE. Other versions of Docker later than 1.12.3 may
+also work, but community support is more readily available for Docker 17.03 CE.
+
+.. code-block:: bash
+
+   $ sudo docker version
+   Client:
+    Version:      17.03.1-ce
+    API version:  1.27
+    Go version:   go1.7.5
+    Git commit:   c6d412e
+    Built:        Mon Mar 27 17:10:36 2017
+    OS/Arch:      linux/amd64
+
+   Server:
+    Version:      17.03.1-ce
+    API version:  1.27 (minimum version 1.12)
+    Go version:   go1.7.5
+    Git commit:   c6d412e
+    Built:        Mon Mar 27 17:10:36 2017
+    OS/Arch:      linux/amd64
+    Experimental: false
+
+If your Test Host does not have Docker installed, or Docker is older than 1.12.3,
+or you have a Docker version other than 17.03 CE and wish to change,
+you will need to install, upgrade, or reinstall Docker in order to run Dovetail.
+The Docker installation process can be more complex; refer to the official
+Docker installation guide that is relevant to your Test Host's operating system.
+
+The above installation steps assume that the Test Host is in the online mode. For offline
+testing, use the following offline installation steps instead.
+
+In order to install or upgrade Python offline, you may download packaged Python 2.7
+for your Test Host's operating system on a connected host, copy the package to
+the Test Host, then install it from that local copy.
+
+In order to install Docker offline, download the Docker static binaries and copy the
+tar file to the Test Host. For Ubuntu 14.04, for example, you may follow the
+instructions at the link below,
+
+.. code-block:: bash
+
+   https://github.com/meetyg/docker-offline-install
+
+Installing Dovetail on the Test Host
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The Dovetail project maintains a Docker image that has the Dovetail test tools preinstalled.
+This Docker image is tagged with versions. Before pulling the Dovetail image, check
+OPNFV's CVP web page first to determine the right tag for CVP testing.
+
+If the Test Host is online, you can pull the image directly.
+
+.. code-block:: bash
+
+   $ sudo docker pull opnfv/dovetail:cvp.0.5.0
+   cvp.0.5.0: Pulling from opnfv/dovetail
+   30d541b48fc0: Pull complete
+   8ecd7f80d390: Pull complete
+   46ec9927bb81: Pull complete
+   2e67a4d67b44: Pull complete
+   7d9dd9155488: Pull complete
+   cc79be29f08e: Pull complete
+   e102eed9bf6a: Pull complete
+   952b8a9d2150: Pull complete
+   bfbb639d1f38: Pull complete
+   bf7c644692de: Pull complete
+   cdc345e3f363: Pull complete
+   Digest: sha256:d571b1073b2fdada79562e8cc67f63018e8d89268ff7faabee3380202c05edee
+   Status: Downloaded newer image for opnfv/dovetail:cvp.0.5.0
+
+An example of the <tag> is *cvp.0.5.0*.
+
+If the Test Host is offline, you will need to first pull the Dovetail Docker image, and all the
+dependent images that Dovetail uses, to a host that is online. You need to pull all dependent
+images because Dovetail normally checks dependencies at run time and pulls images automatically
+only when the Test Host is online. If the Test Host is offline, all these dependencies
+need to be copied over manually.
+
+.. code-block:: bash
+
+   $ sudo docker pull opnfv/dovetail:cvp.0.5.0
+   $ sudo docker pull opnfv/functest:cvp.0.5.0
+   $ sudo docker pull opnfv/yardstick:danube.3.2
+   $ sudo docker pull opnfv/bottlenecks:cvp.0.4.0
+   $ sudo wget -nc http://artifacts.opnfv.org/sdnvpn/ubuntu-16.04-server-cloudimg-amd64-disk1.img -P {ANY_DIR}
+
+Once all these images are pulled, save them, copy them to the Test Host, and then load
+the Dovetail image and all dependent images on the Test Host.
+
+On the online host, save the images.
+
+.. code-block:: bash
+
+   $ sudo docker save -o dovetail.tar opnfv/dovetail:cvp.0.5.0 opnfv/functest:cvp.0.5.0 \
+     opnfv/yardstick:danube.3.2 opnfv/bottlenecks:cvp.0.4.0
+
+Copy the dovetail.tar file to the Test Host, and then load the images on the Test Host.
+
+.. code-block:: bash
+
+   $ sudo docker load --input dovetail.tar
+
+Copy the sdnvpn test area image ``ubuntu-16.04-server-cloudimg-amd64-disk1.img`` to
+``${DOVETAIL_HOME}/pre_config/``.
+
+Now check to see that the Dovetail image has been pulled or loaded properly.
+
+.. code-block:: bash
+
+   $ sudo docker images
+   REPOSITORY          TAG          IMAGE ID        CREATED         SIZE
+   opnfv/functest      cvp.0.5.0    9eaeaea5f203    8 days ago      1.53GB
+   opnfv/dovetail      cvp.0.5.0    5d25b289451c    8 days ago      516MB
+   opnfv/yardstick     danube.3.2   574596b6ea12    8 days ago      1.2GB
+   opnfv/bottlenecks   cvp.0.4.0    00450688bcae    3 hours ago     622MB
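+
+If you would like to script this verification, for example as part of preparing an offline
+Test Host, the sketch below simply loops over the image tags used in the examples above
+(adjust the list to match the tags given on the CVP web page) and reports any image that
+is missing from the local Docker cache:
+
+.. code-block:: bash
+
+   # Report any required image (by name:tag) that is not present locally.
+   for image in opnfv/dovetail:cvp.0.5.0 opnfv/functest:cvp.0.5.0 \
+                opnfv/yardstick:danube.3.2 opnfv/bottlenecks:cvp.0.4.0; do
+       [ -z "$(sudo docker images -q "$image")" ] && echo "MISSING: $image"
+   done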
+
+Regardless of whether you pulled down the Dovetail image directly online, or loaded it from
+a static image tar file, you are ready to run Dovetail.
+
+.. code-block:: bash
+
+   $ sudo docker run --privileged=true -it \
+         -e DOVETAIL_HOME=$DOVETAIL_HOME \
+         -v $DOVETAIL_HOME:$DOVETAIL_HOME \
+         -v /var/run/docker.sock:/var/run/docker.sock \
+         opnfv/dovetail:<tag> /bin/bash
+
+The ``-e`` option sets environment variables in the container, and the ``-v`` options map
+host directories and files into the container.
+
+Running the CVP Test Suite
+--------------------------
+
+Now you should be at the Dovetail container's prompt and ready to execute
+test suites.
+
+The Dovetail client CLI allows the tester to specify which test suite to run.
+By default the results are stored in local files under
+``$DOVETAIL_HOME/results``.
+
+.. code-block:: bash
+
+   $ dovetail run --testsuite <test-suite-name>
+
+Multiple test suites may be available. For the purpose of running the
+CVP test suite, the test suite name follows the format
+``CVP_<major>_<minor>_<patch>``. For example, CVP_1_0_0.
+
+.. code-block:: bash
+
+   $ dovetail run --testsuite CVP_1_0_0
+
+If you are not running the entire test suite, you can choose to run an
+individual test area instead.
+
+.. code-block:: bash
+
+   $ dovetail run --testsuite CVP_1_0_0 --testarea ipv6
+
+Until the official test suite is approved and released, you can use
+the *proposed_tests* suite for your trial runs, like this.
+
+.. code-block:: bash
+
+   $ dovetail run --testsuite proposed_tests --testarea ipv6
+   2017-05-23 05:01:49,488 - run - INFO - ================================================
+   2017-05-23 05:01:49,488 - run - INFO - Dovetail compliance: proposed_tests!
+   2017-05-23 05:01:49,488 - run - INFO - ================================================
+   2017-05-23 05:01:49,488 - run - INFO - Build tag: daily-master-4bdde6b8-afa6-40bb-8fc9-5d568d74c8d7
+   2017-05-23 05:01:49,536 - run - INFO -
+   2017-05-23 05:01:49,710 - run - INFO - >>[testcase]: dovetail.ipv6.tc001
+   2017-05-23 05:08:22,532 - run - INFO - Results have been stored with file /home/dovetail/results/functest_results.txt.
+   2017-05-23 05:08:22,538 - run - INFO - >>[testcase]: dovetail.ipv6.tc002
+   ...
+
+Special Configuration for Running HA Test Cases
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The HA test cases need to know the info of a controller node of the OpenStack deployment:
+the node's name, role and IP address, as well as the user and either the key_filename or
+the password needed to log in to the node. Users should create the file
+``${DOVETAIL_HOME}/pre_config/pod.yaml`` to store this info.
+
+Here is a sample file.
+
+.. code-block:: yaml
+
+   nodes:
+   -
+       # This can not be changed and must be node1.
+       name: node1
+
+       # This must be controller.
+       role: Controller
+
+       # This is the install IP of a controller node.
+       ip: xx.xx.xx.xx
+
+       # User name of this node. This user must have sudo privileges.
+       user: root
+
+       # Password of the user.
+       password: root
+
+Instead of a ``password``, users can also provide a ``key_filename`` to log in to the node.
+In that case, create the file ``$DOVETAIL_HOME/pre_config/id_rsa`` to store the private key.
+
+.. code-block:: yaml
+
+   name: node1
+   role: Controller
+   ip: 10.1.0.50
+   user: root
+
+   # Private key of this node. It must be /root/.ssh/id_rsa
+   # Dovetail will move the key file from $DOVETAIL_HOME/pre_config/id_rsa
+   # to /root/.ssh/id_rsa of the Yardstick container
+   key_filename: /root/.ssh/id_rsa
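+
+Since indentation mistakes in ``pod.yaml`` can lead to confusing failures later in a test
+run, it may be worth checking the file's syntax up front. The sketch below assumes Python
+with the PyYAML package is available on the Test Host, which is not a Dovetail requirement:
+
+.. code-block:: bash
+
+   # Quick syntax check of pod.yaml; requires Python with PyYAML installed.
+   $ python -c "import sys, yaml; yaml.safe_load(open(sys.argv[1])); print('pod.yaml parses OK')" \
+         ${DOVETAIL_HOME}/pre_config/pod.yaml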
+
+Making Sense of CVP Test Results
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+When a tester is performing trial runs, Dovetail stores results in local files by default.
+
+.. code-block:: bash
+
+   $ cd $DOVETAIL_HOME/results
+
+#. Local files
+
+   * Log file: dovetail.log
+
+     * Review dovetail.log to see if all important information has been captured,
+       in the default mode without DEBUG.
+
+     * The end of the log file has a summary of all test case results.
+
+   * Additional log files may be of interest: refstack.log, dovetail_ha_tcXXX.out ...
+
+   * Example: OpenStack Refstack test cases
+
+     * The log details are in refstack.log, which records the passed/skipped/failed
+       test cases; the failed test cases carry rich debug information so that users
+       can see why a test case failed.
+
+   * Example: OPNFV Functest test cases
+
+     * The Functest tool stores its log in functest.log.
+
+     * The results for each Functest test case are stored in functest_results.txt.
+
+   * Example: OPNFV Yardstick test cases
+
+     * The Yardstick tool stores its log in yardstick.log.
+
+     * The logs for each Yardstick test case are stored in dovetail_ha_tcXXX.out,
+       one file per test case.
+
+#. OPNFV web interface
+
+   CVP will host a web site to collect test results. Users can upload their results to this
+   web site, so they can review these results in the future.
+
+   * Web site URL
+
+     * To be completed by the Linux Foundation, the test community, etc.
+
+   * Sign in / Sign up
+
+     * You need to sign in to your account before you can upload results and check your
+       private results. CVP currently uses OpenStack ID as the account provider, but will
+       soon support Linux Foundation ID as well.
+
+     * If you already have an OpenStack ID, you can sign in directly with your ID.
+
+     * If you do not have an OpenStack ID, you can sign up for a new one on the sign-up page.
+
+     * If you do not sign in, you can only check the community results.
+
+   * My results
+
+     * This page lists all results uploaded by you after you have signed in.
+
+     * You can also upload your results on this page.
+
+     * There is a *choose file* button; once you click it, you can choose a result file on
+       your hard disk, then click the *upload* button. You will see a result ID once the
+       upload succeeds.
+
+     * Check the *review* box to submit your result to OPNFV. Uncheck the box to withdraw
+       your result.
+
+   * Profile
+
+     * This page shows your account info after you have signed in.
+
+Updating Dovetail or a Test Suite
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Follow the instructions in section `Installing Dovetail on the Test Host`_ and
+`Running the CVP Test Suite`_ by replacing the Docker images with the new tags,
+
+.. code-block:: bash
+
+   $ sudo docker pull opnfv/dovetail:<dovetail_new_tag>
+   $ sudo docker pull opnfv/functest:<functest_new_tag>
+   $ sudo docker pull opnfv/yardstick:<yardstick_new_tag>
+
+This step is necessary if the Dovetail software or the CVP test suite has been updated.
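+
+After pulling images with new tags, the superseded images remain in the local Docker cache.
+If disk space on the Test Host is a concern, you may remove the old tags once the new ones
+have been verified; the old tag names below are placeholders:
+
+.. code-block:: bash
+
+   # Optional cleanup: remove superseded images after the new tags are verified.
+   $ sudo docker rmi opnfv/dovetail:<dovetail_old_tag>
+   $ sudo docker rmi opnfv/functest:<functest_old_tag>
+   $ sudo docker rmi opnfv/yardstick:<yardstick_old_tag>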