Testing with DevStack
=====================
This document describes how to test OpenStack with OVN using DevStack. We will
start by describing how to test on a single host.
Single Node Test Environment
----------------------------
1. Create a test system.
It's best to use a throwaway dev system for running DevStack. Use either
CentOS 7 or the latest Ubuntu LTS (16.04, Xenial).
2. Create the ``stack`` user.
::
$ git clone https://git.openstack.org/openstack-dev/devstack.git
$ sudo ./devstack/tools/create-stack-user.sh
3. Switch to the ``stack`` user and clone DevStack and networking-ovn.
::
$ sudo su - stack
$ git clone https://git.openstack.org/openstack-dev/devstack.git
$ git clone https://git.openstack.org/openstack/networking-ovn.git
4. Configure DevStack to use networking-ovn.
networking-ovn comes with a sample DevStack configuration file you can start
with. For example, you may want to set values for the various PASSWORD
variables in that file (see the example after the copy step below) so that
DevStack doesn't have to prompt you for them. Feel free to edit it if you'd
like, but it should work as-is.
::
$ cd devstack
$ cp ../networking-ovn/devstack/local.conf.sample local.conf
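For example, if you would like to pre-set the passwords mentioned above so
DevStack does not prompt for them, you could add lines like the following to
the ``[[local|localrc]]`` section of ``local.conf`` (these are standard
DevStack variables; the values here are arbitrary)::
ADMIN_PASSWORD=password
DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password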
5. Run DevStack.
This is going to take a while. It installs a bunch of packages, clones a bunch
of git repos, and installs everything from these git repos.
::
$ ./stack.sh
Once DevStack completes successfully, you should see output that looks
something like this::
This is your host IP address: 172.16.189.6
This is your host IPv6 address: ::1
Horizon is now available at http://172.16.189.6/dashboard
Keystone is serving at http://172.16.189.6/identity/
The default users are: admin and demo
The password: password
2017-03-09 15:10:54.117 | stack.sh completed in 2110 seconds.
Environment Variables
---------------------
Once DevStack finishes successfully, we're ready to start interacting with
OpenStack APIs. OpenStack provides a set of command line tools for interacting
with these APIs. DevStack provides a file you can source to set up the right
environment variables to make the OpenStack command line tools work.
::
$ . openrc
If you're curious which environment variables are set, they generally start
with an ``OS_`` prefix::
$ env | grep OS
OS_REGION_NAME=RegionOne
OS_IDENTITY_API_VERSION=2.0
OS_PASSWORD=password
OS_AUTH_URL=http://192.168.122.8:5000/v2.0
OS_USERNAME=demo
OS_TENANT_NAME=demo
OS_VOLUME_API_VERSION=2
OS_CACERT=/opt/stack/data/CA/int-ca/ca-chain.pem
OS_NO_CACHE=1
Default Network Configuration
-----------------------------
By default, DevStack creates networks called ``private`` and ``public``.
Run the following command to see the existing networks::
$ openstack network list
+--------------------------------------+---------+----------------------------------------------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+---------+----------------------------------------------------------------------------+
| 40080dad-0064-480a-b1b0-592ae51c1471 | private | 5ff81545-7939-4ae0-8365-1658d45fa85c, da34f952-3bfc-45bb-b062-d2d973c1a751 |
| 7ec986dd-aae4-40b5-86cf-8668feeeab67 | public | 60d0c146-a29b-4cd3-bd90-3745603b1a4b, f010c309-09be-4af2-80d6-e6af9c78bae7 |
+--------------------------------------+---------+----------------------------------------------------------------------------+
A Neutron network is implemented as an OVN logical switch. networking-ovn
creates logical switches with a name in the format neutron-<network UUID>.
We can use ``ovn-nbctl`` to list the configured logical switches and see that
their names correlate with the output from ``openstack network list`` above::
$ ovn-nbctl ls-list
71206f5c-b0e6-49ce-b572-eb2e964b2c4e (neutron-40080dad-0064-480a-b1b0-592ae51c1471)
8d8270e7-fd51-416f-ae85-16565200b8a4 (neutron-7ec986dd-aae4-40b5-86cf-8668feeeab67)
$ ovn-nbctl get Logical_Switch neutron-40080dad-0064-480a-b1b0-592ae51c1471 external_ids
{"neutron:network_name"=private}
Booting VMs
-----------
In this section we'll go through the steps to create two VMs that have a
virtual NIC attached to the ``private`` Neutron network.
DevStack uses libvirt as the Nova backend by default. If KVM is available, it
will be used; otherwise, Nova will run plain QEMU-emulated guests. That is
perfectly fine for our testing, as we only need these VMs to send and receive
a small amount of traffic, so performance is not very important.
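If you are curious whether KVM acceleration is available on your test system,
a quick check on a Linux host is to count the hardware virtualization CPU
flags; a result of zero means Nova will fall back to plain QEMU emulation::
$ egrep -c '(vmx|svm)' /proc/cpuinfo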
1. Get the Network UUID.
Start by getting the UUID for the ``private`` network from the output of
``openstack network list`` above and save it off::
$ PRIVATE_NET_ID=40080dad-0064-480a-b1b0-592ae51c1471
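Alternatively, you can have the client extract the UUID for you (a minimal
sketch using the ``openstack`` client's ``-f value -c id`` output options)::
$ PRIVATE_NET_ID=$(openstack network show private -f value -c id)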
2. Create an SSH keypair.
Next create an SSH keypair in Nova. Later, when we boot a VM, we'll ask that
the public key be put in the VM so we can SSH into it.
::
$ openstack keypair create demo > id_rsa_demo
$ chmod 600 id_rsa_demo
3. Choose a flavor.
We need minimal resources for these test VMs, so the ``m1.nano`` flavor is
sufficient.
::
$ openstack flavor list
+----+-----------+-------+------+-----------+-------+-----------+
| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+-----------+-------+------+-----------+-------+-----------+
| 1 | m1.tiny | 512 | 1 | 0 | 1 | True |
| 2 | m1.small | 2048 | 20 | 0 | 1 | True |
| 3 | m1.medium | 4096 | 40 | 0 | 2 | True |
| 4 | m1.large | 8192 | 80 | 0 | 4 | True |
| 42 | m1.nano | 64 | 0 | 0 | 1 | True |
| 5 | m1.xlarge | 16384 | 160 | 0 | 8 | True |
| 84 | m1.micro | 128 | 0 | 0 | 1 | True |
| c1 | cirros256 | 256 | 0 | 0 | 1 | True |
| d1 | ds512M | 512 | 5 | 0 | 1 | True |
| d2 | ds1G | 1024 | 10 | 0 | 1 | True |
| d3 | ds2G | 2048 | 10 | 0 | 2 | True |
| d4 | ds4G | 4096 | 20 | 0 | 4 | True |
+----+-----------+-------+------+-----------+-------+-----------+
$ FLAVOR_ID=42
4. Choose an image.
DevStack imports the CirrOS image by default, which is perfect for our testing.
It's a very small test image.
::
$ openstack image list
+--------------------------------------+--------------------------+--------+
| ID | Name | Status |
+--------------------------------------+--------------------------+--------+
| 849a8db2-3754-4cf6-9271-491fa4ff7195 | cirros-0.3.5-x86_64-disk | active |
+--------------------------------------+--------------------------+--------+
$ IMAGE_ID=849a8db2-3754-4cf6-9271-491fa4ff7195
5. Set up a security group rule so that we can access the VMs we will boot next.
By default, DevStack does not allow users to access VMs; to enable access, we
need to add rules. We will allow both ICMP and SSH.
::
$ openstack security group rule create --ingress --ethertype IPv4 --dst-port 22 --protocol tcp default
$ openstack security group rule create --ingress --ethertype IPv4 --protocol ICMP default
$ openstack security group rule list
+--------------------------------------+-------------+-----------+------------+--------------------------------------+--------------------------------------+
| ID | IP Protocol | IP Range | Port Range | Remote Security Group | Security Group |
+--------------------------------------+-------------+-----------+------------+--------------------------------------+--------------------------------------+
...
| ade97198-db44-429e-9b30-24693d86d9b1 | tcp | 0.0.0.0/0 | 22:22 | None | a47b14da-5607-404a-8de4-3a0f1ad3649c |
| d0861a98-f90e-4d1a-abfb-827b416bc2f6 | icmp | 0.0.0.0/0 | | None | a47b14da-5607-404a-8de4-3a0f1ad3649c |
...
+--------------------------------------+-------------+-----------+------------+--------------------------------------+--------------------------------------+
Equivalently, you can create and list the same rules with the legacy
``neutron`` client::
$ neutron security-group-rule-create --direction ingress --ethertype IPv4 --port-range-min 22 --port-range-max 22 --protocol tcp default
$ neutron security-group-rule-create --direction ingress --ethertype IPv4 --protocol ICMP default
$ neutron security-group-rule-list
+--------------------------------------+----------------+-----------+-----------+---------------+-----------------+
| id | security_group | direction | ethertype | protocol/port | remote |
+--------------------------------------+----------------+-----------+-----------+---------------+-----------------+
| 8b2edbe6-790e-40ef-af54-c7b64ced8240 | default | ingress | IPv4 | 22/tcp | any |
| 5bee0179-807b-41d7-ab16-6de6ac051335 | default | ingress | IPv4 | icmp | any |
...
+--------------------------------------+----------------+-----------+-----------+---------------+-----------------+
6. Boot some VMs.
Now we will boot two VMs. We'll name them ``test1`` and ``test2``.
::
$ openstack server create --nic net-id=$PRIVATE_NET_ID --flavor $FLAVOR_ID --image $IMAGE_ID --key-name demo test1
+-----------------------------+-----------------------------------------------------------------+
| Field | Value |
+-----------------------------+-----------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-STS:power_state | NOSTATE |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | BzAWWA6byGP6 |
| config_drive | |
| created | 2017-03-09T16:56:08Z |
| flavor | m1.nano (42) |
| hostId | |
| id | d8b8084e-58ff-44f4-b029-a57e7ef6ba61 |
| image | cirros-0.3.5-x86_64-disk (849a8db2-3754-4cf6-9271-491fa4ff7195) |
| key_name | demo |
| name | test1 |
| progress | 0 |
| project_id | b6522570f7344c06b1f24303abf3c479 |
| properties | |
| security_groups | name='default' |
| status | BUILD |
| updated | 2017-03-09T16:56:08Z |
| user_id | c68f77f1d85e43eb9e5176380a68ac1f |
| volumes_attached | |
+-----------------------------+-----------------------------------------------------------------+
$ openstack server create --nic net-id=$PRIVATE_NET_ID --flavor $FLAVOR_ID --image $IMAGE_ID --key-name demo test2
+-----------------------------+-----------------------------------------------------------------+
| Field | Value |
+-----------------------------+-----------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-STS:power_state | NOSTATE |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | YB8dmt5v88JV |
| config_drive | |
| created | 2017-03-09T16:56:50Z |
| flavor | m1.nano (42) |
| hostId | |
| id | 170d4f37-9299-4a08-b48b-2b90fce8e09b |
| image | cirros-0.3.5-x86_64-disk (849a8db2-3754-4cf6-9271-491fa4ff7195) |
| key_name | demo |
| name | test2 |
| progress | 0 |
| project_id | b6522570f7344c06b1f24303abf3c479 |
| properties | |
| security_groups | name='default' |
| status | BUILD |
| updated | 2017-03-09T16:56:51Z |
| user_id | c68f77f1d85e43eb9e5176380a68ac1f |
| volumes_attached | |
+-----------------------------+-----------------------------------------------------------------+
Once both VMs have been started, they will have a status of ``ACTIVE``::
$ openstack server list
+--------------------------------------+-------+--------+---------------------------------------------------------+--------------------------+
| ID | Name | Status | Networks | Image Name |
+--------------------------------------+-------+--------+---------------------------------------------------------+--------------------------+
| 170d4f37-9299-4a08-b48b-2b90fce8e09b | test2 | ACTIVE | private=fd5d:9d1b:457c:0:f816:3eff:fe24:49df, 10.0.0.3 | cirros-0.3.5-x86_64-disk |
| d8b8084e-58ff-44f4-b029-a57e7ef6ba61 | test1 | ACTIVE | private=fd5d:9d1b:457c:0:f816:3eff:fe3f:953d, 10.0.0.10 | cirros-0.3.5-x86_64-disk |
+--------------------------------------+-------+--------+---------------------------------------------------------+--------------------------+
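If a VM stays in ``BUILD`` for a little while, you can re-check its status
until it becomes ``ACTIVE``; for example::
$ openstack server show test1 -f value -c status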
Our two VMs have addresses of ``10.0.0.3`` and ``10.0.0.10``. If we list
Neutron ports, there are two new ports with these addresses associated
with them::
$ openstack port list
+--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------------------------+--------+
| ID | Name | MAC Address | Fixed IP Addresses | Status |
+--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------------------------+--------+
...
| 97c970b0-485d-47ec-868d-783c2f7acde3 | | fa:16:3e:3f:95:3d | ip_address='10.0.0.10', subnet_id='da34f952-3bfc-45bb-b062-d2d973c1a751' | ACTIVE |
| | | | ip_address='fd5d:9d1b:457c:0:f816:3eff:fe3f:953d', subnet_id='5ff81545-7939-4ae0-8365-1658d45fa85c' | |
| e003044d-334a-4de3-96d9-35b2d2280454 | | fa:16:3e:24:49:df | ip_address='10.0.0.3', subnet_id='da34f952-3bfc-45bb-b062-d2d973c1a751' | ACTIVE |
| | | | ip_address='fd5d:9d1b:457c:0:f816:3eff:fe24:49df', subnet_id='5ff81545-7939-4ae0-8365-1658d45fa85c' | |
...
+--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------------------------+--------+
$ TEST1_PORT_ID=97c970b0-485d-47ec-868d-783c2f7acde3
$ TEST2_PORT_ID=e003044d-334a-4de3-96d9-35b2d2280454
Now we can look at OVN using ``ovn-nbctl`` to see the logical switch ports
that were created for these two Neutron ports. The first part of the output
is the OVN logical switch port UUID. The second part in parentheses is the
logical switch port name. Neutron sets the logical switch port name equal to
the Neutron port ID.
::
$ ovn-nbctl lsp-list neutron-$PRIVATE_NET_ID
...
fde1744b-e03b-46b7-b181-abddcbe60bf2 (97c970b0-485d-47ec-868d-783c2f7acde3)
7ce284a8-a48a-42f5-bf84-b2bca62cd0fe (e003044d-334a-4de3-96d9-35b2d2280454)
...
These two ports correspond to the two VMs we created.
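If you want to double-check which logical switch port belongs to which VM,
compare the addresses on the Neutron port with those on the OVN logical switch
port; for example, using the port saved earlier::
$ openstack port show $TEST1_PORT_ID -f value -c mac_address
$ ovn-nbctl lsp-get-addresses 97c970b0-485d-47ec-868d-783c2f7acde3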
VM Connectivity
---------------
We can connect to our VMs by associating a floating IP address from the public
network.
::
$ openstack floating ip create --port $TEST1_PORT_ID public
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| created_at | 2017-03-09T18:58:12Z |
| description | |
| fixed_ip_address | 10.0.0.10 |
| floating_ip_address | 172.24.4.8 |
| floating_network_id | 7ec986dd-aae4-40b5-86cf-8668feeeab67 |
| id | 24ff0799-5a72-4a5b-abc0-58b301c9aee5 |
| name | None |
| port_id | 97c970b0-485d-47ec-868d-783c2f7acde3 |
| project_id | b6522570f7344c06b1f24303abf3c479 |
| revision_number | 1 |
| router_id | ee51adeb-0dd8-4da0-ab6f-7ce60e00e7b0 |
| status | DOWN |
| updated_at | 2017-03-09T18:58:12Z |
+---------------------+--------------------------------------+
DevStack does not wire up the public network by default, so we must do that
before connecting to this floating IP address.
::
$ sudo ip link set br-ex up
$ sudo ip route add 172.24.4.0/24 dev br-ex
$ sudo ip addr add 172.24.4.1/24 dev br-ex
Now you should be able to connect to the VM via its floating IP address.
First, ping the address.
::
$ ping -c 1 172.24.4.8
PING 172.24.4.8 (172.24.4.8) 56(84) bytes of data.
64 bytes from 172.24.4.8: icmp_seq=1 ttl=63 time=0.823 ms
--- 172.24.4.8 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.823/0.823/0.823/0.000 ms
Now SSH to the VM::
$ ssh -i id_rsa_demo cirros@172.24.4.8 hostname
test1
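You can also verify east-west connectivity across the OVN logical switch by
pinging ``test2``'s private address from ``test1`` (addresses taken from the
listings above)::
$ ssh -i id_rsa_demo cirros@172.24.4.8 ping -c 1 10.0.0.3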
Adding Another Compute Node
---------------------------
After completing the earlier instructions for setting up DevStack, you can use
a second VM to emulate an additional compute node. This is important for OVN
testing, as it exercises the tunnels created by OVN between the hypervisors.
Just as before, create a throwaway VM, but make sure that this VM has a
different host name. Having the same host name on both VMs will confuse Nova,
and you will not see two hypervisors when you query the hypervisor list later.
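If the second VM needs its host name changed before you run DevStack, a
systemd-based host (CentOS 7 or Ubuntu 16.04) lets you set it with
``hostnamectl``; the name below is just an example::
$ sudo hostnamectl set-hostname centos7-ovn-devstack-2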
Once the VM is set up, create the ``stack`` user::
$ git clone https://git.openstack.org/openstack-dev/devstack.git
$ sudo ./devstack/tools/create-stack-user.sh
Switch to the ``stack`` user and clone DevStack and networking-ovn::
$ sudo su - stack
$ git clone https://git.openstack.org/openstack-dev/devstack.git
$ git clone https://git.openstack.org/openstack/networking-ovn.git
networking-ovn comes with another sample configuration file that can be used
for this::
$ cd devstack
$ cp ../networking-ovn/devstack/computenode-local.conf.sample local.conf
You must set ``SERVICE_HOST`` in ``local.conf``. The value should be the IP
address of the main DevStack host. You must also set ``HOST_IP`` to the IP
address of this new host. See the text in the sample configuration file for
more information.
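For example, using the addresses from this walkthrough, the relevant entries
in the ``[[local|localrc]]`` section might look like this (adjust both values
to your own hosts)::
SERVICE_HOST=172.16.189.6
HOST_IP=172.16.189.30
Once that is complete, run DevStack::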
$ cd devstack
$ ./stack.sh
This should complete in less time than before, as it's only running a single
OpenStack service (nova-compute) along with OVN (ovn-controller, ovs-vswitchd,
ovsdb-server). The final output will look something like this::
This is your host IP address: 172.16.189.30
This is your host IPv6 address: ::1
2017-03-09 18:39:27.058 | stack.sh completed in 1149 seconds.
Now go back to your main DevStack host. You can use admin credentials to
verify that the additional hypervisor has been added to the deployment::
$ cd devstack
$ . openrc admin
$ openstack hypervisor list
+----+------------------------+-----------------+---------------+-------+
| ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
+----+------------------------+-----------------+---------------+-------+
| 1 | centos7-ovn-devstack | QEMU | 172.16.189.6 | up |
| 2 | centos7-ovn-devstack-2 | QEMU | 172.16.189.30 | up |
+----+------------------------+-----------------+---------------+-------+
You can also look at OVN and OVS to see that the second host has shown up. For
example, there will be a second entry in the Chassis table of the
OVN_Southbound database. You can use the ``ovn-sbctl`` utility to list
chassis, their configuration, and the ports bound to each of them::
$ ovn-sbctl show
Chassis "ddc8991a-d838-4758-8d15-71032da9d062"
hostname: "centos7-ovn-devstack"
Encap vxlan
ip: "172.16.189.6"
options: {csum="true"}
Encap geneve
ip: "172.16.189.6"
options: {csum="true"}
Port_Binding "97c970b0-485d-47ec-868d-783c2f7acde3"
Port_Binding "e003044d-334a-4de3-96d9-35b2d2280454"
Port_Binding "cr-lrp-08d1f28d-cc39-4397-b12b-7124080899a1"
Chassis "b194d07e-0733-4405-b795-63b172b722fd"
hostname: "centos7-ovn-devstack-2.os1.phx2.redhat.com"
Encap geneve
ip: "172.16.189.30"
options: {csum="true"}
Encap vxlan
ip: "172.16.189.30"
options: {csum="true"}
You can also see a tunnel created to the other compute node::
$ ovs-vsctl show
...
Bridge br-int
fail_mode: secure
...
Port "ovn-b194d0-0"
Interface "ovn-b194d0-0"
type: geneve
options: {csum="true", key=flow, remote_ip="172.16.189.30"}
...
...
Provider Networks
-----------------
Neutron has a "provider networks" API extension that lets you specify
some additional attributes on a network. These attributes let you
map a Neutron network to a physical network in your environment.
The OVN ML2 driver includes support for this API extension. It currently
supports "flat" and "vlan" networks.
Here is how you can test it:
First you must create an OVS bridge that provides connectivity to the
provider network on every host running ovn-controller. For trivial
testing this could just be a dummy bridge. In a real environment, you
would want to add a local network interface to the bridge, as well.
::
$ ovs-vsctl add-br br-provider
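In a real environment, as noted above, you would also attach a physical
interface to this bridge so that it actually reaches the provider network; a
minimal sketch (the interface name ``eth1`` is hypothetical)::
$ ovs-vsctl add-port br-provider eth1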
ovn-controller on each host must be configured with a mapping between
a network name and the bridge that provides connectivity to that network.
In this case, we'll create a mapping from the network name "providernet"
to the bridge "br-provider".
::
$ ovs-vsctl set open . \
external-ids:ovn-bridge-mappings=providernet:br-provider
Now create a Neutron provider network.
::
$ neutron net-create provider --shared \
--provider:physical_network providernet \
--provider:network_type flat
Alternatively, you can define connectivity to a VLAN instead of a flat network:
::
$ neutron net-create provider-101 --shared \
--provider:physical_network providernet \
--provider:network_type vlan \
--provider:segmentation_id 101
Observe that the OVN ML2 driver created a special logical switch port of type
localnet on the logical switch to model the connection to the physical network.
::
$ ovn-nbctl show
...
switch 5bbccbbd-f5ca-411b-bad9-01095d6f1316 (neutron-729dbbee-db84-4a3d-afc3-82c0b3701074)
port provnet-729dbbee-db84-4a3d-afc3-82c0b3701074
addresses: ["unknown"]
...
$ ovn-nbctl lsp-get-type provnet-729dbbee-db84-4a3d-afc3-82c0b3701074
localnet
$ ovn-nbctl lsp-get-options provnet-729dbbee-db84-4a3d-afc3-82c0b3701074
network_name=providernet
If VLAN is used, there will be a VLAN tag shown on the localnet port as well.
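For example, you can read the tag back from the localnet port of the VLAN
network with the generic ``get`` command (substitute the localnet port name of
your VLAN network)::
$ ovn-nbctl get Logical_Switch_Port provnet-<network UUID> tag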
Finally, create a Neutron port on the provider network.
::
$ neutron port-create provider
Or, if you followed the VLAN example, it would be:
::
$ neutron port-create provider-101
Run Unit Tests
--------------
Run the unit tests in the local environment with ``tox``.
::
$ tox -e py27
$ tox -e py27 networking_ovn.tests.unit.test_ovn_db_sync
$ tox -e py27 networking_ovn.tests.unit.test_ovn_db_sync.TestOvnSbSyncML2
$ tox -e py27 networking_ovn.tests.unit.test_ovn_db_sync.TestOvnSbSyncML2\
.test_ovn_sb_sync
Run Functional Tests
--------------------
You can run the functional tests with ``tox`` in your DevStack environment:
::
$ cd networking_ovn/tests/functional
$ tox -e dsvm-functional
$ tox -e dsvm-functional networking_ovn.tests.functional.test_mech_driver\
.TestPortBinding.test_port_binding_create_port
If you want to run the functional tests in a clean local environment instead,
you need to prepare a new working directory first.
::
$ export BASE=/opt/stack
$ mkdir -p /opt/stack/new
$ cd /opt/stack/new
Next, get networking_ovn, neutron and devstack.
::
$ git clone https://git.openstack.org/openstack/networking-ovn.git
$ git clone https://git.openstack.org/openstack/neutron.git
$ git clone https://git.openstack.org/openstack-dev/devstack.git
Then execute the script to prepare the environment.
::
$ cd networking-ovn/
$ ./networking_ovn/tests/contrib/gate_hook.sh
Finally, run the functional tests with ``tox``.
::
$ cd networking_ovn/tests/functional
$ tox -e dsvm-functional
$ tox -e dsvm-functional networking_ovn.tests.functional.test_mech_driver\
.TestPortBinding.test_port_binding_create_port