==========================================
Setting Up a Service VM as an IPv6 vRouter
==========================================

Now we can start to set up a service VM as an IPv6 vRouter. For illustration purposes, we assume:

* The hostname of the OpenDaylight Controller Node is ``opnfv-odl-controller``
* The hostname of the OpenStack Controller Node is ``opnfv-os-controller``
* The hostname of the OpenStack Compute Node is ``opnfv-os-compute``
* We use ``opnfv`` as the username to log in.
* We use ``devstack`` to install OpenStack Kilo, and the directory is ``~/devstack``
* Note: all IP addresses shown below are for illustration purposes only.

***************************************************
Source the Credentials in OpenStack Controller Node
***************************************************

**SETUP-SVM-1**: Log in with username ``opnfv`` to the OpenStack Controller Node ``opnfv-os-controller``.
Start a new terminal, and change directory to where OpenStack is installed.

.. code-block:: bash

    cd ~/devstack

**SETUP-SVM-2**: Source the credentials.

.. code-block:: bash

    opnfv@opnfv-os-controller:~/devstack$ source openrc admin demo

**************************************
Add External Connectivity to ``br-ex``
**************************************

Because we need to manually create networks/subnets to achieve the IPv6 vRouter, we have set the flag
``NEUTRON_CREATE_INITIAL_NETWORKS=False`` in the ``local.conf`` file. When this flag is set to ``False``,
``devstack`` does not create any networks/subnets during the setup phase.

In the OpenStack Controller Node ``opnfv-os-controller``, ``eth1`` is configured to provide external/public connectivity
for both IPv4 and IPv6. So let us add this interface to ``br-ex``, and move the IP address and the default route
from ``eth1`` to ``br-ex``.

**SETUP-SVM-3**: Add ``eth1`` to ``br-ex`` and move the IP address and the default route from ``eth1`` to ``br-ex``

.. code-block:: bash

    # Remove the external IP address from eth1 before adding eth1 to br-ex
    sudo ip addr del <External IP address of opnfv-os-controller> dev eth1
    sudo ovs-vsctl add-port br-ex eth1
    sudo ifconfig eth1 up
    # Re-assign the external IP address, now on br-ex
    sudo ip addr add <External IP address of opnfv-os-controller> dev br-ex
    sudo ifconfig br-ex up
    # Restore the default route, now via br-ex
    sudo ip route add default via <Default gateway IP address of opnfv-os-controller> dev br-ex

Please note that **this can be automated in /etc/network/interfaces**.
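
For example, a minimal ``/etc/network/interfaces`` sketch, assuming Ubuntu-style ``ifupdown`` and the exemplary
addresses used in this guide (adapt them to your actual network):

.. code-block:: bash

    # Illustrative sketch only; br-ex is the OVS bridge created by devstack
    auto eth1
    iface eth1 inet manual

    auto br-ex
    iface br-ex inet static
        address 198.59.156.113
        netmask 255.255.255.0
        gateway 198.59.156.1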

**SETUP-SVM-4**: Verify that ``br-ex`` now has the original external IP address, and that the default route is on
``br-ex``

.. code-block:: bash

    opnfv@opnfv-os-controller:~/devstack$ ip a s br-ex
    38: br-ex: <BROADCAST,UP,LOWER_UP> mtu 1430 qdisc noqueue state UNKNOWN group default
        link/ether 00:50:56:82:42:d1 brd ff:ff:ff:ff:ff:ff
        inet 198.59.156.113/24 brd 198.59.156.255 scope global br-ex
           valid_lft forever preferred_lft forever
        inet6 fe80::543e:28ff:fe70:4426/64 scope link
           valid_lft forever preferred_lft forever
    opnfv@opnfv-os-controller:~/devstack$ ip route
    default via 198.59.156.1 dev br-ex
    10.134.156.0/24 dev eth0  proto kernel  scope link  src 10.134.156.113
    192.168.122.0/24 dev virbr0  proto kernel  scope link  src 192.168.122.1
    198.59.156.0/24 dev br-ex  proto kernel  scope link  src 198.59.156.113

Please note that the IP addresses above are for illustration purposes only.

********************************************************
Create IPv4 Subnet and Router with External Connectivity
********************************************************

**SETUP-SVM-5**: Create a Neutron router ``ipv4-router`` which needs to provide external connectivity.

.. code-block:: bash

    neutron router-create ipv4-router

**SETUP-SVM-6**: Create an external network/subnet ``ext-net`` using the appropriate values based on the
data-center physical network setup.

.. code-block:: bash

    neutron net-create --router:external ext-net
    neutron subnet-create --disable-dhcp --allocation-pool start=198.59.156.251,end=198.59.156.254 --gateway 198.59.156.1 ext-net 198.59.156.0/24

Please note that the IP addresses in the command above are for illustration purposes. **Please replace them with the
IP addresses of your actual network**.

**SETUP-SVM-7**: Associate the ``ext-net`` to the Neutron router ``ipv4-router``.

.. code-block:: bash

    neutron router-gateway-set ipv4-router ext-net
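
You can optionally confirm that the external gateway has been set:

.. code-block:: bash

    neutron router-show ipv4-router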

**SETUP-SVM-8**: Create an internal/tenant IPv4 network ``ipv4-int-network1``

.. code-block:: bash

    neutron net-create ipv4-int-network1

**SETUP-SVM-9**: Create an IPv4 subnet ``ipv4-int-subnet1`` in the internal network ``ipv4-int-network1``

.. code-block:: bash

    neutron subnet-create --name ipv4-int-subnet1 --dns-nameserver 8.8.8.8 ipv4-int-network1 20.0.0.0/24

Please note that the IP addresses in the command above are for illustration purposes. **Please replace them with the
IP addresses of your actual network**.

**SETUP-SVM-10**: Associate the IPv4 internal subnet ``ipv4-int-subnet1`` to the Neutron router ``ipv4-router``.

.. code-block:: bash

    neutron router-interface-add ipv4-router ipv4-int-subnet1
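
Optionally, list the router's ports to confirm that the subnet interface has been attached:

.. code-block:: bash

    neutron router-port-list ipv4-router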

********************************************************
Create IPv6 Subnet and Router with External Connectivity
********************************************************

Now, let us create a second Neutron router where we can "manually" spawn a ``radvd`` daemon to simulate an external
IPv6 router.

**SETUP-SVM-11**: Create a second Neutron router ``ipv6-router`` which needs to provide external connectivity

.. code-block:: bash

    neutron router-create ipv6-router

**SETUP-SVM-12**: Associate the ``ext-net`` to the Neutron router ``ipv6-router``

.. code-block:: bash

    neutron router-gateway-set ipv6-router ext-net

**SETUP-SVM-13**: Create a second internal/tenant IPv4 network ``ipv4-int-network2``

.. code-block:: bash

    neutron net-create ipv4-int-network2

**SETUP-SVM-14**: Create an IPv4 subnet ``ipv4-int-subnet2`` for the ``ipv6-router`` internal network
``ipv4-int-network2``

.. code-block:: bash

    neutron subnet-create --name ipv4-int-subnet2 --dns-nameserver 8.8.8.8 ipv4-int-network2 10.0.0.0/24

Please note that the IP addresses in the command above are for illustration purposes. **Please replace them with the
IP addresses of your actual network**.

**SETUP-SVM-15**: Associate the IPv4 internal subnet ``ipv4-int-subnet2`` to the Neutron router ``ipv6-router``.

.. code-block:: bash

    neutron router-interface-add ipv6-router ipv4-int-subnet2

**************************************************
Prepare Image, Metadata and Keypair for Service VM
**************************************************

**SETUP-SVM-16**: Download the ``Fedora20`` image, which will be used as the ``vRouter``

.. code-block:: bash

    glance image-create --name 'Fedora20' --disk-format qcow2 --container-format bare --is-public true --copy-from http://cloud.fedoraproject.org/fedora-20.x86_64.qcow2

**SETUP-SVM-17**: Create a keypair

.. code-block:: bash

    nova keypair-add vRouterKey > ~/vRouterKey
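
Since ``ssh`` will refuse to use a private key file whose permissions are too open, restrict them:

.. code-block:: bash

    chmod 600 ~/vRouterKey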

**SETUP-SVM-18**: Copy the contents from the following URL to ``metadata.txt``, i.e. prepare the metadata that enables
IPv6 router functionality inside ``vRouter``

.. code-block:: bash

    http://fpaste.org/303942/50781923/

Please note that this ``metadata.txt`` will enable the ``vRouter`` to automatically spawn a ``radvd`` daemon,
which advertises its IPv6 subnet prefix ``2001:db8:0:2::/64`` in RA (Router Advertisement) messages through
its ``eth1`` interface to other VMs on ``ipv4-int-network1``. The ``radvd`` daemon also advertises routing
information, i.e. a route to the ``2001:db8:0:2::/64`` subnet, in RA messages through its
``eth0`` interface to the ``eth1`` interface of ``ipv6-router`` on ``ipv4-int-network2``.
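
For reference, the gist of such a ``radvd`` configuration could look like the following sketch. This is only an
illustration of the behavior described above; the authoritative contents are at the URL in **SETUP-SVM-18**:

.. code-block:: bash

    # Hypothetical sketch of the radvd configuration spawned inside vRouter
    interface eth1
    {
       AdvSendAdvert on;
       prefix 2001:db8:0:2::/64       # advertised to VMs on ipv4-int-network1
       {
          AdvOnLink on;
          AdvAutonomous on;
       };
    };
    interface eth0
    {
       AdvSendAdvert on;
       route 2001:db8:0:2::/64        # route advertised towards ipv6-router
       {
       };
    };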

**********************************************************************************************************
Boot Service VM (``vRouter``) with ``eth0`` on ``ipv4-int-network2`` and ``eth1`` on ``ipv4-int-network1``
**********************************************************************************************************

Let us boot the service VM (``vRouter``) with ``eth0`` interface on ``ipv4-int-network2`` connecting to ``ipv6-router``,
and ``eth1`` interface on ``ipv4-int-network1`` connecting to ``ipv4-router``.

**SETUP-SVM-19**: Boot the ``vRouter`` using ``Fedora20`` image on the OpenStack Compute Node with hostname
``opnfv-os-compute``

.. code-block:: bash

    nova boot --image Fedora20 --flavor m1.small --user-data ./metadata.txt --availability-zone nova:opnfv-os-compute --nic net-id=$(neutron net-list | grep -w ipv4-int-network2 | awk '{print $2}') --nic net-id=$(neutron net-list | grep -w ipv4-int-network1 | awk '{print $2}') --key-name vRouterKey vRouter

**SETUP-SVM-20**: Verify that the ``Fedora20`` image boots up successfully and that the ``ssh`` keys are properly injected

.. code-block:: bash

    nova list
    nova console-log vRouter

Please note that **it may take a few minutes** for the necessary packages to get installed and ``ssh`` keys
to be injected.

.. code-block:: bash

    # Sample Output
    [  762.884523] cloud-init[871]: ec2: #############################################################
    [  762.909634] cloud-init[871]: ec2: -----BEGIN SSH HOST KEY FINGERPRINTS-----
    [  762.931626] cloud-init[871]: ec2: 2048 e3:dc:3d:4a:bc:b6:b0:77:75:a1:70:a3:d0:2a:47:a9   (RSA)
    [  762.957380] cloud-init[871]: ec2: -----END SSH HOST KEY FINGERPRINTS-----
    [  762.979554] cloud-init[871]: ec2: #############################################################

*******************************************
Boot Two Other VMs in ``ipv4-int-network1``
*******************************************

In order to verify that the setup is working, let us create two cirros VMs on ``ipv4-int-network1``, i.e. connected
to the internal network behind the ``eth1`` interface of ``vRouter``.

We will have to configure an appropriate ``mtu`` on the VMs' interfaces, taking into account the tunneling
overhead and any physical switch requirements. If required, push the ``mtu`` to the VM either using ``dhcp``
options or via ``meta-data``, for example as sketched below.
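
As a sketch of the ``dhcp`` option approach, assuming the dnsmasq-based Neutron DHCP agent and an exemplary
``mtu`` of 1400 (DHCP option 26 is the interface MTU):

.. code-block:: bash

    # Illustrative only: /etc/neutron/dnsmasq-neutron.conf
    dhcp-option-force=26,1400

    # and reference it from /etc/neutron/dhcp_agent.ini:
    # dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf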

**SETUP-SVM-21**: Create VM1 on OpenStack Controller Node with hostname ``opnfv-os-controller``

.. code-block:: bash

    nova boot --image cirros-0.3.4-x86_64-uec --flavor m1.tiny --nic net-id=$(neutron net-list | grep -w ipv4-int-network1 | awk '{print $2}') --availability-zone nova:opnfv-os-controller --key-name vRouterKey VM1

**SETUP-SVM-22**: Create VM2 on OpenStack Compute Node with hostname ``opnfv-os-compute``

.. code-block:: bash

    nova boot --image cirros-0.3.4-x86_64-uec --flavor m1.tiny --nic net-id=$(neutron net-list | grep -w ipv4-int-network1 | awk '{print $2}') --availability-zone nova:opnfv-os-compute --key-name vRouterKey VM2

**SETUP-SVM-23**: Confirm that both the VMs are successfully booted.

.. code-block:: bash

    nova list
    nova console-log VM1
    nova console-log VM2

**********************************
Spawn ``RADVD`` in ``ipv6-router``
**********************************

Let us manually spawn a ``radvd`` daemon inside the ``ipv6-router`` namespace to simulate an external router.
First of all, we will have to identify the ``ipv6-router`` namespace and enter it.

**SETUP-SVM-24**: Identify the ``ipv6-router`` namespace and enter it

.. code-block:: bash

    sudo ip netns exec qrouter-$(neutron router-list | grep -w ipv6-router | awk '{print $2}') bash
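
If the above command fails, you can list the available namespaces to verify that the ``qrouter`` namespace exists:

.. code-block:: bash

    sudo ip netns list | grep qrouter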

**SETUP-SVM-25**: Upon successful execution of the above command, you will be in the router namespace.
Now let us configure an IPv6 address on the ``qr-xxx`` interface.

.. code-block:: bash

    # Identify the qr-xxx interface, i.e. the router port on ipv4-int-network2
    router_interface=$(ip a s | grep -w "global qr-*" | awk '{print $7}')
    # Assign the IPv6 address that will be advertised as on-link below
    ip -6 addr add 2001:db8:0:1::1 dev $router_interface

**SETUP-SVM-26**: Copy the following contents to some file, e.g. ``/tmp/br-ex.radvd.conf``, replacing
``$router_interface`` with the actual interface name identified above (or generate the file as sketched
after the configuration, so that the shell expands the variable for you)

.. code-block:: bash

    interface $router_interface
      {
         AdvSendAdvert on;
         MinRtrAdvInterval 3;
         MaxRtrAdvInterval 10;
         prefix 2001:db8:0:1::/64
           {
              AdvOnLink on;
              AdvAutonomous on;
           };
      };
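
Alternatively, while still inside the namespace shell, you can generate the file so that ``$router_interface``
is expanded automatically (a sketch):

.. code-block:: bash

    cat > /tmp/br-ex.radvd.conf <<EOF
    interface $router_interface
    {
       AdvSendAdvert on;
       MinRtrAdvInterval 3;
       MaxRtrAdvInterval 10;
       prefix 2001:db8:0:1::/64
       {
          AdvOnLink on;
          AdvAutonomous on;
       };
    };
    EOF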

**SETUP-SVM-27**: Spawn a ``radvd`` daemon to simulate an external router. This ``radvd`` daemon advertises its
IPv6 subnet prefix ``2001:db8:0:1::/64`` in RA (Router Advertisement) messages through its ``eth1`` interface to
the ``eth0`` interface of ``vRouter`` on ``ipv4-int-network2``.

.. code-block:: bash

    radvd -C /tmp/br-ex.radvd.conf -p /tmp/br-ex.pid.radvd -m syslog

**SETUP-SVM-28**: Configure the ``proc`` entries of ``$router_interface`` to process the RA (Router Advertisement)
messages from ``vRouter``, so that a downstream route pointing to the LLA (Link Local Address) of the
``eth0`` interface of the ``vRouter`` is automatically added.

.. code-block:: bash

    sysctl -w net.ipv6.conf.$router_interface.accept_ra=2
    sysctl -w net.ipv6.conf.$router_interface.accept_ra_rt_info_max_plen=64

**SETUP-SVM-29**: Please note that after the vRouter successfully initializes and starts sending RA (Router
Advertisement) messages (**SETUP-SVM-20**), you will see an IPv6 route to the ``2001:db8:0:2::/64`` prefix
(subnet) reachable via the LLA (Link Local Address) of the ``eth0`` interface of the ``vRouter``. You can execute the
following command to list the IPv6 routes.

.. code-block:: bash

    ip -6 route show
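
The output would include an entry similar to the following (hypothetical sample; the LLA and the ``qr-xxx``
name will differ in your setup):

.. code-block:: bash

    # Hypothetical sample output
    2001:db8:0:2::/64 via fe80::f816:3eff:fe11:1111 dev qr-42f41282-71  proto ra  metric 1024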

********************************
Testing to Verify Setup Complete
********************************

Now, let us ``ssh`` to one of the VMs, e.g. VM1, to confirm that it has successfully configured the IPv6 address
using ``SLAAC`` with prefix ``2001:db8:0:2::/64`` from ``vRouter``.

Please note that you need to get the IPv4 address associated with VM1. This can be inferred from the ``nova list`` command.

**SETUP-SVM-30**: ``ssh`` VM1

.. code-block:: bash

    ssh -i ~/vRouterKey cirros@<VM1-IPv4-address>

If everything goes well, ``ssh`` will be successful and you will be logged into VM1. Run some commands to verify
that IPv6 addresses are configured on the ``eth0`` interface.

**SETUP-SVM-31**: Verify that ``eth0`` has an IPv6 address with the prefix ``2001:db8:0:2::/64``

.. code-block:: bash

    ip address show
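
In the output, ``eth0`` would carry a ``SLAAC``-derived address from the advertised prefix, similar to this
hypothetical excerpt:

.. code-block:: bash

    # Hypothetical sample output (excerpt)
    inet6 2001:db8:0:2:f816:3eff:fe11:1111/64 scope global dynamic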

**SETUP-SVM-32**: Ping an external IPv6 address, e.g. that of ``ipv6-router``

.. code-block:: bash

    ping6 2001:db8:0:1::1

If the above ``ping6`` command succeeds, it implies that ``vRouter`` was able to successfully forward the IPv6 traffic
to the external ``ipv6-router``.

**SETUP-SVM-33**: When all tests show that the setup works as expected, you can exit the ``ipv6-router`` namespace.

.. code-block:: bash

    exit

**********
Next Steps
**********

Congratulations, you have completed the setup of using a service VM as an IPv6 vRouter. This setup allows further
open innovation by any third party. Please refer to the relevant sections in the User's Guide for further value-added
services on this IPv6 vRouter.

********************************************************
Sample Network Topology of this Setup through Horizon UI
********************************************************

The sample network topology of the above setup, as rendered in the Horizon UI, is shown in :numref:`figure3`:

.. figure:: images/ipv6-sample-in-horizon.png
   :name: figure3
   :width: 100%

   Sample Network Topology in Horizon UI