.. This work is licensed under a Creative Commons Attribution 4.0 International Licence.
.. http://creativecommons.org/licenses/by/4.0

Deployment Error Recovery Guide
===============================

Deployment may fail for different kinds of reasons, such as a Daisy VM creation
error, target node failure during OS installation, or a Kolla deploy command
error. These errors can be grouped into several error levels. We define the
Recovery Levels below to meet the recovery requirements of each error level.

1. Recovery Level 0
-------------------

This level restarts the whole deployment. It is mainly used to retry after
errors such as a failed Daisy VM creation. For example, we use the following
command to do a virtual deployment (in the jump host):


.. code-block:: console

    sudo ./ci/deploy/deploy.sh -b ./ -l zte -p virtual1 -s os-nosdn-nofeature-ha



If the command failed because of a Daisy VM creation error, then re-running the
above command will restart the whole deployment, which includes rebuilding the
Daisy VM image and restarting the Daisy VM.
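
Before retrying, it can help to confirm the state of the Daisy VM. The
following is a minimal check, assuming a libvirt-based virtual deployment (the
exact domain name of the Daisy VM depends on your environment):


.. code-block:: console

    # List all libvirt domains and their states; a missing or "shut off"
    # Daisy VM indicates that the VM creation step did not complete.
    sudo virsh list --all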


2. Recovery Level 1
-------------------

If the Daisy VM was created successfully but bugs in the Daisy code or in the
software of the target OS prevented the deployment from completing, the user or
developer may not want to recreate the Daisy VM for the next deployment, but
only to modify some pieces of code in it. To achieve this, he/she can redo the
deployment by first deleting all clusters and hosts (in the Daisy VM):


.. code-block:: console

    source /root/daisyrc_admin
    # Delete every cluster known to Daisy
    for i in `daisy cluster-list | awk -F "|" '{print $2}' | sed -n '4p' | tr -d " "`;do daisy cluster-delete $i;done
    # Delete every host known to Daisy
    for i in `daisy host-list | awk -F "|" '{print $2}'| grep -o "[^ ]\+\( \+[^ ]\+\)*"|tail -n +2`;do daisy host-delete $i;done



Then, adjust the deployment command as below and run it again (in the jump host):


.. code-block:: console

    sudo ./ci/deploy/deploy.sh -S -b ./ -l zte -p virtual1 -s os-nosdn-nofeature-ha



Pay attention to the "-S" argument above, it lets the deployment process to
skip re-creating Daisy VM and use the existing one.
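
If the retried deployment still fails at an early stage, you can confirm that
the old clusters and hosts were really removed by listing them again (in the
Daisy VM); apart from the table headers, both lists should be empty:


.. code-block:: console

    source /root/daisyrc_admin
    daisy cluster-list
    daisy host-list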


3. Recovery Level 2
-------------------

If both the Daisy VM and the target nodes' OS are OK, but an error occurred
during the OpenStack deployment, then there is no need to re-install the target
OS before retrying. In this level, all we need to do is retry the Daisy
deployment commands as follows (in the Daisy VM):


.. code-block:: console

    source /root/daisyrc_admin
    # <cluster-id> can be found in the output of "daisy cluster-list"
    daisy uninstall <cluster-id>
    daisy install <cluster-id>



This basically does a kolla-ansible destroy followed by a kolla-ansible deploy.
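
Under the hood, this is roughly equivalent to running the kolla-ansible
commands used in Recovery Level 3 below; a sketch, assuming the kolla-ansible
checkout and inventory locations that Daisy uses on the Daisy VM:


.. code-block:: console

    [root@daisy daisy]# cd /home/kolla_install/kolla-ansible/
    [root@daisy kolla-ansible]# ./tools/kolla-ansible destroy \
    -i ./ansible/inventory/multinode --yes-i-really-really-mean-it
    [root@daisy kolla-ansible]# ./tools/kolla-ansible deploy -i ./ansible/inventory/multinode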

4. Recovery Level 3
-------------------

If the previous deployment failed during kolla deploy but the kolla
configuration file (/etc/kolla/globals.yml) is present, or if the previous
deployment was successful but the default configuration is not what you want
and you are OK with destroying the OPNFV software stack and re-deploying it,
then you can try Recovery Level 3.

For example, in order to use external iSCSI storage, you want to deploy the
iSCSI Cinder backend, which is not enabled by default. First, clean up the
previous deployment.

SSH into the Daisy node, then do:


.. code-block:: console

    [root@daisy daisy]# source /etc/kolla/admin-openrc.sh
    [root@daisy daisy]# openstack server delete <all vms you created>




Note: /etc/kolla/admin-openrc.sh may not exist if the previous deployment
failed during kolla deploy. In that case, skip the VM deletion above.

Then, destroy the previous OpenStack deployment:


.. code-block:: console

    [root@daisy daisy]# cd /home/kolla_install/kolla-ansible/
    [root@daisy kolla-ansible]# ./tools/kolla-ansible destroy \
    -i ./ansible/inventory/multinode --yes-i-really-really-mean-it




Then, edit /etc/kolla/globals.yml and append the following lines:


.. code-block:: yaml

    enable_cinder_backend_iscsi: "yes"
    enable_cinder_backend_lvm: "no"
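
Optionally, you can confirm the change before re-deploying; this simply lists
the Cinder backend settings you just appended:


.. code-block:: console

    [root@daisy kolla-ansible]# grep cinder_backend /etc/kolla/globals.yml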




Then, re-deploy:


.. code-block:: console


    [root@daisy kolla-ansible]# ./tools/kolla-ansible prechecks -i ./ansible/inventory/multinode
    [root@daisy kolla-ansible]# ./tools/kolla-ansible deploy -i ./ansible/inventory/multinode




After the deployment succeeds, issue the following command to generate the
/etc/kolla/admin-openrc.sh file.


.. code-block:: console


    [root@daisy kolla-ansible]# ./tools/kolla-ansible post-deploy -i ./ansible/inventory/multinode
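
Once the file has been generated, you can source it and check that the
OpenStack APIs are reachable, for example:


.. code-block:: console

    [root@daisy kolla-ansible]# source /etc/kolla/admin-openrc.sh
    [root@daisy kolla-ansible]# openstack service list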




Finally, issue the following commands to create the necessary resources; your
environment is then ready for running OPNFV functest.


.. code-block:: console


    [root@daisy kolla-ansible]# cd /home/daisy
    [root@daisy daisy]# ./deploy/post.sh -n /home/daisy/labs/zte/virtual1/daisy/config/network.yml




Note: "zte/virtual1" in above path may vary in your environment.