|
Change-Id: Ibda422d7b4042e9b2e6c54eae66bd76f1cde0a1e
Signed-off-by: Guillermo Herrero <guillermo.herrero@enea.com>
|
|
Change-Id: I6f3ea5e2103ae75d96834d8317cc3c505d01e45b
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
Change-Id: Id8e184458fe6dfaec3127195cfb865cd9fdabb9f
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
Change-Id: I44f63fb7f9e4398a16e1d0b897a2491a60bb1727
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
|
|
Change-Id: Ide977ef48a6339631e2e3cb6fdbacc88a639c0aa
Signed-off-by: Guillermo Herrero <guillermo.herrero@enea.com>
|
|
Currently the daisy template is not compatible with lf-pod4; see [1].
The meaning of 'interfaces' in this pod differs from that in other pods.
Make the daisy template compatible with it.
[1] https://build.opnfv.org/ci/job/validate-jinja2-templates-master/80/console
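The kind of tolerance this change needs can be sketched as follows. This is purely illustrative: the actual lf-pod4 'interfaces' schema is only visible in the linked build log, so both shapes and field names below are hypothetical, not the real pod.yaml layouts.

```python
# Hypothetical sketch: normalize two possible shapes of a pod's
# 'interfaces' entry before feeding it to a template.
# Shape A (assumed "most pods"): a list of dicts with explicit fields.
# Shape B (assumed lf-pod4-like): a list of "name,mac" strings.

def normalize_interfaces(interfaces):
    """Return a list of {'name': ..., 'mac_address': ...} dicts."""
    normalized = []
    for iface in interfaces:
        if isinstance(iface, dict):
            normalized.append({'name': iface.get('name'),
                               'mac_address': iface.get('mac_address')})
        else:  # assume a "name,mac" string
            name, mac = iface.split(',', 1)
            normalized.append({'name': name, 'mac_address': mac})
    return normalized

pod_a = [{'name': 'nic1', 'mac_address': '00:11:22:33:44:55'}]
pod_b = ['nic1,00:11:22:33:44:55']
print(normalize_interfaces(pod_a) == normalize_interfaces(pod_b))  # True
```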
Change-Id: I29058f705bab333c48614b7eab8c8b9e83b9e1cb
Signed-off-by: Alex Yang <yangyang1@zte.com.cn>
|
|
Change-Id: I2e8b789cbc10def76a1b7ee673bc1ca6d7f4137f
Signed-off-by: Blaisonneau David <david.blaisonneau@orange.com>
|
|
JIRA: DAISY-42
JIRA: DAISY-56
Currently the deploy script cannot distinguish the discovered hosts,
so roles are assigned to hosts randomly. The MAC addresses of the
hosts can help daisy assign roles correctly.
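The idea can be sketched as follows; the role names and the MAC-to-role mapping layout here are assumptions for illustration, not the actual daisy deploy code:

```python
# Hypothetical sketch: map discovered hosts to roles by MAC address
# instead of assigning roles in discovery order.

# Role plan as it might appear in a pod config (assumed layout).
ROLE_BY_MAC = {
    '00:11:22:33:44:01': 'controller',
    '00:11:22:33:44:02': 'controller',
    '00:11:22:33:44:03': 'compute',
}

def assign_roles(discovered_hosts):
    """discovered_hosts: list of {'name': ..., 'mac': ...} dicts."""
    assignments = {}
    for host in discovered_hosts:
        mac = host['mac'].lower()
        if mac in ROLE_BY_MAC:
            assignments[host['name']] = ROLE_BY_MAC[mac]
        # Hosts with unknown MACs are left unassigned rather than
        # being given a random role.
    return assignments

# Discovery order no longer matters: roles follow the MACs.
hosts = [{'name': 'host2', 'mac': '00:11:22:33:44:02'},
         {'name': 'host1', 'mac': '00:11:22:33:44:01'}]
print(assign_roles(hosts))
```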
Change-Id: If413ad776706eb4e25db5223917a7518d856ba8e
Signed-off-by: Alex Yang <yangyang1@zte.com.cn>
|
|
Change-Id: Ie043eb252e2bfdbf42f1403b218958190a1070a8
Signed-off-by: Alex Yang <yangyang1@zte.com.cn>
|
|
Firstly, this patchset looks a bit messy at the onset. The relevant
parts are:
installers/apex/*.j2
installers/joid/*.j2
installers/compass4nfv/*.j2
and the new verify job that runs check-jinja2.sh.
If you look at installers/*/pod_config.yaml.j2 you will see the network
settings for the apex, joid and compass4nfv installers. The hard-coded
values that can be templated have been replaced with Jinja2 values,
which are populated by reading one of labs/*/*/config/pod.yaml
eg:
  nodes:
    - name: pod1-node1
becomes
    - name: {{ conf['nodes'][0]['name'] }}
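As a rough illustration of what that substitution step does, here is a toy stand-in written with only the standard library; the repo actually uses the real Jinja2 engine, which this does not reproduce:

```python
import re

def render(template, conf):
    """Toy stand-in for Jinja2: substitute {{ conf[...]... }} lookups.
    Handles string keys and integer indexes only."""
    def lookup(match):
        value = conf
        # Walk each ['key'] or [index] segment in order.
        for key, idx in re.findall(r"\[(?:'([^']*)'|(\d+))\]", match.group(1)):
            value = value[key if key else int(idx)]
        return str(value)
    return re.sub(r"\{\{\s*conf((?:\[[^\]]+\])+)\s*\}\}", lookup, template)

# conf mirrors what would be read from labs/*/*/config/pod.yaml.
conf = {'nodes': [{'name': 'pod1-node1'}]}
line = "- name: {{ conf['nodes'][0]['name'] }}"
print(render(line, conf))  # - name: pod1-node1
```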
In my last patchset I had ignored data already present in the pod.yaml
(which is defined in the pharos spec here:
https://gerrit.opnfv.org/gerrit/gitweb?p=pharos.git;a=blob;f=config/pod1.yaml ).
I created my own yaml file in an attempt to figure out what all the
installers needed to know to install on any given pod.
This was counterproductive.
I have included a script (securedlab/check-jinja2.sh) that will check all
securedlab/installers/*/pod_config.yaml.j2
against all
securedlab/labs/*/pod*.yaml
This is a first step towards having your installers run on any pod that
has a pod file created for it (securedlab/labs/*/pod[pod-number].yaml).
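The cross-check the script performs amounts to trying every template against every pod file. A sketch in Python follows; the real check-jinja2.sh is a shell script, the rendering step is elided behind a callback, and the directory layout is taken from the globs quoted above:

```python
import glob
import os

def check_all(render, root='securedlab'):
    """Try every installer template against every pod file.

    render(template_path, pod_path) should raise on a failed render.
    Returns a list of (template, pod, error) tuples for failing pairs.
    """
    failures = []
    templates = glob.glob(os.path.join(root, 'installers', '*',
                                       'pod_config.yaml.j2'))
    pods = glob.glob(os.path.join(root, 'labs', '*', 'pod*.yaml'))
    for template in templates:
        for pod in pods:
            try:
                render(template, pod)
            except Exception as err:
                failures.append((template, pod, str(err)))
    return failures

# With no securedlab checkout present, there is nothing to check:
print(check_all(lambda t, p: None))  # []
```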
Moving forward I would like your input on identifying variables in your
installers' configs that are needed for deployment but not covered by
securedlab/labs/*/pod*.yaml.
Thanks for your time and feedback.
Best Regards,
Aric
Change-Id: I5f2f2b403f219a1ec4b35e46a5bc49037a0a89cf
Signed-off-by: agardner <agardner@linuxfoundation.org>
|
|
Values come from a pod config file.
This is just an example; only ipmi_ips are templated at this time.
eg: address: {{ config['global_details']['ipmi_ips'][0] }}
Test like this:
./utils/generate_config.py -y labs/intel/pod5/pod.yaml -j installers/joid/labconfig.jinja2
releng should have a new job, validate-templates,
that looks for
pattern: 'utils/generate_config.yml'
pattern: '**/*.jinja2'
and tests that templating does not error.
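Such a job could be sketched as below: invoke the repo's generate_config.py once per discovered template and collect the ones that exit nonzero. This is a hypothetical harness, not the actual releng job definition, and it assumes the command-line shape shown in the test invocation above:

```python
import subprocess
import sys

def validate(pod_yaml, jinja_files):
    """Run generate_config.py for each template; return the failures."""
    failed = []
    for template in jinja_files:
        result = subprocess.run(
            [sys.executable, 'utils/generate_config.py',
             '-y', pod_yaml, '-j', template],
            capture_output=True)
        if result.returncode != 0:
            failed.append(template)
    return failed

# In a real job, jinja_files would come from globbing '**/*.jinja2'.
# With no templates to check, nothing can fail:
print(validate('labs/intel/pod5/pod.yaml', []))  # []
```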
Change-Id: I7f781abb702afcfccf7ed17674378cffe4a7177d
Signed-off-by: agardner <agardner@linuxfoundation.org>
|