path: root/network/service_net_map.j2.yaml
author	Dan Prince <dprince@redhat.com>	2017-08-25 23:01:24 -0400
committer	Emilien Macchi <emilien@redhat.com>	2017-08-29 16:49:25 +0000
commit	3ebb05d9877e1961e5df53e05eae4f2b7a96a836 (patch)
tree	1e37f10acc24aae68a53b18b6360b2f8b70c59da /network/service_net_map.j2.yaml
parent	3dfffebaaef84c390e089d325c3c41f2182cd08e (diff)
Add DockerPuppetProcessCount defaults to 3
docker-puppet.py is very aggressive about running concurrently. It uses python multiprocessing to run multiple config-generating containers at once. This seems to work well in general, but in some cases, perhaps when the registry is slow or under heavy load, timeouts can occur. Lately I'm seeing several 'container did not start before the specified timeout' errors that always seem to occur when config files are generated (i.e. when docker-puppet.py is initially executed).

A couple of things:

- When config files are generated, this is the first time most of the containers are pulled to each host machine during deployment.
- docker-puppet.py runs many of these processes at once. Some of them run faster, others not.
- The docker daemon's pull limit defaults to 3. This would throttle the above a bit, perhaps contributing to the likelihood of a timeout.

One solution that seems to work for me is to set the PROCESS_COUNT in docker-puppet.py to 3. As this matches the docker daemon's default, it is probably safer, at the cost of being slightly slower in some cases.

Change-Id: I17feb3abd9d36fe7c95865a064502ce9902a074e
Closes-bug: #1713188
(cherry picked from commit 949d367ddeb42eff913cdbed733ccf6239b4864b)
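The pattern described above can be sketched as follows. This is a minimal, hypothetical illustration (not the actual docker-puppet.py code): a multiprocessing pool whose worker count defaults to 3, matching the docker daemon's default pull limit; the `PROCESS_COUNT` environment override, the `generate_config` worker, and the service names are all assumptions made for the example.

```python
import os
import multiprocessing

# Default to 3 workers to match the docker daemon's default pull limit;
# allow an environment override (variable name is hypothetical here).
PROCESS_COUNT = int(os.environ.get('PROCESS_COUNT', 3))


def generate_config(service):
    # Placeholder for the real work: launching a config-generating
    # container for one service. Here it just returns a marker string.
    return 'configured-%s' % service


def run_all(services):
    # At most PROCESS_COUNT config-generation jobs run concurrently,
    # throttling the initial burst of container pulls.
    with multiprocessing.Pool(processes=PROCESS_COUNT) as pool:
        return pool.map(generate_config, services)


if __name__ == '__main__':
    print(run_all(['keystone', 'nova', 'neutron', 'glance']))
```

Capping the pool at the daemon's concurrent-pull default means no worker sits blocked behind a throttled image pull long enough to trip the start timeout, at the cost of less parallelism once the images are cached.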
Diffstat (limited to 'network/service_net_map.j2.yaml')
0 files changed, 0 insertions, 0 deletions