In Designate Bobcat the Designate team re-organised the SQLAlchemy code,
and this has resulted in a bug [1] where the default values for various
zone creations are no longer read from the /etc/designate/designate.conf
file, i.e. the defaults are effectively hard-coded.
Related-Bug: LP#2042944
(cherry picked from commit a365823de0)
The nova service may not be quite ready for the basic network
configuration after vault initialisation, so add a retrier wrapper around
the client for every test to make sure calls are retried.
(cherry picked from commit def0148642)
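The retrier wrapper could be sketched as a proxy that retries every method call on the wrapped client. This is a minimal illustration, not the actual change: the class name and the attempt/wait defaults are assumptions.

```python
import functools
import time


class RetryingClient:
    """Proxy that retries every callable attribute on the wrapped client.

    Minimal sketch: class name and attempt/wait defaults are assumptions.
    """

    def __init__(self, client, attempts=3, wait_seconds=5):
        self._client = client
        self._attempts = attempts
        self._wait_seconds = wait_seconds

    def __getattr__(self, name):
        attr = getattr(self._client, name)
        if not callable(attr):
            return attr

        @functools.wraps(attr)
        def retried(*args, **kwargs):
            last_exc = None
            for attempt in range(self._attempts):
                try:
                    return attr(*args, **kwargs)
                except Exception as exc:
                    last_exc = exc
                    if attempt < self._attempts - 1:
                        time.sleep(self._wait_seconds)
            raise last_exc

        return retried
```

Any client call that raises is transparently retried up to the configured number of attempts before the last exception is re-raised.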
Some endpoints (heat) return a 401 if you try to access them
without authenticating. Given the endpoint has been able to verify
whether you are authenticated, a 401 is acceptable for a rudimentary
health check.
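As a sketch, such a health check could accept 401 alongside 200. The urllib-based helper below is illustrative only, not the project's actual check:

```python
import urllib.error
import urllib.request


def endpoint_healthy(url, timeout=10):
    """Rudimentary reachability check: a 401 means the endpoint was
    reached and enforced authentication, so it counts as healthy.
    (Illustrative sketch, not the project's actual helper.)
    """
    try:
        status = urllib.request.urlopen(url, timeout=timeout).status
    except urllib.error.HTTPError as exc:
        # urllib raises for 4xx/5xx responses; keep the status code.
        status = exc.code
    except OSError:
        return False
    return status in (200, 401)
```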
Recent test runs have shown that the current number of retries is
not enough and traefik has still not updated its config. This PR
increases the number of retries in the hope that it will be
enough.
Add a tactical workaround for traefik readiness. Traefik takes
some time to process changes to the load balancer configuration.
Traefik has an API, but it is turned off by default and the charm
does not currently expose a way to enable it. So, this change
looks at the traefik config on disk to check whether it lists the
correct number of units.
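The on-disk check could be sketched as below, assuming the dynamic configuration has already been parsed into a dict following Traefik's usual http.services.<name>.loadBalancer.servers layout; the helper names, attempts, and delay are made up for illustration:

```python
import time


def server_count(config, service_name):
    """Count the load-balancer servers for a service in a parsed
    Traefik dynamic-config dict."""
    return len(
        config.get("http", {})
        .get("services", {})
        .get(service_name, {})
        .get("loadBalancer", {})
        .get("servers", [])
    )


def wait_for_units(read_config, service_name, expected, attempts=20, delay=5):
    """Poll the config on disk until it lists the expected number of
    units, or give up after *attempts* tries."""
    for _ in range(attempts):
        if server_count(read_config(), service_name) == expected:
            return True
        time.sleep(delay)
    return False
```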
Tempest will by default run with a concurrency matching the
number of cores on the test runner.
When running on a workstation, it is likely that the default
concurrency will be too high for the scale of the deployed workload.
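One way to keep workstation runs manageable is to cap the concurrency below the core count and pass the result to `tempest run --concurrency`; the cap value here is an assumption for illustration:

```python
import os


def tempest_concurrency(cap=4):
    """Pick a tempest concurrency: the machine's core count, but never
    above *cap*, so workstation runs stay manageable (cap is illustrative)."""
    return max(1, min(os.cpu_count() or 1, cap))
```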
Signed-off-by: Frode Nordahl <frode.nordahl@canonical.com>
Traefik might take some time to update its configuration, and may
switch multiple times between active and maintenance for the duration.
Waiting only for the tests is not enough, so this change introduces a
wait for the endpoints to be ready.
Moreover, the glance call to set up the image might fail for this very
reason, so add a retry on render_tempest_config_keystone_v3.
Let the units run all the hooks before attempting to check whether the
ring is synced, because the set-weight action triggers the
swift-storage-relation-changed hook in the swift-storage units, which is
the one in charge of getting the new ring data.
Replace "assert type(...) is Foo" with "self.assertIsInstance(..., Foo)".
This produces a more meaningful test error message and fixes pep8 rule
E721 [0].
[0] https://www.flake8rules.com/rules/E721.html
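For illustration, the before/after shape of such a test, where compute() is a hypothetical stand-in for whatever produces the value under test:

```python
import unittest


def compute():
    """Stand-in for the code under test (hypothetical)."""
    return {"status": "ok"}


class ExampleTest(unittest.TestCase):
    def test_type_check(self):
        value = compute()
        # Old style, flagged by E721 and unhelpful on failure:
        #   assert type(value) is dict
        # New style reports the actual type when the check fails:
        self.assertIsInstance(value, dict)
```

On failure, assertIsInstance prints both the offending value and the expected class, instead of a bare AssertionError.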
The existing order of operations after restarting the share is:
1. mount share on ins-1
2. validate the test file on ins-1
3. mount share on ins-2
4. clear the test file using 'rm' on ins-1
5. write the test file on ins-1
6. read the test file on ins-2
The issue is that step 4 can result in a stale file handle on ins-2,
which takes an unspecified amount of time to clear. Mounting the share
on ins-2 after the file is re-written does, however, work without a
stale file handle. The new order is:
1. mount share on ins-1
2. validate the test file on ins-1
3. clear the test file using 'rm' on ins-1
4. write the test file on ins-1
5. mount share on ins-2
6. read the test file on ins-2
Closes-Bug: #1107
When running the pool setup we test after the model is idle, but that
is no guarantee that all pools have finished setup, so the test could
race the pool creation.
Add a retry to the test to give the system another chance to complete
pool creation.
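The retry could be sketched as below, assuming a hypothetical list_pools() helper that returns the pools currently present; the attempt count and delay are also illustrative:

```python
import time


def wait_for_pools(list_pools, expected, attempts=10, delay=6):
    """Retry listing pools until every expected pool exists, giving the
    deployment extra chances to finish pool creation.
    (list_pools and the attempt/delay values are illustrative.)
    """
    missing = set(expected)
    for _ in range(attempts):
        missing = set(expected) - set(list_pools())
        if not missing:
            return
        time.sleep(delay)
    raise AssertionError("pools never appeared: {}".format(sorted(missing)))
```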