Add tactical workaround for Traefik readiness. Traefik takes
some time to process changes to the load balancer configuration.
Traefik has an API, but it is turned off by default and the charm
does not currently expose a way to enable it. So, this change
looks at the Traefik config on disk to check whether it has the
correct number of units.
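The on-disk check can be sketched roughly like this. The config structure and service name below are assumptions modelled on Traefik's file-provider layout, not the charm's actual code:

```python
# Sketch: count load-balancer server entries in a Traefik dynamic
# configuration that has already been parsed from disk (e.g. with
# yaml.safe_load()), and compare against the expected unit count.
def count_lb_servers(config, service):
    """Return the number of backend servers Traefik knows about."""
    services = config.get("http", {}).get("services", {})
    lb = services.get(service, {}).get("loadBalancer", {})
    return len(lb.get("servers", []))

# Example of what the parsed dynamic config might look like:
config = {
    "http": {
        "services": {
            "keystone": {
                "loadBalancer": {
                    "servers": [
                        {"url": "http://10.0.0.11:5000"},
                        {"url": "http://10.0.0.12:5000"},
                    ]
                }
            }
        }
    }
}

expected_units = 2
ready = count_lb_servers(config, "keystone") == expected_units
```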
By default, Tempest runs with a concurrency matching the
number of cores on the test runner.
When running on a workstation, the default concurrency is
likely to be too high for the scale of the deployed workload.
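One way to pick a saner default is to clamp the worker count, as in this sketch (the cap of 4 is purely illustrative, not a value taken from the change):

```python
import os

def tempest_concurrency(cap=4):
    """Tempest defaults to one worker per CPU core; on a workstation
    that can overwhelm a small deployed cloud, so clamp the value."""
    return min(os.cpu_count() or 1, cap)

workers = tempest_concurrency()
```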
Signed-off-by: Frode Nordahl <frode.nordahl@canonical.com>
Traefik might take some time to update its configuration,
switching multiple times between active and maintenance in the
meantime. Waiting only for the tests is not enough; this change
introduces a wait for the endpoints to be ready.
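A wait like that can be sketched as a simple poll loop. This is a stand-in for the model-watching helpers zaza actually provides; all names here are illustrative:

```python
import time

def block_until(predicate, timeout=5.0, interval=0.05):
    """Poll predicate() until it returns True or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    raise TimeoutError("endpoint never became ready")

# Simulate an endpoint that flips between maintenance and active a few
# times while Traefik reloads its configuration.
states = iter(["maintenance", "active", "maintenance", "active"])
def endpoint_ready():
    return next(states, "active") == "active"

ready = block_until(endpoint_ready)
```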
Moreover, the glance call to set up the image might fail for this
very reason, so add a retry on render_tempest_config_keystone_v3.
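The retry can be sketched with a minimal helper (a stand-in for the tenacity decorator the test code would typically use; the flaky callable below is a mock, not the real glance call):

```python
import time

def retry(func, attempts=3, delay=0.05, exceptions=(Exception,)):
    """Call func() up to `attempts` times, re-raising the last error."""
    for attempt in range(1, attempts + 1):
        try:
            return func()
        except exceptions:
            if attempt == attempts:
                raise
            time.sleep(delay)

calls = {"n": 0}
def flaky_render():
    # Fails twice before succeeding, mimicking a not-yet-ready endpoint.
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("glance endpoint not ready yet")
    return "configured"

result = retry(flaky_render)
```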
Let the units run all the hooks before attempting to check if the ring
is synced, because the action set-weight triggers the hook
swift-storage-relation-changed in the swift-storage units, which is the
one in charge of getting the new ring data.
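The ordering can be sketched as follows; the helper names and checksum comparison are illustrative stand-ins for zaza's model-idle wait and the ring md5 check:

```python
def check_ring_synced(wait_for_idle, ring_md5sums):
    """Wait for all hooks to finish *before* comparing ring checksums,
    since set-weight fires swift-storage-relation-changed, which is
    what distributes the new ring to the swift-storage units."""
    wait_for_idle()  # let the hooks run first
    sums = ring_md5sums()
    assert len(set(sums)) == 1, "ring not synced across units"
    return True

events = []
synced = check_ring_synced(
    lambda: events.append("idle-wait"),
    lambda: ["abc123", "abc123", "abc123"],  # identical on all units
)
```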
Replace "assert type(...) is Foo" with "self.assertIsInstance(..., Foo)";
this produces a more meaningful test error message and fixes pep8 rule
E721 [0].
[0] https://www.flake8rules.com/rules/E721.html
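A minimal example of the replacement (the test class and value are made up for illustration):

```python
import unittest

class TypeCheckExample(unittest.TestCase):
    def test_is_dict(self):
        value = {"region": "RegionOne"}
        # Before (flagged by E721): assert type(value) is dict
        # After: clearer failure message, and subclasses are accepted.
        self.assertIsInstance(value, dict)

result = TypeCheckExample("test_is_dict").run()
```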
The existing order of operations after restarting the share is:
1. mount share on ins-1
2. validate the test file on ins-1
3. mount share on ins-2
4. clear the test file using 'rm' on ins-1
5. write the test file on ins-1
6. read the test file on ins-2
The issue is that step 4 can result in a stale file handle on ins-2,
which takes an unspecified amount of time to clear. Mounting the share
on ins-2 after the file is re-written does, however, work without a
stale file handle. The new order is:
1. mount share on ins-1
2. validate the test file on ins-1
3. clear the test file using 'rm' on ins-1
4. write the test file on ins-1
5. mount share on ins-2
6. read the test file on ins-2
Closes-Bug: #1107
When running the pool setup we test after the model is idle, but that
does not guarantee that all pools have finished setup, so the test
could race pool creation.
Add a retry to the test to give the system another chance to complete
pool creation.
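The retry can be sketched as re-checking the pool list until everything expected has appeared (the fake pool lister below simulates pools finishing creation between checks; names are illustrative):

```python
import time

def wait_for_pools(list_pools, expected, timeout=5.0, interval=0.05):
    """Re-check the pool list until every expected pool exists."""
    missing = set(expected)
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        missing = set(expected) - set(list_pools())
        if not missing:
            return True
        time.sleep(interval)
    raise TimeoutError(f"pools never appeared: {sorted(missing)}")

created = []
def fake_list_pools():
    # Each check, one more pool has finished creating.
    created.append(f"pool-{len(created)}")
    return created

ok = wait_for_pools(fake_list_pools, ["pool-0", "pool-1"])
```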
This test launches 1 instance per hypervisor, and then launches a new
audit to optimize the use of hypervisors and consolidate the instances
on a single hypervisor, while also disabling the nova-compute service
to avoid new instances getting allocated.
Convert images to raw when Ceph is the image backend
We are currently uploading qcow2 images, and
Nova is converting them to raw when running
the tests, sometimes causing the tests to
time out. With this change we pre-convert the
images and upload them as raw, so Nova
does not have to convert them.
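The pre-conversion boils down to a `qemu-img convert` invocation; this sketch only builds the command line (`-f` is the source format, `-O` the output format), and the file names are placeholders:

```python
def qemu_img_convert_cmd(src, dest):
    """Build the qemu-img command that pre-converts a qcow2 image to
    raw before it is uploaded to glance."""
    return ["qemu-img", "convert", "-f", "qcow2", "-O", "raw", src, dest]

cmd = qemu_img_convert_cmd("cirros.qcow2", "cirros.raw")
```

The resulting list can be passed to `subprocess.run` to perform the actual conversion.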
This patch ensures the kerberos setup does not create duplicate
domains, projects, or users. Duplicate domains were observed when
the tenacity retry decorator ran the function multiple times.
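The usual fix is a get-or-create pattern, so a retried run finds the existing object instead of creating another one. This sketch uses a fake client; the method names are stand-ins for the real keystone client API:

```python
def get_or_create_domain(keystone, name):
    """Look the domain up first so a retried setup run is idempotent."""
    existing = [d for d in keystone.list_domains() if d["name"] == name]
    if existing:
        return existing[0]
    return keystone.create_domain(name)

class FakeKeystone:
    """Minimal stand-in for a keystone client, for illustration only."""
    def __init__(self):
        self.domains = []
    def list_domains(self):
        return self.domains
    def create_domain(self, name):
        domain = {"name": name}
        self.domains.append(domain)
        return domain

ks = FakeKeystone()
first = get_or_create_domain(ks, "k8s")
second = get_or_create_domain(ks, "k8s")  # retried run: no duplicate
```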
Fixes #1094
Fix the pep8 error below:
zaza/openstack/charm_tests/ceilometer/tests.py:129:16: E721 do not compare types, for exact checks use `is` / `is not`, for instance checks use `isinstance()`
The configure step for the DRAgent tests configures a name for a
floating IP to be checked for at the peer side. Since the data plane
tests add one as well for an instance, make sure the control-plane-only
tests rely on the FIP with that specific name. This can be useful if
the tests are run in a different order.
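Selecting the FIP by name can be sketched like this; the dict layout mirrors what a neutron floating IP listing might return, but the field choice (`description`) and the names are assumptions for illustration:

```python
def find_fip_by_name(fips, name):
    """Pick the floating IP carrying a specific name so the control
    plane tests do not grab the data plane instance's FIP."""
    matches = [f for f in fips if f.get("description") == name]
    if not matches:
        raise LookupError(f"no floating IP named {name!r}")
    return matches[0]

fips = [
    {"floating_ip_address": "10.5.0.10", "description": "dataplane-fip"},
    {"floating_ip_address": "10.5.0.11", "description": "dragent-fip"},
]
fip = find_fip_by_name(fips, "dragent-fip")
```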