If `self.promote_rgw_to_primary(self.primary_rgw_app)` is executed
before the sites are synchronised, they will not synchronise afterwards.
Signed-off-by: Ionut Balutoiu <ibalutoiu@cloudbasesolutions.com>
* Rename `test_004_migration_and_multisite_failover` to `test_100_migration_and_multisite_failover`
* This will allow us to insert more multi-site tests after `test_003`,
  before the scale-down scenario runs in `test_100_migration_and_multisite_failover`.
* Rename `test_005_virtual_hosted_bucket` to `test_101_virtual_hosted_bucket`.
* This was previously run after `test_004_migration_and_multisite_failover`.
So, we rename the test case to `test_101` to keep the same order.
Signed-off-by: Ionut Balutoiu <ibalutoiu@cloudbasesolutions.com>
This PR implements tests for the ceph-mon charm's action 'rotate-key'.
For now, the charm only supports key rotation for managers, but
more entities will be added. This PR will be used to test those
new entities as well.
This PR implements the changes needed so that the Ceph charms
can be run without a full OpenStack environment. In particular,
the RBD mirror charm needed tweaking so that it does not depend
on Cinder, instead using pools of its own creation.
The pool setup test runs after the model is idle, but an idle model
does not guarantee that all pools have finished being set up, so the
test could race the pool creation.
Add a retry to the test to give the system another chance to complete
pool creation.
When a charm has a non-pristine status, it should leave the
previously set blocked state (LP #1988088) once there are no
non-pristine disks left.
Also: seeing some spurious test failures due to timeouts
waiting on disk removal -- wait a bit longer
While the disk replacement tests are good for the OSD charm,
and it's useful to have more disks by default in that case,
requiring extra disks for all applications that run the Ceph tests
is an unreasonable burden, requiring additional changes just
to accommodate the OSD charm tests.
This change adds a check that enough disks are available
before running the disk replacement tests.
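A minimal sketch of that gating, assuming a hypothetical `count_spare_disks` helper and a made-up device list; the real test suite's helpers and minimum count may differ:

```python
import unittest

MIN_SPARE_DISKS = 2  # assumed minimum needed by a replacement test


def count_spare_disks(devices):
    """Count devices not currently in use as OSD backing disks."""
    return sum(1 for dev in devices if not dev.get("in-use"))


class DiskReplacementTest(unittest.TestCase):
    # Illustrative device inventory; a real test would query the model.
    devices = [{"name": "/dev/vdb", "in-use": True},
               {"name": "/dev/vdc", "in-use": False},
               {"name": "/dev/vdd", "in-use": False}]

    def test_replace_disk(self):
        # Skip rather than fail when the environment lacks spare disks.
        if count_spare_disks(self.devices) < MIN_SPARE_DISKS:
            self.skipTest("not enough spare disks for replacement test")
        # ... actual disk replacement steps would go here ...
```

Skipping (rather than failing) lets applications without extra disks keep running the shared Ceph tests unchanged.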
After the compression mode is set via config-changed, it may take a
while to fully propagate to the whole cluster, so the hook may finish
before the cluster is fully configured with the new compression mode.
This is more noticeable in the gate when many jobs run in parallel.
This patch moves the check into its own method wrapped with a
tenacity.retry decorator to make the test more robust.
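The shape of that fix can be sketched as follows. A minimal hand-rolled retry decorator stands in for `tenacity.retry` so the example is self-contained, and `get_compression_mode` is an assumed helper that returns whatever mode the cluster currently reports:

```python
import time


def retry(attempts=10, delay=1.0):
    """Re-run the wrapped check until it passes or attempts run out."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            for attempt in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except AssertionError:
                    if attempt == attempts - 1:
                        raise
                    time.sleep(delay)
        return wrapper
    return decorator


@retry(attempts=5, delay=0.1)
def assert_compression_mode(get_compression_mode, expected):
    """Fails transiently while the new mode propagates; retried above."""
    assert get_compression_mode() == expected
```

With tenacity the same shape would be `@tenacity.retry(wait=tenacity.wait_fixed(...), stop=tenacity.stop_after_attempt(...), reraise=True)` on the check method.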
Closes #946
* Add functional tests for ceph-mon actions.
This PR adds functional validation for two actions of the ceph-mon
charm that are currently being rewritten.
Cleanup after the tests run.
Co-authored-by: Luciano Lo Giudice <luciano.logiudice@canonical.com>
In order to correctly test the upgrade from stable, we have to
move the tests that haven't landed yet to a different class so
that they can be selected to run only after performing the
upgrade to the locally built charm.
The updated testcase handles verifying the expected states
that are safe to migrate, as well as verifying that pre- and post-
migration data are correctly synced.
Change the prometheus test to check that a non-zero number of OSDs
is registered rather than a specific number. This avoids having to
keep the exact OSD count in sync with the tests.
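A sketch of that looser assertion, parsing a Prometheus exposition-format scrape; the metric name `ceph_osd_up` and the parsing helper are assumptions for illustration, not the suite's real code:

```python
def count_up_osds(scrape_text):
    """Count `ceph_osd_up` samples with value 1 in a Prometheus scrape."""
    count = 0
    for line in scrape_text.splitlines():
        if line.startswith("ceph_osd_up{") and line.rstrip().endswith(" 1.0"):
            count += 1
    return count


# Illustrative scrape output with two OSDs up and one down.
SCRAPE = """\
ceph_osd_up{ceph_daemon="osd.0"} 1.0
ceph_osd_up{ceph_daemon="osd.1"} 1.0
ceph_osd_up{ceph_daemon="osd.2"} 0.0
"""

# Assert that some OSDs are registered and up, without pinning a count.
assert count_up_osds(SCRAPE) > 0
```

Asserting `> 0` instead of `== N` keeps the test valid when the deployment's OSD count changes.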
With the upcoming transition to an operator-framework charm, the
Ceph tests need to be idempotent, since we'll be testing the upgrade
path from classical to operator. The tests that need fixing are the
ones that call out to add/remove disks, since they create loopback
devices to back the OSDs but never remove them.
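One way to make such tests idempotent is to record every loop device a test creates and always detach them in teardown. This is a hypothetical sketch, not the suite's actual cleanup code; the `detach` callable would typically wrap something like `losetup --detach`:

```python
class LoopDeviceTracker:
    """Record created loop devices so teardown can always remove them."""

    def __init__(self, detach):
        self._detach = detach  # callable taking a device path
        self._devices = []

    def add(self, dev):
        """Register a newly created loop device for later cleanup."""
        self._devices.append(dev)
        return dev

    def cleanup(self):
        """Detach in reverse order; keep going even if one detach fails."""
        while self._devices:
            dev = self._devices.pop()
            try:
                self._detach(dev)
            except OSError:
                pass
```

Running cleanup unconditionally (e.g. from `tearDown` or `addCleanup`) leaves no stale loop devices behind, so the same test can run again after the charm upgrade.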