This PR implements the changes needed so that the Ceph charms
can be run without a full OpenStack environment. In particular,
the RBD mirror charm needed tweaking so as not to depend on
Cinder, creating and using pools of its own.
The pool setup test runs after the model is idle, but an idle
model is not a guarantee that all pools have finished setup, so the
test could race the pool creation.
Add a retry to the test to give the system another chance to complete
pool creation.
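The retry added here follows a simple pattern; a minimal stdlib-only sketch, where `check` stands in for a hypothetical pool-readiness probe (the real test's probe lives in the Zaza suite):

```python
import time


def retry(check, attempts=5, delay=2.0):
    """Call `check` until it returns truthy or the attempts run out.

    Returns True if the check eventually passed, False otherwise.
    """
    for attempt in range(attempts):
        if check():
            return True
        if attempt < attempts - 1:
            # Give the cluster time to finish pool creation before re-checking.
            time.sleep(delay)
    return False
```

This keeps the test decoupled from how long pool creation actually takes, at the cost of a bounded extra wait on failure.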
Ceph RBD mirroring fixes:
- Upstream Ceph mirror tooling modernized its output; update the
  test to reflect that.
- Avoid spamming run-action in a loop
- Don't consider system pools for failover
- Remove obsolete workaround for LP: #1820976
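The system-pool exclusion above can be sketched as a small filter; the specific pool names below are assumptions for illustration, and the real exclusion list lives in the test code:

```python
# Hypothetical set of Ceph-internal pools that must not be failed over.
SYSTEM_POOLS = {'device_health_metrics'}


def failover_candidates(pools):
    """Keep only user pools: skip known system pools and any pool whose
    name starts with '.', which Ceph reserves for internal use (e.g. '.mgr')."""
    return [p for p in pools
            if p not in SYSTEM_POOLS and not p.startswith('.')]
```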
When a charm has a non-pristine status, it should leave the
previously-set blocked state (LP #1988088) if there are no
non-pristine disks.
Also: some spurious test failures were seen due to timeouts while
waiting on disk removal -- wait a bit longer.
While the disk replacement tests are good for the OSD charm,
and it's useful to have more disks by default in that case,
requiring extra disks for all applications that run the Ceph tests
is an unreasonable burden, forcing additional changes just
to accommodate the OSD charm tests.
This change adds a check that enough disks are available
before running the disk replacement tests.
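The availability check could look like the sketch below; the device-dict shape is an assumption for illustration, not the charm's actual data model:

```python
def enough_spare_disks(unit_devices, required=2):
    """Return True if at least `required` devices are unused.

    `unit_devices` is assumed to be a list of dicts such as
    {'name': '/dev/vdb', 'in_use': False}; a device with no
    'in_use' key is conservatively treated as in use.
    """
    spare = [d for d in unit_devices if not d.get('in_use', True)]
    return len(spare) >= required
```

A test suite can then skip the replacement tests when this returns False instead of failing on applications deployed with fewer disks.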
After the compression mode is set via config-changed, it might take a
while to fully propagate to the whole cluster, so the hook may be
done before the cluster is fully configured with the new compression
mode. This is more noticeable in the gate when many jobs run in
parallel.
This patch moves the check to its own method wrapped with a
tenacity.retry decorator to make the test more robust.
Closes #946
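The shape of that fix, written here as a stdlib-only stand-in for the tenacity.retry decorator (the attempt count and delay are placeholders, not the values used by the patch):

```python
import functools
import time


def retry_on_assert(attempts=10, delay=5.0):
    """Re-run the wrapped check when it raises AssertionError, up to
    `attempts` times. Stand-in for tenacity.retry with
    retry_if_exception_type(AssertionError)."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except AssertionError:
                    if attempt == attempts - 1:
                        raise
                    # Compression mode may still be propagating; wait and retry.
                    time.sleep(delay)
        return wrapper
    return decorator
```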
* Add functional tests for ceph-mon actions.
This PR adds functional validation for two actions of the ceph-mon
charm that are currently being rewritten.
Clean up after the tests run.
Co-authored-by: Luciano Lo Giudice <luciano.logiudice@canonical.com>
This change removes a repeated call to ceph daemon config
show and replaces it with a single call that we then
parse in Python.
It also removes a deprecated way of handling async code in Zaza
that causes the test code to block indefinitely.
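The single-call-plus-parse approach can be sketched as below; `ceph daemon <id> config show` emits a JSON object of option name to value, and the option names in the test are assumptions for illustration:

```python
import json


def pick_config(config_show_output, keys):
    """Parse the JSON from one `ceph daemon <id> config show` invocation
    and extract the options of interest, instead of shelling out to the
    daemon once per option."""
    config = json.loads(config_show_output)
    return {k: config.get(k) for k in keys}
```

One remote call replaces N, which matters when each call is a round trip to a unit in the model.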
In order to correctly test the upgrade from stable, we have to
move the tests that haven't landed yet to a different class so
that they can be selected to run only after performing the
upgrade to the locally built charm.
The updated test case verifies the expected states that are safe
to migrate, as well as that pre- and post-migration data are
correctly synced.
One of the pep8 target dependencies must have been updated,
causing a bunch of new lint errors in these categories:
- line length > 79 chars
- no whitespace after keyword
Change the prometheus test to check that it has a non-zero number of
OSDs registered rather than a specific number. This avoids having to
manage the exact OSD count in the tests.
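A non-zero check over scraped metrics can be as simple as counting the exported per-OSD series; the metric name below is an assumption for illustration of the Prometheus exposition format, not necessarily the one the test scrapes:

```python
def osd_series_count(metrics_text, metric='ceph_osd_metadata'):
    """Count the time series exported for a per-OSD metric in Prometheus
    exposition format, e.g. lines like
    ceph_osd_metadata{ceph_daemon="osd.0"} 1.0
    (the metric name is a placeholder)."""
    return sum(1 for line in metrics_text.splitlines()
               if line.startswith(metric + '{'))
```

Asserting `osd_series_count(body) > 0` keeps the test valid no matter how many OSDs the bundle deploys.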
With the upcoming transition to an operator-framework charm, the
Ceph tests need to be idempotent, since we'll be testing the upgrade
path from the classic framework to the operator framework. The tests
that need fixing are the ones that call out to add/remove disks, since
they create loopback devices to back the OSDs but never remove them.
The test suite for ceph-iscsi does a bunch of operations on filesystems
in quick succession, so some tests are prone to fail if the kernel
hasn't re-read the partition table. This PR triggers the re-read
manually between tests by calling the 'partprobe' command.
Co-authored-by: Luciano Lo Giudice <luciano.logiudice@canonical.com>
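The between-tests step amounts to invoking partprobe, optionally scoped to one device; a minimal sketch, assuming partprobe (from the parted package) is installed on the target unit:

```python
import subprocess


def partprobe_cmd(device=None):
    """Build the partprobe invocation; with no device, the kernel
    re-reads partition tables for all disks."""
    return ['partprobe'] + ([device] if device else [])


def reread_partitions(device=None):
    """Ask the kernel to re-read partition tables between tests."""
    subprocess.run(partprobe_cmd(device), check=True)
```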