Fix a bug where the wrong parameter from the /etc/cinder/cinder.conf
file is being used ('transport' -> 'transport_url').
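For reference, a minimal sketch of reading the correct option with
Python's configparser (the URL value is illustrative):

    import configparser

    # Minimal sketch: read the messaging URL from cinder.conf.
    # 'transport_url' is the correct option name; 'transport' was
    # being read by mistake before this fix.
    cfg = configparser.ConfigParser()
    cfg.read('/etc/cinder/cinder.conf')
    transport_url = cfg.get('DEFAULT', 'transport_url')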
Fix an issue where json instead of yaml was used to load the usernames
from the action data of the list_service_usernames action; the action
data is quoted with single quotes, which is not valid json.
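A minimal illustration of why yaml works here where json does not (the
username list is made up):

    import json
    import yaml

    # The action data quotes the list with single quotes, e.g.:
    data = "['cinder', 'cinderv2']"

    # json.loads() rejects single-quoted strings:
    try:
        json.loads(data)
    except json.JSONDecodeError:
        pass  # "Expecting value: line 1 column 2 (char 1)"

    # ...whereas YAML allows single-quoted scalars:
    assert yaml.safe_load(data) == ['cinder', 'cinderv2']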
Make the test for the password change in cinder.conf more robust by
retrying, as Juju is essentially asynchronous and it is difficult to
determine exactly when the hook will fire on cinder.
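A sketch of the retry approach using tenacity; get_file_contents is a
hypothetical helper that fetches the file from the unit:

    import tenacity

    @tenacity.retry(
        wait=tenacity.wait_fixed(10),          # poll every 10s...
        stop=tenacity.stop_after_attempt(30),  # ...for up to ~5 min
        retry=tenacity.retry_if_exception_type(AssertionError),
    )
    def wait_for_new_password(unit_name, new_password):
        # get_file_contents() is a hypothetical helper that cats the
        # file on the unit; the config-changed hook fires
        # asynchronously, so keep polling until the rotated password
        # appears on disk.
        contents = get_file_contents(unit_name,
                                     '/etc/cinder/cinder.conf')
        assert new_password in contents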
This new test verifies that cinder can have its password rotated and
still operate afterwards. It checks that the on-disk password in the
cinder application has changed and that listing the service users
still works.
When the nrpe application is not deployed in the model, skip the test
that checks for the correct configuration of the nagios checks. This
is because nrpe currently has no jammy support and is therefore not
deployed in the jammy-yoga bundles.
Jammy support for nrpe is tracked at http://pad.lv/1968008
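A sketch of the skip, assuming zaza.model.sync_deployed() returns the
names of the deployed applications:

    import unittest

    import zaza.model

    class NagiosTests(unittest.TestCase):

        def test_nagios_checks(self):
            # Skip when nrpe is absent (e.g. in the jammy-yoga
            # bundles, as nrpe has no jammy support yet).
            if 'nrpe' not in zaza.model.sync_deployed():
                raise unittest.SkipTest(
                    'nrpe not deployed in this model')
            # ... the actual nagios check verification goes here ...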
The current rabbit tests post a message on one unit and then consume
the first message on another unit; if the two do not match, the test
fails. This means that a single pre-existing message can break all of
these tests, as the consumer always gets the wrong message. This
change updates the tests to search for the target message rather than
assuming that the first message is the target.
Messages that are not the target are not reposted, so any pre-existing
messages are consumed and effectively thrown away; given that these
are all test messages in a test queue and the tests do not run in
parallel, this should be fine.
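A sketch of the search with pika; host and queue names are
illustrative:

    import pika

    def find_target_message(host, queue, target_body,
                            max_messages=100):
        # Drain the queue looking for the target; non-target messages
        # are acked and discarded, which is acceptable for a dedicated
        # test queue with no parallel test runs.
        connection = pika.BlockingConnection(
            pika.ConnectionParameters(host=host))
        channel = connection.channel()
        try:
            for _ in range(max_messages):
                method, _props, body = channel.basic_get(
                    queue=queue, auto_ack=True)
                if method is None:
                    return False  # queue drained, target not found
                if body.decode() == target_body:
                    return True
            return False
        finally:
            connection.close()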
When doing a series upgrade from bionic to focal, the percona-cluster
charm needs to be skipped. This change filters percona out of the
applications when the target series is focal. Note that if percona is
placed on the same machine as something else that needs to be upgraded
(a bad idea), it may still get 'upgraded' and the operation will fail.
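The filtering amounts to something like this (a sketch; the function
name is illustrative):

    def filter_apps_for_series_upgrade(applications, target_series):
        # percona-cluster has no focal support, so drop it when the
        # target series is focal. A percona unit colocated on the same
        # machine as another application may still get 'upgraded'.
        if target_series == 'focal':
            return [app for app in applications
                    if 'percona' not in app]
        return applications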
* Add ovn-chassis test
* Yet another refactor to reduce the amount that implementations need to override.
* Add ovn dedicated chassis support
* Fix race with checking wlm
For some reason, test cases are sometimes executed out of order, so
test_921_remove_unit is sometimes run before the pause_and_resume test
case, which results in an error.
While the root cause still needs to be found, it would also be good to
avoid side effects in individual test cases and to return the
environment to its original state.
There is no 'start' hook implementation in charm-rabbitmq-server;
however, changes this close to the 20.05 release are discouraged, so
this change simulates an upgrade-charm event to re-trigger the
addition of a previously removed unit to the cluster.
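A sketch of the simulation, assuming the hook lives at
hooks/upgrade-charm on the unit (zaza.model.run_on_unit executes the
command in a Juju hook context, so the hook tools are available):

    import zaza.model

    # Re-trigger the cluster join logic on the previously removed
    # unit by invoking the upgrade-charm hook directly (there is no
    # 'start' hook to re-run).
    zaza.model.run_on_unit('rabbitmq-server/0', 'hooks/upgrade-charm')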
NOTE: after the hooks/upgrade-charm execution finishes, the charm
stays in the waiting state with the following status until the next
update-status event: 'Unit has peers, but RabbitMQ not clustered'
Related-Bug: #1730709
This function was previously called test_901_remove_unit, but had to
be renamed (moved to the end of the functional tests). The way in
which unit removal is now performed (by running the "stop" hook) puts
the removed unit in a "waiting" state, which consequently causes
wait_for_cluster() (e.g. as used in 910) to fail with a timeout.
Same message as my previous commit: as per the code, is there a
function to determine a unit's release? Otherwise, I'll just implement
a generic function that uses run_on_unit to run 'lsb_release -cs'.
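For illustration, such a generic helper could look like this
(hypothetical; assumes zaza.model.run_on_unit returns a dict with a
'Stdout' key):

    import zaza.model

    def get_unit_series(unit_name):
        # Hypothetical helper: return the Ubuntu release codename
        # (e.g. 'focal') of the given unit.
        result = zaza.model.run_on_unit(unit_name, 'lsb_release -cs')
        return result['Stdout'].strip()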
Note: this test may have exposed a bug, where
`block_until_unit_wl_status` returns once the unit reaches the
"maintenance" state, but subsequent queries of `unit.workload_status
== "maintenance"` fail. Recreating the unit object (via
`zaza.model.get_unit_from_name`) returns the correct workload_status
when queried.
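A sketch of the workaround described above (the unit name is
illustrative):

    import zaza.model

    unit_name = 'rabbitmq-server/0'
    zaza.model.block_until_unit_wl_status(unit_name, 'maintenance')
    # The cached unit object can report a stale workload_status here;
    # re-fetching the unit returns the correct status.
    unit = zaza.model.get_unit_from_name(unit_name)
    assert unit.workload_status == 'maintenance'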