Update Master / Slave fails

I… don’t know how to continue the upgrade at that point, to be honest.

I would still recommend some kind of restoration. However, the only truly important part is that the system files are restored, not the mail server content. Meaning you should be able to do something like this:

  1. back up the current content of the mail server’s database and all associated content directories (e.g. for Kopano you’d back up both the database and the directory containing the attachments; with Dovecot you’d back up all the maildirs); maybe even create a full backup at this point, just to be sure
  2. restore from backup to before the update
  3. restore the mail server content from the backup in step 1
  4. purge any package still in rc state, meaning removed but with config files still present (see dpkg -l | grep '^rc')
  5. remove the init files that the update process threw warnings about
  6. rejoin the slave
  7. retry the upgrade
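Steps 1 and 3 above could look roughly like this for a Kopano or Dovecot setup. This is only a sketch; the database name, backup target, and directory paths are assumptions and will differ on your system:

```shell
# Step 1: back up the mail server content (paths and DB name are examples).
# Kopano: dump the database and archive the attachment directory.
mysqldump --single-transaction kopano > /backup/kopano-db.sql
tar czf /backup/kopano-attachments.tar.gz /var/lib/kopano/attachments

# Dovecot: archive the maildirs under the spool directory instead.
tar czf /backup/dovecot-spool.tar.gz /var/spool/dovecot

# Step 3 (after restoring the system from backup): unpack the saved content.
tar xzf /backup/dovecot-spool.tar.gz -C /
```

Stopping the mail services before taking and before restoring the archive avoids copying mailboxes that are being modified mid-archive.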

Sorry if I don’t have a better way, but the “restore & retry” way seems way safer and easier to me than trying to fiddle around with LDAP components where the database tools & libraries don’t match.

Yeah…seems legit…
I would suggest the following:

  • Restore a backup from yesterday to another vm
  • copy the contents of /var/spool/dovecot/ to the restored machine from the messed up one.
  • then proceed with the upgrade…someday…maybe next week…sometime…:slight_smile:

The changes of one day in the OX DB should be pretty negligible, I’d say; Dovecot is the important part.
The only thing I’d like to know:
Is there anything I must be aware of when copying the contents of /var/spool/dovecot?
Would that work out of the box in your eyes, or am I missing something here?

thanks
Sascha

You’re right, I did not clearly point out that only the four init scripts with a WARNING should be moved, nor did I say to move all of them; it was only mentioned in some posts above. I’m very sorry for the follow-up problems. I will try to be more specific in the future.

For your Slave, I hope you also moved only the init scripts with a WARNING and verified that the package is in “rc” state, as @Moritz_Bunkus pointed out.

From the error message it seems that “plymouth” is missing some information. And for “ldap” a dependency is missing (libldap is still from UCS 4.1 and libperl was not updated).

Hey peichert, no worries plz, all good!
the master is up and running, so we are happy here now. :smiley:
As for the slave: of course I only removed the WARNING scripts this time, but the problem was something else entirely…
During the update the VM simply stopped working due to a bug involving qcow2 and snapshots, rendering the virtual disk corrupt…so something new, nothing to do with UCS…but that’s the way it goes sometimes, or as they say in Germany: “haste Kacke am Schuh, haste Kacke am Schuh” (roughly: “if you’ve got crap on your shoe, you’ve got crap on your shoe”)… LOL!
So the problem now is that the upgrade process stopped suddenly and completely unexpectedly, and we have an inconsistent system state.

The question is: do you think it can be repaired (with a reasonable amount of work and time), or would you consider it best to restore the backup and then rsync the mail spool directory, as I suggested?
And if the second is a good idea, am I missing anything when recovering the Dovecot mails this way?

Thanks
Sascha

In general one can try to resolve a failed upgrade manually, package by package, if one knows the problem. After resolving the situation, run univention-upgrade and the setup should continue. That can take many, many hours and is not recommended, because some UCR values (version and errata level) also have to be adjusted to make it work. In addition, when packages are installed manually, dpkg gets confused and no longer sees them as automatically installed dependencies, so a later auto-remove will not clean up all unneeded packages as expected.
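As a heavily hedged sketch, such a manual recovery might combine the usual dpkg/apt repair steps with adjusting the UCR version values before re-running the upgrader. The UCR variable names and version numbers below are assumptions for illustration; check the actual values on a working system of the target version first:

```shell
# Finish any half-configured packages left behind by the aborted upgrade.
dpkg --configure -a

# Let apt try to resolve broken dependencies.
apt-get -f install

# Adjust the UCR version/errata values (names and values are examples only!).
ucr set version/version=4.2 version/patchlevel=0 version/erratalevel=0

# Then let the regular upgrader continue from there.
univention-upgrade
```

As the post says, this path is long and error-prone; the restore-and-retry route is usually safer.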

But since you don’t know the state of your hard disk in the VM (the qcow2 image is damaged), it seems best to start there and resolve the problems at the KVM level first. If the physical hard disk is damaged, this could happen again in the future. You could check the SMART values and the syslog of your virtualization node.

No, don’t worry about the corrupted virtual disk, I fixed the disk and resolved the underlying problem.
The disk now works perfectly.

All right,
I restored the slave from the backup, rsynced /var/spool/dovecot from the defunct VM to the restored VM, et voilà…it all works again.

:confetti_ball::sparkler::fireworks::sparkler::sparkler::fireworks::fireworks::confetti_ball::tada:

Will retry the update this weekend, hopefully better vibrations around me then…
Thanks for all your incredible help!

Cheers
Sascha

Glad to hear that! Good luck with the next try.

ok, just to come to a conclusion here…the updates ran fine, the systems are up to date, rc packages purged…
All good!

YEAH!

wish all of you a great weekend!
Sascha


I’m usually not a fan of the inflation of superlatives and the resulting hyperbole, but this is truly great!

One lesson I’ve learned from this myself is to always check packages in “removed but configuration still present” state (dpkg -l | grep '^rc') before doing big upgrades. It’s now part of my personal TODO list for UCS upgrades.
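That pre-upgrade check, and the purge of anything it finds, could look like this (review the list before purging, since purging deletes the leftover config files):

```shell
# List packages in "rc" state: removed, but config files still present.
dpkg -l | grep '^rc'

# Extract just the package names and purge them (requires root).
dpkg -l | awk '/^rc/ { print $2 }' | xargs --no-run-if-empty dpkg --purge
```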


yup! Hallelujah!
thanks again for your help…

Really interesting. We had similar issues trying to upgrade in a test environment a while ago. I think it’s time for the next try.
